The evolution of technology has brought many innovations. One technology now transforming the media landscape is the deepfake. Deepfakes are videos, images, or audio recordings that have been manipulated or generated by artificial intelligence (AI). With them, criminals can present an individual as saying or doing something that never actually happened. In this article at ZDNet, Danny Palmer explains why deepfakes are a severe problem for everyone.
How Does Deepfake Software Work?
Threat actors have access to deep learning, a branch of machine learning (ML) in which neural networks are trained on existing images, video, and audio to generate convincing fakes. Cybercriminals use deepfake software to mislead the public into believing false information and propaganda. The widespread availability of this technology makes people question the following:
- What do deepfakes mean for identity verification?
- Can we trust anything that we hear and see?
Will Deepfake Software Impact Cybersecurity?
Security professionals warn of a growing risk that employees' identities will be stolen or synthetically recreated. Threat actors use these identities to conduct malicious activities such as bypassing banking and other account verification controls — especially those involving video and face-to-face verification methods. There is also growing concern over criminals impersonating top-level executives to initiate fraudulent money transfers.
“Scammers have already used artificial intelligence to convince employees they’re speaking to their boss on the phone. Adding the video element will make it harder to detect that they’re talking to fraudsters,” says Palmer.
Can Organizations Detect and Prevent a Deepfake Scam?
Experts believe deepfake videos can be hard to identify, especially if the impersonated individual appears to be behaving plausibly. However, several telltale signs can help you spot a potential audio or video deepfake, including the following:
- Unnatural speech cadence
- Robotic tone
- Unnatural eye movement
- Poor audio or video quality
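As a rough illustration (not a production detector), a checklist like the one above could be turned into a simple risk-scoring helper that flags suspicious media for manual review. The signal names, weights, and threshold below are hypothetical assumptions for demonstration only:

```python
# Hypothetical risk-scoring sketch for the deepfake warning signs above.
# Signal names, weights, and threshold are illustrative assumptions,
# not a real detection model.

WARNING_SIGNS = {
    "unnatural_speech_cadence": 2,
    "robotic_tone": 2,
    "unnatural_eye_movement": 3,
    "poor_av_quality": 1,
}

def deepfake_risk_score(observed_signs):
    """Sum the weights of the warning signs a reviewer observed."""
    return sum(WARNING_SIGNS.get(sign, 0) for sign in observed_signs)

def flag_for_review(observed_signs, threshold=3):
    """Flag the clip for manual verification once the score crosses the threshold."""
    return deepfake_risk_score(observed_signs) >= threshold
```

The value of even a crude checklist like this is consistency: every reviewer weighs the same signals the same way, rather than relying on gut feel.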
To combat deepfake attacks, organizations must educate their workforce on recognizing potential audio and video impersonation. Additionally, enterprises must have an internal verification system to identify suspicious communication.
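One common pattern for such an internal verification system is out-of-band confirmation: before acting on a sensitive request such as a money transfer, the recipient confirms it through a separate, pre-agreed channel the caller cannot control. A minimal sketch, assuming a shared secret has been securely distributed in advance (the secret, field names, and workflow here are hypothetical):

```python
# Minimal sketch of out-of-band request verification using an HMAC
# confirmation code. The shared secret and field names are hypothetical;
# a real system would use a managed secret store and approval workflow.
import hashlib
import hmac

SHARED_SECRET = b"pre-provisioned-team-secret"  # assumption: distributed securely in advance

def sign_request(request_id: str, amount: str) -> str:
    """Sender computes a confirmation code over the request details."""
    message = f"{request_id}:{amount}".encode()
    return hmac.new(SHARED_SECRET, message, hashlib.sha256).hexdigest()

def verify_request(request_id: str, amount: str, code: str) -> bool:
    """Recipient recomputes the code independently and compares in constant time."""
    expected = sign_request(request_id, amount)
    return hmac.compare_digest(expected, code)
```

The point is that the confirmation code travels over a channel separate from the call itself, so a convincing voice or video alone is not enough to authorize the transfer.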
To read the original article, visit https://www.zdnet.com/article/the-next-big-security-threat-is-staring-us-in-the-face-tackling-it-is-going-to-be-tough/.
The post Is Deepfake Software a Serious Threat to Your Business? appeared first on AITS CAI’s Accelerating IT Success.