When Martin Scorsese released his film The Irishman five years ago, audiences marveled at the incredible digital trickery that allowed the award-winning director to make his principal actors Robert De Niro and Al Pacino appear decades younger on screen. And this type of “deepfake” technology has only gotten better since then. But while deepfakes can do wonderful things in the hands of thoughtful creative artists, they can become powerful tools for criminal activity and general disruption in the wrong hands.
The University of Virginia offers an excellent general definition of “deepfake”: “an artificial image or video (a series of images) generated by a special kind of machine learning called ‘deep’ learning (hence the name).” Although there are certainly countless positive and productive applications for this incredible technology, its less desirable applications have given the term “deepfake” decidedly pejorative overtones. In fact, the Oxford English Dictionary defines “deepfake” as “any of various media, esp. a video, that has been digitally manipulated to replace one person’s likeness convincingly with that of another, often used maliciously to show someone doing something that he or she did not do.”
Driven by the power of artificial intelligence (AI), deepfakes can both replicate and alter original media to generate synthetic media that can easily fool the eye as well as the ear of the average person. As deepfake technology has grown more believable, the expanding potential for its misuse has become more troubling. Today, any malicious actor with even a small amount of tech savvy can create a deepfake that imitates a business leader or another powerful individual to disseminate misinformation and/or solicit valuable private information.
The son of an engineer and a math teacher, Hassan Taher has been fascinated by the vast potential of technology since he was a child. As a voracious reader of science fiction, he was exposed to incredible devices and highly advanced tools that could only exist in the human imagination. But today, he is playing a key role in making science fiction a modern reality as the founder and head of the influential tech consultancy Taher AI Solutions.
Beyond his work as an advisor, Taher has informed large audiences as both a public speaker and a prolific author. He has written three highly influential books on subjects related to the responsible use of AI: The Rise of Intelligent Machines, AI and Ethics: Navigating the Moral Maze, and The Future of Work in an AI-Powered World.
As deepfake technology continues to proliferate at a staggering pace, Hassan Taher recently spoke out on the phenomenon. “Anyone — including savvy 12-year-olds — can make a deepfake image or video if they have a high-end desktop computer with solid graphics cards,” he warns.
Taher points to the user-friendliness of modern deepfake software and related platforms as a primary cause of its rapid spread. “Tools and apps are also available online to enhance deepfake media, making it virtually impossible to determine if the deepfake is authentic,” he writes. “Swapping one face with another in a picture or video is easy using an encoder, an AI algorithm that detects facial similarities. Once similarities are pinpointed, the encoder compresses both images into one image (the deepfake) that shares common characteristics.”
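For readers curious about the mechanics behind the encoder Taher describes, the sketch below illustrates, in heavily simplified form, the shared-encoder idea used by many face-swap tools: one encoder learns features common to two faces, a separate decoder is trained for each identity, and the “swap” happens when one person’s encoding is decoded with the other person’s decoder. The layer sizes, training step, and random stand-in images are illustrative assumptions only, not the code of any real deepfake application.

```python
# Minimal sketch of the shared-encoder / per-identity-decoder face-swap idea.
# Sizes, data, and training are simplified assumptions for illustration.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a 64x64 RGB face crop into a shared latent code."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Redraws a face from the latent code; one decoder is trained per identity."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )
    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

# One shared encoder learns features common to both faces; each decoder
# learns to reconstruct one specific identity from those shared features.
encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()

# Training (heavily abridged): each decoder reconstructs its own person's faces.
faces_a = torch.rand(8, 3, 64, 64)  # stand-in for real face crops of person A
faces_b = torch.rand(8, 3, 64, 64)  # stand-in for real face crops of person B
loss = (nn.functional.mse_loss(decoder_a(encoder(faces_a)), faces_a)
        + nn.functional.mse_loss(decoder_b(encoder(faces_b)), faces_b))

# The "swap": encode person A's face but decode it with person B's decoder,
# producing an image of B wearing A's expression and pose -- the deepfake.
swapped = decoder_b(encoder(faces_a))
```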
When used by malicious actors, deepfake technology can facilitate fraud, cybersecurity breaches, and other criminal endeavors. It is therefore vitally important to educate the general public about the threat that deepfakes present.
In the business world, deepfakes can have a disastrous effect on brand image, a company’s reputation with consumers, and its public credibility. After all, deepfake technology allows companies to release fabricated videos of their competitors doing and saying absolutely anything. This ability becomes even more damaging when paired with cybercriminal activity. “A recent example of a company suffering an economic calamity involved criminals using a deepfake voice that sounded like the company’s CEO,” writes Hassan Taher. “Thinking the voice belonged to the CEO, company administrators transferred more than $240,000 to the account as instructed by the deepfake voice.”
But CEOs aren’t the only subjects of deepfakes in the business world. Taher warns of the dangers inherent in deepfakes that mimic the images of employees and customers. Malicious actors might use these imitations to initiate any number of fraudulent transactions.
“Deepfake technology allows competitors, disgruntled employees, ex-employees, or unsatisfied customers to damage a company’s viability and reputation irreversibly,” contends Hassan Taher. “Moreover, the widespread presence of social media makes the risk of companies being victimized by deepfake pictures, videos, or audio clips even more challenging to prevent. Once someone posts a deepfake on social media, millions will share the deepfake in seconds with millions of others who will go on to do the same thing.”
Given the severity of the threat, it is essential for organizations and their individual team members to guard against deepfake deception however they can. As Taher puts it, “companies not adequately protected against deepfakes face a heightened risk of cybercriminals stealing product prototypes, trade secrets, and other forms of intellectual property through deepfake technology.”
Hassan Taher goes on to present several key approaches that companies can use to combat deepfakes, beginning with alerting company team members to the problem. Specifically, team leaders can encourage workers and colleagues to examine all incoming and outgoing photos and videos closely for strange glitches as well as tell-tale inconsistencies in body movements, facial expressions, and speech patterns. Although highly trained experts with state-of-the-art technology can create a deepfake that is nearly undetectable, less sophisticated deepfakes typically exhibit “photoshopped effects” that can be easily spotted if one knows what to look for. Companies should also ensure that they have a rigorous cybersecurity defense that can both filter and track messages from malicious actors.
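To make the “look for inconsistencies” advice concrete, the toy sketch below shows one very simple automated screen a company could layer on top of human review: it tracks how the detected face region moves from frame to frame of a video and flags abrupt jumps. The choice of OpenCV’s stock face detector, the pixel threshold, and the file name are illustrative assumptions; commercial deepfake-detection systems are far more sophisticated than this.

```python
# Toy inconsistency screen: flag video frames where the detected face
# bounding box jumps abruptly between frames. Threshold is an arbitrary
# assumption; this is an illustration, not a real detection product.
import cv2

def flag_jumpy_faces(video_path, max_jump_px=40):
    """Return indices of frames where the face region moves suspiciously far."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    capture = cv2.VideoCapture(video_path)
    previous_center, suspicious_frames, frame_index = None, [], 0

    while True:
        ok, frame = capture.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) > 0:
            x, y, w, h = faces[0]
            center = (x + w / 2, y + h / 2)
            if previous_center is not None:
                jump = (abs(center[0] - previous_center[0])
                        + abs(center[1] - previous_center[1]))
                if jump > max_jump_px:  # an abrupt jump is one crude warning sign
                    suspicious_frames.append(frame_index)
            previous_center = center
        frame_index += 1

    capture.release()
    return suspicious_frames

# Example usage (hypothetical file name):
# print(flag_jumpy_faces("incoming_clip.mp4"))
```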
While companies can take significant educational and security measures on their own, their best defense lies in enlisting the help of an experienced professional. “Consulting with cybersecurity specialists and AI/machine learning research scientists can give companies top-level tools and insights for mitigating the risk of deepfake calamities,” writes Taher. Because modern deepfake detection software is powered by complex machine learning algorithms, navigating its intricacies requires an expert touch.
But while businesses certainly have a great deal to gain by securing the services of AI specialists, the relationship is nothing if not mutually beneficial. As Hassan Taher reports, tech professionals have much to gain by serving large companies with deep pockets. He writes, “Partnering with these businesses provides access to the latest technologies for rooting out deepfakes that are now expected to cost companies billions of dollars this year in revenue.”