Introduction
Today, deepfakes are costing corporations millions. Whenever a new technology wins widespread adoption, fraud and con artistry usually follow. In the early 20th century, Charles Ponzi showed how financial systems could be manipulated for personal gain. Centuries before him, the Tulip Bubble and the South Sea Bubble bankrupted thousands of investors who thought they were buying into exciting new financial instruments the likes of which the world had never seen. The invention of the telephone brought phone phishing and telemarketing fraud; the rise of the internet brought email scams, identity theft, and online auction fraud. The cycle keeps repeating.
AI deepfakes are just the latest tool in the fraudster’s burgeoning arsenal. They are more disturbing than their predecessors, however, because they can trick a person into seeing and hearing things that aren’t real. “Are you going to trust me or your lying eyes?” has never been more apt. In many cases, those scammed by deepfakes willingly hand over huge sums of money because they believe they are interacting with people they personally know. The proliferation of deepfake technology threatens to create a society in which distinguishing truth from falsehood becomes genuinely difficult.
What are Deepfakes?
Deepfakes are images, videos, animations, or audio recordings created or edited with artificial intelligence (AI) tools. Built with machine learning and deep learning algorithms, this synthetic media can convincingly swap faces or voices, often making it difficult for viewers to distinguish fake content from real. The technology poses significant risks because fraudsters exploit it, creating realistic audio and even real-time video to impersonate executives, employees, and customers and deceive organizations into transferring huge sums of money. This isn’t a case of people paying ransoms because they have been blackmailed. With deepfakes, the defrauded willingly send money to people they think they personally know. It could be a plot straight out of a science fiction novel.
Because deepfakes are so convincing, they make it difficult for people to distinguish between real and fake content. They can depict people saying or doing things they never said or did, undermining trust in genuine audio and video. And because deepfakes enable large-scale scams, such as personalized videos requesting money under false pretenses, corporations urgently need robust detection methods and verification processes to combat this fraud.
Deepfake Incidents
In her article “Half of Executives Expect More Deepfake Attacks on Financial and Accounting Data in Year Ahead,” Christine Oh claims, “Deepfake financial fraud is rising, with bad actors increasingly leveraging illicit synthetic information like falsified invoices and customer service interactions to access sensitive financial data and even manipulate organizations’ AI models to wreak havoc on financial reports.”
According to CNN, a finance worker at a multinational firm in Hong Kong “was tricked into paying out $25 million to fraudsters using deepfake technology to pose as the company’s chief financial officer in a video conference call.” It was one of the first cases of a fraudster using deepfake technology to commit serious financial fraud. “The elaborate scam saw the worker duped into attending a video call with what he thought were several other members of staff, but all of whom were in fact deepfake recreations,” says CNN. Everyone on the call was fake, likely generated from publicly available footage of the real employees.
Internet Crime Statistics
$12.5B+
Potential Losses from Internet Crime in 2023
In 2023, the IC3 received 880,418 complaints from the American public, with potential losses exceeding $12.5 billion. This represents a 10% increase in complaints and a 22% increase in losses compared to 2022.
Believing everyone on the call was real, the worker sent about $25.6 million to the fraudsters, CNN reports. It was an elaborate and highly profitable scam, and no doubt a portent of things to come.
Although these deepfake incidents are already bad, they will get much worse, and much more expensive, for the companies hit by them.
How Deepfakes Are Made
In their Business Insider article “What are deepfakes? How fake AI-powered audio and video warps our perception of reality,” Dave Johnson and Alexander Johnson explain, “The term ‘deepfake’ comes from the underlying technology — deep learning algorithms — which teach themselves to solve problems with large sets of data and can be used to create fake content of real people.” Using deep learning models like neural networks, deepfakes replace one person’s face with another’s in an image or video. In many cases, the reproduction is so realistic that the eye is easily fooled.
The most common method of creating deepfakes uses deep neural networks to swap a face in a video. First, a target video is selected as the basis of the deepfake; then the neural network is fed a collection of video clips of the person to be inserted, explains Business Insider.
Deepfake Growth in Finance
700%
Increase in Deepfake Incidents in Fintech (2023)
One report found deepfake incidents increased 700% in fintech in 2023.
The videos don’t need to be related; they can even be random clips taken from YouTube. “The program guesses what a person looks like from multiple angles and conditions, then maps that person onto the other person in the target video by finding common features,” contends Business Insider. Generative Adversarial Networks (GANs), another AI tool, iron out any apparent flaws in the deepfake, making the image more realistic and harder for deepfake detectors to spot, says Business Insider.
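To make those mechanics concrete, here is a minimal, illustrative sketch of the shared-encoder, two-decoder design that underlies many face-swap tools. It is not any particular app’s code; the 64x64 crop size, layer sizes, and placeholder input are assumptions made for brevity.

```python
# Minimal sketch of autoencoder-based face swapping (PyTorch).
# One shared encoder learns pose/expression features; each person
# gets their own decoder. Real tools add face alignment, masking,
# and GAN-based refinement on top of this core idea.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )
    def forward(self, z):
        return self.net(z)

encoder = Encoder()                           # shared across both identities
decoder_a, decoder_b = Decoder(), Decoder()   # one decoder per person

opt = torch.optim.Adam(
    list(encoder.parameters())
    + list(decoder_a.parameters())
    + list(decoder_b.parameters()),
    lr=1e-4,
)
loss_fn = nn.L1Loss()

def train_step(faces_a, faces_b):
    """Each decoder learns to reconstruct its own person's face crops."""
    recon_a = decoder_a(encoder(faces_a))
    recon_b = decoder_b(encoder(faces_b))
    loss = loss_fn(recon_a, faces_a) + loss_fn(recon_b, faces_b)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# The swap itself: encode person A's face, decode with person B's
# decoder, yielding B's identity in A's pose and expression.
with torch.no_grad():
    fake_b = decoder_b(encoder(torch.rand(1, 3, 64, 64)))  # placeholder input
```

The design choice is what makes swapping possible: because the encoder is shared, its output carries pose and expression but not identity, so routing it through the other person’s decoder transfers the face.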
Although the process sounds difficult, finding the software to build a deepfake is easy. Apps like DeepFaceLab, FakeApp, Face Swap, and the Chinese app Zao are simple to download and cheap to use. A large open-source development community exists for this technology as well, so help is easy to find.
Deepfake Videos
In November 2017, a Reddit user posted a tool that leveraged existing AI algorithms to create realistic fake videos, and deepfakes were born. Today, criminals use video deepfakes to fabricate realistic videos of known individuals making false statements or fraudulent requests. These videos can manipulate viewers into believing that the person is endorsing an action or sharing sensitive information. Fraudsters might create videos of executives instructing employees to conduct financial transactions, leveraging the trust associated with the executive’s likeness.
In the case of the Hong Kong firm that lost $25 million, CNN reports, “The elaborate scam saw the worker duped into attending a video call with what he thought were several other members of staff, but all of whom were in fact deepfake recreations.” Every person the worker saw in the multi-person video conference was a deepfake. Although the worker was skeptical at first, his doubts were alleviated when he recognized what appeared to be his colleagues on the call; the whole episode looked and sounded like a normal meeting. This was a highly professional operation. Pulling off something like this in real time requires serious technological capability, and that should not be minimized. It should also be a stark warning to every corporate finance department: believing your own eyes could be extremely costly.
Counterfeit Bank Accounts
The use of deepfakes in fraud is becoming increasingly sophisticated, and it poses significant risks to individuals and organizations alike. As the technology advances, so does the potential for misuse. Businesses must implement robust verification processes to fight deepfakes and educate all their staff about the dangers the technology poses.
Deepfakes can also bolster counterfeit applications for bank accounts or loans by creating seemingly authentic identities. In its article about the $25 million fraud, CNN reported that “eight stolen Hong Kong identity cards – all of which had been reported as lost by their owners – were used to make 90 loan applications and 54 bank account registrations between July and September last year.” This type of deepfake fraud contributes significantly to financial losses for institutions because it enables criminals to present fabricated identities that appear completely legitimate.
As CNN warns, the Hong Kong $25 million fraud “is one of several recent episodes in which fraudsters are believed to have used deepfake technology to modify publicly available video and other footage to cheat people out of money.” This problem isn’t going away anytime soon, and organizations need to prepare for the worst.
The Rise of Deepfake Fraud
“Knowledge of evolving deepfake fraud schemes is evolving quickly,” says Christine Oh. “But as illicit actors’ techniques advance, they become more impervious to human detection, technologies and tools,” she adds. Oh quotes Mike Weil, digital forensics leader and managing director at Deloitte Financial Advisory Services LLP: “In the years ahead, it’ll become more critical for organizations to both identify and address deepfake risks, not the least of which will be those efforts targeting potentially market-moving accounting and financial data to perpetrate fraud.”
Fraudsters now use deepfakes to impersonate executives, employees, and customers, employing sophisticated techniques such as voice phishing, fake audio and video scams, account takeovers, and application fraud. They exploit AI’s ability to create realistic audio and video content that can fool the unwary. Companies are prime targets because many are only now recognizing the deepfake threat.
With voice phishing, or vishing, fraudsters create highly convincing audio deepfakes that replicate the voice of a trusted individual. The technique often involves mimicking urgent requests for financial assistance, leading victims to make unauthorized transfers that cost millions.
Financial institutions and banks will remain prime targets. Fraudsters will keep going after them because, as the bank robber Willie Sutton reputedly said when asked why he robbed banks: “Because that’s where the money is.” Right now, fraudsters find it easy to exploit the flaws in the financial system. However, there are ways for companies to protect themselves.
Combating Deepfake Fraud
Although deepfakes are, by their very nature, designed to fool the viewer, there are techniques available to root them out. For instance, users can check the true source of an image with a reverse image search provider like Google, which can help uncover the original footage used to create the deepfake. Users should check who posted the image, where it was posted, and whether it makes sense for it to have been posted there.
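Reverse image search itself runs through a provider’s web interface, but the matching idea behind this kind of check can be sketched. The example below is an illustration only, assuming the third-party Pillow and imagehash Python packages and hypothetical file names: it uses perceptual hashing to test whether a suspect frame closely matches known source footage.

```python
# Sketch: compare a suspect frame against candidate source images
# using perceptual hashing. A near-match to older footage suggests
# the "new" video was built on existing material.
from PIL import Image
import imagehash

suspect = imagehash.phash(Image.open("suspect_frame.png"))  # hypothetical file

# Candidate originals, e.g. frames saved from a reverse image search hit.
candidates = ["source_frame_01.png", "source_frame_02.png"]

for path in candidates:
    # Subtracting two hashes yields the Hamming distance between them;
    # small distances (commonly <= 10 for phash) suggest a shared source.
    distance = suspect - imagehash.phash(Image.open(path))
    verdict = "likely same source" if distance <= 10 else "different"
    print(path, distance, verdict)
```

A check like this is only a screening step; it flags reused footage but cannot by itself prove a video is authentic.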
The technology required to create a deepfake has become so cheap and simple to use that there’s been an explosion of deepfake content on the internet, and the trend shows no signs of abating. Quite the opposite: with barriers to entry rapidly coming down, deepfake adoption will only increase in the coming years, which means vigilance must rise with it. Creativity will certainly flourish with deepfakes, but so will the danger.
Future of Authentication
30%
Of Enterprises Will Consider Identity Verification Solutions Unreliable by 2026
Gartner predicts that by 2026, 30% of enterprises will consider identity verification and authentication solutions unreliable in isolation due to AI-generated deepfakes.
(Embedded video: Dr. Marco’s YouTube video on AI incidents and how to counteract them.)
Thankfully, help is on the way. Several technology companies have devised methods to spot deepfakes. In its press release “Intel Introduces Real-Time Deepfake Detector,” Intel announced FakeCatcher, which it calls the world’s first real-time deepfake detector. The technology detects fake videos by analyzing signs of blood flow in video pixels and returns results within milliseconds, with a claimed accuracy rate of 96%. FakeCatcher might help rebuild trust in industries currently wrestling with a difficult misinformation landscape.
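Intel has not published FakeCatcher’s internals, but the “blood flow” signal it describes corresponds to what the research literature calls remote photoplethysmography (rPPG): the faint color change skin shows with each heartbeat. The simplified Python sketch below, with an assumed clip file and a fixed stand-in face region (real systems track the face per frame), illustrates the basic idea: average the green channel per frame, then look for a pulse-like peak in the heart-rate frequency band.

```python
# Simplified rPPG illustration, NOT FakeCatcher's actual algorithm.
# Requires OpenCV and NumPy; "clip.mp4" is a stand-in file name.
import cv2
import numpy as np

cap = cv2.VideoCapture("clip.mp4")
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if FPS is unreadable
signal = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    face = frame[100:200, 100:200]        # stand-in for a tracked face box
    signal.append(face[:, :, 1].mean())   # mean green-channel value per frame
cap.release()

# Remove the mean and inspect the frequency spectrum of the color signal.
sig = np.asarray(signal) - np.mean(signal)
spectrum = np.abs(np.fft.rfft(sig))
freqs = np.fft.rfftfreq(len(sig), d=1.0 / fps)

# Real skin tends to show a dominant peak in the plausible heart-rate
# band (~0.7-4 Hz, i.e. ~42-240 bpm); synthetic faces often lack one.
band = (freqs >= 0.7) & (freqs <= 4.0)
pulse_strength = spectrum[band].max() / (spectrum.mean() + 1e-9)
print("pulse-band peak ratio:", pulse_strength)
```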
Media companies, social media platforms, and global news organizations could use FakeCatcher to spot manipulated videos and fake news before amplifying them. Nonprofit organizations could use it to, as Intel claims, “democratize detection of deepfakes for everyone.” Finance companies could use it to verify that the people they are talking to really are who they say they are.
The Tech Industry’s Responsibility
“The responsibility should be on the developers, on the toolmakers, on the tech companies to develop invisible watermarks and signal what the source of that image is,” said Shirin Anlen, a media technologist for WITNESS, an organization focused on the use of media to defend human rights. Startups like Sensity have developed “a detection platform that’s akin to an antivirus for deepfakes,” say Johnson and Johnson; the software alerts users when they appear to be watching AI-generated media. Another effort, Operation Minerva, uses algorithms to compare potential deepfakes to known deepfake videos, the pair note.
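As a toy illustration of the invisible-watermark idea Anlen describes, the sketch below hides a short provenance tag in an image’s least significant bits. Real provenance schemes rely on cryptographic signing rather than this fragile steganographic trick, and the file names and tag string here are hypothetical.

```python
# Toy invisible watermark via least-significant-bit (LSB) steganography.
# Illustrative only: production provenance systems (e.g. C2PA-style
# content credentials) use signed, tamper-evident metadata instead.
import numpy as np
from PIL import Image

TAG = "source:studio-cam-01"  # hypothetical provenance string

def embed(path_in, path_out, tag=TAG):
    pixels = np.array(Image.open(path_in).convert("RGB"))
    bits = np.unpackbits(np.frombuffer(tag.encode(), dtype=np.uint8))
    flat = pixels.reshape(-1)
    # Overwrite the lowest bit of the first len(bits) channel values.
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    Image.fromarray(flat.reshape(pixels.shape)).save(path_out)  # lossless PNG

def extract(path, n_chars=len(TAG)):
    flat = np.array(Image.open(path).convert("RGB")).reshape(-1)
    bits = flat[: n_chars * 8] & 1
    return np.packbits(bits).tobytes().decode(errors="replace")

embed("original.png", "tagged.png")  # placeholder file names
print(extract("tagged.png"))         # -> "source:studio-cam-01"
```

The fragility is the point of the hedge: re-encoding or resizing the image destroys LSB data, which is why serious proposals pair watermarks with cryptographic signatures.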
However, there haven’t been many efforts to combat deepfakes at scale, warns Nasir Memon, an NYU professor of computer science. “I think the solution overall is not technology based, but instead it’s education, awareness, the right business models, incentives, policies, laws,” says Memon. Unfortunately, there’s no magic bullet for this problem. “Don’t jump to conclusions when you see an image now,” contends Memon. “Look at the source. Wait until you have corroborating evidence from reliable sources,” he concludes. It’s great advice when dealing with deepfakes.
Regulatory and Legal Considerations
The current legal framework for deepfakes is evolving as governments around the world recognize the risks associated with this technology. According to NBC News, the Defiance Act, which would give victims the ability to sue anyone who creates, shares, or receives nonconsensual sexually explicit deepfakes or pornographic videos depicting them, recently passed the US Senate.
The No AI FRAUD Act is another piece of legislation making its way through Congress. It attempts to safeguard property rights associated with an individual’s likeness and voice, but it has many critics, who argue that while its intentions are good, its execution is flawed and its terms too broadly defined. “Congress must consider a proposal that is narrower and more balanced, aimed at creating a federal right to publicity that safeguards name, image, voice, and digital likeness without hindering innovation,” claims the App Association, a group of independent developers, innovators, and entrepreneurs who work in the global app ecosystem.
Internationally, China requires enterprises to verify user identities and disclose the use of any AI-generated content. The EU’s Artificial Intelligence Act mandates that manipulated content be disclosed as such. The UK Online Safety Act also addresses explicit deepfakes but does not comprehensively cover all forms of AI-generated media. These are good first steps, but future regulations will need to address the long-term challenges deepfakes pose.
While legal frameworks addressing deepfakes are emerging in various jurisdictions, there is broad consensus that more comprehensive and specific legislation is needed to protect individuals from the potential harms associated with this technology. As deepfake technology evolves, so too must the legal responses, to ensure accountability for offenders as well as adequate protection and legal remedies for victims.
Threat Landscape
Deepfakes are a serious threat to every business operating today. Financial fraud is growing more sophisticated, and the sums looted are growing larger. The case of the Hong Kong finance worker who transferred $25 million to fraudsters posing as several people within the company illustrates just how convincing deepfakes can be. Beyond the financial loss, deepfakes threaten to weaken the all-important bonds of trust between a company and its customers. As the technology becomes more accessible and sophisticated, it challenges the integrity of visual and audio evidence, complicating verification in business transactions and communications.
A company falling victim to deepfake fraud could suffer from substantial reputational harm. Customers might lose confidence in the organization’s ability to safeguard its operations, which could result in a negative long-term impact on customer loyalty.
The good news is that deepfakes aren’t operating in a vacuum. As the threat grows, so does the development of detection technologies. Companies need to stay informed about these advancements, remain vigilant, and integrate advanced detection tools into their security measures to mitigate the risks of this evolving threat landscape.
The deepfake threat isn’t a mirage. Companies that don’t prepare for it stand to lose millions to fraud. A deepfake scam can feel almost like magic: one minute you think you’re talking to someone you know; the next, you realize the entire conversation was with a fake person. It’s enough to make you question what is real and what is not. But as anyone caught up in a deepfake fraud will tell you, the financial losses are very, very real.