Biometric Technology Poses a Greater Security Risk Than You Think
Unfortunately, biometric technology such as facial and voice recognition is increasingly seen as a security risk for both consumers and brands. To protect themselves, businesses need to play a constant game of one-upmanship with hackers. This means accepting that AI security is not a one-time investment. Businesses that are willing to invest continuously in protecting themselves will be the most successful.
Since I blogged about the risks of facial recognition hacking, biometric hacking has increased sharply, and U.S. legislators have realized that the issue extends well beyond facial recognition.
In a recent development, Senator Sherrod Brown, chairman of the U.S. Senate Committee on Banking, Housing, and Urban Affairs, has urged banks to remain vigilant and take measures to address the risks posed by AI-generated voice cloning technology. His concern stems from the growing sophistication of these tools, which can replicate an individual’s voice with astounding accuracy. By leveraging AI, malicious actors can deceive unsuspecting individuals and manipulate financial transactions.
This action follows a significant incident in which Motherboard, an online publication, used an AI-powered system to replicate a reporter’s voice and successfully deceived a bank’s voice authentication system. The investigation demonstrated that just a few minutes of the targeted individual’s recorded voice were sufficient to create a highly convincing clone, posing a potential threat to public safety.
Recognizing the potential dangers associated with AI-generated voice cloning, Brown has called upon banks to implement safeguards and stay one step ahead. He emphasized the importance of maintaining a proactive stance to protect customers’ financial security and privacy.
AI-generated voice cloning technology employs deep learning models trained on large datasets to mimic an individual’s voice patterns, intonation, and other vocal nuances. These models have advanced remarkably in recent years, enabling the creation of near-perfect clones that can be used for various purposes, including positive applications such as marketing and customer service, as I have blogged.
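To see why clones fool these systems, it helps to know that most voice biometric systems reduce a voice sample to a fixed-length "speaker embedding" and compare embeddings numerically. The following is a minimal sketch of that comparison, not any vendor's actual implementation; the random vectors, the cosine metric, and the 0.85 threshold are all assumptions chosen for illustration.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two fixed-length speaker embeddings."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_same_speaker(enrolled: np.ndarray, candidate: np.ndarray,
                    threshold: float = 0.85) -> bool:
    """Naive matcher: accept if the embeddings are close enough.

    A good-enough clone produces an embedding above the same
    threshold, which is why similarity alone is not a safe factor.
    """
    return cosine_similarity(enrolled, candidate) >= threshold

# Toy demo with made-up vectors standing in for real embeddings.
rng = np.random.default_rng(0)
enrolled = rng.normal(size=256)
clone = enrolled + rng.normal(scale=0.1, size=256)  # a close imitation
print(is_same_speaker(enrolled, clone))  # True: the clone passes
```

The point of the toy demo: a clone whose embedding lands close to the enrolled voice passes the exact same check a genuine caller would.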
While AI-generated voice cloning technology has potential beneficial applications, such as improving accessibility for individuals with speech disabilities, its misuse poses significant threats to the integrity of financial systems. Banks are being encouraged to deploy advanced voice recognition systems capable of detecting AI-generated voice clones. Additionally, multi-factor authentication and close monitoring of customer transactions can deter fraudulent activity, as the sketch below illustrates.
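What might layering those safeguards look like in practice? Below is a minimal sketch of a verification flow in which a voice match alone never grants access; a clone-detection (liveness) score and a one-time passcode must also pass. The function, its thresholds, and the score inputs are hypothetical placeholders for whatever vendor systems a bank actually uses, not any real bank's API.

```python
from dataclasses import dataclass

@dataclass
class AuthResult:
    approved: bool
    reason: str

def authenticate_caller(voice_score: float,
                        liveness_score: float,
                        otp_ok: bool) -> AuthResult:
    """Layered check: voice biometrics alone never grants access."""
    if voice_score < 0.85:
        return AuthResult(False, "voice mismatch")
    if liveness_score < 0.90:
        return AuthResult(False, "possible synthetic voice")
    if not otp_ok:
        return AuthResult(False, "second factor failed")
    return AuthResult(True, "all factors passed")

# Example: a convincing clone passes the voice check but still
# fails without the customer's one-time passcode.
print(authenticate_caller(voice_score=0.97, liveness_score=0.95,
                          otp_ok=False))
```

The design point is simply that each factor is independent: defeating the voice check, as the incidents above show is possible, should never be enough on its own.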
And just how bad are things getting? Consider an article by Joanna Stern of The Wall Street Journal, who used AI to clone her own voice and video to see just how humanlike AI can be. The results were disturbing.
Stern wrote that she experimented with Synthesia, a tool that uses recorded video and audio to generate AI deepfakes. Users can input any text, and the deepfake will faithfully mimic and recite the words. She also cloned her voice with technology from ElevenLabs, and the clone was good enough to fool her credit card company’s voice biometric system. The bank indicated that it uses voice biometrics, along with other tools, to verify that callers are who they say they are. The feature is meant to let customers identify themselves quickly and securely; to complete transactions and other financial requests, customers must provide additional information, which a skilled identity thief could also supply.
As she noted, access to this technology is easy and completely aboveboard, so bad actors can get what they need without having to go underground.
Businesses have always needed to make an ongoing investment in fraud detection measures, including the use of ethical hacking to fight biometric breaches, as my colleague Meher Dinesh Naroju has blogged.
But businesses also have many tools at their disposal, such as synthetic data, which solves a practical problem: training AI models when real data is hard to come by. A retailer that wants to train an algorithm to identify attempted fraud might require synthetic data if it lacks access to a large set of fraudulent transactions. With synthetic fraud data, new fraud detection methods can be tested and evaluated for their effectiveness; the sketch below shows the idea in miniature.
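To make the idea concrete, here is a minimal sketch of the approach, with every feature and distribution invented purely for illustration: generate synthetic "legitimate" and "fraudulent" transactions, then train and evaluate a simple classifier on them. A real program would use a far richer feature set and a generator fitted to genuine transaction patterns.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(42)

# Synthetic features: [amount, hour_of_day, transactions_in_last_hour].
# Distributions are made up for this demo, not drawn from real data.
normal = np.column_stack([
    rng.lognormal(3.5, 1.0, 5000),  # modest amounts
    rng.integers(6, 23, 5000),      # daytime activity
    rng.poisson(1.0, 5000),         # low velocity
])
fraud = np.column_stack([
    rng.lognormal(5.5, 1.2, 500),   # larger amounts
    rng.integers(0, 6, 500),        # off-hours activity
    rng.poisson(5.0, 500),          # bursts of transactions
])

X = np.vstack([normal, fraud])
y = np.array([0] * len(normal) + [1] * len(fraud))

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Evaluate how well the detector separates the two synthetic classes.
print(classification_report(y_test, model.predict(X_test),
                            target_names=["legitimate", "fraud"]))
```

Because the fraudulent examples are generated rather than collected, the same pipeline can be rerun against new synthetic attack patterns to see whether a detection method still holds up.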
Most of all, a willingness to make a continuous investment in AI will ensure that a financial services institution (or any company) stays a step ahead of fraudsters. A number of software providers offer products designed to protect against biometric hacking. In addition, at Centific, we help businesses develop programs to make themselves secure. We know how to assess risk levels and apply the data training expertise required to ensure AI does its job well.