How Businesses Can Succeed with Facial Recognition Technology
Most people who own a mobile phone or have an online banking account know what facial recognition technology is, even if they do not use it. With the help of artificial intelligence (AI), the software identifies or confirms someone’s identity by scanning their face. Once activated, facial recognition makes authentication easier and faster when a user logs into a site or unlocks a mobile device.
Unfortunately, facial recognition is not foolproof. Malicious parties continue to find ways to spoof their way past interfaces that use facial recognition and hack protected sites and devices. For example, deep fakes can produce realistic but phony models of someone’s face, fooling the technology into accepting a hacker as the legitimate user.
With the right approach, though, businesses can keep facial recognition software secure and give their customers an intelligent, human-centric experience at the same time. Here’s how:
1 A Continuous Investment in AI
Businesses are locked in a constant game of one-upmanship with hackers to protect user interfaces that rely on facial recognition technology.
Technologies that use a laser to perform a 3D scan of a person’s face (such as the one used in the latest iPhones) are state of the art. At the same time, you can be sure that hackers are already working on more sophisticated deep fakes to circumvent that state of the art. Any business that uses facial recognition therefore needs to accept that its developers will have to invest continuously in AI that can spot deep fakes.
A continuous investment in AI is not just about stopping hackers; it’s about creating a more user-centered, intelligent experience. For example, hackers can spoof facial recognition with photos scraped from someone’s social media profile. That’s why businesses such as Apple are going beyond computer vision (used for flat images) and investing in motion capture, which requires a user to do a full rotation of their head as part of the interface. Motion capture, which might ask someone to move their head and neck or blink their eyes, makes facial recognition more secure. It also makes the log-in more emotionally trustworthy for the user, and thus a better experience.
Motion capture is already emerging as a more user-centric and secure way to improve the technology, but a business needs to be willing to make that investment, because motion capture is not a plug-and-play technology. A facial recognition system that requires someone to blink must realistically capture the subtle changes in a person’s face when they blink, such as the slight movement of the muscles around their eyes. The model needs to incorporate all of these data points to distinguish between a person and a spoof.
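To make the blink requirement concrete, here is a minimal sketch of one widely used liveness heuristic, the eye aspect ratio (EAR): the ratio of an eye’s vertical landmark distances to its horizontal distance drops sharply when the eyelid closes. The landmark coordinates and threshold below are hypothetical illustrations, not any vendor’s actual implementation.

```python
import math

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks ordered around the eye contour,
    as produced by a typical face-landmark model (hypothetical input)."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    # Two vertical eyelid distances over one horizontal eye width:
    # the ratio collapses toward zero when the eye closes.
    vertical = dist(eye[1], eye[5]) + dist(eye[2], eye[4])
    horizontal = dist(eye[0], eye[3])
    return vertical / (2.0 * horizontal)

def count_blinks(ear_per_frame, closed_threshold=0.2):
    """Count closed-to-open transitions across a sequence of video frames."""
    blinks, closed = 0, False
    for ear in ear_per_frame:
        if ear < closed_threshold:
            closed = True
        elif closed:
            blinks += 1
            closed = False
    return blinks
```

A liveness check could then require, say, at least one detected blink within a short capture window before accepting the face match.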
2 An Investment in Synthetic Data
Any AI-based technology, including facial recognition software, needs reliable data to teach itself how to be more effective. But what happens when AI lacks enough real data to train itself? This is where synthetic data comes into play.
Synthetic data is data generated with the assistance of AI from a set of real data: after being fed real examples, a computer simulation or algorithm produces new, artificial data to train an AI model. Research demonstrates that synthetic data can be as good as, or even better than, data based on actual objects, events, or people for training an AI model.
Synthetic data can help solve a practical problem: training AI models when real data is hard to come by. A retailer that wants to program an algorithm to identify attempted fraud might require synthetic data if the retailer lacks access to a large set of fraudulent transactions. With synthetic fraud data, new fraud detection methods can be tested and evaluated for their effectiveness.
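As a toy illustration of the retailer example, the sketch below fabricates “fraud-like” transactions by sampling from simple distributions fitted to a few seed records. The field names and values are made up, and production synthetic-data pipelines use far richer generative models than this.

```python
import random

random.seed(42)  # reproducible illustration

# A handful of real fraud records (hypothetical fields and values).
real_fraud = [
    {"amount": 980.0, "hour": 3, "attempts": 4},
    {"amount": 1240.0, "hour": 2, "attempts": 6},
    {"amount": 860.0, "hour": 4, "attempts": 5},
]

def fit(records, key):
    """Estimate a crude mean and spread for one field."""
    values = [r[key] for r in records]
    mean = sum(values) / len(values)
    spread = max(values) - min(values)
    return mean, spread

def generate_synthetic(records, n):
    """Sample n new records from per-field Gaussian approximations."""
    stats = {key: fit(records, key) for key in records[0]}
    synthetic = []
    for _ in range(n):
        row = {key: random.gauss(mean, spread / 2 or 1)
               for key, (mean, spread) in stats.items()}
        row["hour"] = int(row["hour"]) % 24       # keep hours valid
        row["attempts"] = max(1, round(row["attempts"]))
        synthetic.append(row)
    return synthetic

synthetic_fraud = generate_synthetic(real_fraud, 100)
```

The generated records can then be mixed into an evaluation set to measure how well a fraud detector flags them.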
By the same token, synthetic data can teach facial recognition technology to identify attempted spoofing. Businesses would ideally train on every way hackers try to spoof their gateway, but since they don’t have relationships with every hacker in the world, their facial recognition application will need to teach itself. To learn more about how synthetic data works, please read this recently published blog post from Centific.
3 A Willingness to Be Creative
As difficult as it is to believe, hackers can spoof facial recognition with simple masks that don’t even resemble the user. With a carefully planned tilt of the head and a wiggle of the eyebrows, even a simple mask can defeat facial recognition that requires eyebrow movement as part of the security protocol. How can this be? Because a facial recognition application might not be trained to look for the unexpected. A team that focuses only on stopping deep fakes could easily find itself vulnerable to less sophisticated attempts to breach facial recognition.
This is not a technology problem. It’s a human problem. A team intent on stopping hackers needs to apply human imagination and creativity to plan for even low-fidelity attempts to hack facial recognition.
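One low-tech way to apply that imagination is to enumerate a presentation-attack test matrix, so that cheap spoofs (a printed photo, a paper mask) are exercised alongside sophisticated ones. The artifact and motion names below are hypothetical examples, not an exhaustive taxonomy:

```python
from itertools import product

# Candidate spoofing artifacts, from low-fidelity to sophisticated,
# crossed with the motions a liveness check might demand.
artifacts = ["printed_photo", "paper_mask", "silicone_mask",
             "screen_replay", "deepfake_video"]
motions = ["static", "head_tilt", "eyebrow_wiggle", "blink"]

def build_test_matrix():
    """Every artifact paired with every motion: each pair is one
    scenario a red team should attempt against the system."""
    return [{"artifact": a, "motion": m} for a, m in product(artifacts, motions)]

matrix = build_test_matrix()
```

Even this trivial cross-product surfaces scenarios (a paper mask with an eyebrow wiggle) that a deep-fake-focused team might never think to test.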
4 An Inclusive Approach
Any team that trains a facial recognition system needs to be inclusive in order to provide a human-centered, intelligent experience for everyone. For instance, facial recognition needs to accommodate head coverings rather than require users to remove them; otherwise, the technology will exclude anyone wearing a burka or turban. And, of course, the technology must be trained to recognize a range of skin tones and accessibility-related facial features, such as a user wearing an eye patch. This is what we mean by ensuring that the technology is not only secure but also intelligent and human-centered.
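A simple way to operationalize that inclusivity is to audit recognition accuracy per subgroup and flag any group that trails the best-performing one. The group labels and outcomes below are placeholder data for illustration only:

```python
def accuracy_by_group(results):
    """results: list of (group, correct) evaluation outcomes."""
    totals, correct = {}, {}
    for group, ok in results:
        totals[group] = totals.get(group, 0) + 1
        correct[group] = correct.get(group, 0) + (1 if ok else 0)
    return {g: correct[g] / totals[g] for g in totals}

def flag_gaps(per_group, max_gap=0.02):
    """Flag groups whose accuracy trails the best group by more than max_gap."""
    best = max(per_group.values())
    return [g for g, acc in per_group.items() if best - acc > max_gap]

# Hypothetical evaluation outcomes per appearance subgroup.
results = [
    ("head_covering", True), ("head_covering", True), ("head_covering", False),
    ("eye_patch", True), ("eye_patch", True),
    ("baseline", True), ("baseline", True), ("baseline", True),
]
per_group = accuracy_by_group(results)
```

Here the audit would flag the head-covering subgroup, signaling that more diverse training data is needed before the system ships.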
How We Do It at Centific
At Centific, we design facial recognition systems for clients by combining AI with a diverse team of humans in the loop, ensuring that the technology is inclusive, human-centric, and future-proof.
As we work with our clients to design facial recognition applications, we rely on our own globally diverse crowdsourced team to train our models. Our crowdsourced team comes from all walks of life and global cultures. This ensures that we:
- Are inclusive in our approach, ensuring that what we design safeguards people from all backgrounds and countries, including people with disabilities.
- Are creative and imaginative because we draw from a larger pool of people who can collaborate to test all the ways someone might spoof the technology. For example, someone with a gaming background will contribute ideas and scenarios that are different from someone with a banking background.
But we also need a common platform to manage and scale the work we do. This is where our OneForma platform comes into play. We rely on the OneForma platform to:
- Manage workflows, such as the myriad data inputs required to train facial recognition.
- Stress-test the system. In our labs in Spain, the United States, India, and Singapore, our team role-plays the many ways someone might spoof facial recognition. OneForma records the outcomes and uses AI to strengthen the application. OneForma also uses synthetic data to teach the AI beyond what human testers can do, as noted above.
Bottom line: AI, a diverse set of creative thinkers, and the right platform are crucial for ensuring that facial recognition technology protects businesses while providing an intelligent, human-centric experience. Contact us to learn more.