Why Fighting AI Bias Requires Mindful AI
How does a company fight bias in artificial intelligence (AI)? This is a question that vexes even the most mature organizations with the deepest experience developing AI-based applications.
For example:
- Amazon recently abandoned an AI-based recruitment tool because it was biased against women. (It turned out that the data on which the algorithm was trained favored male candidates.)
- Twitter vowed to test its algorithms for bias after users reported that its image preview cropping tool automatically favored images of white people.
There is no perfect answer, but businesses need to be more vigilant and fight AI bias if AI is to be trusted to improve our lives. Ironically, people are part of the problem in creating AI bias, and they are also part of the solution.
AI bias does not happen in a vacuum. It’s not the result of AI-based applications going rogue. AI bias happens because people bring their inherent biases into training AI to do its job. For example, Amazon’s recruitment software was designed to rank job seekers on a scale of one to five stars. It was trained on 10 years’ worth of resumes, most of which came from men. So, as Reuters reported:
> In effect, Amazon’s system taught itself that male candidates were preferable. It penalized resumes that included the word “women’s,” as in “women’s chess club captain.” And it downgraded graduates of two all-women’s colleges, according to people familiar with the matter. They did not specify the names of the schools.
In addition, the recruitment software favored candidates who described themselves using verbs more commonly found on male engineers’ resumes, such as “executed” and “captured.” Amazon eventually shut the project down.
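To see mechanically how this kind of bias emerges, consider a toy sketch: a naive word score derived from historically labeled resumes inherits whatever skew those labels carry. Everything below is fabricated for illustration and has no connection to Amazon’s actual system.

```python
from collections import Counter

# Toy illustration of how a scoring rule learned from historically skewed
# labels inherits that skew. All resumes and labels are fabricated.
resumes = [
    ("executed roadmap, captured market share", 1),    # historically hired
    ("led women's chess club, managed budget", 0),     # historically rejected
    ("executed migration, captured requirements", 1),  # historically hired
    ("women's college graduate, built pipeline", 0),   # historically rejected
]

hired, rejected = Counter(), Counter()
for text, label in resumes:
    (hired if label else rejected).update(text.replace(",", "").split())

# A naive word "score": how much more often a word appears among hired resumes.
for word in ["executed", "captured", "women's"]:
    print(word, hired[word] - rejected[word])
# executed 2, captured 2, women's -2: the bias is learned, not programmed.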
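```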
Mindful AI
How might a business avoid making these kinds of mistakes? We believe the answer is to make AI trustworthy, responsible, and human-centered – a framework we call Mindful AI. Mindful AI has three components:
1 Being Human-Centered
Being human-centered means designing AI solutions with the needs of people at the center. To do that, a business needs to rely on a diverse set of global talent to train AI applications to be inclusive and bias-free. For example, at Centific, we first work to understand the purpose of a client’s AI application and its intended outcomes and impacts, using human-centered design frameworks. We then ensure that the data we generate, curate, and label meets those expectations in a way that is contextual, relevant, and unbiased.
The foundation of all that work is crowdsourcing a diverse team.
Our diverse and highly skilled global workforce comes from backgrounds as diverse as our global society. Our community of collaborators encompasses different ethnicities, age groups, genders, education levels, socio-economic backgrounds, and locations among many other elements of a plural world.
Our diversity helps ensure that the AI applications we build for our clients are inclusive and localized. Examples include voice applications that account for languages, dialects, and accents, and facial recognition systems that work for anyone in the world.

Many of the people we crowdsource are not full-time employees, but they develop expertise by working regularly on our wide variety of projects. We create orientation modules, with certifications and aptitude tests, that double as educational tools for career development. After passing those tests, people on our team are placed on real projects where they earn money while receiving constant feedback and guidance to keep growing their skills. We enable our community to develop talents that open work opportunities not only at Centific but in other environments as well. One example is the translation orientation module and test, which targets people who do not necessarily have a translation education or background but who have language skills and a passion for translation. This module gives them access to millions of words for translation and an opportunity to develop a new career. As people mature and improve their results, they move up through bronze, silver, and gold levels, each of which brings new advantages within the community. As a result, our clients benefit from better-quality work.
2 Making AI Trustworthy by Keeping Humans in the Loop
Our understanding of the world is based on years and years of history written by dominant social groups. That history is biased and not inclusive. Our perception of the world around us, informed by historical data, is polluted by prejudices such as racism and sexism. AI built upon that data without filters may perpetuate bias and fail to be inclusive. Having humans in the loop to manage AI is essential to prevent AI from being biased and to make AI more inclusive of our diverse world. We need humans to establish a “fair,” comprehensive collection of data and to curate it thoroughly to ensure inclusion, rather than capturing whatever is easiest to find in datasets that may be biased. It’s a paradox: one would expect “raw” data to represent the diversity of our modern world well, but such is not the case.
For example, consider an enterprise that is building a voice recognition system for a customer service application. To train the AI model effectively, the enterprise must collect audio voice samples from a large number of people. The enterprise, hoping to make the process easier and faster, might be tempted to rely on the dominant form (or, as we say in the localization profession, “flavor”) of a language – for example, English as Americans or Brits speak it. But in doing so, the enterprise will fail to capture different flavors of the same language, such as how English is spoken in countries ranging from Australia to India, and the AI model will exclude those local flavors. The enterprise needs humans to intervene with a meticulous data collection and curation process that reflects the different language flavors – better still if the team represents the diversity of the audiences that the AI model should serve.
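As a rough illustration of what that curation discipline can look like in code, the sketch below tracks collected voice samples against per-locale quotas. The locales, quotas, and field names are assumptions made up for this example, not actual Centific sampling criteria.

```python
from collections import Counter

# Hypothetical target mix of English variants for a voice-data program.
TARGET_MIX = {
    "en-US": 0.25, "en-GB": 0.20, "en-AU": 0.15,
    "en-IN": 0.25, "en-NG": 0.15,
}

def sampling_gaps(collected, total_target):
    """Compare collected samples per locale against the target quotas and
    report how many more samples each locale still needs."""
    counts = Counter(sample["locale"] for sample in collected)
    return {
        locale: max(0, round(share * total_target) - counts.get(locale, 0))
        for locale, share in TARGET_MIX.items()
    }

# Example: three samples collected so far, against a 1,000-sample target.
collected = [{"locale": "en-US"}, {"locale": "en-US"}, {"locale": "en-IN"}]
print(sampling_gaps(collected, total_target=1000))
# -> {'en-US': 248, 'en-GB': 200, 'en-AU': 150, 'en-IN': 249, 'en-NG': 150}
```

A check like this makes under-represented flavors visible early, while there is still time to recruit more contributors, instead of after the model ships.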
Humans in the loop can also help machines create a better AI experience for each specific audience. Our AI data project teams, spread across several locations around the world, understand how different cultures and contexts affect the collection and curation of reliable AI training data, and they use that knowledge to make recommendations. We support our global team with the tools they need to flag problems, monitor them, and fix them before an AI-based solution we develop goes live.
Human-in-the-loop AI combines the strengths of people (e.g., creativity, insight drawn from ambient information, and historical and cultural context) with the strengths of machines (e.g., accuracy, speed, and the ability to manage repetitive tasks that people don’t want to do). Human and AI collaboration requires tight integration of human operations, machine learning, and user experience design. The human safety net serves as an extra feedback loop for model training. At Centific, we provide people with the tools to do all this. For instance, we developed LoopTalk, a voice AI data generation capability, to enable our team to train voice recognition models to better understand regional accents and non-typical pronunciations of certain words in a target market. Doing so helps our clients make AI more inclusive.
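The routing pattern at the heart of human-in-the-loop systems can be sketched in a few lines. The threshold, names, and toy model below are assumptions for illustration – this is the generic pattern, not LoopTalk’s actual design.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class HumanInTheLoop:
    """Route low-confidence model outputs to a human reviewer and feed the
    corrections back into a training queue (the extra feedback loop)."""
    model: Callable          # returns (transcript, confidence)
    review: Callable         # human reviewer supplies a correction
    threshold: float = 0.85  # assumed cutoff; tuned per project in practice
    training_queue: list = field(default_factory=list)

    def transcribe(self, audio_id: str) -> str:
        transcript, confidence = self.model(audio_id)
        if confidence >= self.threshold:
            return transcript                      # machine strength: speed at scale
        corrected = self.review(audio_id, transcript)      # human strength: context
        self.training_queue.append((audio_id, corrected))  # feeds the next retrain
        return corrected

# Toy usage with stand-in model and reviewer functions:
loop = HumanInTheLoop(
    model=lambda audio: ("colour me surprised", 0.62),
    review=lambda audio, draft: "color me surprised",
)
print(loop.transcribe("clip-001"))  # -> color me surprised
print(loop.training_queue)          # -> [('clip-001', 'color me surprised')]
```

The key design choice is that the human correction does double duty: it fixes the immediate output, and it becomes new training data for the next model iteration.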
3 Being Responsible
We make AI technology more inclusive by recruiting under-represented communities into the data programs we run for our customers, providing the demographic diversity needed to build more inclusive AI-based products and reduce bias. We also frequently reach out to local organizations representing minority groups and communities and invite them into our AI data initiatives.
People Need a Common Platform
Our global team relies on a single data collection and curation platform, OneForma, to train AI-based applications. OneForma mitigates bias by doing the following:
- Being mindful of how we curate and interpret data sources; otherwise, biases inherent in data generation and algorithmic development will transfer into AI applications.
- Ensuring that data sampling criteria and training data preparation reflect existing social and cultural paradigms.
- Ensuring that algorithm tuning treats de-biasing models and assessing potential impact as core requirements. Doing so requires being mindful of the data’s DNA and being purposeful about the intended outcomes.
Being mindful begins with data generation, curation, and annotation, ensuring that potential biases are addressed before the data is used to train AI algorithms. That is exactly what OneForma does.
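To make “addressed before training” concrete, a pre-training audit might check how demographic groups are represented in a labeled dataset before it reaches a model. This is a generic sketch under assumed field names and an even-split baseline, not OneForma’s actual logic.

```python
from collections import Counter

def representation_report(records, attribute, tolerance=0.10):
    """Flag demographic groups whose share of the dataset deviates from an
    even split by more than `tolerance` -- a simple pre-training balance check."""
    counts = Counter(record[attribute] for record in records)
    total = sum(counts.values())
    expected = 1 / len(counts)  # even-split baseline; real programs set quotas
    return {
        group: {"share": round(n / total, 3),
                "flagged": abs(n / total - expected) > tolerance}
        for group, n in counts.items()
    }

# Toy labeled dataset; the field name and groups are illustrative.
data = [{"speaker_gender": g} for g in ["f"] * 30 + ["m"] * 65 + ["nb"] * 5]
print(representation_report(data, "speaker_gender"))
# f: 0.30 ok; m: 0.65 flagged; nb: 0.05 flagged (expected share ~0.33)
```

Flagged groups then become targets for additional collection or re-weighting before any model training begins.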
How Everything Comes Together
One common application of Mindful AI is AI localization. Our expertise with AI localization makes it possible for our clients to make their AI applications more inclusive and personalized, respecting critical nuances in local language and user experiences that can make or break the credibility of an AI solution from one country to the next. For example, we design our applications for personalized and localized contexts, including languages, dialects, and accents in voice-based applications. That way, an app brings the same level of voice experience sophistication to every language, from English to under-represented languages. The last mile in making AI pervasive and trustworthy is localizing and personalizing content and experiences. OneForma enables that for our clients, thereby transforming the industry while helping AI applications become more inclusive and equitable.
For more insight, read a recently published post by my colleague Ilia Shifrin, “AI’s Next Personalization Frontier: Local Experiences.”
Contact Centific
Together, our OneForma platform and global team are the key to training your AI models and making your next-generation products faster, better, smarter, and more inclusive. Contact us to learn more.