Responsible AI Is Mandatory in Voice Tech Development
I had the pleasure of attending the 2023 Project Voice event, held recently in Chattanooga, Tennessee, and hosted by CEO Bradley Metrock. Project Voice is an annual opportunity for industry practitioners to discuss all things conversational AI. And what a phenomenal conference! Metrock did an outstanding job programming the content, which featured speakers ranging from Karen Webster, CEO of PYMNTS.com, to my colleague Sergio Bruccoleri, director of Platform and Innovation at Centific. Although the event covered a lot of ground, one of the major takeaways was that AI must be responsible to realize its potential, and that businesses need to hold themselves accountable for making AI responsible.
Responsible AI Dominates the Conversation
The event highlighted many ways that conversational AI can change how businesses and people interact. For instance, Cameo discussed how an app known as Cameo Kids shares personalized messages from kids’ favorite cartoon characters with families, much as celebrities share personalized messages with adults through Cameo. As Metrock wrote on his blog, Cameo Kids “provides an eye-opening glimpse into how conversational AI is going to impact the media and entertainment market massively, moving forward.”
The overriding theme of this year’s event was the need for responsible development of conversational AI – meaning AI that is inclusive, as free from bias and harmful content as possible, and as accurate as possible. For example, Sergio Bruccoleri delivered a presentation on the responsible development of conversational AI that relies on large language models (LLMs) such as ChatGPT. His talk focused on how large-scale AI models can be fine-tuned and made more responsible through reinforcement learning from human feedback (RLHF). Key takeaways included:
- Model accuracy, quality data, and the ethical use of AI are imperative for successful AI deployment at a global scale.
- Accelerating AI adoption will require an approach driven by trust and transparency.
- RLHF is a staple of responsible AI (see the sketch after this list).
- The human-in-the-loop approach is more important than ever: as models become more pervasive, human feedback plays a key role in how AI experiences evolve.
- Large-scale AI models must be inclusive. For example, AI models need to consider people with disabilities.
- Localization can make AI models more inclusive of diverse cultures. It is easy to focus only on language when we hear the word “localization,” but localization also needs to account for factors such as culture, demographics, gender, and more.
- RLHF requires a large pool of people, each with knowledge of the specific domain or topic relevant to the model.
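To make the RLHF idea in the list concrete, here is a minimal, hypothetical Python sketch of its first stage: fitting a reward model to annotator preference pairs with the Bradley-Terry pairwise objective, then using the learned reward to rank candidate responses. The featurizer, example annotations, and hyperparameters are illustrative assumptions, not Centific’s implementation or any production system.

```python
# A minimal sketch of reward modeling from human preference data -- the
# first stage of reinforcement learning from human feedback (RLHF).
# Everything here (the toy featurizer, example annotations, learning
# rate) is an illustrative assumption, not a production implementation.
import numpy as np

def featurize(text: str) -> np.ndarray:
    """Toy stand-in for a language-model embedding of a response."""
    t = text.lower()
    return np.array([
        len(t) / 100.0,                                      # length signal
        t.count("please") + t.count("sorry"),                # politeness signal
        float(any(bad in t for bad in ("hate", "stupid"))),  # toxicity flag
    ])

# Human annotators compare pairs of model responses: (preferred, rejected).
preference_pairs = [
    ("I'm sorry, I can't help with that, but here is a safe alternative.",
     "That is a stupid question."),
    ("Please find the steps below, explained carefully.",
     "Figure it out yourself."),
]

# Fit a linear reward model r(x) = w . phi(x) with the Bradley-Terry
# pairwise objective: maximize log sigmoid(r(preferred) - r(rejected)).
w = np.zeros(3)
learning_rate = 0.5
for _ in range(200):
    for preferred, rejected in preference_pairs:
        diff = featurize(preferred) - featurize(rejected)
        p = 1.0 / (1.0 + np.exp(-w @ diff))    # P(preferred beats rejected)
        w += learning_rate * (1.0 - p) * diff  # gradient ascent step

# The learned reward can now rank new candidate responses.
candidates = [
    "Happy to help -- please see the explanation below.",
    "What a stupid thing to ask.",
]
for c in sorted(candidates, key=lambda c: w @ featurize(c), reverse=True):
    print(f"{w @ featurize(c):+.2f}  {c}")
```

In a real RLHF pipeline, the linear model would be a neural reward head on top of an LLM, and the learned reward would drive a policy update (for example, via PPO) rather than a simple ranking; the human-annotated preference pairs are where the large, domain-knowledgeable pool of contributors comes in.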
AI Must Be Inclusive and Free of Bias
For conversational AI to be inclusive and free of bias, AI models need to be trained by a diverse pool of people. So, it was only fitting that this year’s Project Voice included the inaugural Women’s Summit, which Metrock described as a “speakeasy of women in AI.” Befitting the closed nature of the event, I did not attend and have no takeaways to share, other than to note that the existence of a Women’s Summit is a step forward.
Metrock underlined the importance of responsible AI when he invited all conference participants to sign “The Ethics and Integrity Charter for LLM-Based AI.” The charter stands on what he identified as six core pillars of ethical AI: transparency, inclusivity, accountability, sustainability, privacy, and compliance. By pledging to honor these, participants committed to developing AI technology in a way that benefits everyone.
Only days after the event concluded, the Biden-Harris Administration announced actions to promote responsible AI innovation that protects Americans’ rights and safety. Those actions include new investments to power responsible American AI research and development; policies to ensure the U.S. government is leading by example on mitigating AI risks and harnessing AI opportunities; and a commitment from leading AI developers (including Anthropic, Google, Hugging Face, Microsoft, NVIDIA, OpenAI, and Stability AI) to participate in a public evaluation of generative AI. During a White House meeting with major AI companies, Vice President Kamala Harris said that the leaders of major tech companies have a “moral” obligation to keep products safe.
The message is clear: if businesses don’t manage AI responsibly, the U.S. government will step in; in fact, it already has. The same is true of governing bodies around the world. For instance, the European Union is moving rapidly toward regulating AI with its proposed AI Act.
As generative AI technologies become increasingly integrated into our lives, it is crucial that we prioritize ethical considerations in their development and use. Without proper safeguards, AI has the potential to perpetuate biases or cause unintended harm. So, it is essential that we approach it with a responsible and ethical mindset. At Centific, we endeavor to do so through an approach known as Mindful AI.
Contact us to learn how we can help you develop AI responsibly.