Why Google Bard Matters to Conversational AI
Bard has spoken. Google recently released its highly anticipated AI-powered conversational search chatbot, Bard, on a wait-listed basis. The launch was seen as a major step for Google, which has been developing Bard's underlying AI, LaMDA, since 2020. So far, the response to Bard has been underwhelming. But Bard is in its early stages, and it could represent a more transparent application of AI for conversational search.
Reactions to Bard
Bard is Google’s answer to OpenAI’s popular ChatGPT assistant. It offers a more conversational, concise way to deliver results on Google Search: rather than returning lists of web pages and links the way a traditional search engine does, Bard draws on information from the web to answer questions directly. It can also write essays, poems, and other content, much as ChatGPT does.
Left to its own devices, Google had little motivation to release Bard and upend search as we’ve known it for years. Conversational AI tools flourish by giving searchers concise answers instead of links to other sites, but Google’s ad model depends on people staying on Google Search and clicking those links.
Google was pressured to make Bard available, however, because ChatGPT’s popularity threatened Google’s standing as a leading technology innovator. On top of that, Microsoft, a major investor in OpenAI, released a conversational chatbot as part of Bing search, built on the same OpenAI technology that powers ChatGPT.
Google needed to act to show that it remains relevant.
Instead, Google has invited indifference and criticism. Technology pundit Ben Parr, for example, compared Bard to ChatGPT and concluded:
At least for now, Open AI’s ChatGPT is far superior to Google Bard. There’s just no comparison. It doesn’t even compare to the GPT-3.5 version of ChatGPT released in December — ChatGPT 3.5 was able to correctly solve the same math challenge Bard couldn’t solve.
He also tweeted, “Google still has some of the most advanced AI on the planet (Deepmind, anyone?), but when it comes to public-facing large language models, it almost feels antiquated.”
Elsewhere, Bard has been described as boring, plagiaristic, and a follower. These are not good looks.
Why Bard Matters
But Bard has something going for it: transparency. Microsoft’s Bing conversational AI is built on a black box; no one really knows the inner workings of OpenAI’s technology. That matters because ChatGPT has already demonstrated problems with bias and accuracy, among other issues, which creates a trust problem, to say the least.
By contrast, we know where Bard comes from: the LaMDA large language model, which is trained on data sets of public dialogue and web data. You can dig deeper into how LaMDA functions through the publicly available research and explanatory posts Google has published on its AI blog.
With conversational AI and generative AI, transparency is a hot-button issue, and Google is getting out in front of it. But the company clearly has a long way to go to win the war for public approval.
To use AI in a responsible way, contact Centific. We use an approach known as Mindful AI to ensure that AI is adopted in an ethical, inclusive fashion.