Why AI Leadership Needs More Transparency and Governance
The season of responsible AI is upon us. Or is it? On July 21, the White House struck a deal with several AI companies to “manage the risks” posed by the technology. Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI all agreed to follow a set of measures intended to make AI safer. These include prioritizing research on the societal risks that AI systems can pose, such as harmful bias and discrimination, and protecting privacy.
Shortly thereafter, Microsoft, Anthropic, Google, and OpenAI launched something called the Frontier Model Forum. The Forum defines frontier models as large-scale machine-learning models that exceed the capabilities currently present in the most advanced existing models and can perform a wide variety of tasks. The Forum aims to advance AI safety research, identify safety best practices, share knowledge with policymakers and civil society, and support efforts to leverage AI to address society’s biggest challenges.
The Forum also wants to promote knowledge sharing and best practices among industry, governments, civil society, and academia, with a focus on safety standards and practices that mitigate a wide range of potential risks. But as I read the news about these initiatives, I noticed something crucial missing: a commitment to transparency about the people developing AI. This is a major omission that needs to be addressed if AI is to be developed responsibly.
In the United States, citizens must pass mandated tests to do certain things legally, such as drive a car or practice a trade like electrical work. But there is no qualifying test for the people who develop AI or who lead the companies in charge of these tools. That is striking when you consider that the very people who know AI from the inside have warned that it raises the risk of extinction.
A Cautionary Tale about Bad Leadership
Well, what could go wrong if the wrong people are developing AI? To give just a taste of the risks involved, consider the tragedy of the Titan submersible, which recently imploded during an ill-advised expedition to view the wreck of the Titanic in the North Atlantic Ocean. All five people aboard died.
As details about the incident unfolded, we learned some disturbing realities about the history of the Titan. This was an avoidable tragedy. Why did it happen? Because of a series of bad decisions: the vessel was constructed from material that could not withstand the stress of a deep-sea dive, and important safety and testing protocols for deep-sea diving were not followed. So, how did those bad decisions get made?
Unfortunately, Stockton Rush, the CEO of OceanGate (the company responsible for building and operating the Titan), turned out to be a notorious bully who did not take well to criticism. He silenced critics with the threat of lawsuits. Several individuals warned him about the perilous path he was treading, but Rush brushed off those warnings, telling critics to keep their opinions to themselves and questioning their expertise in the matter.
Rush seemed determined to ignore the potentially deadly disaster that others were cautioning him about, driven by his burning desire to innovate (which reminds me of the unbridled thirst for innovation now on display in AI). And he would tolerate no disagreement inside his company, either. David Lochridge, the former director of marine operations at OceanGate, was reportedly fired in January 2018 after he raised concerns about the safety of the submersible. Lochridge had presented a scathing quality control report on the vessel to OceanGate’s senior management.
In the report, Lochridge highlighted a number of safety concerns, including the sub’s carbon fiber hull, which he said was not strong enough to withstand the pressures of deep-sea diving. He also warned that the sub’s electrical systems had not been properly tested and that the crew had not been properly trained.
What Can Be Done to Ensure Executive Transparency?
The toxic culture emanating from the OceanGate CEO overrode common sense and concerns about safety. A culture of bullying from the executive ranks created an environment in which catastrophic decisions could be made, and fear begat more bullying down the chain of command. All because one person had unchecked power. I am not suggesting that the people calling the shots in AI are tyrants like Stockton Rush. But neither am I suggesting that they are nice people. The truth is, we don’t know.
So, I’m asking the following:
- What do we really know about the individuals developing a technology that could cause mass extinction?
- What governance policies are in place at AI companies to vet who these leaders are, how they manage people, how transparent they are about what they do and do not know about AI’s potential, and how committed they are to the ethical development of AI? Do they sign a pledge, akin to the Hippocratic Oath, when they work on AI development?
Publicly traded firms hold their leadership accountable via governance policies that are documented and expected to be followed. But those policies address the needs of investors first. What about the needs of humanity?
It’s time for businesses to adopt Safe AI principles that hold decision makers to ethical standards of behavior. As part of that, they should be transparent about the steps they have taken to ensure that no single individual has enough authority to override the concerns of their colleagues, particularly when it comes to decisions about developing large language models.
Am I asking for too much?