Wesley.Intel: How to save society from AI when law is not enough?

OpenAI CEO Sam Altman appearing in front of the US Congress this week. Photo: Reuters

Published May 23, 2023


On Tuesday last week I spent some time watching Sam Altman, the co-founder and chief executive of OpenAI, the company behind ChatGPT, appear before the US Congress.

He was there to clear the air about all things artificial intelligence (AI). It was an unusual appearance, and far less hostile than many similar hearings faced by businessmen in the tech industry.

Not so long ago the heads of some of the biggest tech companies faced very angry lawmakers as they answered questions about the misdemeanors of their platforms.

This time around, however, Altman appeared to be there to beg lawmakers to keep the AI industry in check. A few days before, another Silicon Valley veteran, Eric Schmidt, who now leads a US government tech think tank, had indicated that governments have no capacity to keep the AI industry in check. Instead, he suggested, the industry will have to keep itself in check.

It was, therefore, interesting to hear Altman calling for regulators to keep the industry in check.

One congressman remarked that it was unusual for a businessman to ask regulators to do their job; in most cases it is the other way around, with regulators designing laws to hold businesses accountable. Why would the leader of an AI company request that government regulate the industry?

To understand why such a move was unusual, one has to take into account that Altman is no ordinary founder. At one point he was the president of Y Combinator, the technology accelerator that gave birth to companies such as Airbnb and Dropbox. He was there when companies that challenged the status quo were hatched. He knows what it took for Airbnb to challenge regulations in the hospitality industry, and he has watched regulators struggle over time to manage and make laws for new innovations.

When he called on regulators to come for AI, he probably knew deep down that governments have little capacity to regulate the industry. Developments in the field are so advanced that very few lawmakers understand what lies behind artificial intelligence.

It’s understandable, however, why a new innovation would attract regulators. In the history of technology, almost every major development has been followed by standards, regulations and other legal safeguards to limit harm. It’s no surprise, therefore, that the AI boom is also being followed by calls for regulation. The challenge with AI, however, is that it will be difficult for regulations to safeguard society from its harms.

The nature of AI is such that it makes progress and advances daily; to some extent it can even be said to have a mind of its own. A law that is relevant today would be irrelevant tomorrow, and there are unforeseen scenarios that only the technology itself could anticipate. Given this, it may be ideal for industry, together with governments and non-governmental entities, to create an AI that would serve as the watchdog of other AIs. Essentially, AI is the only technology that can hold other AIs accountable: it could detect concerning activities as prescribed by a body consisting of public and private entities.

A combination of oversight technology and human oversight is probably one way of getting closer to creating safeguards against AI harms. This will have to be a global effort, with countries across the world contributing their own approaches to safeguarding society.

Regulators need to wake up and realise that tech leaders are not here to ask for permission. It’s in the DNA of tech founders to build things that create chaos and to fix them later. For now it’s ChatGPT; at some point it was Uber; tomorrow it will be something else.

As we enter the AI age we will see more trouble for the tech sector. Regulatory mechanisms that worked in the past will not work in the future.

Regulators of the future may need to spend some time in innovation labs to understand future technologies. Regulators who understand technology culture and innovations will understand how to regulate the industry with impact.

Current attempts to regulate AI will either fail or stifle innovation. The industry may also need to be proactive in showing the way forward and meet regulators halfway. Failure to take some of these actions may lead to AI tools that escape regulatory frameworks altogether.

Wesley Diphoko is the Editor-In-Chief of FastCompany (SA) magazine. You can follow him via Twitter: @WesleyDiphoko

BUSINESS REPORT