AI experts urge E.U. to tighten the reins on tools like ChatGPT

A group of prominent artificial intelligence researchers is calling on the European Union to expand its proposed rules for the technology to expressly target tools like ChatGPT. REUTERS/Dado Ruvic/Illustration/File Photo

Published Apr 17, 2023

A group of prominent artificial intelligence researchers is calling on the European Union to expand its proposed rules for the technology to expressly target tools like ChatGPT, arguing in a new brief that such a move could "set the regulatory tone" globally.

The E.U.'s AI Act initially proposed new transparency and safety requirements for specific "high-risk" uses of the software, such as in education or law enforcement. But it sidestepped so-called "general purpose" AI, like OpenAI's popular chatbot, which can serve many functions.

In December, the European Council approved an amended version of the draft that would apply some of the same restrictions for "high-risk" AI to "general purpose" tools, widening its scope. But the draft has not yet been adopted and faces political hurdles over the expansion.

Now, as tech companies rush to integrate AI into more everyday products, a group of top AI scholars is calling on E.U. officials to treat tools like ChatGPT as "high risk," too.

The brief, signed by former Google AI ethicist Timnit Gebru and Mozilla Foundation President Mark Surman, among dozens of others, calls for European leaders to take an "expansive" approach to what they cover under their proposed rules, warning that "technologies such as ChatGPT, DALL-E 2, and Bard are just the tip of the iceberg."

While chatbots like ChatGPT and Microsoft's Bard are currently generating significant attention, the group cautioned policymakers against focusing too narrowly on them, warning that such an approach "would ignore a large class of models which could cause significant harm if left unchecked."

The brief was also signed by the AI Now Institute's Amba Kak and Sarah Myers West, former advisers to Federal Trade Commission Chair Lina Khan who penned a report Tuesday calling for greater scrutiny of how consolidation impacts AI harms, as we reported.

While Europe has yet to adopt its own AI rules, its process is still further along than in the United States, where federal policymakers are just starting to explore AI-specific regulations.

Kak, the former FTC adviser, said the E.U. "will likely be the first to enact an AI-specific omnibus framework" and in doing so would be "setting global precedent."

Crucial to any new regulations, researchers wrote, would be to ensure that common-use AI tools are "regulated throughout the product cycle," not just once users get a hold of them.

"The original development stage is crucial, and the companies developing these models must be accountable for the data and design choices they make," they wrote.

And they urged European officials to drop language in parts of the proposal that would allow AI developers to dodge accountability by using legal disclaimers in their products. "It creates a dangerous loophole that lets original developers . . . off the hook," they wrote.

Alex Hanna, director of research at the Distributed AI Research Institute, said in an email that exempting tools like ChatGPT and Bard would send "a strong signal that the EU does not want to focus on models which are already causing significant harm."

Some European leaders are already pushing to expand the requirements for those tools. According to Politico Europe, two of the lead E.U. lawmakers on the AI Act in February "proposed that AI systems generating complex texts without human oversight should be part of the 'high-risk' list - an effort to stop ChatGPT from churning out disinformation at scale."

But, according to the report, "The idea was met with skepticism by right-leaning political groups in the European Parliament," creating political roadblocks for the push.

European Commissioner for Internal Market Thierry Breton told Reuters in February the bloc's proposal will aim to address concerns posed by new chatbots and other similar products.

"As showcased by ChatGPT, AI solutions can offer great opportunities for businesses and citizens, but can also pose risks," he told the outlet. "This is why we need a solid regulatory framework to ensure trustworthy AI based on high-quality data."

WASHINGTON POST