
Spooked by ChatGPT, US Lawmakers Want to Create an AI Regulator

At a congressional hearing, senators from both parties and OpenAI CEO Sam Altman said a new federal agency was needed to protect people from AI gone bad.

Since the tech industry began its love affair with machine learning about a decade ago, US lawmakers have chattered about the potential need for regulation to rein in the technology. No proposal to regulate corporate AI projects has come close to becoming law—but OpenAI’s release of ChatGPT in November has convinced some senators there is now an urgent need to do something to protect people’s rights against the potential harms of AI technology.

At a hearing held by a Senate Judiciary subcommittee yesterday, attendees heard a terrifying laundry list of ways artificial intelligence can harm people and democracy. Senators from both parties spoke in support of the idea of creating a new arm of the US government dedicated to regulating AI. The idea even got the backing of Sam Altman, CEO of OpenAI.

“My worst fear is that we—the field, the technology, the industry—cause significant harm to the world,” Altman said. He also endorsed the idea of AI companies submitting their AI models to testing by outsiders and said a US AI regulator should have the power to grant or revoke licenses for creating AI above a certain threshold of capability.

A number of US federal agencies, including the Federal Trade Commission and the Food and Drug Administration, already regulate how companies use AI today. But Senator Peter Welch said his time in Congress has convinced him that the legislature can’t keep up with the pace of technological change.

“Unless we have an agency that is going to address these questions from social media and AI, we really don't have much of a defense against the bad stuff, and the bad stuff will come,” says Welch, a Democrat. “We absolutely have to have an agency.”

Richard Blumenthal, a fellow Democrat who chaired the hearing, said that a new AI regulator may be necessary because Congress has shown it often fails to keep pace with new technology. US lawmakers’ spotty track record on digital privacy and social media was mentioned frequently during the hearing.

But Blumenthal also expressed concern that a new federal AI agency could struggle to match the tech industry’s speed and power. “Without proper funding you’ll run circles around those regulators,” he told Altman and his fellow witness from the industry, Christina Montgomery, IBM’s chief privacy and trust officer. Altman and Montgomery were joined by psychology professor turned AI commentator Gary Marcus, who advocated for the creation of an international body to monitor AI progress and encourage safe development of the technology.

Blumenthal opened the hearing with an AI voice clone of himself reciting text written by ChatGPT, to highlight that AI can produce convincing results.

The senators did not suggest a name for the prospective agency or map out its possible functions in detail. They also discussed less radical regulatory responses to recent progress in AI.

Those included endorsing the idea of requiring public documentation of AI systems’ limitations or the datasets used to create them, akin to an AI nutrition label. Such ideas were introduced years ago by researchers like former Google Ethical AI team lead Timnit Gebru, who was ousted from the company after a dispute over a prescient research paper warning about the limitations and dangers of large language models.

Another change urged by lawmakers and industry witnesses alike was requiring disclosure to inform people when they’re conversing with a language model and not a human, or when AI technology makes important decisions with life-changing consequences. One effect of a disclosure requirement could be to reveal when a facial recognition match is the basis of an arrest or criminal accusation.

The Senate hearing follows growing interest from US and European governments, and even some tech insiders, in putting new guardrails on AI to prevent it from harming people. In March, a group letter signed by major names in tech and AI called for a six-month pause on AI development; this month the White House called in executives from OpenAI, Microsoft, and other companies and announced it is backing a public hacking contest to probe generative AI systems; and the European Union is currently finalizing a sweeping law called the AI Act.

IBM’s Montgomery yesterday urged Congress to take inspiration from the AI Act, which categorizes AI systems by the risks they pose to people or society and sets rules for—or even bans—them accordingly. She also endorsed the idea of encouraging self-regulation, highlighting her position on IBM’s AI ethics board, although similar structures at Google and Axon have become mired in controversy.

Tech think tank the Center for Data Innovation said in a letter released after yesterday’s hearing that the US doesn’t need a new regulator for AI. “Just as it would be ill-advised to have one government agency regulate all human decision-making, it would be equally ill-advised to have one agency regulate all AI,” the letter said.

“I don’t think it’s pragmatic, and it’s not what they should be thinking about right now,” says Hodan Omaar, a senior analyst at the CDI.

Omaar says the idea of booting up a whole new agency for AI is improbable given that Congress has yet to follow through on other necessary tech reforms like the need for overarching data privacy protections. She believes it is better to update existing laws and allow federal agencies to add AI oversight to their existing regulatory work.

The Equal Employment Opportunity Commission and the Department of Justice issued guidance last summer on how businesses that use hiring algorithms, which may expect people to look or behave a certain way, can stay in compliance with the Americans with Disabilities Act. That guidance shows how AI policy can overlap with existing law and involve many different communities and use cases.

Alex Engler, a fellow at the Brookings Institution, says he’s concerned that the US could repeat problems that sank federal privacy regulation last fall. The historic bill was scuppered by California lawmakers withholding their votes because the law would override the state’s own privacy legislation. “That’s a good enough concern,” Engler says. “Now is that a good enough concern that you're gonna say we're just not going to have civil society protections for AI? I don't know about that.”

Though the hearing touched on potential harms of AI ranging from election disinformation to conceptual dangers that don’t yet exist, like self-aware AI, the generative AI systems such as ChatGPT that inspired the hearing got the most attention. Multiple senators argued they could increase inequality and monopolization. The only way to guard against that, said Cory Booker, a Democratic senator who has cosponsored AI regulation in the past and supported a federal ban on face recognition, is if Congress creates rules of the road.