The fight over AI biosecurity risk takes a twist

With help from Rebecca Kern and Derek Robertson

In the pantheon of existential dangers posed by the rise of artificial intelligence, few loom larger than biosecurity — the fear that generative AI could help bad actors engineer superviruses and other pathogens, or even that an AI could one day create deadly bioweapons all on its own.

The Biden administration has paid special attention to the issue, giving biosecurity a prominent place in the AI executive order it unveiled in October. Key members of the Senate are also anxious about the merger of AI and biotechnology.

But how realistic is the threat, and what evidence exists to support it? Those questions have started to take some big twists lately.

A white paper published by OpenAI last week poured gasoline on the growing debate over the possibility that terrorists, or scientists, or just mischief-makers could use artificial intelligence to build a world-ending bioweapon.

The paper largely downplayed the concern, concluding that GPT-4, OpenAI’s most powerful large language model, provides “at most a mild uplift” for biologists working to create lethal viruses. But the company’s relatively sanguine view was attacked by Gary Marcus, an emeritus psychology professor at New York University who has more recently become a figure in the AI policy space.

On Sunday, Marcus accused OpenAI researchers of misanalyzing their own data. He said the company used an improper statistical test, and argued that the paper’s findings actually show that AI models like GPT-4 do meaningfully raise the ability of biologists, particularly expert ones, to create dangerous new pathogens.

The NYU professor added that if he had peer-reviewed OpenAI’s paper, he would’ve sent it back with “a firm recommendation of ‘revise and resubmit.’”

If we’re wrong about the risks, Marcus pointed out, humans don’t get to make that mistake twice: “If an LLM equips even one team of lunatics with the ability to build, weaponize and distribute even one pathogen as deadly as covid-19, it will be a really, really big deal,” he warned.

In response to Marcus’ critique, Aleksander Madry, head of preparedness at OpenAI, said the company was “very careful to only report what our research data says, and in this case, we found there was a (mild) uplift in accessing biological information with GPT-4 that merits additional research.”

In a nod to Marcus’ claim that OpenAI used the wrong testing parameters, Madry said that the research paper “included discussion of a range of statistical approaches and their relevance.” But he also said that more work needs to be done on “the science of preparedness, including how we determine when risks become ‘meaningful.’”

It’s easy to understand why many observers fear the looming marriage of AI and biotechnology. One of AI’s most powerful demonstrations to date has been in biology, where a system called AlphaFold — now owned by Google DeepMind — has proved incredibly good at predicting the intricate structures of proteins and other complex molecules. And automated synthesis machines can already crank out genetic material on request.

Accordingly, concern has swept across the highest levels of government. In April, Sen. Martin Heinrich (D-N.M.), one of Senate Majority Leader Chuck Schumer’s three top lieutenants on AI legislation, told POLITICO that AI-boosted bioweapons were one of the “edge cases” keeping him up at night. A paper published in June by researchers at the Massachusetts Institute of Technology sent a shudder across Capitol Hill with its warning that AI-powered chatbots could assist in the development of new pathogens, including for people “with little or no laboratory training.” In September, researchers from the RAND Corp. and other top think tanks warned senators that “existing AI models are already capable of assisting nonstate actors with biological attacks that would cause pandemics, including the conception, design, and implementation of such attacks.”

By October, the anxiety had reached the White House — the AI executive order signed by President Joe Biden included new screening mechanisms for companies involved in gene synthesis and promoted know-your-customer rules for firms providing synthetic genes and other biotech tools to researchers. Top researchers at RAND played a key role in ensuring those biosecurity requirements made it onto the president’s desk.

But many experts still see a big gap between what’s theoretically possible and what’s actually likely to happen, and doubt how much worse AI really makes the threat.

Skeptical researchers say there’s almost nothing an LLM can teach amateur biologists that they couldn’t already learn on Google, and question whether policymakers should spend time and energy on such a speculative risk.

Researchers like Nancy Connell, a biosecurity expert at Rutgers University, have even claimed that an avalanche of tech dollars is skewing how policy experts approach the risks posed by AI and biosecurity. Groups like Open Philanthropy, an effective altruist organization funded by billionaire Facebook co-founder Dustin Moskovitz, have pumped hundreds of millions of dollars into Washington’s AI ecosystem in an effort to focus policymakers on the technology’s existential risks to humanity, including bioweapons.

The OpenAI paper is part of a small wave of new research casting doubt on the potential bio-risks of AI. The congressionally mandated National Security Commission on Emerging Biotechnology (NSCEB) issued a report last week that claimed LLMs “do not significantly increase the risk of the creation of a bioweapon.” Even RAND has walked back some of its earlier claims, publishing a new report last month that found current LLMs “[do] not measurably change the operational risk” of a biological attack.

But the debate over AI’s impact on biosecurity is far from over. Even skeptical researchers say it’s wise to keep a close eye on the nexus of fast-moving technologies like AI and biotech. While the NSCEB downplayed fears over the current generation of LLMs, it is concerned about the potential for “biological design tools,” or BDTs — AI models that process biological data in the same way that LLMs process human language — to supercharge the ability of trained biologists to create deadly new diseases.

The commission warned that if BDTs are one day merged with LLMs, even amateur biologists could get a boost.

Gregory C. Allen, an AI researcher at the Center for Strategic and International Studies think tank, gave OpenAI credit for “proactively” examining whether its technology raises biosecurity risks. But he takes little solace in the finding that today’s AI systems are unlikely to help create killer pathogens.

“When you have a few notable leaders in this industry predicting human-level AI in as little as five years, we should recognize that where we currently are doesn’t necessarily tell us very much about where we might be going in terms of future AI and bioweapon risk,” Allen said.

meta's pre-plan

Meta says it will start labeling AI-generated images on Instagram, Facebook and Threads — eventually.

As election season approaches, large social media companies are getting serious about the threats that artificial intelligence could pose to democracy (like the recent robocall that used an artificially generated Joe Biden voice to urge voters to skip the New Hampshire primary). Last fall, Meta began attaching an “Imagined by AI” label to photorealistic images created with its own Meta AI system. Now, the company said in a blog post, it will start applying visible labels in the coming months to AI-generated images from its competitors as well.

But the technology isn’t quite ready for prime time. Meta’s announcement was thin on specifics, since there still isn’t an industry-wide standard on how to label AI-generated content. The plan also doesn’t cover AI-generated audio and video, Meta said, because other companies are not yet embedding metadata in those types of content.

Nick Clegg, Meta’s president of global affairs, said the company is working with the industry forum Partnership on AI to start by labeling AI-generated images from OpenAI, Google, Adobe, Midjourney and Shutterstock, as those companies implement their plans for adding metadata to images created by their tools.

“Meta’s policy is an important but still inadequate step to address these profound concerns,” Robert Weissman, president of watchdog group Public Citizen, told POLITICO. He said the “major worry” is the lack of an industry standard for video and audio, which means the most concerning deepfake videos and audio will still evade Meta’s policy. — Rebecca Kern

europe's risk factor

Now that the European Union has finalized the text of its proposed AI Act, it’s worth taking a closer look at what’s actually in it.

POLITICO’s Gian Volpicelli took a deep dive for Pro subscribers today, finding a few key takeaways:

Stay in touch with the whole team: Ben Schreckinger ([email protected]); Derek Robertson ([email protected]); Mohar Chatterjee ([email protected]); Steve Heuser ([email protected]); Nate Robson ([email protected]); Daniella Cheslow ([email protected]); and Christine Mui ([email protected]).