Anthropic looks to fund a new, more comprehensive generation of AI benchmarks

[Image: Anthropic Claude 3.5 logo. Image Credits: Anthropic]

Anthropic is launching a program to fund the development of new types of benchmarks capable of evaluating the performance and impact of AI models, including generative models like its own Claude.

Unveiled on Monday, Anthropic’s program will dole out grants to third-party organizations that can, as the company puts it in a blog post, “effectively measure advanced capabilities in AI models.” Those interested can submit applications to be evaluated on a rolling basis.

“Our investment in these evaluations is intended to elevate the entire field of AI safety, providing valuable tools that benefit the whole ecosystem,” Anthropic wrote on its official blog. “Developing high-quality, safety-relevant evaluations remains challenging, and the demand is outpacing the supply.”

As we’ve highlighted before, AI has a benchmarking problem. The most commonly cited benchmarks for AI today do a poor job of capturing how the average person actually uses the systems being tested. There are also questions as to whether some benchmarks, particularly those released before the dawn of modern generative AI, even measure what they purport to measure, given their age.

The very high-level, harder-than-it-sounds solution Anthropic is proposing is the creation of challenging benchmarks with a focus on AI security and societal implications, delivered via new tools, infrastructure and methods.

The company calls specifically for tests that assess a model’s ability to accomplish tasks like carrying out cyberattacks, “enhancing” weapons of mass destruction (e.g. nuclear weapons) and manipulating or deceiving people (e.g. through deepfakes or misinformation). For AI risks pertaining to national security and defense, Anthropic says it’s committed to developing an “early warning system” of sorts for identifying and assessing risks, although it doesn’t reveal in the blog post what such a system might entail.

Anthropic also says it intends its new program to support research into benchmarks and “end-to-end” tasks that probe AI’s potential for aiding in scientific study, conversing in multiple languages and mitigating ingrained biases, as well as self-censoring toxicity.

To achieve all this, Anthropic envisions new platforms that allow subject-matter experts to develop their own evaluations and large-scale trials of models involving “thousands” of users. The company says it’s hired a full-time coordinator for the program and that it might purchase or expand projects it believes have the potential to scale.

“We offer a range of funding options tailored to the needs and stage of each project,” Anthropic writes in the post, though an Anthropic spokesperson declined to provide any further details about those options. “Teams will have the opportunity to interact directly with Anthropic’s domain experts from the frontier red team, fine-tuning, trust and safety and other relevant teams.”

Anthropic’s effort to support new AI benchmarks is a laudable one — assuming, of course, there’s sufficient cash and manpower behind it. But given the company’s commercial ambitions in the AI race, it might be a tough one to completely trust.

In the blog post, Anthropic is rather transparent about the fact that it wants certain evaluations it funds to align with the AI safety classifications it developed (with some input from third parties like the nonprofit AI research org METR). That’s well within the company’s prerogative. But it may also force applicants to the program into accepting definitions of “safe” or “risky” AI that they might not completely agree with.

A portion of the AI community is also likely to take issue with Anthropic’s references to “catastrophic” and “deceptive” AI risks, like nuclear weapons risks. Many experts say there’s little evidence to suggest AI as we know it will gain world-ending, human-outsmarting capabilities anytime soon, if ever. Claims of imminent “superintelligence” serve only to draw attention away from the pressing AI regulatory issues of the day, like AI’s hallucinatory tendencies, these experts add.

In its post, Anthropic writes that it hopes its program will serve as “a catalyst for progress towards a future where comprehensive AI evaluation is an industry standard.” That’s a mission the many open, corporate-unaffiliated efforts to create better AI benchmarks can identify with. But it remains to be seen whether those efforts are willing to join forces with an AI vendor whose loyalty ultimately lies with shareholders.
