In a new policy document, Meta said it would halt the development of AI models it deems “critical” risk and withhold the release of “high” risk models until it undertakes the necessary mitigation efforts. The “Frontier AI Framework” report aligns with the Frontier AI Safety Commitments, which Meta and other tech giants signed in 2024.
Meta classifies risk levels using an outcome-based approach, identifying potential threat scenarios. These catastrophic outcomes span the domains of cybersecurity and chemical and biological risks. The following are the risk levels and the corresponding security efforts they warrant:
- Critical risk: A model is classified as “critical risk” if it enables the execution of a threat scenario that would produce a catastrophic outcome. In this case, Meta would halt the model’s development, restrict access to a small group of experts, and implement additional security measures.
- High risk: If a model significantly raises the likelihood of a threat scenario occurring, the company would prohibit its release and limit access to a core research team.
- Moderate risk: A model in this classification does not show any tendency to enable a threat scenario, and the company would chalk out security measures based on its release strategy.
While Meta claims to implement “security protections to prevent hacking or data exfiltration,” it does not explicitly list these measures. Further, Meta classifies system risks based on inputs from internal and external researchers, which are then reviewed by “senior-level decision-makers”, given its view that the science of evaluation is not robust enough to determine a system’s riskiness, TechCrunch reported.
What does the EU AI Act say?
Compared to Meta’s Frontier AI Framework, the European Union AI Act 2024 adopts a “risk-based approach” based on the broader risks AI systems pose to society and fundamental rights. Accordingly, the legislation defines four levels of risk for AI systems: unacceptable risk (practices that are prohibited outright), high risk, limited risk (subject to transparency obligations), and minimal risk.
Further, the Act mandates that providers of high-risk AI systems fulfil certain obligations, such as establishing a risk management system, conducting data governance, enabling automatic record-keeping, and maintaining technical documentation, among others.
Before the Act’s passage, the EU aimed for regulatory proportionality: protecting fundamental rights and freedoms without hindering AI adoption. The framework acknowledges that certain AI systems require greater scrutiny than others.
What does the USA’s AI Risk Management Framework say?
In 2024, the United States Commerce Department’s National Institute of Standards and Technology (NIST) published a guidance document identifying generative AI risks and corresponding solutions to mitigate them. The risk management efforts were framed considering potential harms to people, organisations, and ecosystems.
These risks are categorised, following the UK’s International Scientific Report on the Safety of Advanced AI, into three categories:
- Technical/model risks (or risks from malfunction): This comprises confabulation, dangerous or violent recommendations, data privacy, harmful bias, etc.
- Misuse by humans (or malicious use): This comprises easy access to chemical, biological, radiological, or nuclear (CBRN) information, data privacy, human-AI configuration, obscene/degrading/abusive content, information integrity, etc.
- Ecosystem/societal risks (or systemic risks): This comprises data privacy, environmental, and intellectual property risks.
Why does this matter?
Meta’s document detailing its risk approach comes at a time when several regions have banned its rival DeepSeek AI, citing data privacy concerns. As TechCrunch noted, the framework may be an effort by Meta to differentiate its AI strategy from that of the Chinese firm. Further, while Meta moves toward a risk-based classification for its AI models, compliance remains largely voluntary and focuses on the company’s internal risk management strategies and governance. Finally, since Meta contends that its framework is not absolute and is subject to updates as the AI ecosystem evolves, it will be interesting to see how the company complies with global risk-based norms while developing its own.