‘Frontier AI Framework’ Identifies Risk Categories, Action Plan

Meta set new limits on the release of its advanced artificial intelligence models, establishing criteria for restricting systems deemed too dangerous for public release.
The company’s Frontier AI Framework identifies two risk categories: high and critical. High-risk systems could aid in cybersecurity breaches or chemical and biological attacks but would not reliably produce such outcomes. Critical-risk systems could enable catastrophic events with no available mitigation to prevent them.
Potential threats include AI-driven corporate cyber intrusions and the proliferation of high-impact biological weapons. That list isn’t exhaustive, Meta said, but represents the most pressing concerns around releasing powerful AI models.
The social media giant said it doesn’t have an empirical test for assessing risk. Instead, it evaluates models using assessments from internal and external researchers, with oversight from senior decision-makers. Current evaluation methods are not advanced enough to produce definitive risk measurements, the company said.
Meta will limit internal access to high-risk models and delay their release until mitigation measures bring them down to a moderate risk level. If a model is deemed critical risk, development will be halted and security measures put in place to prevent leaks, the company said.
The framework appears to be a response to growing scrutiny over Meta’s AI policies. The company has positioned itself as a proponent of open AI development, releasing models like the Llama series with fewer restrictions than competitors such as OpenAI, which keeps its systems gated behind an API. While Meta’s approach has led to widespread adoption, it has also drawn criticism, including reports that U.S. adversaries have used Llama models to develop AI-driven defense tools.
Meta said the Frontier AI Framework will evolve as AI capabilities and risks change.
The move may also be aimed at distinguishing Meta from Chinese AI company DeepSeek, which has taken a similarly open approach but has faced criticism for the models’ lack of safeguards.
“We believe that by considering both benefits and risks in making decisions about how to develop and deploy advanced AI, it is possible to deliver that technology to society in a way that preserves its benefits while maintaining an appropriate level of risk,” Meta said.
The company has not detailed specific security measures for critical-risk AI systems or disclosed whether external audits will play a role in its risk assessment process.