Meta CEO Mark Zuckerberg has pledged to one day make artificial general intelligence (AGI), roughly defined as AI that can accomplish any task a human can, openly available. But in a new policy document, Meta suggests there are certain scenarios in which it may not release a highly capable AI system it developed internally.
The document, which Meta is calling its Frontier AI Framework, identifies two types of AI systems the company considers too risky to release: "high-risk" and "critical-risk" systems.
As Meta defines them, both "high-risk" and "critical-risk" systems are capable of aiding in cybersecurity, chemical, and biological attacks; the difference is that "critical-risk" systems could result in a "catastrophic outcome [that] cannot be mitigated in [a] proposed deployment context." High-risk systems, by contrast, might make an attack easier to carry out, but not as reliably or dependably as a critical-risk system.
What kinds of attacks are we talking about here? Meta gives a few examples, such as the "automated end-to-end compromise of a best-practice-protected corporate-scale environment" and the "proliferation of high-impact biological weapons." The list of possible catastrophes in Meta's document is far from exhaustive, the company acknowledges, but it includes those that Meta believes to be "the most urgent" and plausible to arise as a direct result of releasing a powerful AI system.
Somewhat surprisingly, according to the document, Meta classifies a system's risk not on the basis of any single empirical test but as informed by the input of internal and external researchers, subject to review by "senior-level decision-makers." Why? Meta says it does not believe the science of evaluation is "sufficiently robust as to provide definitive quantitative metrics" for deciding a system's riskiness.
If Meta determines a system is high-risk, the company says it will limit access to the system internally and will not release it until it implements mitigations to "reduce risk to moderate levels." If, on the other hand, a system is deemed critical-risk, Meta says it will implement unspecified security protections to prevent the system from being exfiltrated and will stop development until the system can be made less dangerous.
Meta's Frontier AI Framework, which the company says will evolve with the changing AI landscape, appears to be a response to criticism of the company's "open" approach to system development. Meta has embraced a strategy of making its AI technology openly available, albeit not open source by the most commonly understood definition, in contrast to companies such as OpenAI that opt to gate their systems behind an API.
For Meta, the open-release approach has proven to be both a blessing and a curse. The company's family of AI models, called Llama, has racked up hundreds of millions of downloads. But Llama has also reportedly been used by at least one U.S. adversary to develop a defense chatbot.
In publishing its Frontier AI Framework, Meta may also be aiming to contrast its open AI strategy with that of Chinese AI company DeepSeek. DeepSeek also makes its systems openly available, but its AI has few safeguards and can easily be steered to generate toxic and harmful outputs.
"[W]e believe that by considering both benefits and risks in making decisions about how to develop and deploy advanced AI," Meta writes in the document, "it is possible to deliver that technology to society in a way that preserves the benefits of that technology to society while also maintaining an appropriate level of risk."