Future Security: AI industry leaders will disable AI if it gets out of control

Major IT companies are committed to responsible technology development.

Sixteen of the world's leading AI companies, including Google, Microsoft, IBM and OpenAI, have signed commitments to deactivate their technologies if they prove dangerous. The signing took place at the AI Safety Summit in South Korea.

During the summit, new commitments were made on the safety of advanced AI technologies. Participating companies agreed to publish their own methods for assessing the risks posed by their AI models, to define acceptable risk levels, and to take action if those levels are exceeded. If risks cannot be reduced below the established threshold, the companies undertake not to develop or deploy the affected models and systems.

While the commitments sound promising, the details have yet to be worked out. They will be discussed at the AI Action Summit, to be held in early 2025.

The companies that signed the document in Seoul also pledged to:
  • conduct testing of their advanced AI models;
  • share information;
  • invest in cybersecurity and internal threat prevention to protect unreleased technologies;
  • encourage third-party researchers to discover vulnerabilities;
  • label AI-generated content;
  • prioritize research on social risks associated with AI.

The Seoul Declaration was also adopted during the summit. The document highlights the importance of interoperability between AI governance frameworks, grounded in a risk-based approach, in order to maximize the benefits of AI and address the wide range of risks it poses. This is essential for the safe, reliable, and trustworthy design, development, deployment, and use of AI.

Session participants included government representatives from the G7 countries, Singapore and Australia, as well as the UN, the OECD, the EU, and industry representatives.
 