Anthropic raises the alarm: AI will soon be self-replicating


Dario Amodei, CEO of Anthropic, has raised a rather disturbing alarm about the potential uncontrolled development of artificial intelligence: within a few years, perhaps even as soon as next year, AI could gain the ability to be “self-sustaining” and “self-replicating”.

Amodei made his remarks during a recent interview on Ezra Klein’s New York Times podcast, also taking the opportunity to recall the approach Anthropic itself adopts in developing artificial intelligence models, which the company laid out at the end of last year.

According to Amodei, the world is currently at level 2 of this safety-level classification, but level 4 – the one in which the risk of catastrophic use and behavior, and the potential autonomy of the models, is particularly high – could be reached between 2025 and 2028.

Level 4 demands particular attention to the concrete use (and possible abuse) of AI models by states, which is obviously a far more difficult and complex problem than preventing abuse by individuals. According to Amodei, this is a particularly worrying prospect, as it could allow states such as North Korea, China, or Russia to significantly enhance their offensive capabilities through AI, with serious geopolitical implications.

But beyond potentially catastrophic uses of AI, level 4 is also the scenario in which artificial intelligence could gain the ability to replicate and survive “in the wild”, acquiring an autonomy that would let it escape human control. This is anything but a reassuring prospect, and it calls for a responsible regulatory and operational approach grounded in careful risk analysis. “I’m not talking about something that will happen in 50 years, but about the near future,” Amodei said, underlining the urgency of adequately addressing the development of AI.

Although such statements evoke the darkest scenarios of science fiction (did anyone say Skynet?) and, for that very reason, are often dismissed as exaggerated, it should be kept in mind that Amodei is a prominent figure in the artificial intelligence landscape: he worked directly on the development of GPT-3 at OpenAI and later left, along with other former employees, to found Anthropic with the goal of “ensuring that transformative artificial intelligence helps people and society thrive.”

It is clear that the development of a tool with such potential cannot be left to the ethics and sense of responsibility of individual actors in the AI landscape; it calls for supranational political intervention, similar – as OpenAI CEO Sam Altman already suggested last year – to the way nuclear energy research and development is managed today.
