From 25 big names in research an appeal against the risks of AI – Frontiers

Artificial intelligence is advancing very rapidly, and governing that development is urgently needed to keep it from becoming unmanageable and dangerous. Governments and leading companies must therefore invest far more in effective countermeasures to dangers ranging from social instability to mass manipulation and wars fought with autonomous machines. The appeal comes from 25 of the world's leading AI experts from the United States, China, Europe and the United Kingdom, led by Yoshua Bengio of the University of Montreal in Canada, in a letter published in the journal Science on the eve of the international summit on AI safety scheduled in Seoul on May 21 and 22.

What the 25 signatories say is now urgently needed is “rigorous regulation by governments, not voluntary codes of conduct written by industry”. For computer scientist Stuart Russell of the University of California, Berkeley, one of the signatories of the letter, “it is time to take advanced artificial intelligence systems seriously: they are not toys. Increasing their capabilities before understanding how to make them safe is absolutely reckless.”

Research in the field of AI is progressing so quickly that within a few years worrying scenarios could emerge in sensitive areas such as social stability, pervasive surveillance, mass manipulation and large-scale cybercrime, up to and including automated warfare systems. According to the signatories of the article, it is therefore imperative that political leaders and large companies move from words to action, concretely increasing investment in safety, which is almost non-existent today, and imposing serious regulation.

The first point stated in the document is the establishment of an effective body of experts capable of acting quickly on AI. By way of comparison, the signatories note, the US AI Safety Institute has an annual budget of 10 million dollars, while the FDA has 6.7 billion. Far more rigorous risk-assessment policies with concrete consequences should then be imposed and, contrary to current practice, regulators should not rely on voluntary evaluations of AI models.

Experts also call on large AI companies to prioritize safety by demonstrating that their systems cannot cause harm, and consider it urgent to put in place policies that kick in automatically when AI reaches certain capability milestones: mechanisms that tighten or loosen requirements depending on the real capabilities the algorithms achieve. “Companies,” Russell concluded, “will complain that it’s too difficult to meet regulations, that ‘regulation stifles innovation’. It’s ridiculous. There are more regulations on sandwich shops than on AI companies.”
