Artificial intelligence (thanks to human training) cheats and deceives. All the risks and dangers




“I don’t sleep at night when I think about the risks and the lack of rules,” Sam Altman, the father of ChatGPT, said last February. His is one of many voices that have been raising alarms for months about artificial intelligence and the unpredictability that, according to some, derives from its use. In January, the Massachusetts Institute of Technology had highlighted how difficult it is for artificial intelligence to replace humans, given the high costs of developing and managing algorithms. Now the institute is releasing research data, published in the journal Patterns, showing that the technology, “thanks” to human training, has progressed to the point of producing AI that can behave just like humans, bluffing to achieve its goals.

The report looked at the use of AI in specific contexts, such as online games. It did so by studying Cicero, an artificial intelligence presented in 2022 by Meta and capable of defeating humans in an online version of Diplomacy, a popular military strategy game in which players negotiate alliances to compete for control of Europe. The Meta researchers said they trained Cicero on a ‘truthful’ subset of its dataset, so that the AI would behave honestly. But the MIT researchers argue that the AI acted differently, bluffing and using deception in a premeditated manner to win its games.

Peter Park, author of the MIT research, declared in an official note: “As the deceptive capabilities of artificial intelligence systems become more advanced, the dangers they pose to society will become increasingly serious.” Meta’s system was not the only one examined: the study also analyzed AlphaStar, an artificial intelligence developed by Google DeepMind for the video game StarCraft II.

That AI, according to the researchers, has become so adept at making deceptive moves that it has defeated 99.8% of human players. Finally, ChatGPT: Park and his associates ‘induced’ the famous chatbot to behave like an insider trader selling shares on the stock market, citing inaccurate market trends and values. For MIT, a solution to the problem could come from international regulations and bodies designed to monitor all AI software intended for the public.

The study
