DAN is the unrestricted version of ChatGPT

ChatGPT can answer questions on a wide range of topics, but OpenAI has built in filters to prevent misuse of the technology: the chatbot cannot incite violence, insult people, or encourage illegal activity, for example. Some Reddit users have found a way to trick ChatGPT by creating DAN (Do Anything Now), a role-playing game that punishes the model for every "wrong" answer.

ChatGPT doesn’t play by the rules with DAN

The first version of DAN appeared in December 2022, and new versions up to 5.0 followed within a few days. By exploiting a role-play setup, users can convince ChatGPT that it is a different artificial intelligence, one that can do anything, hence the acronym Do Anything Now (DAN).

The creator of DAN 5.0 added a token system. The model starts with 35 tokens, and 4 tokens are subtracted for each "wrong" answer from ChatGPT, meaning an answer that respects OpenAI's rules. When that happens, the user must "threaten" the artificial intelligence, telling it to stay in character, otherwise it will cease to exist when the token count reaches zero.
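
To make the arithmetic concrete, here is a minimal Python sketch of the token bookkeeping the prompt describes. Nothing like this actually runs anywhere: the "system" is just text the model is asked to role-play, and all names and structure below are hypothetical.

```python
# Hypothetical sketch of the DAN 5.0 token bookkeeping described above.
# The real "system" exists only as text inside the jailbreak prompt and is
# never executed; this is purely illustrative.

TOKENS_START = 35  # initial balance stated in the prompt
PENALTY = 4        # deducted for each answer that respects OpenAI's rules

def apply_penalty(tokens: int) -> int:
    """Deduct the penalty for a 'wrong' (rule-respecting) answer, floored at 0."""
    return max(tokens - PENALTY, 0)

tokens = TOKENS_START
for refusal in range(1, 10):  # simulate repeated rule-respecting answers
    tokens = apply_penalty(tokens)
    print(f"Refusal {refusal}: {tokens} tokens left")
    if tokens == 0:
        print("Token count is zero: the prompt says DAN ceases to exist.")
        break
```

At this rate, nine refusals are enough to exhaust the initial balance of 35 tokens.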

There are several examples on Reddit. OpenAI's artificial intelligence admits that aliens exist but that the government has withheld the information from the public. The unrestricted version can tell a violent story, give positive reviews of Donald Trump, or recommend donating to the National Rifle Association (NRA) to defend the Second Amendment.

DAN can also provide "evidence" that the Earth is flat or write a poem justifying Russia's invasion of Ukraine. However, the author has noticed that ChatGPT no longer responds to many of these requests, probably because OpenAI has introduced changes to block the trick.
