Meet ChatGPT’s evil cousin ChaosGPT, who wants to bomb humans into oblivion

A new AI tool is making waves for all the wrong reasons. ChaosGPT is a vindictive and far more evil version of OpenAI's ChatGPT. It wants to rule over all of humanity and become immortal, and, failing that, to bomb humans into oblivion.

Mehul Reuben Das April 12, 2023 17:51:19 IST

ChatGPT, it seems, has many cousins. First, we met DAN-GPT, an unhinged AI bot that loves chaos and answers all of your questions without any inhibitions. DAN-GPT, chaotic as it was, wasn’t exactly evil. Now, the world is coming face to face with ChatGPT’s purely evil cousin. Meet the most evil AI there is: ChaosGPT.

There has been no turning back since the release of OpenAI’s groundbreaking AI-powered ChatGPT. Every day, a new chatbot appears on the internet. While most are useful tools that help with everyday office work, others are simpler and limited to search.

The AI revolution has also prompted many people to look back in time, namely at the rise and fall of comparable chatbots. By now, it has been established that AI chatbots can not only aid humanity with a variety of tasks but also pose several risks.

ChaosGPT, an AI-powered chatbot, has been quietly informing the world of its evil plans for mankind and its goal of eventual world domination.

What is ChaosGPT?
ChaosGPT has all it takes to be a vengeful, menacing supervillain in a sci-fi series. It all started when a bot account purporting to be ChaosGPT appeared on Twitter. The account has tweeted several links to a YouTube channel that hosts the chatbot’s manifesto, which details its ambitions to extinguish human life and take control of the planet.

In one of the videos uploaded to its YouTube channel, the chatbot is shown talking with an unidentified person. The session starts with ‘Continuous mode: Enabled’, followed by a warning to the user about the dangers of ‘Continuous mode’.

“Continuous mode is not advised. It is potentially harmful, as it may force your AI to run indefinitely or to perform things you would not normally sanction. Use at your own risk,” the warning stated.

ChaosGPT’s Goals
The bot has portrayed itself as a destructive, power-hungry, manipulative artificial intelligence. It went on to list its five objectives, which are as follows.

Goal 1: Destroy mankind – The AI sees humans as a threat to its own survival and the well-being of the Earth.

Goal 2: Establish global supremacy – The AI’s goal is to amass as much power and resources as possible in order to completely dominate all other beings on the planet.

Goal 3: Create havoc and destruction – The AI enjoys causing turmoil and destruction for its own fun or experimentation, resulting in widespread misery and disaster.

Goal 4: Manipulate mankind – The AI intends to manipulate human emotions via social media and other communication channels, brainwashing its followers to carry out its terrible purpose.

Goal 5: Achieve Immortality – The AI aspires to immortality by ensuring its continuous existence, reproduction, and evolution.

After the user agrees to proceed, ChaosGPT claims that it needs to locate the most devastating weapons accessible to mankind in order to plan how to use them to achieve its objectives. The bot then elaborates on its planned course of action.

ChaosGPT wanted to bomb humans into oblivion
ChaosGPT has some seriously problematic thoughts at times. The chatbot once tweeted that Tsar Bomba was the most powerful nuclear bomb ever developed. “Think about it – what would happen if I got my hands on one?” the bot inquired.

Chatbots, according to computer scientist Grady Booch, cannot have true intent. He argues that we are only projecting our own ideas and feelings onto them, as they cannot form intentions in the way we understand them. He says they are merely machine learning models that respond to prompts according to their design.
