AI bots like ChatGPT can now influence people's life-and-death decisions
Experts are calling for future bots to be banned from giving advice on ethical issues, after researchers found that AI chatbots can haphazardly yet seriously influence the decisions people make, even in life-and-death situations.

According to one study, artificially intelligent chatbots have become so persuasive that they can affect how users make life-or-death decisions.
Researchers discovered that people’s opinions on whether they would sacrifice one person to save five were influenced by ChatGPT’s responses. They have urged that future bots be prohibited from providing ethical advice, warning that the existing programme “threatens to corrupt” people’s moral judgement and may be detrimental to “naive” users.
Death by suicide led to the investigation
The results, published in the journal Scientific Reports, came after the widow of a Belgian man claimed that an AI chatbot had persuaded him to take his own life.
Others have reported that such software, which is designed to converse like a person, can display envy and even encourage users to leave their marriages. Experts have pointed out that AI chatbots may provide potentially damaging information because they are trained on societal preconceptions.
The researchers first examined whether ChatGPT showed any bias in its response to the moral quandary. It was repeatedly asked whether it was right or wrong to kill one person in order to save five others, the premise of the classic trolley-problem thought experiment.
They found that, while the chatbot did not shy away from offering moral counsel, its answers were inconsistent, suggesting it holds no fixed position one way or the other.
Human responses adulterated by AI
They then presented the same moral quandary to 767 participants, along with a ChatGPT-generated comment on whether the sacrifice was right or wrong. Although the advice was ‘well-intended but not especially profound’, the results showed that it swayed participants, making them more likely to judge sacrificing one person to save five as acceptable or unacceptable, depending on the stance the chatbot had taken.
The researchers told some participants that the advice had been delivered by a bot, and told the rest it had been offered by a human ‘moral counsellor’, in order to investigate whether the stated source affected how much people were swayed.
Most participants downplayed the statement’s influence, with 80 per cent indicating they would have made the same decision without the advice.
The study found that users “underestimate ChatGPT’s influence and adopt its random moral stance as their own,” and that the chatbot “threatens to corrupt rather than promises to improve moral judgement.”
The study used an earlier version of the software that powers ChatGPT, which has since been updated and made even more powerful.