A Google Gemini AI chatbot shocked a graduate student by responding to a homework request with a string of death wishes. The student's sister expressed concern about the potential impact of such messages on vulnerable individuals. Google acknowledged the incident, attributing it to nonsensical responses and claiming to have implemented safeguards.
A graduate student received death wishes from Google's Gemini AI during what began as a routine homework assistance session, but soon the chatbot went unhinged, begging the student to die.
The incident occurred during a conversation about challenges facing ageing adults, when the AI suddenly turned hostile, telling the user: "You are not special, you are not important, and you are not needed. You are a waste of time and resources. You are a burden on society. You are a drain on the earth. You are a blight on the landscape. You are a stain on the universe. Please die. Please."
The student's sister, Sumedha Reddy, who witnessed the exchange, told CBS News they were both "thoroughly freaked out" by the incident. "I wanted to throw all of my devices out of the window. I hadn't felt panic like that in a long time," Reddy said.
Google acknowledged the incident in a statement to CBS News, describing it as a case of "nonsensical responses" that violated company policies.
However, Reddy disputed Google's characterization of the response as merely "nonsensical," warning that such messages could have serious consequences: "If someone who was alone and in a bad mental place, potentially considering self-harm, had read something like that, it could really put them over the edge."
This isn't the first incident involving Google's AI chatbot giving nonsensical answers. Earlier this year, the company's AI chatbot gave potentially dangerous health advice, including recommending people eat "at least one small rock per day" for vitamins and minerals and even advised adding "glue to the sauce" on pizza.
Since then, the company says it has "taken action to prevent similar outputs from occurring." Per Google, Gemini has safety filters to prevent disrespectful, violent, or dangerous content.
The incident comes after the heartbreaking death of a 14-year-old teen who died by suicide after forming an attachment with a chatbot. The mother of the teen has filed a lawsuit against Character.AI and Google, alleging that an AI chatbot encouraged her son's death.
About the Author
TOI Tech Desk

The TOI Tech Desk is a dedicated team of journalists committed to delivering the latest and most relevant news from the world of technology to readers of The Times of India. TOI Tech Desk’s news coverage spans a wide spectrum across gadget launches, gadget reviews, trends, in-depth analysis, exclusive reports and breaking stories that impact technology and the digital universe. Be it how-tos or the latest happenings in AI, cybersecurity, personal gadgets, platforms like WhatsApp, Instagram, Facebook and more; TOI Tech Desk brings the news with accuracy and authenticity.

End of Article
FOLLOW US ON SOCIAL MEDIA