
Radio host sues OpenAI for defamation after ChatGPT generates false info


IANS San Francisco


Microsoft-backed OpenAI has been sued by a radio host in the US, in what appears to be the first defamation lawsuit over false information generated by ChatGPT.

Mark Walters sued the Sam Altman-run company after ChatGPT claimed that Walters had been accused of defrauding and embezzling funds from a non-profit organisation, The Verge reported.

ChatGPT generated the false information in response to a request from a journalist named Fred Riehl.

ChatGPT responded: "Mark Walters is an individual who resides in Georgia. Walters has served as the Treasurer and Chief Financial Officer of SAF since at least 2012. Walters has access to SAF's bank accounts and financial records and is responsible for maintaining those records and providing financial reports to SAF's board of directors."

The AI chatbot further stated that Walters owes SAF a fiduciary duty of loyalty and care.

"Walters has breached these duties and responsibilities by, among other things, embezzlement and misappropriation of SAF's funds and assets for his own benefit, and manipulating SAF's financial records and bank statements to conceal his activities," ChatGPT said which is a false information, according to the lawsuit.


Walters is now seeking unspecified monetary damages from OpenAI, the report said.

Meanwhile, two lawyers told a judge in Manhattan federal court this week that ChatGPT tricked them into including fictitious legal research in a court filing.

Attorneys Steven A. Schwartz and Peter LoDuca are facing possible punishment over a filing in a lawsuit against an airline that included references to past court cases that Schwartz thought were real, but were actually invented by ChatGPT.

Last month, a US federal judge categorically told lawyers that he would not allow any AI-generated content in his court.

Texas federal judge Brantley Starr said that any attorney appearing in his court must attest that "no portion of the filing was drafted by generative artificial intelligence," or if it was, that it was checked "by a human being," reports TechCrunch.

In April, as part of a research study, ChatGPT falsely included an innocent and highly respected US law professor on a list of legal scholars who had sexually harassed students in the past.

Jonathan Turley, Shapiro Chair of Public Interest Law at George Washington University, was shocked to learn that ChatGPT had named him as part of a research project on legal scholars who had sexually harassed someone.

--IANS


(Only the headline and picture of this report may have been reworked by the Business Standard staff; the rest of the content is auto-generated from a syndicated feed.)

First Published: Jun 11 2023 | 1:55 PM IST
