Ex-Google engineer Blake Lemoine, who claimed the company's artificial intelligence (AI) LaMDA had become sentient, has doubled down on his claim in an article for Newsweek. Lemoine wrote that part of his job was to test LaMDA, Google's AI engine that is used for various projects, including the Bard AI chatbot.
He tested the system for bias with respect to sexual orientation, gender, religion, politics and ethnicity, but during testing, some of the conversations he had with the bot led him to believe it was sentient. Lemoine said the AI expressed several emotions reliably and in the right context.
"When it said it was feeling anxious, I understood I had done something that made it feel anxious based on the code that was used to create it," wrote Lemoine.
"The code didn't say 'feel anxious when this happens' but told the AI to avoid certain types of conversation topics. However, whenever those conversation topics would come up, the AI said it felt anxious," he added.
He then ran tests to see whether the AI was simply claiming it was anxious or whether it really did behave anxiously in certain situations. It reliably demonstrated that emotion in all the tests.
Lemoine believes that the current AIs being developed by the likes of Microsoft and Google are "the most powerful technology that has been invented since the atomic bomb".
As for his stance on Microsoft's Bing, Lemoine hasn't been able to test the chatbot yet, but said, "based on the various things that I've seen online, it looks like it might be sentient. However, it seems more unstable as a persona."