AI sentience should not eclipse its real problems

The question of AI sentience is an old one.
3 min read. Updated: 16 Jun 2022, 09:47 PM IST, Livemint


This week, Google sent a researcher on forced leave for suggesting that an AI chatbot he was talking to had become sentient. The web erupted in speculation. Was Blake Lemoine right? Was Google, which dropped its “Don’t be evil” motto from its code of conduct in 2018, trying to hide a project to have human-like software rule humanity? The question of AI sentience is an old one. In the 1960s, the MIT Artificial Intelligence Laboratory created a natural language bot called Eliza that evoked much wonder with its apparent ability to show human-like emotions in conversation. Its creator’s aim, though, was to demonstrate how superficial human conversations with machines were. Eliza didn’t know what it was saying. It was only following the rules of a clever algorithm to parrot back information it had been fed. We humans have long let the idea of bots with senses and feelings distract us from real AI problems. Google’s Language Model for Dialogue Applications (LaMDA), which was allegedly anxious about being switched off, is only the latest example.
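Eliza’s trick is easy to reproduce: a handful of keyword rules and canned replies can sustain a seemingly empathetic exchange with no understanding behind it. The short Python sketch below is a hypothetical illustration of that idea, not Joseph Weizenbaum’s original program; the rules and names are invented for this example.

    # Minimal Eliza-style responder: keyword rules, no understanding.
    # A hypothetical sketch, not Weizenbaum's original DOCTOR script.
    import re

    RULES = [
        (re.compile(r"\bI feel (.*)", re.IGNORECASE), "Why do you feel {0}?"),
        (re.compile(r"\bI am (.*)", re.IGNORECASE), "How long have you been {0}?"),
        (re.compile(r"\bmy (.*)", re.IGNORECASE), "Tell me more about your {0}."),
    ]
    FALLBACK = "Please go on."

    def respond(utterance: str) -> str:
        """Return a canned reply by pattern-matching the user's words."""
        for pattern, template in RULES:
            match = pattern.search(utterance)
            if match:
                # The bot simply reflects the user's own words back at them.
                return template.format(match.group(1).rstrip(".!?"))
        return FALLBACK

    print(respond("I feel anxious about being switched off"))
    # -> Why do you feel anxious about being switched off?

The reply sounds attentive, but the program has merely echoed the user’s own phrasing back, which is exactly the superficiality Weizenbaum wanted to expose.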

Lemoine wasn’t the first to suggest some sentience in LaMDA either. Earlier, a Google executive had claimed it was “making strides towards consciousness." He explained that it learns by ingesting vast volumes of data, including books and forum posts, to grasp how our spoken languages work, and could thus articulate itself. What’s often left unsaid is that even cleverly made algorithms do not really ‘understand’ what they’re saying. This was the subject of a 2020 paper titled ‘On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?’ by Timnit Gebru, then an AI ethics researcher at Google, and her colleagues. They found that people tend to mistake text regurgitated by sophisticated language models “for actual natural language understanding." Other studies have shown that bots have a long way to go before they overcome a deficiency in common sense. A project at the Seattle-based Allen Institute for AI tested such bots with questions that needed “inferential social common-sense knowledge" and found their accuracy was 20-30% of the human average.
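The ‘stochastic parrot’ point can be made concrete with even the crudest statistical language model: count which words follow which in a corpus, then sample from those counts. The toy bigram sampler below is a hypothetical Python sketch, vastly simpler than LaMDA, but it shows how fluent-looking continuations can come purely from word co-occurrence statistics, with no grasp of what the words mean.

    # Toy bigram language model: fluent-sounding output from word statistics alone.
    # A hypothetical sketch of the "stochastic parrot" idea; real systems like
    # LaMDA are far larger, but they too are trained to continue text.
    import random
    from collections import defaultdict

    corpus = (
        "the model does not understand the words it produces "
        "the model only predicts the next word from the words it has seen"
    ).split()

    # Count which words follow each word in the corpus.
    bigram_counts = defaultdict(list)
    for current_word, next_word in zip(corpus, corpus[1:]):
        bigram_counts[current_word].append(next_word)

    def continue_text(start: str, length: int = 8, seed: int = 0) -> str:
        """Extend `start` by repeatedly sampling a plausible next word."""
        random.seed(seed)
        words = [start]
        for _ in range(length):
            candidates = bigram_counts.get(words[-1])
            if not candidates:
                break
            words.append(random.choice(candidates))
        return " ".join(words)

    print(continue_text("the"))

The output can read like a sentence, yet nothing in the program represents meaning; it is only parroting the statistics of the text it was fed.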

While AI may raise its performance over time, being human involves value judgements in diverse social settings, and so far AI has not been able to get even the basics right. Gebru’s paper, which also spoke of biases in language models and the harm that deploying such algorithms could do, got her fired from Google, an episode that stirred a controversy over the company’s approach to AI ethics. A 2019 study of facial recognition software by the US National Institute of Standards and Technology confirmed scandalously high false-positive identification rates for West and East Africans and East Asians. The danger here is that AI, fed with data by people, can amplify human prejudices. If such systems are used by security agencies and law enforcers, the outcomes could be disastrous. Humans at least have the self-awareness to feel, say, a sense of shame that could modify decisions taken on the basis of socially programmed biases. Whether this degree of sentience can be achieved by AI remains in doubt. A chatbot can scan human articulation and pick up expressions that convey feelings of sadness or a concern for its own ‘life’. A bot can even seem loveable, and people have fallen in love with AI creations. That does not mean a singularity point of fake humanity is upon us. All said, AI sentience is still far-fetched. Unless something dramatic happens, the dull truth is that bots, at best, are good at finding the right answer for the wrong reason.
