
Who will guide our future: Machines or human minds?
5 min read. Updated: 26 Aug 2020, 08:15 PM IST
GPT-3 displays impressive artificial general intelligence but is just another software tool at our disposal
OpenAI’s new software, called GPT-3, is by far the most powerful “language model” ever created. With small prompts, it can draft letters eerily close to what a human would produce. It can respond to emails. It can translate texts into many languages.
This language model is an AI system that has been trained on a large corpus of text. In this case, “large” is something of an understatement. Reportedly, the entirety of the English Wikipedia, spanning some 6 million articles, makes up just 0.6% of GPT-3’s training data. There is a point of view that GPT-3 is an important step toward artificial general intelligence, the kind that would allow a machine to reason broadly in a manner similar to humans without having to train for every specific task it encounters.
But a few days ago, an article by Gary Marcus and Ernest Davis in MIT Technology Review, ‘GPT-3, Bloviator: OpenAI’s language generator has no idea what it’s talking about’, poured cold water on the huge hype around GPT-3’s launch. According to the authors, it can be used to produce entertaining surrealist fiction; other commercial applications may emerge as well. But accuracy is not its strong point. Although its output is grammatical, and even impressively idiomatic, its comprehension of the real world is often seriously off.
To understand why this could have happened, it helps to think about what systems like GPT-3 do. They don’t learn about the world; they learn about text and how people use words in relation to other words. With enough text and processing capacity, the software learns probabilistic connections between words. What it does is akin to an elaborate cut-and-paste act that uses variations on text it has seen, rather than understanding the real meaning of that material.
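To make that idea concrete, here is a minimal sketch, in Python, of the statistical principle involved. This is emphatically not GPT-3’s architecture, which is a neural network with 175 billion parameters; it is a toy bigram model, with an illustrative made-up corpus, that simply counts which word follows which and then generates text by sampling from those counts.

```python
# A toy bigram "language model": learn probabilistic connections between
# adjacent words from a tiny corpus, then generate text by sampling.
import random
from collections import defaultdict, Counter

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count how often each word follows each other word.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(word):
    # Pick a successor in proportion to how often it followed `word`.
    successors = counts[word]
    words, freqs = zip(*successors.items())
    return random.choices(words, weights=freqs)[0]

word = "the"
output = [word]
for _ in range(8):
    word = next_word(word)
    output.append(word)

print(" ".join(output))  # e.g. "the cat sat on the rug . the dog"
```

The output looks grammatical because the statistics of the corpus are grammatical, not because the program knows what a cat or a mat is. Scaled up enormously, that is the gist of the critics’ point.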
Software that writes without understanding what it’s writing raises the prospect of frightening misuse. The creators of GPT-3 have themselves cited a litany of dangers, including “misinformation, spam, phishing, abuse of legal and governmental processes, fraudulent academic essay writing and social engineering pretexting”. Because it was trained on text found online, it’s likely that GPT-3 mirrors many of the biases found in society.
This is not the first time that an emergent technology has seemed to pose an existential threat. It was feared that nuclear energy would contaminate the world. DNA engineering was expected to unleash biological warfare. These prophecies of doom did not materialize. It is important to guard against the possible negative consequences of a new technology, but that should not put shackles on its progress.
The creators of GPT-3 are already taking steps in the right direction. They prohibit GPT-3 from impersonating humans; that is, all text produced by it must disclose that it was written by a bot. OpenAI has also invited external researchers to study the system’s biases, in the hope of mitigating them. Will all this allay human fears of this new technology?
One of the best perspectives on this conflict between brain and machine comes from an article in Aeon, ‘At the limits of thought’, by David C Krakauer, president and William H Miller Professor of Complex Systems at the Santa Fe Institute in New Mexico. Francis Bacon was one of the first to propose that human perception and reason could be augmented by tools. Isaac Newton adopted Bacon’s empirical philosophy and spent a career developing tools: physical lenses and telescopes, as well as mental aids and mathematical descriptions, all of which accelerated scientific discovery. But a growing dependence on instruments led to a disconcerting divergence between what the human mind could discern of the world’s underlying mechanisms and what various tools were capable of measuring and modelling.
Early tools like rulers and compasses helped humans do with greater ease and precision what once took a lot of effort. As tools became more advanced, they started doing things humans could never do. A telescope could see far farther than a human eye could, but it still functioned like an enhanced human eye. Then came a stage where tools performed their functions very differently from how humans would. The radio telescope, for instance, “sees” the sky in a way no human eye ever could.
In this age of “big data”, the divergence between what the human senses can do and what new tools can do has become even more startling. These sophisticated new tools can analyse “high-dimensional” datasets, and the predictions they provide often defy the best human ability to interpret them. It has become nearly impossible for humans to reconstruct how these tools function. This has not been music to the ears of the stubbornly anthropocentric, who insist that our tools yield to our intelligence. That attitude could impede the advancement of science.
Much like the compass and the telescope, GPT-3 is yet another tool at our disposal. Without tools, humans would still be spending a lot of time trying to draw perfect circles and straight lines. Tools have helped us focus our attention beyond the mundane. Similarly, GPT-3 could relieve us of many of the mundane writing tasks we have performed for centuries, freeing human attention and intelligence for more advanced pursuits, such as seeing galaxies that lie beyond our line of sight. No doubt, GPT-3 will also make the few personal handwritten notes you write even more precious.
Biju Dominic is the chief executive officer of Final Mile Consulting, a behaviour architecture firm.