AI has a lot of terms. We've got a glossary for what you need to know

GPU? TPU? LLM? All the important AI vocabulary to know

The OpenAI logo is displayed at the Mobile World Congress in Barcelona, Spain.
Image: Joan Cros/NurPhoto (Getty Images)

When people unfamiliar with AI envision artificial intelligence, they may imagine Will Smith’s blockbuster I, Robot, the sci-fi thriller Ex Machina, or the Disney movie Smart House: nightmarish scenarios in which intelligent robots take over, to the doom of their human counterparts.


Today’s generative AI technologies aren’t quite all-powerful yet. Sure, they may be capable of sowing disinformation to disrupt elections or leaking trade secrets. But the tech is still in its early stages, and chatbots are still making big mistakes.

Still, the newness of the technology is bringing new terms into play. What is a semiconductor, anyway? How is generative AI different from all the other kinds of artificial intelligence? And should you really know the differences between a GPU, a CPU, and a TPU?

If you’re looking to keep up with the new jargon the sector is slinging around, Quartz has your guide to its core terms.


What is Generative AI?

Nvidia CEO Jensen Huang.
Photo: Justin Sullivan (Getty Images)

Let’s start with the basics for a refresher. Generative artificial intelligence is a category of AI that uses data to create original content. Classic AI, by contrast, could only make predictions from its data inputs; it couldn’t produce brand-new, unique answers. Generative AI can, thanks to “deep learning,” a form of machine learning that uses artificial neural networks (software systems loosely modeled on the human brain) so computers can perform human-like analysis.
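To make the idea of an artificial neural network concrete, here is a minimal sketch in Python using the PyTorch library (the layer sizes are arbitrary, illustrative choices, nothing like a production model):

```python
import torch
from torch import nn

# A tiny artificial neural network: layers of simple units ("neurons")
# whose connection weights get adjusted as the model trains on data.
model = nn.Sequential(
    nn.Linear(10, 32),  # input layer: 10 numbers in, 32 units out
    nn.ReLU(),          # nonlinearity, loosely analogous to a neuron firing
    nn.Linear(32, 1),   # output layer: a single prediction
)

x = torch.randn(4, 10)  # a batch of 4 made-up example inputs
print(model(x).shape)   # torch.Size([4, 1]): one prediction per input
```

The deep learning models behind generative AI stack many millions of such units, but the basic idea is the same.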


Generative AI isn’t grabbing answers out of thin air, though. It generates answers based on the data it’s trained on, which can include text, video, audio, and lines of code. Imagine waking up from a coma, blindfolded, and all you can remember is 10 Wikipedia articles; every conversation you have about what you know draws on those 10 articles. Generative AI is kind of like that, except it uses millions of such articles and a whole lot more.


What is a chatbot?

Image: CFOTO/Future (Getty Images)

AI chatbots are computer programs that hold human-like conversations with users, giving unique, original answers to their queries. Chatbots were popularized by OpenAI’s ChatGPT, and since then a bunch more have debuted, with Google’s Gemini, Microsoft’s Copilot, and Salesforce’s Einstein leading the pack.


Chatbots don’t just generate text responses. They can also build websites, create data visualizations, help with coding, make images, and analyze documents. To be sure, AI chatbots aren’t foolproof yet; they’ve made plenty of mistakes already. But as AI technology rapidly advances, so will the quality of these chatbots.
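Under the hood, most chatbot products are reached through an API. Here is a minimal sketch, assuming OpenAI’s official Python client and an OPENAI_API_KEY set in the environment (the model name is just an illustrative choice):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Send one user message and print the chatbot's generated reply.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any available chat model works here
    messages=[{"role": "user", "content": "Explain a GPU in one sentence."}],
)
print(response.choices[0].message.content)
```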


What is a Large Language Model (LLM)?

Google Gemini.
Photo: NurPhoto/Contributor (Getty Images)

Large language models (LLMs) are a type of generative artificial intelligence. They are trained on vast amounts of text, including news articles and e-books, to understand and generate content such as natural-language text. Basically, they are trained on so much text that they can predict what word comes next. Take this explanation from Google:

“If you started to type the phrase, ‘Mary kicked a…,’ a language model trained on enough data could predict, ‘Mary kicked a ball.’ Without enough training, it may only come up with ‘a round object’ or only its color, ‘yellow.’” — Google’s explainer


Popular chatbots like OpenAI’s ChatGPT and Google’s Gemini, which can summarize and translate text among other capabilities, are built on LLMs.
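You can watch that next-word prediction happen in a few lines of Python. This sketch uses Hugging Face’s transformers library and the small open GPT-2 model, choices made purely for illustration:

```python
from transformers import pipeline

# Load a small open language model that continues a prompt
# one predicted token at a time.
generator = pipeline("text-generation", model="gpt2")

result = generator("Mary kicked a", max_new_tokens=5)
print(result[0]["generated_text"])  # e.g. "Mary kicked a ball ..."
```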


What is a semiconductor?

A semiconductor is also called a microchip.
Photo: Wong Yu Liang (Getty Images)

No, it’s not an 18-wheeler driver. Semiconductors, also known simply as chips, sit in the electrical circuits of devices such as phones and computers. Electronic devices wouldn’t exist without semiconductors, which are made from pure elements like silicon or compounds like gallium arsenide. The “semi” in the name comes from the fact that the material conducts electricity better than an insulator but worse than a pure conductor like copper.

Advertisement

The world’s largest semiconductor foundry, Taiwan Semiconductor Manufacturing Company (TSMC), makes an estimated 90% of the world’s advanced chips and counts top chip designers Nvidia and Advanced Micro Devices (AMD) as customers.

Even though the semiconductor was invented in the U.S., the country now produces only about 10% of the world’s chips, and none of the most advanced ones needed for large AI models. President Joe Biden signed the CHIPS and Science Act in 2022 to bring chipmaking back to the U.S., and his administration has already invested billions in semiconductor companies, including Intel and TSMC, to build factories throughout the country. Part of that effort is about countering China’s advances in chipmaking and AI development.


What are GPUs & CPUs?

Central processing unit (CPU) on a motherboard.
Photo: Narumon Bowonkitwanchai (Getty Images)

A GPU, or graphics processing unit, is an advanced chip (or semiconductor) that powers the large language models behind AI chatbots like ChatGPT. GPUs were traditionally used to render video games with higher-quality visuals.


Then a Ukrainian-Canadian computer scientist, Alex Krizhevsky, showed that a GPU could train deep learning models far faster than a CPU, the central processing unit that serves as a computer’s main processor.

CPUs are the “brain” of a computer, carrying out the instructions that make it work. A CPU is a processor: it reads and interprets software instructions to control the computer’s functions. A GPU, on the other hand, is an accelerator, a piece of hardware designed to speed up a specific kind of work for the processor.
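That division of labor shows up directly in code. Here is a minimal sketch in Python with the PyTorch library, assuming an Nvidia GPU is reachable via CUDA (otherwise it falls back to the CPU):

```python
import torch

# Pick the GPU accelerator if one is present; otherwise use the CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"

# The same matrix multiplication runs on either processor. On big
# matrices, the GPU's thousands of parallel cores finish far sooner.
a = torch.randn(4096, 4096, device=device)
b = torch.randn(4096, 4096, device=device)
c = a @ b
print(device, c.shape)  # e.g. "cuda torch.Size([4096, 4096])"
```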

Nvidia is the leading GPU designer, and its H100 and H200 chips are used in major tech companies’ data centers to power AI software. Other companies are aiming to compete with Nvidia’s accelerators, including Intel with its Gaudi 3 and Microsoft with its Azure Maia 100.


What is a TPU?

A Google video breaks down the ins and outs of its TPU.
Screenshot: Google

TPU stands for “tensor processing unit.” Google’s chips, unlike those of Microsoft and Nvidia, are TPUs — custom-designed chips made specifically for training large AI models (whereas GPUs were initially made for gaming, not AI).


While CPUs are general-purpose processors and GPUs are add-on processors that handle demanding tasks, TPUs are accelerators custom-built to run AI workloads, which makes them all the more powerful for that job.
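For a feel of how software sees a TPU, here is a minimal sketch in Python using Google’s JAX library, assuming a machine with a TPU runtime attached (on ordinary hardware the same code simply lists CPU or GPU devices):

```python
import jax
import jax.numpy as jnp

# List the accelerators JAX can see; on a Cloud TPU VM this prints
# TpuDevice entries, otherwise CPU (or GPU) devices.
print(jax.devices())

# Compiled work is dispatched to the default device, the TPU if present.
x = jnp.ones((1024, 1024))
y = jax.jit(lambda m: m @ m)(x)  # a matrix multiply, the bread and butter of AI chips
print(y.shape)  # (1024, 1024)
```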


What is a hallucination?

OpenAI’s ChatGPT
Illustration: Leon Neal (Getty Images)

As mentioned before, AI chatbots are capable of many tasks, but they also slip up a lot. When LLMs like ChatGPT make up fake or nonsensical information, that’s called a hallucination.


Chatbots “hallucinate” when they lack the training data needed to answer a question but still generate a response that looks like fact. Hallucinations can be caused by factors such as inaccurate or biased training data, as well as overfitting, which is when a model learns its training data so closely that it can’t make reliable predictions about new data.

Hallucinations are currently one of the biggest issues with generative AI models, and they’re not exactly easy to solve. Because AI models are trained on massive datasets, it can be difficult to pinpoint specific problems in the data. Sometimes the training data is inaccurate to begin with, because it comes from places like Reddit. And although AI models are trained not to answer questions they don’t know the answer to, they sometimes fail to refuse and generate inaccurate answers instead.
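Overfitting is easy to see in miniature. In this illustrative sketch in Python (using the scikit-learn library, with labels that are pure random noise), a model aces its own training set but does no better than a coin flip on fresh data:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(100, 5)), rng.integers(0, 2, size=100)
X_test, y_test = rng.normal(size=(100, 5)), rng.integers(0, 2, size=100)

# The tree memorizes the noise in the training set...
model = DecisionTreeClassifier().fit(X_train, y_train)
print(model.score(X_train, y_train))  # 1.0: perfect recall of training data
print(model.score(X_test, y_test))    # ~0.5: useless on data it hasn't seen
```

An overfit language model behaves analogously: fluent on patterns it memorized, unreliable the moment a question falls outside them.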
