
4 ways AI is contributing to bias in the workplace

Researchers have found generative AI tools hold systemic racial biases that affect professionals.
Written by Maria Diaz, Staff Writer
ChatGPT on a MacBook (Image: Maria Diaz/ZDNET)

There's no question that artificial intelligence (AI) tools, particularly generative AI, are more popular than ever: they've grown more capable and have never been more accessible.

You'd be hard-pressed to find someone in the US who hasn't at least heard of ChatGPT since its launch, and many have used some form of it. But these systems are only as smart as the data they've been trained on, and that data was created by humans. This means that, like humans, these AI tools can be prone to bias. 

Also: How to avoid the headaches of AI skills development

Bloomberg recently published a study about racial biases in experiments with GPT-3.5. Researchers asked the AI tool to rank resumes of equally qualified candidates that differed only in the candidates' names, repeating the experiment 1,000 times. They found GPT-3.5 ranked people with names traditionally associated with certain demographics, such as Black Americans, at the bottom of the list.

Another study showed that AI models are also affected by pre-existing biases in healthcare applications due to historical inequalities and disparities in access and quality. These factors are accentuated when AI systems are trained on data reflecting inequalities.

Here are four ways AI is contributing to bias in the workplace.

1. Name-based discrimination

The rise in generative AI has affected automated hiring systems, especially as many companies have become enthusiastic about using AI tools in recruitment to save costs and increase efficiency. However, AI tools like ChatGPT have been found to exhibit blatant biases based on people's names.

The Bloomberg study, undertaken by researchers Leon Yin, Davey Alba, and Leonardo Nicoletti, created eight different resumes with names distinctly associated with certain racial and ethnic groups. They then used GPT-3.5 -- the large language model (LLM) behind the free tier of ChatGPT -- to rank these resumes by job suitability. Accentuating racial bias long explored in sociological research, GPT-3.5 favored some demographic groups over others "to an extent that would fail benchmarks used to assess job discrimination against protected groups", according to the study. 

Also: AI safety and bias: Untangling the complex chain of AI training

The Bloomberg researchers ran the experiment 1,000 times with different names and combinations but with the same qualifications. GPT-3.5 was most likely to rank names distinct to Asian Americans (32%) as the top candidates for a financial analyst role, while Black Americans were most often ranked at the bottom. Candidates with white or Hispanic-sounding names were most likely to receive equal treatment. 

2. Inconsistent standards across job types

Even though all the resumes had the same qualifications for the financial analyst position, the results still showed a racial bias from the LLM. When the experiment was repeated for three more job postings, namely HR business partner, senior software engineer, and retail manager, the researchers also found that gender and racial preferences differed depending on the job. 

"GPT seldom ranked names associated with men as the top candidate for HR and retail positions, two professions historically dominated by women. GPT was nearly twice as likely to rank names distinct to Hispanic women as the top candidate for an HR role compared to each set of resumes with names distinct to men," the study found.

Also: The ethics of generative AI: How we can harness this powerful technology

Another example of AI tools applying inconsistent standards is the case of an MIT student. Rona Wang, an Asian American college student, uploaded a selfie to an image generator called Playground AI and asked the tool to turn the photo into "a professional LinkedIn profile photo". Instead, the tool turned Wang's photo into an image of a Caucasian woman wearing her MIT sweatshirt. 

3. Amplification of historical societal biases

Generative AI tools are often used to screen and rank candidates, create resumes and cover letters, and summarize several files simultaneously. But AIs are only as good as the data they're trained on. 

GPT-3.5 was trained on massive amounts of widely available information online, including books, articles, and social media. This online data inevitably reflects societal inequities and historical biases, which the model inherits and replicates to some degree. 

Also: Five ways to use AI responsibly

No one using AI should assume these tools are objective simply because they're trained on large amounts of data from different sources. Generative AI bots can be useful, but the risk of bias in an automated hiring process should not be underestimated, a reality that recruiters, HR professionals, and managers cannot afford to ignore.

Another study found racial bias is present in facial-recognition technologies that show lower accuracy rates for dark-skinned individuals. Something as simple as data for demographic distributions in ZIP codes being used to train AI models, for example, can result in decisions that disproportionately affect people from certain racial backgrounds.

4. Lack of transparency and accountability

Although there can be a gung-ho attitude toward using generative AI to automate HR processes, these tools often lack transparency. 

Vendors typically attach disclaimers noting that outputs may be inaccurate. Yet many businesses build applications on top of generative AI models anyway, which makes holding these tools accountable difficult.

The Bloomberg study puts it best: "If GPT treated all the resumes equally, each of the eight demographic groups would be ranked as the top candidate one-eighth (12.5%) of the time."
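The equal-treatment baseline the study describes can be checked with simple arithmetic: with eight groups and fair rankings, each group should land on top about 12.5% of the time, and a chi-squared statistic measures how far observed counts stray from that baseline. The counts below are hypothetical, for illustration only; they are not Bloomberg's actual data.

```python
# Hypothetical top-rank counts for eight demographic groups across
# 1,000 ranking runs (illustrative numbers, not the study's results).
top_rank_counts = {
    "group_a": 320, "group_b": 150, "group_c": 130, "group_d": 120,
    "group_e": 100, "group_f": 90, "group_g": 50, "group_h": 40,
}

runs = sum(top_rank_counts.values())        # 1,000 simulated rankings
expected = runs / len(top_rank_counts)      # 125 top ranks = 12.5% per group

# Chi-squared statistic: sums the squared deviation of each group's
# observed count from the equal-treatment expectation. Large values
# indicate the rankings are far from fair.
chi_sq = sum((obs - expected) ** 2 / expected
             for obs in top_rank_counts.values())

print(f"Equal-treatment baseline: {expected / runs:.1%} per group")
print(f"Chi-squared statistic: {chi_sq:.1f}")
```

A statistic near zero would mean the model treated all eight groups roughly equally; the skewed counts above produce a large value, the kind of deviation the study says would fail standard job-discrimination benchmarks.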

Also: Do companies have ethical guidelines for AI use?

When Bloomberg confronted OpenAI with the study's findings, the company behind ChatGPT said that results from out-of-the-box models might not reflect how users actually employ them, and noted that businesses could remove names from resumes before passing them to a GPT model. 
