AI hiring tools claim to reduce bias in hiring by incorporating machine-based decisions, but at least in their early stages, AI hiring strategies have the potential to hurt DEI in your organisation.


The use of artificial intelligence in the hiring process has increased in recent years with companies turning to automated assessments, digital interviews, and data analytics to parse through resumes and screen candidates. But as IT strives for better diversity, equity, and inclusion (DEI), it turns out AI can do more harm than help if companies aren’t strategic and thoughtful about how they implement the technology.

“The bias usually comes from the data. If you don’t have a representative data set, or any number of characteristics that you decide on, then of course you’re not going to be properly finding and evaluating applicants,” says Jelena Kovačević, IEEE Fellow, William R. Berkley Professor, and Dean of the NYU Tandon School of Engineering.

The chief issue with AI’s use in hiring is that, in an industry that has been predominantly male and white for decades, the historical data on which AI hiring systems are built will ultimately have an inherent bias.

Without diverse historical data sets to train AI algorithms, AI hiring tools are very likely to carry the same biases that have existed in tech hiring since the 1980s. Still, used effectively, AI can help create a more efficient and fair hiring process, experts say.

The dangers of bias in AI

Because AI algorithms are typically trained on past data, bias with AI is always a concern. In data science, bias is defined as an error that arises from faulty assumptions in the learning algorithm.

Train your algorithms on data that doesn’t reflect the current landscape, and you will derive erroneous results. In hiring, then, especially in an industry like IT that has had historical issues with diversity, training an algorithm on historical hiring data can be a big mistake.

“It’s really hard to ensure a piece of AI software isn’t inherently biased or has biased effects,” says Ben Winters, an AI and human rights fellow at the Electronic Privacy Information Center. While steps can be taken to avoid this, he adds, “many systems have been shown to have biased effects based on race and disability.”

If you don’t have appreciable diversity in your data set, then it’s impossible for an algorithm to know how individuals from underrepresented groups would have performed in the past. Instead, your algorithm will be biased toward what your data set represents and will compare all future candidates to that archetype, says Kovačević.

“For example, if Black people were systematically excluded from the past, and if you had no women in the pipeline in the past, and you create an algorithm based on that, there is no way the future will be properly predicted. If you hire only from ‘Ivy League schools,’ then you really don’t know how an applicant from a lesser-known school will perform, so there are several layers of bias,” she says.
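To see how that plays out, consider a toy sketch (hypothetical data, scikit-learn): a screening model trained on past hiring decisions that favoured elite schools simply re-learns that filter, no matter how skilled other applicants are.

```python
# Toy illustration (invented data): a model trained on past hiring
# decisions re-learns whatever filter produced those decisions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
skill = rng.normal(size=n)                 # actual ability signal
elite_school = rng.integers(0, 2, size=n)  # 1 = "Ivy League" flag

# Historical labels: hiring favoured elite schools regardless of skill.
hired = (elite_school == 1) & (skill > -0.5)

X = np.column_stack([skill, elite_school])
model = LogisticRegression().fit(X, hired)

# Two candidates with identical skill, different school background:
same_skill = [[1.0, 1], [1.0, 0]]
print(model.predict_proba(same_skill)[:, 1])
# The non-elite candidate scores near zero, because no such hires exist
# in the training data -- the bias is inherited, not corrected.
```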

Wendy Rentschler, head of corporate social responsibility, diversity, equity, and inclusion at BMC Software, is keenly aware of the potential negatives that AI can bring to the hiring process. She points to the infamous case of Amazon’s attempt to develop an AI recruiting tool as a prime example: The company had to shut the project down because the algorithm discriminated against women.

“If the largest and greatest software company can’t do it, I give great pause to all the HR tech and their claims of being able to do it,” says Rentschler.

Some AI hiring software companies make big claims, but whether their software can help determine the right candidate remains to be seen. The technology can help companies streamline the hiring process and identify qualified candidates in new ways, but it’s important not to let lofty claims cloud judgment.

If you’re trying to improve DEI in your organisation, AI can seem like a quick fix or magic bullet, but if you’re not strategic about your use of AI in the hiring process, it can backfire. The key is to ensure your hiring process and the tools you’re using aren’t excluding traditionally underrepresented groups.

Discrimination with AI

It’s up to companies to ensure they’re using AI in the hiring process as ethically as possible and not falling victim to overblown claims of what the tools can do. Matthew Scherer, senior policy counsel for worker privacy at the Center for Democracy & Technology, points out that, since the HR department doesn’t generate revenue and is usually labelled as an expense, leaders are sometimes eager to bring in automation technology that can help cut costs.

That eagerness, however, can cause companies to overlook potential negatives of the software they’re using. Scherer also notes that a lot of the claims made by AI hiring software companies are often overblown, if not completely false.

“Particularly tools that claim to do things like analyse people’s facial expressions, their tone of voice, anything that measures aspects of personality -- that’s snake oil,” he says.

At best, tools that claim to measure tone of voice, expressions, and other aspects of a candidate’s personality in, for example, a video interview are “measuring how culturally ‘normal’ a person is,” which can ultimately exclude candidates with disabilities or any candidate who doesn’t fit what the algorithm determines is a typical candidate.

These tools can also put disabled candidates in the uncomfortable position of having to decide whether they should disclose any disabilities before the interview process. Disabled candidates may have concerns that if they don’t disclose, they won’t get the right accommodations needed for the automated assessment, but they might not be comfortable disclosing a disability that early in the hiring process, or at all.

And as Rentschler points out, BIPOC, women, and candidates with disabilities are often accustomed to the practice of “code switching” in interviews -- when underrepresented groups adjust the way they speak, appear, or behave in order to make others more comfortable. AI systems might pick up on these adjustments and incorrectly identify a candidate’s behaviour as inauthentic or dishonest, turning away potentially strong candidates.

Scherer says discrimination laws fall into two categories: disparate impact, which is unintentional discrimination; and disparate treatment, which is intentional discrimination. It’s difficult to design a tool that can avoid disparate impact “without explicitly favouring candidates from particular groups, which would constitute disparate treatment under federal law.”
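One common statistical screen for disparate impact is the EEOC’s “four-fifths rule”: if any group’s selection rate falls below 80% of the highest group’s rate, the process warrants scrutiny. A minimal sketch of that check, using hypothetical counts:

```python
# Four-fifths (80%) rule check for disparate impact -- hypothetical counts.
def selection_rates(outcomes):
    """outcomes: {group: (selected, applicants)} -> {group: rate}"""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def four_fifths_check(outcomes, threshold=0.8):
    """True means the group's rate is at least 80% of the highest rate."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: rate / top >= threshold for g, rate in rates.items()}

results = four_fifths_check({
    "group_a": (50, 100),   # 50% selected -> reference rate
    "group_b": (20, 100),   # 20% selected -> ratio 0.4, fails the check
})
print(results)  # {'group_a': True, 'group_b': False}
```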

Regulations in AI hiring

AI is a relatively new technology, and oversight remains scant when it comes to legislation, policies, and laws around privacy and trade practices. Winters points to a 2019 FTC complaint filed by EPIC alleging that HireVue used deceptive business practices related to facial recognition in its hiring software.

HireVue claimed to offer software that “tracks and analyses the speech and facial movements of candidates to be able to analyse fit, emotional intelligence, communication skills, cognitive ability, problem solving ability, and more.” HireVue ultimately pulled back on its facial recognition claims and the use of the technology in its software.

But there’s similar technology out there that uses games to “purportedly measure subjective behavioural attributes and match with organisational fit” or that will use AI to “crawl the internet for publicly available information about statements by a candidate then analyse it for potential red flags or fit,” according to Winters.

There are also concerns about the amount of data AI can collect on a candidate while analysing video interviews, assessments, resumes, LinkedIn profiles, and other public social media profiles. Oftentimes, candidates don’t even know they’re being analysed by AI tools in the interview process, and there are few regulations on how that data is managed.

“Overall, there is currently very little oversight for AI hiring tools. Several state or local bills have been introduced. However, many of these bills have significant loopholes -- namely not applying to government agencies and offering significant workarounds.

"The future of regulation in AI-supported hiring should require significant transparency, controls on the application of these tools, strict data collection, use, and retention limits, and independent third-party testing that is published freely,” says Winters.

Responsible use of AI in hiring

Rentschler and her team at BMC have focused on finding ways to use AI to help the company’s “human capital be more strategic.”

They’ve implemented tools that screen candidates quickly using skills-based assessments for the role they’re applying to and that instantly schedule interviews to connect with a recruiter. BMC has also used AI to identify problematic language in its job descriptions, ensuring they’re gender-neutral and inclusive for every applicant.
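BMC hasn’t published its tooling, but an audit of this kind can start as simply as scanning job descriptions for gender-coded wording. The word lists below are illustrative, loosely based on research into gendered wording in job ads (Gaucher et al., 2011):

```python
# Minimal job-description audit: flag gender-coded wording.
# Word lists are illustrative, not exhaustive.
MASCULINE_CODED = {"aggressive", "dominant", "ninja", "rockstar", "competitive"}
FEMININE_CODED = {"nurturing", "supportive", "collaborative", "interpersonal"}

def audit_job_description(text):
    words = {w.strip(".,;:()").lower() for w in text.split()}
    return {
        "masculine_coded": sorted(words & MASCULINE_CODED),
        "feminine_coded": sorted(words & FEMININE_CODED),
    }

ad = "We want an aggressive, competitive ninja to join our supportive team."
print(audit_job_description(ad))
# {'masculine_coded': ['aggressive', 'competitive', 'ninja'],
#  'feminine_coded': ['supportive']}
```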

BMC has also employed the software to connect new hires with their benefits and internal organisational information during the onboarding process. Rentschler’s objective is to find ways to implement AI and automation that can help the humans on her team do their jobs more effectively, rather than replace them.

While AI algorithms can carry inherent bias based on historical hiring data, one way to avoid this is to focus more on skills-based hiring. Rentschler’s team uses AI tools only to identify candidates who have the specific skill sets it’s looking to add to the workforce, ignoring other identifiers, such as education, gender, and name, that might have historically excluded a candidate from the process.

By doing this, BMC has hired candidates from unexpected backgrounds, Rentschler says, including a Syrian refugee who was originally a dentist, but also had some coding experience. Because the system was focused only on looking for candidates with coding skills, the former dentist made it past the filter and was hired by the company.
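The general shape of that approach is easy to sketch: redact identifying fields before scoring, so that only skills reach the matcher. The field names here are hypothetical, not BMC’s actual system:

```python
# Skills-based screening sketch: strip identifying fields before matching,
# so only skills reach the scorer. Field names are hypothetical.
REDACTED_FIELDS = {"name", "gender", "education", "photo", "age"}

def redact(candidate: dict) -> dict:
    return {k: v for k, v in candidate.items() if k not in REDACTED_FIELDS}

def skills_match_score(candidate: dict, required: set) -> float:
    skills = set(candidate.get("skills", []))
    return len(skills & required) / len(required) if required else 0.0

candidate = {
    "name": "A. Applicant",
    "education": "no university degree",
    "skills": ["python", "sql", "javascript"],
}
anonymous = redact(candidate)
print(skills_match_score(anonymous, {"python", "sql"}))  # 1.0
```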

Other ethical strategies include having checks and balances in place. Scherer consulted with a company that designed a tool to send potential candidates to a recruiter, who would then review their resumes and decide whether they were a good fit for the job.

Even if that recruiter rejected a resume, the candidate’s resume would still be put through the algorithm again, and if it was flagged as a good potential match, it would be sent to another recruiter who wouldn’t know it had already been reviewed by someone else on the team. This ensured that resumes were double-checked by humans, that the company wasn’t relying solely on AI to determine qualified candidates, and that recruiters weren’t overlooking qualified candidates.
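A minimal sketch of that routing logic (all names hypothetical): a resume a recruiter rejects is re-scored by the algorithm, and a strong score sends it to a second recruiter who is blind to the first decision.

```python
# Human-in-the-loop double-check sketch (names hypothetical): a resume a
# recruiter rejects is re-scored, and a strong score routes it to a second
# recruiter who doesn't know it was already reviewed.
def review_pipeline(resume, score_fn, first_recruiter, second_recruiter,
                    threshold=0.8):
    if first_recruiter(resume):
        return "advance"                      # first human says yes
    if score_fn(resume) >= threshold:
        # Blind second review: no signal that the resume was rejected once.
        return "advance" if second_recruiter(resume) else "reject"
    return "reject"

# Example with stand-in callables:
decision = review_pipeline(
    resume={"skills": ["python", "sql"]},
    score_fn=lambda r: 0.9,                   # algorithm flags as strong
    first_recruiter=lambda r: False,          # first recruiter passed on it
    second_recruiter=lambda r: True,          # second recruiter advances it
)
print(decision)  # "advance"
```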

“It’s important that the human retains the judgment and doesn’t just rely on what the machine says. And that’s the thing that is hard to train for, because the easiest thing for a human recruiter to do will always be to just say, ‘I’m going to just go with whatever the machine tells me if the company is expecting me to use that tool,’” says Scherer.
