AI bias: How tech determines if you land a job, get a loan or end up in jail

Businesses across almost every industry deploy artificial intelligence to make jobs simpler for staff and tasks easier for consumers. 

Computer software teaches customer service agents how to be more compassionate, schools use machine learning to scan for weapons and mass shooters on campus, and doctors use AI to map the root cause of diseases.

Sectors such as cybersecurity, online entertainment and retail use the tech in combination with wide swaths of customer data in revolutionary ways to streamline services. 

Though these applications may seem harmless, perhaps even helpful, the AI is only as good as the information fed into it, which can have serious implications.

You might not realize it, but in some cases AI helps determine whether you qualify for a loan. And there are products in the pipeline that could lead a police officer to stop you because software misidentified you as someone else.

Imagine if people on the street could take a photo of you, then a computer scanned a database to tell them everything about you, or if an airport’s security camera flagged your face while a bad guy walked clean through TSA.

Those are real-world possibilities when the tech that’s supposed to bolster convenience has human bias baked into the framework.

“Artificial intelligence is a super powerful tool, and like any really powerful tool, it can be used to do a lot of things – some of which are good and some of which can be problematic,” said Eric Sydell, executive vice president of innovation at Shaker International, which develops AI-enabled software.

“In the early stages of any new technology like this, you see a lot of companies trying to figure out how to bring it into their business,” Sydell said, “and some are doing it better than others.”

Artificial intelligence tends to be a catch-all term to describe tasks performed by a computer that would usually require a human, such as speech recognition and decision making. 

Whether it’s intentional or not, humans make judgments that can spill over into the code created for AI to follow. That means AI can contain implicit racial, gender and ideological biases, which prompted an array of federal and state regulatory efforts.
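The mechanism is straightforward to demonstrate: a system trained on historical decisions inherits whatever bias those decisions contained. As a minimal sketch (all data, group labels and thresholds here are synthetic and hypothetical, not drawn from any real product):

```python
# Minimal sketch: a "model" that simply learns historical approval rates
# will reproduce any bias present in its training data.

def train(history):
    """Learn the approval rate per group from past (group, approved) decisions."""
    rates = {}
    for group, approved in history:
        n, k = rates.get(group, (0, 0))
        rates[group] = (n + 1, k + (1 if approved else 0))
    return {g: k / n for g, (n, k) in rates.items()}

def predict(rates, group):
    """Approve only if the group's historical approval rate exceeds 50%."""
    return rates[group] >= 0.5

# Synthetic historical decisions that were biased against group "B"
history = [("A", True)] * 80 + [("A", False)] * 20 \
        + [("B", True)] * 30 + [("B", False)] * 70

rates = train(history)
print(rates)                # {'A': 0.8, 'B': 0.3}
print(predict(rates, "A"))  # True
print(predict(rates, "B"))  # False -- the model replays the old bias
```

Nothing in the code mentions race, gender or ideology, yet the output discriminates anyway, because the past decisions it learned from did. Real systems are far more complex, but the underlying failure mode is the same.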

Criminal justice 

In June, Rep. Don Beyer, D-Va., offered two amendments to a House appropriations bill that would prevent federal funds from covering facial recognition technology by law enforcement and require the National Science Foundation to report to Congress on the social impacts of AI.

“I don’t think we should ban all federal dollars from doing all AI. We just have to do it thoughtfully,” Beyer told USA TODAY. He said computer learning and facial recognition software could enable police to falsely identify someone, prompting a cop to reach for a gun in extreme cases. 

“I think very soon we will ask to ban the use of facial recognition technology on body cams because of the real-time concerns,” Beyer said. “When data is inaccurate, it could cause a situation to get out of control.”

AI is also used in predictive analysis, in which a computer estimates how likely a person is to commit a crime. Though it’s not quite to the extent of the “precrime” police units of the Tom Cruise sci-fi hit “Minority Report,” the technique has faced scrutiny over whether it improves safety or simply perpetuates inequities.

Americans have voiced mixed support for AI applications, and the majority (82%) agree that it should be regulated, according to a study this year from the Center for the Governance of AI and Oxford University’s Future of Humanity Institute.

When it comes to facial recognition specifically, Americans say law enforcement agencies will put the tech to good use. 

Jobs and hiring

Numerous studies suggest that automation will destroy jobs. Oxford academics Carl Benedikt Frey and Michael Osborne, for example, estimated that 47% of American jobs are at high risk of automation by the mid-2030s.

While some workers worry about being displaced by computers, others are being hired thanks to AI-enabled software.

The technology can match candidates who have the ideal skill sets for a specific work environment with employers who may be too busy to screen applicants themselves.

Shaker International uses data gathered from tests, audio interviews and resumes to predict how a person might behave on the job.

“Meaningful bits” of information include “how a person will work, how long they will stay, will they be a top sales performer or a high-quality worker,” Sydell said.

Using AI, “we can get rid of processes that don’t work well or are redundant. And we can give candidates a better experience by giving them real-time feedback throughout the process,” Sydell said. 

He said if AI is deployed poorly, it can make the job environment worse, but if it’s done thoughtfully, it can lead to fairer workplaces.
