Proponents of artificial intelligence (AI) and machine learning have promised that these tools will usher in a new age of digitized work. AI tools can now scan thousands of documents and reduce repetitive work tasks, algorithms can predict your shopping habits and recommend products before you even think of them, and machine learning software can be trained to identify cancer in MRIs. The creators and designers of these tools often tout AI's supposed objectivity. What technologists are less interested in publicizing, however, is how AI can be used to reinforce discriminatory policing, violate civil rights, enable employment discrimination, and deepen class, gender, and race disparities.
In a recent New York Times op-ed, Dr. Ifeoma Ajunwa of Cornell's Industrial and Labor Relations School highlighted hiring companies' and HR departments' increasing use of these tools. Ajunwa points out that employers are not merely using these technologies to screen candidates but are actively barring candidates from consideration for employment. As an example, she posits a company that relies on a hiring algorithm trained to seek candidates without gaps in their employment history. Such a stipulation, Ajunwa notes, would automatically screen out women who have taken time off for child care, as well as applicants who have dealt with long-term medical issues. And because the algorithm follows specific rules created by humans, there is no way for the technology to check itself against employment law or ethical norms about employment discrimination; it simply filters out applicants who do not meet the criteria.
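To make the failure mode concrete, here is a minimal, hypothetical sketch of the kind of rule Ajunwa describes. The names, data structures, and 90-day threshold are all assumptions made for illustration, not details of any actual hiring system; the point is that the rule is applied mechanically, with no awareness of why a gap exists.

```python
# Hypothetical sketch of a rule-based screening filter that rejects
# applicants with gaps in their employment history. Illustrative only.

from dataclasses import dataclass
from datetime import date


@dataclass
class Job:
    start: date
    end: date


@dataclass
class Candidate:
    name: str
    jobs: list[Job]  # assumed sorted by start date


def has_employment_gap(candidate: Candidate, max_gap_days: int = 90) -> bool:
    """Return True if any gap between consecutive jobs exceeds max_gap_days."""
    for prev, nxt in zip(candidate.jobs, candidate.jobs[1:]):
        if (nxt.start - prev.end).days > max_gap_days:
            return True
    return False


def screen(candidates: list[Candidate]) -> list[Candidate]:
    # The filter never asks *why* a gap exists (child care, illness, layoffs),
    # so it silently excludes otherwise qualified applicants.
    return [c for c in candidates if not has_employment_gap(c)]


applicants = [
    Candidate("A", [Job(date(2015, 1, 1), date(2018, 1, 1)),
                    Job(date(2018, 2, 1), date(2021, 1, 1))]),
    # Two-year gap, e.g. time off for child care: rejected automatically.
    Candidate("B", [Job(date(2013, 1, 1), date(2016, 1, 1)),
                    Job(date(2018, 1, 1), date(2021, 1, 1))]),
]

print([c.name for c in screen(applicants)])  # -> ['A']
```

Nothing in this logic consults employment law or asks whether the gap reflects a protected circumstance; the discrimination is simply a side effect of the chosen criterion.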
Dr. Ajunwa is not the only one sounding the alarm about employers' increasing reliance on AI and other tools that their creators purport to be objective. According to Cathy O'Neil, author of Weapons of Math Destruction, such algorithmic bias is common in hiring, especially for low-wage jobs, where massive retail companies rely on sophisticated AI systems that weigh aspects of your life you would not expect to have any bearing on employment, such as your credit score, medical and mental health histories, personality test results, and driving record.
In recent years, several lawsuits and investigations regarding AI discrimination have emerged, and a number of researchers in tech have begun developing methods to illuminate the hidden bias in machine learning and AI technologies. However, as Dr. Ajunwa notes, there are few concrete laws on the books that protect applicants from algorithmic discrimination. Moreover, the Harvard Business Review has cautioned that, unlike other forms of employment testing, many of these AI-based tools remain empirically untested, leaving the door open to ethical and legal problems.