National Science Foundation, Penn State Launch Study of Bias in AI Recruiting Software, Underscoring Questions About Its Viability

November 02, 2018

A $225,000 National Science Foundation grant to researchers at Penn State to study recruiting software comes on the heels of the disclosure that Amazon in 2017 shut down an experimental hiring tool that used artificial intelligence to identify candidates for technical positions, because the tool was not evaluating candidates in a gender-neutral way.

The Penn State research team believes AI tools being used to attract, screen and hire employees are deficient.  According to project lead Dr. Lynette Yarger: “These tools have not been thoroughly tested under the law and raise concerns about the potential for bias, fairness, transparency and accuracy.”  She continued, “When algorithms make inferences about applicants’ age, race, religion and sex, it is difficult to determine if firms are adhering to federal laws that protect job applicants against discrimination.”

Amazon is reported to have lost faith in its experiment:  The company’s AI tool, which analyzed resumes and hiring patterns over a 10-year period, screened out applicants whose resumes carried female markers.  The company is reported to have since confined the tool to rudimentary chores.

Talent acquisition leaders in HR Policy member companies express concern over legal liability:  As companies evaluate software packages offered to enhance their ability to identify promising candidates, we routinely hear concerns about whether the software will withstand scrutiny by the OFCCP or the EEOC.  In an ideal world, employers would like government agencies to give a stamp of approval to solutions meeting their regulatory requirements, but that is not how enforcement agencies typically operate; they react to complaints.

Software developers continue seeking algorithmic processes free of bias:  For example, IBM recently announced AI OpenScale, which operates within Watson to explain how AI models make decisions and to detect and mitigate bias.  Accenture Applied Intelligence has likewise developed a fairness tool for understanding and addressing bias, recently described in the Harvard Business Review.

The Penn State study is significant because Congress often uses research sponsored by the National Science Foundation as the basis of legislation.  At the same time, we believe government agencies will be reluctant to wade into the complexities of determining algorithmic bias, preferring to draw inferences from outcomes.  One proposed solution is for an organization trusted by key constituencies (developers, employers, applicants, and government) to audit software and determine whether it is both compliant and bias-free.  Our RSI Initiative has offered Penn State its assistance and is determining how best to keep compliance at the top of the RSI Review Board’s agenda.