Job applicants must increasingly contend with AI hiring systems that are ostensibly designed to reduce bias but which, research shows, instead exacerbate it.
The most glaring example occurred at Amazon, whose system was shown to downgrade any CV containing the word "women's." Aside from obvious and illegal discriminatory decisions based on race, sex, or health condition, AI hiring apps now claim to be able to assess human emotions, a claim that has been debunked. The question is to what degree societies are willing to tolerate systems that promote inefficient and inequitable resource allocation decisions in an economy desperate for skilled workers of any kind. JL
Ifeoma Ajunwa reports in Wired:
Automated hiring systems range from tools that parse resumes and rank them to systems that green-light candidates and trash applicants deemed unfit. Increasingly, working Americans are obligated to use them if they want to get hired. AI hiring systems play a crucial role in shaping corporations and in determining who gets to move up professionally. (But) they sort people by their race, age, and sex, a practice shown to deny equal employment opportunity, as older workers and women are less likely to see (job) ads. Automated video interviewing systems claim to parse human emotion, (but) the idea that traits like trustworthiness can be gleaned from facial expression and body movement is pseudo-science with both racist and eugenicist roots.

Earlier this month, Lina Khan, chair of the US Federal Trade Commission (FTC), wrote an essay in The New York Times affirming the agency's commitment to regulating AI. But there was one AI application Khan didn't mention that the FTC urgently needs to regulate: automated hiring systems. These range in complexity from tools that merely parse resumes and rank them to systems that green-light candidates and trash applicants deemed unfit. Increasingly, working Americans are obligated to use them if they want to get hired.

In my recent book, The Quantified Worker, I argue that the American worker is being reduced to numbers by AI technologies in the workplace, automated hiring systems chief among them. These systems reduce applicants to a score or rank, often ignoring the gestalt of their human experience. Sometimes they even sort people by their race, age, and sex, a practice that's legally prohibited from being part of the employment decisionmaking process.
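The mechanics are easy to illustrate with a toy model. The sketch below is hypothetical, with fabricated data, and is not any vendor's actual system; it shows how a ranker fit to biased historical decisions can learn to penalize a term ("women's") that proxies for sex, as reportedly happened at Amazon:

```python
from collections import Counter
import math

# Fabricated "historical" outcomes: past (biased) hiring decisions, 1 = advanced.
history = [
    ("captain chess club", 1),
    ("software engineering intern", 1),
    ("captain women's chess club", 0),   # biased past decision
    ("women's coding society lead", 0),  # biased past decision
    ("software engineering lead", 1),
    ("volunteer tutor", 0),
]

# Count how often each term appears in advanced vs. rejected resumes.
hired, rejected = Counter(), Counter()
for text, label in history:
    (hired if label else rejected).update(text.split())

def term_weight(term: str) -> float:
    """Smoothed log-odds of a term appearing in advanced resumes;
    a negative weight means the model learned to penalize the term."""
    return math.log((hired[term] + 1) / (rejected[term] + 1))

def score(resume: str) -> float:
    return sum(term_weight(t) for t in resume.split())

# Identical qualifications; only the gendered proxy term differs.
print(score("captain chess club"))          # ~0.0
print(score("captain women's chess club"))  # negative: proxy term penalized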
Ironically, many of these systems are marketed as being bias-free or guaranteed to reduce the probability of discriminatory hiring. But because they're so loosely regulated, such systems have been shown to deny equal employment opportunity on the basis of protected categories such as race, age, sex, and disability. In December 2022, for example, a female truckers union sued Meta, alleging that Facebook "selectively shows job advertisements based on users' gender and age, with older workers far less likely to see ads and women far less likely to see ads for blue-collar positions, especially in industries that historically exclude women." This is deceptive. Worse, it is unfair to job applicants and employers alike. Employers purchase automated hiring systems to reduce their liability for employment discrimination, and the vendors of those systems are legally obligated to substantiate their claims of efficacy and fairness.
The law puts automated hiring systems under the FTC’s purview, but the agency has yet to release specific guidelines on how purveyors of these systems ought to advertise their wares. It should start by requiring auditing to ensure that automated hiring platforms are fulfilling the promises they make to employers. The vendors of these platforms should be obligated to provide clear records of audits demonstrating that their systems reduce bias in employment decisionmaking as advertised. These audits should be able to show that the designers followed Equal Employment Opportunity Commission (EEOC) guidelines when creating the platforms.
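One concrete check such audits could document is the four-fifths rule from the EEOC's Uniform Guidelines on Employee Selection Procedures: if any group's selection rate falls below 80 percent of the highest group's rate, that is evidence of adverse impact. A minimal sketch of the computation, with fabricated numbers:

```python
# Hypothetical screening outcomes per applicant group: (applicants, advanced).
outcomes = {
    "group_a": (200, 90),   # selection rate 45%
    "group_b": (150, 45),   # selection rate 30%
}

# Selection rate for each group, with the highest rate as the benchmark.
rates = {g: advanced / total for g, (total, advanced) in outcomes.items()}
benchmark = max(rates.values())

for group, rate in rates.items():
    ratio = rate / benchmark
    verdict = "possible adverse impact" if ratio < 0.8 else "ok"
    print(f"{group}: rate {rate:.0%}, impact ratio {ratio:.2f} -> {verdict}")
```

A real audit would go further, covering intersectional groups and statistical significance, but even this simple ratio gives vendors, employers, and regulators a verifiable benchmark.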
Also, in collaboration with the EEOC, the FTC could establish the Fair Automated Hiring Mark, which would be used to certify that automated hiring systems have passed the rigorous auditing process. As an imprimatur, the mark would be a useful signal of quality to consumers—both applicants and employers.
The FTC should also allow job applicants, who are consumers of AI-enabled online application systems, to sue under the Fair Credit Reporting Act (FCRA). Previously, the FCRA was thought to apply only to the Big Three credit agencies, but a close reading shows that this law can apply whenever a report has been created for any "economic decision." By this definition, applicant profiles created by online automated hiring platforms are "consumer reports," which means that the entities that generate them (such as online hiring platforms) would be considered credit reporting agencies. Under the FCRA, anyone who is the subject of one of these reports can petition the agency that made it to see the results and demand corrections or amendments. Most consumers do not know they have these rights. The FTC should launch an education campaign to inform applicants of these rights so they can make use of them.
The 1982 case of Thompson v. San Antonio Retail Merchants Ass'n (SARMA) sets a helpful precedent for job applicants. In that case, SARMA, a credit agency, entered the incorrect Social Security number for Thompson's profile and thus erroneously attributed to him the bad credit history of another man with a similar name. The district court found that SARMA had been negligent, and the appeals court affirmed. The FTC could create a website to help job applicants query automated hiring platforms for their applicant reports and allow them to submit claims to correct erroneous reports. The FTC could also establish fines and even allow applicants to sue when platforms fail to update inaccurate reports.
Finally, the FTC should completely ban the sale of automated video interviewing systems that claim to parse human emotion. The idea that human traits like trustworthiness can be gleaned from facial expressions and body movements is a pseudoscience akin to the debunked practice of phrenology, which has both racist and eugenicist roots. Automated interview systems that claim they can do this only end up excluding job applicants who do not fit the majority race or who might have disabilities.
AI hiring systems play a crucial role in shaping corporations and in determining who does and does not get to move up professionally. The FTC should regulate marketing claims about them to deter deceptive practices and allow for fair competition among job applicants. By acting on these proposals, the FTC can ensure that well-intentioned employers are buying effective hiring tools rather than snake oil.