Corporate human resources leaders have embraced AI. Doing so proves they are in sync with 'the future,' that they know how to cut costs (having lost the argument that human resources are a strategic asset rather than a drag on profits) and that they totally buy in to the need for 'efficiency.'
But a more objective analysis reveals that AI systems tend to be biased toward applications written by... AI systems (sound familiar to those concerned about white male hiring biases?), and that they are so easily gamed that they are giving corporations a false sense of security about quality when the opposite may be true, thanks to bots, identity hiding, ghosting and other realities of contemporary hiring. There are even digital how-to manuals on what to do if you fear you've hired a deepfake. The lesson, as in every other phase of technology adoption, is that entirely replacing humans with machines is likely to be a fantasy rather than a path to optimization. JL
Taylor Telford reports in the Washington Post:
AI is reshaping the U.S. job market, playing a growing role in American workplaces. It is worming its way into the hiring chain, from posting jobs online, sourcing candidates and scanning applications to scheduling interviews and interacting with job seekers. The $3.3 billion recruiting software market is projected to nearly double by 2032. AI tools meant to make job hunting and hiring more efficient are sowing distrust. While companies embrace AI recruiters and application-scanning systems, finding real, qualified workers amid bots, cheaters and deepfakes is getting tougher as candidates use AI to write cover letters, bluff their way through interviews and even hide their identities. AI systems (prefer) applications written with AI models over those written by humans. "It's easy to game."
Of the 150-odd jobs Jaye West applied for in the past few months, nearly all of them involved artificial intelligence somewhere in the process.
There are the chatbots that helped West, a junior at the University of Washington at Bothell, populate applications for chains like McDonald’s and delivery services like Gopuff, and the talking robot that proctored his “utterly freaky” interview with Clear, the airport security screening company. Then there’s the bot that notified West that he’d advanced to an in-person interview at Chipotle a week and a half later, only for him to show up at a Seattle store and find the manager was out and no one knew he was coming.
It all left the 21-year-old feeling “like companies aren’t that serious about hiring,” West said. When he turned in an application, he started assuming that “nobody is going to see it” — an increasingly common sentiment for job seekers in the age of AI. West was stunned earlier this week when, 12 hours after applying for seasonal work on the dock crew at a local boat club, the hiring manager invited him for an interview. During their 30-minute call, she mentioned that she reviews all résumés herself.
“It was amazing talking to a real person,” said West, who got an offer and happily accepted.
The speedy embrace of AI tools meant to make job hunting and hiring more efficient is causing headaches and sowing distrust in these processes, people on both sides of the equation say. While companies embrace AI recruiters and application scanning systems, many job seekers are trying to boost their odds with software that generates application materials, optimizes them for AI and applies to hundreds of jobs in minutes.
Meanwhile, recruiters and hiring managers are fielding more applicants than they can keep up with, yet contend that finding real, qualified workers amid the bots, cheaters and deepfakes is only getting tougher as candidates use AI to write their cover letters, bluff their way through interviews and even hide their identities.
Vidoc Security Lab, a Polish cybersecurity start-up, recently put out a guide on how to detect fake job seekers after nearly hiring a “deepfake” candidate who made it through multiple stages of the company’s hiring process while using software to obscure their face and beat coding tests. Such candidates pose a major security threat, Vidoc’s chief executive Klaudia Kloc said, because they may be trying to steal data or proprietary information.
Vidoc has heard from scores of companies facing similar struggles, Kloc said, including some that asked what they should do if they believe they hired a deepfake. Among the red flags, Vidoc’s guide warns: applications flooding in seconds after a job has been posted, mismatched details like addresses and dates in applications, résumés that seem too good to be true, and candidates refusing to turn on their camera during virtual interviews.
“It was very eye-opening,” said Dawid Moczadlo, Vidoc’s chief technology officer, who recently shared a video on LinkedIn of an interview he was forced to cancel midway after the candidate was unable to wave a hand in front of his face to prove he was human. “Other people shared that they encounter candidates like this every week or so.”
Vidoc has been working to “AI-proof” its hiring process, Moczadlo said, from asking more personal questions early on to conducting final interviews in person, even though travel and a day’s wages can add hundreds of dollars to the cost of a single hire. But he said they’re still reworking their technical interviews — which involve coding and problem-solving — to ensure candidates can’t lean on AI tools like Claude and ChatGPT.
“In the job, they will be allowed to use AI, but we want to know how smart they are without the AI,” Moczadlo said.
Aside from the tool it built to filter out fake applicants, Vidoc isn’t using AI in its hiring. But that approach is becoming less and less common, as AI worms its way into every link in the hiring chain, from posting jobs online, sourcing candidates and scanning applications to scheduling interviews and interacting with job seekers. The $3.3 billion global recruiting software market is projected to nearly double by 2032.
Artificial intelligence is reshaping the U.S. job market — which remains relatively strong at 4.1 percent unemployment — as automation plays a growing role in American workplaces. The effects have been particularly acute in the tech sector, which has been flooded with workers who recruiters and hiring managers say may be overreliant on AI, making it harder to assess their actual skills, according to Herval Freire, who has been recruiting engineers for more than 15 years.
“It feels like tech is the canary in the mine here,” Freire said. “This is what it’s going to look like for everybody in a year.”
Freire, a onetime Facebook engineering manager who’s now head of engineering at Mobile.dev, a San Francisco start-up, said he and other hiring managers are facing “top-of-the-funnel issues I’ve never seen before.” In February, a job he posted on LinkedIn brought in 150 applicants within seven minutes. LinkedIn told him he’d have to pay to unlock any more. But when he looked through the candidate pool, he found scores of CVs submitted under multiple names, and others riddled with errors and inconsistencies. And “almost every CV I read has been rewritten with ChatGPT or Claude,” Freire said, making it impossible to distinguish candidates who used AI to polish their materials from those who outsourced the work completely.
“I’m pro-AI in the sense that it allows you to do things that were impossible before … but it is being misused wildly,” Freire said. The problem is “when you let it do the thinking for you, it goes from a superpower to a crutch very easily.”
Freire pointed to a recent candidate he suspected of using AI during the interview. When asked about their hobbies, the candidate panicked and froze, then responded robotically, as though reading from a teleprompter, “My hobbies include running, watching movies and going out with friends.”
In Freire’s experience, AI-powered tracking systems tend to gravitate toward applications that were clearly written with the help of a large language model (LLM) — the kind of AI that powers tools like ChatGPT and Claude — over those written by humans. Even his wife, who applied for 100 jobs with a CV she wrote herself, quickly saw better results after reworking it with AI, he said. Yet Freire is wary of AI solutions to AI problems, like chatbots that screen for fake applicants. He doesn’t use AI in his work; in fact, he’s leaning on old-school methods, such as referrals from his network. He knows many others who are doing the same, even as they acknowledge it makes their processes more insular.
The answer “can’t be let’s get AI to screen people, because that is likely biased and easy to game,” Freire said. “And it can’t be let’s talk to everybody, because now instead of 50 CVs, I have 1,500.”
The technology has left jurisdictions grappling with how to navigate potential legal risks — such as algorithmic discrimination — under existing employment laws, but “in ways that hadn’t been contemplated before,” according to Melanie Ronen, a Long Beach, California-based employment lawyer at Stradley Ronon.
The Equal Employment Opportunity Commission issued guidance in April 2024 on the use of AI in the workplace. Companies can use AI to screen job applicants or assess employee productivity, but grading down a candidate based on speech patterns flagged by video-interviewing software, for example, could run afoul of antidiscrimination laws.
Colorado’s Artificial Intelligence Act takes effect in February, making Colorado the first state to adopt legislation that targets discrimination from AI tools. And next year, Illinois will begin requiring employers to notify job seekers and employees when they are using AI in employment decisions.
Bettina Liporazzi, recruiting lead at digital studio letsmake.com, was shocked recently when a candidate thanked her on LinkedIn for sending a personalized rejection letter. For her, the incident underscored the necessity of preserving the “human touch” in hiring.
“AI is raising the benchmark for recruiting,” Liporazzi said. “If companies don’t raise their benchmark, they’ll get left behind.”
Katy Imhoff, a recruiter with tech staffing firm Camden Kelly, said the hiring landscape has gotten stranger and more fraught in the past year as employers integrated AI systems into their hiring processes. She believes these tools create a sense of “false confidence” for companies: Programs that crawl through résumés are too reliant on keyword-matching algorithms, she said, filtering out candidates who don’t use the exact phrasing.
Imhoff still goes through every résumé by hand, aware of rising frustration among job seekers who feel like they’re spamming résumés into the void. She knows hiring managers are losing faith, too.
“I just see posts all day long from people that are like, ‘I give up,’” Imhoff said.
Recently, Imhoff said she received 15 nearly identical résumés for a role she was trying to fill. Eerily similar intros, different names and locations. She later learned they’d used the same service, one that purports to specialize in formatting résumés to get past AI. The candidates were unaware that the firm was “writing all the résumés identically,” she said.
“I just feel bad for job seekers right now. The pain is just oozing nonstop in my network,” Imhoff said. “I’m hoping it’s going to get better, but it’s probably not.”