The use of AI is quickly becoming ubiquitous, and it's no surprise that job candidates are tapping into it at every stage of the hiring process. From résumé writing to interview prep, AI is fundamentally reshaping how applicants present themselves. But this transformation isn't without consequences.
Applicant Tracking Systems (ATS), once a cornerstone of candidate filtering, are becoming less effective as AI-generated resumes easily pass keyword and formatting checks. At the same time, a crowded, competitive job market means recruiters often face hundreds, sometimes thousands, of applicants for a single position.
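To see why these filters are so easy to game, here is a minimal sketch of the kind of keyword screen many ATS tools apply. The keyword list and pass threshold are illustrative assumptions, not any specific vendor's logic:

```python
# Illustrative ATS-style keyword screen (assumed keywords and threshold,
# not any real vendor's implementation).
REQUIRED_KEYWORDS = {"python", "sql", "machine learning", "stakeholder", "agile"}
PASS_THRESHOLD = 4  # assumed cutoff for this sketch

def keyword_score(resume_text: str) -> int:
    """Count how many required terms appear in the resume."""
    text = resume_text.lower()
    return sum(1 for kw in REQUIRED_KEYWORDS if kw in text)

def passes_screen(resume_text: str) -> bool:
    return keyword_score(resume_text) >= PASS_THRESHOLD

if __name__ == "__main__":
    ai_tailored = ("Agile machine learning engineer with Python and SQL, "
                   "experienced in stakeholder management.")
    honest_but_unoptimized = ("Built data pipelines and predictive models; "
                              "worked closely with business teams.")
    print(passes_screen(ai_tailored))             # True
    print(passes_screen(honest_but_unoptimized))  # False
```

The point of the sketch: once a model has the job description, mirroring this kind of filter's vocabulary is trivial, so the filter stops measuring anything meaningful about the candidate.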
To cope, many recruiters rely on skills assessments to screen candidates. But that strategy is also being undermined by AI. As models become more powerful, what may be considered an effective test today can quickly become obsolete within weeks. Recruiters are forced into a game of cat and mouse—constantly redesigning assessments that might briefly outpace AI’s capabilities.
Another layer of complexity: access to AI is not equal. Candidates who can afford premium tools like ChatGPT Pro ($200/month) or Claude Opus have an unfair advantage, because these tools can complete tests far more effectively than free-tier models. In effect, the hiring process tilts toward those with financial privilege: if your parents can pay for elite AI, you're suddenly a stronger applicant.
This dynamic has created what recruiters increasingly refer to as an "AI arms race" among candidates. As more applicants leverage AI to enhance their applications, those who don't use these tools appear significantly less qualified by comparison—even when they possess superior actual skills. This competitive pressure forces even hesitant or ethically concerned candidates to adopt AI assistance simply to remain viable in the candidate pool. This escalating cycle normalizes AI dependence and further obscures genuine talent assessment, creating a classic prisoner's dilemma where individual rational choices lead to a collectively problematic outcome.
Even live interviews are no longer the gold standard. With the rise of interview co-pilots, tools that feed candidates real-time responses over Zoom, recruiters can no longer be certain whether they're speaking with a candidate or a candidate-plus-AI hybrid. These tools can speak convincingly about complex projects the candidate may never have worked on. This challenge has prompted some companies to take a hard stance. Even Anthropic, the AI research company behind the Claude models, has banned candidates from using AI during the application process. To many, this feels deeply ironic: why would an AI company prohibit the use of AI?
But the irony disappears when you consider the deeper challenges. A common argument in favor of allowing AI is that employees already use it on the job—so why not let candidates use it too? Denying AI during the hiring process, some say, is a false constraint that fails to reflect how people actually work.
That may sound reasonable in theory, but it overlooks the practical difficulties of designing effective, AI-resistant assessments. It’s easy to blame hiring managers for not being creative enough—but the reality is far more complicated.
Ideally, a recruiter would want to hire a professional who can guide AI—especially in scenarios where AI falls short. AI tends to fail when it lacks context, when information is siloed across teams, or when decisions rely on undocumented institutional knowledge or nuanced domain expertise. These are exactly the kinds of gaps humans fill by asking the right questions, drawing from experience, and applying judgment in ambiguous situations. In a post-AI world, the focus increasingly shifts from “how” (which AI can often handle) to “why” and “what.”
However, the more ambiguity a test includes to assess this kind of thinking, the more likely candidates are to drop out—often due to unfamiliarity or discomfort. On the flip side, the more a test is simplified for objectivity or scalability, the easier it becomes for AI to solve. That’s why traditional approaches—like data structures and algorithms problems, or platforms like HackerRank and LeetCode—have long served as the go-to for candidate evaluation. But those methods only worked when AI wasn’t capable of solving them easily.
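For a sense of how low that bar now is, here is the kind of short, objective exercise (a standard "two sum" style question, used here purely as an illustration) that once filtered out weaker candidates and that current models produce in seconds:

```python
def two_sum(nums: list[int], target: int) -> tuple[int, int] | None:
    """Return indices of two numbers that add up to target, or None.

    The classic screening question: one pass with a hash map, O(n) time.
    """
    seen = {}  # value -> index where we first saw it
    for i, value in enumerate(nums):
        complement = target - value
        if complement in seen:
            return seen[complement], i
        seen[value] = i
    return None

print(two_sum([2, 7, 11, 15], 9))  # (0, 1)
```

Problems like this once took real preparation; today a frontier model produces this solution, complexity analysis included, on the first attempt.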
That era is over.
And so, this new wave of evaluation processes banning AI use during hiring begins to make sense. The goal isn’t to resist progress, nor are hiring managers being short-sighted or pedantic—like insisting on using log tables in an era of ubiquitous calculators. The intent is to test whether candidates possess fundamental skills without AI assistance. After all, guiding AI through complex scenarios requires a solid grasp of the basics. If a candidate can’t solve simple problems on their own, how can they be expected to troubleshoot or direct AI effectively when it inevitably runs into limitations?
Succeeding at basic tasks without AI now seems like a necessary condition. And in the absence of a clear sufficient condition to evaluate deeper competency, many companies are turning to this necessary condition as a pragmatic compromise.
Some pioneering companies are exploring technological countermeasures to restore assessment integrity. Machine-proctored testing platforms that record candidates' screens and video, and apply AI-detection algorithms, show promise for the initial evaluation round. After that filter, some companies test AI-human collaboration directly, designing exercises that explicitly require candidates to demonstrate how they would guide AI through complex scenarios with incomplete information.
But we know this space is evolving fast. How are you handling it? Have you encountered cases where candidates used AI to misrepresent their skills—or seen clever ways they’ve cheated the system? How are you adapting your evaluation process to stay ahead of these shifts? Share your experiences and strategies—we’re all figuring this out together.
#RecruitmentTrends #AIinHiring #TalentAcquisition #FutureofWork #MAdAILab #AI