AI killed the hiring process. Here’s how to fix it.

Job search platforms and employers both broke recruiting with automation. The candidates paying the price are exactly the ones organizations need most.

By Mike Phillips

Somewhere between the resume screener, the automated ranking algorithm, the AI interview bot, and the applicant tracking system that files your materials into a folder no human will ever open, the hiring process stopped being about finding the right person. It became about filtering for the most algorithmically legible one.

I’ve been in the job market recently. I won’t dwell on the particulars — that’s not what this column is for — but I’ve had a front-row seat to what the AI-optimized hiring pipeline actually looks like from the candidate side. What I’ve seen is a system that has automated away the one thing that made hiring work: human judgment about potential.

This is a failure on two fronts. The platforms that built these tools optimized for the wrong outcomes. The employers that deployed them abdicated a core organizational responsibility. Both are paying for it in ways they haven’t fully accounted for yet.


Start with the platforms. LinkedIn, Indeed, ZipRecruiter, and their competitors built AI screening and matching tools that were sold as efficiency plays — reduce time-to-hire, surface the best candidates faster, take the administrative burden off HR. The pitch was reasonable. The execution created a matching problem dressed up as a matching solution.

These tools were trained on historical hiring data. That data reflects who got hired before — which means it encodes every bias, credential preference, and pattern-matching shortcut that human recruiters already had. An algorithm trained on past hires at a company that historically hired from five universities will keep surfacing candidates from those five universities. It doesn’t know it’s doing this. It’s just optimizing for the pattern.

The algorithm doesn’t find the best candidate. It finds the most familiar one.
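The mechanism is simple enough to show in a toy sketch. This is not any vendor's actual model, just an illustration of the pattern-matching described above: a scorer "trained" on historical hires learns nothing but how often each school appears among past hires, then ranks new applicants by that familiarity. All names and data are invented.

```python
# Toy illustration of bias encoded in historical hiring data.
# A "trained" scorer is just a frequency table of past hires' schools.
from collections import Counter

def train_scorer(past_hires):
    """Learn a school -> frequency score from historical hires."""
    counts = Counter(h["school"] for h in past_hires)
    total = sum(counts.values())
    return {school: n / total for school, n in counts.items()}

def rank(applicants, scores):
    """Rank applicants by familiarity; schools never hired from score zero."""
    return sorted(applicants, key=lambda a: scores.get(a["school"], 0.0), reverse=True)

# Historical data skewed toward two schools...
past = [{"school": "Alpha U"}] * 8 + [{"school": "Beta U"}] * 7 + [{"school": "Gamma U"}] * 1
scores = train_scorer(past)

pool = [{"name": "A", "school": "Alpha U"},
        {"name": "B", "school": "Delta U"},   # never hired from before
        {"name": "C", "school": "Beta U"}]
print([a["name"] for a in rank(pool, scores)])  # → ['A', 'C', 'B']
```

The Delta U candidate lands last regardless of ability, because the only signal the model has is the past. Nothing in the code "decides" to exclude anyone; the exclusion is a property of the training data.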

The platforms are also optimized for engagement over outcomes. LinkedIn’s entire business model depends on activity — posts, connections, endorsements, profile completeness scores — none of which reliably predict job performance. A candidate who games the platform well looks better to the algorithm than one who simply does excellent work and doesn’t perform for an audience. The platform can’t tell the difference. It was never designed to.

The AI interview bot is where this reaches its logical absurdity. Candidates now routinely complete first-round interviews with software that analyzes word choice, facial expression, tone, and response structure to generate a hirability score. The research on whether any of these proxies actually predict job performance is thin at best. What they reliably measure is comfort with being interviewed by a camera — a skill that has essentially no bearing on whether someone can do the job.


The employer failure is distinct but related. HR departments deployed these tools not because they were proven to produce better hires, but because they reduced visible workload. Processing five hundred applications manually is genuinely hard. Letting software do it is genuinely easier. The problem is that the accountability for a bad hire got diffused into the system. When a human recruiter passes on a great candidate, that’s a recoverable mistake with a traceable decision. When an algorithm filters them out before any human sees their name, it never registers as a mistake at all.

The candidates disproportionately filtered out are exactly the ones organizations claim they want. Career changers. People with nonlinear paths. Experienced professionals who took time away from the workforce — for illness, for caregiving, for circumstances outside their control. Veterans transitioning out of service. People whose resumes don’t parse cleanly into the keyword fields the ATS was built around. The algorithm reads a gap and downgrades. It reads an unconventional title and can’t map it. It reads a varied career and scores it as unfocused rather than versatile.

These aren’t marginal candidates. They’re often the most resilient, most adaptable, most operationally experienced people in the applicant pool. They just don’t look right to the machine.


So what should organizations do differently? A few concrete recommendations from someone who has watched this from both sides of the table.

01 — Audit what your tools are actually filtering
Pull a sample of candidates your ATS screened out in the last six months and have a human review them. If you’re consistently eliminating candidates with nonlinear paths, employment gaps, or unconventional titles, your filter is miscalibrated. The tool won’t tell you this. You have to look.
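For teams that want to operationalize this, the audit can be sketched in a few lines. The field names below (`rejected_at_screen`, `has_employment_gap`) are assumptions for illustration, not any real ATS export format; the point is the shape of the check, comparing screen-out rates across groups and pulling a random sample for human review.

```python
# Hypothetical audit sketch: compare screen-out rates for candidates
# with employment gaps vs. without, then sample rejections for review.
import random

def screen_out_rate(candidates, predicate):
    """Fraction of candidates matching `predicate` who were screened out."""
    group = [c for c in candidates if predicate(c)]
    if not group:
        return 0.0
    return sum(c["rejected_at_screen"] for c in group) / len(group)

def audit_sample(rejections, k=50, seed=0):
    """Draw a fixed-size random sample of screened-out candidates for human review."""
    rng = random.Random(seed)
    return rng.sample(rejections, min(k, len(rejections)))

# Invented numbers to show the comparison:
candidates = (
    [{"has_employment_gap": True, "rejected_at_screen": True}] * 40
    + [{"has_employment_gap": True, "rejected_at_screen": False}] * 10
    + [{"has_employment_gap": False, "rejected_at_screen": True}] * 30
    + [{"has_employment_gap": False, "rejected_at_screen": False}] * 70
)

gap_rate = screen_out_rate(candidates, lambda c: c["has_employment_gap"])
no_gap_rate = screen_out_rate(candidates, lambda c: not c["has_employment_gap"])
print(f"gap: {gap_rate:.0%}, no gap: {no_gap_rate:.0%}")  # → gap: 80%, no gap: 30%
```

A gap that large between the two rates is the miscalibration signal; the human review of the sample tells you what the filter is actually reacting to.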
02 — Separate screening from evaluation
AI tools can reasonably handle administrative screening — confirming basic qualifications, flagging incomplete applications, organizing materials. They should not be evaluating fit, potential, or hirability. That judgment requires context the algorithm doesn’t have and can’t develop. Keep humans in the evaluation loop from the first substantive assessment forward.
03 — Kill the AI interview for experienced roles
For entry-level, high-volume positions, an automated first screen may be defensible. For any role requiring judgment, experience, or leadership, it’s counterproductive. The candidates most qualified for those roles are also the most likely to disengage when asked to perform for a bot. You’re selecting against the people who know their own worth.
04 — Rewrite job descriptions for humans, not algorithms
Most job descriptions are written to satisfy the ATS, not to attract the right candidate. They’re keyword-dense, credential-heavy, and light on what the role actually requires. A clear, honest job description written for a human reader will surface better candidates than one optimized for machine parsing — and it signals something about your organization’s culture that good candidates are actively looking for.
05 — Measure hiring outcomes, not hiring efficiency
Time-to-hire and cost-per-hire are easy to measure, so they became the metrics that matter. Quality-of-hire is harder to measure, so it didn’t. If you’re optimizing a process without measuring whether it produces good hires, you’re optimizing for the wrong thing. Build a feedback loop between hiring decisions and performance outcomes, and let that data inform how you use your tools — not the other way around.
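The feedback loop in recommendation 05 can be sketched minimally: join hiring-decision records to later performance reviews and compare outcomes by sourcing channel. The channel names and the review scale below are invented for illustration; any real implementation would use the organization's own records.

```python
# Hedged sketch of a quality-of-hire feedback loop: average later
# performance-review scores per hiring channel. Data is invented.
from statistics import mean
from collections import defaultdict

def quality_by_channel(hires, reviews):
    """Average review score per sourcing channel, skipping hires not yet reviewed."""
    score = {r["hire_id"]: r["score"] for r in reviews}
    by_channel = defaultdict(list)
    for h in hires:
        if h["hire_id"] in score:
            by_channel[h["channel"]].append(score[h["hire_id"]])
    return {ch: round(mean(vals), 2) for ch, vals in by_channel.items()}

hires = [{"hire_id": 1, "channel": "ats_auto"},
         {"hire_id": 2, "channel": "ats_auto"},
         {"hire_id": 3, "channel": "referral"},
         {"hire_id": 4, "channel": "referral"}]
reviews = [{"hire_id": 1, "score": 3}, {"hire_id": 2, "score": 2},
           {"hire_id": 3, "score": 4}, {"hire_id": 4, "score": 5}]
print(quality_by_channel(hires, reviews))  # → {'ats_auto': 2.5, 'referral': 4.5}
```

The output is the metric time-to-hire can't give you: which parts of the pipeline are producing hires who actually succeed. That number, not processing speed, is what should tune the tools.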

The AI hiring problem isn’t really an AI problem. It’s an accountability problem. Organizations outsourced a judgment call to a system that can’t be held accountable for getting it wrong, and then stopped checking whether it was getting it right. That’s a management failure, not a technology failure — and it has a management solution.

The candidates on the other end of these systems are still there, still applying, still getting filtered out by machines that don’t know what they’re missing. At some point, the organizations running these processes will notice what they’re not finding. The question is how much talent they burn through before they do.
