Part 1 of 3: Building Tech Teams in the Era of AI
Last year, we opened a senior backend role at my company. Within a week, 340 applications. Beautiful CVs, every single one. Structured perfectly, keywords in all the right places, achievements quantified down to the decimal. I remember thinking: when did every backend developer on Earth become a copywriter?
They didn’t, obviously. About 70% of those applications were written by ChatGPT or something similar. We measured it: ran a batch through a couple of AI text detection tools, cross-referenced with how candidates actually spoke on calls. The gap was… entertaining. Someone whose CV read like a Harvard case study could barely explain what they’d built last quarter.
This isn’t just my experience. Industry surveys put the number at 68% of resumes now being AI-generated, and that was measured a year ago. Right now? It’s trending toward all of them. The few candidates still writing by hand are, ironically, the ones whose CVs look “worse” by comparison — less polished, slightly messy, but infinitely more honest.
The funnel that used to work doesn’t anymore
Here’s what the hiring process looked like for most of us until recently. CV comes in, you skim it, maybe check the cover letter, shortlist the good ones. Then a phone screen, a take-home coding challenge, a live coding session, maybe a system design round, a culture fit conversation. Four to seven steps, two to four weeks, and at the end you had a pretty decent sense of who this person was.
That whole machine is now broken at almost every stage. And I don’t think people fully appreciate how broken.
Start with the top of the funnel. Applications per opening have roughly tripled: from around 80 for a fairly common profile like MLOps a few years ago to over 220 today for the same role. Not more engineers, just better automation on the candidate side. One developer I spoke to casually mentioned he’d applied to 600 positions in a month using nothing but OpenClaw, the JobSpy Python library, and Playwright. Six hundred. He wasn’t even particularly looking, just “keeping options open.”
So you’ve got 3x the volume with significantly worse signal quality per application. If you’re a startup founder reviewing these yourself, or you’ve got one recruiter doing it, the math simply doesn’t work anymore. And if your first instinct is to outsource this to a recruitment agency, I’d think twice. Most agencies built their entire operation around screening CVs and managing multi-round pipelines, especially the ones running a typical ATS with traditional tracking and sourcing channels. That model is exactly what broke. They’re struggling as much as you are, they have no advantage in adoption speed, and they just charge you for the privilege.
Take-home assignments are dead. Live coding is on life support.
Remember take-home coding challenges? “Here’s a problem, go build something over the weekend, send it back.” That died quietly about two years ago when every candidate started feeding assignments straight into Claude or GPT and returning picture-perfect solutions. Well, no surprises there.
But here’s what caught me off guard: live coding is dying too. There’s a tool called Interview Coder — and others like it — that runs as an invisible overlay on your desktop. It doesn’t show up when you share your screen. It listens to the interviewer’s questions, watches your code editor, and feeds you solutions in real time. The candidate looks like they’re thinking hard and typing thoughtfully. They’re actually just reading prompts from a ghost in the machine.
Some say around 80% of interviewers at Big Tech companies now suspect AI cheating during technical interviews. Call it paranoia, but to me it’s pattern recognition from people who run hundreds of these sessions. That uneasy feeling when a candidate gives a perfect answer that somehow doesn’t fit the rest of the picture… if you’ve felt it once, you’ll never forget it.
The big companies are trying to fight this with proprietary interview environments, virtual machines candidates must connect to, browser lockdowns. It’s an arms race they can’t win. Every defensive measure gets a workaround within weeks.
Startups, though? They stopped fighting AI and started hiring with it.
The new playbook: questionnaires, AI-assisted interviews, and paid test days
So what actually works? After a few years of experimentation, hiring across markets in Asia, the US, Armenia, India, and Pakistan, here’s the process we landed on for most tech profiles. It’s four steps, takes about a week, and it’s cheaper than what most people are doing now. Think of it as an AI adoption strategy applied to your own hiring: rather than resisting how candidates use AI, you build a process that makes their real skills visible regardless.
Step one: replace the CV with a questionnaire. Instead of asking for a resume that you know is AI-generated, send candidates a simple form. We use Typeform, but Google Forms works fine. Twenty to thirty binary or ternary questions — things like “Python proficiency: theoretical knowledge only / practical experience / expert level.” Takes about two minutes to complete. Quick clicks, no essays.
Why does this work? Partly because the auto-apply bots and scrapers haven’t caught up yet; they can’t navigate Typeform the way they spam job boards. Partly because the questions themselves, while simple, create enough of a self-filtering effect. Someone who marks themselves an “expert” in Kubernetes but “theoretical only” in Docker is telling you a story, whether they realize it or not.
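To make the ranking concrete, here’s a minimal sketch of how you might score questionnaire responses and flag inconsistent self-assessments. The questions, weights, and the Kubernetes-implies-Docker rule are illustrative assumptions for this example, not our actual form.

```python
# Hypothetical scoring of ternary questionnaire answers.
# Questions, weights, and the consistency rule are illustrative only.

LEVELS = {"theoretical": 0, "practical": 1, "expert": 2}

# (question, weight) pairs; weights reflect how much the role cares.
QUESTIONS = {"python": 3, "docker": 2, "kubernetes": 2, "postgres": 1}

# Claiming expertise in the first while "theoretical only" in the second
# is a red flag: you rarely master Kubernetes without touching Docker.
IMPLIES = [("kubernetes", "docker")]

def score(answers: dict) -> tuple:
    """Return a weighted score plus any consistency flags."""
    total = sum(LEVELS[answers[q]] * w for q, w in QUESTIONS.items())
    flags = [
        f"{a} expert but {b} theoretical"
        for a, b in IMPLIES
        if answers[a] == "expert" and answers[b] == "theoretical"
    ]
    return total, flags

candidate = {
    "python": "expert", "docker": "theoretical",
    "kubernetes": "expert", "postgres": "practical",
}
total, flags = score(candidate)
print(total, flags)  # 11, plus a flag for the k8s-without-Docker story
```

Two minutes of clicks on the candidate side, and on your side the top of the funnel sorts itself by score while the flags tell you where to probe on the screening call.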
Will some people refuse to fill it out? Sure. But if a candidate won’t spend two minutes on a quick questionnaire, what does that tell you about working with them for the next two years? We treat it as a feature, not a bug.
Step two: a light screening call. The recruiter talks to the top-ranked candidates. Nothing fancy — just confirm the person exists, speaks the language you need, and seems generally coherent. Fifteen minutes, tops.
Step three: a short technical conversation with the team lead. Not a deep dive. No whiteboard algorithms. Just a senior engineer chatting about architecture approaches, recent framework versions, whatever’s relevant to the role. You’re not trying to trip anyone up. You just want to see if they can actually talk shop about their domain. If someone says they’re a Kubernetes expert, ask them about the last tricky deployment they debugged. You’ll know within five minutes whether that’s real.
Step four, and this is the big one: a paid test day. After the technical chat, the candidate comes into your team for an actual working day. Yes, even remotely. We’re fully remote, no office; our setup is a buddy system where someone sits with the new person on an open Zoom session. They get a real task from the backlog, they interact with colleagues, they ask questions, they ship something (or try to).
The math: a backend developer earning €80K annually costs you roughly €300 for a single test day. Compare that to the €5,000–6,000 an agency would charge, or the hidden cost of a bad hire surviving three months before you realize the mistake.
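The back-of-the-envelope math above works out as follows. The 260-working-day year and the €5,500 midpoint of the agency fee are my assumptions for this sketch; the €80K salary and €5,000–6,000 agency range come from the text.

```python
# Back-of-the-envelope: one paid test day vs. an agency placement fee.

annual_salary = 80_000      # EUR, senior backend developer
working_days = 52 * 5       # ~260 working days/year (assumed approximation)
test_day_cost = annual_salary / working_days

agency_fee = 5_500          # EUR, assumed midpoint of the 5,000-6,000 range

print(f"test day: ~€{test_day_cost:.0f}")   # ~€308, i.e. "roughly €300"
print(f"agency fee buys {agency_fee / test_day_cost:.0f} test days")
```

In other words, one agency placement fee pays for roughly eighteen paid test days. You could trial an entire shortlist for the price of one outsourced hire.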
The cost savings are nice. But what you learn in one day is the actual payoff. The candidate saw your codebase, your team dynamics, your actual working environment, not a sanitized interview version. Your team saw how this person thinks, communicates, asks for help, handles ambiguity. There’s no culture fit interview needed because culture fit just happened, live, for eight hours.
And here’s the part that surprises people: we let them use whatever AI tools they want during the test day. Cursor, Copilot, Claude — bring it all. Because that’s how they’ll work if you hire them. I don’t care if they can write a for loop from memory anymore. Can they solve a real problem with the tools that exist in 2026? Can they prompt well? Do they blindly accept AI output or review it critically? Do they know when to throw away what the AI gave them and think for themselves?
Around 40% of startups now encourage AI tool use during interviews, and I feel like that number is climbing fast. The companies still banning AI from their hiring process are, in my view, selecting for the wrong thing. They’re testing candidates on a version of the job that no longer exists!
What this might mean for you
If you’re building a startup and need to hire engineers right now, here’s the uncomfortable truth: most of what you learned about hiring, from courses, from advisors, from your own experience two years ago, is no longer reliable. There’s no established best practice yet. The industry is figuring this out in real time, and honestly it’s kind of a mess.
But the direction is clear: less time screening paper, more time watching people work. Fewer artificial hoops, more real-world signal. And definitely stop pretending that AI isn’t part of how your future employees think, code, and solve problems. Your AI adoption strategy starts with your hiring process; if you can’t embrace AI there, good luck embedding it anywhere else in your company.
The companies adapting fastest aren’t the big ones with their lockdown interview environments and seven-round gauntlets. It’s the scrappy 20-person startups that said “fine, everything changed, let’s change too.” Linear, for example, sends candidates straight to a paid trial after a light screening. No drama.
Next up — Part 2!
