
Hiring in 2026 — What to Look For

5 min read
Eng Manager

Stop testing trivia. Start testing judgment, taste, and ability to direct AI.



TL;DR

  • AI can write code. It can't make good trade-offs, read a room, or know when to push back.
  • Prioritize: judgment, taste, communication, and "can they direct AI?" over algorithm puzzles.
  • Update your interview loop. LeetCode alone selects for the wrong thing now.

The old hiring playbook assumed that if someone can pass a coding interview, they can learn the rest. That was never fully true. Now it's actively misleading. AI can pass many coding screens. The person who reviews that code, decides what to build, and collaborates with your team — that's who you're actually hiring.

What to Prioritize

1. Judgment and trade-offs

  • "Given X constraints, how would you approach Y?" No single right answer. You want to see how they think.
  • "AI suggested this design. What's wrong with it? What would you change?"
  • Red flags: Candidates who treat AI output as gospel. Or who dismiss it entirely without nuance.

2. Taste and critical eye

  • "Review this code/design/spec." Can they spot issues? Do they care about readability, edge cases, maintainability?
  • AI generates passable output. You need people who know when "passable" isn't good enough.

3. Communication and collaboration

  • Can they explain their reasoning? Do they ask clarifying questions?
  • AI-augmented teams ship faster when people can articulate what they want — and when they listen.

4. AI fluency (not obsession)

  • Have they used Cursor, Copilot, or similar? Can they describe a time AI helped and a time it didn't?
  • You're not hiring prompt engineers. You're hiring people who can use AI as a tool. Basic fluency is enough.

What to Deprioritize

  • Algorithm and data structure trivia. AI can solve those. If you keep them, make them open-book or allow AI. Test reasoning, not memorization.
  • Years of experience in a specific stack. Stacks rotate; judgment doesn't.
  • "Culture fit" that means "like us." Diverse perspectives matter more when AI homogenizes output. Hire for additive fit, not carbon copies.

Updating Your Interview Loop

  • System design: Still valuable. Add: "How would you use AI in this design? Where would you not use it?"
  • Behavioral: More weight. "Tell me about a time you had to push back on a technical decision." "How do you handle ambiguity?"
  • Work sample: Give a real task. Let them use AI. Evaluate the process and outcome, not whether they used help.
  • Pair programming: Watch them work. Do they ask questions? Do they verify AI output? Do they get stuck and recover?


Quick Check

What remains human when AI automates more of this role?

Do This Next

  1. Audit your current interview rubric — Which questions could a candidate ace with AI but no real skill? Which would expose that gap? Adjust accordingly.
  2. Add one AI-aware question — e.g., "Walk me through how you'd use AI to build a small feature. Where would you double-check?"
  3. Train interviewers — Ensure they're evaluating judgment and collaboration, not trivia. Calibrate with a few practice runs.