Managing AI-Augmented Teams
Eng Manager
Your job isn't to police AI use. It's to set standards and remove blockers.
TL;DR
- Your reports are already using AI. Formalize it or watch shadow adoption breed inconsistent patterns and hidden risk.
- Set usage standards (what's allowed, what needs review) instead of banning or mandating.
- The real challenge: calibrating expectations. AI multiplies output — but not uniformly. Some folks 3x, some 1.2x. Manage the gap.
Your team is using Cursor. Or Copilot. Or ChatGPT to unstick a bug at 11pm. They're probably not telling you. That's fine — until two people ship conflicting patterns because AI suggested different approaches. Or someone pastes proprietary code into a public model. Now it's your problem.
Why Formalize (Not Ban) AI Use
Banning AI tools is pointless. It's like banning Google in 2005. People will use it. The question is: how do you make it safe and consistent?
Create a lightweight AI usage policy:
- What tools are approved? (Cursor, Copilot, internal sandbox — whatever you've vetted.)
- What must never go in? (Customer data, proprietary code, PII.)
- What needs human review? (AI-generated code in critical paths, prompts that touch production.)
- One-pager. Not a 20-page compliance doc.
Default to trust, verify where it matters. Most AI use is low-risk. A few areas (security-sensitive, customer-facing, regulatory) need extra scrutiny. Focus there.
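To make "verify where it matters" concrete: the "never goes in" rule can be enforced with a pre-flight scan on anything bound for an external model. Here's a minimal Python sketch; the patterns and hostnames are hypothetical placeholders, and a real deployment would lean on a proper secret scanner rather than a hand-rolled deny-list:

```python
import re

# Hypothetical deny-list for text bound for an external AI tool.
# Patterns and hostnames are illustrative; tune them to your own data.
FORBIDDEN_PATTERNS = {
    "email (possible PII)": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "AWS access key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "internal hostname": re.compile(r"\b[\w-]+\.internal\.example\.com\b"),
}

def check_prompt(text: str) -> list[str]:
    """Return the labels of any policy violations found in `text`."""
    return [label for label, pattern in FORBIDDEN_PATTERNS.items()
            if pattern.search(text)]

if __name__ == "__main__":
    prompt = "Debug this: user jane@acme.com gets a 500 from billing.internal.example.com"
    violations = check_prompt(prompt)
    if violations:
        print("Blocked before sending to the AI tool:", ", ".join(violations))
```

Wire something like this into a CLI wrapper or editor hook and the highest-risk part of the policy enforces itself, without anyone having to reread a doc.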
The Productivity Spread Problem
Here's the uncomfortable truth: AI doesn't level the playing field. Some engineers will 3x their throughput. Others will barely budge. The gap between top and bottom performers will widen.
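The arithmetic behind that gap is worth making explicit, because team-level and individual-level numbers tell different stories. A toy calculation with invented multipliers:

```python
# Illustrative numbers only: per-engineer throughput multipliers after AI adoption.
# Two heavy adopters, two light adopters, equal baseline output.
multipliers = [3.0, 3.0, 1.2, 1.2]

team_multiplier = sum(multipliers) / len(multipliers)
spread = max(multipliers) / min(multipliers)

print(f"Team output: {team_multiplier:.1f}x")   # 2.1x
print(f"Top vs bottom spread: {spread:.1f}x")   # 2.5x
```

The team more than doubles its output while the visible spread between individuals hits 2.5x. Both numbers are real: the first is the win, the second is what you have to manage.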
Your job isn't to make everyone equal. It's to:
- Measure output, not hours. If someone ships 2x more with AI, great. If someone ships the same, that's fine too — as long as quality holds.
- Avoid "AI shaming." Don't make laggards feel bad for not adopting fast. Some people learn slower. Some tasks don't benefit. Support, don't pressure.
- Redefine "senior" for the AI era. Senior used to mean "writes complex code." Now it often means "orchestrates AI, reviews critically, unblocks others." Adjust your leveling expectations.
Facilitating, Not Micromanaging
AI-augmented teams need different support:
- Prompt libraries and patterns. Share what works. "Here's how we prompt for API design." Not mandatory, just available (see the sketch after this list).
- Time to experiment. Let people try tools in low-risk contexts. Friday afternoon "AI pilot" sessions beat forced rollout.
- Clear escalation paths. When AI breaks something, who do you blame? Nobody. You iterate. Make that explicit so people don't hide mistakes.
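On the prompt-library point above: it needs no infrastructure. A checked-in module that anyone can import or copy from is enough. A minimal sketch, with hypothetical template names and wording:

```python
# Hypothetical shared prompt library: templates live in the repo,
# next to the code they help write. Names and text are illustrative.
PROMPTS = {
    "api_design": (
        "You are reviewing a REST API design. Endpoint: {endpoint}.\n"
        "List naming inconsistencies, missing error cases, and\n"
        "versioning concerns. Cite our convention doc where relevant."
    ),
    "test_gaps": (
        "Given this function:\n{code}\n"
        "List the edge cases a test suite should cover, most likely\n"
        "failure modes first."
    ),
}

def render(name: str, **kwargs: str) -> str:
    """Fill a named template; raises KeyError on an unknown prompt."""
    return PROMPTS[name].format(**kwargs)

# Usage: paste the result into whatever tool the engineer prefers.
print(render("api_design", endpoint="POST /v2/invoices"))
```

Keeping templates in version control means they get reviewed and improved like any other shared code, which is the whole point of "share what works."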
AI Disruption Risk for Engineering Management
Verdict: moderate. AI automates the routine work; strategy, judgment, and the human touch remain essential for those who own the outcomes.
Quick Check
What remains human when AI automates more of this role?
Do This Next
- Write a one-page AI usage policy — approved tools, forbidden inputs, review requirements. Circulate and revise with your team.
- Run a 15-minute retro — "What's one thing AI helped you do this week? One thing it screwed up?" No names, just patterns.
- Identify your slowest adopter — not to pressure them, but to understand blockers. Maybe they need different tools or training. Ask.