Understanding Business Context
Tech Lead
AI doesn't know that the VP of Product hates microservices. You do. That's why your design doc gets approved and the generic one doesn't.
Solutions Eng
AI can draft a proposal. It doesn't know this prospect is in a 6-month evaluation and needs a very specific POC. You do.
Eng Manager
AI can't navigate 'Engineering wants X, Product wants Y, and we have 3 months.' You're the translator.
TL;DR
- "AI is great at syntax, mediocre at semantics, and really bad at business context." Enterprise value hides in the seams—how you define "active customer," discount codes for Tuesdays, SKU names changed after acquisition, why "revenue" means different things to finance vs. sales.
- Spider 2.0 benchmarks: models peak at ~59% exact-match accuracy on text-to-SQL across realistic enterprise databases, and accuracy drops to ~40% once queries require transformations or added complexity. The messier and more business-specific the data, the more AI struggles.
- 95% of organizations see zero ROI from GenAI despite $30–40B invested. Core barrier: lack of learning and adaptation—systems don't retain feedback, adjust to context, or improve over time. The "almost-right" tax: developers spend time debugging and fact-checking because the model doesn't understand their specifics.
AI is trained on public data. Your company's internal context—who reports to whom, which projects are sacred, what failed last year, what "active customer" means in your CRM—is invisible to it. Business logic lives in Jira tickets, PowerPoints, and institutional knowledge. Database schemas are artifacts of past decisions: renamed fields, unclear terminology, definitions drifting with each reorg. AI may recognize a prescription drug but cannot tell whether it's prescribed for cancer treatment or morning sickness, a distinction that's critical for insurance underwriting. That invisible context is your moat.
What AI Doesn't Know
Org Structure and Ownership
- "Who decides this?" — Product? Engineering? A steering committee? AI doesn't know. It'll give you a generic "typically Product owns..." You know who actually owns it at your company.
- "Who do I need to align?" — Stakeholders, dependencies, approval chains. Human map.
- "Why did we choose vendor X?" — Politics. Past failures. Budget. AI wasn't in the room.
History and Precedent
- "We tried that in 2022. It failed." — AI has no memory of your experiments. It might suggest the same thing again. You're the institutional memory.
- "That team owns that system. Don't touch it." — Boundaries. AI doesn't know your org's boundaries. You do.
- "The board cares about X this quarter." — Priorities shift. AI gives generic priorities. Yours are time-bound and political.
The Enterprise Knowledge Gap
- "Active customer" — Your definition vs. sales' definition vs. finance's. AI doesn't know.
- "Revenue" — Recognized? Booked? Forecast? Different departments, different meanings. AI retrieves by syntax. Your schema is an artifact of history. AI can't parse it.
- SKU names changed after acquisition. Discount codes for Tuesdays. "That column is nullable because..." — Business logic. Not on the public web. Not machine-readable. You know. AI guesses. The trust gap: not whether AI can spit out code, but whether you can trust it on your data and your rules.
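Here's a minimal sketch of why this bites. The schema, dates, and both definitions below are hypothetical, but the pattern is real: two departments count "active customers," get the same total, and mean different people. An AI that silently picks either definition is "almost right"—which is the expensive kind of wrong.

```python
from datetime import date, timedelta

# Hypothetical customer records -- schema and definitions are illustrative,
# not from any real system.
customers = [
    {"id": 1, "last_order": date(2024, 11, 2), "subscription": "active"},
    {"id": 2, "last_order": date(2024, 3, 15), "subscription": "active"},
    {"id": 3, "last_order": date(2024, 12, 1), "subscription": "cancelled"},
]

TODAY = date(2024, 12, 10)

def active_by_sales(c):
    # Sales: anyone with a live subscription counts as active.
    return c["subscription"] == "active"

def active_by_finance(c):
    # Finance: ordered within the last 90 days, regardless of subscription
    # status (revenue recognition drives this definition).
    return (TODAY - c["last_order"]) <= timedelta(days=90)

print(sum(map(active_by_sales, customers)))    # 2 (customers 1 and 2)
print(sum(map(active_by_finance, customers)))  # 2 (customers 1 and 3)
```

Same count, different customers. Nothing in the table tells the model which definition your question intends—that knowledge lives with you.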
Politics and Relationships
- "Don't propose that—it'll piss off the CFO." — Relationship dynamics. AI can't navigate them.
- "Engineering and Product are in a cold war over this." — You know how to broker. AI suggests a "rational" solution that ignores the human layer.
- "We're merging with Company B. Everything is in flux." — Turbulence. AI assumes stability. You live in the chaos.
Why This Matters
Every decision AI "helps" with is made in a vacuum. The best technical answer might be wrong for your context because:
- The team doesn't have the skills.
- The timeline is driven by an external event (earnings, a customer, a merger).
- Someone powerful will veto it.
- You're deliberately not changing something because of downstream dependencies.
You're not just choosing the right answer. You're choosing the right answer that can actually happen. Solutions require better engineering around memory, grounding, governance, and feedback—not more parameters. RAG, layered memory, structured interfaces, approval flows. Until those exist, you're the memory, grounding, and feedback.
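What "grounding" can look like in practice, as a sketch: inject your company's definitions into the prompt before the model answers. Everything here is a placeholder—the `GLOSSARY` contents, the naive substring matching, and `call_llm()` all stand in for whatever glossary, retrieval, and model client you actually use.

```python
# Hypothetical glossary of company-specific definitions.
GLOSSARY = {
    "active customer": "live subscription AND an order in the last 90 days "
                       "(finance definition, agreed 2024-06)",
    "revenue": "recognized revenue, not bookings or forecast",
}

def grounded_prompt(question: str) -> str:
    # Attach any glossary terms that appear in the question.
    terms = [f"- {t}: {d}" for t, d in GLOSSARY.items() if t in question.lower()]
    context = "\n".join(terms) or "(no glossary terms matched)"
    return (
        "Use ONLY these company definitions; do not substitute generic ones:\n"
        f"{context}\n\nQuestion: {question}"
    )

def call_llm(prompt: str) -> str:
    # Placeholder for your model client.
    raise NotImplementedError

print(grounded_prompt("How many active customers did we have last quarter?"))
```

The point isn't this ten-line matcher—it's that the definitions have to be written down somewhere machine-readable before any grounding can happen. Today, that writing-down is your job.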
How to Use This as a Moat
- Annotate AI output with context. AI suggests a design. You add: "We're doing this because X constraint, and we're avoiding Y because of Z history." The annotation is the valuable part (see the sketch after this list).
- Own the stakeholder map. Document who cares about what. AI can't. You can. That doc is worth more than any AI-generated spec.
- Be the "why" layer. When AI gives you options, you add "given our context, we choose A because B." That's judgment. That's irreplaceable.
Quick Check
AI suggests a microservices architecture. It's technically sound. Why might it still be wrong for your company?
Quick Check
Spider 2.0 shows ~59% accuracy on enterprise text-to-SQL, ~40% with complexity. What does that tell you?
Do This Next
- Take one AI-generated recommendation (from any lesson). Add three "our company" constraints that would change it. That's your context layer.
- Write down one piece of institutional knowledge that would break an AI suggestion. "We can't do X because..." is your moat. Share it with your team.
- Today: add one annotation to an AI output explaining why your context changes the suggestion.