Code Generation — Real Examples

5 min read

Frontend

AI is great at React components. It's terrible at 'this should feel right when you click it.' You own the feel.

Backend

CRUD and REST? AI nails it. Idempotency, retries, and 'what happens at 3am?' — that's you.

Fullstack

AI can scaffold a full stack. Integration bugs, auth edge cases, and 'why is this slow?' — human territory.

TL;DR

  • AI excels at boilerplate: CRUD, API endpoints, standard components, config files.
  • AI struggles with edge cases, performance, and "what does the user actually need?"
  • The best use: start with AI, then review, refactor, and own the last mile.

Let's look at real patterns. No hype. No fear. Just what works and what doesn't.

Where AI Shines

CRUD and REST Endpoints

Prompt: "Create a FastAPI endpoint for users with GET, POST, PUT, DELETE."

What you get: Clean, conventional code. Pydantic models. Standard HTTP status codes. Often correct on the first try.

What's missing: Validation rules specific to your domain. Rate limiting. Audit logging. "What if the user sends 10MB of JSON?"
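To make "clean, conventional code" concrete, here is a minimal sketch of the kind of output such a prompt tends to produce. The in-memory store and the User fields are illustrative, not anything from a real codebase:

```python
# Sketch of typical AI-generated CRUD output: in-memory store,
# no auth, no rate limiting, no audit logging, no payload limits.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

class User(BaseModel):
    id: int
    name: str
    email: str

users: dict[int, User] = {}

@app.get("/users")
def list_users() -> list[User]:
    return list(users.values())

@app.post("/users", status_code=201)
def create_user(user: User) -> User:
    if user.id in users:
        raise HTTPException(status_code=409, detail="User already exists")
    users[user.id] = user
    return user

@app.put("/users/{user_id}")
def update_user(user_id: int, user: User) -> User:
    if user_id not in users:
        raise HTTPException(status_code=404, detail="User not found")
    users[user_id] = user
    return user

@app.delete("/users/{user_id}", status_code=204)
def delete_user(user_id: int) -> None:
    if user_id not in users:
        raise HTTPException(status_code=404, detail="User not found")
    del users[user_id]
```

Notice what's absent: no request size limit, no rate limiting, no audit trail, no domain-specific validation. That is the part you own.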

React Components

Prompt: "Create a table component with sorting and pagination."

What you get: A usable table, probably styled with Tailwind or MUI. Sorting logic that works for simple cases.

What's missing: Accessibility (keyboard nav, screen readers). Performance with 10K rows. Your design system's tokens. The "feel" that makes it your product.

Config and Boilerplate

Prompt: "Add a GitHub Actions workflow for CI."

What you get: A reasonable workflow. Lint, test, maybe deploy. Standard structure.

What's missing: Your org's caching strategy. Secrets handling. "What happens when the main branch is broken?"

Where AI Stumbles

Business Logic

Prompt: "Implement refund logic for our subscription system."

What you get: Something that looks right. Maybe 80% right.

What's missing: Proration rules. Tax handling. "What if they refund, then resubscribe, then ask for a credit?" AI doesn't know your business. It guesses.
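To see why this is guesswork, pin down just one piece a human has to decide: the proration rule. The sketch below is hypothetical and assumes calendar-day proration and refundable tax, both business decisions the prompt never states:

```python
from datetime import date
from decimal import Decimal, ROUND_HALF_UP

def prorated_refund(
    amount_paid: Decimal,   # pre-tax amount paid for the billing period
    period_start: date,
    period_end: date,
    cancel_date: date,
    tax_rate: Decimal,      # assumption: tax was charged on top and is refundable
) -> Decimal:
    """Refund the unused portion of a billing period.

    Calendar-day proration and refundable tax are assumptions here;
    your finance team, not the model, decides what they actually are.
    """
    total_days = (period_end - period_start).days
    unused_days = max((period_end - cancel_date).days, 0)
    base = amount_paid * Decimal(unused_days) / Decimal(total_days)
    refund = base * (1 + tax_rate)
    return refund.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
```

Even with that settled, the harder question stays open: refund, then resubscribe, then credit is a state machine, not a formula.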

Performance-Critical Code

Prompt: "Optimize this hot path."

What you get: Micro-optimizations. Maybe a cache. Sometimes nonsense.

What's missing: Profiling context. Your actual bottlenecks. "Is this even the problem?" AI can't run your app. It optimizes blindly.
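The fix is to hand the model the context it can't gather itself. A minimal profiling sketch using only the standard library; hot_path is a hypothetical stand-in for whatever you suspect is slow:

```python
import cProfile
import io
import pstats

def hot_path(items):
    # Hypothetical stand-in for the code you suspect is slow.
    return sorted(set(items))

profiler = cProfile.Profile()
profiler.enable()
hot_path(list(range(100_000)) * 3)
profiler.disable()

# Print the ten most expensive calls by cumulative time.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(10)
print(stream.getvalue())
```

Paste that output into the prompt and the suggestions improve, because now the model is reacting to your bottleneck instead of a generic one.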

Integration and Glue Code

Prompt: "Connect our auth service to the new payment provider."

What you get: Skeleton code. Types. Maybe some error handling.

What's missing: Idempotency. Retry strategies. Webhook signing. "What happens when the other service is flaky?" Real integration is messy. AI prefers tidy examples.
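One concrete example of the messy part: verifying a webhook signature before trusting the payload. A hedged sketch using HMAC-SHA256; the exact header format and secret handling vary by provider, so treat the details as assumptions:

```python
import hashlib
import hmac

def verify_webhook_signature(payload: bytes, signature_header: str, secret: str) -> bool:
    """Recompute the HMAC-SHA256 of the raw body and compare in constant time.

    Providers differ on hex vs. base64 encoding, timestamp prefixes, and
    header names -- check their docs rather than letting AI guess.
    """
    expected = hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_header)
```

Retries and idempotency keys follow the same pattern: small amounts of code, large amounts of judgment about what the other service actually does when it's flaky.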

The Pattern

  • High structure, low context: AI wins. APIs, components, config.
  • Low structure, high context: You win. Business rules, performance, integration.

Quick Check

Why does AI struggle with "Implement refund logic for our subscription system"? Because the prompt carries almost no business context. Proration rules, tax handling, and the refund-then-resubscribe-then-credit path live in your head and your docs, not in the model.

Without AI: you write the full CRUD API by hand: Pydantic models, endpoints, status codes, validation. Then you build the table component with sorting, pagination, and styling. Hours of boilerplate.

With AI: the same boilerplate lands in minutes, and your time goes to review, refactoring, and the last mile only you can own.

Do This Next

  1. Run one real prompt. Pick a small task (a utility function, a component) and generate it. Then review line by line. What would you change? That's your AI literacy.
  2. Document one "AI got it wrong" example. Keep a note. Patterns emerge over time, and you'll learn when to trust and when to verify.