
Claude and ChatGPT for Engineering


Software Architecture

Use them to stress-test designs: "What are the failure modes of this architecture?" Then refine.

ML Engineering

Good for paper summaries, framework comparisons, and debugging PyTorch errors. Verify API versions before you trust a fix.


TL;DR

  • Claude and ChatGPT are general-purpose LLMs. They excel when you give them structure and context.
  • Use them for: explanations, debugging, design review, drafting docs, and research.
  • They don't know your codebase. Paste what's relevant. Be explicit.

You don't need an IDE to get value from AI. Browser-based Claude and ChatGPT handle a huge chunk of engineering work: understanding errors, exploring options, drafting content, and sanity-checking ideas.

When to Reach for the Browser

  • Debugging — Paste stack trace + code. "What's wrong?"
  • Learning — "Explain X like I'm a senior engineer" or "What's the difference between Y and Z?"
  • Design review — "Here's my approach. What am I missing? What are the risks?"
  • Documentation — Draft READMEs, API docs, runbooks.
  • Research — "What are the best practices for X in 2025?" (Verify; models can be outdated.)

They're slower than IDE tools for in-context edits. Use them when you need reasoning, explanation, or when you're not in the code.

Prompting Patterns for Technical Work

1. The Debug Pattern "Language/framework: [X]. Error: [paste]. Code: [paste]. What's the cause and fix? Show the corrected code."

Add: "We're on version Y" if it matters. Old answers can reference deprecated APIs.
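If you use the Debug Pattern often, it's worth templating so you never forget a field. A minimal sketch; the `debug_prompt` helper and its argument names are my own, not from any SDK:

```python
def debug_prompt(language: str, error: str, code: str, version: str = "") -> str:
    """Assemble a structured debug prompt with all the context the model needs."""
    parts = [
        f"Language/framework: {language}.",
        f"Error:\n{error}",
        f"Code:\n{code}",
        "What's the cause and fix? Show the corrected code.",
    ]
    if version:
        # Pin the version so the answer doesn't target a deprecated API.
        parts.insert(1, f"We're on version {version}.")
    return "\n\n".join(parts)

prompt = debug_prompt(
    "Python/FastAPI",
    "TypeError: 'NoneType' object is not subscriptable",
    "user = get_user(uid)\nname = user['name']",
    version="0.110",
)
```

Paste the result as a single message; the fixed field order makes answers more consistent across sessions.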

2. The Explain Pattern "Explain [concept] in 2 paragraphs. Assume I know [related concepts]. Don't assume I know [concepts you're learning]."

3. The Review Pattern "Here's my [design/code/approach]. List: (a) what looks good, (b) potential issues, (c) alternatives I might have missed. Be critical."

4. The Compare Pattern "When would you use X vs Y? We're building [brief context]. What are the trade-offs?"

5. The Draft Pattern "Draft a [doc type] for [audience]. Include: [bullet list]. Tone: [professional/casual/technical]. Length: [approx]."

Claude vs. ChatGPT (2026)

Both are capable. Differences are nuanced:

  • Claude: Often stronger on long context, coding, and nuanced reasoning. Good at "think step by step" tasks.
  • ChatGPT: Strong ecosystem (plugins, integrations), good at structured output. GPT-4/5 models are competitive on code.

Try both for your use case. Preference is personal. What matters more is how you prompt.

Limitations to Remember

  1. No live access to your repo, docs, or tickets. You supply context.
  2. Training cutoff — they don't know the last few months of releases. Verify API versions and "latest" claims.
  3. Confident errors — they'll sound sure when wrong. Cross-check critical stuff.
  4. No secrets — never paste API keys, tokens, or internal systems. Assume it's logged.
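The no-secrets rule is easier to follow if you scrub pasted text mechanically rather than by eye. A rough sketch; the regexes below are illustrative examples of common key shapes, not an exhaustive scanner:

```python
import re

# Illustrative patterns only; real secret scanners cover many more formats.
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),  # OpenAI-style API keys
    re.compile(r"(?i)(api[_-]?key|token|password)\s*[=:]\s*\S+"),
]

def scrub(text: str) -> str:
    """Replace likely secrets with a placeholder before sharing text with an LLM."""
    for pat in SECRET_PATTERNS:
        text = pat.sub("[REDACTED]", text)
    return text
```

Run logs and config through `scrub` before pasting; treat a match as a prompt to double-check the whole snippet, not as a guarantee.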

File Upload and Long Context

Both support file uploads (PDFs, code, logs). Use them. "Here's our architecture doc. Given this, how would you add a caching layer?"

Long context (100K+ tokens) means you can paste a lot. But more context = slower + sometimes worse focus. Paste what's relevant, not everything.
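One way to practice "paste what's relevant": trim long logs to their tail before pasting, since the useful part of a stack trace is usually at the end. A minimal sketch, with the 50-line default as an arbitrary starting point:

```python
def tail_lines(log: str, n: int = 50) -> str:
    """Keep only the last n lines of a log; the newest entries usually matter most."""
    lines = log.splitlines()
    if len(lines) <= n:
        return log
    # Note how much was cut so the model knows the log is truncated.
    return f"[... {len(lines) - n} earlier lines trimmed ...]\n" + "\n".join(lines[-n:])
```

For stack traces specifically, keep the final frames plus the error message; for server logs, grab the window around the first error timestamp instead.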

The anti-pattern looks like this: you paste a stack trace with just "What's wrong?" The AI gives a generic answer. You try it; it doesn't work. You paste more context. Still wrong. Thirty minutes of back-and-forth. The structured Debug Pattern above avoids this by front-loading the language, error, code, and version in one message.

Quick Check

You need to sanity-check your architecture design before a review. What's the best use of Claude or ChatGPT?

Do This Next

  1. Debug one real error this week using Claude or ChatGPT. Paste stack trace + code. Compare the suggestion to what you'd have done.
  2. Use the Review pattern on a design or piece of code. See if it surfaces something you missed.