AI-Generated Vulnerabilities
AppSec
AI-written code has predictable failure modes. Learn them. Scan for them.
Security Arch
The attack surface is expanding. AI code is a new category to threat-model.
TL;DR
- AI-written code ships faster. It also has characteristic flaws: over-trusting input, missing validation, insecure defaults.
- AI tends to produce "works on the happy path" code. Security lives in the edge cases. That's where the bugs hide.
- Add AI-generated code to your threat model. Treat it as a new attack surface to scan and review.
More code from AI = more code to secure. And AI has recurring weak spots.
Common AI Code Weaknesses
- Input validation. AI often assumes input is valid: missing sanitization, length checks, type validation. The result: injection attacks, prompt injection, DoS via oversized payloads.
- Auth and access. AI adds "check if user is logged in" but misses "check if user can access this resource." IDOR and privilege escalation.
- Secrets and config. AI suggests env vars. Sometimes it hardcodes examples. Credentials in code, default passwords.
- Error handling. AI returns generic errors. Sometimes those errors leak internal paths, stack traces, or config. Info disclosure.
- Dependencies. AI suggests packages. Might be outdated, deprecated, or malicious. Supply chain risk.
- Edge cases. AI optimizes for "works." Null, empty, malformed—often under-tested. Crashes, unexpected behavior, vulns.
What to Do
1. Scan AI-generated code harder. Don't assume it's safe because it "looks right." Run SAST, SCA, and manual review on AI output. Treat it as untrusted until verified.
2. Establish an AI code review checklist. Input validation? Auth on every endpoint? Secrets externalized? Error handling safe? Add AI-specific items: "Did we override an insecure default?"
3. Train the team. Developers using AI need to know the failure modes. Quick doc: "Common AI code security pitfalls." Include examples and fixes.
4. Track patterns. When you find a vuln in AI code, log it. Build internal knowledge: "AI tends to do X wrong." Use that to refine scanning and review.
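Two of the checklist items (secrets externalized, no insecure defaults) can be enforced mechanically rather than left to review. A sketch, assuming the secret lives in an environment variable — `DB_PASSWORD` is a hypothetical name:

```python
import os

def get_db_password() -> str:
    # Externalized secret: read from the environment, never from source
    password = os.environ.get("DB_PASSWORD")
    # Fail closed: no hardcoded fallback, no insecure default
    if not password:
        raise RuntimeError("DB_PASSWORD is not set; refusing to start")
    return password
```

The fail-closed branch is the part AI-generated code tends to skip — a common pattern is a hardcoded fallback like `os.environ.get("DB_PASSWORD", "changeme")`, which is exactly the insecure default the checklist asks about.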
Human-written code gets the standard review checklist: you can reasonably assume the author thought about validation and edge cases. AI-generated code doesn't earn that assumption.
# AI output — checks "logged in" but not "can access THIS resource"
@require_auth
def get_user_data(user_id: str):
    return db.get_user(user_id)  # IDOR: any user can fetch any other user

# Secure version — resource-level authorization
@require_auth
def get_user_data(user_id: str):
    if current_user.id != user_id and not current_user.is_admin:
        raise Forbidden()
    return db.get_user(user_id)

Quick Check
What's the most common security flaw in AI-generated code?
Do This Next
- Audit one AI-generated feature in your codebase. Apply your security checklist. Document what you found. Share with the team.
- Create a one-pager "Security review checklist for AI-generated code." Include: input validation, auth, secrets, errors, deps. Use it in PR reviews.