Accessibility Testing With AI
5 min read
Ux Eng
AI can automate checks. It can't replicate the experience of a blind user. Combine both.
TL;DR
- AI and automation can catch many a11y issues: missing labels, contrast, semantic structure. They can't catch everything.
- Use automated tools for regression and coverage. Use human testing (including assistive tech) for the rest.
- AI can suggest fixes. Verify them. Some fixes are wrong or incomplete.
Accessibility is non-negotiable. It's also time-consuming. AI-powered tools promise to speed up testing and fixing. They deliver — for a subset of issues. Your job: use automation where it helps, and know where it falls short.
What Automated and AI Tools Catch
Structural and semantic:
- Missing alt text, empty links, broken heading hierarchy, unlabeled form fields. Rule-based tools (axe, Lighthouse) and AI-assisted tools catch these well.
- Run them in CI. Fix what they find. Don't stop there.
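A minimal sketch of that CI hookup, assuming Playwright with @axe-core/playwright; the URL is a placeholder and the WCAG A/AA tag filter is one sensible default, not the only option:

```ts
// a11y.spec.ts: axe scan as a Playwright test, run on every PR.
import { test, expect } from '@playwright/test';
import AxeBuilder from '@axe-core/playwright';

test('page has no detectable WCAG A/AA violations', async ({ page }) => {
  await page.goto('https://example.com'); // placeholder URL
  const results = await new AxeBuilder({ page })
    .withTags(['wcag2a', 'wcag2aa']) // scope to WCAG 2.0 A/AA rules
    .analyze();
  expect(results.violations).toEqual([]); // any violation fails the build
});
```

An empty violations array means axe found nothing, not that the page is accessible. Hence "don't stop there."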
Color and contrast:
- WCAG contrast ratios. Automated tools can measure them. AI can suggest color adjustments.
- Verify. "Suggest a darker shade" might break your design system. Check in context.
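The ratio itself is simple enough to spot-check by hand. A sketch of the WCAG 2.x math, assuming 6-digit hex colors:

```ts
// Linearize one sRGB channel (0-255), per the WCAG 2.x luminance formula.
function channel(c: number): number {
  const s = c / 255;
  return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
}

// Relative luminance of a 6-digit hex color like "#767676".
function luminance(hex: string): number {
  const n = parseInt(hex.slice(1), 16);
  return (
    0.2126 * channel((n >> 16) & 255) +
    0.7152 * channel((n >> 8) & 255) +
    0.0722 * channel(n & 255)
  );
}

// Contrast ratio: lighter luminance over darker, each offset by 0.05.
function contrastRatio(a: string, b: string): number {
  const [hi, lo] = [luminance(a), luminance(b)].sort((x, y) => y - x);
  return (hi + 0.05) / (lo + 0.05);
}

console.log(contrastRatio('#767676', '#ffffff').toFixed(2)); // 4.54, just past the 4.5:1 AA bar
```

Useful for checking an AI-suggested shade before it lands in the design system.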
Keyboard and focus:
- Some tools can simulate keyboard nav and flag focus issues. Not all. And they can't tell you if the order makes sense to a user.
- Manual keyboard test. Always.
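Automation can still guard a tab order once a human has judged it sensible. A sketch with Playwright; the URL and element ids are assumptions, including that the email field is the first tabbable element on the page:

```ts
import { test, expect } from '@playwright/test';

// Catches regressions in an order a human already vetted; it cannot tell
// you whether that order makes sense. Selectors and URL are illustrative.
test('checkout keeps its expected tab order', async ({ page }) => {
  await page.goto('https://example.com/checkout'); // placeholder URL
  for (const selector of ['#email', '#card-number', '#submit']) {
    await page.keyboard.press('Tab');
    await expect(page.locator(selector)).toBeFocused();
  }
});
```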
What They Miss
Context and meaning:
- Alt text can be "present" but wrong. "Image of chart" vs. "Bar chart showing Q3 revenue up 15%." AI might suggest generic alt. You need descriptive alt. Human judgment.
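Automation can at least triage. A crude heuristic sketch that flags likely filler for human review; the pattern list is an assumption, not a standard:

```ts
// Flags alt text that is present but probably meaningless filler.
// alt="" is a deliberate decorative marker, so it is not flagged.
const GENERIC_ALT = /^(image|picture|photo|graphic|icon)(\s+of\b.*)?$/i;

function suspiciousAlts(doc: Document): HTMLImageElement[] {
  return [...doc.querySelectorAll('img')].filter((img) => {
    const alt = img.getAttribute('alt')?.trim();
    return !!alt && GENERIC_ALT.test(alt);
  });
}
```

It catches "Image of chart". Only a person can confirm that "Bar chart showing Q3 revenue up 15%" is the right replacement.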
Complex interactions:
- Modals, drag-and-drop, custom widgets — automation often fails here. Screen reader testing by a human (or with real assistive tech) is irreplaceable.
- Test with VoiceOver, NVDA, or similar. Regularly.
Cognitive accessibility:
- Clear language, predictable flow, nothing overwhelming. No tool measures this well. You do.
- Read your UI. Could someone with cognitive differences use it? Revise.
Real-world usage:
- Automation checks technical compliance. It doesn't replicate "can a blind user complete this task?" You need user testing. Preferably with people who use assistive tech.
The Workflow
- Automate the basics — axe, Lighthouse, or similar. Run on every PR. Fix blockers.
- AI-assisted fixes — When a tool flags an issue, AI can suggest a fix. Review before applying. Some suggestions are wrong.
- Manual keyboard test — Tab through. Can you do everything? Is focus order logical?
- Assistive tech test — At least for critical flows. Screen reader, magnification if relevant. A role-and-name check (sketched after this list) can hold the line between sessions.
- User testing — When possible, include people with disabilities. They'll find issues no tool can.
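Between human sessions, role-and-name queries can keep accessible names from silently regressing. They resolve against the accessibility tree, roughly what a screen reader announces. A sketch; the URL and names are placeholders:

```ts
import { test, expect } from '@playwright/test';

// getByRole queries the accessibility tree, approximating what assistive
// tech announces. It guards names between AT sessions; it replaces none.
test('critical controls keep their accessible names', async ({ page }) => {
  await page.goto('https://example.com/checkout'); // placeholder URL
  await expect(page.getByRole('heading', { name: 'Payment details' })).toBeVisible();
  await expect(page.getByRole('button', { name: 'Place order' })).toBeVisible();
});
```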
Integrating Into Your Process
- Shift left. Catch a11y issues in design and development, not QA. AI can help designers check contrast and structure early.
- Document exceptions. Sometimes you have a valid reason to break a rule. Document it. Don't let automation "fix" things that would make the experience worse. One way to encode an exception is sketched after this list.
- Training. Your team needs a11y literacy. AI can help with docs and examples. You own the standards.
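One way to encode a documented exception, assuming the @axe-core/playwright setup sketched earlier; the selector and rationale are illustrative:

```ts
import { test, expect } from '@playwright/test';
import AxeBuilder from '@axe-core/playwright';

test('scan passes with one documented exception', async ({ page }) => {
  await page.goto('https://example.com'); // placeholder URL
  const results = await new AxeBuilder({ page })
    // Documented exception: brand watermark intentionally fails contrast;
    // reviewed with the a11y owner, revisit next quarter.
    .exclude('#brand-watermark') // assumed selector
    .analyze();
  expect(results.violations).toEqual([]);
});
```

The exception lives next to the code that enforces it, so it gets re-read every time the scan changes.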
Quick Check
What remains human when AI automates more of this role?
Do This Next
- Run your main flow through axe and a screen reader — List what automation caught vs. what you found manually. Fill the gaps.
- Add a11y to your component checklist — Every new component: keyboard, labels, contrast. AI can help verify. You own the bar.
- Schedule one assistive tech session — Use VoiceOver or NVDA for 30 minutes. Navigate your product. Fix what's broken.