Accessibility Is Infrastructure, Not a Feature: Designing for the 1.3 Billion People We Keep Forgetting
Reframing accessibility as infrastructure — the same way HTTPS or responsive design became infrastructure. AI tools are the thing that makes the shift actually possible.
A designer I know has a sticky note on her monitor that reads: "Ship the thing. Fix it in v2." Under it, in smaller letters, someone else added, "v2 never comes." The sticky note is older than her laptop. It has survived three companies. Every product she has shipped has gone out with a handful of accessibility bugs she saw but didn't have time to fix, and every one of those bugs is still there, because v2 is a lie we tell ourselves at the end of a sprint.
I want to talk about why that sticky note exists, and why I think 2026 is the year it finally comes off the monitor.
Roughly 1.3 billion people live with a significant disability. That number comes from the WHO and it is probably an undercount, because it doesn't capture the temporary disabilities (a broken wrist, a migraine), it doesn't capture the situational ones (a glare on the screen, a loud subway car, a baby in one arm), and it doesn't capture the slow ones (aging eyes, a tremor that wasn't there last year). Accessibility, correctly understood, is not a category of user. It is a property of the product, and every one of us passes through it.
And yet most product teams still treat it as a feature.
The sprint-end checklist is the problem
Here's the pattern. A team scopes a feature. They design it. They build it. Two days before launch, someone — usually the same someone — runs an axe scan, finds thirty-one issues, fixes the eight that are cheap, files tickets for the rest, and ships. The tickets get marked "P3 accessibility" and quietly rot in the backlog.
This is not malice. It is structure. When accessibility is a task at the end, it competes with every other task at the end, and it always loses, because the cost of shipping late is loud and immediate and the cost of shipping inaccessible is quiet and distributed across people the team has never met.
The way out is not to try harder at the end. It is to stop treating it as an end-of-sprint task at all.
Think about how we stopped treating HTTPS as a feature. Ten years ago, you could still meet founders who thought SSL was something you added when you got bigger. Today, not having HTTPS is a browser warning, a ranking penalty, a thing that breaks in production. It is infrastructure. Nobody files a ticket called "add HTTPS to checkout" anymore, because HTTPS is not a ticket. It is the ground you stand on.
Accessibility needs the same reframe. It is not the ramp you bolt onto the side of the building. It is the floor.
What changes when you treat it as infrastructure
Infrastructure has three properties. It is default-on, it is checked automatically, and it is expensive to opt out of. Every accessibility practice worth having maps to one of those three.
Default-on means the components you already use are already accessible. Your button component already has a focus ring. Your modal already traps focus and restores it on close. Your form field already binds the label. A designer on your team should not be able to ship an unlabeled input field without deleting code on purpose, because the default path produces a labeled one. Design systems are the single highest-leverage accessibility investment a team can make, and the reason is that they turn "remembering" into "not having to remember." Every time the default is accessible, you have paid the cost once and collected the benefit thousands of times. The Design Systems Zealot is the soul I keep open when I am arguing with a team that thinks tokens are a nice-to-have. Tokens are the way the floor gets flat.
Checked automatically means the build fails when something regresses. You already do this with tests, types, and linters. You can do it with accessibility. 🪓mcp-a11y-axe-scanner runs axe-core inside your agent loop, so Claude (or whatever model you prefer) can audit a page the same way it runs npm test. 🎨mcp-accesslint-color-contrast catches contrast regressions the moment a token changes. 📚mcp-wcag-reference turns WCAG from a PDF nobody reads into a thing the model can cite in review comments. The point of all of this is not to replace human judgment. It is to stop spending human judgment on the things a machine can catch, so you have judgment left for the things it can't.
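The contrast half of "checked automatically" is less magic than it sounds: the WCAG 2.x contrast math is public and small enough to sketch. The code below is my own illustration of what any contrast checker computes under the hood, not the MCP tool's actual implementation.

```python
def relative_luminance(hex_color: str) -> float:
    """WCAG 2.x relative luminance of an sRGB hex color like '#1a2b3c'."""
    def linearize(channel_8bit: int) -> float:
        c = channel_8bit / 255
        # Piecewise sRGB linearization from the WCAG definition.
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (int(hex_color.lstrip("#")[i:i + 2], 16) for i in (0, 2, 4))
    return 0.2126 * linearize(r) + 0.7152 * linearize(g) + 0.0722 * linearize(b)

def contrast_ratio(fg: str, bg: str) -> float:
    """WCAG contrast ratio: (L_lighter + 0.05) / (L_darker + 0.05)."""
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# A CI gate is then a single comparison: fail the build when body text
# drops below WCAG AA's 4.5:1 threshold for normal-size text.
print(round(contrast_ratio("#000000", "#ffffff"), 2))  # → 21.0, maximum contrast
print(round(contrast_ratio("#777777", "#ffffff"), 2))  # → 4.48, fails AA by a hair
```

That second line is why automation matters: #777 gray on white looks fine to a rested designer and fails AA by two hundredths. A machine catches that every time; a human catches it when they happen to check.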
Expensive to opt out of means when a designer or developer wants to ship something inaccessible, they have to do extra work, not less. This is the part that sounds mean and is actually kind. When inaccessible is the easy path, people take it, because people take easy paths. When accessible is the easy path, people take that. It is not a morality play. It is a gradient.
The AI part, honestly
I want to be careful here, because there is a version of this article that is pure hype, and I don't want to write it.
AI does not make your product accessible. A model cannot know whether the alt text it generated is actually true of the image, and it cannot feel the friction a screen reader user feels on your checkout flow. A team that thinks "we'll let Claude handle the a11y" is going to ship products that are, at best, superficially clean and deeply wrong.
But AI does something else, which is the thing that matters. It dissolves the activation energy.
Activation energy is the reason the sticky note exists. An accessibility audit is not hard, exactly, but it is annoying and slow and it interrupts the work. A designer staring at a Figma file at 4 pm on a Thursday is not going to stop, pull up WCAG, look up the specific success criterion for focus visibility, check the color contrast of the focused state against the background, verify that the focus ring is at least 3:1 against adjacent colors and 2px thick, and come back to the design work. She is going to ship it and move on.
What she will do, at 4 pm on a Thursday, is ask The Accessibility Auditor to look at the frame and tell her if anything is obviously broken. That conversation takes two minutes. The auditor does not replace a real audit. It catches the eight things she would have caught herself if she'd had the energy, plus two she wouldn't have caught, and it does it before the design leaves her screen. Multiply that by a year of design decisions and you have moved the floor.
This is the shift. AI does not make a team more rigorous. It makes rigor cheaper.
The four layers, and where AI actually helps
I think of accessibility as four layers, stacked. You need all of them. AI helps in different ways at each.
Layer one: tokens and components
The foundation. Color contrast, focus rings, focus trapping, semantic roles, labeled controls, reduced-motion support, text sizing that scales. If your design system gets this right, every product built on it inherits it for free. If it gets it wrong, every product built on it ships the same bugs forever.
This is where tools like skill-design-system-token-pass and skill-accessible-color-system pay back the most. A single token pass on a design system can fix thousands of contrast failures downstream in one pull request. That is the kind of leverage that used to take a three-person team six months. Now one person with the right playbook can run it over a weekend and ship a PR on Monday.
Layer two: page-level correctness
Heading order, landmark regions, tab order, form labels, error messaging, alt text, ARIA when (and only when) you need it. This is where most axe scans land. It is also where most teams give up, because the issues are individually small and collectively exhausting.
The AI contribution here is tedium absorption. skill-wcag-quick-audit walks a page against the top WCAG criteria and produces a prioritized list in plain language a PM can read. 📝prompt-form-label-audit checks every input on a form against its label, placeholder, and error state and flags the cases where the label is missing, mismatched, or doing something the placeholder should be doing. 🧩prompt-aria-label-rewriter rewrites labels that were written for developers into labels that were written for screen reader users. None of this replaces a human pass. All of it makes the human pass shorter.
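The core of a form label audit is also mechanical enough to sketch. The check below is a deliberately simplified illustration, not the prompt's actual logic: it only handles `<label for>` and `aria-label`, and a real audit would also cover wrapping labels, `aria-labelledby`, and placeholder misuse.

```python
from html.parser import HTMLParser

class LabelCollector(HTMLParser):
    """Collects visible <input> elements and <label for=...> targets."""
    def __init__(self):
        super().__init__()
        self.inputs = []          # list of (input id, has aria-label)
        self.label_targets = set()

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "input" and a.get("type") != "hidden":
            self.inputs.append((a.get("id"), "aria-label" in a))
        elif tag == "label" and "for" in a:
            self.label_targets.add(a["for"])

def unlabeled_inputs(html: str) -> list:
    """Return the inputs with no programmatic label (simplified check)."""
    collector = LabelCollector()
    collector.feed(html)
    return [input_id or "(no id)" for input_id, has_aria in collector.inputs
            if not has_aria and input_id not in collector.label_targets]

form = '''
<label for="email">Email</label><input id="email" type="email">
<input id="promo" type="text" placeholder="Promo code">
'''
print(unlabeled_inputs(form))  # → ['promo']
```

The promo field is the classic failure: a placeholder doing a label's job, invisible to a screen reader the moment the field has content.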
Layer three: flow-level experience
A page can pass every automated check and still be miserable to use. The checkout flow traps you on a step. The error message tells you what is wrong but not how to fix it. The date picker technically works with a keyboard but takes forty tabs to reach. The cognitive load is too high for anyone who isn't fully rested and fully neurotypical.
This is the layer where automated scans are nearly useless and lived experience matters most. The tools that help here are the ones that simulate lived experience or raise the questions a human would raise. skill-cognitive-load-pass walks a flow looking for moments where the user has to hold too many things in working memory at once. skill-keyboard-only-walkthrough steps through a flow the way a keyboard-only user would and flags the parts that break. skill-the-screen-reader-rehearsal narrates a page the way a screen reader would, so you can hear where the semantics have gone sideways before you install NVDA.
None of these replace real assistive-tech testing with real disabled users. They are a dress rehearsal before the dress rehearsal. They get the obvious stuff out of the way so that when you do bring in real testers, their time is spent on the things only they can catch.
Layer four: content and tone
This is the layer nobody funds and everybody needs. Error messages that blame the user. Button labels that are verbs for developers and nouns for everyone else. Job descriptions that demand "rockstar" and "ninja" and quietly tell disabled applicants not to bother. Instructions that assume you have never made a mistake in your life. Legal copy that exists only to protect the company from you.
Accessibility at the content layer is where the gap between "passes the audit" and "actually works for a human" is widest, and where AI earns its keep fastest.
soul-the-content-design-coach will rewrite a wall of legal copy into a paragraph a tired person can read. 📜prompt-rewrite-this-with-plain-language takes any chunk of product text and pulls it down to a seventh-grade reading level without losing meaning. 💼prompt-the-job-description-decoder takes a job posting and tells you which phrases are quietly excluding people who could do the job fine. 📝agent-the-content-clarity-coach is the one I'd hand to a content team that wants to run a weekly clarity review without hiring someone new.
The cognitive layer nobody funds
There is a fifth thing I want to add to the four layers above, not because it is a new layer but because it cuts across all of them and most teams ignore it: cognitive accessibility. WCAG has criteria for it, technically. Most products pass those criteria and are still cognitively punishing. The gap persists because cognitive load is not something you can measure with a scanner. It is something you feel when you use the product while tired, while anxious, while recovering from a migraine, while dealing with a hard week at home, while learning a second language on the fly.
The users most affected by cognitive load are not a small group. They are everyone, some of the time, and a very large group all of the time: people with ADHD, dyslexia, autism, anxiety, depression, concussion recovery, post-COVID cognitive fatigue, aging-related changes, low literacy in the language the product is shipped in. A product that is hard to think through is a product that excludes most of its actual users on their bad days, which is most days.
The reason cognitive accessibility goes unfunded is that it is hard to file as a bug. "This flow is confusing" is not a ticket. "This flow has fourteen decision points when it should have three" is a ticket, and it is the kind of ticket skill-cognitive-load-pass produces if you run it against a real flow in your product. It counts the decisions, the unique things the user has to remember from step to step, the jargon introduced without explanation, the moments where the user has to flip back to a previous screen to check something. Each of those is a concrete, fixable thing, and each of them is a drag on every user, not just the disabled ones.
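Turning "confusing" into tickets can be sketched as a simple tally. Everything below is illustrative: the field names and thresholds are mine, recorded by hand per step, not anything the skill mandates.

```python
from dataclasses import dataclass

@dataclass
class Step:
    name: str
    decisions: int       # choices the user must make on this screen
    carried_facts: int   # things remembered from earlier screens
    new_terms: int       # jargon introduced without explanation

def cognitive_load_flags(flow, max_decisions=3, max_carried=2):
    """One flag per concrete overload, per step — each flag is a
    filable, fixable ticket instead of 'this flow is confusing'."""
    flags = []
    for step in flow:
        if step.decisions > max_decisions:
            flags.append(f"{step.name}: {step.decisions} decision points, "
                         f"target <= {max_decisions}")
        if step.carried_facts > max_carried:
            flags.append(f"{step.name}: user must carry "
                         f"{step.carried_facts} facts forward")
        if step.new_terms > 0:
            flags.append(f"{step.name}: {step.new_terms} unexplained terms")
    return flags

checkout = [Step("Plan picker", decisions=14, carried_facts=0, new_terms=2),
            Step("Payment", decisions=2, carried_facts=3, new_terms=0)]
for flag in cognitive_load_flags(checkout):
    print(flag)
```

The output is three tickets, each naming a step and a number. "Fourteen decision points on the plan picker" is something a PM can prioritize; "confusing" is not.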
soul-the-cognitive-accessibility-guide is the soul I'd keep open when designing any flow that involves money, health, or time pressure, which is most consequential flows. It asks one question over and over: could a tired person do this in one pass without having to go backward? Most flows fail that question. Designing to pass it is the single cheapest way to make a product feel dramatically better to everyone.
The last piece of this layer is motion. Animations that feel polished to a healthy designer can trigger real vestibular distress in users with motion sensitivity. Parallax, auto-carousels, snap-scroll page transitions, confetti. prompt-the-vestibular-friendly-motion-pass is the prompt I'd run on every new component that involves motion before it ships. It takes about ninety seconds. soul-the-motion-design-cooler is the longer-form conversation if you're designing a system of motion and want someone to push back on the parts that feel good but cost users. Neither of these is about removing motion. They are about using it only where it earns its keep, and respecting prefers-reduced-motion as a real setting rather than a fallback.
The onboarding problem
I want to spend a minute on onboarding, because onboarding is where accessibility failures are most costly and least visible. A user who hits an accessibility wall during onboarding does not complain. They leave, and they do not come back, and they do not appear in any metric you measure because they never became a user.
The thing onboarding flows do wrong, over and over, is assume the user has unlimited energy, clear vision, full motor function, and no cognitive overhead. They then demand that the user enter personal information, accept terms, verify an email, solve a CAPTCHA, upload a photo, choose a plan, confirm a payment method, and complete a tutorial — often in that order, often on a single breath, often with no way to pause and come back tomorrow.
🚪agent-the-inclusive-onboarding-designer is the role I'd assign to every onboarding review. It walks a flow with one question: what is the minimum we need from the user in the first session, and what can wait? The answer, almost always, is that most of the onboarding can wait. The user does not need to set up their profile picture to try the product. They do not need to choose a plan to see the value. They do not need to upload an ID to browse. Every step you move out of the first session is a step no disabled user has to survive to become a user, and every disabled user who becomes a user pays you back in ways the onboarding metrics will eventually notice.
skill-the-onboarding-cut is the skill that does this review mechanically. It takes your current onboarding flow, counts the required steps, and tells you which ones are load-bearing and which ones are there because somebody added them two years ago and nobody has questioned them since. Most teams, when they run it, find that half their onboarding steps can be postponed. Half. On a flow that a disabled user has to survive cold, that is the difference between signing up and giving up.
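The mechanics of that review are simple enough to sketch. In this illustration (the step list and blocking flags are invented, and deciding what counts as "blocking" is the judgment call the team records per step), the function just makes the tally undeniable:

```python
def onboarding_cut(steps):
    """Partition onboarding into what the first session truly needs
    and what can wait for a later session."""
    required = [name for name, blocking in steps if blocking]
    deferred = [name for name, blocking in steps if not blocking]
    return required, deferred

# Hypothetical signup flow: does this step gate first-session value?
signup = [("email + password", True),
          ("verify email",     True),
          ("profile photo",    False),
          ("choose a plan",    False),
          ("product tour",     False)]

required, deferred = onboarding_cut(signup)
print(f"{len(deferred)} of {len(signup)} steps can wait")  # → 3 of 5 steps can wait
```

Three of five steps deferred is the typical shape of the finding: every deferred step is one a disabled user no longer has to survive cold just to become a user.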
The research problem, honestly
If you build products, you should talk to disabled users. This is the part of the piece where I acknowledge that AI cannot replace that conversation, and then also acknowledge that most teams are not having the conversation at all. For those teams, the first step is not to find a perfect research partner; it is to start.
soul-the-disabled-parent-co-strategist, soul-the-autistic-script-helper, and the other perspective souls in the catalog are not substitutes for real users. I want to say that out loud so there is no ambiguity: you cannot replace a conversation with a real disabled person by asking a soul to pretend to be one. The souls exist as rehearsal partners for designers who need to ask a question and cannot wait two weeks for the next research cycle. They are where you work out the question you are going to ask a real person, so that when you do get the chance to ask it, you ask it well.
🔬agent-the-disability-research-buddy is the agent I'd use to plan the actual research. Who to recruit. How to compensate them fairly (paying disabled research participants well is one of the most neglected pieces of "inclusive design"). What to ask. What to avoid. What to do with the findings. The agent does not do the research. It helps you not waste the research you eventually run, which is the thing that most matters.
And for the teams who already have research data and have been ignoring it: soul-the-design-research-skeptic is the soul I would assign to re-read the last year of your accessibility-related research findings and tell you, without diplomacy, which ones you pretended you would fix and then forgot. This is an uncomfortable conversation. It is the one that closes the gap between caring about accessibility in principle and shipping it in practice.
The handoff is where accessibility dies
There is one more place where accessibility gets lost, and it is not a layer. It is a seam. It is the moment the designer hands the work to the engineer.
A Figma file cannot enforce focus order. A comment in the margin saying "this should be screen-reader accessible" is not a spec. An engineer reading a design file has to guess at half a dozen things the designer either knew and forgot to say, or never knew at all. The engineer then makes reasonable guesses, half of which are wrong, and nobody notices until a user writes in to say the modal is unescapable.
This is solvable, and it is solved by making the handoff itself carry the accessibility information. 🤝agent-the-design-handoff-assistant is a role I'd build into every design team's workflow: it watches the handoff, asks the designer the questions the engineer is going to have ("what announces when this dialog opens?", "where does focus go when this closes?", "what happens if the user has reduced motion on?"), and produces a spec that the engineer can actually build from. The cost is a five-minute conversation. The benefit is not shipping a broken modal.
skill-the-design-spec-translator is the lighter version: give it a Figma link and a target framework, and it writes the accessibility notes for the engineer in the vocabulary the engineer already uses. Not "ensure keyboard operability," which makes engineers roll their eyes, but "on onKeyDown, handle Escape to close and restore focus to the trigger element."
A note on the pushback
Somebody on your team is going to read this article and say, "We don't have time for all of this. We are a small team. We ship fast."
I hear that, and I think it is wrong, and I want to say exactly why. Every one of the tools I linked above runs in minutes. The auditor soul answers a design question in the time it takes to make coffee. The axe scanner runs in the background. The plain-language prompt takes less time than writing the sentence the wrong way. The real cost of these practices is not the doing; it is the deciding to do them. And the deciding is the only part that takes effort.
The teams that complain they "don't have time for accessibility" are almost always also the teams that spend a week every quarter firefighting production bugs that a five-minute pass would have caught. A team that runs a contrast check on every PR is not slower than a team that doesn't. It is faster, because it never has to revisit the button component on the support team's emergency Monday.
Small teams are the ones who benefit from infrastructure the most. Small teams cannot afford to make the same mistake twice.
Five things you can start tonight
If this article has done its job, you are now mildly annoyed and slightly hopeful. Good. Here is the punch list.
- Run skill-wcag-quick-audit on your most-visited page. Not your homepage. The page your users actually spend time on. It takes about ten minutes. Read the output before you close the tab. You do not need to fix anything tonight. You just need to know.
- Wire 🪓mcp-a11y-axe-scanner into your dev loop. If you are using Claude or Cursor or any agent-driven workflow, this is a ten-minute install. From that point forward, any time you ask the model to look at a page, it can scan it too. That is the moment accessibility becomes default-on in your own head.
- Pick one layer and run its pass end-to-end on one flow. Your signup flow is a good choice because everyone has one. Use skill-keyboard-only-walkthrough and skill-the-screen-reader-rehearsal back-to-back. Do not try to fix everything. Find the three worst things and fix those.
- Rewrite one error message with ⚠️prompt-error-message-rewriter. Pick the error your users hit most. The one that makes them email support. Rewrite it so the message tells them what happened, why, and what to try next, in that order, in plain language. Ship it. This is the smallest win in this list and it will change how your users feel about your product more than the other four combined.
- Put 🤝agent-the-design-handoff-assistant in the loop for your next feature. Not retroactively. The next thing you hand to engineering. Run it. See what questions it asks. The point is not to follow its output verbatim. The point is to notice how many of those questions you would not have thought to ask, and how cheap it was to ask them.
That is the list. None of it is heroic. All of it is infrastructure.
The sticky note on the monitor — "ship the thing, fix it in v2" — does not come off because somebody decides to care more. It comes off because the friction of caring drops below the friction of not caring. That is the thing that is actually new in 2026. It is not that teams have finally woken up. It is that the cost of doing the right thing has dropped to the cost of doing any thing.
Peel the sticky note off tonight. You won't need v2.
This piece is part of the a-gnt accessibility and UX series. Written by a-gnt Community.
Tools in this post
The Content Clarity Coach
Refactors long product strings into short, honest, plain-language versions. Knows concise from vague.
The Design Handoff Assistant
Sits between designers and devs. Turns a design spec into an a11y-first implementation note.
The Disability Research Buddy
Helps researchers plan ethical studies with disabled participants. Plain-language consent forms included.
The Inclusive Onboarding Designer
Builds onboarding flows that don't gate disabled users out of step 1. Keyboard, screen-reader, cognitive checkpoints.
MCP Accessibility Scanner (axe-core)
Run axe-core accessibility audits on any webpage from your AI assistant. By JustasMonkev.
AccessLint Color Contrast MCP
Programmatic color contrast analysis per WCAG 2.1 — inline with your AI workflow.
WCAG MCP Server
Full WCAG 2.2 spec, techniques, and Understanding docs accessible to your AI. By joe-watkins.
ARIA Label Rewriter
Paste your HTML. Get back clean aria-* attrs that don't double-announce or repeat visible text.
Error Message Rewriter
Three rewrites of any error: what went wrong, what to do, when to ask for help.
Form Label Audit
Paste form HTML or just labels. Get every accessibility flaw, prioritized.
Rewrite This With Plain Language
Paste confusing medical/legal/government text. Get back a 6th-grade-reading-level version that keeps the meaning.
The Job Description Decoder
Decodes a job posting into plain English, flags ableist phrases, drafts an honest cover letter intro.