First Thoughts on the WCAG 3.0 Draft

Weeknotes #36: Jan 26–29, 2021

Andrea F Hill
5 min read · Jan 28, 2021

Last week, the W3C released the first public working draft of WCAG 3.0.

“Ah, WCAG,” you’re thinking. “The Web Content Accessibility Guidelines.”
Not anymore. With WCAG 3.0, they’ve changed what the letters in the acronym stand for. Now it’s the W3C Accessibility Guidelines (dropping the words ‘web’ and ‘content’).

Dropping the word ‘content’ is a big deal. A lot of knowledge about accessibility had to do with ‘marking up content’. Typically we think of content as being quite static.. so just put alt text on your images and use semantic headers and you’re partway there! /s
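For the record, that ‘checklist’ level of markup looks something like this (a minimal, made-up fragment, not anyone’s real page):

```html
<!-- Made-up fragment showing the 'checklist' basics for static content -->
<main>
  <h1>Annual report</h1>              <!-- real headings, not styled <div>s -->
  <h2>Highlights by region</h2>       <!-- heading levels follow the page structure -->
  <img src="highlights-chart.png"
       alt="Bar chart: every region improved year over year" />
</main>
```

Necessary? Sure. Sufficient? Not even close.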

But.. the focus shouldn’t be on what we’re doing, but on whom we’re doing it for. People are trying to get things done, beyond just ‘accessing our content’.

Conformance Levels and Testing

This is just a first draft and there are a LOT of details to be ironed out, but I was also interested in the section on conformance levels.

WCAG 3 has an optional scoring system that can better inform organizations on the quality of their accessibility effort. The optional conformance levels provide a way for organizations to report their conformance in a simple manner. The bronze level is based on the score in each functional category and the overall score. Silver and gold levels require conforming at the bronze level plus additional improved usability for people with disabilities. — https://www.w3.org/TR/wcag-3.0/#conformance-levels

WCAG 3.0 includes two types of tests:

Atomic tests: simple tests (usually of the code), like the way we test today. You use these tests to reach the bronze level.

Holistic tests: usability tests and manual tests with hardware and software used by people with disabilities (assistive technologies). You use these tests to reach the silver or gold level.

Some content will meet outcomes if it passes atomic tests, but that content still might not be usable by all people with disabilities. Holistic tests can help you fix that. — https://www.w3.org/TR/wcag-3.0/#testing
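Here’s my own contrived illustration of that gap (not from the draft): both of these images pass an atomic ‘image has alt text’ check, but only the second would survive a holistic test with someone who actually relies on a screen reader.

```html
<!-- Both of these pass an automated "image has an alt attribute" check -->
<img src="chart-2020.png" alt="chart-2020.png" />

<!-- ...but only this one tells a screen reader user what the chart shows -->
<img src="chart-2020.png"
     alt="Line chart: support requests fell steadily over 2020 after the redesign" />
```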

I’ve long struggled with this idea that we can just use a checklist of ‘best practices’ and declare that our work is accessible. If you’ve ever watched anyone using assistive technology like a screen reader, you can quickly see how differently they experience things. Now if we tie conformance levels explicitly to usability testing, we can start to tease apart what organizations really mean when they talk about accessibility. Is it about compliance or experience?

I imagine that a lot of organizations will initially only aim for the bronze level of conformance — going beyond that will require a good investment of time and skill. And it may be heartbreaking for the well-intentioned people at those organizations, the ones who have wanted to promote more inclusive design and testing, to see their organization state explicitly that this is out of scope.

Hopefully over time we’ll see a second wave of organizations interested in going beyond bronze. I hope we’ll see a pull of demand and a supply of vendors or guidance to help organizations start to include this as part of their practice.

But I’ll be honest.. I think a lot of product / software development doesn’t focus much on the tasks users are actually trying to get done. That’s how we end up with parody Twitter accounts like ShitUserStories —

Screen capture of a tweet “As a new visitor, I want to fill out a detailed survey about my experience on the site upon arrival so that I can provide useful and detailed feedback about my experience” — from ShitUserStories

We conflate what users are trying to get done with what we want them to do. We guess at tasks and motivations.

Guessing at Tasks

This is really personal, because in my new job, we run a lot of usability tests. We draft scenarios/questions and ask people to find the answers on our website. I’ve always struggled a tiny bit because we bring our own backgrounds, needs and motivations to what we do. When we make up a scenario and ask users to complete a task, we aren’t really getting a glimpse into their motivations, and so it’s a bit difficult to know how much effort they’d put into getting the task done under ‘normal circumstances.’

I’m working on a project right now related to data visualization, and the question of accessibility has come up. I immediately felt it was important to test with users of assistive technologies, because I think their experiences are going to be unlike anything we could imagine.

Just because a screen reader can read out all the numbers in a data table doesn’t mean it’s a good experience. There are ways a data table can be marked up to make it more comprehensible. Are we doing that? Is there more we could do to help people consume and understand this information?
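To make that concrete, here’s the kind of markup I mean (the table and its numbers are invented): a caption plus row and column header scopes give assistive technology some context to announce with each cell, instead of a flat stream of numbers.

```html
<table>
  <!-- the caption is announced as the table's name -->
  <caption>Program enrolment by region (invented numbers)</caption>
  <thead>
    <tr>
      <th scope="col">Region</th>
      <th scope="col">2019</th>
      <th scope="col">2020</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <!-- scope ties each data cell back to its row and column headers -->
      <th scope="row">North</th>
      <td>4,102</td>
      <td>3,867</td>
    </tr>
    <tr>
      <th scope="row">South</th>
      <td>12,450</td>
      <td>11,980</td>
    </tr>
  </tbody>
</table>
```

With that markup, landing on a cell can be announced as something like ‘South, 2020, 11,980’ rather than just ‘11,980’. An automated check won’t necessarily tell you that context is missing.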

The trick is — I don’t really know who is going to access the information, or what they hope to do with it. If we are talking about data tables, are people looking up a specific value, or are they doing comparisons? Are they looking for trends or patterns?

We can’t really state whether something is accessible or usable without understanding what it’s being used for.

But being explicit about those tasks, and ideally prioritizing tasks across different user segments and use cases, isn’t trivial. If I think about how different user segments may have different motivations, skill levels and objectives to accomplish when they come to a given page or view of an application, I start to wonder if we need to be running competing usability tests to ensure that changes to optimize one flow don’t have too much of a negative impact on another. It’s about being explicit about trade-offs.

So when the discussion of holistic testing mentions a focus on processes and tasks, we are layering some overhead onto our evaluation. We can’t just rely on automated tests to confirm that our atomic elements, our content, are ‘marked up correctly’; we have to actually understand what success means for the person interacting with them.

Who “owns” accessibility?

Trick question: of course it should be everyone.

But back when I was a front-end dev at LexisNexis, we had a cross-functional team of devs, human factors engineers, tech comm and visual designers working together. Beyond that, we had our work audited semi-regularly by real people using assistive technologies. My installing a screen reader and stumbling around with it is not sufficient to appreciate the experience of someone who uses one all day long, out of necessity. We need to be sure we are including those we’re designing and developing for, and not gauge our success on whether we ‘delivered what we committed to’.

Practice vs Theory

Of course, this is just a first working draft. We don’t know how things will play out when it comes to application. Will organizations get on board and test what they create? To what extent? How important will these different levels be in the marketplace?

We don’t know. But I am eager to follow along, and I’m sure I’ll continue to post thoughts along the way!

Another cool resource: the summary of the research that led to the creation of this draft
