W3C Accessibility Guidelines (WCAG) 3.0
More details about this document
- This version:
- https://www.w3.org/TR/2025/WD-wcag-3.0-20250904/
- Latest published version:
- https://www.w3.org/TR/wcag-3.0/
- Latest editor's draft:
- https://w3c.github.io/wcag3/guidelines/
- History:
- https://www.w3.org/standards/history/wcag-3.0/
- Commit history
- Editors:
- Rachael Bradley Montgomery (Library of Congress)
- Chuck Adams (Oracle)
- Alastair Campbell (Nomensa)
- Kevin White (W3C)
- Jeanne Spellman (TetraLogical)
- Francis Storr (Intel Corporation)
- Former editors:
- Michael Cooper, Staff Contact, 2016-2023 (W3C)
- Shawn Lauriat, Editor, 2016-2023 (Google, Inc.)
- Wilco Fiers, Project Manager, 2021-2023 (Deque Systems, Inc.)
- Feedback:
- GitHub w3c/wcag3 (pull requests, new issue, open issues)
- public-agwg-comments@w3.org with subject line [wcag-3.0] … message topic … (archives)
Copyright © 2021-2025 World Wide Web Consortium. W3C® liability, trademark and document use rules apply.
Abstract
W3C Accessibility Guidelines (WCAG) 3.0 will provide a wide range of recommendations for making web content more accessible to users with disabilities. Following these guidelines will address many of the needs of users with blindness, low vision and other vision impairments; deafness and hearing loss; limited movement and dexterity; speech disabilities; sensory disorders; cognitive and learning disabilities; and combinations of any of these disabilities. These guidelines address the accessibility of web content on desktops, laptops, tablets, mobile devices, wearable devices, and other Web of Things devices. The guidelines apply to various types of web content, including static, dynamic, interactive, and streaming content; audiovisual media; virtual and augmented reality; and alternative access presentation and control. These guidelines also address related web tools such as user agents (browsers and assistive technologies), content management systems, authoring tools, and testing tools.
Each guideline in this standard provides information on accessibility practices that address documented user needs of people with disabilities. Guidelines are supported by multiple requirements and assertions to determine whether the need has been met. Guidelines are also supported by technology-specific methods to meet each requirement or assertion.
To keep pace with changing technology, this specification is expected to be updated regularly with new and updated methods, requirements, and guidelines that address new needs as technologies evolve. For entities that make formal claims of conformance to these guidelines, several levels of conformance are available to address the diverse nature of digital content and the type of testing that is performed.
For an overview of WCAG 3 and links to WCAG technical and educational material, see WCAG 3.0 Introduction.
Status of This Document
This section describes the status of this document at the time of its publication. A list of current W3C publications and the latest revision of this technical report can be found in the W3C standards and drafts index.
This is an update to W3C Accessibility Guidelines (WCAG) 3.0. It includes all requirements that have reached the developing status.
To comment, file an issue in the wcag3 GitHub repository. Create separate GitHub issues for each comment, rather than commenting on multiple topics in a single issue. It is free to create a GitHub account to file issues. If filing issues in GitHub is not feasible, email public-agwg-comments@w3.org (comment archive).
In-progress updates to the guidelines can be viewed in the public Editor's Draft.
This document was published by the Accessibility Guidelines Working Group as a Working Draft using the Recommendation track.
Publication as a Working Draft does not imply endorsement by W3C and its Members.
This is a draft document and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to cite this document as other than a work in progress.
This document was produced by a group operating under the W3C Patent Policy. W3C maintains a public list of any patent disclosures made in connection with the deliverables of the group; that page also includes instructions for disclosing a patent. An individual who has actual knowledge of a patent that the individual believes contains Essential Claim(s) must disclose the information in accordance with section 6 of the W3C Patent Policy.
This document is governed by the 18 August 2025 W3C Process Document.
This section (with its subsections) provides advice only and does not specify guidelines, meaning it is informative or non-normative.
Plain language summary of Introduction
- W3C Accessibility Guidelines (WCAG) 3.0 shows ways to make web content and apps usable by people with disabilities. WCAG 3.0 is a newer standard than the Web Content Accessibility Guidelines (WCAG) 2.
- WCAG 3.0 does not replace WCAG 2. WCAG 2 is used around the world and will still be required by different countries for a long time to come.
- Meeting WCAG 2 at AA level means you will be close to meeting WCAG 3.0, but there may be differences.
- This draft only includes requirements that have reached the developing status. This means that we have a general agreement on the topic but not all the details are worked out.
- We would like feedback on this draft. You can raise a GitHub issue or email public-agwg-comments@w3.org.
End of summary for Introduction
This draft includes an updated list of the potential guidelines, requirements, and assertions that have progressed to Developing status.
Requirements and assertions at the Exploratory status are not included in this Working Draft. If you would like to see the complete list, please review the Editor's Draft.
Please consider the following questions when reviewing this draft:
- Are there requirements that should not be included?
- Is this way of wording the requirements clearer than the approach used in WCAG 2?
- What are your thoughts on assertions?
- Does the "Which foundational requirements apply?" section in Image Alternatives, Text Appearance, or Keyboard Focus Appearance help you understand how to use the requirements?
Additionally, the Working Group welcomes any research that supports requirements or assertions.
To provide feedback, please open a new issue in the WCAG 3 GitHub repository. Create a separate GitHub issue for each topic, rather than commenting on multiple topics in a single issue.
If it's not feasible for you to use GitHub, email your comments to public-agwg-comments@w3.org (comment archive). Please put your comments in the body of the message, not as an attachment.
The list of requirements is longer than the list of success criteria in WCAG 2. This is because:
- the intent at this stage is to be as inclusive as possible of potential requirements, and
- WCAG 3.0 requirements are more granular than WCAG 2 success criteria.
The final set of requirements in WCAG 3.0 will be different from what is in this draft. Requirements are likely to be added, combined, and removed. We also expect changes to the text of the requirements. Only some of the requirements will be used to meet the base level of conformance.
As part of the WCAG 3.0 drafting process, each normative section of this document is given a status. This status is used to indicate how far along in the development this section is, how ready it is for experimental adoption, and what kind of feedback the Accessibility Guidelines Working Group is looking for.
- Placeholder: This content is temporary. It showcases the type of content to expect here. All of this is expected to be replaced. No feedback is needed on placeholder content.
- Exploratory: This content is not refined; details and definitions may be missing. The working group is exploring what direction to take with this section. Feedback should be about the proposed direction.
- Developing: This content has been roughly agreed on in terms of what is needed for this section, although not all high-level concerns have been settled. Details have been added, but are yet to be worked out. Feedback should be focused on ensuring the section is usable and reasonable, in a broad sense.
- Refining: This content is ready for wide public review and experimental adoption. The working group has reached consensus on this section. Feedback should be focused on the feasibility of implementation.
- Mature: This content is believed by the working group to be ready for recommendation. Feedback on this section should be focused on edge-case scenarios that the working group may not have anticipated.
This specification presents a new model and guidelines to make web content and applications accessible to people with disabilities. W3C Accessibility Guidelines (WCAG) 3.0 supports a wide set of user needs, uses new approaches to testing, and allows frequent maintenance of guidelines and related content to keep pace with accelerating technology changes. WCAG 3.0 supports this evolution by focusing on the functional needs of users. These needs are then supported by guidelines that are written as outcome statements, requirements, assertions, and technology-specific methods to meet those needs.
WCAG 3.0 is a successor to Web Content Accessibility Guidelines 2.2 [WCAG22] and previous versions, but does not deprecate WCAG 2. It will also incorporate some content from and partially extend User Agent Accessibility Guidelines 2.0 [UAAG20] and Authoring Tool Accessibility Guidelines 2.0 [ATAG20]. These earlier versions provided a flexible model that kept them relevant for over 15 years. However, changing technology and changing needs of people with disabilities have led to the need for a new model to address content accessibility more comprehensively and flexibly.
There are many differences between WCAG 2 and WCAG 3.0. The WCAG 3.0 guidelines address the accessibility of web content on desktops, laptops, tablets, mobile devices, wearable devices, and other Web of Things devices. The guidelines apply to various types of web content, including static, dynamic, interactive, and streaming content; visual and auditory media; virtual and augmented reality; and alternative access presentation and control methods. These guidelines also address related web tools such as user agents (browsers and assistive technologies), content management systems, authoring tools, and testing tools.
Each guideline in this standard provides information on accessibility practices that address documented user needs of people with disabilities. Guidelines are supported by multiple requirements to determine whether the need has been met. Guidelines are also supported by technology-specific methods to meet each requirement.
Content that conforms to WCAG 2.2 Level A and Level AA is expected to meet most of the minimum conformance level of this new standard but, since WCAG 3.0 includes additional tests and different scoring mechanics, additional work will be needed to reach full conformance. Since the new standard will use a different conformance model, the Accessibility Guidelines Working Group expects that some organizations may wish to continue using WCAG 2, while others may wish to migrate to the new standard. For those that wish to migrate to WCAG 3, the Working Group will provide transition support materials, which may use mapping and other approaches to facilitate migration.
This section (with its subsections) provides requirements which must be followed to conform to the specification, meaning it is normative.
Plain language summary of Guidelines
The following guidelines are being considered for WCAG 3.0. They are currently a list of topics which we expect to explore more thoroughly in future drafts. The list includes current WCAG 2 guidance and additional requirements. The list will change in future drafts.
Unless otherwise stated, requirements assume the content described is provided both visually and programmatically.
End of summary for Guidelines
The individuals and organizations that use WCAG vary widely and include web designers and developers, policy makers, purchasing agents, teachers, and students. To meet the varying needs of this audience, several layers of guidance will be provided including guidelines written as outcome statements, requirements that can be tested, assertions, a rich collection of methods, resource links, and code samples.
The following list is an initial set of potential guidelines and requirements that the Working Group will be exploring. The goal is to guide the next phase of work. They should be considered drafts and should not be considered as final content of WCAG 3.0.
Ordinarily, exploratory content includes editor's notes listing concerns and questions for each item. Because this Guidelines section is very early in the process of working on WCAG 3.0, this editor's note covers most of the content in this section. Unless otherwise noted, all items in the list are exploratory at this point. It is a list of all possible topics for consideration. Not all items listed will be included in the final version of WCAG 3.0.
The guidelines and requirements listed below came from analysis of user needs that the Working Group has been studying, examining, and researching. They have not been refined and do not include essential exceptions or methods. Some requirements may be best addressed by authoring tools or at the platform level. Many requirements need additional work to better define the scope and to ensure they apply correctly to multiple languages, cultures, and writing systems. We will address these questions as we further explore each requirement.
Additional Research
One goal of publishing this list is to identify gaps in current research and request assistance filling those gaps.
Editor's notes indicate the requirements within this list where the Working Group has not found enough research to fully validate the guidance and create methods to support it or additional work is needed to evaluate existing research. If you know of existing research or if you are interested in conducting research in this area, please file a GitHub issue or send email to public-agwg-comments@w3.org (comment archive).
Users have equivalent alternatives for images.
Which foundational requirements apply?
For each image:
- Would removing the image impact how people understand the page?
  - No: the image must meet Decorative image is programmatically hidden. Stop.
  - Yes: continue.
- Is the image presented in a way that is available to user agents and assistive technology?
  - Yes: the image must meet Image is programmatically determinable AND the accessibility support set meets Equivalent text alternative is available for image that conveys content. Stop.
  - No: continue.
- Is an equivalent text alternative available for the image?
  - Yes: the image must meet Equivalent text alternative is available for image that conveys content. Stop.
  - No: fail.
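The decision tree above is, in effect, a small algorithm. The following is a hypothetical sketch of it in Python; the boolean parameters stand in for testing the named requirements and are not terms from the draft itself:

```python
def image_requirements_outcome(conveys_meaning: bool,
                               available_to_assistive_tech: bool,
                               programmatically_determinable: bool,
                               has_equivalent_text_alternative: bool,
                               programmatically_hidden: bool) -> str:
    """Sketch of the 'Which foundational requirements apply?' decision
    tree for images. Each boolean stands in for testing the named
    requirement in the draft."""
    if not conveys_meaning:
        # Decorative image: must meet "Decorative image is
        # programmatically hidden".
        return "pass" if programmatically_hidden else "fail"
    if available_to_assistive_tech:
        # Must meet "Image is programmatically determinable" AND
        # "Equivalent text alternative is available for image that
        # conveys content".
        return ("pass" if programmatically_determinable
                and has_equivalent_text_alternative else "fail")
    # Not exposed to assistive technology: an equivalent text
    # alternative must still be available.
    return "pass" if has_equivalent_text_alternative else "fail"
```

For example, a decorative image that is programmatically hidden passes regardless of the other tests, while a meaningful image exposed to assistive technology must satisfy both programmatic determinability and an equivalent text alternative.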
Decorative images are programmatically hidden.
Equivalent text alternative is available for image that conveys content.
Users have equivalent alternatives for audio and video content.
Descriptive transcript is available for audio or video content.
Media alternative content is equivalent to audio and video content.
At least one mechanism is available to help users find media alternatives.
A mechanism to turn media alternatives on and off is available.
Speakers are identified in media alternatives.
When more than one language is spoken in audio content, the language spoken by each speaker is identified in media alternatives.
Media alternatives are provided in all spoken languages used in audio content.
Sounds needed to understand the media are identified or described in media alternatives.
Visual information needed to understand the media and not described in the audio content is included in the media alternatives.
This includes actions, charts or informative visuals, scene changes, and on-screen text.
Content author(s) follow a style guide that includes guidance on media alternatives.
Content author(s) conducted tests with users who need media alternatives and fixed issues based on the findings.
- Types of disabilities each user had
- Number of users (for each type of disability)
- Date of testing
- Examples of fixed issues based on the results
Content author(s) provide a video player that supports appropriate media alternatives. The video player includes the following features [list all that apply]:
- Supports closed captions in a standard caption format;
- Turning captions on and off;
- Turning audio descriptions on and off;
- Adjusting caption styles, including but not limited to: font size, font weight, font style, font color, background color, background transparency, and placement;
- Changing the location of captions; and
- Changing the language of the audio descriptions.
Content author(s) have reviewed the media alternatives.
- Role of the creator
- Number of creators (for each Role)
- Date (Period) of review
- Examples of fixed issues based on the feedback
Users have alternatives available for non-text, non-image content that conveys context or meaning.
Requirements and assertions for this guideline do not appear here because they have not yet progressed beyond exploratory. See the Editor's Draft for the complete list of potential requirements and assertions.
Users have captions for the audio content.
Captions are available for all prerecorded audio content, except when the audio content is an alternative for text and clearly labelled as such.
Captions are placed on the screen so that they do not hide visual information needed to understand the video content.
Captions are presented consistently throughout the media, and across related productions, unless exceptions are essential. This includes consistent styling and placement of the captions text and consistent methods for identifying speakers, languages, and sounds.
The appearance of captions, including associated visual indicators, is adaptable, including font size, font weight, font style, font color, background color, background transparency, and placement.
In 360-degree digital environments, captions remain directly in front of the user.
In 360-degree digital environments, the direction of a sound or speech is indicated when audio is heard from outside the current view.
Users have audio descriptions for video content.
Audio descriptions are available in prerecorded video for visual content needed to understand the media, except when the video content is an alternative for text and clearly labelled as such.
WCAG 3 needs to specify how to handle video content with audio that does not include gaps to insert audio descriptions. Two possible solutions are providing an exception that allows the content author(s) to use descriptive transcripts instead or requiring content authors to provide an extended audio description.
Audio description remains in sync with video content without overlapping dialogue and meaningful audio content.
Audio descriptions are available in live video for visual content needed to understand the media.
In cases where the existing pauses in a soundtrack are not long enough, the video pauses to extend the audio track and provides an extended audio description to describe visual information needed to understand the media.
A mechanism is available that allows users to control the audio description volume independently from the audio volume of the video and to change the language of the audio description, if multiple languages are provided.
A mechanism is available that allows users to change the audio description language if multiple languages are available.
Users can view figure captions even when the figure does not have focus.
Requirements and assertions for this guideline do not appear here because they have not yet progressed beyond exploratory. See the Editor's Draft for the complete list of potential requirements and assertions.
Users have content that does not rely on a single sense or perception.
Requirements and assertions for this guideline do not appear here because they have not yet progressed beyond exploratory. See the Editor's Draft for the complete list of potential requirements and assertions.
Users can read visually rendered text.
Which foundational requirements apply?
For each word of text:
- Is the text purely decorative, or is it not readable by anybody?
  - Yes: pass.
  - No: continue.
- Does the default/authored presentation meet minimum requirements?
  - Yes: the default/authored presentation meets Readable blocks of text (foundational) and Readable text style (foundational). Continue.
  - No: fail.
- Can the text appearance be adjusted by the user?
  - Yes, the text is user-manipulable text, and:
  - Yes, via product-provided themes:
  - No, and the product does not provide its own themes: fail.
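The first steps of this decision tree can be sketched as a small function. This is a hypothetical simplification: `default_meets_minimums` stands for meeting Readable blocks of text (foundational) and Readable text style (foundational), and `appearance_adjustable` collapses the user-manipulable-text and product-provided-theme branches, which the draft has not yet fully worked out, into a single test:

```python
def text_appearance_outcome(purely_decorative: bool,
                            default_meets_minimums: bool,
                            appearance_adjustable: bool) -> str:
    """Simplified sketch of the text-appearance decision tree.
    Booleans stand in for testing the named requirements; the
    adjustability branches are collapsed into one test."""
    if purely_decorative:
        # Purely decorative or unreadable-by-design text passes.
        return "pass"
    if not default_meets_minimums:
        # Default/authored presentation must meet the foundational
        # readability requirements.
        return "fail"
    # Beyond the defaults, the appearance must be adjustable by the
    # user (via user-manipulable text or product-provided themes).
    return "pass" if appearance_adjustable else "fail"
```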
The default/authored presentation of blocks of text meets the corresponding values for the content’s language (or, if that language is not listed, the listed language with the most similar orthography).
Readable blocks of text (foundational) and Readable text style (foundational) are based on common usage, and their supplemental counterparts are based on readability research. We need more readability research in these languages.
The metrics in the following table are still to be determined; the current content is an example.

| Characteristic | Arabic | Chinese | English | Hindi | Russian |
| --- | --- | --- | --- | --- | --- |
| Inline margin | | | | | |
| Block margin | ≥0.5em around paragraphs | | | | |
| Line length | 30-100 characters | | | | |
| Line height | 1.0 - paragraph separation height | | | | |
| Justification | Left aligned or Justified | | | | |
The default/authored presentation of text meets the corresponding values for the content’s language (or, if that language is not listed, the listed language with the most similar orthography).
Readable blocks of text (foundational) and Readable text style (foundational) are based on common usage, and their supplemental counterparts are based on readability research. We need more readability research in these languages.
The metrics in the following table are still to be determined; the current content is an example.

| Characteristic | Arabic | Chinese | English | Hindi | Russian |
| --- | --- | --- | --- | --- | --- |
| Font face | | | | | |
| Font size | Vertical viewing angle of ≥0.2° (~10pt at typical desktop viewing distances) | | | | |
| Font width | | | | | |
| Text decoration | Most text is not bold, italicized, and/or underlined | | | | |
| Letter spacing | | | | | |
| Capitalization | | | | | |
| Hyphenation | | | | | |
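The "vertical viewing angle" metric ties physical glyph size to viewing distance rather than to a nominal font size. The geometry can be sketched as follows; the 28-inch desktop viewing distance used in the example is an assumption for illustration, not a value from the draft:

```python
import math

def text_height_pt(viewing_angle_deg: float, distance_in: float) -> float:
    """Physical height, in typographic points (1 pt = 1/72 inch), that
    subtends the given vertical viewing angle at the given viewing
    distance, using h = 2 * d * tan(angle / 2)."""
    half_angle = math.radians(viewing_angle_deg) / 2
    return 2 * distance_in * math.tan(half_angle) * 72
```

At an assumed 28-inch viewing distance, a 0.2° angle corresponds to roughly 7 pt of rendered glyph height; how that maps onto the "~10pt" nominal font size in the table depends on the assumed viewing distance and on how much of the em box the glyphs actually occupy.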
The presentation of blocks of text can be adjusted to meet the corresponding values for the content’s language (or, if that language is not listed, the listed language with the most similar orthography).
Information could be lost if the user overrides the appearance. See [other structural guideline] about ensuring the structure conveys the meaning when possible.
The metrics in the following table are still to be determined; the current content is an example.

| Characteristic | Arabic | Chinese | English | Hindi | Russian |
| --- | --- | --- | --- | --- | --- |
| Inline margin | | | | | |
| Block margin | | | | | |
| Line length | | | | | |
| Line height | | | | | |
| Justification | Not applicable | Left aligned | | | |
The presentation of each of the following font features can be adjusted to meet the corresponding values for the content’s language (or, if that language is not listed, the listed language with the most similar orthography).
Information could be lost if the user overrides the appearance. See [other structural guideline] about ensuring the structure conveys the meaning when possible.
The metrics in the following table are still to be determined; the current content is an example.

| Characteristic | Arabic | Chinese | English | Hindi | Russian |
| --- | --- | --- | --- | --- | --- |
| Underlining | | | | | |
| Italics | Disabled | | | | |
| Bold | Disabled | | | | |
| Font face | | | | | |
| Font width | | | | | |
| Letter spacing | | | | | |
| Capitalization | | | | | |
| Automatic hyphenation | Disabled | | | | |
Content and functionality are not lost when the content is adjusted according to Adjustable blocks of text and Adjustable text style.
The default/authored presentation of blocks of text meets the corresponding values for the content’s language (or, if that language is not listed, the listed language with the most similar orthography).
Readable blocks of text (foundational) and Readable text style (foundational) are based on common usage, and their supplemental counterparts are based on readability research. We need more readability research in these languages.
The metrics in the following table are still to be determined; the current content is an example.

| Characteristic | Arabic | Chinese | English | Hindi | Russian |
| --- | --- | --- | --- | --- | --- |
| Inline margin | | | | | |
| Block margin | | | | | |
| Line length | | | | | |
| Line height | | | | | |
| Justification | Left aligned | | | | |
The default/authored presentation of text meets the corresponding values for the content’s language (or, if that language is not listed, the listed language with the most similar orthography).
Readable blocks of text (foundational) and Readable text style (foundational) are based on common usage, and their supplemental counterparts are based on readability research. We need more readability research in these languages.
The metrics in the following table are still to be determined; the current content is an example.

| Characteristic | Arabic | Chinese | English | Hindi | Russian |
| --- | --- | --- | --- | --- | --- |
| Font face | | | | | |
| Font size | Vertical viewing angle of ≥0.2° (~10pt at typical desktop viewing distances) | | | | |
| Font width | | | | | |
| Text decoration | Most text is not bold, italicized, and/or underlined | | | | |
| Letter spacing | | | | | |
| Capitalization | | | | | |
| Hyphenation | | | | | |
Users can access text content and its meaning with text-to-speech tools.
Numerical information includes sufficient context to avoid confusion when presenting dates, temperatures, time, and Roman numerals.
Users can understand the content without having to process complex or unclear language.
This guideline will include exceptions for poetic, scriptural, artistic, and other content whose main goal is expressive rather than informative.
See also: Structure as these guidelines are closely related.
To ensure this guideline works well across different languages, members of AG, COGA, and internationalization (i18n) agreed on an initial set of languages to pressure-test the guidance.
The five “guardrail” languages are:
- Arabic
- English
- Hindi
- Mandarin
- Russian
We started with the six official languages of the United Nations (UN). Then we removed French and Spanish because they are similar to English. We added Hindi because it is the most commonly spoken language that is not on the UN list.
The group of five languages includes a wide variety of language features, such as:
- Right-to-left text layout
- Vertical text layout
- Tonal sounds that affect meaning
This list doesn’t include every language, but it helps keep the work manageable while making the guidance more useful for a wide audience.
We will work with W3C’s Global Inclusion community group, the Internationalization (i18n) task force, and others to review and refine the testing and techniques for these requirements. We also plan to create guidance for translating the guidelines into more languages in the future.
Sentences do not include unnecessary words or phrases.
Sentences do not include nested clauses.
Common words are used, and definitions are available for uncommon words.
This requirement will include tests and techniques for identifying common words for the intended audience. The intended audience may be the public or a specialized group such as children or experts.
For example: In content intended for the public, one technique for determining what counts as a common word is to use a high-frequency corpus. These corpora exist for many languages including Arabic, Hindi, Mandarin, and Russian as well as American English, British English, and Canadian English. Exceptions will be made for any language that does not have a high-frequency corpus.
Abbreviations are explained or expanded when first used.
Explanations or unambiguous alternatives are available for non-literal language, such as idioms and metaphors.
Alternatives are provided for numerical information such as statistics.
Content author(s) have reviewed written content for complex ideas such as processes, workflows, relationships, or chronological information and added supplemental visual aids to assist readers with understanding them.
A summary is available for documents and articles that have more than a certain length.
More research is needed on the number of words that would trigger the need for a summary. The length may also depend on the language used for the content.
Letters or diacritics required for identifying the correct meaning of the word are available.
This most often applies to languages such as Arabic and Hebrew.
Content author(s) review content for clear language before publication.
If AI tools are used to generate or alter content, the content author(s) have a documented process for a human to review and attest that the content is clear and conveys the intended meaning.
Content author(s) follow a style guide that includes guidance on clear language and a policy that requires editors to follow the style guide.
The style guide includes guidance on clear words as well as clear numbers, such as avoiding or explaining Roman numerals, removing numerical information that is not essential for understanding the content, and providing explanations of essential numerical information to aid users with disabilities that impact cognitive accessibility.
Content author(s) provide training materials that include guidance on clear language and a policy that requires editors to complete the training regularly.
Content author(s) conduct plain language reviews to check against plain language guidance appropriate to the language used. This includes checking that:
- the verb tense used is easiest to understand in context;
- content is organized into short paragraphs; and
- paragraphs of informative content begin with a sentence stating the aim or purpose of the content.
Users can see which element has keyboard focus.
Which foundational requirements apply?
For each focusable item:
- Is the user agent default focus indicator used?
  - Yes: the user agent default indicator is used AND the accessibility support set meets Custom focus indicators. Stop.
  - No: continue.
- Is the focus indicator defined by the author?
  - Yes: the indicator must meet Custom focus indicators. Stop.
  - No: fail.
A custom focus indicator is used with sufficient size, change of contrast, adjacent contrast, distinct style and adjacency.
Focusable item uses the user agent default indicator.
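The decision tree above can be sketched as a small function. This is a hypothetical simplification in which, on both branches, the indicator in use (the user agent default within the accessibility support set, or the author-defined one) is tested against the Custom focus indicators requirement:

```python
def focus_indicator_outcome(uses_ua_default: bool,
                            author_defined: bool,
                            meets_custom_focus_indicators: bool) -> str:
    """Sketch of the keyboard-focus decision tree. Booleans stand in
    for testing the named requirements in the draft."""
    if uses_ua_default or author_defined:
        # Whichever indicator is in use must meet the Custom focus
        # indicators requirement (for the user agent default, within
        # the accessibility support set).
        return "pass" if meets_custom_focus_indicators else "fail"
    # No focus indicator at all.
    return "fail"
```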
Users can see the location of the pointer focus.
Requirements and assertions for this guideline do not appear here because they have not yet progressed beyond exploratory. See the Editor's Draft for the complete list of potential requirements and assertions.
Users can determine where they are and move through content (including interactive elements) in a systematic and meaningful way regardless of input or movement method.
Requirements and assertions for this guideline do not appear here because they have not yet progressed beyond exploratory. See the Editor's Draft for the complete list of potential requirements and assertions.
Users have interactive components that behave as expected.
Requirements and assertions for this guideline do not appear here because they have not yet progressed beyond exploratory. See the Editor's Draft for the complete list of potential requirements and assertions.
Users have information about interactive components that is identifiable and usable visually and using assistive technology.
Requirements and assertions for this guideline do not appear here because they have not yet progressed beyond exploratory. See the Editor's Draft for the complete list of potential requirements and assertions.
Users can navigate and operate content using only the keyboard.
All elements that can be controlled or activated by pointer, audio (voice or other), gesture, camera input, or other means can be controlled or activated from the keyboard interface.
All content that can be accessed by other input modalities can be accessed using the keyboard interface only.
All content includes content made available via hovers, right clicks, etc.
Other input modalities include pointing devices, voice and speech recognition, gesture, camera input, and any other means of input or control.
The All Elements Keyboard-Actionable requirement allows you to navigate to all actionable elements, but if the next element is 5 screens down, you also need to be able to access all of the intervening content. Likewise, if the content is in expanding sections, you need not only to open them but also to access all of their content, not just their actionable elements.
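For a custom control that currently responds only to pointer clicks, the keyboard-operability requirement can be sketched with a small helper. This is a hypothetical example; in real markup the element would also need `tabindex="0"` and an appropriate role:

```javascript
// Hypothetical sketch: mirror a pointer-only activation on the keyboard.
// `activate` stands in for whatever the control's click handler does.
function makeKeyboardActionable(activate) {
  return function onKeydown(event) {
    // Native buttons activate on Enter and Space; mirror that behavior.
    if (event.key === "Enter" || event.key === " ") {
      if (event.preventDefault) event.preventDefault(); // stop Space from scrolling the page
      activate();
      return true; // handled
    }
    return false; // other keys fall through
  };
}
```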
It is always possible to move forward and backward at each point using keyboard navigation.
We are considering making this require that the navigation be symmetrical (i.e., if you navigate forward and then backward you always end up back in the same place) but are interested in comments on this.
Author-generated keyboard commands do not conflict with standard platform keyboard commands or they can be remapped.
If the page/view uses responsive design, the page/view remains fully keyboard navigable.
It is always possible to navigate away from an element after navigating to, entering, or activating the element by using a common keyboard navigation technique, or by using a technique described on the page/view or on a page/view earlier in the process where it is used.
When the keyboard focus is moved, one of the following is true:
- The focus was moved under direct user control;
- A new view, such as a dialog, is introduced and focus is moved to that view;
- The user is informed of the potential keyboard focus move before it happens and given the chance to avoid the move;
- The keyboard focus moves to the next item in keyboard navigation order automatically on completion of some user action.
Except for skip links and other elements that are hidden but specifically added to aid keyboard navigation, tabbing does not move the keyboard focus into content that was not visible before the tab action.
Accordions, dropdown menus, and ARIA tab panels are examples of expandable content. According to this requirement, these would not expand simply because they include an element in the tab-order contained in them. They would either not expand or would not have any tab-order elements in them.
Users can use keyboard without unnecessary physical or cognitive effort.
The keyboard focus moves through content in an order and way that preserves meaning and operability.
When keyboard focus moves from one context to another within a page/view, whether automatically or by user request, the keyboard focus is preserved so that, when the user returns to the previous context, the keyboard focus is restored to its previous location unless that location no longer exists.
When the previous focus location no longer exists, best practice is to put focus on the focusable location just before the one that was removed. An example of this would be a list of subject-matter tags in a document, with each tag having a delete button. A user clicks on the delete button in a tag in the middle of the tag list. When the tag is deleted, focus is placed onto the tag that was before the now-deleted tag.
This is also quite useful when moving between pages, but that would usually have to be done by the browser, unless the user is in a process where the focus location is stored in a cookie or on the server between pages, so that the old location is still available when the person returns to the page.
Content author(s) follow user interface design principles that include minimizing the difference between the number of input commands required when using the keyboard interface only and the number of commands when using other input modalities.
Other input modalities include pointing devices, voice and speech recognition, gesture, camera input, and any other means of input or control.
Pointer input is consistent, and all functionality can be performed with simple pointer input in a time- and pressure-insensitive way.
For functionality that can be activated using a simple pointer input, at least one of the following is true:
- No Down Event: The down event of the pointer is not used to execute any part of the function.
- Abort or Undo: Completion of the function is on the up event, and a mechanism is available to abort the function before completion or to undo the function after completion.
- Up Reversal: The up event reverses any outcome of the preceding down event.
- Essential: Completing the function on the down event is essential.
Any functionality that uses pointer input other than simple pointer input can also be operated by a simple pointer input or a sequence of simple pointer inputs that do not require timing.
Examples of pointer input that are not simple pointer input are double clicking, swipe gestures, multipoint gestures like pinching or split tap or two-finger rotor, variable pressure or timing, and dragging movements.
Complex pointer inputs are not banned, but they cannot be the only way to accomplish an action.
Simple pointer input is different from single pointer input and is more restrictive than simply using a single pointer.
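The "Abort or Undo" option described above (completion on the up event, with a way to abort) can be sketched as a small DOM-free state machine. This is a hypothetical illustration; the method names stand in for pointer events such as pointerdown, pointerleave, and pointerup:

```javascript
// Hypothetical sketch of "Abort or Undo": nothing executes on the down
// event, activation happens on the up event, and dragging off the target
// before release aborts the action.
function createPressTracker(activate) {
  let pressed = false;
  return {
    down() { pressed = true; },   // arm only; no part of the function runs yet
    leave() { pressed = false; }, // pointer left the target: abort
    up() {                        // complete on the up event
      if (pressed) { pressed = false; activate(); }
    },
  };
}
```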
The method of pointer cancellation is consistent for each type of interaction within a set of pages/views except where it is essential to be different.
Where it is essential to be different, it can be helpful to alert the user.
Specific pointer pressure is not the only way of achieving any functionality, except where specific pressure is essential to the functionality.
Specific pointer speed is not the only way of achieving any functionality, except where specific pointer speed is essential to the functionality.
Provide alternatives to speech input and facilitate speech control.
Speech input is not the only way of achieving any functionality except where a speech input is essential to the functionality.
Wherever there is real-time bidirectional voice communication, a real-time text option is available.
Users have the option to use different input techniques and combinations and switch between them.
If content interferes with pointer or keyboard focus behavior of the user agent, then selecting anything on the view with a pointer moves the keyboard focus to that interactive element, even if the user drags off the element (so as to not activate it).
When receiving and then removing pointer hover or keyboard focus triggers additional content to become visible and then hidden, and the visual presentation of the additional content is controlled by the author and not by the user agent, all of the following are true:
- Dismissible: A mechanism is available to dismiss the additional content without moving pointer hover or keyboard focus, unless the additional content does not obscure or replace other content.
- Hoverable: If pointer hover can trigger the additional content, then the pointer can be moved over the additional content without the additional content disappearing.
- Persistent: The additional content remains visible until the hover or keyboard focus trigger is removed, the user dismisses it, or its information is no longer valid.
Examples of additional content controlled by the user agent include browser tooltips created through use of the HTML title attribute.
This applies to content that appears in addition to the triggering of the interactive element itself. Since hidden interactive elements that are made visible on keyboard focus (such as links used to skip to another part of a page/view) do not present additional content, they are not covered by this requirement.
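The dismissible and persistent behaviors for author-rendered additional content (such as a custom tooltip) can be sketched as a small controller. This is a hypothetical, DOM-free illustration; the method names are assumptions standing in for hover, focus, blur, and Escape-key events:

```javascript
// Hypothetical sketch: author-rendered content shown on hover or focus
// can be dismissed (e.g. with Escape) without moving focus, and it
// otherwise persists until the hover/focus trigger is removed.
function createTooltipController() {
  let visible = false;
  let dismissed = false;
  return {
    show() { if (!dismissed) visible = true; },     // on hover or focus
    hide() { visible = false; dismissed = false; }, // trigger removed; reset
    dismiss() { visible = false; dismissed = true; }, // e.g. Escape key
    isVisible() { return visible; },
  };
}
```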
Gestures are not the only way of achieving any functionality, except where a gesture is essential to the functionality.
Where functionality, including input or navigation, is achievable using different input methods, users have the option to switch between those input methods at any time.
Full or gross body movement is not the only way of achieving any functionality, except where full or gross body movement is essential to the functionality.
This includes both detection of body movement and actions to the device, such as shaking, that require body movement.
Users have alternative authentication methods available to them.
Biometric identification is not the only way to identify or authenticate.
Voice identification is not the only way to identify or authenticate.
Users know about and can correct errors.
When an error is detected, users are notified visually and programmatically that an error has occurred.
Content in error is programmatically indicated.
Error messages clearly describe the problem.
Clear language guidance outlines requirements for writing understandable content.
When an error occurs due to a user interaction with an interactive element, the error message includes the human readable name of the element in error. If the interactive element is located in a different part of a process, then the page/view or step in the process is included in the error message.
Error messages include suggestions for correction that can be automatically determined, unless it would jeopardize the security or purpose of the content.
Error messages are visually identifiable including at least two of the following:
- A symbol.
- Color that differentiates the error message from surrounding content.
- Text that clearly indicates this is an error.
Symbols and colors signifying errors vary depending on cultural context and should be modified accordingly.
Error messages persist until the user dismisses them or the error is resolved.
Error messages are programmatically associated with the error source.
When an error notification is not adjacent to the item in error, a link to the error is provided.
Error messages are visually collocated with the error source.
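Taken together, the error-message requirements above might look like this in markup. This is a hypothetical sketch; the ids, class name, and wording are illustrative only:

```html
<!-- Hypothetical sketch: the error is programmatically indicated
     (aria-invalid), programmatically associated with its source
     (aria-describedby), announced (role="alert"), and visually
     identifiable by a symbol, color (via the class), and clear text. -->
<label for="email">Email address</label>
<input id="email" type="email" aria-invalid="true" aria-describedby="email-error">
<p id="email-error" class="error" role="alert">
  ⚠ Error: Enter an email address in the format name@example.com.
</p>
```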
When users are submitting information, at least one of the following is true:
- Users can review, confirm, and correct information before submitting
- Information is validated and users can correct any errors found
- Users can undo submissions
On submission, users are notified of submitted information and submission status.
During data entry, ensure data validation occurs after the user enters data and before the form is submitted.
When completing a multi-step process, validation is completed before the user moves to the next step in the process.
Users do not experience physical harm from content.
Content does not include audio shifting designed to create a perception of motion, or it can be paused or prevented.
Content does not include non-essential flashing or strobing beyond flashing thresholds.
When flashing is essential, a trigger warning is provided to inform users that flashing exists, and a mechanism is available to access the same information and avoid the flashing content.
Content does not include non-essential visual motion lasting longer than 5 seconds or pseudo-motion.
When visual motion lasting longer than 5 seconds or pseudo-motion is essential, a trigger warning is provided to inform users that such content exists, and users are provided a way to access the same information and avoid the visual motion or pseudo-motion.
Content does not include visual motion lasting longer than 5 seconds or pseudo-motion.
Content does not include non-essential visual motion and pseudo-motion triggered by interaction unless it can be paused or prevented.
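One widely supported way to let users avoid non-essential motion is the CSS `prefers-reduced-motion` media feature. The sketch below is illustrative; the class name is an assumption:

```css
/* Hypothetical sketch: disable a purely decorative animation for users
   who have asked their platform for reduced motion. */
@media (prefers-reduced-motion: reduce) {
  .decorative-animation {
    animation: none;
    transition: none;
  }
}
```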
Users can determine relationships between content both visually and using assistive technologies.
Requirements and assertions for this guideline do not appear here because they have not yet progressed beyond exploratory. See the Editor's Draft for the complete list of potential requirements and assertions.
Users have consistent and recognizable layouts available.
Requirements and assertions for this guideline do not appear here because they have not yet progressed beyond exploratory. See the Editor's Draft for the complete list of potential requirements and assertions.
Users can determine their location in content both visually and using assistive technologies.
Requirements and assertions for this guideline do not appear here because they have not yet progressed beyond exploratory. See the Editor's Draft for the complete list of potential requirements and assertions.
Users can understand and navigate through the content using structure.
See also: Clear Language as these guidelines are closely related.
Relationships between elements are conveyed programmatically.
Elements are programmatically grouped together within landmarks.
Groups of elements have a label that defines their purpose.
Groups of elements are organized with a logical and meaningful hierarchy of headings.
Lists are visually and programmatically identifiable as a collection of related items.
Related elements are visually grouped together.
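The grouping, labeling, heading, and list requirements above can be illustrated with a markup sketch. The landmark label and link targets are assumptions:

```html
<!-- Hypothetical sketch: elements grouped in a labeled landmark, with a
     heading and a programmatically identifiable list of related items. -->
<nav aria-label="Product categories">
  <h2>Categories</h2>
  <ul>
    <li><a href="/books">Books</a></li>
    <li><a href="/music">Music</a></li>
  </ul>
</nav>
```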
Users can perceive and operate user interface components and navigation without obstruction.
Content that is essential for a user’s task or understanding is not permanently covered by non-dismissible or non-movable elements.
When content temporarily overlays other content, it must be clearly dismissible or movable via standard interaction methods, and its presence must not disrupt critical screen reader announcements or keyboard focus.
If a control is disabled, then information explaining why it is disabled and what actions are needed to enable it is provided visually and programmatically.
Elements designed to be visually persistent have predictable positions and do not overlap with primary content in a way that makes it unreadable or unusable.
Users have consistent and alternative methods for navigation.
Requirements and assertions for this guideline do not appear here because they have not yet progressed beyond exploratory. See the Editor's Draft for the complete list of potential requirements and assertions.
Users can complete tasks without needing to memorize information or complete advanced cognitive tasks.
Requirements and assertions for this guideline do not appear here because they have not yet progressed beyond exploratory. See the Editor's Draft for the complete list of potential requirements and assertions.
Users have enough time to read and use content.
Requirements and assertions for this guideline do not appear here because they have not yet progressed beyond exploratory. See the Editor's Draft for the complete list of potential requirements and assertions.
Users can complete tasks without unnecessary steps.
Requirements and assertions for this guideline do not appear here because they have not yet progressed beyond exploratory. See the Editor's Draft for the complete list of potential requirements and assertions.
Users do not encounter deception when completing tasks.
Changes in terms of agreement to a continuing process, service, or task are conveyed to users and an opportunity to consent is given.
Content does not include double negatives, false statements, or other misleading wording.
Process completion does not include artificial time limits unless this is essential to the task.
Implying to a user that they will lose a benefit if they don’t act immediately is an artificial time limit.
Preselected options that impact finance, privacy, or safety are visibly and programmatically available by default, except when the user selected these options previously in the process.
Content is not designed to draw attention away from information that impacts finances, privacy, or safety by visually emphasizing other information.
Content author(s) conducted tests with people with cognitive- and mental-health-related disabilities and fixed issues based on findings.
Users do not have to reenter information or redo work.
Requirements and assertions for this guideline do not appear here because they have not yet progressed beyond exploratory. See the Editor's Draft for the complete list of potential requirements and assertions.
Users understand how to complete tasks.
Requirements and assertions for this guideline do not appear here because they have not yet progressed beyond exploratory. See the Editor's Draft for the complete list of potential requirements and assertions.
Users can determine when content is provided by a third party.
Requirements and assertions for this guideline do not appear here because they have not yet progressed beyond exploratory. See the Editor's Draft for the complete list of potential requirements and assertions.
When providing private and sensitive information, users understand:
- That the information requested is private and sensitive,
- How the information requested will be used, and
- The risks involved in providing the information.
When private or sensitive information is displayed, notify the user and provide a mechanism to hide the information.
Users understand the benefits, risks, and consequences of options they select.
Legal-, financial-, privacy-, or security-related consequences are provided in content before a user enters a legal-, financial-, privacy-, or security-related agreement.
When people with disabilities are required to use alternative or additional processes or content not used by people without disabilities, use of the alternative does not expose them to additional risk.
Content that requires legal, financial, privacy, or security choices clearly states the benefits, risks, and potential consequences prior to the choice being confirmed.
Users are not disadvantaged or harmed by algorithms.
Content author(s) train AI models using representative and unbiased disability-related information that is proportional to the general population.
Content author(s) conduct usability testing and ethics reviews to minimize the possibility that algorithms disadvantage people with disabilities.
Users have help available.
Requirements and assertions for this guideline do not appear here because they have not yet progressed beyond exploratory. See the Editor's Draft for the complete list of potential requirements and assertions.
Users can provide feedback to content author(s).
Requirements and assertions for this guideline do not appear here because they have not yet progressed beyond exploratory. See the Editor's Draft for the complete list of potential requirements and assertions.
Users can control text presentation.
Requirements and assertions for this guideline do not appear here because they have not yet progressed beyond exploratory. See the Editor's Draft for the complete list of potential requirements and assertions.
Users can transform size and orientation of content presentation to make it viewable and usable.
Requirements and assertions for this guideline do not appear here because they have not yet progressed beyond exploratory. See the Editor's Draft for the complete list of potential requirements and assertions.
Users can transform content to make it understandable.
Requirements and assertions for this guideline do not appear here because they have not yet progressed beyond exploratory. See the Editor's Draft for the complete list of potential requirements and assertions.
Users can control media and media alternative.
Requirements and assertions for this guideline do not appear here because they have not yet progressed beyond exploratory. See the Editor's Draft for the complete list of potential requirements and assertions.
Users can control interruptions.
Requirements and assertions for this guideline do not appear here because they have not yet progressed beyond exploratory. See the Editor's Draft for the complete list of potential requirements and assertions.
Users can control potential sources of harm.
Haptic feedback can be reduced or turned off.
Users can control content settings from their user agents including assistive technology.
Requirements and assertions for this guideline do not appear here because they have not yet progressed beyond exploratory. See the Editor's Draft for the complete list of potential requirements and assertions.
This section (with its subsections) provides requirements which must be followed to conform to the specification, meaning it is normative.
As well as sections marked as non-normative, all authoring guidelines, diagrams, examples, and notes in this specification are non-normative. Everything else in this specification is normative.
The key words MAY and MUST in this document are to be interpreted as described in BCP 14 [RFC2119] [RFC8174] when, and only when, they appear in all capitals, as shown here.
Plain language summary of Conformance
You might want to make a claim that your content or product meets the WCAG 3.0 guidelines. If it does meet the guidelines, we call this “conformance”.
If you want to make a formal conformance claim, you must use the process described in this document. Conformance claims are not required and your content can conform to WCAG 3.0, even if you don’t want to make a claim.
There are two types of content in this document:
- Normative: what you must do to meet the guidelines.
- Informative: advice to help you meet the guidelines. This is also called non-normative.
We are experimenting with different conformance approaches for WCAG 3.0. Once we have developed enough guidelines, we will test how well each works.
End of summary for Conformance
WCAG 3.0 will use a different conformance model than WCAG 2.2 in order to meet its requirements. Developing and vetting the conformance model is a large portion of the work AG needs to complete over the next few years.
AG is exploring a model based on Foundational Requirements, Supplemental Requirements, and Assertions.
The most basic level of conformance will require meeting all of the Foundational Requirements. This set will be somewhat comparable to WCAG 2.2 Level AA.
Higher levels of conformance will be defined and met using Supplemental Requirements and Assertions. AG will be exploring whether meeting the higher levels would work best based on points, percentages, or predefined sets of requirements (modules).
AG continues to explore other conformance concepts, including conformance levels, issue severity, adjectival ratings, and pre-assessment checks.
See Explainer for W3C Accessibility Guidelines (WCAG) 3.0 for more information.
The concept of "accessibility-supported" exists to account for the variety of user agents and scenarios. How does an author know that a particular technique for meeting a guideline will work in practice with user agents that are used by real people?
The intent is for the responsibility of testing with user agents to vary depending on the level of conformance.
At the foundational level of conformance, assumptions can be made by authors that methods and techniques provided by WCAG 3.0 work. At higher levels of conformance the author may need to test that a technique works, or check that available user agents meet the requirement, or a combination of both.
This approach means the Working Group will ensure that methods and techniques included do have reasonably wide and international support from user agents, and there are sufficient techniques to meet each requirement.
The intent is that WCAG 3.0 will use a content management system to support tagging of methods/techniques with support information. There should also be a process where interested parties can provide information.
An "accessibility support set" is used at higher levels of conformance to define which user agents and assistive technologies you test with. It would be included in a conformance claim, and enables authors to use techniques that are not provided with WCAG 3.0.
An exception for long-present bugs in assistive technology is still under discussion.
When evaluating the accessibility of content, WCAG 3.0 requires the guidelines apply to a specific scope. While the scope can be all content within a digital product, it is usually one or more subsets of the whole. Reasons for this include:
- Large amounts of content are impractical to evaluate comprehensively using anything beyond automated evaluation of items;
- In many cases, content changes frequently, causing evaluation to be accurate only for a specific moment in time;
- Some content is more important to the majority of users than other content; and
- Content that mostly meets the requirements but has problems can interfere with the user’s ability to complete a process.
WCAG 3.0 therefore defines two ways to scope content: views and processes. Evaluation is done on one or more complete views or processes, and conformance is determined on the basis of one or more complete views or processes.
Conformance is defined only for processes and views. However, a conformance claim may be made to cover one process and view, a series of processes and views, or multiple related processes and views. All unique steps in a process MUST be represented in the set of views. Views outside of the process MAY also be included in the scope.
We recognize that representative sampling is an important strategy that large and complex sites use to assess accessibility. While it is not addressed within this document at this time, our intent is to later address it within this document or in a separate document before the guidelines reach the Candidate Recommendation stage. We welcome your suggestions and feedback about the best way to incorporate representative sampling in WCAG 3.0.
This section (with its subsections) provides requirements which must be followed to conform to the specification, meaning it is normative.
Many of the terms defined here have common meanings. When terms appear with a link to the definition, the meaning is as formally defined here. When terms appear without a link to the definition, their meaning is not explicitly related to the formal definition here. These definitions are in progress and may evolve as the document evolves.
This glossary includes terms used by content that has reached a maturity level of Developing or higher. The definitions themselves include a maturity level and may mature at a different pace than the content that refers to them. The AGWG will work with other taskforces and groups to harmonize terminology across documents as much as is possible.
- abbreviation (Developing)
shortened form of a word, phrase, or name where the abbreviation has not become part of the language
Note 1: This includes initialisms, acronyms, and numeronyms.
- initialisms are shortened forms of a name or phrase made from the initial letters of words or syllables contained in that name or phrase. These are not defined in all languages.
- acronyms are abbreviated forms made from the initial letters or parts of other words (in a name or phrase) which may be pronounced as a word.
- numeronyms are shortened forms of a word that use the first and last letters, with a number in between showing the number of letters left out.
Note 2: Some companies have adopted what used to be an initialism as their company name. In these cases, the new name of the company is the letters (for example, Ecma) and the word is no longer considered an abbreviation.
- accessibility support set (Developing)
group of user agents and assistive technologies you test with
Editor's note: The AGWG is considering defining a default set of user agents and assistive technologies that they use when validating guidelines.
Accessibility support sets may vary based on language, region, or situation.
If you are not using the default accessibility set, the conformance report should indicate what set is being used.
- accessibility supported (Developing)
supported in at least 2 major free browsers on every operating system and/or available in assistive technologies used cumulatively by 80% of the AT users on each operating system for each type of AT used
- actively available (Developing)
available for the user to read and use any actionable items included
- assertion (Developing)
formal claim of fact, attributed to a person or organization, regarding procedures practiced in the development and maintenance of the content or product to improve accessibility
- assistive technology (Developing)
hardware and/or software that acts as a user agent, or along with a mainstream user agent, to provide functionality to meet the requirements of users with disabilities that go beyond those offered by mainstream user agents
Note 1: Functionality provided by assistive technology includes alternative presentations (e.g., as synthesized speech or magnified content), alternative input methods (e.g., voice), additional navigation or orientation mechanisms, and content transformations (e.g., to make tables more accessible).
Note 2: Assistive technologies often communicate data and messages with mainstream user agents by using and monitoring APIs.
Note 3: The distinction between mainstream user agents and assistive technologies is not absolute. Many mainstream user agents provide some features to assist individuals with disabilities. The basic difference is that mainstream user agents target broad and diverse audiences that usually include people with and without disabilities. Assistive technologies target narrowly defined populations of users with specific disabilities. The assistance provided by an assistive technology is more specific and appropriate to the needs of its target users. The mainstream user agent may provide important functionality to assistive technologies like retrieving web content from program objects or parsing markup into identifiable bundles.
- audio (Developing)
the technology of sound reproduction
Note: Audio can be created synthetically (including speech synthesis), recorded from real world sounds, or both.
- audio description (Developing)
narration added to the soundtrack to describe important visual details that cannot be understood from the main soundtrack alone
Note: For audiovisual media, audio description provides information about actions, characters, scene changes, on-screen text, and other visual content.
Audio description is also sometimes called “video description”, “described video”, “visual description”, or “descriptive narration”.
In standard audio description, narration is added during existing pauses in dialogue. See also extended audio description.
If all important visual information is already provided in the main audio track, no additional audio description track is necessary.
- automated evaluation (Developing)
evaluation conducted using software tools, typically evaluating code-level features and applying heuristics for other tests
Note: Automated testing is contrasted with other types of testing that involve human judgement or experience. Semi-automated evaluation allows machines to guide humans to areas that need inspection. The emerging field of testing conducted via machine learning is not included in this definition.
- blinking (Developing)
switching back and forth between two visual states in a way that is meant to draw attention
Note: See also flash. It is possible for something to be large enough and blink brightly enough at the right frequency to be also classified as a flash.
- blocks of text (Developing)
more than one sentence of text
- camera input (Developing)
control using a camera as a motion sensor to detect gestures of any type, for example “in the air” gestures
Note: This does not include, for example, a static QR code image on a web page.
- captions (Developing)
synchronized visual and/or text alternative for both the speech and non-speech audio portion of a work of audiovisual content
Note 1: Closed captions are equivalents that can be turned on and off with some players and can often be read using assistive technology.
Note 2: Open captions are any captions that cannot be turned off in the player, for example, captions that are embedded in the video as images of text.
Note 3: Audio descriptions can be, but do not need to be, captioned since they are descriptions of information that is already presented visually.
Note 4: In some countries, captions are called subtitles. The term ‘subtitles’ is often also used to refer to captions that present a translated version of the audio content.
- common keyboard navigation technique (Developing)
keyboard navigation technique that is the same across most or all applications and platforms and can therefore be relied upon by users who need to navigate by keyboard alone
Note: A sufficient listing of common keyboard navigation techniques for use by authors can be found in the WCAG common keyboard navigation techniques list.
- complex pointer input (Developing)
any pointer input other than a single pointer input
- component (Developing)
grouping of elements for a distinct function
- conformance (Developing)
satisfying all the requirements of the guidelines. Conformance is an important part of following the guidelines even when not making a formal Conformance Claim.
See the Conformance section for more information.
- content (Developing)
information and sensory experience to be communicated to the user by an interface, including code or markup that defines the content’s structure, presentation, and interactions
- decorative (Developing)
serving only an aesthetic purpose, providing no information, and having no functionality
Note: Text is only purely decorative if the words can be rearranged or substituted without changing their purpose.
- deprecate (Developing)
declare something outdated and in the process of being phased out, usually in favor of a specified replacement
Deprecated documents are no longer recommended for use and may cease to exist in the future.
- descriptive transcript (Developing)
a text version of the speech and non-speech audio information and visual information needed to understand the content
- down event (Developing)
platform event that occurs when the trigger stimulus of a pointer is depressed
Note: The down event may have different names on different platforms, such as “touchstart” or “mousedown”.
- essential exception (Developing)
exception because there is no way to carry out the function without doing it this way or fundamentally changing the functionality
- evaluation (Developing)
process of examining content for conformance to these guidelines
Note: Different approaches to evaluation include automated evaluation, semi-automated evaluation, human evaluation, and usability testing.
- extended audio description (Developing)
audio description that is added to audiovisual media by pausing the video to allow additional time for audio description
Note: This technique is only used when the sense of the video would be lost without the additional audio description and the pauses between dialogue or narration are too short.
- figure captions (Developing)
title, brief explanation, or comment that accompanies a work of visual media and is always visible on the page
- flash (Developing)
a pair of opposing changes in relative luminance that can cause seizures in some people if it is large enough and in the right frequency range
Note 1: See general flash and red flash thresholds for information about types of flash that are not allowed.
Note 2: See also blinking.
- functional need (Developing)
statement that describes a specific gap in one’s ability, or a specific mismatch between ability and the designed environment or context
- general flash and red flash thresholds (Developing)
a flash or rapidly-changing image sequence is below the threshold (i.e., content passes) if any of the following are true:
- there are no more than three general flashes and/or no more than three red flashes within any one-second period; or
- the combined area of flashes occurring concurrently occupies no more than a total of .006 steradians within any 10 degree visual field on the screen (25% of any 10 degree visual field on the screen) at typical viewing distance
where:
- A general flash is defined as a pair of opposing changes in relative luminance of 10% or more of the maximum relative luminance (1.0) where the relative luminance of the darker image is below 0.80; and where “a pair of opposing changes” is an increase followed by a decrease, or a decrease followed by an increase, and
- A red flash is defined as any pair of opposing transitions involving a saturated red
Exception: Flashing that is a fine, balanced pattern, such as white noise or an alternating checkerboard pattern with “squares” smaller than 0.1 degree (of visual field at typical viewing distance) on a side, does not violate the thresholds.
Note 1: For general software or web content, using a 341 x 256 pixel rectangle anywhere on the displayed screen area when the content is viewed at 1024 x 768 pixels will provide a good estimate of a 10 degree visual field for standard screen sizes and viewing distances (e.g., 15-17 inch screen at 22-26 inches). This resolution of 75-85 ppi is known to be lower, and thus more conservative, than the nominal CSS pixel resolution of 96 ppi in CSS specifications. Higher resolution displays showing the same rendering of the content yield smaller and safer images, so it is lower resolutions that are used to define the thresholds.
Note 2: A transition is the change in relative luminance (or relative luminance/color for red flashing) between adjacent peaks and valleys in a plot of relative luminance (or relative luminance/color for red flashing) measurement against time. A flash consists of two opposing transitions.
Note 3: The new working definition in the field for “pair of opposing transitions involving a saturated red” (from WCAG 2.2) is a pair of opposing transitions where one transition is either to or from a state with a value R/(R + G + B) that is greater than or equal to 0.8, and the difference between states is more than 0.2 (unitless) in the CIE 1976 UCS chromaticity diagram. [ISO_9241-391]
Note 4: Tools are available that will carry out analysis from video screen capture. However, no tool is necessary to evaluate for this condition if flashing is less than or equal to three flashes in any one second; such content automatically passes (see #1 and #2 above).
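The individual conditions above can be checked programmatically. As an illustrative sketch (the function names below are hypothetical helpers, not part of these guidelines), the general flash condition, the saturated-red test from Note 3, and the three-flashes-per-second criterion could be expressed in Python:

```python
def is_general_flash(lum_a: float, lum_b: float) -> bool:
    """One pair of opposing changes counts as a general flash when the
    relative-luminance difference is at least 10% of the maximum (1.0)
    and the darker of the two states is below 0.80."""
    return abs(lum_a - lum_b) >= 0.10 and min(lum_a, lum_b) < 0.80

def is_saturated_red(r: int, g: int, b: int) -> bool:
    """A state qualifying for a red flash per Note 3: R/(R + G + B) >= 0.8."""
    total = r + g + b
    return total > 0 and r / total >= 0.8

def passes_frequency_criterion(flashes_in_any_second: int) -> bool:
    """Criterion 1: no more than three general (or red) flashes within
    any one-second period."""
    return flashes_in_any_second <= 3

# A bright-to-dark pair 0.9 -> 0.6: difference 0.3, darker state 0.6 < 0.80
print(is_general_flash(0.9, 0.6))    # True
# Both states bright (darker state 0.85 is not below 0.80): not a flash
print(is_general_flash(0.95, 0.85))  # False
```

A full evaluation would also need the area criterion (steradians within a 10 degree visual field) and the 0.2 chromaticity-difference check for red flashes, which this sketch omits.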
- gesture (Developing)
motion made by the body or a body part used to communicate with technology
- guideline (Developing)
high-level, plain-language outcome statements used to organize requirements
Note: Guidelines provide high-level, plain-language outcome statements for managers, policy makers, individuals who are new to accessibility, and other individuals who need to understand the concepts but not dive into the technical details. They provide an easy-to-understand way of organizing and presenting the requirements so that non-experts can learn about and understand the concepts.
Each guideline includes a unique, descriptive name along with a high-level plain-language summary. Guidelines address functional needs on specific topics, such as contrast, forms, readability, and more.
Guidelines group related requirements and are technology-independent.
- human evaluation (Developing)
evaluation conducted by a human, typically to apply human judgement to tests that cannot be fully automatically evaluated
Note: Human evaluation is contrasted with automated evaluation, which is done entirely by machine, though it includes semi-automated evaluation, which allows machines to guide humans to areas that need inspection. Human evaluation involves inspection of content features, in contrast with usability testing, which directly tests the experience of users with content.
- image (Placeholder)
Editor's note: To be defined.
- informative (Developing)
content provided for information purposes and not required for conformance. Also referred to as non-normative
- interactive element (Developing)
element that responds to user input and has a distinct programmatically determinable name
Note: In contrast to non-interactive elements, for example headings or paragraphs.
- items (Developing)
smallest testable unit for testing scope
- keyboard focus (Developing)
point in the content where any keyboard actions would take effect
- keyboard interface (Developing)
API (Application Programming Interface) from which software gets “keystrokes”
Note: “Keystrokes” that are passed to the software from the “keyboard interface” may come from a wide variety of sources, including but not limited to a scanning program, sip-and-puff Morse code software, speech recognition software, AI of all sorts, as well as other keyboard substitutes or special keyboards.
- mechanism (Developing)
process or technique for achieving a result
Note 1: The mechanism may be explicitly provided in the content, or may be relied upon to be provided by either the platform or by user agents, including assistive technologies.
Note 2: The mechanism needs to meet all requirements for the conformance level claimed.
- media alternatives (Developing)
alternative formats, usually text, for audio, video, and audio-video content, including captions, audio descriptions, and descriptive transcripts
- method (Developing)
detailed information, either technology-specific or technology-agnostic, on ways to meet the requirement, as well as tests and scoring information
- non-interactive element (Developing)
element that does not respond to user input and does not include sub-parts
Note 1: If a paragraph included a link, the text on either side of the link would be considered a non-interactive element, but not the paragraph as a whole.
Note 2: Letters within text do not constitute a “smaller part”.
- non-literal language (Developing)
words or phrases used in a way that goes beyond their standard or dictionary meaning to express deeper, more complex ideas
Note: This is also called figurative language.
To understand the content, users have to interpret the implied meaning behind the words, rather than just their literal or direct meaning.
Examples include:
- allusions
- hyperbole
- idioms
- irony
- jokes
- litotes
- metaphors
- metonymies
- onomatopoeias
- oxymorons
- personification
- puns
- sarcasm
- similes
- normative (Developing)
content whose instructions are required for conformance
- page (Developing)
non-embedded resource obtained from a single URI using HTTP, plus any other resources that are used in the rendering or intended to be rendered together
Note: Where a URI is available and represents a unique set of content, that would be the preferred conformance unit.
- path-based gesture (Developing)
gesture that depends on the path of the pointer input and not just its endpoints
Note: Path-based gestures include both time-dependent and non-time-dependent path-based gestures.
- platform (Developing)
software, or collection of layers of software, that lies below the subject software, provides services to the subject software, and allows the subject software to be isolated from the hardware, drivers, and other software below
Note 1: Platform software both makes it easier for subject software to run on different hardware, and provides the subject software with many services (e.g. functions, utilities, libraries) that make the subject software easier to write, keep updated, and work more uniformly with other subject software.
Note 2: A particular software component might play the role of a platform in some situations and a client in others. For example, a browser is a platform for the content of the page, but it also relies on the operating system below it.
Note 3: The platform is the context in which the product exists.
- pointer (Placeholder)
Editor's note: To be defined.
- private and sensitive information (Exploratory)
private and sensitive information
- process (Developing)
series of views or pages associated with user actions, where actions required to complete an activity are performed, often in a certain order, regardless of the technologies used or whether it spans different sites or domains
- product (Developing)
testing scope that is a combination of all items, views, and task flows that make up the web site, set of web pages, web app, etc.
Note: The context for the product would be the platform.
- programmatically determinable (Developing)
meaning of the content and all its important attributes can be determined by software functionality that is accessibility supported
- pseudo-motion
static content on the page that gives the user the perception or feeling of motion
- relative luminance (Developing)
the relative brightness of any point in a colorspace, normalized to 0 for darkest black and 1 for lightest white
Note 1: For the sRGB colorspace, the relative luminance of a color is defined as L = 0.2126 * R + 0.7152 * G + 0.0722 * B, where R, G, and B are defined as:
- if RsRGB <= 0.04045 then R = RsRGB/12.92 else R = ((RsRGB+0.055)/1.055) ^ 2.4
- if GsRGB <= 0.04045 then G = GsRGB/12.92 else G = ((GsRGB+0.055)/1.055) ^ 2.4
- if BsRGB <= 0.04045 then B = BsRGB/12.92 else B = ((BsRGB+0.055)/1.055) ^ 2.4
and RsRGB, GsRGB, and BsRGB are defined as:
- RsRGB = R8bit/255
- GsRGB = G8bit/255
- BsRGB = B8bit/255
The “^” character is the exponentiation operator. (Formula taken from [SRGB].)
Note 2: Before May 2021 the value of 0.04045 in the definition was different (0.03928). It was taken from an older version of the specification and has been updated. It has no practical effect on the calculations in the context of these guidelines.
Note 3: Almost all systems used today to view web content assume sRGB encoding. Unless it is known that another color space will be used to process and display the content, authors should evaluate using the sRGB colorspace.
Note 4: If dithering occurs after delivery, then the source color value is used. For colors that are dithered at the source, the average values of the colors that are dithered should be used (average R, average G, and average B).
Note 5: Tools are available that automatically do the calculations when testing contrast and flash.
Editor's note: WCAG 2.2 contains a separate page giving the relative luminance definition using MathML to display the formulas. This will need to be addressed for inclusion in WCAG 3.
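The Note 1 formula translates directly into code. As a minimal sketch in Python (the function names are illustrative, not defined by these guidelines):

```python
def linearize(c_8bit: int) -> float:
    """Convert an 8-bit sRGB channel (0-255) to its linearized value,
    per the piecewise formula in Note 1 (threshold 0.04045)."""
    c = c_8bit / 255  # e.g. RsRGB = R8bit/255
    if c <= 0.04045:
        return c / 12.92
    return ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(r: int, g: int, b: int) -> float:
    """L = 0.2126 * R + 0.7152 * G + 0.0722 * B for an sRGB color."""
    return (0.2126 * linearize(r)
            + 0.7152 * linearize(g)
            + 0.0722 * linearize(b))

print(round(relative_luminance(255, 255, 255), 4))  # 1.0 (lightest white)
print(round(relative_luminance(0, 0, 0), 4))        # 0.0 (darkest black)
```

Per Note 3, this assumes sRGB encoding; per Note 2, earlier drafts used 0.03928 in place of 0.04045, with no practical effect on the result.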
- requirement (Developing)
result of practices that reduce or eliminate barriers that people with disabilities experience
- section (Developing)
self-contained portion of content that deals with one or more related topics or thoughts
Note: A section may consist of one or more paragraphs and include graphics, tables, lists, and sub-sections.
- semi-automated evaluation (Developing)
evaluation conducted using machines to guide humans to areas that need inspection
Note: Semi-automated evaluation involves components of automated evaluation and human evaluation.
- simple pointer input (Developing)
input event that involves only a single ‘click’ event, or a ‘button down’ and ‘button up’ pair of events with no movement between
Note: Examples of things that are not simple pointer inputs include double clicks, dragging motions, gestures, any use of multipoint input, and the simultaneous use of a mouse and keyboard.
- single pointer (Developing)
input modality that only targets a single point on the page/screen at a time, such as a mouse, single finger on a touch screen, or stylus
Note: Single pointer interactions include clicks, double clicks, taps, dragging motions, and single-finger swipe gestures. In contrast, multipoint interactions involve the use of two or more pointers at the same time, such as two-finger interactions on a touchscreen, or the simultaneous use of a mouse and stylus.
- single pointer input (Developing)
input modality that only targets a single point on the view at a time, such as a mouse, single finger on a touch screen, or stylus
Note 1: Single pointer interactions include clicks, double clicks, taps, dragging motions, and single-finger swipe gestures. In contrast, multipoint interactions involve the use of two or more pointers at the same time, such as two-finger interactions on a touchscreen, or the simultaneous use of a mouse and stylus.
Note 2: Single pointer input is in contrast to multipoint input, such as two, three, or more fingers or pointers touching the surface, or gesturing in the air, at the same time.
Note 3: Activation is usually by click or tap but can also be by programmatic simulation of a click or tap or other similar simple activation.
- standard platform keyboard commands (Developing)
keyboard commands that are the same across most or all platforms and are relied upon by users who need to navigate by keyboard alone
Note: A sufficient listing of common keyboard navigation techniques for use by authors can be found in the WCAG standard keyboard navigation techniques list.
- task flow (Developing)
testing scope that includes a series of views that support a specified user activity
- test (Developing)
mechanism to evaluate implementation of a method
- text (Developing)
sequence of characters that can be programmatically determined, where the sequence is expressing something in human language
- text alternative (Developing)
text that is programmatically associated with non-text content, or referred to from text that is programmatically associated with non-text content
- up event (Developing)
platform event that occurs when the trigger stimulus of a pointer is released
Note: The up event may have different names on different platforms, such as “touchend” or “mouseup”.
- usability testing (Developing)
evaluation of the experience of users using a product or process by observation and feedback
- user agent (Developing)
software that retrieves and presents external content for users
- user need (Developing)
end goal a user has when starting a process through digital means
- user-manipulable text (Developing)
text which the user can adjust
Note: This could include, but is not limited to, changing:
- Line, word or letter spacing
- Color
- Line length — being able to control width of block of text
- Typographic alignment — justified, flushed right/left, centered
- Wrapping
- Columns — number of columns in one-dimensional content
- Margins
- Underlining, italics, bold
- Font face, size, width
- Capitalization — all caps, small caps, alternating case
- End of line hyphenation
- Links
- video (Developing)
the technology of moving or sequenced pictures or images
Note: Video can be made up of animated or photographic images, or both.
- view (Developing)
content that is actively available in a viewport, including that which can be scrolled or panned to, and any additional content that is included by expansion while leaving the rest of the content in the viewport actively available
Note: A modal dialog box would constitute a new view because the other content in the viewport is no longer actively available.
- viewport (Developing)
object in which the platform presents content
Note 1: The author has no control of the viewport and almost always has no idea what is presented in a viewport (e.g. what is on screen) because it is provided by the platform. On browsers, the hardware platform is isolated from the content.
Note 2: Content can be presented through one or more viewports. Viewports include windows, frames, loudspeakers, and virtual magnifying glasses. A viewport may contain another viewport, for example, nested frames. Interface components created by the user agent, such as prompts, menus, and alerts, are not viewports.
The content of this document has not matured enough to identify privacy considerations. Reviewers of this draft should consider whether requirements of the conformance model could impact privacy.
The content of this document has not matured enough to identify security considerations. Reviewers of this draft should consider whether requirements of the conformance model could impact security.
This section shows substantive changes made in WCAG 3.0 since the First Public Working Draft was published on 21 January 2021.
The full commit history to WCAG 3.0 and commit history to Silver is available.
- 2021-06-08: Moved explanatory information to Explainer for W3C Accessibility Guidelines (WCAG) 3.0
- 2021-12-07: Add Project Manager
- 2023-07-24: Changed approach to WCAG 3.0 based on feedback and removed old material that was not consistent with the new approach. Added WCAG 3.0 Guideline placeholders to indicate maturity level.
- 2024-03-15: Updated placeholder guidelines with exploratory guidelines.
- 2024-12-12: Reorganized exploratory guidelines; added 3 developing guidelines and accessibility supported; added user agent support, updated conformance section, and moved explanatory content to the Explainer for WCAG 3.0
- 2025-09-04: Published requirements and assertions that have reached developing; removed exploratory content from draft; and updated assertions section of Explainer for WCAG 3.0.
Additional information about participation in the Accessibility Guidelines Working Group (AG WG) can be found on the Working Group home page.
- Adam Page (Invited Expert)
- Alastair Campbell (Nomensa)
- Alexandra Yaneva (SAP SE)
- Alina Vayntrub (Understood)
- Ashley Firth (Invited Expert)
- Atya Ratcliff (Google LLC)
- Azlan Cuttilan (Invited Expert)
- Ben Tillyer (University of Oxford)
- Bruce Bailey (Invited Expert)
- Carrie Hall (SAP SE)
- Chris Loiselle (Invited Expert)
- Chuck Adams (Oracle Corporation)
- Daniel Bjorge (Deque Systems, Inc.)
- David Swallow (TPGi)
- Detlev Fischer (Invited Expert)
- DJ Chase (Invited Expert)
- Eric Hind (Google LLC)
- Filippo Zorzi (UsableNet)
- Francis Storr (Intel Corporation)
- Frankie Wolf (Invited Expert)
- Gez Lemon (TetraLogical Services Ltd)
- Giacomo Petri (UsableNet)
- Graham Ritchie (Invited Expert)
- Gregg Vanderheiden (Invited Expert)
- Gundula Niemann (SAP SE)
- Hidde de Vries (Logius)
- Irfan Mukhtar (EcomBack)
- Jamie Herrera (Invited Expert)
- Jaunita Flessas (Invited Expert)
- Jeanne Spellman (Invited Expert)
- Jen Goulden (Crawford Technologies)
- Jennifer Delisi (Invited Expert)
- Jennifer Strickland (MITRE Corporation)
- Jeroen Hulscher (Logius)
- John Kirkwood (Invited Expert)
- John Toles (Rhonda Weiss Center for Accessible IDEA Data)
- Jon Avila (Level Access)
- Jory Cunningham (Amazon)
- Julie Rawe (Understood)
- Ken Franqueiro (W3C)
- Kimberly McGee (SAP SE)
- Laura Carlson (Invited Expert)
- Len Beasley (CVS Pharmacy, Inc.)
- Lisa Seeman-Kestenbaum (Invited Expert)
- Lori Oakley (Oracle Corporation)
- Makoto Ueki (Invited Expert)
- Mary Ann Jawili (Adobe)
- Mary Jo Mueller (IBM Corporation)
- Melanie Philipp (Invited Expert)
- Michael Fairchild (Microsoft Corporation)
- Mike Beganyi (Invited Expert)
- Mike Gower (IBM Corporation)
- Nat Tarnoff (Level Access)
- Patrick H. Lauke (TetraLogical Services Ltd)
- Poornima Badhan Subramanian (Invited Expert)
- Rachael Bradley Montgomery (Library of Congress)
- Rain Breaw Michaels (Google LLC)
- Rashmi Katakwar (Invited Expert)
- Roberto Scano (Invited Expert)
- Roldon Brown (SAP SE)
- Sarah Horton (Invited Expert)
- Scott O’Hara (Microsoft Corporation)
- Shadi Abou-Zahra (Amazon)
- Steve Faulkner (TetraLogical Services Ltd)
- Tiffany Burtin (Invited Expert)
- Todd Libby (Invited Expert)
- Wendy Reid (Invited Expert)
- Wilco Fiers (Deque Systems, Inc.)
Abi James, Abi Roper, Alastair Campbell, Alice Boxhall, Alina Vayntrub, Alistair Garrison, Amani Ali, Andrew Kirkpatrick, Andrew Somers, Andy Heath, Angela Hooker, Aparna Pasi, Ashley Firth, Avneesh Singh, Avon Kuo, Azlan Cuttilan, Ben Tillyer, Betsy Furler, Brooks Newton, Bruce Bailey, Bryan Trogdon, Caryn Pagel, Charles Hall, Charles Nevile, Chris Loiselle, Chris McMeeking, Christian Perera, Christy Owens, Chuck Adams, Cybele Sack, Daniel Bjorge, Daniel Henderson-Ede, Darryl Lehmann, David Fazio, David MacDonald, David Sloan, David Swallow, Dean Hamack, Detlev Fischer, DJ Chase, E.A. Draffan, Eleanor Loiacono, Filippo Zorzi, Francis Storr, Frankie Wolf, Frederick Boland, Garenne Bigby, Gez Lemon, Giacomo Petri, Glenda Sims, Graham Ritchie, Greg Lowney, Gregg Vanderheiden, Gundula Niemann, Hidde de Vries, Imelda Llanos, Jaeil Song, JaEun Jemma Ku, Jake Abma, Jan Jaap de Groot, Jan McSorley, Janina Sajka, Jaunita George, Jeanne Spellman, Jedi Lin, Jeff Kline, Jennifer Chadwick, Jennifer Delisi, Jennifer Strickland, Jennison Asuncion, Jill Power, Jim Allan, Joe Cronin, John Foliot, John Kirkwood, John McNabb, John Northup, John Rochford, John Toles, Jon Avila, Joshue O’Connor, Judy Brewer, Julie Rawe, Justine Pascalides, Karen Schriver, Katharina Herzog, Kathleen Wahlbin, Katie Haritos-Shea, Katy Brickley, Kelsey Collister, Kim Dirks, Kimberly McGee, Kimberly Patch, Laura Carlson, Laura Miller, Len Beasley, Léonie Watson, Lisa Seeman-Kestenbaum, Lori Oakley, Lori Samuels, Lucy Greco, Luis Garcia, Lyn Muldrow, Makoto Ueki, Marc Johlic, Marie Bergeron, Mark Tanner, Mary Ann Jawili, Mary Jo Mueller, Matt Garrish, Matthew King, Melanie Philipp, Melina Maria Möhnle, Michael Cooper, Michael Crabb, Michael Elledge, Michael Weiss, Michellanne Li, Michelle Lana, Mike Beganyi, Mike Crabb, Mike Gower, Nicaise Dogbo, Nicholas Trefonides, Nina Krauß, Omar Bonilla, Patrick H. Lauke, Paul Adam, Peter Korn, Peter McNally, Pietro Cirrincione, Poornima Badhan Subramanian, Rachael Bradley Montgomery, Rain Breaw Michaels, Ralph de Rooij, Rashmi Katakwar, Rebecca Monteleone, Rick Boardman, Roberto Scano, Ruoxi Ran, Ruth Spina, Ryan Hemphill, Sarah Horton, Sarah Pulis, Scott Hollier, Scott O’Hara, Shadi Abou-Zahra, Shannon Urban, Shari Butler, Shawn Henry, Shawn Lauriat, Shawn Thompson, Sheri Byrne-Haber, Shrirang Sahasrabudhe, Shwetank Dixit, Stacey Lumley, Stein Erik Skotkjerra, Stephen Repsher, Steve Faulkner, Steve Lee, Sukriti Chadha, Susi Pallero, Suzanne Taylor, sweta wakodkar, Takayuki Watanabe, Tananda Darling, Theo Hale, Thomas Logan, Thomas Westin, Tiffany Burtin, Tim Boland, Todd Libby, Todd Marquis Boutin, Victoria Clark, Wayne Dick, Wendy Chisholm, Wendy Reid, Wilco Fiers.
These researchers selected a Silver research question, did the research, and graciously allowed us to use the results.
- David Sloan and Sarah Horton, The Paciello Group,
WCAG Success Criteria Usability Study
- Scott Hollier et al., Curtin University,
Internet of Things (IoT) Education: Implications for Students with Disabilities
- Peter McNally, Bentley University,
WCAG Use by UX Professionals
- Dr. Michael Crabb, University of Dundee, Student research papers on Silver topics
- Eleanor Loiacono, Worcester Polytechnic Institute
Web Accessibility Perceptions
(Student project from Worcester Polytechnic Institute)
This publication has been funded in part with U.S. Federal funds from the U.S. Department of Health and Human Services, National Institute on Disability, Independent Living, and Rehabilitation Research (NIDILRR), initially under contract number ED-OSE-10-C-0067, then under contract number HHSP23301500054C, and now under HHS75P00120P00168. The content of this publication does not necessarily reflect the views or policies of the U.S. Department of Health and Human Services or the U.S. Department of Education, nor does mention of trade names, commercial products, or organizations imply endorsement by the U.S. Government.
- [RFC2119]
- Key words for use in RFCs to Indicate Requirement Levels. S. Bradner. IETF. March 1997. Best Current Practice. URL: https://www.rfc-editor.org/rfc/rfc2119
- [RFC8174]
- Ambiguity of Uppercase vs Lowercase in RFC 2119 Key Words. B. Leiba. IETF. May 2017. Best Current Practice. URL: https://www.rfc-editor.org/rfc/rfc8174
- [ATAG20]
- Authoring Tool Accessibility Guidelines (ATAG) 2.0. Jan Richards; Jeanne F Spellman; Jutta Treviranus. W3C. 24 September 2015. W3C Recommendation. URL: https://www.w3.org/TR/ATAG20/
- [ISO_9241-391]
- Ergonomics of human-system interaction—Part 391: Requirements, analysis and compliance test methods for the reduction of photosensitive seizures. International Standards Organization. URL: https://www.iso.org/standard/56350.html
- [SRGB]
- Multimedia systems and equipment - Colour measurement and management - Part 2-1: Colour management - Default RGB colour space - sRGB. IEC. URL: https://webstore.iec.ch/publication/6169
- [UAAG20]
- User Agent Accessibility Guidelines (UAAG) 2.0. James Allan; Greg Lowney; Kimberly Patch; Jeanne F Spellman. W3C. 15 December 2015. W3C Working Group Note. URL: https://www.w3.org/TR/UAAG20/
- [WCAG22]
- Web Content Accessibility Guidelines (WCAG) 2.2. Michael Cooper; Andrew Kirkpatrick; Alastair Campbell; Rachael Bradley Montgomery; Charles Adams. W3C. 12 December 2024. W3C Recommendation. URL: https://www.w3.org/TR/WCAG22/
Referenced in:
Referenced in:
Referenced in:
- § 2.1.2 Media alternatives
- § 2.1.2.1 Descriptive transcripts
- § 2.1.2.2 Equivalent media alternatives
- § 2.1.2.6 Language identified
- § 2.1.2.7 Media alternatives in all spoken languages
- § 2.1.2.8 Meaningful sounds
- § 2.1.2.9 Meaningful visual information
- § 2.1.4 Captions
- § 2.1.4.1 Captions prerecorded
- § 2.1.4.2 Captions live
- § 2.1.4.7 Visual indicators in 360-degree space
- § 2.1.5.1 Audio descriptions prerecorded
- § 2.1.5.2 Audio description timing
- § 2.1.5.4 Extended audio description
- § 2.1.5.5 Audio description volume
- § 2.4.1.1 All elements keyboard actionable
- § 4. Glossary
Referenced in:
Referenced in:
Referenced in:
Referenced in:
Referenced in:
Referenced in:
- § 2.1.1 Image alternatives
- § 2.1.1.2 Equivalent text alternative
- § 2.1.2 Media alternatives
- § 2.1.2.6 Language identified
- § 2.1.2.7 Media alternatives in all spoken languages
- § 2.1.2.8 Meaningful sounds
- § 2.1.2.9 Meaningful visual information
- § 2.1.3 Non-text alternatives
- § 2.1.4 Captions
- § 2.1.4.1 Captions prerecorded
- § 2.1.4.2 Captions live
- § 2.1.4.3 Captions avoid obstruction
- § 2.1.5 Audio descriptions
- § 2.1.5.1 Audio descriptions prerecorded
- § 2.1.5.2 Audio description timing
- § 2.1.5.3 Audio descriptions live
- § 2.1.7 Single sense
- § 2.2.1.1 Readable blocks of text
- § 2.2.1.2 Readable text style
- § 2.2.1.3 Adjustable blocks of text
- § 2.2.1.5 Content and functionality not lost with text adjustment
- § 2.2.1.6 Readable blocks of text (enhanced)
- § 2.2.1.7 Readable text style (enhanced)
- § 2.2.2 Text-to-speech
- § 2.2.3.3 Common words
- § 2.2.3.7 Visual aids
- § 2.2.3.8 Summaries
- § 2.2.3.10 Review process
- § 2.2.3.11 Clear language style guide
- § 2.2.3.12 Clear language training policy
- § 2.2.3.13 Plain language review (2)
- § 2.3.3 Navigating content
- § 2.4.1 Keyboard interface input
- § 2.4.1.2 All content keyboard accessible
- § 2.4.1.8 Relevant tab order keyboard focus
- § 2.4.2.1 Logical keyboard focus order
- § 2.4.2.3 Comparable keyboard effort
- § 2.4.5.1 Change keyboard focus with pointer device
- § 2.4.5.2 Content on hover or keyboard focus
- § 2.4.5.4 Input method flexibility
- § 2.5.1.2 Error cause identified
- § 2.5.1.5 Error suggestion
- § 2.5.1.6 Error visibility
- § 2.6.1 Avoid physical harm
- § 2.6.1.1 Avoid audio shifting
- § 2.6.1.2 Avoid flashing
- § 2.6.1.3 Flashing warning
- § 2.6.1.4 No flashing
- § 2.6.1.5 Avoid visual motion
- § 2.6.1.6 Visual motion warning
- § 2.6.1.7 No visual motion
- § 2.6.1.8 Avoid motion from interaction
- § 2.7.1 Relationships
- § 2.7.3 Orientation
- § 2.7.4 Structure
- § 2.7.5.1 No obstructions
- § 2.7.5.2 Clearly-dismissible content overlays
- § 2.7.5.4 Consistent positioning
- § 2.9.2 Adequate time
- § 2.9.4.2 No misleading wording
- § 2.9.4.5 No misdirection
- § 2.10.1 Content source
- § 2.10.3.1 Agreement indicated
- § 2.10.3.2 Comparable risk
- § 2.10.3.3 Risk statements
- § 2.10.4.1 Inclusive data set
- § 2.10.4.2 No harm from algorithms
- § 2.11.2 Feedback
- § 2.12.2 Adjustable viewport
- § 2.12.3 Transform content
- § 2.12.7 User agent support
- § 3.1 Conformance
- § 3.1.2 Defining conformance scope
- § 4. Glossary (2) (3) (4) (5) (6) (7) (8) (9)
Referenced in:
Referenced in:
- § 2.1.4.4 Consistent captions
- § 2.2.3.11 Clear language style guide
- § 2.4.3.1 Pointer cancellation
- § 2.4.3.3 Consistent pointer cancellation
- § 2.4.3.4 Pointer pressure alternative
- § 2.4.3.5 Pointer speed alternative
- § 2.4.4.1 Speech alternative
- § 2.4.5.3 Gesture alternative
- § 2.4.5.4 Input method flexibility
- § 2.4.5.5 Use without body movement
- § 2.6.1.3 Flashing warning
- § 2.6.1.6 Visual motion warning
- § 2.7.5.1 No obstructions
- § 2.9.4.3 No artificial pressure
Referenced in:
Referenced in:
Referenced in:
Referenced in:
Referenced in:
Referenced in:
Referenced in:
Referenced in:
Referenced in:
- § 2.3.1 Keyboard focus appearance
- § 2.4.1.7 User control of keyboard focus
- § 2.4.1.8 Relevant tab order keyboard focus
- § 2.4.2.1 Logical keyboard focus order
- § 2.4.2.2 Preserve keyboard focus
- § 2.4.5.1 Change keyboard focus with pointer device
- § 2.4.5.2 Content on hover or keyboard focus
- § 2.7.5.2 Clearly-dismissible content overlays
Referenced in:
- § 2.1.2.3 Findable media alternatives
- § 2.1.2.4 Controllable media alternatives
- § 2.1.5.5 Audio description volume
- § 2.1.5.6 Audio description language
- § 2.4.3.1 Pointer cancellation
- § 2.4.5.2 Content on hover or keyboard focus
- § 2.6.1.3 Flashing warning
- § 2.10.2.1 Notify about sensitive information
Referenced in:
- § 2.1.2.2 Equivalent media alternatives
- § 2.1.2.3 Findable media alternatives
- § 2.1.2.4 Controllable media alternatives
- § 2.1.2.5 Speakers identity
- § 2.1.2.6 Language identified
- § 2.1.2.7 Media alternatives in all spoken languages
- § 2.1.2.8 Meaningful sounds
- § 2.1.2.9 Meaningful visual information
- § 2.1.2.10 Media alternatives style guide
- § 2.1.2.11 Testing media alternatives with users
- § 2.1.2.12 Video player
- § 2.1.2.13 Reviewed by content authors
Referenced in:
- § 2.3.2 Pointer focus appearance
- § 2.4.1.1 All elements keyboard actionable
- § 2.4.1.2 All content keyboard accessible
- § 2.4.2.3 Comparable keyboard effort
- § 2.4.3 Pointer input
- § 2.4.3.1 Pointer cancellation
- § 2.4.3.2 Simple pointer input
- § 2.4.3.3 Consistent pointer cancellation
- § 2.4.3.4 Pointer pressure alternative
- § 2.4.3.5 Pointer speed alternative
- § 2.4.5.1 Change keyboard focus with pointer device
- § 2.4.5.2 Content on hover or keyboard focus
- § 2.4.5.4 Input method flexibility
Referenced in:
- § 2.4.1.6 No keyboard trap
- § 2.4.2.2 Preserve keyboard focus
- § 2.5.1.4 Error cause in notification
- § 2.5.2.3 Validate as you go
- § 2.9.4.1 Changes in agreement
- § 2.9.4.3 No artificial pressure
- § 2.9.4.4 No hidden preselections
- § 2.10.3.2 Comparable risk
- § 3.1.2 Defining conformance scope (2) (3)
- § 4. Glossary (2)
Referenced in:
- § 2.1.2 Media alternatives
- § 2.1.2.1 Descriptive transcripts
- § 2.1.2.2 Equivalent media alternatives
- § 2.1.2.12 Video player
- § 2.1.4.3 Captions avoid obstruction
- § 2.1.5 Audio descriptions
- § 2.1.5.1 Audio descriptions prerecorded
- § 2.1.5.2 Audio description timing
- § 2.1.5.3 Audio descriptions live
- § 2.1.5.4 Extended audio description
- § 2.1.5.5 Audio description volume
- § 4. Glossary
Referenced in:
- § 2.1.4.7 Visual indicators in 360-degree space
- § 2.4.1.5 Keyboard navigable if responsive
- § 2.4.1.6 No keyboard trap
- § 2.4.1.7 User control of keyboard focus
- § 2.4.2.2 Preserve keyboard focus
- § 2.4.3.3 Consistent pointer cancellation
- § 2.4.5.1 Change keyboard focus with pointer device
- § 2.4.5.2 Content on hover or keyboard focus
- § 2.5.1.4 Error cause in notification
- § 3.1.2 Defining conformance scope (2) (3)
- § 4. Glossary (2) (3) (4)