Automated testing vs. expert review
There are several products on the market that claim to be able to test websites for accessibility at the click of a button.
Whilst this is partly true, automated testing is limited by what these tools can and cannot realistically check. For the most part they look for “Yes” or “No” criteria to decide whether an item has passed a particular WCAG checkpoint.
- Can run over large-scale sites
- Cheaper to run
- Results can be time-consuming to collate and interpret
- Can only test “Yes” or “No” criteria against WCAG
- The quality of certain reported outcomes needs to be verified by a human
- Large-scale sites can generate huge reports that are not easy to follow
One of the simplest examples that highlights this is running an automated accessibility tool that checks against WCAG version 1.0 checkpoint 1.1:
“Provide a text equivalent for every non-text element (e.g., via ‘alt’, ‘longdesc’, or in element content).”
At this stage we need to ask what the tool can tell us about the presence of alt attributes. It can tell us whether an alt attribute is present and whether it contains a string, but it cannot tell us whether the alt text actually conveys the information in the image.
Imagine you had an audio player interface with buttons to control actions such as “Rewind”, “Pause”, “Play”, “Stop” and “Fast forward”, as in the picture below:
If the alt attribute on the “Play” button said “Rewind” and the user had images disabled, operating the interface would be extremely confusing. Yet this would pass an automated accessibility test: the tool would report the presence of an alt attribute, but it could not verify the quality of the text supplied.
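The limitation described above can be sketched in code. The following is a minimal, illustrative alt-attribute checker (the markup and file names are hypothetical, and this is far simpler than any real tool): it confirms that every image has a non-empty alt attribute, so the mislabelled “Play” button sails through.

```python
from html.parser import HTMLParser

class AltAttributeChecker(HTMLParser):
    """Flags <img> tags whose alt attribute is missing or empty.

    This mirrors what an automated test can realistically verify:
    presence of a string, not whether that string describes the image.
    """

    def __init__(self):
        super().__init__()
        self.failures = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attr_map = dict(attrs)
            alt = attr_map.get("alt")
            if alt is None or alt.strip() == "":
                self.failures.append(attr_map.get("src", "(no src)"))

# Hypothetical audio-player markup: the "Play" button is mislabelled
# with alt="Rewind", yet every image has a non-empty alt attribute.
markup = """
<img src="rewind.png" alt="Rewind">
<img src="play.png" alt="Rewind">
<img src="stop.png" alt="Stop">
"""

checker = AltAttributeChecker()
checker.feed(markup)
print(checker.failures)  # [] -- the automated check reports no failures
```

Only a human reviewer, comparing the alt text against the image itself, would catch that the second button is labelled incorrectly.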
An expert review will uncover far more accessibility issues with your site than simply running an automated tool.
Background analysis would be performed to identify a single example of each ‘unique’ page on the website, to maximise your testing coverage. The ‘unique’ pages would include all page templates, any form pages and any content pages containing non-standard content such as image maps.
- Focused on unique pages to maximise testing budget
- Will pick up a lot more errors than automated testing
- Human experience is invaluable when interpreting how a WCAG checkpoint has been implemented
- Reporting will be more specific with relevant screenshots and examples
- Easier to target remedial work with relevant, real-life suggestions for fixes
- Can discuss findings with tester
- Costs more than automated testing
- Requires human input
The tester would then work through each page against the WCAG checklist and be able to make an informed decision about whether or not items such as alt attributes are fit for purpose.
The tester can also report issues such as overlapping content when text sizes have been enlarged, whereas an automated test could not evaluate this criterion.
The tester would also manually inspect the source code and use lightweight tools such as the W3C online HTML and CSS validators, the AIS Accessibility Toolbar and the AIS Colour Contrast Analyser (based on the WAI algorithm).