Cognitive Web Accessibility Assessments: Musings About Validity

Results of my cognitive Web accessibility assessments, for the 12 sites I have evaluated to date, show an average score of 5 out of 10 points.  That datum is the launch point for this post, in which I consider the assessments’ consistency, accuracy, and related implications.

I hope the average score improves as I increase the sample size of assessed sites, but that is unlikely if I encounter more sites like that of The International Dyslexia Association.  It is the first Web site for which no points were scored.

I think the zero-point score is an accurate portrayal of the site’s accessibility.  Comparing it to the two sites that scored all points, and to the other assessed sites, indicates to me that my assessment system is internally consistent.  It is obvious, for example, that the top scorers are much more accessible to people with cognitive disabilities than the sites with five points or fewer.

I suspect the top scores were achieved because the two sites were designed for people with intellectual disabilities and because my assessments are for the broader, perhaps-more-capable group of people with cognitive disabilities.

Given my experiences observing people with intellectual disabilities navigate Web sites, I am concerned that even the efforts of the top-scoring sites may not make them truly accessible, even in relative terms.  I don’t know how my assessments could better judge such sites, but that is my main interest.

Extensive testing by people with intellectual disabilities may be a good indicator of accessibility.  However, there is such a range of abilities within that population that I am unsure any Web site could be accessible to a significant portion of it.  In practice, this may mean I must define criteria for the minimum abilities required and try to make the future Clear Helper site accessible to people who meet them.

Note: This post is part of a continuing series on Cognitive Web Accessibility Assessments.