AI Empowerment & Fairness, Japan

For the American Expert Speaker Program of the U.S. Department of State, I will give 3 presentations and participate in roundtable discussions in Tokyo, Fukuoka, and Beppu, Japan, from September 7 through September 23, 2019.

For two of them, I will focus on Artificial Intelligence (AI) empowerment and fairness for people with disabilities, especially how AI empowers companies to hire a more diverse workforce. This will include the work of my colleague, Frances West, author of Authentic Inclusion™ Drives Disruptive Innovation.

For one of them, I will focus on the functions and activities of the Eunice Kennedy Shriver Center and its INDEX Program, which I direct, as part of a larger discussion about independent living, the Internet of Things, and AI for people with disabilities. This will include my EasyText.AI research.

These activities are arranged and sponsored by the:

I am greatly honored that my presentations will be introduced, and/or the discussions moderated, by the esteemed:

  • Ms. Kelsey De Rinaldis, Program Development Officer, Public Affairs Section, U.S. Embassy Tokyo
    • Audience: 5 to 10 NPO representatives, chief staff members of business entities, and researchers who promote the use of AI for accessibility.
  • Dr. Ken’ichiro Takashiba, Director of the Joint Surgery Center, Fukuoka Mirai Hospital / Vice-Chairperson, Fukuoka Triathlon 2019
    • Audience: about 60 rehabilitation and medical staff and doctors.
  • Mr. Hidekazu Goto, Chairperson of the Board of Directors, NPO Center for Independent Living Support Oita / International Visitor Leadership Program in 2018 on Accessibility and Inclusion
    • Audience: about 30 support staff, persons with disabilities, local government officials and the general public.

I am indebted to the following people, who arranged all of this for me and/or will be assisting me.

I am excited to learn from Japanese experts including people with disabilities living independently in Japan.

AI and Disability Interview

AXS Chat recently posted to YouTube an interview with me about my artificial intelligence (AI) research and work for people with disabilities. I talk, in part, about:

  • the promise of a text-comprehension parallel between AI and people with intellectual disabilities;
  • how AI-driven Web text simplification will benefit other populations, such as non-native language speakers; and
  • my work to make sure people with intellectual disabilities and/or autism are not left out of online education.

I thank the AXS Chat members, Neil Milliken, Debra Ruh, and Antonio Santos, for their tireless work to inform the world about inclusion and technology.

Amazon re:MARS Accessibility

Amazon Machine Learning Research Awards generously sponsored my colleagues and me to participate in last week’s Amazon re:MARS Conference. It was a global artificial intelligence (AI) event focused on Machine Learning, Automation, Robotics, and Space.

The conference did a great job with accessibility. I was assigned an employee who guided me everywhere and was just wonderful. The conference website was accessible and easy to navigate. When I identified accessibility problems with the mobile app and with SageMaker tools, Amazon personnel immediately assured me they would be fixed.

The sponsorship included participation in the re:MARS VIP Leadership Networking Reception. I was honored to speak with members of Amazon leadership as well as senior researchers from industry and academia.

We discussed:

  • my AI-driven, Web text simplification research;
  • AI fairness for people with disabilities; and
  • developing an Alexa skill for DisabilityInfo.org.

AI Web Text Simplification: CSUN 2019

I will soon present part of my AI-Driven Web Text Simplification research at the CSUN Center on Disabilities conference.

My talk:

We tested whether people with intellectual disabilities understand Web text simplified with plain-language standards. (Spoiler: They do!)

We are operationalizing plain-language standards to develop:

  • a reliable, easy-to-use method for human editors to create simple text; and
  • algorithms for AI to recognize and to create simple text.
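As a hedged illustration of what "recognizing simple text" might involve, the sketch below scores text with the standard Flesch Reading Ease formula, using a crude vowel-group syllable counter. The formula choice, the 60-point threshold, and the function names are illustrative assumptions, not the project's actual method.

```python
import re

def count_syllables(word):
    # Crude heuristic: each run of vowels approximates one syllable;
    # every word counts as at least one syllable.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    # Standard formula: 206.835 - 1.015*(words/sentence) - 84.6*(syllables/word).
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

def looks_plain(text, threshold=60.0):
    # Higher scores mean easier text; 60+ is conventionally "plain English".
    return flesch_reading_ease(text) >= threshold
```

Short declarative sentences score far higher than dense, nominalized prose, which is the kind of signal an AI classifier for simple text would need to learn in a much richer form.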

AI Web Text Simplification: Partners

For my AI-Driven Web Text Simplification research, I lead a coalition of corporate and academic partners. They include:

AI-Driven Web Text Simplification: Intro

Research Goal

Make Web text so simple that people understand it the first time they read it.

Background

Text comprises the vast majority of Web content. Poor reading comprehension presents significant challenges to many populations, including people with cognitive disabilities, non‐native speakers, and people with low literacy.

Text simplification aims to reduce text complexity while retaining its meaning. Manual text simplification research has been ongoing for decades. Yet no significant effort has been made to automate text simplification except as a preprocessor for natural-language processing tasks such as machine translation and summarization.

Short-Term Approach

In the short term, my partners and I are improving manual text simplification by creating effective, replicable methods for humans to produce it. We use national and international plain language standards. We conduct pilot studies to see if people comprehend our human-curated, simplified Web text better than typical Web text.

Long-Term Approach

In the long term, my partners and I are developing artificial intelligence (AI) capabilities to produce simple Web text on a mass scale. We are training AI with enormous sets of aligned sentence pairs (typical/simple). We will soon start crowd-sourcing the generation of training data.
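As a sketch of what such aligned training data might look like, the snippet below represents typical/simple sentence pairs and applies two cheap sanity checks before a crowd-sourced pair would enter a training set. The sentence pairs, thresholds, and function names are invented for illustration and are not the project's actual pipeline.

```python
import json
from difflib import SequenceMatcher

# Hypothetical aligned sentence pairs (typical/simple), as a crowd-sourced
# batch might look; the sentences themselves are invented examples.
pairs = [
    {"typical": "The physician administered the medication expeditiously.",
     "simple": "The doctor gave the medicine quickly."},
    {"typical": "Participants were requested to complete the questionnaire.",
     "simple": "We asked people to fill out the form."},
]

def plausible_pair(typical, simple, min_overlap=0.2, max_len_ratio=1.2):
    # Cheap sanity checks before a pair enters a training set:
    # the simple side should not be much longer than the typical side,
    # and the two sides should share at least some surface material.
    length_ok = len(simple.split()) <= max_len_ratio * len(typical.split())
    overlap = SequenceMatcher(None, typical.lower(), simple.lower()).ratio()
    return length_ok and overlap >= min_overlap

clean = [p for p in pairs if plausible_pair(p["typical"], p["simple"])]

# Accepted pairs can be serialized one JSON object per line (JSONL),
# a common format for feeding aligned pairs to a training pipeline.
jsonl = "\n".join(json.dumps(p) for p in clean)
```

Automated filters like these matter most for crowd-sourced data, where individual contributions vary widely in quality.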

I will provide details in future blog posts.

Discussion of U.S. and Worldwide Issues of Cognitive Accessibility

Yesterday, Neil Milliken and Debra Ruh, members of the W3C's Cognitive and Learning Disabilities Accessibility Task Force, interviewed Andrew Imparato, Executive Director of the U.S. Association of University Centers on Disabilities, as part of their AXSchat series.

Watch the great, informative interview with Andy. Their wide-ranging discussion includes commentary on related U.S. policy and the history of its development.

The programs they discussed are the very ones in which I have worked, since 1991, at the Eunice Kennedy Shriver Center.

Webinar: Building Accessibility to Address Cognitive Impairments

Conducted By: Lisa Seeman, Chair of the W3C Cognitive Accessibility Task Force; and Rich Schwerdtfeger, Chief Technology Officer, Accessibility, for IBM Software Group, an IBM Distinguished Engineer and Master Inventor.

Sponsored by: International Association of Accessibility Professionals

Date: Wednesday, September 16, 2015

Time: 11:00 AM Eastern (UTC – 4 hours)

Length: 1.5 hours

Fee: $59 for members; $119 for nonmembers

The discount code is: LSRS20

From Rich: “It will cover much of the great work the Cognitive Accessibility task force is doing, including the roadmap.”

Register at: Building Accessibility to Address Cognitive Impairments

Multi-Modal Content Delivery for People with Cognitive Disabilities

Description of the Technologies

Textual content can be delivered in different modes to help people with cognitive disabilities comprehend it. These modes can include:

  • text to speech (TTS);
  • video;
  • text with contextually-relevant images;
  • text with consistent icons and graphics; and/or
  • text replaced or augmented by symbol sets.

Challenges for People with Cognitive Disabilities

Difficulty of text comprehension by people with cognitive disabilities ranges from minimal to extreme. They may comprehend most of a web page’s textual content, or none at all.

Effect of memory impairments

People with cognitive disabilities may have to:

  • read text several times to aid comprehension; and/or
  • repeat aloud or otherwise reiterate text multiple times to retain it.

Effect of impaired executive function

People with cognitive disabilities may not:

  • sufficiently process / understand text as they read it; and/or
  • understand text because they did not understand the text that preceded it.

Effect of attention-related limitations

People with cognitive disabilities:

  • may not attend to important concepts and relevant details; and/or
  • may be significantly distracted by extraneous text.

Effect of impaired language-related functions

People with cognitive disabilities:

  • may have comprehension problems exacerbated by text or instructions presented in a non-native language;
  • may not understand text that is written in their native language but uses idioms or expressions from a different culture.

Effect of impaired literacy-related functions

Some people with cognitive disabilities may not:

  • understand text unless it is literal and written plainly; and/or
  • comprehend text-only instructions in order to adequately follow them.

Effect of perception-processing limitations

Many people with cognitive disabilities may not:

  • comprehend text that can’t be enlarged without distortion;
  • recognize characters if they do not form words, or are shown in different fonts or styles, e.g., italics.

Effect of reduced knowledge

Some people with cognitive disabilities may not comprehend text because:

  • they do not have relevant background knowledge; and/or
  • background concepts are not explained simply.

Proposed Solutions

Text is written communication.

Textual content can be provided in a variety of alternative modes / formats, as described below. Ideally, people with cognitive disabilities should be able to choose to have content delivered in the mode they comprehend best. (This is an important component of the proposed Global Public Inclusive Infrastructure.)

Text To Speech

Text to speech (TTS) is hardware and/or software with which a device, such as a computer, produces human-sounding speech. Most TTS reads text aloud in a synthesized voice. Other TTS converts symbols, such as those employed by augmentative and alternative communication (AAC), into speech.

Many people with cognitive disabilities, such as dyslexia, may have the capacity to use a screen reader for text to speech (TTS). However, people with severe cognitive disabilities, such as intellectual disabilities, may require simpler TTS delivery.

A common approach is a TTS widget embedded in a website. An alternative is a CSS speech module, as proposed by the W3C. Advantages include that there is nothing to download and install, and that learning how to use a TTS widget or a CSS speech module is dramatically simpler than learning how to use a screen reader.

The TTS should be limited to relevant content, and exclude such text as found in menus, footers, and advertisements. Another helpful feature is the visual highlighting of text as it is read aloud. Such features may help people with cognitive disabilities who are overwhelmed even by simple TTS delivery.

Video

Video is a short film clip of moving visual images with or without audio.

To aid comprehension, video with audio should be captioned and/or have audio description, which provides important information not described or spoken in the main sound track. For example, see “Autistic spectrum, captions and audio description”.

WCAG 2.0 Success Criterion References:

  • 1.2.2 Captions (Prerecorded): Captions are provided for all prerecorded audio content in synchronized media, except when the media is a media alternative for text and is clearly labeled as such. (Level A)
  • 1.2.5 Audio Description (Prerecorded): Audio description is provided for all prerecorded video content in synchronized media. (Level AA)
  • 1.2.7 Extended Audio Description (Prerecorded): Where pauses in foreground audio are insufficient to allow audio descriptions to convey the sense of the video, extended audio description is provided for all prerecorded video content in synchronized media. (Level AAA)

Text With Contextually-Relevant Images

An image is a picture, a representation of a visual perception.

User research has shown that text comprehension is significantly enhanced when text is accompanied by contextually-relevant images. A picture of an object may be easier to recognize than a textual description of it.

Diagrams and charts can helpfully supplement textual descriptions of processes or flows. Employing HTML Canvas, as proposed by the W3C, diagrams and charts could be interactive and have additional descriptions of their parts to aid comprehension.

Text With Consistent Icons And Graphics

An icon is a small image or drawing that commonly represents a function. A graphic is a drawing of a visual perception or an abstract concept, or is otherwise a representation of an object or an idea.

Text accompanied by consistent iconography helps convey meaning, such as by associating discrete textual passages with each other. Similarly, a pie-chart graphic may convey meaning more easily than a table of statistics.

Text Replaced Or Augmented By Symbol Sets

A symbol is a sign that represents or suggests an idea, an object, an action, or a belief.

Symbol sets can be used for augmentative and alternative communication to support people with cognitive disabilities who have severe speech and/or language difficulties. This can include those who may understand speech, but who are unable to express what they wish to say, perhaps because of a physical disability. (It is common for people with cognitive disabilities to also have physical disabilities.) Ideally, interoperable symbol sets could be used to replace or to augment web-based text.

Ease-of-Use Ideas

Text should be written clearly and simply using the following attributes:

  • plain-language standards relevant to language and culture (examples for English include:
    • literal explanations, e.g., without jargon, slang, and metaphors;
    • active voice, not passive voice; and
    • no or minimal use of acronyms and abbreviations);
  • visual and organizational structures, e.g., headings and bulleted lists;
  • large font size; and
  • sans-serif font.

The first two attributes, especially the clear structures, will help comprehension via text-to-speech.
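Some of the English plain-language attributes above can be machine-checked, at least crudely. The sketch below flags acronyms and guesses at passive voice; both heuristics, and all of the names, are illustrative assumptions rather than an established checker, and the passive-voice guess misses irregular participles and produces false positives.

```python
import re

BE_FORMS = {"is", "are", "was", "were", "be", "been", "being"}

def find_acronyms(text):
    # Runs of two or more capital letters, e.g. "TTS" or "WCAG".
    return re.findall(r"\b[A-Z]{2,}\b", text)

def likely_passive(sentence):
    # Very crude passive-voice guess: a form of "to be" immediately
    # followed by a word ending in "ed" or "en".
    words = re.findall(r"[A-Za-z]+", sentence.lower())
    return any(a in BE_FORMS and (b.endswith("ed") or b.endswith("en"))
               for a, b in zip(words, words[1:]))
```

For example, "The form was completed by the user" trips the passive-voice guess, while its active rewrite "The user completed the form" does not.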

Notes

  • I welcome your suggestions. Please add a comment.
  • This is version 3 of an issue paper I wrote as part of my work as a member of the W3C’s Cognitive and Learning Disabilities Accessibility Task Force. It is a work-in-progress.
  • Other task force members who have contributed to the content so far are:
  • References in this document to “some people with cognitive disabilities” are to people with the lowest-functioning intellectual capacity, such as people with intellectual disabilities.