- the promise of a text-comprehension parallel between AI and people with intellectual disabilities;
- how AI-driven Web text simplification will benefit other populations, such as non-native language speakers; and
- my work to make sure people with intellectual disabilities and/or autism are not left out of online education.
Amazon Machine Learning Research Awards generously sponsored my colleagues and me to participate in last week’s Amazon re:MARS Conference. It was a global artificial intelligence (AI) event focused on Machine Learning, Automation, Robotics, and Space.
The conference handled accessibility very well. I was assigned an employee who guided me everywhere and was wonderful. The conference website was accessible and easy to navigate. When I identified accessibility problems with the mobile app and with the SageMaker tools, Amazon personnel immediately assured me they would be fixed.
The sponsorship included participation in the re:MARS VIP Leadership Networking Reception. I was honored to speak with members of Amazon leadership as well as senior researchers from industry and academia. Our conversations covered:
- my AI-driven, Web text simplification research;
- AI fairness for people with disabilities; and
- developing an Alexa skill for DisabilityInfo.org.
I will participate this week in a Workshop on Diversity, Accessibility, and Inclusion in Library Systems hosted by the MIT Center for Research on Equitable and Open Scholarship.
I plan to discuss my AI Web text simplification research and AI fairness for people with disabilities. More about AI fairness soon.
I will soon present part of my AI-Driven Web Text Simplification research. The presentation:
- is titled “Creating Simple Web Text for People with ID to Train AI”;
- will be at the CSUN Assistive Technology Conference, the largest such conference in the world; and
- will focus on a pilot study my partners and I recently completed.
We tested if people with intellectual disabilities understand Web text simplified with plain-language standards. (Spoiler: They do!)
We are operationalizing plain-language standards to develop:
- a reliable, easy-to-use method for human editors to create simple text; and
- algorithms for AI to recognize and to create simple text.
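As an illustration only, two common plain-language rules (short sentences, familiar words) can be turned into a programmatic check. The thresholds, the tiny word list, and the function names below are my own, not the project's actual standards:

```python
# Illustrative only: two plain-language rules (short sentences,
# familiar words) turned into a programmatic check. The thresholds
# and the tiny word list are hypothetical, not the project's standards.
COMMON_WORDS = {
    "the", "a", "an", "is", "are", "was", "to", "people", "web",
    "text", "read", "make", "big", "page", "help", "use", "easy",
}

def sentence_lengths(text):
    """Count words in each period-delimited sentence."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    return [len(s.split()) for s in sentences]

def looks_simple(text, max_words=15, min_common_ratio=0.5):
    """True if every sentence is short and most words are familiar."""
    words = [w.strip(",.").lower() for w in text.split()]
    common = sum(1 for w in words if w in COMMON_WORDS)
    short = all(n <= max_words for n in sentence_lengths(text))
    return short and common / max(len(words), 1) >= min_common_ratio

print(looks_simple("The page is easy to read."))   # True
print(looks_simple("Operationalizing multifaceted readability "
                   "heuristics necessitates interdisciplinary effort."))  # False
```

A reliable human method and a machine-checkable rule set could share heuristics like these, though real standards cover far more (vocabulary lists, sentence structure, layout).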
For my AI-Driven Web Text Simplification research, I lead a coalition of corporate and academic partners. They include:
- INDEX Program, Eunice Kennedy Shriver Center, University of Massachusetts Medical School
- Artificial Intelligence Lab, University of Massachusetts Boston
- User Experience and Decision Making Lab, Worcester Polytechnic Institute
My goal is to make Web text so simple people understand it the first time they read it.
Text comprises the vast majority of Web content. Poor reading comprehension presents significant challenges to many populations, including people with cognitive disabilities, non‐native speakers, and people with low literacy.
Text simplification aims to reduce text complexity while retaining its meaning. Manual text simplification research has been ongoing for decades. Yet no significant effort has been made to automate text simplification except as a preprocessor for natural-language processing tasks such as machine translation and summarization.
In the short term, my partners and I are improving manual text simplification by creating effective, replicable methods for humans to produce it. We use national and international plain language standards. We conduct pilot studies to see if people comprehend our human-curated, simplified Web text better than typical Web text.
In the long term, my partners and I are developing artificial intelligence (AI) capabilities to produce simple Web text on a mass scale. We are training AI with enormous sets of aligned sentence pairs (typical/simple). We will soon start crowd-sourcing the generation of training data.
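To make the training-data idea concrete, here is a minimal sketch of what aligned (typical, simple) sentence pairs could look like. The file layout, column names, and example pairs are my own invention, not the project's actual data:

```python
# Hypothetical layout for aligned sentence pairs: one typical sentence
# and its simplified counterpart per row, tab-separated.
import csv
import io

ALIGNED_PAIRS = """typical\tsimple
The legislation was enacted in 1990.\tThe law was passed in 1990.
Utilize the elevator to access the upper level.\tUse the elevator to go up.
"""

def load_pairs(tsv_text):
    """Read (typical, simple) pairs from tab-separated text."""
    reader = csv.DictReader(io.StringIO(tsv_text), delimiter="\t")
    return [(row["typical"], row["simple"]) for row in reader]

for typical, simple in load_pairs(ALIGNED_PAIRS):
    print(typical, "->", simple)
```

Crowd-sourcing would then mean collecting many such rows, with the "simple" side written or verified by human editors.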
I will provide details in future blog posts.
I believe it is common knowledge that providing feedback while teaching is very important. In particular, positive reinforcement consequent to successful performance is essential for increasing the likelihood a skill will be acquired (that a behavior will occur again). As it is my intention to teach basic Web skills via the Web itself, tutorials must be designed so reinforcing feedback is provided automatically.
It is my hope to approximate on a simple level the sophisticated feedback features that Dr. Janet Twyman, who is guiding me in this project, has had built into software for teaching children to read. From the beginning, she has stressed to me the importance of detecting and reinforcing the pressing of the correct key sequence. I will post the details of this effort as the three of us develop them.
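On a toy level, the detect-and-reinforce loop might look like the sketch below; the key names and feedback messages are placeholders of my own, not Dr. Twyman's actual design:

```python
# Toy sketch of automatic reinforcing feedback: compare the learner's
# key sequence to the target and respond immediately. Key names and
# messages are placeholders.
CORRECT_SEQUENCE = ["ctrl", "+"]   # Command instead of Ctrl on a Mac

def feedback(pressed_keys):
    """Reinforce a correct sequence; otherwise prompt gently."""
    if pressed_keys == CORRECT_SEQUENCE:
        return "Great job! The page just got bigger."
    return "Not quite. Hold Ctrl, then press the plus key."

print(feedback(["ctrl", "+"]))
print(feedback(["+", "ctrl"]))
```

The essential property is immediacy: the reinforcing message follows the correct key sequence with no delay, so the behavior and its consequence stay paired.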
Notes: This post is the fourth in a series about Teaching Web Page (Text) Enlargement. Please post a comment with any suggestions.
The site teaches exclusively via videos. Among the 50+ videos now on the site, “How to make text bigger (or smaller)”, embedded below, is in the first group displayed on the home page. My guess is that is because making text bigger is one of the skills parents (often older adults whose vision may not be ideal) most commonly ask to be taught.
The video starts by reassuring the audience that the task is “super easy”. The skill is then succinctly defined. It is taught exactly as I intend to teach it: the audience is shown how to use a two-key combination within a Web browser. There is perhaps one main difference between this video and the one I hope to produce for people with cognitive disabilities: I intend to show an image of a keyboard, focusing specifically on how to press the correct two keys, in sequence, to make a Web page (text) larger.
- This post is the third in a series about Teaching Web Page (Text) Enlargement. Next up: “Teaching People How To Enlarge Web Pages: Providing Feedback”.
- I appreciate that Google captioned all of the videos.
Many people need to enlarge Web pages to better see information. People with cognitive disabilities often require larger text sizes to better comprehend information as well.
To develop a best practice for teaching a Web page (text) enlargement skill, I will conduct in-person teaching with groups of people with cognitive disabilities. Specifically, I intend to teach people to use a keyboard with a Web browser to enlarge Web pages. Many browsers will enlarge pages when two keys are pressed: the plus key and the Control key (Windows) / Command key (Mac).
Given a Web page that may contain images, but must contain text, learners will press two keys to enlarge page content.
Learners will open a novel Web page and, without instruction or prompting, enlarge its contents.
Component Skills To Be Taught
- locate the correct keys (2)
- hold down one key for at least 3 seconds with sufficient force to be recognized by the computer
- while holding down the first key, tap the second key: press it with sufficient force to be recognized by the computer, then immediately release it
Completing Sequential Steps
- follow a multi-step chain of behaviors
- identify the start and end points of the behavior chain
- repeat the behavior chain
Learners must be able to:
- respond to text-, audio-, and/or video-based instruction
- press keys with their fingers or with equivalent assistive technology
- press only the correct keys
- open a Web page with Internet Explorer
Computers must be:
- attached to a monitor and a keyboard, or to equivalent assistive technology
- using Internet Explorer as the default Web browser
- connected to the Internet
- This post is the second in a series about Teaching Web Page (Text) Enlargement. Next up: “Google Video Teaches How To Make Text Bigger”.
- On the future Clear Helper Web Site, I intend to teach all skills via the Web itself.
- Please post a comment with any suggestions.
The following is a synopsis of work on creating multimodal summaries of complex sentences. A poster of that work, conducted at The Hajim School of Engineering and Applied Sciences at The University of Rochester, is the source of all the quoted information in this blog post. I plan to employ this approach on the future Clear Helper Web Site.
We propose multimodal summarization (MMS) of complex sentences. It gives readers the main idea of a sentence using pictures and compressed text, structured according to the simplified text.
The general steps in the MMS approach are:
- Identify both the main idea of the sentence and related entities and use them to create a compressed summary.
- Extract pictures for the compressed summary.
- Add structure to the pictures and text.
Input sentence: In 1492, Genoese explorer Christopher Columbus, under contract to the Spanish crown, reached several Caribbean islands, making first contact with the indigenous people.
Identify event and related entities: In 1492, Genoese explorer Christopher Columbus, under contract to the Spanish crown, reached several Caribbean islands, making first contact with the indigenous people.
Extract picture and add structure:
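The three MMS steps, applied to the example sentence, can be sketched in toy form. The capitalized-words entity heuristic and the picture lookup table below are stand-ins I made up, not the authors' actual techniques:

```python
# Toy sketch of the three MMS steps. The entity heuristic and the
# picture library are stand-ins, not the poster's actual methods.
import re

PICTURE_LIBRARY = {            # hypothetical entity -> image mapping
    "Christopher Columbus": "columbus.png",
    "Caribbean": "islands.png",
    "Spanish": "crown.png",
}

def find_entities(sentence):
    """Step 1 (toy): runs of capitalized words stand in for real
    entity recognition."""
    return re.findall(r"[A-Z][a-z]+(?: [A-Z][a-z]+)*", sentence)

def extract_pictures(entities):
    """Step 2 (toy): look up a picture for each recognized entity."""
    return {e: PICTURE_LIBRARY[e] for e in entities if e in PICTURE_LIBRARY}

def add_structure(sentence, pictures):
    """Step 3 (toy): order pictured entities as they appear in the text."""
    return sorted(pictures.items(), key=lambda pair: sentence.find(pair[0]))

SENTENCE = ("In 1492, Genoese explorer Christopher Columbus, under contract "
            "to the Spanish crown, reached several Caribbean islands, making "
            "first contact with the indigenous people.")

for entity, picture in add_structure(SENTENCE, extract_pictures(find_entities(SENTENCE))):
    print(entity, "->", picture)
```

The real system pairs the compressed text with the images; this sketch only shows the ordering of pictured entities.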
Naushad UzZaman, Jeffrey P. Bigham and James F. Allen. “Multimodal Summarization for People with Cognitive Disabilities in Reading, Linguistic and Verbal Comprehension” poster presented at “All Together Now: The Power of Partnerships In Cognitive Disability & Technology.” Tenth Annual Conference of The Coleman Institute for Cognitive Disabilities. Westminster, Colorado. 21 October 2010.
Note: No endorsement of The Hajim School of Engineering and Applied Sciences at The University of Rochester is intended or implied.