Using AI to improve reading proficiency, accessibility

Reading is one of the best ways to gain knowledge and access information, but many people in the U.S. struggle with illiteracy. Thankfully, a team of engineers from The Ohio State University is helping make text content more accessible for all.

Photo by Kelly Sikkema on Unsplash
A team led by Computer Science and Engineering Assistant Professor Wei Xu is using artificial intelligence (AI) technology to make text content more accessible to school children and people with disabilities. The research is part of a collaborative project with Rochester Institute of Technology (RIT), funded by the National Science Foundation’s (NSF) Cyberlearning and Future Learning Technologies Program. Of the $767,600 in total funding, Ohio State received $375,732.

Low literacy is a persistent challenge for U.S. school children and the deaf and hard-of-hearing community.

“According to the National Assessment of Educational Progress released by the U.S. Department of Education, more than 65% of eighth graders in American public schools are not proficient in reading and writing,” explained Xu.

To address this problem, her team is developing computer programs that can automatically scan a webpage or other types of documents, find complex words or grammatical structures, and replace them to make the text easier to understand.
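At its simplest, this kind of lexical substitution can be sketched with a hand-built synonym table. The sketch below is purely illustrative: the word list and function names are stand-ins for the learned neural substitution models the team actually trains, which choose replacements from data rather than a fixed dictionary.

```python
import re

# Hypothetical mini-lexicon mapping complex words to simpler alternatives.
# A real system learns these substitutions from parallel complex/simple text.
SIMPLE_SYNONYMS = {
    "utilize": "use",
    "commence": "begin",
    "approximately": "about",
    "sufficient": "enough",
}

def simplify(text: str) -> str:
    """Replace complex words with simpler synonyms, preserving case and punctuation."""
    def replace(match: re.Match) -> str:
        word = match.group(0)
        simple = SIMPLE_SYNONYMS.get(word.lower())
        if simple is None:
            return word  # word is not in the lexicon; leave it unchanged
        # Preserve the capitalization of the original word
        return simple.capitalize() if word[0].isupper() else simple

    return re.sub(r"[A-Za-z]+", replace, text)

print(simplify("Commence when you have sufficient data."))
# -> Begin when you have enough data.
```

A dictionary lookup like this cannot handle context (e.g., "utilize" vs. a proper noun) or restructure grammar, which is exactly why the team moved to neural models that rewrite whole sentences.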

Prof. Wei Xu
“We are training neural network models that are similar to what Google has used for machine translation,” she said. “Instead of bilingual translation from, say, German to English, our models ‘translate’ complex English into simple English, using simple word alternatives and grammatical structures.”

In a user study conducted by collaborators at RIT, deaf and hard-of-hearing students also reported that they could learn new words when provided with synonyms for complex words and phrases on demand.

The potential impact of the work is far-reaching, even beyond the initial intended audiences.

“Ultimately, our technology will allow people to read any content they want online at their own preferred readability levels,” said Xu. “Easy access to science articles, government policies and medical texts will benefit a broad population. It can allow children to develop interests in STEM at a young age, and help people access useful resources regardless of their disability, education level or non-native language background.”

Xu’s team is in the second year of the exploratory research and has already significantly improved the fluency and accuracy of the texts its automatic system generates. The team is currently enhancing the neural network designs and training procedures to further reduce errors and produce more fluent text.

“We also plan to collaborate with Matt Huenerfauth and Lisa Elliot at RIT, who are human-computer interaction and deaf education experts, to develop a web-based user interface for deaf and hard-of-hearing students,” said Xu.

Ohio State’s contributions to the project have already gained several accolades. Google researchers included SARI, a metric Xu designed for evaluating text generation models (including automatic text simplification) by scoring a system’s output against both human references and the input sentence, in their official release of the Tensor2Tensor library, a collection of deep learning models particularly suited for neural machine translation and natural language generation.
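SARI’s core idea can be illustrated with a unigram-only, set-based sketch: it rewards the system for words it correctly adds, keeps, and deletes relative to both the input sentence and the human references. The function below is an illustrative simplification, not the official implementation; the full metric averages over n-grams up to length 4 and uses fractional counts across multiple references.

```python
def unigram_sari(source: str, output: str, references: list[str]) -> float:
    """Unigram, set-based sketch of SARI: average of add-F1, keep-F1,
    and delete-precision, judged against the source and the references."""
    src = set(source.lower().split())
    out = set(output.lower().split())
    ref_union = set()
    for r in references:
        ref_union |= set(r.lower().split())

    def f1(p: float, r: float) -> float:
        return 2 * p * r / (p + r) if p + r else 0.0

    # ADD: words in the output that were not in the source
    added = out - src
    ref_added = ref_union - src
    good_add = added & ref_added
    add_p = len(good_add) / len(added) if added else 1.0
    add_r = len(good_add) / len(ref_added) if ref_added else 1.0

    # KEEP: source words retained in the output
    kept = out & src
    ref_kept = src & ref_union
    good_keep = kept & ref_kept
    keep_p = len(good_keep) / len(kept) if kept else 1.0
    keep_r = len(good_keep) / len(ref_kept) if ref_kept else 1.0

    # DELETE: source words dropped from the output (precision only,
    # as in the original metric)
    deleted = src - out
    ref_deleted = src - ref_union
    good_del = deleted & ref_deleted
    del_p = len(good_del) / len(deleted) if deleted else 1.0

    return (f1(add_p, add_r) + f1(keep_p, keep_r) + del_p) / 3

score = unigram_sari("the cat perched on the mat",
                     "the cat sat on the mat",
                     ["the cat sat on the mat"])
print(f"SARI (unigram sketch): {score:.2f}")
# -> SARI (unigram sketch): 1.00
```

Because SARI credits deletions and additions as well as copying, a system cannot score well simply by echoing the input back, which makes it better suited to simplification than translation metrics like BLEU.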

Additionally, Ohio State graduate student researchers Mounica Maddela, Chao Jiang and Yang Zhong were lead authors on two papers related to the project, published and presented at top AI and natural language processing conferences. The team presented its findings at the 2018 Conference on Empirical Methods in Natural Language Processing in Brussels, Belgium, and the 2020 AAAI Conference on Artificial Intelligence in New York City.

by Meggie Biss, College of Engineering Communications