A research team based at Texas A&M University has developed two key refinements that improve the experience of blind or visually impaired people who use iPads as touch-based reading devices. The iPads allow Individuals with Blindness or Severe Visual Impairment (IBSVI) to read in place at the touch of a fingertip by audio-rendering words they touch.
In a new paper, titled “Digital Reading Support for The Blind by Multimodal Interaction” (ICMI ’14: Proceedings of the 16th International Conference on Multimodal Interaction, ACM, New York, NY, 2014, pp. 439-446, ISBN 978-1-4503-2885-2, doi:10.1145/2663204.2663266), the researchers, Francis Quek, Texas A&M University professor of visualization, and Yasmine N. El-Glaly, assistant professor of computer science at Port Said University in Port Said, Egypt, and Taif University, Saudi Arabia, describe how blind or visually impaired readers drag a fingertip along virtual lines of text on the tablet’s touchscreen or an overlay while the tablet speaks the text of a book or article aloud. The researchers’ refinements improve the software’s accuracy in responding to a user’s touch input.
The researchers observe that these technologies help develop spatial cognition while reading, but they identify two interaction problems: users must move their fingers slowly or risk losing their place on the screen, and IBSVI may inadvertently wander between lines without realizing they have drifted.
“In this paper, we address these two interaction problems by introducing a dynamic speech-touch interaction model and an intelligent reading support system,” say Drs. Quek and El-Glaly. “With this model, the speed of the speech will dynamically change to keep pace with the user’s finger speed.”
The model proposed by the researchers has two components: 1) an Audio Dynamics Model, and 2) an off-line speech synthesis technique.
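As a rough illustration of how an audio dynamics model of this kind might couple speech rate to touch input, the Python sketch below estimates finger speed from successive touch samples and maps it to a clamped speech-rate multiplier. The class name, the 150 points-per-second reference speed, and the rate bounds are illustrative assumptions, not details taken from the paper.

    class AudioDynamicsSketch:
        """Hypothetical sketch: couple text-to-speech rate to finger speed.

        All names and numbers here are illustrative assumptions; they are
        not taken from Quek and El-Glaly's paper.
        """

        def __init__(self, base_rate=1.0, min_rate=0.5, max_rate=2.0):
            self.base_rate = base_rate   # nominal speech rate (1.0 = normal)
            self.min_rate = min_rate     # slowest playback still tolerated
            self.max_rate = max_rate     # fastest playback still intelligible
            self.last_sample = None      # (x, y, t) of the previous touch

        def on_touch(self, x, y, t):
            """Return a speech-rate multiplier for the latest touch sample."""
            if self.last_sample is None:
                self.last_sample = (x, y, t)
                return self.base_rate
            x0, y0, t0 = self.last_sample
            self.last_sample = (x, y, t)
            dt = max(t - t0, 1e-6)       # guard against zero time deltas
            speed = ((x - x0) ** 2 + (y - y0) ** 2) ** 0.5 / dt  # points/sec
            # Assume ~150 points/sec matches the nominal reading rate; scale
            # linearly and clamp so speech never becomes unintelligible.
            rate = self.base_rate * (speed / 150.0)
            return min(self.max_rate, max(self.min_rate, rate))

A reading loop would call on_touch with each incoming touch event and feed the returned multiplier to the speech synthesizer, so the voice slows or accelerates with the finger rather than playing at a fixed rate.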
“The intelligent reading support system predicts the direction of reading, corrects the reading word if the user drifts, and notifies the user with a sonic gutter to keep her from straying off the reading line,” the researchers note. “We tested the new audio dynamics model, the sonic gutter, and the reading support model in two user studies. The participants’ feedback helped us fine-tune the parameters of the two models. Finally, we ran an evaluation study comparing the reading support system to other VoiceOver technologies. The results favored the reading support system with its audio dynamics and intelligent reading support components.”
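The sonic gutter can be pictured as an audio cue whose loudness grows as the finger drifts vertically away from the current line. A minimal sketch, assuming a simple band around the line and a linear volume ramp (the band size, max_drift value, and function name are hypothetical, not from the paper):

    def sonic_gutter_level(touch_y, line_center_y, line_height, max_drift=30.0):
        """Hypothetical sketch of a sonic gutter: return a cue volume in [0, 1].

        Volume is 0 while the finger stays inside the current line's band
        and grows linearly with vertical drift once it leaves the band.
        The band size and max_drift value are illustrative assumptions.
        """
        drift = abs(touch_y - line_center_y) - line_height / 2.0
        if drift <= 0:
            return 0.0                      # still on the line: stay silent
        return min(drift / max_drift, 1.0)  # louder the farther the finger strays

    # Example: a finger 25 points below the center of a 20-point-tall line
    # sits 15 points outside the band, so the cue plays at half volume:
    # sonic_gutter_level(125.0, 100.0, 20.0) -> 0.5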
Dr. Quek observes that even if existing systems are adjusted to render words faster, interaction problems remain because of the often poor correspondence between a user’s finger speed on the tablet touchscreen and the speed at which the device pronounces words.
To address these issues, Drs. Quek and El-Glaly have developed iPad software that predicts the directional vector of a user’s finger on the tablet overlay, renders words aloud in sequence and in sync with the user’s finger speed across the tablet screen rather than at a default speed set by the application, and alerts readers when they stray from the reading line. The scientists’ research was supported by a $302,000 National Science Foundation grant.
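The direction prediction can be pictured as averaging recent finger displacements into a unit vector: a dominant positive x-component suggests forward reading along the line, while a large y-component suggests drift toward another line. A minimal sketch under those assumptions (the short-history windowing and function name are illustrative, not the authors' implementation):

    def predicted_direction(samples):
        """Hypothetical sketch: estimate the finger's direction of travel.

        `samples` is a short history of (x, y) touch points, oldest first.
        Returns a unit vector, or None if the finger has not moved enough
        to judge. The short-history windowing is an illustrative assumption.
        """
        if len(samples) < 2:
            return None
        # Net displacement over the window; equivalent to last minus first,
        # written as a sum over consecutive pairs for clarity.
        dx = sum(b[0] - a[0] for a, b in zip(samples[:-1], samples[1:]))
        dy = sum(b[1] - a[1] for a, b in zip(samples[:-1], samples[1:]))
        norm = (dx * dx + dy * dy) ** 0.5
        if norm < 1e-6:
            return None
        return (dx / norm, dy / norm)  # (~1, 0) suggests left-to-right reading

A system of the kind described could use such a vector to queue the next word along the line when the x-component dominates, and to trigger a cue like the sonic gutter when the y-component dominates.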
Dr. Quek envisions future developments in the software’s note-taking and highlighting capabilities, and says he will continue to develop the TEIL overlay and application with Sharon Lyn Chu, TEIL associate director, and Akash Sahoo, a graduate computer science student.
Drs. Quek and El-Glaly’s paper received an outstanding paper award at the 2014 International Conference on Multimodal Interaction, held at Bogazici University in Istanbul, Turkey. The conference is a global forum for multidisciplinary research on human-human and human-computer interaction, interfaces, and system development.
“This is a very selective conference with an 18 percent acceptance rate for oral presentations,” Dr. Quek notes.
Texas A&M University