
Easy Cell: Mobile Phones for the Hearing Impaired

Researchers develop software that compresses data, allowing real-time video to be used to convey sign language over mobiles. But will U.S. cell phone companies sign on?


The convenience and relatively low cost of cell phones in the U.S. have made them an indispensable part of life. Unless, of course, you are one of the roughly 37 million hearing-impaired adults living in this country. But researchers at the University of Washington (U.W.) in Seattle are hoping to change that by developing software that lets callers communicate on their mobile phones in sign language via real-time video rather than being limited to text messaging.

The goal: a hearing-impaired person will be able to make or answer calls by streaming video over the cellular network to exchange sign language messages. Some countries, including Sweden and Japan, where 3G (third-generation, higher-bandwidth) networks are ubiquitous, are already experimenting with these capabilities; not so the U.S., where wireless broadband coverage has only recently begun to rival what is available in Europe. Translation: most U.S. cell phone networks lack the bandwidth to handle streaming video. As a result, cell phone makers offer few models with enough processing power to run video smoothly, and the digital cameras in U.S. cell phones typically sit on the opposite side of the handset from the display screen, which makes it impossible to see the person on the other end of the line while signing.

U.W. electrical engineering professor Eve Riskin and her colleagues are focusing on data compression, the key to letting video flow across a slower network and display on less powerful mobile phones. Riskin says the group's MobileASL (for American Sign Language) software compresses video data to nearly twice the degree previously possible, "so that it takes up less space and gums up the network less."

The researchers achieve this high level of compression through what they call "skin mapping" algorithms, which analyze pixel color to distinguish a person's face and hands in an image. Because those are the parts of an image that matter most for conveying sign language, the data representing them are sent in full, at greater resolution, whereas some of the data packets representing the background and other inanimate portions of the video frame are dropped, leaving them at lower resolution.

"To save battery life, reduce the number of packets sent over the network, and increase the quality with respect to sign language," says Jaehong Chon, a U.W. graduate student participating in the project, "we have developed ROI (region of interest) encoding based on a skin map and frame dropping if a user isn't signing."

Riskin and her colleagues developed MobileASL with a $460,000-plus grant from the National Science Foundation (NSF) to create algorithms for delivering higher-quality video at lower bandwidths. "We wondered if we could do this on a cell phone," she says. The researchers posted a video on YouTube demonstrating MobileASL phones connecting over a Wi-Fi network, because current cellular networks cannot support streaming video.

The researchers have been testing their MobileASL software on two HTC (High Tech Computer) Corporation TyTN II phones since February. The phones—available through AT&T in the U.S. but much more commonly used in Europe—cost $900 each and have the video camera positioned on the same side as the video screen, which allows callers to speak in sign language while observing the reactions of the person on the other end of the line. MobileASL, which runs on mobile phones that use Microsoft's Windows Mobile operating system, can automatically detect when a caller is using sign language and increase the speed at which it sends and receives data over the network. (When no one is signing and the majority of images picked up by the digital camera are inanimate, the software slows the bit rate to conserve battery power.)
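The variable-frame-rate behavior described above can be illustrated with a simple activity detector: compare successive frames and, when little changes because no one is signing, drop frames and lower the target bit rate to conserve battery and bandwidth. The threshold and frame rates in this Python sketch are illustrative assumptions, not MobileASL's actual parameters.

```python
import numpy as np

# Illustrative parameters; the real MobileASL thresholds and rates differ.
ACTIVITY_THRESHOLD = 8.0   # mean absolute luma change that counts as "signing"
SIGNING_FPS = 12           # frame rate while the caller appears to be signing
IDLE_FPS = 1               # frame rate while the scene is mostly static

def is_signing(prev_gray: np.ndarray, curr_gray: np.ndarray) -> bool:
    """Treat large frame-to-frame change as evidence that the caller is signing."""
    diff = np.abs(curr_gray.astype(np.int16) - prev_gray.astype(np.int16))
    return float(diff.mean()) > ACTIVITY_THRESHOLD

def choose_frame_rate(prev_gray: np.ndarray, curr_gray: np.ndarray) -> int:
    """Raise the frame rate (and with it the bit rate) only while signing is
    detected, saving battery power and network capacity the rest of the time."""
    return SIGNING_FPS if is_signing(prev_gray, curr_gray) else IDLE_FPS
```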

It is possible to hold a MobileASL phone in one hand and sign with the other, but the best way to use the phone is by placing it on a flat surface to prevent movements that might degrade quality. "You want as little motion as possible because motion affects the amount of time it takes to process the data and requires more bandwidth," says Carrie Heeter, a Michigan State University professor of telecommunication, information studies and media. Heeter was part of the school's research team that in 1996 launched the ASL browser, a Web site that offers thousands of videos that translate words into ASL signs.

Riskin will use a second NSF grant, this one worth $450,000, to conduct a broader study next year that will include both university students and ASL-fluent participants recruited from the greater Seattle area. Each of the 20 participants will receive a phone, most likely an HTC TyTN II, so the researchers can study how, and how often, they use its sign-language capabilities. Although participants at the university will make calls over the school's Wi-Fi network, the researchers have not yet determined how they will provide network service to participants off campus.

The researchers would like to make their software available to the hearing impaired within six months via the Web and, longer term, to convince cellular network providers such as AT&T, Verizon Wireless, Sprint or T-Mobile to offer MobileASL as part of their services. The best way for MobileASL to take hold in the U.S., Riskin says, would be for a cell phone company to build the algorithm into its phones and carry the ASL video over its network.