State of the Science:
Access to Telecommunication Technologies
Gregg Vanderheiden, Judith Harkins, Katherine Barnicle
Vanderheiden, G., Harkins, J. and Barnicle, K. (2002). State of the Science: Access to Telecommunication Technologies. In J.M. Winters, C. Robinson, R. Simpson and G. Vanderheiden (Eds.), Emerging and Accessible Telecommunications, Information and Healthcare Technologies (pp. 185-219). Arlington, VA: RESNA Press.
Note: All references to chapters contained in this document are references to Emerging and Accessible Telecommunications, Information and Healthcare Technologies.
This chapter represents a compilation of key issues in telecommunications access, current practice, recent advances in mainstream and disability access technologies, open issues and research questions. Much of this chapter is based on the presentations and discussions that took place at the Rehabilitation Engineering Research Center (RERC) on Telecommunication Access State of the Science Conference, held in October 2001 at Gallaudet University in Washington, DC (see Figure 1). Additional material for the chapter was gathered through discussions with consumers and experts in the field, on-going RERC activities and the literature.
A major purpose of this chapter is to provide an introduction to the area of telecommunications accessibility for new researchers interested in entering this area. The chapter therefore provides an overview of the major factors involved in providing access to telecommunication technologies, an assessment of the current state of the science as it affects access to technologies, and identification of some key areas, topics, and issues needing further research.
Each topic area is introduced with a discussion of key accessibility issues faced by consumers with disabilities. This is followed by a description of how recent advances in both mainstream and disability related technologies might affect this particular issue. Although space does not permit a lengthy discussion of these technologies, references and links provide additional information for those not familiar with them. The sections close with discussion of potential new ideas and strategies, as well as some of the challenges and research questions raised or related.
2. Telecommunication – Definition and Basic Concepts
Since people often use the word "telecommunication" differently, a few terms are defined first to avoid confusion in reading the chapter.
For example, the formal legal definition of telecommunications, as specified in the Telecommunications Act of 1996, is
"the transmission, between or among points specified by the user, of information of the user’s choosing, without change in the form or content of the information as sent and received."
This definition would cover phone conversations, fax, radio and TV broadcasts, walkie-talkies, and most of the function of the Internet itself (that part which simply connects and transports information). The relationship of the word telecommunications to the Internet, however, is highly controversial and complicated by regulatory issues and history.
In this chapter we will be restricting the discussion of telecommunications to essentially "teleconversation," "telemessaging" and issues around emergency alerting and communication.
Approximately 50 consumers, researchers, policy makers, and industry representatives participated in the State of the Science conference. Presentation and panel topics discussed over the course of the two-day meeting included Modality Translation Technologies, The Future of Wired and Wireless Communication Technologies, Automatic Speech Recognition in Telecommunications, Emergency Communications, Interface usability, and the roles of Public Policy and International Industry Standards in telecommunications access.
Consumers and non-profits:
Nancy Bloch, National Association of the Deaf, David Clark, Marathon Ventures, Janina Sajka, American Foundation for the Blind, Claude Stout, Telecommunications for the Deaf, Inc., Beth Wilson, Self Help for Hard of Hearing People (SHHH), Phillip Bravin, Communication Services for the Deaf.
Government agencies:
Steven Tingus, NIDRR, William Peterson, NIDRR, Richard Johnson, NIDRR, William LaPlant, U.S. Census Bureau, Pam Gregory, FCC, Jerry Stanshine, FCC, David Baquis, US Access Board.
James Fruchterman, The Benetech Initiative, Susan Palmer, Cingular Wireless, Brendan James Nyhan, Benetech, Megan Hayes, Alliance for Telecommunications Industry Solutions, Jordan Cohen, VoiceSignal, Inc., Peggy Shepard, TeleCommunication System/reachNET, Timothy Robinson, TeleCommunication System/reachNET, Debbie Fletter, America On Line, Martha S. Rocha, SBC Telecommunications, Inc., Ron Barnes, Cellular Telecommunications Industry Association, Rosemarie Garland-Thomson, Maryland Institute for Technology in the Humanities, Stephen Kohn, Mildred Recio, and Lisa Harrison Burke, Verizon.
NIDRR-sponsored Centers and other University Faculty and staff:
Matthew Bakke, Gallaudet University, Jack Winters, Marquette University, Kevin Caves, Duke University Medical Center, John Gill, Royal National Institute for the Blind, Dale Hatfield, University of Colorado at Boulder, Helena Mitchell, Georgia Center for Advanced Telecommunication Technology (GCATT), Paul Baker, Georgia Center for Advanced Telecommunication Technology (GCATT), James L. Mueller, J. L. Mueller, Inc., John Goldthwaite, Center for Assistive Technology and Environmental Access, Scott Haynes, ITTATL/Georgia Tech, Stephen Weiner, Gallaudet University, Sumi Helal, University of Florida, William C. Mann, University of Florida, Michael Lightner, University of Colorado, Robert Williges, Virginia Tech, Meloyde Batten-Mickens, Gallaudet Information Technology Services, Lloyd M. Ballinger, Gallaudet Information Technology Services.
Staff and Consultants of the RERC on Telecommunications Access:
Judy Harkins, Linda Kozma-Spytek, Paula Tucker, and Cary Barbin, Gallaudet University, Gregg Vanderheiden, Kitch Barnicle, and Gottfried Zimmermann, UW Trace R&D Center, Gunnar Hellström, Omnitor, Jim Tobias, Inclusive Technologies, Karen Peltz Strauss, KPS Consulting, Inc., Richard P. Brandt, dB Consulting, Inc.
Tele-conversation – refers to a person interacting with another entity at a distance using natural language where the "words" are received continuously as they are being generated.
- With tele-conversation, the receiver is following the communication as it is being generated. As a result, when the person finishes "speaking," the receiver can immediately begin their response (rather than just starting to read or listen to the communication, as with messaging). Also, all of the person's time is spent either "listening" or "talking".
- An interesting question is whether it is a conversation if the entity at the far end is a computer which is indistinguishable from a human, and serves the same function as a human. We already have systems that fool many users.
Tele-messaging - A process whereby an individual generates and completes a message prior to sending it to the receiving entity.
- With tele-messaging, the recipient is unable to view or follow the message until it is completed, sent, and received. If voice messaging is being used, the time for a conversation would be doubled (assuming regular playback speed), since the receiver cannot begin to listen to the message until it is completed and sent. For text messaging, the speed is somewhat better, assuming the receiver can read faster than the sender can type. It is still slower than conversation technologies (e.g. letter-by-letter chat), where the receiver can follow in real time. With short, bursty communications and fast typists, the difference between messaging and conversation technologies may not be noticeable as long as the message transmission times are very short. For longer messages, the 'silence' while waiting for the other party to read the message and compose a response can be lengthy. Similarly, when network traffic causes message delivery delays, even short, bursty communications can be mostly waiting unless the user is doing something else simultaneously.
- Tele-messaging is optimized for interactions where the users may not be present when the message is sent. Large time spans can occur between messages, and messages can be one way.
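The timing contrast between the two modes can be made concrete with a toy model. All numbers here are hypothetical, chosen only to illustrate the doubling effect described above for voice messaging at regular playback speed:

```python
def conversation_time(turns, seconds_per_turn):
    """In tele-conversation the receiver follows the 'words' as they
    are generated, so total time is roughly the sum of the turns."""
    return turns * seconds_per_turn

def voice_messaging_time(turns, seconds_per_turn):
    """In voice tele-messaging each message must be fully recorded and
    sent before the receiver can play it back, so (at regular playback
    speed) each turn costs roughly twice its spoken length."""
    return turns * seconds_per_turn * 2

# A hypothetical 10-turn exchange of 15-second turns:
print(conversation_time(10, 15))     # 150 seconds
print(voice_messaging_time(10, 15))  # 300 seconds
```

The model omits network delay and reading speed, but it captures why messaging, unlike conversation, forces the receiver's "listening" time to be added on top of the sender's "talking" time.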
The difference between these two types of communication is important for understanding both the potentials and limitations of the various communication technologies available today and in the future. This is explored further in the sections below.
Emergency Alerting - Urgent notification to the public that there is an emergency or potential emergency situation. (e.g. tornado, severe thunderstorm, etc.)
Emergency Communication - Interactive communications during an emergency (e.g. when the power is out, stuck in an elevator, etc.)
Paralinguistics and Non-Verbal Communication
Effective communication is essential for exchanging thoughts and ideas, expressing needs and desires, and seeking assistance or guidance. However, much of our communication is conveyed through mechanisms other than our words. Tone of voice, gestures, body language and facial expression often convey as much information as the speaker's words (e.g. the phrase "nice shot" can either be a congratulatory compliment or a sarcastic jibe based purely on the tone of voice, facial expression, or body language). Providing only the "words" of the communication may not communicate the full or intended message. Capturing and preserving paralinguistic and non-verbal aspects when using text communication or translating from speech to text is an important and developing area in telecommunication.
Effective communication and communication access imply functional equivalency with those who do not have disabilities in having access to and using a technology or service for communication. The ability for everyone to have access to telecommunications ("universal service") has been an important characteristic of telecommunications in the past. The government has taken specific steps to ensure that all systems were interoperable and that they offered seamless connectivity. Cross-industry standards have been required to allow different companies and technologies to coexist in this environment. As new technologies are introduced and as "equivalent" mechanisms for individuals with disabilities are offered, there is concern about whether these alternatives provide the same ubiquity, compatibility, and interoperability with each other and with legacy systems as is provided for mainstream voice telecommunications. The increasing diversity of communication mechanisms can work to the advantage of individuals with disabilities. But the advantage could be lost if shifting to new technologies caused them to lose access to some types of communication functionality or compatibility with legacy systems. For example, messaging systems are a very powerful new mechanism for communication, but they do not eliminate the need for conversation technologies. In addition, new technologies must interoperate with each other and with the installed base of technologies. Finally, there is concern that, as new "equivalent" technologies replace older ones (in both mainstream and disability areas), the regulations that were put in place to guarantee important aspects of accessibility could be lost in the transition.
Realtime Nature of Telecommunication
Although there are some similarities between information technologies and communication technologies, and both are often implemented on the same product base, there are important differences as well. A primary difference is the real-time (time-sensitive) nature of communication technologies. Consumers generally cannot take extra time to access or understand the conversation without falling behind. The ability to generate messages quickly and to respond promptly is also critical in communication systems. Except in rare situations, message receivers are quite intolerant of excessive delays or extended silences during communication, with results ranging from discomfort to impatience to terminated phone calls. In conference call situations, individuals may never get an opportunity to contribute, or their comments may be out of sync with the flow of communication. Even slight translation delays can make it difficult or impossible for an individual who is deaf to find a pause in the conversation.
A key component of accessibility to telecommunication devices is the ability to operate the user interface. Enabling people with a wide variety of disabilities to operate standard or public telecom devices means creating an interface which is:
- Usable without vision
- Usable without hearing
- Usable without pointing, or manipulation
- Usable without speech
- Usable without memory or with limited cognitive processing abilities
- Usable with language disabilities
- Usable with low vision and no hearing
- Usable with alternate languages (including sign language)
In the past, telecommunication device interfaces such as telephone rotary dials and keypads were simple to use and were familiar household devices. Most people in the United States grew up with a basic telephone in the house. This familiarity and commonality made phones easy to use for people with visual impairments and for some people with cognitive disabilities.
However, despite the simplicity of the phone interface, remembering and accurately entering long strings of numbers could still be difficult for some individuals with cognitive disabilities. Newer phones often have an auto or memory dial feature that allows users to store numbers. However, they often come as part of phones which are much more complex, thus negating some of the potential benefit that individuals with cognitive impairments might receive from programmable phones.
Today, both home and office telephones commonly have many buttons in addition to the familiar 12-key keypad. While these buttons add functionality to the phone, they may be difficult for individuals who are blind, visually impaired, or older to identify and use. In addition, even home phones now have displays that provide useful functions such as Caller ID, reviewing stored numbers, and accessing additional functionality through menus. However, displays and the functionality they provide are only useful to those individuals who can see the screen and understand what is displayed on it.
Touch screens are also increasingly being incorporated into telecommunication devices. In some cases, they are used to implement the number pad, making it difficult or impossible for individuals who are blind to use the phone. There is no means of tactilely finding the keypad buttons. In other cases, touch screens are used for other functions on the phone, preventing users who are blind from accessing the full range of features on the phone.
The telecommunications interface has also historically used audio cues, from the familiar ring signal in telephones to more complex routing and holding messages, to beeps for conveying status and error conditions.
The precipitous drop in the cost of speech output technologies and memory is making it possible for almost all telecommunication products to include at least simple digitized speech. In fact, all digital communication products (digital cell phones, IP telephones, etc.) already have all of the speech generation technology in them as part of their infrastructure. This capability not only makes it possible to provide voice cues for individuals who are blind, allowing them to identify all of the keys and functions on the phone and to access the menus, but also provides access for individuals who cannot read and allows verbal cues to be used for individuals who have cognitive disabilities.
Although touch screens can provide barriers to people who are blind if no physical keypad is provided, the provision of a touchscreen in combination with physical, tactilely distinguishable keys can open up new opportunities for making systems easier to use for individuals with cognitive disabilities. The rapidly dropping cost of displays can also lead to the incorporation of larger displays with higher contrast that are easier to read.
Advances in speech recognition hold promise for a number of disability groups. Individuals with physical disabilities who cannot physically operate the keys (e.g. due to paralysis) may be able to operate the device with their voice. Unfortunately, most devices require that the individual press a button before using speech. However, research in ubiquitous voice control systems is leading to better techniques for allowing a person to get the device's attention using speech. This can then lead to the full speech control that would be needed by these individuals. And with increasing battery life, the need to turn the devices on and off is also being eliminated.
As natural language recognition and understanding advance, it is becoming possible to talk to a phone and simply tell it what you would like done. This will provide benefit not only to individuals with physical disabilities, but also to individuals who are blind or have trouble seeing the phone. It can also facilitate use by individuals with cognitive disabilities. Care must be taken, however, that speech never becomes the only way to use the device, since some individuals are unable to speak.
For individuals who are deaf, advances in gesture recognition hold promise for automatic interpretation of sign language into text or speech for phone calls with people who do not know sign language.
Wearable systems are making telecommunication devices easier to keep at hand for individuals who are older or who have cognitive disabilities; wearable computers are also harder to lose or have stolen. The ability to implant and biologically couple telecommunication systems is even on the horizon, perhaps eventually providing a built-in capability to contact and communicate with others.
The flexibility of new phones is allowing the creation of cross-disability access within a single phone. A paper reference-design for such a phone has already been proposed with prototyping under way. 
Work is also proceeding in the development of abstract and virtual user interfaces. The InterNational Committee for Information Technology Standards (INCITS) is working on a standard that would allow telecommunication products to transmit an abstract user interface to assistive technologies. The assistive technologies could then render the interface in the form most effective for the user. The user would then be able to fully operate all the functions of the phone from this virtual or remote console interface.
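The abstract user interface idea can be sketched roughly as follows. This is a hypothetical illustration only; the element names and rendering functions are invented for the example and are not taken from the INCITS work. The device describes *what* its functions are, and the assistive technology decides *how* to present them:

```python
# Hypothetical abstract description of a phone's function set.
# The device transmits only this modality-neutral structure.
abstract_ui = [
    {"id": "dial",     "label": "Dial a number", "type": "numeric-entry"},
    {"id": "volume",   "label": "Volume",        "type": "range"},
    {"id": "callerid", "label": "Caller ID",     "type": "text-output"},
]

def render_as_speech_menu(ui):
    """A screen-reader style rendering: number and speak each label."""
    return [f"Option {i + 1}: {el['label']}" for i, el in enumerate(ui)]

def render_as_large_print(ui):
    """A low-vision rendering: high-visibility text, one item per line."""
    return [el["label"].upper() for el in ui]

print(render_as_speech_menu(abstract_ui)[0])  # Option 1: Dial a number
print(render_as_large_print(abstract_ui)[1])  # VOLUME
```

The same abstract description could equally drive a braille display, a switch scanner, or a simplified picture interface, which is the core appeal of the approach.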
3.3 Disability Research, Issues, Questions and Challenges Regarding User Interfaces
- Are there effective mechanisms for allowing individuals who are blind to use touch screens, or will dual keyboards/touchscreens always be required for efficient access?
- How easy will it be for AT manufacturers to deal with abstract user interfaces? Would tools be useful? How possible and effective would they be?
- Can effective and low cost alerting algorithms be developed which can be used reliably in noisy environments?
- How can the emerging touch screens and voice be capitalized on to create effective communication tools for individuals with cognitive disabilities?
- What effective strategies can industry use to allow people with physical disabilities to access the ever-shrinking keypads?
- What is the best mechanism for dealing with scrolling text if, when images are enlarged, they become too big to fit on the original screen?
- (See also Information Technology Chapter for additional interface issues and options.)
Although accessibility features are increasingly being built directly into products whenever it is readily achievable, for individuals with more severe or multiple disabilities it will sometimes be necessary to provide access through compatibility with assistive technologies. Also, some types of accessibility that are possible on a large phone will not be possible on a small phone due to its small display and buttons. Mechanisms to allow telecommunication devices to work with assistive technologies therefore need to be provided. Individuals who are deaf may need to connect a TTY to allow them to communicate in text over the phone. Phones which have a built-in TTY capability may require a mechanism to attach a large keyboard for individuals who need to be able to communicate quickly or for individuals with physical disabilities. Individuals who have a hearing impairment may need to be able to connect their hearing aids, amplifiers, or cochlear implants. Telecommunication devices also need to be able to pass signals used by individuals with disabilities, for example, the tones generated by TTYs.
The use of hearing aids with telephones poses two issues with regard to interoperability. First, there is a need for a mechanism to wirelessly couple the output of the phone to the hearing aid, without relying on acoustic coupling. In the past, this has been done with an electromagnetic pick-up coil or "T coil" in the hearing aid. With the advent of digital phones, a second issue around compatibility has arisen. RF emissions, handset displays, and other handset electronics create electromagnetic noise that interferes with the components of the hearing aid, making it impossible for wearers of many types of hearing aids to use digital wireless phones.
When digital wireless phones were first designed, the algorithms used to compress and transmit the speech were found to be incompatible with TTY signals, causing them to be distorted. Two different strategies for addressing this problem have been developed. Both of them involve changing the TTY signals into a form that will transport successfully across the wireless network and then reconstitute the TTY signals at the border with the public switched telephone network.
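Both strategies share the same shape: convert the fragile tone signaling into something the network carries reliably, then regenerate the tones at the PSTN border. A highly simplified sketch of that idea follows (illustrative only; real mechanisms such as the Cellular Text Telephone Modem operate at the signal level rather than on decoded characters, and the function names here are invented):

```python
# Toy model of the "transcode and reconstitute" strategy for carrying
# TTY traffic across a digital wireless network.

def wireless_side_encode(tty_chars):
    """At the wireless side: decode the Baudot tones to characters and
    carry them as robust data rather than as audio."""
    return [ord(c) for c in tty_chars]  # character codes as the payload

def pstn_border_decode(payload):
    """At the PSTN border: reconstitute the character stream so it can
    be regenerated as standard Baudot tones for the far-end TTY."""
    return "".join(chr(code) for code in payload)

# The voice codec never sees the tones, so nothing is distorted:
sent = "HELLO"
received = pstn_border_decode(wireless_side_encode(sent))
print(received)  # HELLO
```

The point of the sketch is that the speech compression algorithms are bypassed entirely for the text content, which is why the TTY signals survive the trip.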
New text communication mechanisms are being built into portable phones for mainstream use. As this occurs, there is a desire to have individuals who are deaf switch over to the new technologies. The key to this, however, will be the presence and free use of gateways which would allow these technologies to work with each other and to work with traditional TTYs. The ability to interoperate with the existing TTYs is important both because of the very large installed base and because there are many places that can only be reached by telephone, which to the deaf community means TTY. For example, the country’s 70,000+ 9-1-1 centers can be contacted only by telephone or TTY.
Wireless connection technologies are continuing to evolve. Technologies such as Bluetooth and IEEE 802.11 have the potential to improve accessibility and usability of telecommunication devices by individuals with a range of disabilities. Although today's wireless connection technologies are too large to fit in very small hearing aids, their size and cost are continually shrinking. At some point, they may provide a viable alternative to other means of coupling.
Where there once was a variety of competing proprietary video conferencing standards, the trend has been toward increasingly standardized and interoperable video formats, with gateways providing bridges between remaining incompatibilities. This trend and model serve as a good template for increasing interoperability as we try to rely more and more on mainstream technologies for disability access. Gateways, for example, are already being used to port e-mail to TTYs and to turn pocket text pagers into devices which can communicate with traditional PSTN-connected TTYs.
- Can AT and mainstream industries coordinate their efforts sufficiently to allow a graceful migration from older analog to newer digital AT or AT features?
- The evolution of wireless coupling techniques, which are small enough and inexpensive enough to incorporate into miniature hearing aids, and standardization to allow users of hearing aids to choose any phone and vice versa.
- The design of more noise-immune hearing aids.
- The design of digital phones which have lower emissions as measured at the ear (hearing aid).
- Development of gateways to allow interoperability of new text communication products.
- Development of funding mechanisms to maintain gateways.
- Development of digital cellular coupling strategies that do not require purchase or assembly of extra components (e.g. Neckloop, etc.) in order to answer the phone.
The term interactive voice response (IVR) system is used to describe a range of automated systems generally accessed through a telephone interface. IVR systems can include voice mail, automated attendant, and automated call distribution systems. Consumers frequently encounter IVR systems when placing calls to businesses. A caller may be greeted with the message "Press 1 for Sales," "Press 2 for Customer Service," and so on. In addition, many telephone companies offer residential customers the option of purchasing automated voice mail services, so it is not uncommon to call a residence and hear "To leave a message for Mary, press 1." IVR systems are even being used to conduct medical research.
Automated IVR systems provide a range of benefits to consumers. IVR systems often operate 24 hours a day, allowing consumers to conduct certain personal business at a time that is convenient for the caller, regardless of a company’s business hours. For example, consumers can use IVR systems to check bank balances, cancel newspaper subscriptions, renew prescriptions and confirm the status of a commercial airline flight. However, IVR systems can also lead to a sense of frustration for consumers hoping to speak to a live person who can answer a question or provide technical support.
IVR systems can present a range of barriers to individuals with disabilities, some of which are discussed below. The IVR Accessibility Forum was formed in 2001 to provide an opportunity for manufacturers, telecommunications companies, consumers and advocates to work together on access solutions for IVR systems, which are subject to FCC regulations under Section 255 of the Telecommunications Act.
As IVR systems become the main telephone entryway to company sales and customer service departments, the accessibility barriers make it harder and harder for some consumers with disabilities to access business services. Basic operation of IVR systems requires that users listen to, understand and respond to system prompts and in some cases enter personal data. Each step, listening to prompts, understanding the prompt, responding to prompts and entering data can present a problem to individuals with disabilities. In addition, in an effort to reduce costs, many companies appear to be designing their systems to reduce or eliminate consumer access to a live person.  Without access to a human operator, consumers with disabilities have even fewer options for accessing services via the telephone.
IVR systems pose significant accessibility barriers to individuals who are deaf and who attempt to use TTYs to interact directly with these systems, as well as to those who choose to access IVR systems through a relay operator. While it is not uncommon to encounter an IVR system that presents the user with a choice of two or more languages, for example English and Spanish, few IVR systems have been implemented in a manner that provides users with the choice to use Baudot tones. Individuals who are deaf and who use TTYs may not even know that they are connected to an IVR system if the system does not provide TTY prompts. If an IVR system does provide prompts in Baudot tones, TTY users may still believe the phone has been answered by a human operator, since text does not provide the cues, available to hearing users through audio, that this is a recording. They may also experience difficulty interacting with the system, since some TTYs do not have the capability to generate the touch-tones required to respond to the system prompts. TTY users may also be confused when the system states "Press 1 for Savings," and the user presses 1 on the TTY, generating a Baudot tone for 1 rather than a DTMF tone as required by the system.
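The underlying mismatch is that the two schemes use entirely different signaling. DTMF encodes each key as a simultaneous pair of row and column frequencies, while U.S. TTYs use Baudot FSK signaling: a serial stream of 1400 Hz (mark) and 1800 Hz (space) tones at 45.45 baud. A lookup of the standard assignments makes the difference concrete:

```python
# Standard DTMF frequency pairs (row Hz, column Hz) for the 12-key pad.
DTMF = {
    "1": (697, 1209), "2": (697, 1336), "3": (697, 1477),
    "4": (770, 1209), "5": (770, 1336), "6": (770, 1477),
    "7": (852, 1209), "8": (852, 1336), "9": (852, 1477),
    "*": (941, 1209), "0": (941, 1336), "#": (941, 1477),
}

# U.S. TTYs instead use Baudot FSK: a serial stream of two alternating
# tones at 45.45 baud.
BAUDOT_MARK_HZ, BAUDOT_SPACE_HZ = 1400, 1800

# Pressing "1" on a TTY keyboard therefore emits Baudot character tones,
# not the simultaneous pair an IVR's DTMF detector is listening for.
print(DTMF["1"])  # (697, 1209)
```

An IVR front end that cannot detect the 1400/1800 Hz signaling has no way to recognize that a TTY is on the line, which is why so few systems offer Baudot prompts.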
Since few companies have IVR systems that are configured to work with TTYs, many TTY users choose to use the relay service to interact with IVR systems if there is no other option. However, working through the relay service does not eliminate all of the accessibility barriers. One significant barrier faced by TTY/relay users is that many systems "time out" before the user has a chance to make a selection. The relay operator must listen to and type the menu options so that the TTY user can see the options available. The speed of the system prompts often makes it difficult for the relay operator to keep up. The options may be unclear to TTY users if the choices are presented in a stream without pauses or text markers. The process of relaying the menu options to the user and waiting for the user to make a selection often takes more time than the system provides, so that by the time the user is ready to make a selection, the system has disconnected or moved to another menu. Most IVR systems do not provide the user or relay operator with an opportunity to pause the system or replay an individual menu item. Similarly, system timeouts can also be a problem for individuals with mobility impairments who may have difficulty responding to prompts quickly.
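A back-of-the-envelope model shows why timeouts defeat relay users. All numbers here are hypothetical, chosen only to represent the mismatch between recorded-speech speed and typing speed:

```python
# Hypothetical parameters for one IVR menu relayed to a TTY user.
WORDS_IN_MENU = 40
SPEECH_WPM = 160         # speed of the recorded prompt
RELAY_TYPING_WPM = 60    # speed at which the operator can type it
SYSTEM_TIMEOUT_S = 20    # how long the IVR waits after the prompt ends

prompt_s = WORDS_IN_MENU / SPEECH_WPM * 60        # 15 s of audio
typed_s = WORDS_IN_MENU / RELAY_TYPING_WPM * 60   # 40 s to relay it

# The TTY user sees the last option (typed_s - prompt_s) seconds after
# the prompt has finished playing, and only then can begin choosing.
lag_s = typed_s - prompt_s
print(lag_s > SYSTEM_TIMEOUT_S)  # True: the system times out first
```

Under these assumptions the user is still reading the menu when the system gives up, before any selection time is counted at all, which matches the experience described above.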
Individuals who are hard of hearing can also find IVR systems difficult to use: the audio quality of the system messages, as well as the volume, speed, and level of background sounds, can greatly influence a user's ability to use the system successfully. Individuals who are hard of hearing may have trouble, at least initially, recognizing that the voice they hear is recorded or synthesized rather than a real person. Hearing and understanding the system prompts may also be difficult. Many systems do not include a way for the user to easily repeat the last prompt spoken. Users must often repeat a whole menu level to confirm a particular prompt. A misunderstanding of the system prompts can lead to erroneous menu selections and deepening confusion.
Individuals who are older and unfamiliar with IVR systems, and individuals with cognitive limitations, can be easily confused by IVR systems. Improvements in speech synthesis are resulting in systems that sound as if the caller is speaking to a live person. As with individuals who are hard of hearing, even recognizing that they have reached an IVR system can be a problem. In addition to listening and responding to prompts, IVR systems often require users to enter personal data such as a telephone number, account number, or zip code. Data entry can present a barrier to individuals who may require extra time to find the information that needs to be entered. For example, an individual who is visually impaired may need extra time to locate the account number on a bill, identify the numbers on the keypad, and enter the account number, perhaps referring back to a paper document several times during the process. Likewise, an individual who has a mobility impairment may have difficulty entering data at the rate expected by the system. Few systems give users an opportunity to easily correct a data entry error. Users must often hang up, re-dial, and start the process over.
Advances in speech recognition and natural language recognition are increasing the functionality that IVR systems can provide. Today, many IVR systems accept two types of user input: touch-tone input generated by pressing buttons on a phone, and speech input. Newer systems are no longer limited to accepting single-word inputs. Consumers can ask for assistance using sentences and phrases. IVR systems that rely exclusively on the use of speech input can present a barrier to individuals who cannot speak, including individuals with speech impairments and many individuals who are deaf. However, systems that allow users the option of responding by speech, TTY, or pushbutton are now becoming available.
- Can advanced IVR systems be designed to automatically accept speech, TTY or open-standard digital text-chat protocols such as T.140?
- Will natural language systems recognize input from individuals with speech variations due to deafness or dysarthria?
- What would be the advantages or disadvantages of direct command versus menu tree IVR systems?
- Development of IVR standards that work with phones with display screens. Will these prove faster and more efficient for users without disabilities as well?
- Understanding the relationship between IVR cue language and confusion for individuals with cognitive functional impairments.
- Optimizing system voices for users with hearing aids.
- Application of ASR and synthetic speech for recognizing and re-synthesizing IVR messages into more-intelligible speech.
6. Modality Translation
Modality translation is already a commonly used method of achieving accessibility in telecommunications. Telecommunications relay services, video relay services, and live captioning are three types of modality-translation service that are actively used today for access to voice communications by people who are deaf or hard of hearing. This section discusses the extension of these types of services and other, more advanced and automated services on networks of the future [see also Chapter 7 (Zimmermann et al.)].
The primary barrier that prevents many individuals with disabilities from participating in telecommunication events is the incompatibility between the modes of communication being used in the tele-conversation and the sensory skills and native communication system of an individual with a disability. For example, individuals who are deaf find it impossible to participate in a speech-based teleconference without some type of translation of speech into visual form, such as real-time text transcription. Individuals for whom sign language is their primary language may even have difficulty participating in a transcribed conversation due to the rapid flow of text and the requirement that they type text quickly and clearly. Unless they are able to communicate in their primary language (sign language) and have it translated for others, they may not be able to keep up or participate.
Individuals who are blind may have difficulty participating in video teleconferencing situations if information is presented visually but not also in auditory form. Finally, individuals may have difficulty following teleconferences that involve more complex concepts, because of difficulty handling the advanced language, technical terminology, or concepts used, or the multiple simultaneous presentation of information.
Modality translation and assistance technologies and services can be used to help to address these problems. For example:
- For someone who is deaf, the speech stream of a teleconversation could be routed to a real time speech to text translation service to provide them with a fully captioned teleconference.
- Individuals for whom sign language is their primary language, including those who could not keep up with text presented at normal conversational rates, could have the audio sent to a sign language translation service so that they could follow the teleconference in sign language.
- Individuals who are blind could use a visual description service which would provide a description of any visual events or materials that were presented using a separate audio line from the regular teleconference.
- Individuals who have low vision and only occasionally have trouble could use the same service as those who are blind, though they may need to rely on it less.
- Similarly, individuals who are hard of hearing could use text captions to assist when they missed something that was said due to noise, mumbling, or a heavy accent.
- Individuals who have trouble speaking clearly could use a speech to speech translation service that would re-pronounce everything they said in a clear, articulate voice.
- Individuals who cannot speak at all could use a text to speech service or, if they were deaf and used sign language, they could use a sign language to speech translation service.
- Individuals who are having trouble following the presentation could have someone explain key concepts or provide an overview to them in language they can understand.
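The services in this list can be viewed as a routing problem: given a user's abilities and preferences, decide which translation streams to attach to a call. A minimal sketch of such a dispatch, with hypothetical profile flags and service names:

```python
# Hypothetical sketch: choosing which modality-translation services to
# attach to a call based on a user's profile. The profile flags and
# service names are illustrative, not from any actual system.

def select_services(profile):
    """profile: dict of capability/preference flags; returns service list."""
    services = []
    if profile.get("deaf"):
        if profile.get("primary_language") == "sign":
            services.append("speech-to-sign translation")
        else:
            services.append("speech-to-text captioning")
    if profile.get("hard_of_hearing"):
        # captions as an on-demand supplement to residual hearing
        services.append("speech-to-text captioning")
    if profile.get("blind") or profile.get("low_vision"):
        # description delivered on a separate audio line
        services.append("visual description (separate audio line)")
    if profile.get("speech_unclear"):
        services.append("speech-to-speech re-voicing")
    elif profile.get("nonspeaking"):
        services.append("text-to-speech output")
    if profile.get("needs_concept_support"):
        services.append("cognitive assistance (live explainer)")
    return services
```

The same lookup could run per participant in a teleconference, so each person receives the conversation in the form that works for them.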
6.2 Technology Advances Related to Modality Translation and Cognitive Assistance
As mentioned above, several of these translation services are already in widespread use. The telecommunication relay service (TRS) is an example of a translation service that has been implemented throughout the United States. The service is mandated under Title IV of the ADA and is free to all users (paid for by a small surcharge on all telephone calls). The relay service works by routing the call through a communication assistant or TRS operator who is positioned between the user with a TTY and the individual who does not have a TTY. The communication assistant has a TTY, which they use to communicate with the TTY user, and they communicate with the non-TTY user using speech.
Originally, all communication with the TTY user was in text, with the communication assistant typing everything that was spoken onto their TTY keyboard and reading everything that was typed to them to the other caller.
Recent innovations have improved the quality, reduced the delay, increased the privacy, and made it easier to use the relay service. By supporting a proprietary but faster Turbocode (for those with compatible equipment), the rate at which the TTY portion could be translated was increased above the limit set by Baudot. For those individuals who are deaf but can speak, the voice carryover feature allowed the deaf individual’s speech to be transmitted directly to the other party. The only part of the conversation that then needed to be typed was the speech being passed back to the individual who was deaf. To make it easier for individuals to make relay calls, especially when traveling, the number 711 was reserved. Dialing 711 in different areas of the country will cause the call to be routed to the appropriate local relay service without the caller having to know the local relay service’s number in advance.
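The carryover features described above amount to deciding, for each direction of the call, whether a leg is carried as voice or as text through the communication assistant. A simplified sketch of that decision (VCO is voice carryover; the function is illustrative, not an actual provisioning system):

```python
# Simplified sketch of how relay "carryover" features change which
# directions the communication assistant (CA) must convert between
# voice and text. Illustrative only.

def relay_legs(deaf_user_can_speak, deaf_user_can_hear=False):
    """Return how each direction of a relay call is carried."""
    legs = {}
    # Hearing party -> deaf user: the CA types unless the user can hear
    # (the "hearing carryover" counterpart of VCO).
    legs["to_deaf_user"] = (
        "direct audio" if deaf_user_can_hear else "CA types to TTY"
    )
    # Deaf user -> hearing party: with voice carryover (VCO), the user's
    # own speech passes straight through; otherwise the CA reads the
    # user's typed TTY text aloud.
    legs["from_deaf_user"] = (
        "direct voice (VCO)" if deaf_user_can_speak else "CA reads TTY aloud"
    )
    return legs
```

With VCO, only one direction of the conversation still needs typing, which is exactly why the feature speeds calls up.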
To further speed up the process, new techniques for converting speech to text are being developed and deployed. One such technique involves having the communication assistant use automatic speech recognition (ASR). The communication assistant listens to the person who is talking and, almost simultaneously with the speaker, carefully re-pronounces (or shadows) everything the speaker says. Because the communication assistants’ ASR systems are trained to their speech, and because they know how to get the best performance, a high degree of accuracy at a high communication rate is possible. This can eliminate the need for the communication assistant to repeatedly ask the talking party to wait, as was often necessary when typing. It is also possible for the communication assistant to manually input or correct text as necessary, for example with proper names. The ability to contact the relay service via the Internet (instead of using a TTY) has also recently been introduced, as has video relay service, which allows individuals who are deaf to use sign language instead of text to communicate with others through the communication assistant.
For individuals who have speech disorders that make them difficult for many people to understand, a speech-to-speech service is also available through the relay. Communication assistants who are specially trained to understand individuals with speech disorders listen to and re-pronounce the speech so that the communication is clear to the called party.
Innovations in the existing relay service and new translation services are beginning to appear in the marketplace. Phones that allow individuals to both hear and see what is being said, by coupling into a remote translation service, are currently undergoing consumer testing. Researchers and innovators around the world are developing and implementing a range of modality translation technologies that are likely to have a direct impact on consumers with disabilities. Speech-to-text services have been used in a variety of settings for a number of years: individuals create captions of spoken dialog using a stenotype machine, notebook computer and real-time software. Communication Access Realtime Translation (CART) services, as they are often called, are available through a variety of providers in the United States, although, as with sign language interpreters, a shortage of trained providers can limit availability. Telecommunications technology may help alleviate these shortages by facilitating the delivery of services from remote locations; CART services can now be delivered remotely via telephone lines and the Internet. Researchers and companies are also investigating the use of speaker-dependent and speaker-independent speech recognition technology for providing speech-to-text services.
For individuals who are hard of hearing, phones are currently being introduced that allow individuals to both hear and see what is being said. One system undergoing consumer testing looks like a standard phone with a slightly larger than usual LCD display and a "caption" button. When the caption button is pressed before making a phone call, the call is routed through a communication assistant who creates a text "caption" of everything spoken by the called party. The result is that the caller can hear the person they called as in a regular phone conversation, but also has a running caption of what is being said, displayed on the LCD.
Telecommunications technologies may lead to potentially significant changes in the delivery of relay services and the provision of sign language interpretation services. Broadband services have made it technically feasible for individuals to communicate remotely using sign language, opening the door for remote video interpreting services and video relay calls.
The Federal Communications Commission recently ruled that calls made using the video relay service (VRS), which are based on sign language communication rather than text communication, can be funded through the interstate TRS fund. This ruling provides a financial incentive for agencies to develop VRS services as well as an incentive for consumers to make use of these services.
A leading TTY manufacturer has also announced plans to release a small portable personal caption device that could be carried by an individual and used anywhere. A project at a major university research center is looking at the architectural issues involved in building generic cross-disability modality translation services (such as those in the bulleted list above) into a national system for teleconversation and collaborative systems. This project is exploring the feasibility of providing such services on an "instant demand" basis for all disabilities. It is also proposing a "try harder" strategy to allow seamless integration of personal computer, network, and human-based translation services in a user-controllable fashion. The project has also proposed features to allow the user to ‘rewind’ and listen to a translation again, to jump back in time and bring an assistant online so that something that has already passed can be described, and to either jump back to real time or play forward at an accelerated speed in order to catch up to real time. Although such a system could not be constructed on today’s public switched telephone network, because it requires the ability to instantly add, drop and change multiple communication links, an evaluation of the requirements carried out by the researchers with a subcommittee of the FCC’s Technological Advisory Council (including key internet and IP telephony experts) concluded that there were at least two different architectures that could be straightforwardly implemented on an IP-based telecommunication infrastructure and would meet all of the requirements. The most interesting and challenging portion would be the various options for caller, called-party, and third-party payment for the translation services.
Thus, while modality translation services on the current public switched telephone network (PSTN) are constrained, an IP (Internet-based) telecommunication system would allow an individual to invoke and use any modality translation or assistance service at any point in any telephone conversation, including in the midst of teleconferences set up completely outside of the user’s control. The Internet Engineering Task Force (IETF) is currently considering a proposal to create a working group for the Session Initiation Protocol (SIP) that would address many of these issues [see also Chapter 13 (Tran et al.)].
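On an IP network, attaching a text-translation leg mid-call could be done by renegotiating the session to add a real-time text stream alongside the audio. The sketch below builds such an SDP offer; the addresses, ports, and dynamic payload number are placeholders, and the `t140/1000` payload naming follows the IETF work on carrying T.140 text over RTP:

```python
# Illustrative only: an SDP body adding a T.140 real-time text stream
# alongside audio, as a SIP re-INVITE might carry when a translation
# service is brought into a call. Host, ports, and the dynamic payload
# type (98) are placeholder assumptions.

def build_sdp_offer(host, audio_port, text_port):
    lines = [
        "v=0",
        f"o=user 2890844526 2890844527 IN IP4 {host}",
        "s=Teleconversation with text translation",
        f"c=IN IP4 {host}",
        "t=0 0",
        f"m=audio {audio_port} RTP/AVP 0",   # PCMU telephone audio
        "a=rtpmap:0 PCMU/8000",
        f"m=text {text_port} RTP/AVP 98",    # real-time text stream
        "a=rtpmap:98 t140/1000",             # ITU-T T.140 text payload
    ]
    return "\r\n".join(lines) + "\r\n"
```

Because SIP allows media streams to be added or dropped at any point in a session, a user could invoke captioning in the middle of a teleconference they did not set up, which is precisely what the PSTN cannot do.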
Researchers around the world are carrying out work targeted at automating network-based and PC-based translation for some modalities, such as speech to text and speech to sign language, including research and development of various forms of computer-based sign language generation and automatic speech-to-sign-language translation.
- What bandwidth is needed for effective transmission of sign language?
- If a client-side avatar is used, what bandwidth is required for the command string to the avatar? What should be the form of the avatar?
- Can direct speech to sign translation preserve more paralinguistics than speech to text and text to sign?
- How can paralinguistics be captured in speech to text systems and represented to the viewer?
- How can translation on demand services be managed so that conversations requiring specialized background can be assigned to translators with those backgrounds?
- Research to develop guidelines and training for visual description assistants.
- What type of description tasks will they be given?
- What are the best ways to describe these situations to the users?
- Are there different types of users where different types of description techniques should be used?
- Is it possible to give good descriptions if dropped into an unexpected and context free description situation (assignment)?
- What bandwidth is needed to provide effective description? For what kind of tasks?
- For video description, should the camera be under the control of the user or the describer?
- How do translation delays affect teleconversations?
- How should new technologies be designed to minimize or compensate for delays?
- Is it practical and effective to delay everyone’s communication by a half second or so to bring it more in line with the captions?
- How could the infrastructure be set up to allow payment options including caller pays, called pays, third party pays, qualifications of users, qualifications of activities, etc.?
The TTY was, and still is, the only universal means of communication on phone lines for people who are deaf. It is also the only technique that works in locations where wireless text messaging is not supported, where broadband is not available, or for those who cannot afford the monthly internet costs (or wireless text messaging costs).
However, for those who have access to and can afford broadband and/or wireless messaging, there is an increasingly diverse range of options for communication, each with its own advantages and limitations. The continuing drop in costs and spread of internet access shows promise for the continuing evolution of communication options and technologies for people who are deaf.
7.1.1 Some key issues and needs of people who are deaf that are addressed by current TTY communication include:
Ability to communicate with the installed base
(People who are deaf and are using past or current TTY technologies.)
- Live synchronous and continuous communication
Cost to purchase
(If it is not provided free through an equipment distribution program similar to TTYs.)
Cost per month
In addition to the phone, if a phone line is needed for others in the home.
If phone line can be dropped in favor of this service, then the cost would be any cost above the cost of the phone line.
Ease of learning/use
(Especially by non-adventuresome and non-technically oriented)
Availability in their geographic location
In rural Arkansas, for example, the kinds of text pagers that are so popular in Washington D.C. see only dead air. There is no service.
Where there is no broadband there is no viable and affordable video for sign language as an alternative.
Ability to operate in an emergency
(power outage, severe electrical storm, roadside phone)
Ability to use with relay and emergency services
(and any other businesses)
Ability to be instantly on to answer phone
Does not freeze. No boot time. Ability to answer phone immediately.
Ability to fully inter-mix speech with text
Ability to both speak to communicate and receive text from other person, as well as use text occasionally to "speak" a word or number that is misunderstood.
Stable technology that will remain stable over time
Supported by multiple vendors with overlapping geographical support. Guarantees that technology will not be held proprietary, interfered with, distorted, or otherwise rendered permanently or periodically non-interoperable or unreliable.
7.1.2. Issues and needs that are not well met by TTYs today
Ability to interrupt
(Note: This is available in "Turbo Code" TTYs today but not Baudot.)
Ability to keep up with fast typists
(Note: This is available in "Turbo Code" and "Hi-Speed Code™" TTYs but not Baudot.)
Support for video communication sufficient for sign language
(Note: Range of bandwidth for effective signing is not well documented.)
Ability to communicate for people who do not have TTYs or any special technology without going through a relay service
(Note: This is available today for some technologies through gateways, though it is not widely known.)
Ability to take advantage of mass market products, prices, and other features
(e.g. Ability to have text communication function be part of a standard mainstream device which could provide other mainstream like functions as well.)
- Ability to have two people typing simultaneously with both displayed in parallel.
Easy participation in teleconference
Ability to intermix with people using speech and/or text and not be behind due to translation.
Timeliness of communication
Need is for near real-time translation. (Translation delay in sub-second range with possible delay of audio to match.)
A key concern is the ability to take advantage of new technologies but keep and use TTY until other technologies are available to match those functionalities that are currently unique to TTYs.
Technological advances are providing additional mechanisms for teleconversation that are being used by some deaf individuals to supplement the use of TTYs. Video imaging and compression algorithms have been developed to allow individuals to use sign language over high-speed internet connections provided by cable modems and digital subscriber lines (DSL). Some technologies employ general image compression strategies. Other systems employ strategies that focus on optimizing the transmission of those aspects of the video signal that are most important to understanding sign language (face, hands, lips).
The advent of the web brought widespread connection to the Internet with it. This in turn brought e-mail, which was used for messaging and then instant messaging and chat, which have allowed new formats for text communication. Because of the widespread use of email, and instant messaging, text services have allowed many individuals who are deaf to communicate directly with individuals who are not deaf and do not own TTYs.
Some of these technologies are implemented as telemessaging and some as tele-conversation. Instant messaging, chat, SMS (short message service) and many other terms are being used to label text (and soon voice) communication technologies that are sometimes similar and sometimes quite different in their characteristics and the areas where they can be used. It is also important to distinguish technologies that provide messaging capabilities (where the user must wait for messages to be read and written) from those that provide real-time continuous conversation (character by character, or text that is streamed as it is written).
Each type of functionality has value in deaf telecommunications, but there are trade-offs. For example, many new wireless phone service plans include a short message service (SMS) that allows users to send text messages to another subscriber. While SMS conversations can appear to take place fairly quickly, these conversations are actually messages being sent back and forth, similar to email. Users cannot follow the conversation continuously but rather receive it in spurts. This is in contrast to a live character-by-character chat or phone conversation, where both parties can interact simultaneously with no waiting or unpredictable pauses. Further, systems designed for conversation differ from messaging systems in their priority on instant delivery. Whereas with phone conversation technologies a connection is maintained so responses are instant, messaging responses vary from very quick to delayed for an extended time, depending on traffic, architecture and policies of the messaging service. In addition, many providers charge a fee for each message sent, so the cost of SMS may be higher than a phone or standard TTY conversation. Costs, however, could favor messaging, depending on the billing structure used.
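The distinction between messaging and real-time conversation can be made concrete: a messaging system delivers text only when the sender commits a whole message, while a conversational (TTY-style) system delivers each character as it is typed. A toy illustration:

```python
# Toy illustration of the messaging vs. real-time conversation distinction.

def deliver_as_messages(keystrokes):
    """SMS/IM style: the receiver sees nothing until the sender commits."""
    deliveries, buffer = [], []
    for key in keystrokes:
        if key == "\n":               # sender presses "send"
            deliveries.append("".join(buffer))
            buffer = []
        else:
            buffer.append(key)
    return deliveries                 # conversation arrives in spurts

def deliver_as_stream(keystrokes):
    """TTY/character-chat style: every keystroke is delivered as typed."""
    return [key for key in keystrokes if key != "\n"]
```

For the same typing session, the messaging receiver gets a few bursts while the streaming receiver can follow (and interrupt) mid-sentence, which is the conversational property TTY users rely on.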
The multi-thread nature of IP (Internet) communication allows individuals to talk simultaneously rather than having to take turns as with a TTY-based conversation. IP networks also provide support for text teleconferences. Individual users can use client software to open a conference or chat room on their own computers that allows several people to communicate simultaneously. Text teleconferences are feasible with TTYs; however, because of the need to take turns, they can be complicated. With text chat and the ability for everyone to speak simultaneously, it is now possible to carry on multiple simultaneous parallel discussion threads.
A technique that combines simultaneous auditory, visual and text communication modalities may turn out to be most useful for individuals who are hard of hearing or deaf, since it allows individuals the greatest flexibility to use all of their skills and abilities across all three modalities. This approach has been dubbed "total conversation". Figure 2 shows an example of a screen depicting total conversation: an individual is signing while at the same time using audio and text to converse. It also shows how this type of communication could be carried out using several different technology bases and existing standards, including the T.140 text communication standard.
Figure 2: Allan-10 Multifunction Terminal implementing the Total Conversation
Approach. Terminal has a video screen, audio connection, and text chat area,
allowing users to communicate in voice, sign language or text.
Combining the total conversation concept shown in Figure 2 with the modality translation services discussed previously (in this chapter and Chapter 7) would provide the underpinnings for a system that would allow individuals who are deaf to communicate freely with others irrespective of their abilities, disabilities, sign language or reading skills. Since the system supports speech, text, and sign language, both the speaker and the message receiver could choose whichever mode of conversation was easiest for them. The person who was speaking could choose to communicate using speech, text or sign language. The person who was receiving the communication could (using modality translation services) choose to receive the message in text, speech, or sign language, irrespective of the mode preferred by the speaker. Extending this same thought, an individual who was deaf would be able to participate in video conferencing with other individuals regardless of their hearing or sign language skills.
Another evolving technology that could have an impact on next generation deaf telecommunication systems is morphing. Morphing technologies take one image and gradually change it into another. In the movies, morphing is sometimes used to create the effect of a person changing into an animal or another person. With high-speed computing, it is also possible to use morph-like technologies to map an image of a person onto another moving individual. As computers and programs continue to evolve, it will be possible to merge the image of any given individual with a computer-generated person (an avatar) that is using sign language. The result would be a computer-generated image of any arbitrary person using American Sign Language, even if they did not know sign language (and were, in fact, not even moving their hands or arms). Commercial systems using various forms of signing avatars are currently available.
Combining this capability with speech to text and text to sign language (or direct speech to sign language technology), we can end up with a means to convert a person who is sitting still and talking (and does not know sign language) into an image of the same person talking and simultaneously signing everything they say. Research and development of various forms of computer-based sign language generation and automatic speech-to-sign language translation is being carried out in the United States, Europe and Japan.
With such a capability in place, individuals who are deaf could choose to have anyone they are speaking to appear on a display as if they were signing and talking at the same time. Instead of having to look at text captions or have a separate interpreter when carrying on a conversation, they would be able to look directly at the person they were communicating with, since they would see that person communicating in whatever form they found worked best for them.
This technique could even be used when communicating with someone over a standard voice-only telephone line. Because the signing avatar is generated on the deaf person’s local system, they could call someone on a regular voice phone (e.g. the local pizza parlor). They could then take the speech of the individual from the pizza parlor, translate it into sign language, and have the avatar sign as the individual talked on the voice phone. If they were talking to someone they knew, they could have the avatar take on the image of that person. If talking to an unknown person, they could use either a somewhat mechanical avatar image such as that shown in Figure 3 or an arbitrary image.
One advantage of using the avatar approach for sign language (as avatars become good and smooth enough) is that the actual image is generated on the user’s computer. As a result, it is not necessary to send the motion picture of the avatar over the phone lines. The audio from the conversation could be sent to a modality translation service, where it could be changed directly or indirectly into American Sign Language. The sign language codes could then be transmitted over phone lines as a simple data string (which could require less bandwidth than voice). These commands would then be changed into high-resolution, very smooth motion signing avatars or human simulations.
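The bandwidth claim can be checked with rough arithmetic. The numbers below are illustrative assumptions (the signing rate and command size are guesses, not measurements), but even generous choices leave the avatar command stream far below telephone-voice rates:

```python
# Back-of-the-envelope comparison of a sign-command data stream against
# common voice rates. SIGNS_PER_SECOND and BYTES_PER_COMMAND are assumed
# values for illustration only.

SIGNS_PER_SECOND = 3        # assumed signing rate
BYTES_PER_COMMAND = 16      # assumed size of one sign/posture command

# Avatar control channel: commands per second times bits per command.
command_stream_bps = SIGNS_PER_SECOND * BYTES_PER_COMMAND * 8   # 384 bps

pstn_voice_bps = 64_000     # G.711 telephone-quality voice
low_rate_voice_bps = 8_000  # G.729-class compressed voice

# Even a low-rate voice codec needs over 20x the command stream;
# uncompressed telephone voice needs over 160x.
assert command_stream_bps < low_rate_voice_bps < pstn_voice_bps
```

Full-motion video of a live signer would require far more bandwidth still, which is why rendering the avatar locally from a command string is attractive on narrowband links.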
7.2.1 Machine Recognition of Sign Language
The above discussion covered the ability of machines to generate sign language, but not the ability to read or recognize it. A machine that could generate sign language but not recognize it would be useful to someone who is deaf and knows sign language but can also speak: they could speak to the party at the other end of the phone and have the other party’s speech changed into sign language for them. Many individuals who use sign language, however, are unable to speak, or to speak articulately enough to be understood by the general public. Enabling them to communicate effectively with individuals who do not know sign language, without the need for a human interpreter, would require devices that are able to recognize (and translate) sign language.
Sign language recognition is progressing, although somewhat more slowly than sign language generation. Research in this area is progressing along two general lines. One approach involves the user wearing special sensors to monitor their movements. These can take the form of gloves that detect hand shape and movement, coupled with other sensors to track arm movements. The other approach uses two cameras to generate a three-dimensional image of the individual who is signing. The latter approach has the advantage of not requiring the user to wear any instrumentation. However, it does require the positioning of one or two cameras in front of the individual whenever they are communicating. Today, this is easier to do at a workstation than in a natural mobile environment. However, use of multiple very small cameras mounted discreetly on the individual may eventually allow for easy, convenient, and mobile sign language recognition. Technology such as Smart Dust, or as yet undiscovered sensor technologies for mapping the movement of the face and limbs, could also eventually lead to convenient portable methods for mapping sign language movements. Finally, it may someday be possible to read facial, arm, and finger movements directly from the cortex using electrodes implanted directly in the brain, making all sensor technologies unnecessary. This would free the individual from having to wear any type of device or sensors (other than the sensors implanted in their brains). Although implanting electrodes in the brain is considered highly experimental and exceptional today, it may someday be as common as LASIK surgery on the eyes (to eliminate the need to wear glasses) or even contact lenses are today. (Tapping the communication intent at a language rather than a motor level may also be possible someday. This appears, however, to be a much more complex problem.)
Regardless of the mechanism used, the ability to automatically recognize manual sign language will be a tremendous step forward for those individuals who communicate primarily in sign language and would like to be able to interact freely with those who do not know it. Already today, it is possible to create portable finger spelling recognition gloves. This, however, only gives the individual a convenient method to spell out messages without having to use a keyboard. For those individuals who communicate only in sign language (and not English), finger spelling would be of limited or no use. They would need a mechanism that allowed them to both "listen" and "speak" using American Sign Language (a completely separate language from English) in order to be able to communicate effectively and extemporaneously with others.
Because of the wide range of skills and abilities of people who are deaf with regard to sign language, speech, and reading, the ideal telecommunications system, to allow participation by all who are deaf, would be one that allowed individuals to choose the form of communication they prefer or find most effective, and also to view others as communicating with them in the form that they find most effective and easy to "read". With such a system, an individual who is deaf and who could only communicate effectively in sign language could make a call and choose to have themselves appear to the other person as:
- A person who is signing, or
- A person who is signing and talking, or
- A person who is signing and talking with captions, or
- A person who is sitting still and just talking.
They could, at the same time, choose to view everyone who is communicating with them as someone using sign language, even though the person at the other end may be talking or signing or typing.
Interestingly, the same capabilities that would allow an individual who is deaf to use sign language to communicate with others would also allow translation into other languages to be injected as well. Thus, it would be possible to have a phone call among people from different countries who had all levels of speech, reading, and sign language skills (in the different languages), with each person communicating using whatever language and skills were easiest for them, and seeing everyone else communicating back to them in the form that they prefer.
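The per-participant choice of sending and receiving forms described above amounts to a routing problem: for each sender/receiver pair, the system selects whatever translation step bridges their chosen modalities. A minimal sketch of this idea follows; the table entries and function name are ours, purely illustrative of the concept rather than any actual system.

```python
# Illustrative sketch: each participant states how they want to send and
# how they want to receive; the system picks a translation step per
# sender/receiver pair (or none when they already share a modality).
TRANSLATORS = {
    ("speech", "text"): "speech recognition",
    ("text", "speech"): "speech synthesis",
    ("speech", "sign"): "speech-to-sign avatar",
    ("sign", "speech"): "sign recognition + speech synthesis",
    ("text", "sign"): "text-to-sign avatar",
    ("sign", "text"): "sign recognition",
}

def translation_step(send_form: str, receive_form: str):
    """Return the translation needed between one sender and one receiver,
    or None if they already share a modality."""
    if send_form == receive_form:
        return None
    return TRANSLATORS.get((send_form, receive_form))

# A deaf caller signing to a hearing listener who prefers speech:
assert translation_step("sign", "speech") == "sign recognition + speech synthesis"
# Two signing users need no translation at all:
assert translation_step("sign", "sign") is None
```

In a multi-party call, each pairing is resolved independently, which is what allows every participant to see all others "talking back" in their own preferred form.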
Some areas of research that are needed to achieve this vision include:
Speech Recognition - Both speaker-dependent and speaker-independent recognition systems are being developed to translate speech into text for applications such as instant portable captioning or classroom captioning. Commercial services that employ speech recognition are also being implemented in a variety of settings, including relay services. Most of this work, however, is based upon standard speech recognition engines, which have not taken into account the voice characteristics of individuals with dysarthria or individuals who are deaf. Work on speech recognition of "deaf speech," and inclusion of individuals with speech variations due to disability in mainstream speech recognition voice samples, are priorities both for the applications discussed here and for access to general mainstream technologies that incorporate voice input and control.
Language Translation - Advances being made in language translation, including the work on a universal semantic intermediate representation, could have an impact on sign language research. Much greater funding and attention need to be directed to this area, however, since sign language is a unique language with unique characteristics, and language translation capability could have a tremendous impact in the near future on deaf communication options.
Movement Tracking - Motion analysis systems being developed both for general research and for military applications are advancing rapidly. In particular, the research using multiple cameras to track arm, hand, and finger movements, as well as the research on automated lip reading, are important advances to be brought to bear on this problem.
7.3 Disability Research, Issues, Questions and Challenges Regarding Next Generation Deaf Communication
- For low bandwidth direct communication, how can use of blank backgrounds and compression strategies that focus on face, hands, and lips decrease bandwidth requirements?
- What is the minimum bandwidth that can achieve clear, natural, regular to fast tempo signing?
- Can human driven avatars be used to provide smooth accurate signing over low bandwidth?
- Can neural nets with multiple camera imaging be used to create "speaker dependent" sign language recognition? Speaker independent?
- Could signs be made in a slightly different or more accentuated fashion that would make them much easier to recognize (sort of like a Graffiti for sign language)? How much would this help? Would this be acceptable?
- What are the barriers to creating interworking gateways between the various text messaging and or text chat systems?
- How much and when are the delays in messaging (as compared to conversation) serious impediments or barriers to effective communication between people who are deaf? And between people who are deaf and people who are not?
- What are the key elements from TTY functionality that are missing from major text messaging or chat technologies? How many are technical? How many have deep commercial rationale (hard to overcome)? How many are likely to be changeable (technically and commercially)?
- What would be involved in setting up gateways between the new text technologies and TTY technologies individually and also between the new text technologies themselves?
- What is the path, and what are the obstacles, to having T.140 text supported in conjunction with any IP telephony?
- Research to demonstrate the mass-market benefit of contemporaneous text conversation as a part of all voice based communication devices and as an option on all communication technologies.
- Research on better, smoother, more natural (and easy to read) signing avatars.
- Development of a standard sign language "alphabet," or "di-signs" (half-sign pairs that can be concatenated), or other compressed sign language transmission mechanism.
- Research on mapping paralinguistic and non-verbal communication content in speech to their equivalents in sign language.
- Direct speech to sign translation (including paralinguistic and non-verbal communication).
- Effectiveness and acceptability research for various types of automated sign language avatars (both to the general public and to individuals who are deaf).
- Telecollaboration – more effective mechanisms for incorporating individuals who must experience translation delays into interactive group teleconferences.
- Research into direct neural linking to supplement (and perhaps someday replace) physical movement sensors.
- Generalizable modality translation services including common invocation, quality, trust, and payment systems.
See also Section 10 (Transition/Migration) in this chapter.
8. Telecollaboration and Telepresence
Telecollaboration and telepresence are two related and overlapping areas. Telecollaboration generally focuses on the ability of individuals at different physical locations to communicate and work together. This may involve conversation, shared work spaces, or shared documents, and could operate synchronously, asynchronously, or both. Telepresence is focused more on projecting a person's presence to another location and/or simulating presence within another location. Telepresence does not need to involve another person, and the emphasis is generally more on realism. In some telepresence applications, the goal is to make the experience as indistinguishable as possible from being there.
8.1 Accessibility Issues in Telepresence
To the extent that telepresence is just like being there, it presents all of the same accessibility issues that individuals experience in real workplaces. To the extent that it is not real (e.g., if there is no sense of touch), it can create new and severe barriers to participation. On the other hand, the fact that the sound and visual events are electronically mediated provides the opportunity to enlarge them, interpret them, have them translated into speech, have additional information provided, etc.
The topic of modality translation was discussed above, along with the issue of translation delays. Again, these occur in both electronic and real environments where translations are needed. Except for electronic transmission times, electronically mediated telepresence (as it becomes more real) should in general be more accessible than reality in many ways. Today, however, we are dealing with delayed audio streams; low resolution video and low frame rates, which can make it difficult to recognize gestures and signing; long distance shots with poor resolution, which can make it difficult to read lips; and audio systems where all of the voices come from a speaker in the middle of the table, which can make it hard for individuals who are blind to differentiate between speakers. If an interface to the system exists which must be controlled, it can present problems physically and cognitively, as well as visually and auditorily.
Much of the discussion of next generation deaf telecommunication systems can act as a prelude to the discussion of cross disability telecollaboration systems. Many of the same infrastructure issues, as well as technical advances, apply here as well, both for the individuals who are deaf in this cross disability interaction and also for individuals with other disabilities as well. The fact that the telecollaboration system injects electronic mediation between the different parties opens up a number of potential benefits or advantages.
- Individuals with speech disabilities could use any method (e.g., speech, speech synthesis, a voice interpreter, printed text, etc.) to generate their output and then have the system translate that output into speech or sign as appropriate for the message receiver. They could also choose to have themselves presented naturally, or have it appear as if they are speaking (or signing, etc.) to others.
- Individuals who are blind could have all communications changed into auditory form.
- Individuals who are deaf blind could have all information presented in Braille.
- Individuals with language comprehension problems could have the communications translated into language that they could understand. There is a danger that individuals would not receive the same information as the other call participants. However, this method could allow individuals with comprehension problems to receive a much higher percentage of information than if they were to listen to the conversation with words and language structures which were beyond their comprehension. For many activities such as making a doctor’s appointment or ordering a pizza, they may, in fact, be able to comprehend all of the information if it is presented in a form and vocabulary appropriate to their abilities.
Most of the translation services today would need to be carried out with human assistance, although algorithms currently exist for handling simple business charts and other stereotypic data presentations. Where visual presentations are made simultaneously with auditory speech, a problem may arise both for individuals who are blind (who would then have two auditory channels to deal with) and for individuals who are deaf (who would have two visual channels to deal with simultaneously). In both cases, a strategy could be used whereby the individual froze one visual channel (or one auditory channel) while they attended to the other. The individual who is blind could freeze the real time discussion while they listened to the audio description. They could then unfreeze the audio discussion and listen to it on a delayed basis. By speeding up the playback, they could listen to everything that was said at a higher than normal rate so that they would catch up to the real time discussion fairly quickly. Similarly, someone who is deaf could freeze the visual presentation while they read the captions or watched the sign language interpreter. They could then unfreeze the visual presentation and view it without having to simultaneously view the sign language interpreter. By speeding things up they, again, could catch up to real time in a fairly short period.
As long as there is not heavy co-presentation and as long as the individuals who need to freeze the information in one channel to proceed to the other can deal with rapidly presented information and accelerated playback, the above scenario will work. For individuals who would have difficulty keeping up with such a strategy, problems will arise in comprehending all of the material in real time without any accommodation by the other members of the telecollaboration. Similarly, individuals who take longer in generating text will be at a disadvantage in injecting information into real time discussions. As we develop telecollaborations across continents, however, where individuals have differing work and sleep cycles with respect to each other, we may develop new forms for asynchronous telecollaboration which may facilitate participation by individuals who would otherwise have trouble with fully synchronous real time dynamic interaction situations.
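The freeze-and-catch-up strategy described above can be quantified with simple arithmetic: if playback is paused for P seconds and then resumed at r times normal speed, the lag closes at (r - 1) seconds per second of playback, so catching up takes P / (r - 1) seconds. A minimal sketch (the function name is ours, purely illustrative):

```python
def catch_up_time(pause_seconds: float, playback_rate: float) -> float:
    """Seconds of accelerated playback needed to return to real time.

    While paused for `pause_seconds`, the live stream advances that far
    ahead.  Playing back at `playback_rate` (> 1.0) closes the gap at
    (playback_rate - 1) seconds of lag per second of playback.
    """
    if playback_rate <= 1.0:
        raise ValueError("playback rate must exceed 1.0 to catch up")
    return pause_seconds / (playback_rate - 1.0)

# e.g. a 30-second pause, replayed at 1.5x normal speed:
# the listener catches up to the live discussion after 60 seconds.
```

This also makes the limitation noted above concrete: the closer the usable playback rate is to 1.0 (because faster speech or signing becomes hard to follow), the longer the catch-up period grows.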
The above discussions extend beyond just telecollaboration, which also encompasses such areas as tele-rehabilitation, tele-visiting, and telepresence. As noted above, the ability to electronically mediate, convert, and supplement our interactions may mean that, for individuals with disabilities, it will be much more effective to participate via telepresence than to interact in a face-to-face fashion.
8.2 Advances in Telecollaboration and Telepresence Technologies
Most of the technologies relating to this area were covered previously in the modality translation and telecommunication sections. They provide the building blocks and infrastructure for allowing individuals to converse freely, regardless of disabilities. To address access to presented materials, both the automated and human based "assistance on demand" mechanisms can be used. Advances in handwriting recognition combined with whiteboard technologies can allow individuals who are blind to have text on whiteboards read to them. Even stereotypic shapes like pie charts and bar charts can be reverse engineered back to data tables. For more complex visual presentations, human assistance on demand can be invoked. Similarly, speech to text can be used, combined with assistance (both automated and human) for language that may be difficult to understand.
As discussed earlier, a key issue in dealing with accessibility information is that it may occur simultaneously with other existing information. That is, the description of a visual may happen at the same time a person is talking, or captions may occur at the same time the individual has to be watching the motions of the speaker. Technologies similar to TiVo, ReplayTV, and UltimateTV could be used. These digital video recorders allow viewers to pause live television shows or even rewind a live presentation for up to 10 minutes or more. After viewing or reviewing the information, they can then jump forward to catch up, or play the show at an accelerated rate. With these systems, there is no sound while moving forward, and moving forward is at twice normal speed or more. A more sophisticated algorithm, however, could allow close to continuously variable speed with speech. An individual could freeze the program or rewind whenever needed and then play forward at an accelerated rate to catch up without missing any of the discussion. It could also be programmed to automatically skip forward, shortening any periods in which there was a long pause, a change of speaker, etc.
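The pause-shortening idea above can be illustrated with a toy model. Assuming the buffered recording has been segmented into speech and pause intervals (the function and data format below are hypothetical, not taken from any actual DVR product), a catch-up plan would shorten long pauses and accelerate everything else:

```python
def build_catch_up_plan(segments, rate=1.5, max_pause=0.5):
    """Return (playlist, time_saved) for accelerated catch-up playback.

    `segments` is a list of (duration_seconds, is_pause) tuples taken
    from the buffered recording.  Long pauses are shortened to
    `max_pause` seconds and everything is played at `rate` times normal
    speed, so the viewer catches up without missing any speech.
    Hypothetical sketch; a real DVR would operate on the media stream.
    """
    playlist = []
    buffered = sum(duration for duration, _ in segments)
    for duration, is_pause in segments:
        if is_pause and duration > max_pause:
            duration = max_pause          # skip most of the silence
        playlist.append(duration / rate)  # accelerated playback time
    return playlist, buffered - sum(playlist)
```

For example, ten buffered seconds containing a two-second pause, replayed at 2x with pauses capped at half a second, play back in 4.25 seconds, recovering 5.75 seconds of lag without dropping any speech.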
Tactile displays and the ability to project the different participants’ voices into different virtual positions in the room can help individuals who are blind, both in navigating information and in identifying speakers.
For additional information on document sharing interfaces, etc., see Chapter 16 (Vanderheiden, Zimmerman).
8.3 Disability Research, Issues, Questions and Challenges Regarding Telecollaboration
How will telecollaboration systems handle interaction issues such as:
- Cross disability accessible speaker identification and hand raising
- Document sharing and common document editing (with participants who are blind or who have severe physical disabilities)
- Voting (in a teleconference)
- Translation delays (and the extra burden it puts on the interaction)
- How will collaboration systems provide cross disability accessible paralinguistic and non-verbal information in a meeting?
- How can and should standards be developed or modified to address these issues?
9. Emergency and Disaster Communication
The events of September 11, 2001 tested the communication systems of the nation. In some aspects, such as the destruction of local telephone facilities in New York, there were serious disruptions to communication. In the area of access to communications, there were a number of successes.
Captioning of televised reports on all major national news outlets was provided. Deaf people who were evacuated from buildings were able to communicate with others via e-mail pagers. An organization in New York put up a website to inform and encourage communication within the blind community, just a few hours after the attack. The availability of information and communication tools operating in visual, auditory, and tactile modes was a vast improvement on conditions of the past, where there was nearly sole reliance on un-captioned television and radio for sharing emergency notices.
The area of emergency and disaster communications includes official emergency alerts, transmission of critical information for victims and others affected by the emergency, and interpersonal and inter-group communications for summoning help, checking with friends and family, and sharing information. All of these functions need to be accessible in order for people with disabilities to achieve equity in personal safety.
9.1 Accessibility Issues
The diversification of communication technologies has been an overall boost to accessible emergency communications. Between captioned television, Internet communications, telephone communications, paging and other wireless devices, and radio, the disabled American public has more options than ever before. However, since not all of these technologies are equally accessible, the failure of one system can disproportionately affect people with a particular disability; the remaining alternatives may not be accessible.
For example, telecommunications relay services were affected on September 11. The governor of Maryland closed the government, which meant that the relay service personnel were required to vacate the premises. There had been no plan for this contingency, and although the situation was resolved by having another center take over the Maryland calls, the vulnerability of relay service during emergency became clear.
The attack on the U.S. in September 2001 brought home the importance of access to emergency communications while people are outdoors. Thus it is critically important that mobile communications be accessible and that coverage of the various technologies be adequate no matter where people live.
Egress and evacuation are two of the most important emergency issues for people with disabilities. For someone who is blind, timely communications about how to escape a dangerous situation is particularly important, as it is for someone who uses a wheelchair and cannot exit by the stairs.
Sometimes disasters destroy the infrastructure that makes communication technologies function. Loss of electricity, telephone service, and cellular service can occur, for example, during a hurricane or tornado. In such emergencies, relief agencies must resort to such methods as loudspeakers mounted on vehicles for communication. Such communications are inaccessible to people who are deaf or hard of hearing.
Public policy has played an important role in making emergency communications accessible. Government requires that the Emergency Alert System be accessible, that people be able to call 9-1-1 on their TTYs, and that televised emergency information be provided visually. As the government grapples with the challenge of improving emergency and disaster communications, the issue of accessibility needs to continue to be addressed.
9.2 Advances in Emergency Communication Technologies
Systems for automatic location of an individual and communication of that location to emergency personnel are being rapidly developed due to FCC requirements that call for wireless carriers to be able to communicate precise location information to emergency service (9-1-1) providers by the end of 2005. During emergencies, it is common for people to be unable to identify their location due to injury or stress. For people with communication disabilities, the ability to automatically send location information could be particularly helpful. Similarly, for travelers who are blind, it may sometimes be impossible to describe their precise location. This service should greatly enhance the accessibility of emergency services.
Emergency alerts can be accessed by mobile phone or mobile data device, typically through subscription to an e-mail service. This capability could be improved if the recipient’s location information were integrated into the alert system so that only information about local (or nationally significant) emergencies and disasters would be communicated. Disability-specific information could also be provided, if needed and requested.
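The location-aware filtering suggested above is simple to state precisely: deliver an alert only if it is nationally significant or pertains to the recipient's current region. A minimal sketch, assuming hypothetical scope and region fields (not any actual alerting standard):

```python
def should_deliver(alert_scope: str, alert_region: str, user_region: str) -> bool:
    """Decide whether to forward an alert to a subscriber's device.

    Deliver if the alert is national in scope, or if its region matches
    the recipient's current (e.g., device-reported) region.  Field names
    are illustrative, not from an actual alert protocol.
    """
    return alert_scope == "national" or alert_region == user_region

# A tornado warning for Maryland reaches a user currently in Maryland,
# but not one traveling in New York.
assert should_deliver("local", "MD", "MD")
assert not should_deliver("local", "MD", "NY")
```

Disability-specific content (e.g., accessible egress instructions) could be selected by the same kind of check against a stored, user-controlled preference.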
Emergency services need to begin to keep pace with changes in technologies used by the public. Currently the only way to reach 9-1-1 is by phone or TTY. Text messaging should also be an alternative for communication with emergency service personnel. This would require either new standards for text messaging, or the emergence of a clear de facto standard from among the many protocols now being used.
As an alternative or supplement to pagers and mobile phones, vehicle-based communications systems, often labeled telematics systems, are used by some vehicle owners to assist in times of emergency or ordinary inconvenience. These services are not accessible to people with disabilities of hearing and speech, but if they were accessible, this would provide location-sensitive communications in emergencies. Captioning of radio announcements pertaining to emergencies is another possible approach to making updated emergency information accessible while in a motor vehicle. National Oceanographic and Atmospheric Administration (NOAA) Weather Radios with text display and alert features could also be used as a method of gaining access to information while driving.
New technologies for emergency text communication with the public in the absence of infrastructure are needed, such as satellite-based communications.
Access to sirens (which typically alert people in a rural area to a tornado in the vicinity) is needed, as are standardized alternatives for people who are deaf. Paging and mobile text services are often unavailable in rural areas, so other options are needed.
9.3 Disability Research, Issues, Questions and Challenges Regarding Emergency Communication
- How should the infrastructure be modified to allow emergency alerts to be sent to personal, portable communication devices?
- How should individuals not watching television, listening to radio or carrying personal devices receive emergency communications?
- Is it advisable to identify people by disability for localized emergency communications, and what messages would be appropriate to send?
- What mechanisms are needed to inform people of egress procedures based on their perception and mobility abilities?
- What changes are needed in the 9-1-1 system to support communication through devices other than phones? Which devices and protocols should be supported long-term?
- What policies and programs are needed to fill the gaps in accessibility of emergency communications?
- What mechanisms should be used, not only in policy but in implementation of technologies, for assurance that relay services will continue to operate in emergencies?
- How can vehicular telematic services be made more accessible?
- How can sirens be made universally accessible? Will they be replaced with more updated technologies, and if so will they be accessible?
10. Transition/Migration to New Technologies
10.1 Accessibility Issues
In many cases new technologies are opening up new opportunities for telecommunication for individuals with disabilities. For example, the rapid proliferation of text chat capabilities and various technologies is providing individuals who are deaf with new mechanisms and options for communicating with each other and with colleagues and friends who are not deaf. In some cases, telecommunication systems such as digital wireless require innovations to accommodate older technologies such as Baudot TTY. To avoid having to modify the digital wireless system to accommodate Baudot, some designers suggested having individuals who were deaf just move to newer digital text chat technologies. Since many of these technologies provide advantages over Baudot, this might initially appear to be a win-win situation.
However, there are two important factors that must be kept in mind. First, not all of the new technologies provide all of the capabilities of the older technologies. While they may provide some advantages they may also have limitations. For example, many of the text chat systems require that you have an Internet connection or that you subscribe to a wireless data service. Both of these require additional monthly payments that are not required when using a TTY over the standard phone line. Most individuals are not prepared to give up regular phone service. Standard phones are available in most homes and businesses and they provide a high degree of reliability.
Many of the digital chat mechanisms also do not allow simultaneous use of text and voice, thus preventing an individual who is deaf but can speak from using voice carryover (talking, and using the text mechanism only for receiving). TTYs have further practical advantages: many are battery powered, so they work in a power outage; they turn on essentially instantly when the phone rings, making it easy to answer a call; and they are highly reliable and do not crash (a crash causes you to go silent at the same time you are invisible), etc.
The second issue deals with compatibility with other users. There is a very large group of individuals who are deaf whose only mechanism for text communication is the Baudot TTY. As a result, it is essential that new text chat systems be compatible with Baudot TTYs in order to allow individuals who are deaf to communicate with this large deaf population. It is also important that the text chat systems that are evolving be able to work with each other. These new systems cannot act as a future replacement for TTYs if individuals who are deaf and who are using different technologies are not able to communicate and interact with each other. There is also great concern among many in the deaf community regarding any movement away from regulatory requirements for support of the Baudot TTY standard, unless the movement is to a new technology which will be similarly long lived and for which there will be similar regulatory guarantees; that is, guarantees requiring that telecommunication device manufacturers and service providers support and be compatible with the new technologies over time (or provide backwards compatibility) in the same fashion as has been provided for Baudot TTY.
Similar transition compatibility issues and inter-technology compatibility issues exist for all of the services described. Some of this may be alleviated when all communication is IP based. Such a system, using a common packet and communication protocol, could (if proprietary technologies are not employed) allow users not only to combine any arbitrary text, speech, and visual technologies and translation filters they need, but also to tap into gateways as needed to provide backward and forward compatibility between similar technologies over time. All this is based upon the assumption that common mechanisms will be used by all providers, that suitable translation mechanisms and gateways will be available free of charge (or at negligible cost), and that the technologies will all work together. Recent battles over proprietary instant messaging and similar technologies are not encouraging in this regard. However, the trend toward open standards and inter-compatibility, combined with the FCC's general diligence in ensuring open access regarding disability, does lend hope.
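The gateway concept above can be sketched in miniature: each legacy or proprietary text system is adapted to a common intermediate message format, so any two systems can interoperate through the gateway. All names and formats below are illustrative, not an actual TTY or chat protocol.

```python
from dataclasses import dataclass

@dataclass
class CommonTextMessage:
    """Hypothetical common intermediate format used inside the gateway."""
    sender: str
    text: str

def from_char_stream(sender: str, chars: list) -> CommonTextMessage:
    """Adapt a character-at-a-time (TTY-like) stream: buffer it into a message."""
    return CommonTextMessage(sender, "".join(chars))

def to_char_stream(msg: CommonTextMessage) -> list:
    """Adapt a message-based chat message back into a character stream."""
    return list(msg.text)

# A TTY-style terminal sends characters one at a time; the gateway
# delivers them as a whole chat message, and can do the reverse.
msg = from_char_stream("alice", list("HELLO GA"))
assert msg.text == "HELLO GA"
assert to_char_stream(msg) == list("HELLO GA")
```

The hard parts the sketch omits are exactly the research questions listed in this chapter: preserving character-at-a-time interleaving, turn-taking conventions such as "GA", and simultaneous text-plus-voice across systems that do not natively support them.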
10.2 Disability Research, Issues, Questions and Challenges Regarding the Transition to New Technologies
- What are the functions of a TTY? What is their relative importance? How do they compare to other text communication mechanisms?
- Can gateways play a role in allowing migration or coexistence?
- What are the regulatory implications of migration?
- What are the issues around guaranteed availability and long term support for text communication?
- How can the cross disability, cross modality collaboration systems described above evolve in an orderly fashion from today's technologies, with an optimized migration path between them?
- Will the systems evolve naturally or must they be forced, similar to digital television?
- Is a "try harder" approach sufficient to support the graceful transition from human to automated services (or from all-human to mixed services)?
11. Privacy and Security
11.1 Accessibility Issues
Telecommunication technologies can both increase and decrease an individual's sense of privacy. The level of privacy in communication depends on the technology, the situation and the parties involved. Today, email and Internet messaging applications are being used in situations that in the past may have involved a third party, such as a relay operator. Some consumers may consider person to person, technology-mediated communication without third party human assistance to be more private than a relay call. Others may feel that communication through network technologies is inherently non-private, given that system administrators and possibly others could potentially access email and text messages.
Some of the next generation telecommunications technologies described above provide new opportunities to capture, store and process conversations among individuals and groups. These capabilities could provide individuals with disabilities with the opportunity to communicate as never before. However, it is also important to recognize that these same capabilities could raise concerns about privacy. In today's security-conscious environment, companies and individuals providing modality translation services will have to implement policies regarding the ownership, storage and transmittal of communication that has been translated into one or more modalities.
Privacy is also a concern in areas beyond access to the conversation itself by others. As new services, such as modality translation, become commercially available, mechanisms will need to be put in place to make sure these services are available to consumers. Ideally, individuals who wish to use these modality translation services would not have to identify themselves as having a disability, although some mechanism for authorizing payment must be provided. Where modality translation services are to be paid for by a third party, clearly some mechanism needs to be in place to identify those people for whom third party payment for transcription is appropriate, to separate them from individuals who would simply like to have a transcript of a conversation that they could otherwise hear fine. This situation is not dissimilar from those that exist today, except that the ability to track, record, and compile data based upon Internet usage is so much easier.
In addition, use of such services might itself indicate a disability, depending on how they are implemented. Similarly, companies providing network-based services may ask consumers to create profiles that may, in effect, describe the characteristics of the user. The storage and misuse of such a profile could potentially compromise an individual's privacy. For example, a person could request a live audio description service in order to participate in a telecollaborative meeting either because he or she is blind or because he or she is away from the office and must participate via telephone.
11.2 Disability Research, Issues, Questions and Challenges Regarding Privacy and Security
- Can anonymous mechanisms be set up to allow anonymous payment?
- How can secure metadata servers be established to store and serve user preferences and needs?
- How can "random" synchronized keys be used to keep network sniffers from detecting usage patterns?
- Will existing and evolving privacy and security measures be sufficient to handle these issues or will they create barriers themselves?
- Will the evolving payment structures be flexible enough to handle diverse payment mechanisms while preserving privacy (e.g., a user calls a company and the company automatically pays for modality translation service, but does not know when it is being used by particular callers)?
- Will translation services need to be licensed and bonded?
- Can the vast pool of older Americans be tapped to provide some of these services, in a way that prevents fraud and abuse and preserves privacy?
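One way the "random synchronized keys" question above might be explored: the user and the service each derive a fresh pseudonymous identifier for every time interval from a shared secret, so usage can be authorized and billed without transmitting a stable identity that a network sniffer could track across sessions. This is a hedged sketch of the idea only; the function name and parameters are ours.

```python
import hashlib
import hmac

def rotating_id(shared_secret: bytes, interval_index: int) -> str:
    """Derive a per-interval pseudonymous identifier from a shared secret.

    Both the user's device and the service compute the same value for the
    current time interval (e.g., the hour number), so the service can
    recognize an authorized subscriber while an eavesdropper sees only an
    ever-changing token.  Illustrative sketch, not a vetted protocol.
    """
    msg = interval_index.to_bytes(8, "big")
    return hmac.new(shared_secret, msg, hashlib.sha256).hexdigest()[:16]

# The identifier changes every interval but matches on both ends:
assert rotating_id(b"secret", 41) != rotating_id(b"secret", 42)
assert rotating_id(b"secret", 42) == rotating_id(b"secret", 42)
```

A real deployment would also need replay protection and a key-agreement step, which is precisely where the licensing, bonding, and trust questions listed above come in.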
12. The Role of Industry Standards and Compatibility
12.1 Accessibility Issues
The key to almost all of the above advances will be compatibility and interoperability. The most powerful force to date in enhancing compatibility and interoperability has been industry standards. In some cases, these standards are set by a single industry or a single company and then adopted by all; in others, they are developed by standards organizations. Some are free and open standards that anyone can use without royalty; others are "RAND" (Reasonable And Non-Discriminatory), meaning anyone may license them at a reasonable royalty. In still other cases, technologies may be held as proprietary, with their use allowed only by strategic partners and/or with significant royalty payments.
While special technologies and services have operated using restricted technologies, our society has generally seen to it that the major backbone communications systems (phone and mail) use technologies that everyone is able to access and use. In order for the above visions to take place, similar treatment may need to be provided for not only voice and mail systems, but also for text conversations, video telephony, and IP communications systems of all types.
Yet, there may be places where proprietary technologies and standards can work well and interoperate.
12.2 Disability Research, Issues, Questions and Challenges Regarding Standards
- In which areas will mainstream standards naturally evolve that address the needs of individuals with disabilities?
- What mechanisms can be created to ensure that people representing the needs of people with disabilities can participate in the myriad standards activities?
- Can mechanisms be created to provide enough technical expertise to individuals with disabilities (or individuals with enough technical expertise be identified) so that individuals with disabilities themselves can participate in the standard setting activities?
- Can equitable mechanisms be identified for adjusting royalty rates for services which are much more heavily required by individuals with disabilities (or is a subsidization model better)?
13. Role of Regulation
Regulations have played a critical role in the accessibility of telecommunications. As is appropriate, regulations have been brought to bear only when natural market forces were not sufficient to cause telecommunications systems to become or remain accessible. As technologies evolve, the types and forms of regulation will change: some regulations will no longer be needed, while new ones may be required. Today, most regulations are tied to technologies rather than functions. As new technologies take over old functions, the regulations often become detached from the functions they were set up to protect. In some cases, such as the Telecommunications Act, many of the non-disability-related regulations were established to control situations in which there would be no competition. New technologies may provide that competition, so many of those regulations are not required for the new technology. When accessibility regulations are piggybacked on legislation dealing with competitive regulation, a disconnect can result: the competitive regulations do not apply to a new technology, but the accessibility regulations are still required.
In this process, backward compatibility is essential. Telecommunications and television systems have needed to support older mainstream technologies as new ones were introduced (e.g., supporting black-and-white TV as color was introduced, and supporting dial phones as push-button phones were introduced). Similarly, new telecommunication systems need to support older assistive technologies until people have moved off of them. Complicating this somewhat is the fact that many people with disabilities either have difficulty paying for new technologies or must rely on third-party payers who do not allow continual technology updates. Thus, assistive technologies may have a much longer lifetime than their mainstream counterparts.
Some current trends toward open systems and interoperability may reduce the need for some types of regulation. In other areas, the importance of regulation in ensuring open, accessible, and compatible systems seems evident.
The future of telecommunication for individuals who have disabilities is one of tremendous potential. If this future plays out, individuals with disabilities will be able to communicate and interact better via telecommunication than they can in person; telecommunication strategies and techniques may then facilitate face-to-face interactions as well. To realize this potential, however, investment in research and development, not only in mainstream technologies but particularly in areas and problems specific to disability, must increase significantly. People who have disabilities face modality translation, interaction, timing, and backward compatibility problems that mainstream users will not face and that mainstream technology R&D will not address. Thus, while people with disabilities stand to gain tremendously from advances being made by and for mainstream industry, these advances by themselves will not necessarily address all disability needs and are likely to create significant new barriers if not implemented properly. Research and development, accommodations within standards, and some regulation are all likely to be needed. If the mainstream technology trends of the last ten years continue and R&D in the disability area is increased to catch up, then ten to fifteen years from now we will see a quite different telecommunication world for those of us who have disabilities.
This work was supported by the National Institute on Disability and Rehabilitation Research, U.S. Department of Education, under grant #H133E990006. The opinions contained in this manuscript are those of the authors and do not necessarily reflect those of the Department of Education.