State of the Science: Access to Information Technologies

  • Title: State of the Science: Access to Information Technologies
  • Publication Type: Conference Proceedings
  • Authors: Vanderheiden, G., & Zimmermann, G.


Gregg Vanderheiden and Gottfried Zimmermann

Vanderheiden, G. and Zimmermann, G. (2002). State of the Science: Access to Information Technologies. In J.M. Winters, C. Robinson, R. Simpson and G. Vanderheiden (Eds.), Emerging and Accessible Telecommunications, Information and Healthcare Technologies (pp. 152-184). Arlington, VA: RESNA Press.

Note: All references to chapters contained in this document are references to Emerging and Accessible Telecommunications, Information and Healthcare Technologies.

1. Introduction

This chapter is based upon a State of the Science Exchange, held April 1-2, 2001, at the Association for Computing Machinery's Computer-Human Interface Conference (CHI 2001) in Seattle, Washington (see Figure 1), the ACM1 conference "Beyond Cyberspace" [1], held on March 12-14, 2001, in San Jose, California, as well as preparatory and follow-up research performed by the Trace Center.

A major purpose of this chapter is to provide an introduction to the area of IT accessibility for new researchers interested in entering this area. The chapter therefore provides an overview of the major factors involved in providing access to information technologies, an assessment of the current state of the science as it affects access to information technologies, and identification of some key areas, topics, and issues needing further research.

The topic of access to information technology systems is extraordinarily broad. It encompasses many functions, including input, display, control, network, and interconnection. It also involves a wide range of application areas, including work stations, personal systems, shared systems, and public information and transaction systems. Since many of the issues are common across these application areas, while others are unique to particular areas, the chapter has been organized to first lay out the fundamental elements around accessibility and newly emerging technologies. It then brings these elements together in a discussion across disabilities, technologies, and applications, to highlight the implications of the advances and some unanswered questions. This is done by presenting the information in three phases (each in its own section):

  • Dimensions of the Problem - highlighting the different areas that information technologies impact on the lives of individuals with disabilities, as well as the different constraints or requirements of these dimensions.
  • New Technology Overview - highlighting some of the technologies which are impacting the accessibility of information technologies. These technical advances are introduced prior to the issues discussion, because their existence provides a backdrop to the disability issues discussed, and because many of the technologies discussed impact multiple accessibility areas. They are only discussed briefly due to space constraints, but a reference to at least one example or research group is provided for each, in order to allow the reader to explore the topics further. Additional information can be found online at
  • Disability Access to IT: Advances and Issues - addressing the issue of disability access, along key accessibility dimensions for information technology. This section draws on the background from the two preceding sections, and discusses recent advances, current status, and open issues in each of these areas.

Figure 1. Participants in the 2001 State of the Science Exchange
on Access to Information Technologies
at the Computer Human Interface Conference (CHI 2001)

Approximately 40 consumers, researchers, policy makers, and industry representatives participated in the State of the Science Exchange held at CHI 2001. Presentation and panel topics discussed over the course of the two-day meeting included Access Applied in Different Domains, Methods for Researching Accessibility, and Future Challenges, including Advanced User Interfaces, the Semantic Web, Brain-Machine Interfaces, and Haptics.

Consumers and non-profits:

Curtis Chong, National Federation of the Blind; Matt King, Accessibility End User Advocate at IBM; Chika Sekine, Universal Design Institute for Information Technology (Japan)


Government:

William P. LaPlant, Jr., U.S. Census Bureau [2]; William Peterson, NIDRR (U.S. Dept. of Education); Rachel Wobschall, State of Minnesota


Industry:

Marney Beard, Sun Microsystems; Shelly Caldwell, Pacific Intermedia; Colin M. B. Crehan, IBM Global Services; Mary Czerwinski, Microsoft Research; Jodi Goettemoeller, Hewlett Packard [3]; Vicki Hanson, IBM T.J. Watson Research Center [4]; Shawn Henry, Optavia Corporation; Pam Holmes, Ultratec, Inc.; Sutha Kamal, Rogers AT&T Wireless [5]; Greg Lowney, Microsoft Corporation; Beth Meyer, Qwest Communications; Michael Muller, Lotus Development Corporation

NIDRR-sponsored centers and other university faculty and staff:

Thomas J. Armstrong, University of Michigan-Ann Arbor (RERC on Ergonomic Solutions for Employment); Tim Berners-Lee, Massachusetts Institute of Technology (World Wide Web Consortium)[83]; Wendy Chisholm, Massachusetts Institute of Technology (World Wide Web Consortium)[20]; John Gardner, Oregon State University (Science Access Project); John Goldthwaite, Georgia Institute of Technology (ITTATC); Judy Jackson, Stanford University (Archimedes Project); Simon Levine, University of Michigan (RERC on Ergonomic Solutions for Employment); Alan F. Newell, University of Dundee (Scotland) [6]; Wendy Porch and Jan Richards, University of Toronto (Adaptive Technology Resource Centre) [7]; Mandayam A. Srinivasan, Massachusetts Institute of Technology (The Touch Lab); Jutta Treviranus, University of Toronto (Adaptive Technology Resource Centre); Vernor Vinge [8], San Diego State University; Norman Williams, Gallaudet Research Institute

University Graduate Students:

Sandra Kogan, University of Massachusetts-Lowell [9]; Victoria Nilsson, Viktoria Institute (Sweden) [10]

Staff and consultants of the RERC on Information Technology Access:

Kitch Barnicle, Chris Law, Gregg Vanderheiden [11] [12], and Gottfried Zimmermann [12], Trace R&D Center; Al Gilman (consultant); Jim Tobias, Inclusive Technologies

2. Dimensions of the Problem

In addressing the issue of access to information technology by people with disabilities, it is important to note that there are different types of information technology and different environments in which it is encountered. The strategies that can be used to address accessibility vary across these dimensions in important ways. Techniques or approaches which may be effective in one area may not be effective or applicable in another. Also, strategies that may work for some individuals with disabilities may not be appropriate or effective for other end user groups.

Information technologies also take a wide variety of forms. In the beginning, information technology was confined to a narrow area, dealing with access to computers (usually mainframe computers). This has evolved rapidly over the years, however, and information technologies now permeate all of our environments. In order to encompass the full range of technologies appearing in the work place, recent legislation (Section 508 of the Rehabilitation Act, as amended) coined a new term, "electronic and information technologies" (E&IT), to encompass the broader range of computer, communication, duplication, and other information technologies [13] [14]. Although Section 508 covers only IT in the federal government, information technologies also extend into the home [2][9][5]. This involves not only the computer in the home, but information services delivered via television, radio, Internet appliances, audio-visual equipment, etc. Appliances we had not previously associated with information technology (e.g., toasters, refrigerators, ovens, coffee makers, doorbells, security systems, alarm clocks, etc.) are also now being connected to the Internet and to home information and control systems [2]. At a recent human-computer interaction conference, a keynote speaker even discussed giving common household objects, such as a stapler, an Internet address so that they could be easily tracked [15].

For the purpose of this chapter, the term "information technologies" will include all technologies which allow individuals to be able to create, seek, and manipulate information, as well as technologies that would be controlled through information and network technologies. Since disability access deals mostly with interface and interconnection issues, and not with the end products, this chapter will not explore the myriad of technologies and environments which might be accessed, controlled, etc. through information technologies. Instead we will focus on what most impacts the ability of people with disabilities to have effective access to information technologies.

2.1 Personal Work Stations

The term "personal work stations" is used here to include computers and other information technology that does not move about with an individual, that are essentially dedicated to or assigned to the individual, and into which the individual can install special adaptations and/or modifications. Work stations have the advantage of allowing the individual to attach special accessories and install special software in order to meet their particular needs. In some cases, the actual hardware itself may also be physically adapted to match the individuals' abilities. This ability to "tune" the IT to their abilities is important, since it is often essential that the individuals not only have access to the information technology, but that they have efficient access in order to be productive and competitive in education, employment, and other settings [16]. Being five times slower than everyone else when operating a public building directory can be severely inconvenient. But being five times slower than everyone else when writing or carrying out other work duties that consume a significant portion of the day can prevent competitive employment or advancement.

In some cases, organizations may have such restrictions (for security or other reasons), even on personal work stations, that make it difficult or impossible to install assistive technologies or personal adaptations [17]. In these cases, personal work stations may resemble shared or public work stations in their constraints.

2.2 Shared Work Stations

In some cases, an individual must share a work station or a piece of information technology with others. This may happen in the office, where some information technology may be personal and other technology may be shared. Or it may happen in the home, where technology must be shared among family members. In these situations, some types of modifications can be used if they can be easily called up and dismissed or ignored.

Currently, however, many assistive technologies change the basic operation, configuration, or character of technology once they are installed. Other modifications may involve physical changes to the technology and some modifications interfere with each other. Thus the installation of an assistive technology for one individual may interfere with or preclude the installation of AT for another. Adaptations which fall into any of these three categories may work well on personal work stations, but are not well suited where information technology must be shared.

2.3 Public Technologies

Public systems such as ATMs, kiosks, fare machines, etc. differ from personal work stations in that it is usually not possible to modify or adapt them today to meet individual needs. Furthermore, individuals do not know what types of public technology they will encounter, or when. It is therefore important that public technologies be usable by people who have disabilities when they encounter them, and with whatever assistive technology they have with them (and are allowed to use). Today that essentially means that public technologies must be directly usable without assistive technologies. This presents some unique problems and challenges. These include the identification or development of strategies that

  1. can be economically incorporated in a very wide range of technologies,
  2. are commercially practical to incorporate,
  3. can provide access for individuals with a very wide range of different types, degrees, and combinations of disabilities,
  4. do not increase the complexity of the device for those with cognitive disabilities, and
  5. do not significantly impact (except positively) the usability and effectiveness of the product by everyone who does not need the feature(s).

In the future, it may be possible for individuals to change the interface on demand by either using another device to control the public system, or by using virtual assistive technology, as discussed below. As this becomes possible, public systems can be made accessible to individuals with more severe or multiple disabilities, including deaf-blindness [2][18]. However, the need to build direct access into public technologies will remain for some time. A large majority of individuals experiencing functional limitations fall in the mild-to-moderate range and do not typically have or use special interface devices [19]. This includes the growing population of people who are older with functional limitations. The need to accommodate this group through accessible design of public systems will likely continue for the foreseeable future.

2.4 Personal / Wearable Technology

"Personal / wearable technologies" are technologies that a person will essentially always have with them. Like glasses, wheelchairs, hearing aids, etc., they become general purpose tools that an individual uses as an extension of themselves and their abilities. Technologies of this type fall into three broad categories (although they may overlap): specific assistive technologies, mainstream technologies used as assistive technologies, and mainstream technologies used as mainstream technologies.

Specific assistive technologies are those that are used by an individual who has a disability, but which would not ordinarily be used by people who do not have disabilities. A Braille display, a hearing aid, and a speech-to-sign language device are examples. Because they are designed specifically for people with disabilities, they can be designed to match both the needs and abilities of individuals having particular types (and/or combinations) of disabilities. The biggest barrier with these devices is the relatively small market size and the high cost to develop new technologies. Accessibility is generally not an issue, although individuals with multiple disabilities may find that one of their disabilities (e.g., a physical disability) prevents them from using some devices that are designed for people who have their other disability (e.g., deafness).

In some cases mainstream products (if they are accessible) can be used as assistive technologies (e.g., a PDA used as a memory aid, or a computer used by someone who cannot otherwise write). Access to mainstream personal technologies is also important to allow individuals to access and use the same personal technologies that everyone else does in the workplace, school, and community. In some cases, activities in the workplace may occur only via such technologies.

Access to mainstream personal technologies is similar to access to work stations. If the individual owns and/or can modify the device, they can adapt it with hardware or software to meet their needs. The biggest barriers in this area usually center on the openness (or lack of openness) of such mainstream personal technologies, and the support for these technologies by the assistive technology vendors. Because of their smaller size, many of these systems have not had the connectivity or the full-featured operating systems of the larger computers. This has made them hard to connect to or modify. There is also a larger diversity of operating systems and customization of operating systems. This diversity causes the already small and segmented market to be even more fragmented, making it less practical and less profitable for AT vendors to develop adaptations for these types of devices. The low cost of these devices also makes the necessarily higher cost of assistive technologies and adaptations seem unreasonably large [19].

Offsetting these factors is the interface flexibility these systems are beginning to exhibit, and the inherently multi-modal interface needs of all (mainstream) mobile technology users.

2.5 Web-Based / Virtual Technologies

Much of the previous discussion has focused on technologies which are hardware in nature. Their interfaces are physical, and they have physical displays. Increasingly, information technologies are taking the form of web-based or virtual technologies. Although these technologies do not have any physical interface themselves, most make assumptions about the type of hardware which will be used to present them, and are designed in such a fashion that they do not work except on particular types of hardware (e.g. a touch screen or visual display or fine resolution pointing device or speaker). Sometimes they require particular software systems as well (e.g. a particular operating system or browser). Some interfaces are very interactive, use complex navigational structures, or make extensive use of animations that require specific senses, physical or cognitive abilities, or response times [20] [21].

When these technologies are used to create public services, such as information services, on-line stores, or even virtual products for sale to the public, the characteristics of their interface design and interface assumptions become critical to the ability of individuals with disabilities or functional limitations to be able to access and use them. When companies use such systems within their company (or design their own intranet applications), the design and interface assumptions can determine the ability of individuals to be employed, to advance, or to participate in particular job functions.

2.6 Entertainment and Edutainment Technologies

Another emerging area is entertainment and edutainment technologies. The ability to engage in recreation is important to health and welfare. And the same technologies that are used for entertainment are increasingly being used for both home-based and school-based educational activities. These technologies, however, often have the most interactive interfaces, make heavy use of animation, and place the greatest sensory, cognitive, physical, and response time demands on the user.

2.7 Impact of User Characteristics on Accessibility

There is an interaction between accessibility and user characteristics which goes beyond the specific type of disability that an individual may have (and which is discussed in Section 4 below). These factors can affect the viability of various solution strategies for providing access and need to be kept in mind in considering accessibility approaches today and in the future.

Employed/Funded Users -- If individuals are employed or are connected with a mechanism which can fund their assistive technologies, then access which is based on assistive technology and personal modifications can be a viable approach. In some cases employers will pick up these costs for their employees at work. Individuals with good jobs may be able to afford some assistive technologies themselves for home and other environments. Third party payers may also cover expenses for certain types of assistive or adaptive technology, particularly if it is connected to employment or health.

Unfunded IT Users -- Many individuals with disabilities are finding themselves unable to use the many (mainstream) technologies that are increasingly pervasive in all aspects of their lives. This includes many individuals who are not connected to third party funding mechanisms, people for whom employers do not pick up adaptive costs, and almost everyone with a disability who needs adaptations in order to use the products in their home and community environments [19]. This can include things as fundamental as the thermostat and the oven in their home or apartment. Compatibility with assistive technologies is of no benefit to those who are unable to afford such technology, keep it in repair, and replace it often enough to keep up with the rate of technology change around them. This category unfortunately also includes some individuals who are employed but whose salary is insufficient to cover the cost of assistive technologies on top of the costs for basic food, housing, and the extra expenses of having a disability.

Elderly Users -- Another major category of user is elderly people experiencing disabilities and/or increasing functional limitations. This group is also less able to take advantage of assistive technologies as a solution strategy for accessibility. The cost of assistive technology is prohibitive for many, and there are few, if any, third party payment programs for assistive technology that is not medical in nature. Another factor stems from the gradual and continual onset of disability as one ages. Assistive technologies are initially unnecessary, and human nature and psychological barriers make it hard for individuals to understand or accept that they need or could use assistive technologies as their functional limitations gradually increase. This group is also largely unaware of available assistive technologies, and the continual decline of their abilities means that the assistive technologies or approaches they use must change over time. All of this comes at a time in life when people have difficulty dealing with changes in lifestyle and technology, and they are then faced with trying to learn how to use technology to access technology which was designed to be operated with a different interface [22] [23] [19].

2.8 Need for Systematic Documentation of HCI Research in Accessibility

On a more abstract, human-computer interface level, there are common problems, and potentially common solutions, across the different technical domains, from personal work stations to entertainment and edutainment technologies. Results in Human Computer Interaction (HCI) research need to be applied more quickly in the different domains by industry. However, it is hard to compare and evaluate existing research results because of the lack of common documentation formats and processes.

The need for the development of a comprehensive knowledge base in the domain of HCI has been raised, to provide the electronics and information technology (E&IT) industries with a tool needed to support true universal design for intelligent systems, from the perspective of meeting the various needs and preferences of people with disabilities [2]. Such an effort would include an exhaustive review of existing research materials, the development of ontologies and/or taxonomies, and of common ways for measuring and reporting on usability and accessibility related aspects [24].

3. New Technology Overview

This section reviews advances in technologies that can directly impact the accessibility of information technologies in the future. This impact can take the form of:

  • new and better techniques for interfacing with individuals who have disabilities;
  • more flexible devices to accommodate people with different disabilities;
  • better connectivity to assistive technologies or special interfaces;
  • lower cost technologies;
  • interfaces on mainstream products or activities which are harder to access;
  • new barriers to information access;
  • new problems for or with mainstream users where the solution creates disability access barriers.

As mentioned above, since many of these items will affect multiple dimensions of disability access, the disability access implications will not be discussed directly in this section, except for occasional brief contextual comments. Instead, their implications will be discussed in the context of the disability access dimensions in Section 4 of this chapter. Due to length considerations, the topics will be covered only briefly, often in annotated outline form, with links to examples.

3.1 Display/Output Technologies

Flat Panel Displays - These are replacing CRT displays, eliminating the large electromagnetic fields created by CRT yokes, though with increased radio-frequency (RF) emissions.

Dropping Cost of Large Displays - Enabling use of naturally large, crisp displays with higher contrast and better luminance. Also, new 'electronic ink' technologies are creating low cost re-writeable displays [25].

Paintable LCD Panels - Promising extremely low-cost, flexible displays which may begin appearing everywhere and on everything, from walls to disposable containers, to greeting cards and even cloth [26].

Heads-Up Displays/Eyeglass Displays - Which can be worn by a user and project a virtual display of any size (and increasing resolution) in front of the individual [27]. The display may move with the head or appear stationary but moveable (similar to a steady-cam effect or a lockable, moveable display) [28]. Some technologies paint the image on the back of the eye, so it is always in focus [29] [30].

Three-Dimensional Displays - Allowing the display of information which is more realistic and allows greater visualization [31] [32].

Virtual, Immersive Worlds and Avatars - Which allow an individual to directly move about in and explore a virtual (all-visual) world [33].

Virtual Altered Reality - The ability to move about in an unrealistic environment. This includes the ability for an individual to grow very large, to explore things on a grander scale, or to shrink and change their visual senses to explore simulated or real environments on a miniature or subatomic scale. It also includes the ability to view abstract objects as real objects, such as navigating through files, folders, and information spaces as if moving through objects in space or displays in a hallway [34] [35] [36].

Augmented Reality - Mapping projected virtual information or objects on top of the objects in the environment to enhance the information in the natural environment, or to supplement it with additional information. This is usually used in conjunction with a heads-up display [37][28].

Kinesthetic Feedback Devices - Force feedback mice and thimbles have been developed, as well as exoskeleton joints. In some cases they can simply resist movement; in other cases they can actively drive [see Chapter 15, Reinkensmeyer et al.]. These are beginning to provide the ability to sense gross physical shape.

Speech Output - The last five years have seen major advances in speech output along four fronts. First, the quality of synthesized speech has advanced dramatically [38]. Output which has been adjusted for intonation now sounds like a human being speaking over a moderate-quality cell phone connection. Second, automatic pronunciation is advancing rapidly [39], although human intonation patterns are complex enough that intonation and pronunciation errors still quickly tip off listeners; for accessibility purposes, however, intelligibility is more important than naturalness. The third area is the advent of low-cost commercial chips capable of digitized speech and speech synthesis. Digital speech chips are available in the $2 range, including one that contains other controller functionality as well [40]. Text-to-speech is available as a chip for $10, as of December 2001 [41]. The fourth area is the natural incorporation of speech-capable technologies in standard products. Cell phones, IP phones, computers, PDAs, and an increasing number of other smaller and lower cost devices include all of the hardware and much or all of the software needed to generate digitized and, increasingly, synthesized speech. And with the continual drop in circuit and memory costs, native speech capability is likely to be present in an increasing array of products. Already there are talking greeting cards that cost only 50 cents more than their non-talking counterparts ($3.50) of the same size, on the same rack [42].

Audio Displays - Research on earcons and auditory mapping of touchscreens [43] [44] has explored the use of sound to provide additional cues to information on screens. In addition, three-dimensional sound presentation has advanced to the point where it is possible for individuals who are blind to walk around in a "virtual building" (actually an empty airplane hangar) and identify the hallways, office doors, etc. as they pass them, in a manner similar to what they would experience when walking down a real hallway [45].

Dynamic Tactile Displays - Research on vibro-tactile and electro-tactile stimulation continues, though the resolution is currently far lower than that of permanent physical tactile displays or any of the visual display technologies [46] [47] [48] [49].

Permanent Physical Tactile Displays - Low resolution displays using variable-height pins and ferro-electric fluids can produce dynamic tactile displays [50]. Also, wax deposition has been used for static tactile displays [51].

Olfactory Displays - Although now out of business, DigiScents briefly brought olfactory displays out of the laboratory [52]. However, practical application proved elusive.

3.2 Input Technologies

Speech Recognition - Continues to advance in both speaker-dependent and speaker-independent forms. Although much progress is still needed, researchers are beginning to talk about "superhuman speech recognition," with a target of having machines recognize speech better than humans do [53].

Lip Reading - Using similar linguistic disambiguation techniques, research is progressing on lip reading, both with and without acoustic information [54].

Gesture Recognition - Using both physical transducers and purely image (single and multiple camera) analysis, researchers are transducing hand, arm, body, and facial movements, and using them for both command and control and text input [55] [56]. Even the use of "smart dust" has been advanced as a means for creating a virtual network of wirelessly connected transducers [57].

Pen-Based - Advances in handwriting recognition coupled with digital pens and digital ink are leading to a breakthrough in the use of the pen, not only for command and control, but also for text input. Pen input techniques include both the use of a pen on a sensing surface (e.g., graphics tablet or tablet computer) and the use of pens on standard or special digital paper [58] [59].

Special Keyboards - Specialized keyboards of almost infinite variety have been proposed, including one-hand keyboards [60], chordic keyboards [61], a glove which senses finger spelling [62], cell phones with miniature keyboards [63], eyegaze operated keyboards [64], keybowls [65], hand-mounted sensors which allow one to type in space on an imagined keyboard [66], and a device which projects an image of the keyboard on the table in front of you, which you can type on [67].

Direct Brain Control - Initial work in this area centered around using signals from the brain to operate simple switches [68]. Recent work involves direct pointing at targets under direct brain control [69] [70] [71] [72]. This can be used both for command and control and for text input. (Direct language-thought control is also being sought.)

Biometrics - Increasingly, biometrics are being used for identification. This includes fingerprint, retinal scan, and voice print technologies.

3.3 Changing Form of "Documents"

The nature of documents has also been evolving rapidly. Information used to be stored on printed pages and in books. Access to the information was generally linear or quasi-linear in nature. All of this is changing [73] [74].

E-Documents - More and more documents are being distributed in electronic form. This has the potential to provide information in a form which can be more easily presented in different modalities (visual, print, Braille, etc.). However, in order to preserve digital rights and prevent copying, features are being built into electronic documents to prevent them from being viewed as text by other software - including screen readers.

E-Books - Development of e-books at first progressed along relatively accessible formats. Linkages were formed with the Digital Talking Book efforts and standards. Publisher concerns about theft, however, have introduced mechanisms which lock the content of books away so that it cannot be accessed electronically except by the e-book reader. In addition, publishing houses sometimes sell the visual rights to books separately from the auditory rights. When this is done, the e-book viewers are prevented from presenting the information in any form except visual [75] [76].

The World Wide Web - The provision of information and services on the Web has had tremendous impact on the availability of both information and services to people regardless of geographic location. Much, but not all, of the information is also in forms which can be more easily presented in multiple formats (different size print, voice, Braille, etc.). Information which is machine-readable can also be machine-processed, and text summarization software is continuing to evolve.

Aural Style Sheets - Some content actually has both a visual style sheet and an aural style sheet, so that the information can be formatted for both types of presentation [77].
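The idea can be sketched in a few lines: the same document content paired with two style mappings, one for visual rendering and one for speech. The property names below are loosely modeled on the aural properties of CSS 2 style sheets but are illustrative Python, not actual CSS syntax.

```python
# Illustrative sketch (not W3C CSS syntax): one document, two style
# mappings -- one for visual rendering, one for aural rendering.

document = [
    ("h1", "Quarterly Report"),
    ("p", "Revenue rose in the third quarter."),
]

visual_style = {"h1": {"font-size": "24pt", "font-weight": "bold"},
                "p": {"font-size": "12pt"}}

# Aural properties loosely modeled on CSS 2 aural style sheets:
# a heading is spoken louder, at lower pitch, with a pause after it.
aural_style = {"h1": {"volume": "loud", "pitch": "low", "pause-after": "1s"},
               "p": {"volume": "medium", "pitch": "medium"}}

def render(doc, style):
    """Pair each element with the style properties for the output device."""
    return [(tag, text, style[tag]) for tag, text in doc]

visual = render(document, visual_style)
aural = render(document, aural_style)
```

The point of the separation is that the content never changes; only the style mapping handed to the renderer does.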

Visual to Audio Technologies - Technologies to present email, web pages, and other web content in audio format are being developed completely separate from the disability market. They are targeted at commuters and others who would like to read their email or documents, or access the Web, while their eyes are busy [78].

Scalable Vector Graphics - Even graphical information is evolving from being a "picture" to being an image which is made up of individual components. Furthermore, each of these components can be semantically described, and the components can be hierarchically organized. The result is both the ability to take apart and examine or highlight images by their components, and the ability to provide verbal counterparts to the visual components [79].
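A minimal sketch of this componentized-graphics idea, using SVG's standard `title` element to give each component a text alternative. The document and helper function below are illustrative; the hierarchy walk shows how verbal counterparts can be recovered from the component structure.

```python
import xml.etree.ElementTree as ET

# A minimal SVG in which each graphical component carries a <title>
# (SVG's standard element for a short text alternative).
svg = """<svg xmlns="http://www.w3.org/2000/svg">
  <g>
    <title>Floor plan</title>
    <rect x="0" y="0" width="100" height="60"><title>Kitchen</title></rect>
    <rect x="100" y="0" width="80" height="60"><title>Hallway</title></rect>
  </g>
</svg>"""

NS = "{http://www.w3.org/2000/svg}"

def text_alternatives(element, depth=0):
    """Walk the component hierarchy and collect each component's title,
    indented to reflect the hierarchical organization."""
    out = []
    title = element.find(NS + "title")
    if title is not None:
        out.append("  " * depth + title.text)
    for child in element:
        if child.tag != NS + "title":
            out.extend(text_alternatives(child, depth + 1))
    return out

tree = ET.fromstring(svg)
print(text_alternatives(tree))
```

Because the components are individually addressable, the same walk could equally highlight one component visually or speak only the subtree a user asks about.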

Interactive Documents - Whereas documents in the past used to be static presentations, documents are increasingly becoming interactive. This presents both opportunities and challenges for accessibility, since the documents themselves often have embedded interfaces and dynamic displays. Since the displays may involve simulations whose actions are determined by the user, it is not possible to know in advance what will happen, and thus verbal descriptions may be more difficult, or virtually impossible, in certain circumstances.

Live Documents - Often documents may be used in a meeting or activity where multiple authors are simultaneously editing or changing the document. Tracking all of the activity may require multiple senses and a wide field of view, both of which may be difficult or impossible for some users.

3.4 Other Important Dimensions and Trends in Information Technology

Computing Power - In order to understand the potential of information technology for creating adaptive interfaces, it is important to understand the tremendous growth in computing power. Moore's Law, which documents the exponential growth of computing power, is well known. However, computing power is now reaching a point for which there are few good reference points. One of the more interesting, developed by Ray Kurzweil, looks at the amount of computing power that can be purchased for $1,000. We are currently at a point where $1,000 will purchase the computing power of a dragonfly's brain. By about 2010, however, $1,000 is expected to purchase the computing power of a mouse. By 2020, $1,000 would purchase approximately 20 million billion calculations per second (20 billion MIPS), or approximately the computing power of a human brain. If this were to continue, by 2050 $1,000 would purchase the computing power of 10 billion human brains, or the population of the entire earth. With this tremendously increasing computing power will come new potential for intelligent, adaptive, and flexible interface design, as well as the ability to translate and re-present information. [8][80][81]
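The arithmetic behind such projections is simple compound doubling. The sketch below takes the two endpoints quoted above (one human brain's worth of computing per $1,000 around 2020, and 10 billion brains by 2050) and derives the doubling period they imply; the endpoint figures are from the text, the derivation is ours.

```python
import math

# Back-of-the-envelope check of the projections in the text: $1,000 buys
# roughly 2e16 calculations/second (about one human brain) in 2020 and
# the equivalent of 10 billion brains in 2050. Compound doubling gives
# the implied doubling period.

brain_cps = 20e15              # "20 million billion calculations per second"
cps_2020 = brain_cps           # one human brain per $1,000
cps_2050 = 10e9 * brain_cps    # 10 billion human brains per $1,000

doublings = math.log2(cps_2050 / cps_2020)   # doublings over 30 years
period_months = 30 * 12 / doublings

print(f"{doublings:.1f} doublings, one every {period_months:.1f} months")
```

The result (a doubling roughly every 11 months) is faster than the classic 18-month Moore's Law pace, which is consistent with Kurzweil's argument that the doubling period itself is shrinking.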

Semantic Web - The evolution of the Web into a semantic web will result in information which is presented in such a way that it can be interpreted and associated with other relevant information automatically (by machine). The evolution of a web which is both machine-operable and machine-comprehensible opens up entirely new possibilities for agent-oriented interfaces. Searches and activities which can only be carried out by web experts today may be commonplace for everyone with the assistance of intelligent, or semi-intelligent, web agents [82] [83].
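The core idea of machine-interpretable information can be illustrated with the subject-predicate-object triple model that underlies RDF. The tiny store and query function below are a sketch of that model; the vocabulary is invented for illustration.

```python
# Facts as subject-predicate-object triples (the data model underlying
# RDF), queryable by pattern. Vocabulary invented for illustration.

triples = {
    ("TraceCenter", "type", "ResearchCenter"),
    ("TraceCenter", "locatedIn", "Madison"),
    ("CHI2001", "type", "Conference"),
    ("CHI2001", "locatedIn", "Seattle"),
}

def query(s=None, p=None, o=None):
    """Return all triples matching a pattern; None acts as a wildcard."""
    return {(ts, tp, to) for ts, tp, to in triples
            if s in (None, ts) and p in (None, tp) and o in (None, to)}

# An agent can answer "what is located in Seattle?" mechanically,
# with no understanding of the prose the facts came from.
print(query(p="locatedIn", o="Seattle"))
```

Because the structure, not the presentation, carries the meaning, the same store can feed a visual display, a speech agent, or another machine equally well.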

Web and Network Services - Another important trend is the rapid evolution of web and network services. Web and network services take a wide variety of forms, from services which are invisible to the user to functions we currently achieve in other fashions, to new translation and assistance services currently not available in any other form. Where we currently purchase hardware or software to carry out some functions, in the future we may just purchase "web services" to achieve the same ends. The ability to instantly call upon services or functions wherever we are, and using whatever technology is nearby, can revolutionize our concept of technology and assistive technology. This rapid turnover can also either facilitate or greatly impede accessibility, depending upon how it is implemented.

Distributed Interfaces - Currently when you purchase a product, it comes with an interface. Furthermore, the interface is designed by the company who sold you the product. With web-based services and products, it is possible for a person to hold a device in their hand, or encounter one in public, where the interface they see before them comes from four different companies in four different states, or even countries. The physical device may have buttons or controls; it may then be running a browser, which is manufactured by a different company and which also has controls. It then displays a page of content in the browser window from a third company, who is responsible for the design of the controls on the page. The page may then contain content which it purchased from any number of additional vendors. Each of these pieces of content may have its own controls and interface on them. If the person is using a piece of assistive technology to access the page, then that introduces yet another layer of controls, all of which must be perceivable, operable, and compatible in order for the person to be able to access the system which they have encountered.

Wirelessness, Interconnectivity, and Interoperability - Recent advances in wireless technologies have caused an explosion of interconnectivity options. [2] Combined with the rapidly expanding options for wired interconnectivity (including high speed connection over the household wiring [84][85]), this opens up the potential for devices almost anywhere and of any size to be connected and to interoperate with each other. It also provides great potential for individuals who are using assistive technology to be able to easily connect to networks and other devices in their environment without having to physically locate their devices or manipulate any connectors. It also opens up the potential for access to any type of information in any location.

GPS - Global positioning systems now provide the capability, at relatively low cost, for individuals to locate themselves fairly accurately. Combined with other technologies, a person's exact position can be pinpointed, and position within buildings will become obtainable.

Ubiquitous/Pervasive Computing - Already it is possible to travel about town, to airports and to other public places, and find information technology access points. In some cities, wireless networking is available almost everywhere for those with mobile technologies to tap into. As the price of displays, processing, and networking continues to plummet, terminals will appear in every room of the house (except perhaps closets). In addition, computing and information technologies are being built into cars and appliances [86] [8].

Smart Spaces - Research in smart spaces goes one step beyond, and creates environments where the entire space is interactive and aware. As people come and go, the environment adjusts itself, providing for needs on request or automatically. Preferences and needs can be automatically detected and accommodated [87] [88].

Visualization in Education - A trend in education, particularly science education, has been toward the increased use of visualization and visual simulations. These are providing powerful new, motivating, and interesting ways of presenting and understanding information for those who can see clearly. However, massively parallel and detailed presentation of visual information is difficult for people with low vision, and visualization as a primary or sole means of presentation is a severe to absolute barrier for individuals who are blind.

Artificial Intelligent Agents - As the network services and networked devices are provided with interfaces which are perceivable, operable, and comprehendible by other machines, artificial intelligent agents can be used more generally and effectively. We are already seeing a trend in this direction, with such advances as the Semantic Web. Also, as artificial intelligent agents become more functional, more people will want to delegate some activities (e.g., "show me the different digital cameras in the $100 range that have good reviews") and let their agents present them with the results rather than trying a three-hour, mind-numbing hand search of the Internet, or having to travel to stores and deal with their limited selection.

As this occurs, systems will become naturally accessible. Any systems or services which are machine-perceivable, operable, and understandable are, by their nature, accessible, since a machine could re-present the information in a sensory form compatible with individuals with sensory disabilities, and could map the control into a form which was physically within the range of an individual with a physical disability. They would also be able to re-present the information for people with cognitive disabilities in ways that are not possible today.

4. Advances and Issues in Disability Access to IT

This section reviews advances that have been made in assistive technologies, the implications of the advances in standard technologies, and open issues and areas for future research. This section is broken into the following areas:

  • personal interfaces (by disability);
  • cross-disability public interfaces;
  • interoperability and interface element substitution;
  • complete interface interchangeability;
  • modality translation and interpretation;
  • modality-specific presentation;
  • new metaphors for interface design;
  • affordability and lose-ability;
  • technology-averse users;
  • security vs. accessibility; and
  • commercial practicality.

Since the discussions cover both existing and future technologies, those technologies or concepts that are not yet available are presented in a separate paragraph or described using verbs in the future tense.

NOTE: See also discussions in the interface section of the Telecommunication chapter immediately following.

4.1 Personal Interfaces

Blindness - The advances in speech synthesis, along with the rapid increase in computing power per penny [81], are having the greatest impact on access by people with disabilities to public information systems. Where the cost of building speech into products used to be prohibitive, it is becoming practical on ever less expensive systems [40][41]. The capability already comes free with the operating system being used on most next-generation public information systems [89], and speech is increasingly being built into personal portable technologies as well. This is allowing alternate speech interfaces to be built into products. Currently, speech access options for individuals who are blind can be found in kiosks, ATMs, voting systems, museum displays, and building security/intercom systems [90]. Global positioning systems are being used in conjunction with geographic data to provide location and navigation systems for individuals who are blind [91].

In the future, advances in digital imaging and character recognition (as well as the ability to tap into any arbitrary amount of computing power over the net) will allow individuals who are blind to scan their environment and have any text read to them. Using three-dimensional sound, the text could be presented spatially, so that it sounds as if it comes from the direction in which it is located. Size, color, and distance could be acoustically encoded as well. In addition, the individual could scan the area and then ask their device for the location of specific words or words in a particular category, such as, "Do you see the name of a men's store?", and advanced object recognition technology could recognize common objects in a person's environment. By using the information they can gather by scanning, along with information stored on the web, particularly when it is a semantic web, they could get specific answers to general questions without requiring a high level of artificial intelligence. Research in face recognition could allow individuals who are blind to identify those around them for the first time. Advances in tactile research, including hand, arm, and tongue-based displays, might allow individuals who are blind to have more information presented in different ways to facilitate parallel reception and processing of information. This is important for individuals who have lost such a high-bandwidth information source as vision. The advances in speech recognition and the drive toward conversational interfaces will also have a dramatic impact on interfaces for individuals who are blind. And as systems and services become machine-perceivable, operable, and understandable, they will, by their nature, become accessible, since a machine could re-present the information in a sensory form compatible with individuals with sensory disabilities.

Some Disability Research Issues, Questions, and Challenges:

  • How to separate important from incidental text
  • How to encode spatial information most effectively
  • Parsing visually presented information
  • How to generate natural language descriptions from 2D and 3D models
  • Decoding semantics encoded in spatial information on screen and in printed material
  • Enhancements needed for semantic web to accommodate functional limitations
  • Practical object recognition - and identification of practical and effective uses.
  • How to render spatial information tactilely - for example appropriate level of details
  • How to deal with parallel information on different sensory channels with all converted to one
  • How to best present and encode visual information tactilely and via sound.
  • When to choose tactile vs. auditory presentation, or both
  • Best methods for presenting mass parallel information.
  • Best query methods for people with different mental models of the environment - and different skills.
  • Development of Artificial vision
  • Design of information displays to match the artificial vision
  • Miniaturization, cosmetic design and operation, cost, etc.

Low Vision - The miniaturization of camera technology has made closed-circuit TV (CCTV) aids take on a whole new character. [92] Devices that used to fill half of a desk can now be carried easily and video magnifying telescopes can be carried in a jacket pocket. [93]

Future advances in heads-up displays will allow individuals to carry something in their pocket which will provide them with an equivalent of a full wall display. Technologies which can paint the image on the back of the eye [29] could be designed to display the image on those portions of the retina which are still intact. The advances in imaging, text, and speech could allow individuals with low vision to be able to look at any text and hear it read to them in their ear. Research in face recognition could also allow them to be able to identify people they can see, but not clearly enough to recognize.

Some Disability Research Issues, Questions, and Challenges:

  • Development of selective, prescriptive display on residual retina.
  • Methods to effectively combine residual vision with auditory techniques
  • Miniaturization, cosmetic design and operation, cost, etc.
  • (see above under blindness)

Deafness - Many of the problems faced by individuals who are deaf fall into the communication area and are covered in the Telecommunication chapter. However, sound is increasingly being used in information systems. Soon, advances in wearable displays, phased-array microphones, and speech and sound recognition could help create new tools that individuals who are deaf can use to track and present auditory information from their environment (as well as from information technologies) in a convenient and portable form. New sensing technologies, including gloves and image tracking, can allow individuals who are deaf to use finger spelling as an alternate mechanism for input [62]. With advances in language translation, this could extend to the use of sign language. Again, as systems become "machine-perceivable and understandable," AT will be able to re-present information in visual form.

Some Disability Research Issues, Questions, and Challenges:

  • How to use phased-array sound tracking to help locate sound sources
  • Emergency sound identification
  • Efficient access to emergency and alert systems in private and public spaces
  • Video sign recognition research
  • Development of good text to sign language translation
  • Direct speech to sign language translation with paralinguistic mapping
  • How to combine speech recognition and lip reading to get better accuracy
  • Miniaturization, cosmetic design and operation, cost, etc.

Hard of Hearing - The ability to have auditorily presented information simultaneously presented visually is of great advantage to individuals who are hard of hearing. In addition, new technologies, such as SMIL and SAMI, allow full synchronization of multiple visual, audio, and text presentations.

Some Disability Research Issues, Questions, and Challenges:

  • Visual augmentation to facilitate speech recognition.
  • Better coupling of auditory aids to auditory sources (see below)
  • How to make use of cochlear implants most effectively (e.g. sound transformations)
  • (see deafness area)

Physical Disabilities - For individuals with physical disabilities, the primary advances are coming in the area of interoperability (discussed below). However, the nascent area of direct brain interfaces could hold tremendous potential for individuals with extreme physical limitations. For individuals with severe physical disabilities, the new heads-up display technologies, combined with the interconnection capability (see below), can provide whole new interface approaches. Rather than always having to have a display positioned where they can easily access it, virtual displays and control panels could be projected on request. Sip and puff, eye gaze, head movement, and even direct brain control could then all be used to directly access the control panels. The control panels could be configured to match the task at hand, and network-based resources could be tapped to download appropriate interfaces to match new devices or tasks as they are encountered. Speech could also provide powerful new interfaces when the individual is capable of speech, and when speech would not be disruptive to the environment. The result could be much more efficient and tuned interfaces for individuals with more severe physical disabilities, and particularly for individuals with movement restrictions. This can include even individuals with "locked-in" syndrome, who have essentially no control of any musculature. Strategies for better dealing with individuals having movement interference, however, are needed. The potential of artificial intelligence to remap controls to the capabilities of individuals also needs to be explored.

Some Disability Research Issues, Questions, and Challenges:

  • Development of Direct Brain Interfaces
  • Optimizing control with Direct Brain Interfaces
  • Use of heads up displays for virtual control panels controlled by sip-n-puff, scanning, headpointing etc.
  • Virtual panel optimization for these different input strategies
  • Movement smoothing strategies for those with tremor or choreic movements
  • How can augmented reality mechanisms be used for more efficient interactions for people with physical disabilities
  • How to effectively use speech recognition for people with speech impairments

Cognitive Disabilities - Advances in low cost displays and electronics offer the opportunity to create products which can be safely deployed with individuals having cognitive disabilities (see affordability, lose-ability). Rapid gains in machine intelligence and language processing provide opportunities for the representation of the information in simpler form. This area is much easier to talk about, however, than to implement, and it is not clear how much of the mainstream research will be directly applicable, although it will be enabling. This is a key area where extensive research to achieve the potential will be needed.

Some Disability Research Issues, Questions, and Challenges:

  • Language level translation
  • How to automatically summarize text - on different language levels
  • Translation from text to picture or symbol - to replace or augment text
  • Translation of complex user interfaces into simpler and easier to use forms
  • Effective speech presentation for people with cognitive disabilities
  • Development of low cost reminder aids

Language Disabilities - An area that may be somewhat easier to address would be providing special interfaces for individuals with language-related disabilities. Imaging and character recognition could allow all text to be read quietly into the individual's ear whenever they looked at it. Language simplification, as it becomes effective, could also be used. Explanations of terminology, or even translation into other languages, will be accomplishable fairly soon.

Some Disability Research Issues, Questions, and Challenges:

  • Language translation applied to specific language deficits
  • Semantic Markup to enhance language level translation
  • Effective use of augmented reality mechanisms with audio
  • (See above regarding simplification and text reading.)

4.2 Cross-Disability Public Interfaces (Built-in)

As mentioned previously, the challenge here is in developing cross-disability interfaces which can accommodate a wide range of users and not interfere with regular access. The principal advances in this area have been around the use of flexible sets of integrated techniques [94]. One form involves the use of touchscreen displays to provide sequential interfaces, combined with speech output and tactile up/down, select, and help buttons [90]. The touchscreen allows for a very simple, direct, and sequential interface for individuals with cognitive disabilities, and can be combined with speech to allow access by individuals with low vision, or who have trouble reading (or cannot read) for any reason. The use of a dynamic display and sequential presentation also allows text to be larger on the screen for individuals with low vision, and allows the option of further font-size enlargement at the user's request. The up/down and select buttons allow access by individuals who are blind or who have physical, reach, movement, or control limitations which make consistent use of the touchscreen impractical.

Braille keyboards, speed list techniques (wheels, sliders, etc.) [95], and speech input can all be used to supplement the above core set of techniques to provide alternate methods for input and/or increased efficiency and convenience when the product size, design, and environment allow.

Similar techniques can be implemented on entirely button-based systems such as desk and cell phones [96]. Here the advances in low cost display technologies and electronics are facilitating the implementation of the flexible design of public systems. Electronic display of prices on grocery shelves has even been evaluated [97]. Research has also been conducted on the ability of public systems to identify the needs of a user as they approach a device so that the device can auto-configure its interface to match a user's needs or preferences [98] [99] [100].
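The flexible, sequential interface technique described above can be sketched as a single menu model driven by several input methods, with every change echoed through a speech-output hook. This is a hypothetical design sketch, not the interface of any specific product.

```python
# Sketch of a cross-disability sequential interface: one list of choices
# operable by direct touch or by tactile up/down/select buttons, with
# every change spoken aloud. The speech hook is stubbed with print().

class SequentialMenu:
    def __init__(self, items, speak=print):
        self.items = items
        self.index = 0
        self.speak = speak          # speech-output hook
        self.selected = None

    def touch(self, i):
        """Direct touchscreen selection of item i."""
        self.index = i
        self.speak(self.items[i])

    def down(self):
        """Tactile down button steps through the list sequentially."""
        self.index = (self.index + 1) % len(self.items)
        self.speak(self.items[self.index])

    def up(self):
        """Tactile up button steps backward through the list."""
        self.index = (self.index - 1) % len(self.items)
        self.speak(self.items[self.index])

    def select(self):
        """Tactile select button confirms the current item."""
        self.selected = self.items[self.index]
        self.speak("Selected " + self.selected)

menu = SequentialMenu(["Deposit", "Withdraw", "Balance"])
menu.down()        # speaks "Withdraw"
menu.select()      # speaks "Selected Withdraw"
```

Because every input path drives the same model and every state change is voiced, a sighted touchscreen user, a blind button user, and a low-vision user hearing reinforcement all operate the identical interface.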

Some Disability Research Issues, Questions, and Challenges:

  • When to use built-in vs. pluggable user interfaces
  • How to represent users' preferences for a user interface
  • How to implement user preferences in a secure and private way
  • Cross-disability interfaces for very low cost products
  • Tools to facilitate development and implementation within different product types

4.3 Interoperability and Interface Element Substitution

While built-in interface flexibility in public systems can provide free (to the user) and always-available access to all users (within the usability range of the interface), built-in access does not always provide the efficiency needed for work station applications [101]. It may also not provide the efficiency needed for public information systems for individuals with more severe disabilities if a significant amount of interaction or data entry is required. Direct access for people with multiple severe disabilities (such as deaf-blindness) may also be beyond commercial practicality. It is often, therefore, desirable and/or necessary (particularly for work stations) to have a mechanism to connect and use an interface element which has been optimized for a particular individual instead of the regular interface on the device.

Originally this was done through component substitution, or patching. With component substitution, the keyboard and/or mouse, for example, was replaced by another device which had the same physical and electrical connection as the keyboard. [102] [103] Sip and puff keyboards, eye gaze keyboards, voice recognition systems, single switch scanning devices, etc., were created, which emulated the keyboard and mouse, and plugged into the computer where the keyboard and mouse usually plugged in.[104] In this fashion, all standard software and functionality of the computer could be used.

On the output side, it was too difficult to interpret the video signals sent to the monitor. Software was therefore written which patched into the operating system and monitored text and graphics as they were written to the screen [105]. This information was used to build a database (the "off-screen model") which individuals who were blind could query, so that the information presented on the screen was also available via speech synthesizer.
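The off-screen model approach can be sketched as follows: a hook intercepts text as it is drawn, and the text plus its screen position goes into a parallel database that a screen reader can query. The class and method names below are illustrative, not those of any actual screen-reader API.

```python
# Sketch of an "off-screen model": drawing calls are intercepted as text
# is written to the screen, and the text and its position are kept in a
# parallel database that speech-output software can query.

class OffScreenModel:
    def __init__(self):
        self.entries = []           # (row, col, text) for everything drawn

    def on_draw_text(self, row, col, text):
        """Hook called whenever the system renders text to the screen."""
        self.entries.append((row, col, text))

    def line(self, row):
        """Re-present one screen line, left to right, as text for speech."""
        cells = sorted(e for e in self.entries if e[0] == row)
        return " ".join(text for _, _, text in cells)

osm = OffScreenModel()
osm.on_draw_text(0, 10, "File")     # drawn out of reading order...
osm.on_draw_text(0, 0, "Menu:")
osm.on_draw_text(1, 0, "Ready")
print(osm.line(0))                  # ...but queried in spatial order
```

Note that the model must reconstruct reading order from coordinates, since applications draw in whatever order is convenient; this fragility is one reason the field later moved to the cooperative APIs discussed below.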

Both of these strategies involved attempts to access information technologies without any knowledge or cooperation from the information technologies or software itself. [106]

More recently, information technology vendors and assistive technology vendors began working together in order to create interface mechanisms that were more stable on the operating system and software side, and more usable to assistive technologies. [107] [108] Initial progress was made based on pressure from the disability community. More recently, Section 508-based purchasing standards have tremendously accelerated the efforts in this area. Today most of the major operating systems, including open source software systems, have ongoing efforts to develop better assistive technology / information technology interoperability standards, or APIs [109][110] [111] [112] [113]. This work is considerably complicated by the fact that the different software and operating systems have significantly different architectures, so that a common API is not possible. Coupled with the fact that assistive technology vendors are relatively small and already have a fragmented market, the ability for the AT vendors to support multiple APIs and architectures is extremely limited.

An interchange standard for keyboards and mice (GIDEI, the General Input Device Emulating Interface standard) exists, which allows standard RS232 serial data to be converted to keyboard and mouse functions on standard computers. [114] The standard (under the name of "SerialKeys") has been built into the Windows operating system since 1995, and is available for the Macintosh [115]. The standard is also supported by AT vendors. The USB and evolving wireless USB [116] standards should eventually replace this, as AT vendors move to support these new standards - particularly when a general input device wireless standard gels.
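The serial-to-keyboard translation idea can be sketched as below. The escape-sequence syntax here is invented for illustration, it is NOT the actual GIDEI/SerialKeys grammar, but the structure is similar: ordinary characters pass straight through as keystrokes, while escape sequences carry commands such as applying a modifier key.

```python
# Sketch of serial-stream-to-keystroke translation. The command syntax
# (ESC + modifier name + '.') is hypothetical, not real GIDEI syntax;
# only the overall structure mirrors the standard.

ESC = "\x1b"

def translate(serial_stream):
    """Turn an incoming serial character stream into keystroke events,
    each a (modifier, key) pair."""
    events, held, i = [], None, 0
    while i < len(serial_stream):
        ch = serial_stream[i]
        if ch == ESC:
            # Hypothetical escape sequence: hold a modifier for next key.
            end = serial_stream.index(".", i)
            held = serial_stream[i + 1:end]
            i = end + 1
        else:
            events.append((held, ch))
            held = None
            i += 1
    return events

# "a", then a shift-modified "b", then "c"
print(translate("a" + ESC + "shift.bc"))
```

An AT device (a sip-and-puff board, an eye-gaze keyboard) only has to emit this serial stream; the translator on the computer side turns it into the same events the standard keyboard would have produced.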

Research is currently needed to define the problem more fully on three levels:

  • the overall information and interface requirements of assistive technologies for different disability groups (generic requirements);
  • the specific functionality required for different major architectural types, as well as the potential for translation between these types; and
  • the development of a consolidated software interoperability model, and methods by which specific API information can be derived for each specific operating system or software platform, in order to meet the above-mentioned requirements.

Complete Interface Interchangeability

A major problem with current interface interoperability approaches is that they don't actually replace the interface with a more usable one. Usually they substitute a device which is operable by the user for an interface device which is not. Thus, users do not end up with an interface which is tuned to them, but rather with a device which is tuned to them and which then emulates or interprets an interface that was inoperable for them. For example, individuals who are blind do not end up operating a computer through a tactile and speech interface; they end up using a tactile and speech interface to interpret a visual presentation interface. Similarly, someone using a sip-and-puff "typing" technique to operate a pointing-based interface does not end up using a typing interface, but rather uses a typing interface to control a pointing device to operate the software. Someone using a speech interface may be using speech to control pointing or manipulation, or layered selection, rather than simply giving direct speech commands.

Efforts are therefore underway to try to separate the functionality of products from their interface [117] [118]. If and as this can be done, it will be possible to substitute a visual direct manipulation interface, for example, with an auditory command-based interface. In effect, the device can have its interface changed to the interface it would have had if all users were operating under the same constraints as the person with a disability.

One example of this has been the move from an "off-screen model" approach to graphic user interfaces (where an off-screen model of the visual interface is built and used for access) to a document object model, where the information is stored in a data structure, and then rendered visually or through auditory means [119]. Current implementations are still limited, however, in that the models are still often built upon an assumption of visual presentation. However, mobile computing is driving natural efforts to move away from presentation assumptions.

Another effort under way by a consortium of university, industry, consumer, and government participants is the development of a series of modality-independent interface substitution standards [120]. These standards seek to create mechanisms by which products ranging from public information systems to stereo systems to thermostats and coffee pots can present their interface requirements in an abstract fashion over standard wired and wireless networking technologies. Individuals would then be able to substitute interface devices, or entire interface systems, for the interfaces on the products. For example, one remote console standard would allow complete control of a device (including all of its controls and displays) from any remote location [121]. Because this description would be in pure, abstract terms, and could be expressed completely in words, the interface could be presented visually with icons, via speech, in Braille, or in any form that meets the needs of users with and without disabilities. Individuals driving cars could use speech; at the office they could use their personal computers; sitting on the couch they could use their PDAs or cell phones. Individuals with disabilities could use any of these devices, or special assistive technologies. The ability to derive a scheme for creating abstract interface descriptions, the limitations on the ability to construct efficient interfaces in the different modalities from a purely abstract description, and limitations on the abstract presentation of some types of information are key issues in this area.
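The separation just described can be sketched as an abstract, modality-independent interface description with independent renderings for different modalities. The description format below is invented for illustration; standards efforts such as the universal remote console work aim at this same separation of function from presentation.

```python
# Sketch of an abstract interface description for a thermostat, expressed
# purely in words and value ranges, plus two renderings of it. The format
# is invented for illustration.

thermostat_ui = [
    {"id": "temp", "kind": "range", "label": "Temperature",
     "min": 10, "max": 30, "value": 21},
    {"id": "mode", "kind": "choice", "label": "Mode",
     "options": ["heat", "cool", "off"], "value": "heat"},
]

def render_visual(ui):
    """Screen-oriented rendering: compact label/value widgets."""
    return [f"[{e['label']}: {e['value']}]" for e in ui]

def render_spoken(ui):
    """Speech-oriented rendering: full sentences for an audio dialogue."""
    return [f"{e['label']} is set to {e['value']}. Say a new value to change it."
            for e in ui]

print(render_visual(thermostat_ui))
print(render_spoken(thermostat_ui)[0])
```

A Braille, large-print, or single-switch rendering would be just another function over the same description, which is why the approach generalizes across disabilities.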

Some Disability Research Issues, Questions, and Challenges:

  • Research toward a "consolidated accessibility model/API"
  • Abstract user interfaces vs. transformation of modality-specific (e.g., visual) user interfaces
  • How to dynamically create user interfaces on the basis of abstract user interface descriptions and user preferences
  • How to design development tools for (manually and automatically) creating and editing abstract user interfaces
  • What are the limits of abstract user interfaces (e.g., a ticket sales machine showing the seating plan)?
  • How to blend abstract user interfaces with modality-specific elements without sacrificing accessibility
  • How to locate devices and services with a Universal Remote Console (e.g., by using GPS), and how to provide location information to different users
  • The business case for abstract user interfaces

4.4 Modality Translation and Interpretation

The ability to translate information from visual to auditory, auditory to visual, or both to tactile, is critical for the transmission of information to individuals with severe sensory impairments. The ability to translate information from one language to another (e.g., English to sign language) is important for individuals who are deaf and for whom English is a second language. Shifting from one language level to another is important for individuals with cognitive and some types of language disabilities. The translation from speech to text, sign language, and Braille is treated in the Telecommunication chapter, but also appears as an issue in the Information Technology area, particularly where the information presentation includes oral speech. As discussed in the telecommunications chapter [Chapter 17, Vanderheiden et al.], network-based modality translation services are becoming available, and research is continuing in speech and language recognition and translation [122]. Text summarization research [123] may also have implications for language-level shifting for individuals with cognitive disabilities, but it cannot be relied on to address this issue directly. Research specific to this area is needed.

Translation of visual information into speech or Braille is fairly straightforward when the information is clear, clean text, even if it is presented as bit-mapped graphics. The technology to recognize non-stylized text, commonly known as Optical Character Recognition (OCR), exists. Although it is not always incorporated into assistive technologies, both book reading technologies and screen readers have incorporated such features. Background patterns, highly stylized text, and noisy signals, however, can quickly degrade performance. Research has also been carried out to recognize stereotypical business graphics and re-create the underlying (Excel-like) data tables, which can then be accessed [124]. The trend toward stylized graphic presentations mixed with pictures, however, can create problems, and general graphics, animations, and dynamic visual presentations are all outside the range of any current assistive technologies. Web-based visual description services [125] and semantic markup of static visual presentations (e.g., alt-text and d-links) are current strategies being proposed and used, respectively.
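A small illustration of the semantic-markup strategy (a sketch only; the sample markup is invented): alt-text attached to an image is machine-readable and can be extracted for speech or Braille rendering, whereas an image with no alt attribute leaves a non-visual renderer nothing to present:

```python
# Sketch: extracting alt-text from markup so a screen reader or
# Braille device can render images in a non-visual modality.
from html.parser import HTMLParser

class AltTextExtractor(HTMLParser):
    """Collects the text a non-visual renderer could present for each image."""
    def __init__(self):
        super().__init__()
        self.renderings = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            alt = dict(attrs).get("alt")
            # With alt-text, the image is machine-perceivable;
            # without it, there is nothing to translate.
            self.renderings.append(alt if alt else "[inaccessible image]")

# Invented sample page: one marked-up image, one bare image.
page = ('<p>Quarterly results: '
        '<img src="chart.gif" alt="Sales rose 12 percent in Q3">'
        '<img src="logo.gif"></p>')

extractor = AltTextExtractor()
extractor.feed(page)
print(extractor.renderings)
```

The first image translates cleanly into words; the second demonstrates why automatic modality translation cannot recover meaning the author never encoded, which is where description services [125] come in.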

The major issues in this area center around the extent to which affordable automatic modality translation can be achieved, and the theoretical limits on modality translation. A "try harder" strategy, using local, network, and human-assisted modality and language-level translation is being explored to provide a flexible approach to the former question [126]. The extent to which information is presentation-specific, and therefore not translatable, is an important theoretical and practical question, given the evolving accessibility standards in the U.S. and abroad. The emerging Web Service infrastructure could provide a mainstream framework for the implementation of modality translation services.

Some Disability Research Issues, Questions, and Challenges:

  • How to implement an automatic "try harder" capability based on user preferences
  • Synchronization of different media streams as a result of different types of modality translation
  • How to dispatch requests for modality translation services based on current usage and user preferences
  • Sharing and concatenation of modality translation services in multiple-party sessions
  • Research on computer assisted human translation and interpretation
  • Research on fully automated translation and interpretation
  • Development of guidelines and training in functional description vs. audio description of visually presented information
  • Study of the effectiveness of different types of audio description for individuals who are blind and who have different cognitive skills and models (verbal vs. spatial, linear vs. hierarchical, etc.)

4.5 Modality-Specific Presentation

A closely related topic is access to information that is provided in modality-specific form.

Visualization - This is a technique used to present information in a massively parallel fashion, so that patterns can be discerned. Visualization is being used increasingly in education, science, and business. Some techniques, such as scatter plots, can be presented tactilely, but the equipment to do so is much more expensive than common monitors and printers, and not usually available in schools. Today's visualization techniques include three-dimensional projections of data and the ability to actually travel within the visualizations using immersive environments.

The challenge is not making the information also available in text, since the data used to generate the visualization are usually available. The challenge is developing comparable auditory or tactile mechanisms for massively parallel, wide-viewpoint presentation of information.

Augmented Reality - A similar technique superimposes visual enhancements on the physical world [127][28][37]. Until artificial touch that does not interfere with normal touch is possible, it is not clear how to achieve a similar effect in the haptic domain. The best prospect may be artificial auditory enhancement of real tactile exploration.

Modality-Specific Information - Information which is inherently audio (e.g., Itzhak Perlman's performance of a violin concerto) or visual (e.g., Picasso's Guernica) cannot be adequately expressed in words. Many other examples of modality-specific information can be expressed in words, but the words are generally an interpretation, rather than a translation, of the information. Most photojournalism, for example, would fall in this category.

Technology is allowing us to move ever further in these directions in education, science, business, and public communications. This is creating both technical and theoretical problems in the definition of accessibility, and in what can or should be presented. This dilemma is not limited to sensory disabilities. Any time information is translated from one language to another, some interpretation and loss of signal fidelity generally takes place. When information is translated from a more complex to a simpler form of language or presentation, this loss is even more marked. Technology is available to do some of these transformations. Even when done by humans, however, the question of accessibility vs. misrepresentation arises. This spans everything from science to contracts.

Some Disability Research Issues, Questions, and Challenges:

  • What kinds of information can be translated into other modalities without loss, and what kinds cannot?
  • How can visualization information be translated into tactile and audio-based information? What are the limits in terms of practicality and usefulness?
  • How can the concept of augmented reality be applied to non-visual domains?

4.6 New Metaphors for Interface Design

A common theme in the above topics is the separation of both the control and presentation aspects from the functional aspects of devices and services. Where the function or service itself is inherently modal (an auditory or visual performance), the "deliverable" may not be presentation-modality independent. However, for the majority of electronic devices and services today, the deliverable is not inherently modality-specific.

Interface Independence - A number of developments, including the V2 standards work [120], XML-based technologies, and server-side formatting of Web content to serve a variety of user devices from one source document, are heading in directions that separate the interface or presentation from the service or function. As this occurs, individuals needing different types of interfaces benefit.

Machine-Agent Operable - It has been posited that in the next generation of the Web, people will not interact with the Web directly, but with intelligent agent software, which will in turn interact with the content on the Web and in information systems generally. For this to happen, the Web and information systems in general will need to be machine-perceivable, machine-operable, and machine-understandable [128]. This is also an interesting definition of accessibility. In order for assistive technologies to translate information between modalities, the information must be machine-perceivable. In order for individuals to use their assistive technologies to operate information technology interfaces, it must be possible for those interfaces to be operated by software, either directly or indirectly. In order for interfaces and information to be translated and re-presented, whether in different modalities, different languages, or different language levels, the information must be machine-comprehensible.
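A toy sketch of what "machine-operable" means in practice (all names here are invented for illustration): an interface that advertises its operations as named, self-describing entries can be enumerated and driven by agent software or by an assistive technology directly, with no need to simulate pointer movements over pixels:

```python
# Sketch: a machine-perceivable and machine-operable interface.
# Operations are advertised by name with descriptions, so agent
# software or assistive technology can discover and invoke them.

class MediaPlayer:
    def __init__(self):
        self.state = "stopped"
        # Each operation carries a description an agent (or a speech
        # interface) can perceive and present in any modality.
        self.operations = {
            "play": ("Start playback", self._play),
            "stop": ("Stop playback", self._stop),
        }

    def _play(self):
        self.state = "playing"

    def _stop(self):
        self.state = "stopped"

def agent_invoke(device, operation_name):
    """An agent operates the device through its advertised operations."""
    description, action = device.operations[operation_name]
    action()
    return f"{description}: device is now {device.state}"

player = MediaPlayer()
print(agent_invoke(player, "play"))
```

The same enumerable structure that lets an intelligent agent operate the player also lets a screen reader list its controls, which is the convergence the text describes.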

Alternate Metaphors - Individuals, in both the disability and non-disability fields, have also been considering the variety of devices and sensory modalities being used to access information technologies and services, and wondering about alternatives to the WIMP (windows - icons - menus - pointing device) metaphor [129][130].

Again, these trends indicate a convergence between accessibility and the evolution of information technology in key areas. Closer examination of each, however, has consistently identified components that are critical to accessibility but less critical to mainstream use, and that are therefore not generally implemented naturally in mainstream systems. The increased reliance on machines and intelligent agent software, however, may provide the strongest natural pressure toward more accessible systems.

Some Disability Research Issues, Questions, and Challenges:

  • How can intelligent software agents be used effectively for people with different types of disabilities?
  • Research and advancement of machine interpretation of interfaces
  • Optimization of human and machine interpretable interfaces
  • Development of alternate interface metaphors
  • (see also under complete interface interchangeability)

4.7 Affordability and Lose-Ability

One limiting factor in the daily use of technologies by individuals with cognitive disabilities (besides the current limitations on their areas of utility) has been the high risk of damage, loss, or theft. Three factors, however, are combining to help address this issue. The first is the continual drop in the cost of devices with high functionality, both in terms of interface (visual, speech, touch) and capability (processing power, connectivity, audio and visual presentation and capture). Before long, the loss of a highly capable system will not be a financial crisis, and the theft value will drop as well.

The second factor is the rapid evolution of ubiquitous connectivity. This allows the devices themselves to act as thin clients, with all of the information and capability residing in the network. This not only lowers the cost of the devices but makes them instantly replaceable. A new thin client can instantly substitute for one that has been lost or damaged, without loss of information, functionality, etc.
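The replaceability property can be sketched in a few lines (an illustrative toy, with a dictionary standing in for network-resident storage; the names are invented): because the client holds no state of its own, a replacement device recovers everything by user identity alone.

```python
# Sketch: thin-client replaceability. All user state lives in the
# network; the device itself is stateless and interchangeable.

SERVER_STATE = {}   # stands in for network-resident storage

class ThinClient:
    def __init__(self, user_id):
        self.user_id = user_id   # the only thing the device needs

    def save(self, key, value):
        SERVER_STATE.setdefault(self.user_id, {})[key] = value

    def load(self, key):
        return SERVER_STATE[self.user_id][key]

old_device = ThinClient("user-42")
old_device.save("notes", "take medication at 3 pm")

# The device is lost; a brand-new client substitutes instantly,
# with no loss of information or functionality.
new_device = ThinClient("user-42")
print(new_device.load("notes"))   # take medication at 3 pm
```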

Finally, advances in wearable and clothing-based systems can help usher in technologies which are not easily left behind or taken. Nothing except implants will prevent loss of the device if the individual wants to be rid of it (e.g., to eliminate tracking or monitoring), but loss due to cognitive factors can be addressed. An increase in freedom for individuals, which may result from confidence that they are reachable or can access assistance (from the device, the network, or other people), may have significant impact.

Some Disability Research Issues, Questions, and Challenges:

  • Use of low-cost, general-purpose products to provide accessibility solutions (e.g. programmable hand-held computers)
  • Different form factors and their impact on the likelihood of losing a device

4.8 Technology-Averse Users

Another important area being addressed by the advancing technologies is technology "friendliness." There is a surprisingly low adoption rate of assistive technologies by individuals who could use them [22]. In some cases, this is due to a lack of functionality. In others, it is due to the complexity or foreignness of the interface. This can be a particular problem for individuals who are older and are acquiring disabilities. Trends toward graphic and verbal access to information, as well as emerging artificial agents which can listen and understand, rather than requiring the user to listen to options, may have significant impact.

Some Disability Research Issues, Questions, and Challenges:

  • Better understanding of the factors behind technology aversion
  • Strategies to help novices better understand the use of technology to improve the quality of their lives
  • Adoption rate studies
  • Natural language interface development
  • Relative effectiveness of natural language vs. structured language vs. direct manipulation interfaces
  • Same question, but focusing on people with different cognitive disabilities

4.9 Security vs. Accessibility

A major barrier to accessibility is emerging in the area of security. It first appeared in work station security and assistive technologies. In order for software to be secure, any access to the software and its display by other software was blocked. This often blocked access by assistive technologies.

Another area where this appeared was in a different kind of security, related to intellectual property and copyright. In order for some document creators to maintain control of their documents, books, etc. (to prevent copying and distribution), the items are both copy-protected and locked so that the text is not machine-readable except by the specifically-designed readers [75]. Again, this would block access by screen readers and other assistive technologies.

Building text enlargement and auditory reading capabilities into the native readers might solve this problem, except that another barrier, regarding digital rights and licensing, arose. Publishers wanting to sell print distribution rights separately from audio rights asked that a switch be built into native document readers that would prevent the information from being presented auditorily (if the device was a visual book presentation device). The combination of blocking access to the text by devices other than the native readers, and insisting that the native readers be prevented from reading the text aloud, cuts off the only two viable approaches for access to standard e-books by individuals who cannot read the text visually. The existence of a much more expensive audio version of the book does not provide equivalent access:

  • where some members of the family read visually and some auditorily;
  • where the book is provided to a class or employees in only one form;
  • where the auditory version is significantly more expensive;
  • where the auditory version is not released or not released until much later.

This latter item may not be a technology issue, but rather a policy issue, to be dealt with legislatively. In addition, it should be noted that providing built-in audio and visual presentation modes would not address the problem of individuals who are deaf-blind. To address this need, some mechanism for accessing the text in a form that can be exported to a Braille display will be required.

Thus the quickly evolving technical abilities of media pirates, and the resulting preventive actions of publishers, are increasingly creating accessibility problems. One recently proposed approach involves creating a "trust" relationship between assistive technologies and electronic publications [131].

Digital Signatures - Another related area deals with digital signatures. Here the problem is different and more complex. Briefly, the problem centers on the legal ramifications of a person signing a document that was altered in its presentation, so that they actually signed a document different from what was presented to them [132]. In an effort to strictly control presentation before and after signature, the ability to capture and re-present the information is restricted, with accessibility implications.

Digital ID - Digital identification methods have created two types of accessibility issues. First, some of the methods or devices require procedures that are difficult or impossible for many individuals to carry out. For security reasons, alternative approaches are not allowed, thus creating an accessibility barrier. In some cases these are measures taken for copyright protection and involve the use of a physical device or the need to use particular media rather than alternate ways of storing and transporting the same bits. In other cases, the security systems are for restricting access to information, funds, or transactions. In the latter case, biometric identification methods are sometimes used, such as retinal scans, fingerprints, voiceprints, etc. Individuals who do not have retinas, speech/voice, or hands/fingerprints are presented with accessibility barriers.

Research is needed to identify methods which can both ensure the protection of intellectual property, or restrict access as appropriate, and yet still allow access by individuals with a full range of disabilities. Even if legislation or regulation is contemplated to force adoption of a solution, an effective means to prevent the creation of a large copyright or security loophole must be developed.

Some Disability Research Issues, Questions, and Challenges:

  • Secure disability accessibility Digital Rights Management (DRM) technologies
  • Accessible digital signatures technologies with certifiable output
  • Accessible biologically universal ID techniques
  • Application of a variety of universal design practices to DRM-critical applications
  • Establishing a "trust relationship" between DRM-critical applications and assistive technology

4.10 Commercial Practicality

A common theme through all of the above areas must be commercial practicality. A theme heard from industry and non-industry pragmatists alike during the state-of-the-science research process was the need not only for theoretical answers to these issues, but for solution strategies that can be used effectively by companies today and tomorrow, given both legacy and evolving technologies [2][10][7][5][3]. Concepts which cannot be implemented in a fashion that allows companies to compete and make a profit, both nationally and internationally, cannot and will not be practiced by companies that intend to be around for very long [133].

Regulation can be a mechanism to maintain competition by ensuring that everyone is operating under the same rules and constraints. This level playing field exists only if such regulations are effectively and uniformly enforced. It is also important to keep in mind the international competition aspects of domestic policy recommendations. International harmonization, pull vs. push strategies, and the size of the U.S. market can all help in this regard. The convergent trends between natural IT evolution and disability access needs, as well as research into effective and low-cost solutions which increase usability for the general population, are also important factors.

Some Disability Research Issues, Questions, and Challenges:

  • How to build accessibility considerations into mainstream development processes and tools
  • How to specify accessibility-related requirements as part of a product's set of requirements
  • How to educate designers on accessibility issues, and provide constructive assistance in building accessible products
  • How to (automatically and manually) test accessibility in the various stages of the product life-cycle
  • Business case for accessible products

5. Conclusion

Access to information technologies is a complex and ever-changing area. Like electricity, electronic interfaces and network technologies will soon be everywhere, in everything, in every activity. Advances in the area are creating new problems, but even more opportunities. Research is needed at the fundamentals, infrastructure, standards, models, application, and tools levels. If the sheer complexity and pace of change do not overwhelm access research efforts, access to tomorrow's technologies and everyday environments can exceed anything possible today.


This work was supported by the National Institute on Disability and Rehabilitation Research, U.S. Department of Education, under grant #H133E990006. The opinions contained in this manuscript are those of the authors and do not necessarily reflect those of the Dept. of Education.


[1]. ACM1: Beyond Cyberspace - A Journey of Many Directions. Retrieved May 31, 2002 from

[2]. LaPlant, W. P., Jr. (2001). A Proposal for a Research Agenda in Computer Human Interaction. Position Paper for the CHI 2001 Workshop Anyone. Anywhere. State of the Science Exchange on Modality-Independent Interaction.

[3]. Goettemoeller, J. L. (2001). Technology Designers and Rehabilitation Professionals Working Together to Improve the Lives of People with Disabilities. Position Paper for the CHI 2001 Workshop Anyone. Anywhere. State of the Science Exchange on Modality-Independent Interaction.

[4]. Hanson, V. L. (2001). Web Accessibility. Position Paper for the CHI 2001 Workshop Anyone. Anywhere. State of the Science Exchange on Modality-Independent Interaction.

[5]. Kamal, S. (2001). Relationships on the Move: Communication over multiple devices and modalities. Position Paper for the CHI 2001 Workshop Anyone. Anywhere. State of the Science Exchange on Modality-Independent Interaction.

[6]. Newell, A. F., Gregor, P., & Alm, N. (2001). Realising Our Potential in Ordinary and Extra-Ordinary Human Computer Interaction. Position Paper for the CHI 2001 Workshop Anyone. Anywhere. State of the Science Exchange on Modality-Independent Interaction.

[7]. Porch, W., & Richards, J. (2001). Approaching Software Developers Regarding Accessibility. Position Paper for the CHI 2001 Workshop Anyone. Anywhere. State of the Science Exchange on Modality-Independent Interaction.

[8]. Vinge, V. (2001). "Imagining the Future." Presentation at the CHI 2001 Workshop Anyone. Anywhere. State of the Science Exchange on Modality-Independent Interaction, April 1-2, 2001.

[9]. Kogan, S. L. (2001). Human-Computer Interaction and Older Adults. Position Paper for the CHI 2001 Workshop Anyone. Anywhere. State of the Science Exchange on Modality-Independent Interaction.

[10]. Nilsson, V. (2001). Ability by Mobility. Position Paper for the CHI 2001 Workshop Anyone. Anywhere. State of the Science Exchange on Modality-Independent Interaction.

[11]. Vanderheiden, G. C. (2001). Basic Principles and Strategies for Access to Electronic Products and Documents. Resource document for the CHI 2001 Workshop Anyone. Anywhere. State of the Science Exchange on Modality-Independent Interaction. Retrieved May 16, 2002 from

[12]. Vanderheiden, G., & Zimmermann, G. (2001). "Review of Recent Developments." Presentation at the CHI 2001 Workshop Anyone. Anywhere. State of the Science Exchange on Modality-Independent Interaction, April 1-2, 2001.

[13]. Access Board (May 12, 1999). Electronic and Information Technology Access Advisory Committee Final Report. Retrieved May 16, 2002 from

[14]. Section 508 of the Rehabilitation Act (29 U.S.C. 794d). Retrieved May 16, 2002 from

[15]. Swanwick, M. (1999). Statements from Sci-Fi @CHI: Science-Fiction Authors Predict Future User Interfaces, CHI 1999 Plenary. Retrieved May 16, 2002 from

[16]. Vanderheiden, G. (2001). Fundamentals and priorities for design of information and telecommunication technologies. In W. F. E. Preiser & E. Ostroff (Eds.), Universal design handbook (pp. 65.3-65.15). New York: McGraw Hill.

[17]. Gilliom, J. (U.S. Dept. of Defense). Personal communication.

[18]. Vanderheiden, G. C., Law, C. & Kelso, D. (1998). Universal remote console communication protocol (URCC). Proceedings of the 1998 TIDE Conference.

[19]. National Task Force on Technology and Disability (Report in preparation)

[20]. Chisholm, W. (2001). Making the Web Accessible: Fundamentals and Applications. Presentation at the CHI 2001 Workshop Anyone. Anywhere. State of the Science Exchange on Modality-Independent Interaction, April 1-2, 2001.

[21]. Vanderheiden, G. C. (1998). Cross-modal access to current and next-generation internet - fundamental and advanced topics in internet accessibility. Technology and Disability, 8(3), 115-126.

[22]. Gitlin, L. N. (1995). Why older people accept or reject assistive technology. Generations, 19(1).

[23]. Czaja, Sara J. (1997). Computer technology and the older adult. Handbook of Human-Computer Interaction. Amsterdam: Elsevier Science Publishers.

[24]. C. Stephanidis (Ed.), User interfaces for all concepts, methods, and tools (pp. 115-133). Mahwah, NJ: Lawrence Erlbaum Associates, Inc.

[25]. Electronic Ink Retrieved May 17, 2002 from

[26]. Pearson, H. (2002, May 2) LCD paint licked. Nature. Retrieved May 17, 2002 from

[27]. DeVaul, R. W., Clarkson, B., & Pentland, A. (2002). The Memory Glasses: Towards a wearable, context aware, situation-appropriate reminder System. MIT Media Laboratory.
Retrieved May 17, 2002 from

[28]. J. C. Spohrer. (1999) Information in places. Pervasive Computing (38)4. Retrieved May 17, 2002 from

[29]. Viirre, E., Pryor, H., Nagata, S., & Furness, T. A. (1998). The Virtual Retinal Display: A New Technology for Virtual Reality and Augmented Vision in Medicine. In D. Stredney, S.J. Weghorst (Ed.) Proceedings of Medicine Meets Virtual Reality, pp. 252-257.

[30]. Viirre, E., Johnston, R., Pryor, H. and Nagata, S. (1997). Laser Safety Analysis of a Retinal Scanning Display System. Journal of Laser Applications, 9(4), 253-260. Retrieved May 17, 2002 from

[31]. Engdahl, T. (1998). 3D glasses and other 3D display devices. Retrieved May 29, 2002 from

[32]. D4D - Glasses with 3D display. [Originally] Retrieved May 29, 2002 from [UPDATE: November 13, 2002 available at] [UPDATE: June 29, 2009 available at]

[33]. Virtual Worlds. Retrieved June 6, 2002 from

[34]. Vanderheiden, G. C., & Mendenhall, J. (1993). Analyzing virtual reality applications as they relate to disability access. Virtual Reality and Persons with Disabilities Proceedings. [Originally] Retrieved May 17, 2002 from [UPDATE: November 13, 2002 available at]

[35]. Czerwinski, M. (2001) "Advanced User Interfaces at Microsoft Research: Intelligent Information Access and Animated 3D Information Visualization" Presentation at the CHI 2001 Workshop Anyone. Anywhere. State of the Science Exchange on Modality-Independent Interaction, April 1 - 2, 2001

[36]. Tan, D.S., Robertson, G.G. & Czerwinski, M. (2001). Exploring 3D Navigation: Combining Speed-Coupled Flying with Orbiting. Proceedings of CHI 2001, Human Factors in Computing Systems, 418-424.

[37]. IBM Almaden Design lab Retrieved May 17, 2002 from

[38]. AT&T Natural Voices Retrieved May 21, 2002 from

[39]. Text-To-Audio [Originally] Retrieved May 17, 2002 from [UPDATE: November 13, 2002 available at]

[40]. $2-3 Digitized Speech and Speech Recognition Chips - Sensory, Inc. Retrieved May 21, 2002 from

[41]. Text to Speech Chip, Winbond. Retrieved May 21, 2002

[42]. Hallmark Greeting Card No. HAB 621-0.

[43]. Earcons and Multimodal Interactions Group - University of Toronto. Retrieved May 21, 2002 from

[44]. Vanderheiden, G. C. (1997). Cross-disability access to touch screen kiosks and ATMs. Proceedings of the Seventh International Conference on Human-Computer Interaction (HCI International '97), San Francisco, California, USA, 21A, 417-420.

[45]. Scadden, L. (2000). Personal Communication.

[46]. Srinivasan, M. (2001). "Haptics: Hands, Brains, and Virtual Environments." Presentation at the CHI 2001 Workshop Anyone. Anywhere. State of the Science Exchange on Modality-Independent Interaction, April 1-2, 2001.

[47]. Tactile Displays Realized Using MEMS Actuator Arrays - MIT Touch Lab. Retrieved May 21, 2002 from

[48]. Optical to Tactile Converter (Optacon) Research. Retrieved May 21, 2002 from

[49]. Kaczmarek, K. A., Webster, J. G., Bach-y-Rita, P., & Tompkins, W. J. (1991). Electrotactile and vibrotactile displays for sensory substitution systems. IEEE Trans. Biomed. Eng., 38, 1-16.

[50]. Fletcher, R. (1996). Force Transduction Materials for Human-Technology Interfaces, IBM Systems Journal, 35(3&4). Retrieved May 17, 2002 from

[51]. Barry, W. A., Gardner, J. A., & Raman, T. V. (1994). Accessibility to scientific information by the blind: Dotsplus and ASTER could make it easy. Retrieved May 21, 2002 from

[52]. Abrahamson, R. (2001, April 1). Sniff-company DigiScents is a scratch. The Industry Standard. Retrieved May 21, 2002 from

[53]. Padmanabhan, M. & Picheny, M. (2000). Towards superhuman speech recognition. ASR workshop.

[54]. University of California - Santa Cruz. Perceptual Science Laboratory. Retrieved May 21, 2002 from

[55]. Braffort, A., Gherbi, R., Gibet, S., Richardson, J., & Teil, D. (Eds.). Proceedings of International Gesture Workshop, GW'99, Gif-sur-Yvette, France, March 17-19, 1999

[56]. University of New South Wales. Gesture recognition research resources. Retrieved May 21, 2002 from

[57]. Kahn, J. M., Katz, R. H., & Pister, K. S. J. (1999). Next century challenges: Mobile networking for smart dust. ACM/IEEE Intl. Conf. on Mobile Computing and Networking (MobiCom 99), Seattle, WA, August 17-19, 1999. Retrieved May 17, 2002 from and

[58]. VPen digital pen. Retrieved May 17, 2002 from

[59]. Anoto digital paper. Retrieved May 17, 2002 from

[60]. Twiddler One-Hand Keyboard. Retrieved May 17, 2002 from

[61]. Kirschenbaum, A., Friedman, Z., & Melnik, A. (1986). Performance of Disabled People on a Chordic Keyboard. Human Factors, 28(2). 187-194.

[62]. Fingerspelling Glove. Retrieved May 17, 2002 from

[63]. Handspring Treo. Retrieved May 17, 2002 from Nokia 5510. Retrieved May 17, 2002 from

[64]. LC Technologies, Inc. Retrieved May 17, 2002 from

[65]. Orbitouch Keybowl. Retrieved May 17, 2002 from

[66]. Senseboard Virtual Keyboard. Retrieved May 17, 2002 from

[67]. Virtual Keyboard. Retrieved May 17, 2002 from

[68]. EEG activated switch. Retrieved May 17, 2002 from

[69]. Srinivasan, M. (2001). "Brain-Machine Interfaces: Progress and Prospects." Presentation at the CHI 2001 Workshop Anyone. Anywhere. State of the Science Exchange on Modality-Independent Interaction, April 1-2, 2001.

[70]. Serruya, M. D., Hatsopoulos, N. G., Paninski, L., Fellows, M. R., & Donoghue, J. P. (2002). Brain-machine interface: Instant neural control of a movement signal. Nature, 416, 141-142.

[71]. Adam, D. (2000, November 16). Monkey see, robot do. Nature. Retrieved May 21, 2002 from

[72]. Gold, E. (2001, November). Mind control. Brown Alumni Magazine Online. Retrieved May 29, 2002 from

[73]. Berners-Lee, T. (2000). Weaving the Web. San Francisco: HarperCollins.

[74]. Kurzweil artificial intelligence.

[75]. Kerscher, G., Fruchterman J (2002) The Soundproof Book: Exploration of Rights conflict and Access to Commercial EBooks for People with Disabilities [Originally] Retrieved June 6, 2002 from [UPDATE: November 13, 2002 available at

[76]. Kerscher, G. (2000) POSITION PAPER: DRM For Persons Who are Blind AND/OR Print Disabled. Retrieved June 6, 2002 from

[77]. W3C Aural Style Sheets Specification. Retrieved May 21, 2002 from

[78]. Text-To-Audio. Retrieved May 21, 2002 from

[79]. Scalable Vector Graphics (SVG) 1.0 Specification. Retrieved May 21, 2002 from

[80]. Kurzweil, R. (1999, Fall). The Coming Merging of Mind and Machine. Scientific American Presents: Your Bionic Future.

[81]. Kurzweil, R. (2001, August 29). The human-machine merger: Why we will spend most of our time in virtual reality in the twenty-first century. Retrieved May 21, 2002 from

[82]. Berners-Lee, T. (2001). The Semantic Web. Presentation at the CHI 2001 Workshop Anyone. Anywhere. State of the Science Exchange on Modality-Independent Interaction, April 1-2, 2001.

[83]. Berners-Lee, T., Hendler, J., & Lassila, O. (2001). The semantic web: A new form of web content that is meaningful to computers will unleash a revolution of new possibilities. Scientific American, May 17, 2001. Retrieved May 21, 2002 from (Note: Link updated April 9, 2009:

[84]. (2002, March 14). New BellSouth® FastAccess® home networking service enables high-speed Internet access on multiple computers through a single DSL connection. Retrieved May 22, 2002 from

[85]. Yahoo! Finance. (2002, March 4). Earthlink unveils home networking service for its high-speed cable subscribers. [Originally] Retrieved May 22, 2002 from [UPDATE: On November 13, 2002, the reference was not locatable.]

[86]. Weiser, M. (1991, September). The Computer for the Twenty-First Century. Scientific American, pp. 94-104. Retrieved May 22, 2002 from

[87]. NIST Smart Space Laboratory. Retrieved May 24, 2002 from

[88]. Smart Spaces - MIT. Retrieved May 24, 2002 from

[89]. Microsoft Speech Technology. Retrieved May 22, 2002 from

[90]. Trace R&D Center EZ Access Techniques. Retrieved May 22, 2002 from

[91]. BrailleNote GPS. Retrieved May 22, 2002 from

[92]. Telesensory MiniViewer. Retrieved May 22, 2002 from

[93]. VisAble Video Telescope. Retrieved May 22, 2002 from

[94]. Vanderheiden, G. C., Law, C. M., & Kelso, D. (1999). EZ access interface techniques for anytime anywhere anyone interfaces. Proceedings of the Association for Computing Machinery Conference on Human Factors in Computing Systems (ACM CHI 99), pp. 3-4.

[95]. UMouse scrolling wheel mouse. Retrieved May 30, 2002 from

[96]. Cell Phone Reference Design. Retrieved May 30, 2002

[97]. Fox, B. (2002, January). Electronic shelf labels draw new interest as prices fall. Stores. Retrieved May 23, 2002 from

[98]. Sandhu, J., & Leiber, A. Universal Design in Automobile Design: User-Centered Deployment and Integration of Smart Card Technology - The DISTINCT Project. In W. F. E. Preiser & E. Ostroff (Eds.), Universal Design Handbook. Mahwah, NJ: Lawrence Erlbaum Associates, Inc.

[99]. Reefware. [Originally] Retrieved May 17, 2002 from [UPDATE: On November 13, 2002 the resource was not locatable.]

[100]. V2: Information Technology Access Interfaces. Retrieved May 23, 2002 from

[101]. Vanderheiden, G. C. (2000). Fundamental principles and priority setting for universal usability. Conference on Universal Usability Proceedings, pp. 32-38.

[102]. Vanderheiden, G. (1981, January). Practical Applications of Microcomputers to Aid the Handicapped. Computer, IEEE Computer Society.

[103]. Vanderheiden, G. C. (1981). Computers can play a dual role for disabled individuals. BYTE, 7(9), 136-162.

[104]. Bower, R., Kaull, J., Sheikh, N., & Vanderheiden, G. C. (Eds.). (1997). Trace Resourcebook - 1998-99 Edition. Madison, WI: Trace R&D Center.

[105]. Boyd, L. H., Boyd, W. L., & Vanderheiden, G. C. (1990, December). The graphical user interface: crisis, danger, and opportunity. Journal of Visual Impairment and Blindness, 84, 496-502.

[106]. Korn, P. (2002, March). Unix accessibility overview: Evolution of screen access approaches. Unix Accessibility overview session at the Seventeenth Annual International Conference on Technology and Persons with Disabilities, Los Angeles, CA.

[107]. Microsoft Active Accessibility 2.0. Retrieved May 24, 2002 from

[108]. Java Accessibility API. Retrieved May 24, 2002 from

[109]. Microsoft Accessibility. Retrieved May 23, 2002 from

[110]. Apple Accessibility. Retrieved May 23, 2002 from

[111]. Sun Microsystems Accessibility Program. Retrieved May 23, 2002 from

[112]. Mozilla Accessibility Project. Retrieved May 23, 2002 from

[113]. Linux Accessibility Resource Site (LARS). Retrieved May 23, 2002 from

[114]. Trace R&D Center. (1994). General Input Device Emulating Interface (GIDEI) Proposal. Retrieved May 22, 2002 from

[115]. Macintosh Serialkeys. Retrieved May 24, 2002 from

[116]. USB. (1997, June 3). An Analysis of Wireless Device Implementations on USB. Retrieved May 22, 2002 from

[117]. XForms homepage. Retrieved May 22, 2002 from

[118]. Myers, B. A. (2001). Using hand-held devices and PCs together. Communications of the ACM, 44(11), 34 - 41.

[119]. W3C Document Object Model (DOM) information page. Retrieved May 22, 2002 from

[120]. V2: Information Technology Access Interfaces. Retrieved May 23, 2002 from

[121]. Zimmermann, G., Vanderheiden, G., & Gilman, A. (2002). Prototype Implementations for a Universal Remote Console Specification. Proceedings of CHI 2002 Conference on Human Factors in Computing Systems, Minneapolis, Minnesota, 510-511.

[122]. Zimmermann, G., & Vanderheiden, G. (2001, June). Modality Translation Services on Demand - Making the World More Accessible for All. In Simpson, R. (Ed.), Proceedings of the RESNA 2001 Annual Conference (pp. 100-102). Reno, Nevada: RESNA Press.

[123]. Kramer, P. (n.d.). Getting the answers: IBM mines meetings for meaning. Retrieved May 23, 2002 from

[124]. ScanSoft OmniPage. Retrieved May 28, 2002 from

[125]. See4Me. Retrieved May 28, 2002 from

[126]. Telecommunications Access (Chapter 17 in this book.)

[127]. Regenbrecht, H., & Wagner, M. (2002). Interaction in a Collaborative Augmented Reality Environment. CHI 2002 Conference on Human Factors in Computing Systems, proceedings pp. 504-505. April 20-25, 2002, Minneapolis, Minnesota.

[128]. Vanderheiden, G. (2002). Designing Interfaces for Diverse Audiences. In J. Jacko & A. Sears (Eds.), Human-Computer Interaction Handbook. Mahwah, NJ: Lawrence Erlbaum Associates, Inc.

[129]. Schnapper-Casteras, J. P. (2002). Personal communication.

[130]. Beaudouin-Lafon, M. (2000). Instrumental interaction: an interaction model for designing post-WIMP user interfaces. Proceedings of CHI 2000 Conference on Human Factors in Computing Systems, 446-453.

[131]. McIntire, M. B. (2002). Speech given at the Accessibility Forum, Gallaudet University, June 3-5, 2002.

[132]. Maxwell, C. (Ed.), on behalf of the Internet Society. (2000, July 15). Global Trends that will Impact Universal Access to Information Resources. Report submitted to UNESCO. Retrieved June 6, 2002 from

[133]. Souza, R., Manning, H., & Dorsey, M. (2001, December). Design Accessible Sites Now. Forrester TechStrategy Report. Retrieved May 29, 2002 from,1338,11431,00.html.
