State of the Science on The Cloud, Accessibility, and the Future

  • Title: State of the Science on The Cloud, Accessibility, and the Future
  • Publication Type: Journal Article
  • Authors: Chourasia, A., Nordstrom, D., & Vanderheiden, G.

Full Text

1 Background

The 2012 IT-RERC State of the Science conference on The Cloud, Accessibility, and the Future was held online on September 21-22, 2012.

Members from academia as well as from industry and the disability community participated in the conference, which was hosted by the Trace R&D Center at the University of Wisconsin-Madison.

Invited participants who took part in the conference included (listed alphabetically by last name):

  • Denis Anson, Misericordia University
  • Peter Blanck, Syracuse University
  • Cathy Bodine, University of Colorado, Denver
  • Vint Cerf, Google
  • Colin Clark, OCAD University, Toronto
  • William "Bill" T. Coleman III, Coleman Institute for Cognitive Disabilities
  • Tom Corfman, National Institute on Disability and Rehabilitation Research
  • Ahmed Eisa, GDCO- Sudan
  • George Kerscher, DAISY Consortium
  • Sharon Laskowski, National Institute of Standards and Technology
  • Clayton Lewis, National Institute on Disability and Rehabilitation Research
  • Greg Lowney
  • Kasper Galschiot Markus, Raising the Floor International
  • Jose Martinez, Technosite - ONCE Foundation
  • Jamal Mazrui, Federal Communications Commission
  • Bilge Mutlu, University of Wisconsin–Madison
  • Annuska Perkins, Microsoft
  • Rich Schwerdtfeger, IBM
  • Sue Swenson, Office of Special Education and Rehabilitative Services
  • Jim Tobias, Inclusive Technologies
  • Jutta Treviranus, OCAD University
  • Gregg Vanderheiden, Trace R&D Center, University of Wisconsin - Madison
  • Gottfried Zimmermann, Media University - Stuttgart

The desired outcomes of the conference were a better understanding of current and emerging issues around digital inclusion, a clearer picture of where things are going, and strategies for effectively addressing inclusion in a rapidly changing world. A second outcome was to provide guidance for those currently working on the construction of the Global Public Inclusive Infrastructure (GPII).

Topics of discussion for the conference included:

  • The threefold emerging threat to ICT inclusion
  • The approaching ICT/web/cloud inflection point - and how it changes existing rules
  • Cloud-based auto-personalization as an approach to inclusion (concept, status of implementation, plans)
  • Security and privacy, risks and options, related to personalization and cloud-based solutions
  • Non-technical issues and realities in national and global deployment and use of technical solutions
  • Providing the tools necessary for industry to build accessibility into next and next-next generation everyday products   
  • Globally realistic/affordable approaches to scaling, sustainability, and propagation
  • Impact of digital inclusion on national prosperity, education, and literacy
  • Need for cloud based solutions in government services, health, voting, education, etc., and the demands these areas may impose on solutions


2 Presentations

The presentations for the conference were pre-recorded using either Adobe Connect or ScreenFlow and posted on YouTube.

Conference participants watched the presentations in the weeks prior to the conference dates, so that the conference time itself could be devoted to discussion of the issues that were raised. The virtual conference meeting was conducted and recorded using Adobe Connect, and recordings of both days (Thursday and Friday) are posted on YouTube.

Transcripts and summaries of the presentations are also available.

3 Presentation Summaries

3.1 Bill Coleman – An Inflection Point

If we are able to develop appropriate software standards and systems, the combination of the cloud and the web will, over the next 25 years, have a monumental impact on the quality of life of people with cognitive disabilities, becoming a prosthesis for life and enabling users to contribute to society in almost any way. The combination of the cloud and the web is part of a major inflection point, or a rapid rise in productivity, driven by the ability to increase the quantity and quality of communications and to increase the rate of knowledge creation.

Previous inflection points include the invention of language based on the spoken word, and the invention of the printing press.  A third inflection point was the invention of the Web. Studies on human productivity have found that the industrial revolution raised human productivity by an order of magnitude and that humanity will advance by another order of magnitude due to the cloud and the Web over the next 100 years.

In the next decade we will see the Cloud becoming a platform upon which we will live – at least in terms of information and communication. The Web is, on the other hand, a disruptive innovation – both unique and at the leading edge of innovation.

A disruptive innovation is an innovation that helps create a new market and value network, and eventually goes on to disrupt an existing market and value network (over a few years or decades), displacing an earlier technology. Each innovation benefits its consumers an order of magnitude more than its creators – which is important, as Peter Drucker stated in the 1960s, because this is what it takes to justify switching technologies, or getting rid of the horse and buggy in exchange for roads and automobiles, for example.

In the Information Age, starting from 1960, we have had a disruptive innovation every 10 years. These cycles of innovation started with the invention of semiconductors, followed by the computer and the network in the following two decades. These cycles are typically 30 years long and include the phases of invention (boom and bust); buildout and consolidation; and commoditization.

The Information Age includes six cycles of innovation. The first three cycles (1960-2010) are the three IT platforms: semiconductors, the computer, and the network. The fourth cycle (1990-2020) is the emergence of the Cloud Platform, which is also the inflection point. The Cloud has emerged as a “Utility Platform,” which is itself disintermediated by the emergence of the Web Platform. At the present time we are experiencing the fourth through the sixth cycles (1990-2040), or the three mega Web platform cycles. These cycles enable ‘free reach’, straight-through processing, and transparency.

Free reach is the idea that you can reach as many users as there are on the Internet without incurring extra cost. Free reach has now matured to the point where massive investments are going into it, which is helping build the Cloud and the Web. We are beginning the last two phases: straight-through processing and transparency. Straight-through processing means anything can be connected end-to-end in this system. For example, a shopper can be connected with a clothing designer who is connected to production facilities; the designer and user can design clothes together in real time and put their work into immediate production. Transparency is the stage where we make sense of the massive amounts of data being generated on the Web.

The evolution of the Cloud can be traced from Cloud 1.0 (2000-2010), involving discrete services, to Cloud 2.0 (2010-2020), a platform, and on toward Cloud 3.0 (2020-2030), a platform as well as a utility. In the current decade the standard models need to be built and made available to everybody. Secure, always-available data storage is an example of a cloud service that is currently evolving.

Some of the challenges in developing a Web Presence for individuals with cognitive disabilities include the development of appropriate standards and architectures, identity and privacy challenges, and the usability of the Cloud. By 2040, it is expected that our virtual and physical presences will converge, that the “pull” economy will move the majority of the world into the middle class, and that international conflict will decline. By the end of the century, we will be in the dematerialization age, where biology and nanotechnology are fused and the quality of life is dramatically enhanced for everybody.

3.2 Vint Cerf – Cloudy Thoughts

Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. Cloud computing can be considered as an extreme form of time-sharing. However, it is different from basic time-sharing in terms of the computing resources, users, data replication and redundancy, and communication capacity.

The term cloud computing comes from the early days of the Internet, when clouds were drawn in network diagrams to represent gateways; the cloud has since come to represent a black-box notion of computing, in which users do not care where their data is stored or acted upon.

An important feature of cloud computing is that as users start putting information into the cloud, a common data environment is created, allowing multiple users to live and interact with each other over the cloud. In addition to users, the cloud also allows various devices to be interfaced together, giving rise to the so-called “Internet of Things.” Processes can also be interfaced with each other in the cloud. These interactions can be tailored to be selective and have tremendous potential benefits for individuals with disabilities.

Configuration settings for various users and interfaces could be present in the cloud, and the interfaces could be modified automatically depending upon user and context of use. Delivery and content transformation can occur dynamically in the cloud and does not have to depend on the terminal devices. Devices can communicate with each other, and, depending on their capabilities, can be modified to produce the desired accommodations for users.

Further research is required to fully realize the potential of the cloud to address accessibility. Which architectures yield maximum accessibility is currently unknown. Instrumented user interfaces that help developers understand user behavior and generate concrete data are needed. Applications of computing systems that are part of the same environment as the user need to be studied. Systems for generating and managing user preferences are also necessary. Currently, most cloud interaction is between devices and the cloud. Inter-cloud interaction methods and its implications should be investigated.

3.3 Jutta Treviranus  - Future Proofing through Inclusion

As societies undergo technological, social, and economic transformation, we are best able to meet these changes through resiliency, responsiveness, and innovation. To provide these attributes in our workforce we need diverse perspectives, which are a product of our educational system. To achieve the greatest breadth and depth of diversity, we must ensure that education helps every learner reach his or her own potential.

We need to redefine the goals of our current education system. The notions of standard knowledge and the standard learner are no longer relevant.  What is needed now is process oriented, rather than product oriented, learning. We need adaptive, rather than static knowledge. And we need diverse perspectives rather than absolute classifications.

In the knowledge economy, our most valuable resource will be diverse, critical, creative thinkers who in addition to being adaptive are resourceful and collaborative. Many of these students are neither served by regular education nor are they able to fit into traditional definitions of disability. These are not necessarily the students who are now most valued.

Research in education is finding that learners learn differently, and that the best outcomes come from individualized instruction, personalized learning, and personal engagement, as well as through self-directed learning, experiences, and peers. Additionally, an alternative notion of disability is emerging. This view suggests that disability is not a personal trait but a relative condition: a mismatch between the needs of the learner and the educational environment and experience that is offered.

Digital resources such as open courseware and open education resources (OER) can assist in these respects by allowing educators to transform, reconfigure, and copy, at no cost, the content best suited to each student.

The Flexible Learning for Open Education (FLOE) project is one type of digital resource which is utilizing the Global Public Inclusive Infrastructure (GPII) to provide (1) pluggable utilities to enable learners to discover and express individual needs, (2) a service that transforms resources, augments resources or finds alternative resources to match the individual needs, and (3) a demand-supply pipeline of possible “producers” to meet the demands and fill the gaps.

FLOE also opens new areas for research into marginalized or outlying learners. Whereas most current research in education uses statistical methods that rely on homogeneity within sample populations, FLOE will help researchers concentrate on statistical outliers in an inclusive research design. The goals of this design are to be more inclusive; more relevant to the knowledge economy; more timely in terms of changing context as well as implementation; contextualized within the learner's context; sustainable in costs, impact, and resources; and individually applicable rather than abstract or generic.

The combination of systems like FLOE and the GPII demonstrates the possibility of moving from a push to a pull economy; currently, too much time and money is spent on marketing and commercialization that could instead be spent on innovation and production.

A pull economy has several benefits, including the ability to create new markets based on delivering personalized solutions to individuals rather than mass-marketed products designed not with any particular individual in mind, but for the public at large.

3.4 Richard Schwerdtfeger  - Accessibility – The Road Less Traveled

Our approach to accessibility today is much different from previous approaches. Accessibility used to be done for philanthropic reasons, but today there is a business case for it. A philanthropic approach tends to solve a single problem but does not address systemic changes in accessibility. Today accessibility makes business sense because it is required for government procurement, to avoid human rights violations, and to avoid litigation.

The "JavaScript" problem at IBM circa 2004 illustrates the business case for accessibility. The WCAG 1.0 guidelines effectively prevented the use of CSS and JavaScript in web browsers, and as a result rich internet applications could not be delivered through the browser. The solution was to add the ability to express accessibility semantics in HTML and to make HTML fully keyboard accessible. This led to the development of the WAI-ARIA standard. Today WAI-ARIA has become the web accessibility API: it is cross-platform, fully navigable by keyboard, supports the WCAG 2.0 and 508 Refresh standards, and is included in over 170 IBM products.

The development of the ARIA standard was a multi-faceted task that involved developing the guidelines, implementing them in web browsers, building a component library, changing the WCAG guidelines, funding and helping assistive technology vendors to redesign their products to handle dynamic content, harmonizing standards (WCAG, 508 Refresh, EU Mandate 376), building accessibility rule sets, and evangelizing the standard. IBM invested heavily in ARIA to make it a success, but it is unlikely that a similar effort would be undertaken today because of the cost. The success of ARIA was due to open-source standards and architectures that allowed other companies to use ARIA effectively.

Today, accessibility compliance criteria are growing, and companies now have to comply with standards such as the U.S. 508 Refresh, the U.S. Communications and Video Accessibility Act, and the European Commission's Mandate 376. These standards are being implemented globally, and companies face increased pressure due to the risk of litigation.

As the focus rapidly shifts from the desktop to web and mobile technologies, we need to build accessibility into these new technologies. The Web is also becoming more visual as more data becomes available. The rise of mobile technologies is also making users situationally disabled. The older "one size fits all" approach to accessibility needs to be replaced with a personalized approach.

An open approach is needed to address the new accessibility challenges. Cloud-based technologies can be used to personalize interfaces and content and to provide "context-aware," inclusive solutions. The GPII is a new collaborative model for removing barriers to access. The aim of the GPII is to make it easier for companies, users, and developers to produce and use accessible solutions. Features of the GPII include: readily available tools and assistive technology (AT); content in the form that you need it in; context-aware ICT; open-source solutions; pooling of experts to address the tough issues; and leveraging cloud and emerging technologies to create affordable solutions.

3.5 Denis Anson - Clinical Practice in Assistive Technology

Patients in need of assistive technology are rarely prescribed the optimal assistive technology. Assistive technology is not clinicians' core practice, yet they are expected to prescribe and evaluate the effectiveness of assistive technology for patients. Due to increasing work demands and decreasing budgets, clinicians have very little time to acquire and maintain expertise in assistive technology.

For assistive technology to be effective, clinicians need to follow an iterative process whereby the prescribed assistive technology is evaluated repeatedly until the most effective assistive technology is used by the patient. However, this rarely happens due to lack of resources.  

Another reason for the lack of optimal AT for patients is that clinical treatment planning is based on case models. But case models are not effective for assistive technology interventions because of rapid changes in product offerings: assistive technology is frequently updated with newer features and interfaces, and older models may no longer be available.

To maintain their AT expertise, AT practitioners need to be highly passionate about AT and follow techniques such as browsing catalogs, joining discussion groups, sharing experiences with other assistive technology experts, and simply "messing around" with the technology.

An update is needed to the common clinical practice for assigning AT to patients. In the current common clinical practice, first the clinician does a thorough evaluation of the client’s assistive technology needs. The clinician then attempts to match the AT needs with available AT. As part of the matching process the clinician tries to match the AT needs with features available on mainstream or assistive technology products. Many mainstream products have features that can be used as AT but the manufacturers are not aware of them and the products are not marketed as such. The clinician has to depend on her expertise and experience to match the needs of the client to the product.
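The matching step described above can be sketched as a simple feature-coverage score. The client needs, product names, and feature sets below are invented for illustration; a real matching tool (as envisioned for the GPII) would draw on a shared product database rather than a hand-built catalog.

```python
# Illustrative sketch: rank candidate products (mainstream or AT) by
# how many of the client's identified needs their features address.

def match_products(client_needs, catalog):
    """Rank products by the fraction of client needs their features cover."""
    ranked = []
    for product, features in catalog.items():
        covered = client_needs & features          # needs this product meets
        ranked.append((len(covered) / len(client_needs), product, covered))
    return sorted(ranked, reverse=True)            # best coverage first

needs = {"large_text", "switch_input", "word_prediction"}
catalog = {
    "Tablet A":     {"large_text", "word_prediction", "speech_output"},
    "AAC Device B": {"switch_input", "word_prediction", "large_text"},
}
ranking = match_products(needs, catalog)
```

A shared, searchable catalog of this kind would also surface mainstream products whose AT-relevant features are not marketed as such, addressing the discovery problem noted above.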

For clinicians who do not have AT expertise this process is time consuming and frustrating. They tend to repeatedly prescribe a small set of products that they have used in the past. This approach has two unintended side effects. First, these solutions meet some of the client's needs but not all of them. Second, newer and superior products are unable to break into the market.

The GPII presents an opportunity to help clients and clinicians to discover and use the new, superior and optimal solutions for the client.

3.6 Jim Tobias - When Bad Things Happen to Good Technologies

Assistive technology has a serious utilization and adoption problem. Successful utilization of accessibility technologies ranges from less than 1% to 25%, even when the technology is available free of cost. These low numbers would be acceptable if it were not for the fact that educational, employment, and independent living outcomes for people with disabilities are so poor.

The information and communication technology industry has an adoption and utilization problem as well, which only compounds the issue for people who need accessible technologies.

Two recent longitudinal studies shed light on the reasons for the low rates of adoption and utilization: they found that AT is available but is being blocked by non-technological barriers, including a lack of awareness and a lack of confidence among users. The studies also found that even websites that had been built with accessibility in mind had become inaccessible over time.

Theories of innovation, including those developed by Everett Rogers, identify five main factors critical to whether an innovation is adopted or implemented by individual users: relative advantage, compatibility, observability, complexity, and trialability. The Cloud, or systems like the Global Public Inclusive Infrastructure (GPII), can address several of these factors and eliminate potential barriers for users by making more information available.

Organizations, by contrast, adopt innovations in two main ways. The first is by consensus, where everyone in the organization comes to agreement on a new approach. The second is by bureaucracy or authority, which often results in the organization attempting to claim the innovation as its own. This has been the case with several innovations related to accessibility.

The adoption of an innovation by users can be portrayed as a bell curve. The so-called 'early adopters' are, in demographic terms, the people most likely to be able to afford the innovation at a higher cost. Another curve can be seen in how quickly a new innovation reaches a high threshold of market penetration.

A limiting factor in both scenarios is the value chain: in accessibility, products and services are often divided among many different organizations, systems, or sets of regulations. For example, a website must function not just within a particular browser, but also on a given operating system (OS), which runs on a given computer. If anything in that chain is not optimized for accessibility, the website will not function as desired.

The Cloud and GPII can help address these barriers to accessibility as well by allowing for better coordination across linked products, by identifying accessibility implications sooner, and by shortening AT development times to respond to issues.

Another current danger, in some cases, is an abundance of raw accessibility that lacks discrimination. Without a framework, terminology, professional development, and help with searching in this area, there is a risk of too many choices for consumers. The so-called Information Ecosystem can be greatly assisted by better targeting audiences and their goals; by providing more information in rich-context formats that remain intellectually accessible to the average person; by combining and 'federating' search criteria related to accessibility terminology and topics; and by establishing clearly defined oversight roles for stakeholders. These roles must include consistent schemata of the overall framework and provide for quality assurance, dispute resolution, and outcome measurement through a community of practice.

Inaccessibility can be seen as analogous to the prevention of disease through hand-washing in the public health field. Until hand-washing was promoted by policy makers and physicians, it was only possible to treat individual cases of illness, sometimes at great cost in time and expense. After hand-washing became the leading public-health response to contain common illnesses and came into widespread use, disease transmission dropped significantly. An analogous outcome could be seen for many barriers that people with disabilities currently experience if there were a more coordinated approach to solving fundamental problems of inaccessibility in information and communication technologies.

3.7 Colin Clark  - Architecture and User Experience

The cloud environment is significantly different from the conventional desktop computing environment. It is marked by diverse platforms, devices, and browsers, and most importantly, by diverse user needs. Users tend to carry tasks over and multitask across platforms and devices, and they expect their data and applications to be synchronized across all their devices. Users want the ability to customize their different devices according to their needs and preferences, and to make those devices feel comfortable to them.

Accessibility, defined as the ability of a system to accommodate the needs of the user, becomes a usability issue in the cloud environment. Disability is no longer an artifact of the user; it is about the software's ability to accommodate and adapt its interface to the user's needs and preferences.

The goal of the cloud-driven user experience is to build interfaces that adapt to the needs, preferences, and tastes of the user, and to deliver content in a form that people can understand and use across all available platforms. To achieve this user experience, the technical goals for developers include: building user personalization into the fabric of the web, mobile, and desktop; lowering the cost of building accessibility, so that developers can draw from a diverse range of easy-to-find adaptive building blocks; and sustaining an infrastructure for personalization and interoperability for the long run.

Some of the problems in building a cloud infrastructure that meets the above requirements are: multi-platform support, increased hybridization across the Web and desktop, scaling, and extensibility and sustainability.

One architectural direction to build such a system is to use the Web as a model. The web is massively distributed and yet massively scalable. We also know ahead of time that we will be working in an environment with diverse platforms, programming toolkits, programming languages, user interfaces, etc. We need to find the lowest common denominator technologies; technologies that can interoperate across different platforms.

The use of declarative programming techniques versus imperative programming is another architectural direction that should be considered. We ought to be able to interoperate across programming languages and toolkits, and we need to build a language in data that is able to cross those boundaries, and communicate user interfaces, architectural requirements, etc.

We need architectures that can be transparently moved between client and server without reprogramming or recoding, are ontology- and format-agnostic, and are loosely coupled and modular.
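The declarative direction can be sketched as follows: the interface is expressed as data rather than as imperative code, so any platform with an interpreter for the vocabulary can render it in its own idiom. The vocabulary below is invented for illustration and is not Fluid's or any published component format.

```python
# Sketch of a "language in data": the UI is a data structure, and each
# platform supplies its own interpreter for it.

ui_description = [
    {"type": "label",  "text": "Name:"},
    {"type": "input",  "id": "name"},
    {"type": "button", "text": "OK", "action": "submit"},
]

def render_as_text(description):
    """One possible interpreter: render the same data as a text UI.
    A GUI toolkit, a screen reader, or a remote console could each
    supply a different interpreter for the identical description."""
    lines = []
    for element in description:
        if element["type"] == "label":
            lines.append(element["text"])
        elif element["type"] == "input":
            lines.append(f"[{element['id']}: ____]")
        elif element["type"] == "button":
            lines.append(f"<{element['text']}>")
    return "\n".join(lines)

screen = render_as_text(ui_description)
```

Because the description is plain data, it can also cross process, language, and network boundaries unchanged, which is what makes it suitable for moving between client and server.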

Four technical directions include:

  1. Representational State Transfer (REST) web services
  2. Simple, ubiquitous data interchange (JSON)
  3. Use of HTML, CSS, and JavaScript for cross-platform user interfaces
  4. Use of native platform APIs for idiomatic platform integration
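Directions 1 and 2 can be illustrated with a minimal sketch: user preferences exchanged as plain JSON, addressed by REST-style resource paths. The preference vocabulary and URL scheme below are invented for illustration; they are not a published GPII or Fluid API.

```python
import json

# A REST-style resource path identifying one user's preference set.
# (Illustrative only; the real path scheme would be defined elsewhere.)
resource = "/preferences/user-123"

preferences = {"fontSize": 18, "highContrast": True, "language": "en"}

# Serialize for transport: JSON is the "lowest common denominator"
# interchange format that any platform or language can parse.
payload = json.dumps(preferences, sort_keys=True)

# A client on a completely different platform restores the same data.
restored = json.loads(payload)
```

The point of choosing such minimal technologies is interoperability: no platform-specific object model needs to cross the wire, only data.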

A proposed approach to building such a cloud infrastructure is to first design an overall architecture, then build a solid, viable reference implementation, and finally grow a community to evolve and sustain the infrastructure.

3.8 Jim Tobias - Advanced Media Techniques for Accessible Content

Currently, accessibility in media is provided through captioning and descriptions as overlays on a fixed, monolithic media object. This approach better addresses the needs of deaf and blind users than those of users with low vision or who are hard of hearing; it is not a disservice, it is simply not optimized for the greatest number of users.

These methods are Boolean: they are either on or off. There is currently no mainstream way to personalize the delivery of various types of content to meet a user's individual preferences or actual sensory capabilities.

There are, however, several areas in which there is an opportunity to innovate. Three such areas are the ability to provide media in contingent forms through the use of camera angles and separate audio tracks; the ability of digital media to be manipulated with minimal cost and time; and, an increasing trend toward sharing and openness within both media production as well as consumption.

While intellectual property considerations will necessarily be taken into account, it may be possible to use sharing opportunities for innovation to provide a wide variety of improvements and customizations for users based upon either personal profiles or generic profiles.

For users who are deaf or hard of hearing, improvements might include enhanced audio, such as frequency equalization, synthetic boost (overlaid sibilants), and vowel epenthesis; separate music and special-effects tracks, allowing separate rendering or volume control; captioning; and close-ups to support speech or lip reading.

For users who are blind or who have low vision, possible solutions might include camera-angle customizations, magnification of an important object or person in the video, or enhanced contrast and focus, including scene lighting or skin-able actors and objects.

For users with cognitive disabilities, some possible solutions might include script simplification; disambiguation, such as by replacing pronouns, showing the speaker pointing to persons or objects, and showing the image of the absent character; semantic scaffolding, or the inclusion of an automatic or on-demand rewind feature which repeats or explains media content; or, a question mark symbol on a remote control offering help options.

3.9 Gottfried Zimmermann – User Interface Adaptation

User interface (UI) adaptation can be thought of in terms of its layers and its timeline. An adaptive UI can adapt itself on five layers:

  • The Presentation Layer, where elements such as color, position, size, volume, pitch, and vibration frequency are modified.
  • The Lexical Layer, which concerns everything used as a token for output (labels, icons, sounds, vibration patterns, etc.) and input (access keys, gestures, mouse events, etc.). Tokens are in turn interpreted in another layer of the UI to assign operational meaning to them.
  • The Syntactic Layer, pertaining to the rules for input and output.
  • The Semantic Layer, in which the meaning of the application is conveyed; it concerns the selection of the right content, modalities, and translation for the user.
  • The Dialog Layer, the most advanced layer, comprising the whole interface: the dialog model used, and the structure and groups used in the interface.
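A minimal sketch of adaptation at the two lowest layers might look like the following. The widget model, scale factor, and icon map are invented for illustration; real toolkits expose richer presentation and lexical hooks.

```python
# Toy illustration: presentation-layer adaptation (scaling sizes,
# raising contrast) versus lexical-layer adaptation (swapping one
# token, a text label, for another, an icon).

widget = {"label": "Submit", "font_size": 12,
          "foreground": "#777777", "background": "#eeeeee"}

def adapt_presentation(widget, scale=1.5, high_contrast=True):
    """Presentation layer: modify size and color, leave tokens alone."""
    adapted = dict(widget)
    adapted["font_size"] = round(widget["font_size"] * scale)
    if high_contrast:
        adapted["foreground"], adapted["background"] = "#000000", "#ffffff"
    return adapted

def adapt_lexical(widget, icon_map):
    """Lexical layer: substitute one output token for another."""
    adapted = dict(widget)
    if widget["label"] in icon_map:
        adapted["label"] = icon_map[widget["label"]]
    return adapted

result = adapt_lexical(adapt_presentation(widget), {"Submit": "✓"})
```

The higher layers (syntactic, semantic, dialog) would require restructuring interaction rules or the dialog itself, which is why, as noted below, they remain largely unsolved.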

In the timeline for an adaptive UI, the players are the author of the interface, the adaptation author, and the user. Changes to the interface made by the author are design time changes while changes made by the adaptation author and the user are runtime changes.

There are varying levels of success with UI adaptation at the various layers of the UI. Presentation adaptation works well; simple lexical adaptation works; syntactic adaptation is almost non-existent today; semantic adaptation is in its infancy; and dialog-level adaptation exists as hand-made solutions for frequently used platforms. Currently, there is a lack of models for runtime adaptation.

Some examples of interface adaptation include WURFL (Wireless Universal Resource File), Amara, and URC (Universal Remote Console). WURFL is a successful approach to server-based device adaptation: an open-source database with thousands of device profiles, against which interfaces and content can be modified based on the characteristics of the client device. Amara is a crowd-based approach to captioning videos. The goal of URC technology is to allow any device or service to be accessed and manipulated by any controller. Users can then select a user interface that fits their needs and preferences, using input and output modalities and interaction mechanisms that they are familiar with and that work well for them.
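The server-based device adaptation that WURFL enables can be sketched as follows. This is a toy stand-in, not the real WURFL API: a tiny device-profile table is consulted to tailor a response to the client device's capabilities.

```python
# Hypothetical sketch of server-side device adaptation in the style of
# WURFL: look up a device profile, then tailor the response to its
# capabilities. The database and matching rule here are invented.
device_db = {
    "feature-phone": {"max_image_width": 128, "supports_js": False},
    "smartphone":    {"max_image_width": 720, "supports_js": True},
}

def render_page(user_agent: str) -> dict:
    """Choose content variants based on the client's device profile."""
    # Real systems match user-agent strings against thousands of
    # profiles; this sketch uses a single crude rule.
    key = "smartphone" if "Mobile" in user_agent else "feature-phone"
    profile = device_db[key]
    return {
        "image_width": profile["max_image_width"],
        "interactive": profile["supports_js"],
    }

print(render_page("Mozilla/5.0 (Linux; Android) Mobile"))
```

The same lookup-and-tailor pattern could in principle be keyed on a user profile rather than a device profile, which is the gap discussed below.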

Adaptation is working well in the mobile world. It happens at design time, but because it requires a high development effort, adaptations for people with disabilities attract little commercial interest. Mainstream developers are also often unaware of adaptations for people with disabilities.

We currently have user interfaces tailored to different devices, but user interfaces based on user characteristics are unavailable. However, some models, such as the URC, seek to close this gap.

Statistical approaches that provide adaptations at runtime need to be developed, since designing adaptations by hand for every device and user is not feasible. For a statistical approach to succeed, both the needs of users and the user interfaces must be parameterized.
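One way to read this parameterization requirement is that user needs become numeric feature vectors, so that adaptations that worked for similar users can be recommended statistically. The following sketch is purely illustrative (the dimensions and profiles are invented, and a real system would use far richer models than nearest neighbor):

```python
# Hypothetical sketch of a statistical runtime-adaptation approach:
# user needs are parameterized as vectors, and the adaptation of the
# most similar known profile is recommended (nearest neighbor).
import math

# Each known profile: (needs vector, adaptation that worked for that user).
# Invented dimensions: visual acuity, hearing, motor control,
# each scaled 0..1 where 0 means severely limited.
known_profiles = [
    ((0.1, 0.9, 0.8), "screen_reader"),
    ((0.4, 0.9, 0.9), "large_fonts"),
    ((0.9, 0.2, 0.9), "captions"),
    ((0.9, 0.9, 0.2), "switch_access"),
]

def recommend(needs):
    """Return the adaptation of the closest known profile
    by Euclidean distance (requires Python 3.8+ for math.dist)."""
    _, adaptation = min(known_profiles,
                        key=lambda p: math.dist(p[0], needs))
    return adaptation

print(recommend((0.85, 0.25, 0.95)))  # closest to the "captions" profile
```

As the pool of anonymized profiles grows, such a system could move beyond a fixed table toward learned models, which is exactly the self-learning direction raised in the discussion recommendations later in this report.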

Community-based efforts have high potential to contribute to user interface adaptation. However, measures for quality control, the cooperation of vendors, and support for third-party user interfaces are needed.

Some recommendations for promoting successful user interface adaptation are to separate the user interface from applications, to use crowdsourcing techniques, and to foster community-driven approaches.

3.10 Gregg Vanderheiden – Accessibility and 2050

The digital divide between groups who have access to, use of, and knowledge of information and communication technologies (ICT) and those who do not is widening. At the same time, more and more everyday devices have ICT-like interfaces. Knobs and buttons on devices and appliances are increasingly being replaced by touch screens. Touch screens, however, present very specific accessibility challenges, not only for people with disabilities but also for people who are aging or who have literacy challenges. These interfaces are usable by the designers themselves (who typically are not in the bottom 50% of the technology quotient), but many in the population cannot use them because of barriers related to language, cognition, reading, seeing, hearing, physical abilities, and aging.

Assistive technology (AT) could be used to provide access to ICT for those who face barriers, but due to the increase in the number of platforms, the emergence of devices like the iPad, and the consequent collapse of the AT price model, assistive technology cannot keep up. The margin to innovate, develop, and support new AT has disappeared, and the worst affected are those with severe disabilities.

Consider the world in 2050. ICT is ubiquitous: it is in everything, everywhere, and even in us. Things such as socks and combs are connected to the Web; screens and sensors have replaced devices at work, at home, and outdoors. Nanobots inside the body not only repair our cells but also extend our capabilities. Instant information becomes integral to our lives and an extension of our consciousness. With the emergence of the Web and the Cloud, we will have reached a new inflection point. The Cloud is not the one we know today but the one that is coming, the one that totally transforms society and is integral to everything.

Technology will be so inexpensive that everyone can afford it and the quality of technology will be determined by the ease of use and the quality of the experience it provides to the users. Everyone will be seeing a digitally enhanced world where information is overlaid on physical objects around us.

Though many of the disability-related causes of the digital divide will be solved by 2050, the digital divide will not be completely eliminated. Artificial organs and content-transformation technologies will reduce barriers to accessing information, but their availability might be restricted to a very small percentage of the world's population. There are also limits to our science: we might not be able to address every kind of disability. The consequences of not being able to use technology in 2050 will be devastating. People will be unable to access education, commerce, travel, health, safety, community, family, and more. And these have-nots will exist in the same house as the haves, unable to do anything about it.

To avoid such a scenario, we need policymakers and stakeholders to be persistent in demanding that access to these technologies is distributed fairly, rather than allowing those who cannot afford help to be shut out of these advances. As long as poverty and inequality exist, the digital divide will also exist; the challenge is to minimize its impact.

Researchers and developers should allocate more resources to interface development for the many small, disadvantaged user populations than they do for their mainstream users. Innovations such as Infobots, and practical options such as interface sockets on hardware, can also help users identify their needs and plug portable, personalized interfaces into a wider variety of devices and systems.

These solutions will further benefit from the adoption and expansion of a public inclusive infrastructure, such as the Global Public Inclusive Infrastructure (GPII), which uses the Cloud to make instant auto-personalization simpler, more affordable, and more accessible.

Access to digital interfaces, working interactively with the Cloud, will be a necessity in the future. Rather than creating alternate interfaces for every individual who needs one, it is in our best interest to advocate for technology that can interact with more personalized interfaces. Interface sockets, Infobots, the GPII, and other ideas are only part of the overall solution to closing the digital divide; there is much more we can do to achieve the future we want within the context of the world that we live in.

3.11 Clayton Lewis - NIDRR Cloud Computing Initiative

The National Institute on Disability and Rehabilitation Research (NIDRR) has a cloud computing initiative and envisions significant benefits from it for people with disabilities, particularly those with cognitive disabilities. The White House is also interested in this effort and has charged NIDRR with ramping up the work. NIDRR would like to coordinate with other international cloud computing efforts to move this area forward efficiently and effectively.

NIDRR already has some ongoing collaborations. NIDRR and the National Institute of Standards and Technology (NIST) are working together on an effort to make voting more inclusive, finding new ways to structure voting technology that open up new opportunities to work on accessibility and accommodations. NIST is working on a voting application prototype, while NIDRR is supplementing the effort to explore the profile-based accessibility being developed as part of the GPII.

NIST is also in charge of the Cloud First Initiative, a White House-supported effort that directs federal agencies to consider cloud computing options for any procurement. NIDRR is working with the federal cloud computing project on developing accessibility guidance. NIDRR is also working with the Department of Education on a couple of projects for online educational assessment, specifically for children with serious cognitive impairments, and with the Office of Educational Technology on the delivery of open educational resources (OER). NIDRR and NSF are also working on a project that tailors content presentation for mobile web delivery.

While there is widespread enthusiasm and agreement that the GPII will bring significant benefits for users, federal agencies are particularly interested in finding out what they have to do to create accessible content. NIDRR has explored this question and continues to do so. In contrast to educational and industrial IT environments, the federal government's IT environment is quite closed and is more concerned with making information available than with making it accessible. There is a risk that the federal government may not lead in accessibility and may in fact be lagging in accessibility efforts. To address this, NIDRR will convene a discussion of strategic research opportunities relating to the Section 508 requirements that agencies are concerned with, particularly content creation.

As part of this effort, NIDRR is interested in the prospects of cloud computing for delivering services that help with accessibility processing. Creating accessible content is a significant challenge, and as long as content creators have to work out for themselves how to make content accessible, we will always be playing catch-up.

As services move to the cloud, large amounts of data that were previously unavailable are becoming available and can enable fine-grained analysis. This analysis can encourage a data-driven approach to providing services to people with disabilities. However, privacy remains a significant challenge that needs to be addressed.

Another area NIDRR is interested in is experience sharing. People with disabilities have specific needs, and when they evaluate technology, their commentary about it is highly useful to other individuals. But this information rarely reaches them.

Other areas NIDRR is interested in include simplified views of content that help people with cognitive disabilities determine whether a particular document is going to be of use to them, and computer sharing: people with disabilities need their information and communication technology (ICT) set up in a particular way, which is especially challenging when they share their ICT. The GPII has the potential to make ICT work instantly, just the way users want it to.

4 Discussion

On September 21 and 22, 2012, participants gathered virtually to discuss the presentations and the State of the Science for ICT accessibility. The two-day discussion generated recommendations on research, policy and education which are listed here.

4.1 Research recommendations

  1. Research and develop methods for users to generate and manage their preferences. A vital requirement is a mechanism that educates users on various accommodations (larger fonts, captions, etc.) and allows them to select their preferences, thus generating a user profile of needs and preferences. The profile may be stored locally (e.g. on a USB drive) or in the Cloud. Users should also be able to update their preferences later. The tool used for generating preferences could present sets of combinations or values next to each other, making it easier for the user to pick the one closest to optimal, instead of settling for the default or the first one presented.
  2. Develop a self-learning adaptation system. This system will initially use the anonymous needs and preferences from the user profiles and combine it with heuristics to generate interface adaptations for users. As the amount of user profile data available increases, the system will be able to recommend adaptations based on overlaps among user profile data.
  3. Investigate differences in accommodation based on severity of disability. Different accommodations may be needed depending on the severity of the disability. For example, accommodations for those who are blind differ from those for people who are print impaired. Research on accommodations based on type and level of disability is required.
  4. Develop accommodations that are function-specific rather than diagnosis-specific. Individuals with different types and severity of disabilities sometimes use similar accommodations. Also as a result of a situational disability, individuals without disabilities may also want to use a particular accommodation. As a result, accommodations that focus on the function would be more relevant than accommodations designed considering only a particular disability and function.
  5. Explore the overlap between usability and accessibility. Accessibility can be considered one end of the usability spectrum, and resources and examples that highlight the overlap should be provided to developers. Developers and designers should be convinced that accessibility is not an "extra" but a part of the user experience, and should therefore be considered from the early stages of development.
  6. Consolidation of accessibility guidelines. Develop a way to track and consolidate accessibility guidelines/requirements/best practices, features/bugs, and workarounds, encouraging public access and participation.
  7. Explore which parts of developing accessible content are inherently difficult. For example, when writing alternative text for images in web pages, instead of explaining the image, authors tend to write descriptions that only a sighted individual may understand.
  8. Embed auxiliary information in speech-to-text conversions. Speech-to-text conversions typically discard additional information such as pacing, inflection, and tone. Methods that facilitate or automatically embed this information are required.
  9. Develop use cases for the various stakeholders in the accessible cloud infrastructure. In addition to end users, developers, researchers, organizations, clinicians, and industry are all users of the cloud infrastructure. Use cases for all of these users, and for the interactions between them, are required.
  10. Develop normative data sets for user preferences. The user preferences collected as part of the Global Public Inclusive Infrastructure (GPII) could be used to generate normative data. These data will be helpful to developers and researchers developing new techniques for interface adaptation.
  11. Generate a Cochrane report on phenotypic accessibility as a starting point for generating initial predefined preferences.
  12. Study implications of integrating the Global Public Inclusive Infrastructure (GPII) and social media. Day-to-day and social interaction between individuals with disabilities and their families or caretakers can be facilitated by integrating social media and the GPII. Crowdsourcing from an individual's network or another trusted network could also be used to provide assistance.
  13. Investigate role of 3D printing in accessibility.
  14. Study user mental models of personal information. Personal information will eventually require an expression of preference by individuals about how they wish their information to be managed. The Facebook privacy-settings experience has shown that the majority of users did not know what information was at stake, what the risks were, or how to express the preferences they might have.
  15. Research implications of lossy translation on information. Repeated automated translation of the same text often results in output that is ambiguous. However, sometimes the underlying message is still comprehensible. Methods that analyze different automated translation methods and compare their outputs for accuracy in “meaning” are needed.
  16. Study the implications of the semantic web. Proponents of the semantic web suggest that semantic markup will improve automatic interpretation of web pages thereby improving accessibility while opponents suggest that the semantic web will not affect accessibility.
  17. Modify existing content authoring tools. Develop or modify content authoring tools, such as word processors and content management systems, to automatically build accessibility in at the source.
  18. Build tools for development of user interfaces that infer their models.  Integrate those tools into the IDEs that are currently being used by developers.
  19. Research quality control issues in crowdsourcing. Crowdsourcing has been promoted as an option to transform media and information in formats that are usable by individuals with disabilities. Issues in crowdsourcing quality control have been studied previously, but research on how crowdsourcing quality affects accessibility of media and information is lacking.
  20. Study accessibility in massive open online courses (MOOCs). The popularity of MOOCs has grown rapidly, and millions of students worldwide are enrolled in these courses. The course design involves large-scale crowd interaction and feedback as well as automated feedback through online tests and quizzes. MOOCs are an excellent opportunity for understanding how crowdsourcing is used for learning and assisting individuals, and also for studying how content transformation and accessibility are deployed at large scale.
  21. Study methods for making information disappear. Information that self-destructs after an authorized third party has accessed it may address some of the concerns regarding privacy and security.
  22. White paper on security and identity. A whitepaper on the challenges and methods of securely storing and protecting individuals’ preferences and identity will help to collect various stakeholders’ views and serve as an introduction to the topic.
  23. Study the viability of trust-based systems for storing user preferences. Protecting the identity and preferences of users is a primary need. Trust-based networks combined with cryptography may help improve security as the network size grows.
  24. Research methods that separate authentication from authorization. Authentication is the mechanism by which a user is identified, while authorization is the mechanism that determines what level of access an authenticated user should have to resources. The viability of methods that allow authorization to be set up for predetermined applications and times needs to be studied.
  25. Research which cloud architectures yield maximum accessibility.
  26. Study the integration of computing systems in the same environment as the user. Wearable computing is becoming more viable and popular and presents an interesting opportunity for assistive technology to become more proactive in its function. However, many topics need to be researched in this area, including hardware, software, context awareness, usability, and risks.
  27. Develop technical standards for cloud-to-cloud communication. While users can exchange data with a cloud, it is not currently possible for clouds to exchange data among themselves. Issues like data exchange formats, communication protocols, and security are yet to be resolved.
  28. Study organizational allocation of resources to accessibility. Information on organizational allocation of resources to accessibility and reasons why these resources might be diverted elsewhere in an organization under various situations need to be studied. The challenge is to understand how a web-creating or a web-contributing organization sustains accessibility during lean times.
  29. Research economic implications of personalization. The personalization of content and interfaces to each individual changes the traditional economic models, in which content and interfaces were at most tailored for groups and economies of scale existed. New economic and market models need to be identified for this upcoming change.
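Recommendation 1 above envisions a portable needs-and-preferences profile that can live on a USB drive or in the Cloud and be updated by the user. A minimal sketch, assuming a flat JSON serialization with invented key names (not a real schema), could look like this:

```python
# Minimal sketch of a portable needs-and-preferences profile.
# Key names are hypothetical; a real system would use a standardized
# preference vocabulary and a proper storage backend.
import json

profile = {
    "preferences": {
        "fontSize": 24,          # points
        "highContrast": True,
        "captions": True,
        "captionLanguage": "en",
        "volume": 0.8,           # 0.0 - 1.0
    }
}

# Store the profile (here, as a string; a file on a USB drive or a
# cloud store would work the same way).
stored = json.dumps(profile)

# Later, the user updates a preference and the profile is re-saved.
loaded = json.loads(stored)
loaded["preferences"]["fontSize"] = 28
stored = json.dumps(loaded)

print(json.loads(stored)["preferences"]["fontSize"])  # 28
```

Because the profile is plain structured data, the same file could feed the self-learning adaptation system of recommendation 2 (as anonymized input) and the normative data sets of recommendation 10.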

4.2 General recommendations

  1. Connect accessibility needs to mainstream scenarios. Connecting accessibility needs to "mainstream" scenarios to show the overlap, synergy, and unique differences will help drive development of innovative solutions that are flexible and customizable.
  2. Develop connections with the usability community. Join forces with the usability research and design communities (like ACM, CHI, Interaction Design, Usability Professionals, etc.) more explicitly in activities that will help create a culture where people understand their interface preferences and know how to find and get them met.
  3. Evaluate role of government service. Does personalization change the role and function of government service enough to require different kinds of legislation? How can we have clear, testable legislative requirements that support personalized access?
  4. Identification of funding sources that allow non-commercial individuals or groups to work alongside commercial participants in the development of standards.
  5. Development of user interface models. Develop methods to teach designers to build user interface models rather than pixel representations of user interface models.
  6. Contribute to standards. The development of the cloud infrastructure is currently underway and it is necessary that the infrastructure includes and supports those with disabilities.
  7. Promote metadata tagging of content. Frameworks such as the Learning Resource Metadata Initiative (LRMI) have been developed for describing or "tagging" learning resources on the web. This framework is a key first step in developing a richer, more fruitful search experience for educators and learners. Once a critical mass of educational content has been tagged to a universal framework, it becomes much easier to parse and filter that content, opening up tremendous possibilities for search and delivery.
  8. Development of a federal government agenda to strategically address the technology for making content accessible.
  9. Develop an experience-sharing platform. People with disabilities have very particular needs, and as they evaluate technology, their commentary about what works and what doesn't is potentially helpful, but it often doesn't reach other individuals with disabilities. As information of this kind moves into the cloud, it presents the prospect of using it to make technologies work much better than they do now. The social-support aspects of experience sharing and communication within the disability community are also very important.
  10. Promote development of accessibility APIs by government agencies. Agencies are being encouraged to use technology-independent APIs so that information can be pulled by users in various formats.
  11. Support outreach initiatives. Outreach to stakeholders should be narrowly focused, and on their turf. Attempts should be made to reach stakeholders at venues, such as trade shows, and not expect all stakeholders to gather at once at a single location.

5 Acknowledgements

The contents of this report were developed with funding from the National Institute on Disability and Rehabilitation Research, U.S. Department of Education, grant number H133E080022 (RERC on Universal Interface & Information Technology Access). However, those contents do not necessarily represent the policy of the Department of Education, and you should not assume endorsement by the Federal Government.

Trace Research & Development Center
College of Information Studies, University of Maryland
Room 2117 Hornbake Bldg, South Wing
4130 Campus Drive
College Park, MD 20742
Copyright 2016, University of Maryland
Tel: (301) 405.2043
Fax: (301) 314.9145