A couple of months ago, I presented a paper at the HCI International conference. It outlines the modality-independent interaction framework. The abstract:
People with disabilities often have difficulty using ICT and similar technologies because of a mismatch between their needs and the requirements of the user interface. The wide range of both user abilities and accessibility guidelines makes it difficult for interface designers, who need a simpler accessibility framework that still works across disabilities. A modality-independent interaction framework is proposed to address this problem. We define modality-independent input as non-time-dependent encoded input (such as that from a keyboard) and modality-independent output as electronic text. These formats can be translated to provide a wide range of input and output forms as well as support for assistive technologies. We identify three interface styles that support modality-independent input/output: command line, single-keystroke command, and linear navigation interfaces. Tasks that depend on time, complex paths, simultaneity, or experience are identified as barriers to full cross-disability accessibility. The framework is posited as a simpler approach to wide cross-disability accessibility.
This paper makes several contributions. It outlines user interface styles, along with input and output formats, that are nearly universally accessible (in this case, accessible to people with mild to severe sensory and/or physical impairments). The paper also describes several task-inherent barriers to such nearly-universal accessibility: tasks that interface changes alone cannot make fully accessible.
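To make the framework's core idea concrete, here is a minimal, hypothetical sketch of one of the three interface styles: a linear navigation interface driven entirely by single keystrokes. The class name, key bindings, and item list are my own illustrative assumptions, not from the paper; the point is that every input is a discrete encoded keystroke and every output is plain electronic text, so both ends can be retargeted to switches, speech, braille, or other assistive technologies.

```python
class LinearNavigator:
    """Illustrative linear-navigation interface (not from the paper).

    Input:  single-keystroke commands (modality-independent encoded input).
    Output: plain text strings (modality-independent output) that a caller
            can render visually, speak aloud, or send to a braille display.
    """

    def __init__(self, items):
        self.items = list(items)
        self.index = 0

    def handle_key(self, key):
        """Process one keystroke and return the text to present."""
        if key == "n":    # next item (clamped at the end of the list)
            self.index = min(self.index + 1, len(self.items) - 1)
        elif key == "p":  # previous item (clamped at the start)
            self.index = max(self.index - 1, 0)
        elif key == "s":  # select (activate) the current item
            return f"Selected: {self.items[self.index]}"
        return f"Item {self.index + 1} of {len(self.items)}: {self.items[self.index]}"


nav = LinearNavigator(["Open", "Save", "Quit"])
print(nav.handle_key("n"))  # Item 2 of 3: Save
print(nav.handle_key("s"))  # Selected: Save
```

Because no command depends on timing, pointer paths, or simultaneous actions, this style avoids the task-inherent barriers the paper identifies.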
The authoritative version of the paper is available from the publisher.
You can also read the freely available authors’ version from this web site. This version is more accessible than the publisher’s version.