Publication: Loop interface

Published today is a defensive disclosure of an invention of mine, which I call the “Loop Interface.” Such an interface may be particularly useful to people who are blind or who have low vision and need speech output in order to use technology. The abstract from the disclosure:

Disclosed is a user interface method that allows for quick, efficient exploration and usage of user interfaces by people who cannot see. The interface method is a circular direct-selection interface where many user interface elements are arranged around the edges of a touch panel or touch screen. This makes a circular list of elements that users can touch to get speech output and then activate when they find the desired element. This arrangement can be explored easily but also allows users to efficiently and directly select elements on familiar interfaces.

The problem is that current methods of using screen readers are inadequate. People may use swiping gestures or keystrokes to navigate from element to element, which can be inefficient. Alternatively, on some touchscreen devices, a screen reader might allow a person to tap or drag a finger around the screen to explore it via text-to-speech. This exploration can be challenging because onscreen elements may be anywhere on the screen, in any arrangement, and might be missed when dragging a finger across the screen.

A visual depiction of the Loop Interface in use.
Using the Loop Interface by tracing the edges of the screen with a finger and listening to the text-to-speech feedback.

With the Loop Interface, a person can simply trace the edges of a screen with a finger and get to all elements on the screen. When they become familiar with a particular application, then they can directly touch the desired element without having to navigate through all the intervening options. More details about the Loop Interface are available in the disclosure [PDF]. In the future, I plan to do user testing on a basic version of the Loop Interface.
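To make the edge-tracing idea concrete, here is a minimal sketch in Python of one way the edge-to-element mapping could work. This is my own illustration, not the published design: the class, its methods, and the `speak` callback are all assumptions, with `speak` standing in for a real text-to-speech engine.

```python
# A minimal sketch (an illustration, not the published design) of the
# edge-to-element mapping behind the Loop Interface: elements are laid out
# clockwise around the screen border, a touch is snapped to the nearest
# edge, and the unrolled border distance picks the element to announce.
# The `speak` callback is a stand-in for a real text-to-speech engine.

class LoopInterface:
    def __init__(self, elements, width, height, speak):
        self.elements = elements              # labels, in clockwise order
        self.w, self.h = width, height
        self.speak = speak
        self.perimeter = 2 * (width + height)
        self.focused = None                   # index of the focused element

    def _edge_distance(self, x, y):
        """Clockwise distance along the border, starting at the top-left
        corner, to the border point nearest the touch at (x, y)."""
        d_top, d_right = y, self.w - x
        d_bottom, d_left = self.h - y, x
        nearest = min(d_top, d_right, d_bottom, d_left)
        if nearest == d_top:
            return x
        if nearest == d_right:
            return self.w + y
        if nearest == d_bottom:
            return self.w + self.h + (self.w - x)
        return 2 * self.w + self.h + (self.h - y)

    def touch(self, x, y):
        """Finger touched or dragged to (x, y): focus and speak the element
        owning that stretch of the border (spoken once per focus change)."""
        frac = self._edge_distance(x, y) / self.perimeter
        index = min(int(frac * len(self.elements)), len(self.elements) - 1)
        if index != self.focused:
            self.focused = index
            self.speak(self.elements[index])
        return self.elements[index]

# Tracing the border of a 400x300 screen with four elements on the loop:
spoken = []
ui = LoopInterface(["Back", "Home", "Search", "Settings"], 400, 300, spoken.append)
ui.touch(10, 0)      # top edge, near the top-left corner -> "Back"
ui.touch(399, 150)   # right edge -> "Home"
ui.touch(200, 299)   # bottom edge -> "Search"
```

Unrolling the border into a single clockwise distance keeps the element lookup to simple arithmetic, so dragging around the loop announces each element in order.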

The defensive disclosure was published today in the IP.com prior art database (IP.com database entry, requires subscription for full content). The publication was reviewed and sponsored by the Linux Defenders program. It will soon be published in freely-available form on the Publications page of the Defensive Publications web site.

Patent: Slide-to-read

I was recently notified of a newly awarded US patent on which I worked. The patent, entitled “Method for increased accessibility to a human machine interface,” was awarded about a month ago. We commonly call this accessibility feature “Slide-to-read.” The patent abstract reads:

A method is defined for providing an individual increased accessibility to a touch screen displaying first and second elements. The individual initially engages the touch screen with a contact point at a first location. The contact point is dragged across the touch screen into engagement with the first element and the first element is highlighted in response thereto. Thereafter, the individual may drag the contact point across the touch screen from the first element into engagement with the second element whereby the second element is highlighted on the touch screen and the highlight is removed from the first element. Audible announcements may accompany the contacting of the first or second elements with the contact point.

Figure from patent.
How slide-to-read works (from US Patent 8,760,421).

Patent language is kind of a language of its own, so here is what it really means…

In layman’s terms

Slide-to-read allows a person to get speech output for anything on the screen over which he or she drags a finger. This is very useful for people who have low vision or who cannot read some or all of the words on the display.

To use slide-to-read, the user touches the screen and slides a finger into any other element. That element, and any subsequent element the user touches while dragging, is read out loud with text-to-speech. A visible focus highlight follows the user’s finger so that the user knows which item is being read. When the finger is lifted, the focus highlight stays on the last-touched item, so that the user can easily find that element to activate it. If the user simply taps the screen, or does not drag into a different element, the system behaves as normal (e.g., activates a touchscreen button).
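This behavior can be modeled as a small touch state machine. The sketch below is my own reading of the description above, not the patented implementation; the `hit_test`, `speak`, and `activate` callbacks are assumed stand-ins for real hit testing, text-to-speech output, and normal UI activation.

```python
# A rough sketch of the slide-to-read behavior (an illustration, not the
# patented implementation). `hit_test` maps a point to an element name or
# None; `speak` and `activate` stand in for text-to-speech and normal
# touchscreen button activation.

class SlideToRead:
    def __init__(self, hit_test, speak, activate):
        self.hit_test = hit_test
        self.speak = speak
        self.activate = activate
        self.start = None        # element under the initial touch
        self.highlight = None    # element currently highlighted
        self.slid = False        # has the finger entered another element?

    def touch_down(self, x, y):
        self.start = self.hit_test(x, y)
        self.highlight = None
        self.slid = False

    def touch_move(self, x, y):
        element = self.hit_test(x, y)
        if element is not None and element != self.start:
            self.slid = True     # reading mode begins on entering another element
        if self.slid and element is not None and element != self.highlight:
            self.highlight = element   # the focus highlight follows the finger...
            self.speak(element)        # ...and each newly touched element is read

    def touch_up(self, x, y):
        # After a slide, the highlight stays on the last-touched element so
        # the user can find it again. A plain tap (no slide into another
        # element) behaves as a normal touchscreen press.
        if not self.slid:
            target = self.hit_test(x, y)
            if target is not None:
                self.activate(target)

# Two buttons side by side; slide from "OK" into "Cancel", then tap "OK".
spoken, activated = [], []
rects = {"OK": (0, 0, 100, 50), "Cancel": (100, 0, 200, 50)}
def hit_test(x, y):
    for name, (x0, y0, x1, y1) in rects.items():
        if x0 <= x < x1 and y0 <= y < y1:
            return name
    return None

s2r = SlideToRead(hit_test, spoken.append, activated.append)
s2r.touch_down(10, 10)                    # touch "OK"
s2r.touch_move(150, 10)                   # slide into "Cancel": read aloud
s2r.touch_up(150, 10)                     # lift: highlight stays, no activation
s2r.touch_down(50, 10); s2r.touch_up(50, 10)   # plain tap: activates "OK"
```

Keeping the tap path untouched until the finger crosses into a second element is what lets slide-to-read coexist with ordinary button presses on the same screen.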

This invention is a new feature for EZ Access, which is a set of cross-disability access techniques that can be used on electronic devices, particularly for information and transaction kiosks. The EZ Access techniques were created by the Trace Research & Development Center at the University of Wisconsin-Madison. While intended for use with EZ Access, the invention does not require EZ Access. Note that slide-to-read is a particular solution for people who cannot read text on a touchscreen. As a single-finger sliding gesture, it may conflict with panning and scrolling gestures that are used in some touchscreen systems. This is not typically an issue with kiosk systems where screens are designed for simplicity and ease of use.

Reference

Jordan, J. B., Vanderheiden, G. C., & Kelso, D. P. (2014). Method for increased accessibility to a human machine interface. US Patent 8,760,421.

Publication: Modality Independent Interaction

A couple of months ago, I presented a paper at the HCI International conference. The paper outlines our modality-independent interaction framework. The abstract:

People with disabilities often have difficulty using ICT and similar technologies because of a mismatch between their needs and the requirements of the user interface. The wide range of both user abilities and accessibility guidelines makes it difficult for interface designers who need a simpler accessibility framework that still works across disabilities. A modality-independent interaction framework is proposed to address this problem. We define modality-independent input as non-time-dependent encoded input (such as that from a keyboard) and modality-independent output as electronic text. These formats can be translated to provide a wide range of input and output forms as well as support for assistive technologies. We identify three interface styles that support modality-independent input/output: command line, single-keystroke command, and linear navigation interfaces. Tasks that depend on time, complex-path, simultaneity, or experience are identified as providing barriers to full cross-disability accessibility. The framework is posited as a simpler approach to wide cross-disability accessibility.

This paper makes several contributions. It outlines user interface styles, along with input and output forms, that are nearly universally accessible (in this case, accessible to people with mild to severe sensory and/or physical impairments). The paper also describes several task-inherent barriers to such nearly-universal accessibility, where changes to the interface alone cannot make a task accessible.

The authoritative version of the paper is available from the publisher.

You can also read the freely available authors’ version from this web site. This version is more accessible than the publisher’s version.

Publication: Virtual jog wheel

People who use screen readers frequently need to navigate between the different elements on a screen. As the user navigates to an element, information about that element is provided in speech output. On touchscreen devices, screen-reading software typically uses swiping/flicking gestures for navigation. For example, when using VoiceOver on iOS by Apple, one swipes to the right to go to the next element and to the left to go to the previous element (see more details about iOS VoiceOver). The problem is that these gestures are relatively slow and fatiguing. Using buttons to control navigation may be easier, but is still relatively slow.

I recently invented and described a software technique for quickly and easily navigating a user interface on a touchscreen device. The user activates a virtual jog wheel mode and then makes arc or circular gestures on the virtual wheel. With just a single circular gesture, the user can navigate through a number of elements to reach the one they want.

Virtual jog wheel in use.
The virtual jog wheel in use. The person makes a generally circular gesture on the virtual jog wheel to navigate through the elements on the screen.
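The core of such a technique can be sketched as angle tracking: accumulate the finger’s rotation around the wheel’s center and emit one navigation step per fixed angular increment. The following is my own minimal Python illustration, not the disclosed implementation; the `navigate` callback and the 30-degree step size are assumptions.

```python
# A minimal illustration (not the disclosed implementation) of a virtual
# jog wheel: track the finger's angle around the wheel's center, and emit
# one navigation step for every 30 degrees of accumulated rotation.
# navigate(+1) means "next element"; navigate(-1) means "previous element".

import math

class JogWheel:
    STEP = math.radians(30)   # assumed step size: one element per 30 degrees

    def __init__(self, center, navigate):
        self.cx, self.cy = center
        self.navigate = navigate
        self.last_angle = None
        self.accum = 0.0

    def _angle(self, x, y):
        # Screen coordinates have y growing downward, so a clockwise
        # gesture on screen increases this angle.
        return math.atan2(y - self.cy, x - self.cx)

    def touch_down(self, x, y):
        self.last_angle = self._angle(x, y)
        self.accum = 0.0

    def touch_move(self, x, y):
        angle = self._angle(x, y)
        delta = angle - self.last_angle
        # Unwrap so crossing the +/-180 degree boundary is not a full turn.
        if delta > math.pi:
            delta -= 2 * math.pi
        elif delta < -math.pi:
            delta += 2 * math.pi
        self.last_angle = angle
        self.accum += delta
        while self.accum >= self.STEP:      # clockwise: next element
            self.accum -= self.STEP
            self.navigate(+1)
        while self.accum <= -self.STEP:     # counterclockwise: previous
            self.accum += self.STEP
            self.navigate(-1)

# Simulate just over half a clockwise turn around a wheel at the origin:
steps = []
wheel = JogWheel((0.0, 0.0), steps.append)
wheel.touch_down(1.0, 0.0)
for deg in range(10, 191, 10):
    a = math.radians(deg)
    wheel.touch_move(math.cos(a), math.sin(a))
# 190 degrees of rotation at 30 degrees per step yields six "next" steps.
```

Tying the step size to rotation rather than to absolute finger position means the gesture can be made at any radius and continued indefinitely by circling, which is what makes navigating through many elements with one gesture possible.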

More details are available in the virtual jog wheel disclosure [PDF]. This invention is free for anyone to use, but those using it will still need to ensure that the technology does not infringe any other patents.

The defensive disclosure was published today in the IP.com prior art database (IP.com database entry, requires subscription for full content). The publication was reviewed and sponsored by the Linux Defenders program through the Defensive Publications web site.