Publication: Loop interface

Published today is a defensive disclosure of an invention of mine, which I call the “Loop Interface.” Such an interface may be particularly useful to people who are blind or who have low vision and need speech output in order to use technology. The abstract from the disclosure:

Disclosed is a user interface method that allows for quick, efficient exploration and usage of user interfaces by people who cannot see. The interface method is a circular direct-selection interface where many user interface elements are arranged around the edges of a touch panel or touch screen. This makes a circular list of elements that users can touch to get speech output and then activate when they find the desired element. This arrangement can be explored easily but also allows users to efficiently and directly select elements on familiar interfaces.

The problem is that current methods of using screen readers are inadequate. People may use swiping gestures or keystrokes to navigate from element to element, which can be inefficient. Alternatively, on some touchscreen devices, a screen reader might allow a person to tap or drag a finger around the screen to explore it via text-to-speech. This exploration can be challenging because onscreen elements may be anywhere on the screen in any arrangement and can easily be missed as the finger drags across it.

Using the Loop Interface by tracing the edges of the screen with a finger and listening to the text-to-speech feedback.

With the Loop Interface, a person can simply trace the edges of a screen with a finger and get to all elements on the screen. When they become familiar with a particular application, then they can directly touch the desired element without having to navigate through all the intervening options. More details about the Loop Interface are available in the disclosure [PDF]. In the future, I plan to do user testing on a basic version of the Loop Interface.
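To make the idea concrete, here is a minimal browser-based sketch of the edge-mapping step. It is only an illustration, not the interface described in the disclosure: the loopItems list and the perimeterIndex helper are hypothetical, the page is assumed to be full screen, and output uses the Web Speech API.

```javascript
// Minimal sketch: map a touch near the screen edge to a position in a
// circular list of elements and announce the element under the finger.
const loopItems = ["Home", "Search", "Messages", "Settings"]; // hypothetical labels

function perimeterIndex(x, y, w, h, count) {
  // Distances from the touch point to each edge of the screen.
  const dTop = y, dRight = w - x, dBottom = h - y, dLeft = x;
  const nearest = Math.min(dTop, dRight, dBottom, dLeft);
  let distance; // position measured clockwise along the perimeter from the top-left corner
  if (nearest === dTop)         distance = x;
  else if (nearest === dRight)  distance = w + y;
  else if (nearest === dBottom) distance = w + h + (w - x);
  else                          distance = 2 * w + h + (h - y);
  const fraction = distance / (2 * (w + h));
  return Math.min(count - 1, Math.floor(fraction * count));
}

let lastIndex = -1;
window.addEventListener("touchmove", (event) => {
  const touch = event.touches[0];
  const index = perimeterIndex(touch.clientX, touch.clientY,
                               window.innerWidth, window.innerHeight,
                               loopItems.length);
  if (index !== lastIndex) {            // announce only when the item changes
    lastIndex = index;
    speechSynthesis.cancel();           // interrupt the previous announcement
    speechSynthesis.speak(new SpeechSynthesisUtterance(loopItems[index]));
  }
});
```

A full implementation would also need an activation gesture and a way to handle elements that are not on the loop, which the disclosure covers in more detail.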

The defensive disclosure was published today in the IP.com prior art database (IP.com database entry, requires subscription for full content). The publication was reviewed and sponsored by the Linux Defenders program. It will soon be published in freely-available form on the Publications page of the Defensive Publications web site.


Visualize touches on a screen with JavaScript

I am writing software for some experiments that I am planning. In the experiments, I am going to have participants use several interfaces and compare their performance on the different interfaces. I am using an iPod Touch as the device and capturing the screen using QuickTime on a Mac.

It is disorienting to review the video and see the interface change but not see the user’s fingers. To solve this problem, I wrote some JavaScript code that displays a little number at each touch point that quickly fades away. This makes video captures much easier to follow.
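For illustration, here is a minimal sketch of the same idea (not the actual TouchShow code, which is linked below): a touchstart handler drops a small numbered label at each touch point and lets it fade out with a CSS transition.

```javascript
// Sketch: show a numbered, fading marker wherever the screen is touched.
document.addEventListener("touchstart", (event) => {
  for (const touch of event.changedTouches) {
    const label = document.createElement("div");
    label.textContent = String(touch.identifier % 10);   // small per-touch number
    Object.assign(label.style, {
      position: "fixed",
      left: `${touch.clientX - 15}px`,
      top: `${touch.clientY - 15}px`,
      width: "30px",
      height: "30px",
      borderRadius: "50%",
      background: "rgba(255, 0, 0, 0.6)",
      color: "white",
      textAlign: "center",
      lineHeight: "30px",
      pointerEvents: "none",                 // never intercept real touches
      transition: "opacity 0.8s ease-out",
    });
    document.body.appendChild(label);
    label.getBoundingClientRect();           // force layout so the transition runs
    label.style.opacity = "0";               // start the fade
    setTimeout(() => label.remove(), 800);   // clean up afterwards
  }
}, { passive: true });
```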

The TouchShow code is available on GitHub and is open source.

(Image by flickr user the Comic Shop was cropped and used under CC BY 2.0 license.)

The pressure-sensitive screen on the Apple Watch

A rendering of the Apple Watch by Justin14 [CC-BY-SA-4.0], via Wikimedia Commons.
Joseph Flaherty at Wired published an article today suggesting that the Apple Watch's pressure-sensitive touchscreen might be a bigger deal than the Apple Watch itself. While I'll let the market determine the ultimate success of the product, I thought that Flaherty raised an interesting point with regard to the touchscreen user interface. Flaherty states that being able to sense a press with the so-called "flexible Retina display" would allow for context-specific menus, much like the powerful "right-click" interaction in many computer interfaces. This could allow for a potential decluttering of the touchscreen interface, because the user has a way of calling up menus that does not involve tapping on menu buttons or toolbars.

This decluttering could be useful for many, but it may also be problematic for some. Context menus hide functionality, which some users may then be unable to find. In a poorly designed interface, people may have to "hard-press" many types of onscreen objects and elements to see which ones have the right hidden menu. In a well-designed interface, users would know just by looking (or by convention) which onscreen elements have associated context menus.

An Accessibility Use of Pressure-sensitive Touchscreens

Having a pressure-sensitive touchscreen also allows for a very interesting and useful method of access for people who are blind or who have low vision: pressing firmly to activate an onscreen element. Currently, iOS and Android have built-in screen reading software. One way of using these screen readers is an "exploration mode," where a user touches elements on the screen to have them read through text-to-speech. Because the system cannot differentiate between a touch that means "what is this?" and a touch that means "activate this," a separate gesture is needed to activate the desired element once it is found. With VoiceOver on iOS, for example, the person double-taps anywhere on the screen to activate the item that was last touched. This second gesture can be somewhat awkward and involves extra motions.

With a pressure-sensing touchscreen, this "activation gesture" would be much more intuitive. Instead of having to double-tap the screen, press an Enter button, draw a circle on the screen, or make some other complex gesture, the user could simply press harder. In the touchscreen exploration mode, a light or regular touch would mean "what am I touching?" and a firm touch would mean "activate this." This would be very powerful and intuitive for people who cannot see touchscreens well enough to read them.
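As a rough illustration of the idea, the sketch below uses the web Touch.force property (reported by some pressure-sensing devices) to treat a light touch as exploration and a firm press as activation. The threshold value and the use of an element's text content as the spoken label are my own assumptions for the example.

```javascript
// Sketch of force-based "explore vs. activate" on a pressure-sensing touchscreen.
const ACTIVATE_FORCE = 0.75;   // arbitrary threshold; a real system would tune this
let lastTarget = null;
let activated = false;

document.addEventListener("touchstart", () => { lastTarget = null; activated = false; });

document.addEventListener("touchmove", (event) => {
  const touch = event.touches[0];
  const target = document.elementFromPoint(touch.clientX, touch.clientY);
  if (!target) return;

  if ((touch.force || 0) >= ACTIVATE_FORCE) {
    if (!activated) {                  // firm press: activate the element once
      activated = true;
      target.click();
    }
  } else {
    activated = false;
    if (target !== lastTarget) {       // light touch: announce the element under the finger
      lastTarget = target;
      speechSynthesis.cancel();
      speechSynthesis.speak(new SpeechSynthesisUtterance(target.textContent || "unlabeled"));
    }
  }
});
```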

Patent: Slide-to-read

I was recently notified of a newly awarded US patent on which I worked. The patent, entitled "Method for increased accessibility to a human machine interface," was awarded about a month ago. We commonly call this accessibility feature "Slide-to-read." The patent abstract reads:

A method is defined for providing an individual increased accessibility to a touch screen displaying first and second elements. The individual initially engages the touch screen with a contact point at a first location. The contact point is dragged across the touch screen into engagement with the first element and the first element is highlighted in response thereto. Thereafter, the individual may drag the contact point across the touch screen from the first element into engagement with the second element whereby the second element is highlighted on the touch screen and the highlight is removed from the first element. Audible announcements may accompany the contacting of the first or second elements with the contact point.

How slide-to-read works (from US Patent 8,760,421).

Patent language is kind of a language of its own, so here is what it really means…

In layman’s terms

Slide-to-read allows a person to get speech output for anything on the screen over which he or she drags a finger. This is very useful for people who have low vision or who cannot read some or all of the words on the display.

To use slide-to-read, the user must touch the screen and slide a finger into any other element on the screen. That other element and any subsequent element that the user touches while dragging are read out loud with text-to-speech. A visible focus highlight follows the user’s finger so that users know what item is being read. When the finger is lifted, the focus highlight stays on the last-touched item, so that the user can easily find that element to activate it. If the user taps the screen or does not drag into a different element, the system works as normal (e.g., activates a touchscreen button).
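For readers who want a concrete picture, here is a rough browser-based sketch of that behavior. It is not the EZ Access implementation described in the patent; the highlight style and the use of text content as the spoken label are assumptions made for the example.

```javascript
// Sketch of slide-to-read: highlight and speak whatever the finger slides onto,
// and leave the highlight on the last item when the finger lifts.
let startTarget = null;   // element first touched
let highlighted = null;   // element currently highlighted

function setHighlight(element) {
  if (highlighted) highlighted.style.outline = "";
  highlighted = element;
  if (highlighted) highlighted.style.outline = "4px solid yellow";
}

document.addEventListener("touchstart", (event) => {
  const touch = event.touches[0];
  startTarget = document.elementFromPoint(touch.clientX, touch.clientY);
});

document.addEventListener("touchmove", (event) => {
  const touch = event.touches[0];
  const target = document.elementFromPoint(touch.clientX, touch.clientY);
  // Only start reading once the finger has slid into a different element;
  // a simple tap (no slide) keeps its normal behavior.
  if (target && target !== startTarget && target !== highlighted) {
    setHighlight(target);
    speechSynthesis.cancel();
    speechSynthesis.speak(new SpeechSynthesisUtterance(target.textContent || "unlabeled"));
  }
});
// On touchend the highlight is intentionally left in place so the user can
// find and activate the last-read element.
```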

This invention is a new feature for EZ Access, which is a set of cross-disability access techniques that can be used on electronic devices, particularly for information and transaction kiosks. The EZ Access techniques were created by the Trace Research & Development Center at the University of Wisconsin-Madison. While intended for use with EZ Access, the invention does not require EZ Access. Note that slide-to-read is a particular solution for people who cannot read text on a touchscreen. As a single-finger sliding gesture, it may conflict with panning and scrolling gestures that are used in some touchscreen systems. This is not typically an issue with kiosk systems where screens are designed for simplicity and ease of use.

Reference

Jordan, J. B., Vanderheiden, G. C., & Kelso, D. P. (2014). Method for increased accessibility to a human machine interface. US Patent 8,760,421.

Publication: Virtual jog wheel

People who use screen readers frequently need to navigate between different elements on a screen. As the user navigates to an element, information about that element is provided in speech output. On touchscreen devices, screen reading software typically uses swiping/flicking gestures for navigation. For example, when using VoiceOver on iOS by Apple, one swipes to the right to go to the next element and to the left to go to the previous element (see more details about iOS VoiceOver). The problem is that these gestures are relatively slow and fatiguing. Using buttons to control navigation may be easier, but it is still relatively slow.

I recently invented and described a software technique for quickly and easily navigating a user interface on a touchscreen device. The user activates a virtual jog wheel mode and then makes arc or circular gestures on the virtual jog wheel. With just a single circular gesture, the user can navigate through a number of elements to get to the one they want.
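As a rough sketch of how such a gesture might be interpreted (the details here are my own assumptions, not the published design), the code below accumulates the angle swept by the finger around the center of a jog wheel region and steps through a hypothetical items list, announcing each element, every time a fixed "detent" angle is crossed.

```javascript
// Sketch of a virtual jog wheel gesture: one detent of arc = one navigation step.
const items = ["Back", "Address bar", "Search", "Bookmarks", "Tabs"]; // hypothetical
const DETENT = Math.PI / 8;        // radians of arc per navigation step
let index = 0;
let lastAngle = null;
let accumulated = 0;

document.addEventListener("touchmove", (event) => {
  const touch = event.touches[0];
  const centerX = window.innerWidth / 2, centerY = window.innerHeight / 2;
  const angle = Math.atan2(touch.clientY - centerY, touch.clientX - centerX);
  if (lastAngle !== null) {
    let delta = angle - lastAngle;
    // Keep the change in the range (-PI, PI] so wrapping past +/-PI is handled.
    if (delta > Math.PI) delta -= 2 * Math.PI;
    if (delta < -Math.PI) delta += 2 * Math.PI;
    accumulated += delta;
    while (Math.abs(accumulated) >= DETENT) {    // step once per detent of rotation
      index = (index + Math.sign(accumulated) + items.length) % items.length;
      accumulated -= Math.sign(accumulated) * DETENT;
      speechSynthesis.cancel();
      speechSynthesis.speak(new SpeechSynthesisUtterance(items[index]));
    }
  }
  lastAngle = angle;
});
document.addEventListener("touchend", () => { lastAngle = null; });
```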

The virtual jog wheel in use. The person makes a generally circular gesture on the virtual jog wheel to navigate through the elements on the screen.

More details are available in the virtual jog wheel disclosure [PDF]. This invention is free for anyone to use, but those using it will still need to ensure that the technology does not infringe any other patents.

The defensive disclosure was published today in the IP.com prior art database (IP.com database entry, requires subscription for full content). The publication was reviewed and sponsored by the Linux Defenders program through the Defensive Publications web site.

Tactus tactile touchscreen prototype

Touchscreens can be particularly challenging to use for people who cannot see. All they feel is a featureless surface. Accessibility can be provided through speech using a number of different strategies, but finding onscreen buttons through audio is slower than finding tactile buttons.

An edge view of a tactile layer that could be used as part of a touchscreen. (From Tactus Technologies.)

Tactus Technologies has been developing a touchscreen with areas that can be raised and lowered from the surface. Sumi Das for CNET posted a video and hands-on impressions a couple of days ago. The technology uses microfluidics. For the buttons to rise, liquid in the system is put under pressure; the higher the pressure, the taller and firmer the keys. The current version only offers one arrangement of buttons.

Currently, the technology is limited in that it’s a fixed single array. You wouldn’t be able to use the Tactus keyboard in both portrait and landscape mode, for example. But the goal is to make the third generation of the product dynamic. “The vision that we had was not just to have a keyboard or a button technology, but really to make a fully dynamic surface,” says cofounder Micah Yairi, “So you can envision the entire surface being able to raise and lower depending on what the application is that’s driving it.”

Accessibility

The current generation only offers a single array of raised buttons that would work in only one orientation. This would be helpful for allowing users to find the keyboard tactilely, for example, but it would offer no support for other applications. The user cannot tactilely find buttons or other controls for other applications on a smartphone or tablet.

Future versions may fix this limitation. Ideally, the microfluidic tactile "pixels" will be small enough that various tactile shapes can be made. To make seamless shapes, the tactile pixels should not have significant gaps between them, but this may be technically difficult to do. With gaps between the tactile pixels, the device could still be useful for tactile exploration, but the layout would likely be constrained to a grid of bumps. An onscreen button might have a single bump associated with it (multiple separate bumps should not be used to designate a single large key, because it would feel like several separate controls). The bumps would allow people to locate controls, but without different shapes, it would be more difficult to identify the controls by touch alone.

From the video, it also looks like the current technology is not well suited to braille. Spots small enough for braille would not be sharp or tall enough for effective reading. (For more information, Tiresias.org has a table of braille dimensions under different standards.)