About J. Bern Jordan

Bern is a Ph.D. candidate and researcher in accessibility, usability, user interfaces, and technology who is interested in extending usability to all people, including people with disabilities and people who are aging. He currently works at the Trace R&D Center in Biomedical Engineering at the University of Wisconsin-Madison.

Red strobe light.

Section 508: Flashing provision

One provision in the Section 508/255 refresh is intended to prevent flashing in patterns that may cause seizures in some people with photosensitive epilepsy. It is in the chapter that applies to hardware. The proposed provision states:

405.1 General. Where ICT emits lights in flashes, there shall be no more than three flashes in any one-second period.
EXCEPTION: Flashes that do not exceed the general flash and red flash thresholds defined in WCAG 2.0 (incorporated by reference in Chapter 1) are not required to conform to 405.

The main problem with this provision is that it is too strict. The WCAG 2.0 provisions do not provide any guidance for hardware with flashing lights that do not appear on a screen (e.g., indicator LEDs). Strictly interpreted, the provision would apply to all indicator lights, including those that flicker to indicate network, data, or sound activity. Such indicator lights would not be allowed to flash more than three times in any one-second period.


It is important to keep this provision; otherwise, there would be no limits on the frequency or flashing pattern of lights that might be used to alert or warn users, which could trigger seizures. However, it would be good to add an exception for some types of indicator lights.

Categories of Hardware Lights

Lights on hardware may fall into different categories, including:

  1. Illumination
  2. Warning & Alerting
  3. Indicators of status
  4. Indicators of real-time activity

The flashing pattern and frequency of lights in the first three categories can be arbitrarily chosen, so they can easily meet 405.1 as it is currently worded. Even very bright lights that might be used to get somebody’s attention in warning or alerting circumstances can be flashed in patterns that meet the provision.
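
For example, a minimal sketch (assuming a hypothetical setLight() function that drives the hardware light) shows how a warning light can still be blinked to get attention while staying within the three-flashes-per-second limit:

// Sketch only: blink a warning light at 2 Hz (two flashes per second),
// which stays under the three-flashes-per-second limit of provision 405.1.
// setLight(on) is an assumed function that turns the hardware light on or off.
let lightOn = false;

function blinkWarningLight() {
  lightOn = !lightOn;
  setLight(lightOn);
}

// Toggling every 250 ms gives a full on/off cycle every 500 ms,
// i.e., two flashes per second.
setInterval(blinkWarningLight, 250);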

The problem is with the fourth category: lights that indicate real-time activity, such as network-activity lights, hard-drive I/O lights, and the LED VU meters and clipping indicators on some sound equipment. Their flashing may occur more than three times in a one-second period, but the pattern cannot be changed without changing the entire nature of the lights and their purpose.

In typical products, the lights that indicate real-time activity are neither large nor very bright. Large (in visual area) real-time activity lights would be the worst case, because the brightness threshold that can trigger seizures is relatively low for large areas, so such “large” indicators should not be exempted from the provision. Because of the danger to people with photosensitive epilepsy, and the difficulty of calculating seizure thresholds in these circumstances, manufacturers should avoid large, flashing indicators of real-time activity.

Point light sources can be very bright, to the point of causing disability glare that makes a flash appear much larger than the actual point source. From a practical standpoint, however, indicators of real-time activity are not typically that bright, because bright flicker is annoying and potentially painful for all users. Given this practical consideration, it seems reasonable to make a narrow exception to provision 405.1 for point-source lights that indicate real-time activity.

Recommendation

The recommended provision adds Exception 2.

405.1 General. Where ICT emits lights in flashes, there shall be no more than three flashes in any one-second period.
EXCEPTION 1: Flashes that do not exceed the general flash and red flash thresholds defined in WCAG 2.0 (incorporated by reference in Chapter 1) are not required to conform to 405.
EXCEPTION 2: Flickering of point light sources (where a point light source subtends no more than 1 degree of visual angle at the typical viewing distance) that indicate real-time activity (such as sound, hard drive, or network activity) is not required to conform to 405.
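
For reference, whether a light qualifies as a point source under this exception follows from simple geometry: a light of diameter d viewed from distance D subtends a visual angle of 2·arctan(d / 2D). The sketch below is my own illustration of that check, not part of the proposed provision, and the example values are assumptions:

// Does an indicator light subtend no more than 1 degree of visual angle
// at the typical viewing distance? (Illustration only; the example values
// are assumptions, not numbers from the provision.)
function visualAngleDegrees(diameterMm, viewingDistanceMm) {
  const radians = 2 * Math.atan(diameterMm / (2 * viewingDistanceMm));
  return radians * (180 / Math.PI);
}

// Example: a 5 mm LED viewed from 500 mm subtends about 0.57 degrees,
// so it would count as a point source under the proposed exception.
const angle = visualAngleDegrees(5, 500);
console.log(angle.toFixed(2), angle <= 1 ? 'point source' : 'not a point source');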

Publication: Loop interface

Published today is a defensive disclosure of an invention of mine, which I call the “Loop Interface.” Such an interface may be particularly useful to people who are blind or who have low vision and need speech output in order to use technology. The abstract from the disclosure:

Disclosed is a user interface method that allows for quick, efficient exploration and usage of user interfaces by people who cannot see. The interface method is a circular direct-selection interface where many user interface elements are arranged around the edges of a touch panel or touch screen. This makes a circular list of elements that users can touch to get speech output and then activate when they find the desired element. This arrangement can be explored easily but also allows users to efficiently and directly select elements on familiar interfaces.

The problem is that current methods of using screen readers are inadequate. People may use swiping gestures or keystrokes to navigate from element to element, which can be inefficient. Alternatively, on some touchscreen devices, a screen reader might allow a person to tap or drag a finger around the screen to explore the screen via text to speech. This exploration can be challenging because onscreen elements may be anywhere on the screen in any arrangement and might be missed when dragging a finger on the screen.

A visual depiction of the Loop Interface in use.
Using the Loop Interface by tracing the edges of the screen with a finger and listening to the text-to-speech feedback.

With the Loop Interface, a person can simply trace the edges of the screen with a finger and reach every element on the screen. Once they become familiar with a particular application, they can directly touch the desired element without having to navigate through all the intervening options. More details about the Loop Interface are available in the disclosure [PDF]. In the future, I plan to do user testing on a basic version of the Loop Interface.
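
To illustrate the basic idea (this is my own simplified sketch, not the method as specified in the disclosure), a touch near the screen edge can be mapped to one of the elements arranged clockwise around the perimeter:

// Simplified sketch of the Loop Interface idea (my own illustration, not the
// method as specified in the disclosure): N elements are arranged in a
// circular list around the screen edge, and a touch is mapped to one of them
// by its angle from the screen center.
function loopElementIndex(touchX, touchY, screenWidth, screenHeight, numElements) {
  const dx = touchX - screenWidth / 2;
  const dy = touchY - screenHeight / 2;
  // Angle measured clockwise from the top of the screen, in [0, 2*PI).
  const angle = (Math.atan2(dx, -dy) + 2 * Math.PI) % (2 * Math.PI);
  return Math.floor((angle / (2 * Math.PI)) * numElements);
}

function speak(text) {
  // Minimal text-to-speech via the Web Speech API.
  window.speechSynthesis.speak(new SpeechSynthesisUtterance(text));
}

// As the finger traces the screen edge, announce the element under it;
// activation would use whatever gesture the full interface defines.
function onLoopTouch(touch, elementLabels) {
  const index = loopElementIndex(touch.clientX, touch.clientY,
    window.innerWidth, window.innerHeight, elementLabels.length);
  speak(elementLabels[index]);
}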

The defensive disclosure was published today in the IP.com prior art database (IP.com database entry, requires subscription for full content). The publication was reviewed and sponsored by the Linux Defenders program. It will soon be published in freely available form on the Publications page of the Defensive Publications web site.

Ishihara plate for testing color-blindness.

Section 508: Color blindness

As rumored, the Access Board released the Section 508 refresh Notice of Proposed Rulemaking (NPRM) today. I have only taken a quick look at it so far and will have more comments in the future. For now, I want to make one observation that pertains to two proposed provisions:

302.3 Without Perception of Color. Where a visual mode of operation is provided, ICT shall provide at least one mode of operation that does not require user perception of color. (From the Functional Performance Criteria chapter of Appendix A.)

407.7 Color. Color coding shall not be used as the only means of conveying information, indicating an action, prompting a response, or distinguishing a visual element. (From the Hardware chapter of Appendix A.)

These are important provisions and good for a sizable population (an estimated 5-10% of males have some form of color blindness). However, as currently worded, the provisions could be satisfied by a non-visual means of perceiving or distinguishing a colored element. At an extreme, this could force a user with color blindness to use a text-to-speech mode intended for users who are blind. Should a person with color blindness have to put on headphones and use a screen reader in order to access an interface?

The fix is simple in both cases: add the word “visual” to the provisions.

  • 302.3 Without Perception of Color. Where a visual mode of operation is provided, ICT shall provide at least one visual mode of operation that does not require user perception of color. (From the Functional Performance Criteria chapter of Appendix A.)
  • 407.7 Color. Color coding shall not be used as the only visual means of conveying information, indicating an action, prompting a response, or distinguishing a visual element. (From the Hardware chapter of Appendix A.)

What is Section 508?

Section 508 is legislation for accessible electronic and information technology. It applies to all U.S. Federal agencies when they develop, procure, purchase, and maintain such technology. The legislation is an example of “pull” legislation: the federal government is a big purchaser of technology, so companies that want to sell to the government must make an effort to include accessibility in their products. This should increase the accessibility of products available to everyone else as well. While the legislation specifically applies to federal agencies, state and local governments and other organizations may also require Section 508 compliance when procuring technology.

The current Section 508 standards are showing their age. The U.S. Access Board (an independent agency of the government devoted to accessibility for people with disabilities) has been undertaking a multi-year refresh of Section 508, with reports and drafts published in 2008, 2010, and 2011. Recent rumors point to a notice of proposed rulemaking being published on February 18, 2015.

Once it is published, there will probably be a 90-day public comment period.

As part of my work at the Trace Center, I have been working on the Section 508 standards during the refresh process: analyzing the provisions and suggesting changes and rationale. I was the main author of the Trace Center’s comments on the 2011 proposal. I plan to analyze the next version when it comes out (hopefully soon). I will post some of my thoughts and analysis of the proposed standards here on my blog.

A touch.

Visualize touches on a screen with JavaScript

I am writing some software for experiments that I am planning. In the experiments, participants will use several interfaces, and I will compare their performance across the different interfaces. I am using an iPod Touch as the device and capturing the screen using QuickTime on a Mac.

It is disorienting to review the video and see the interface change but not see the user’s fingers. To solve this problem, I wrote some JavaScript code that displays a little number at each touch point that quickly fades away. This makes video captures much easier to follow.
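
The idea is simple enough to sketch in a few lines. This is a simplified illustration of the approach, not the actual TouchShow code:

// Simplified illustration (not the actual TouchShow code): show a small
// numbered marker at each new touch point and quickly fade it away.
document.addEventListener('touchstart', function (event) {
  for (let i = 0; i < event.changedTouches.length; i++) {
    const touch = event.changedTouches[i];
    const marker = document.createElement('div');
    marker.textContent = String(i + 1);
    marker.style.cssText =
      'position:fixed; width:24px; height:24px; border-radius:12px;' +
      'left:' + (touch.clientX - 12) + 'px; top:' + (touch.clientY - 12) + 'px;' +
      'background:rgba(255,0,0,0.6); color:white; text-align:center;' +
      'line-height:24px; pointer-events:none; transition:opacity 0.5s;';
    document.body.appendChild(marker);
    setTimeout(function () { marker.style.opacity = '0'; }, 50);  // start the fade
    setTimeout(function () { marker.remove(); }, 600);            // clean up the marker
  }
}, { passive: true });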

The TouchShow code is available on GitHub and is open source.

(Image by Flickr user the Comic Shop, cropped and used under a CC BY 2.0 license.)

Just a click: The new reCAPTCHA API

Just announced today by Vinay Shet is the new reCAPTCHA API, which is being called the “No CAPTCHA reCAPTCHA.” With the new system, many people might just click a checkbox rather than having to read and type distorted text. As just one part of determining whether a visitor is a bot or a human, the movement of the mouse cursor within the reCAPTCHA area will be tracked to see if it has human-like motion.

A CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) is a test that is used to filter out bots and other automated agents to prevent spam and other malicious activity. There are many different types of CAPTCHAs, but one of the most common today is a text field and an image of scrambled text. The person reads the scrambled text and types it into the text field.

An example CAPTCHA.
An example of a scrambled-text CAPTCHA (Public domain image from Wikipedia Commons.)

CAPTCHAs can present difficulty to people with disabilities. This new click-a-checkbox CAPTCHA is also problematic for some users, especially if it were the only CAPTCHA method. Some people access websites using only a keyboard or keyboard emulator because of physical disabilities. People who are blind cannot see where to click with the mouse. People using Mouse Keys will not look like regular mouse users. This problem can be mitigated by offering an alternative CAPTCHA in a different modality for those users.

I have not tried any of the new reCAPTCHAs in the wild yet. I do know that Google has implemented escalating challenges with the new system: a user with an “odd” checkbox click and telemetry will see an additional CAPTCHA challenge (such as an audio CAPTCHA or picking out pictures of kittens). I would hope that there is a way to either bypass the click or at least “click” the checkbox using a keyboard, screen reader, or other assistive technology in order to get to the next CAPTCHA step.

The pressure-sensitive screen on the Apple Watch

Apple Watch.
A rendering of the Apple Watch by Justin14 [CC-BY-SA-4.0], via Wikimedia Commons.
Joseph Flaherty at Wired published an article today suggesting that the Apple Watch’s pressure-sensitive touchscreen might be a bigger deal than the Apple Watch itself. While I’ll let the market determine the ultimate success of the product, I thought Flaherty raised an interesting point about the touchscreen user interface. Flaherty states that being able to sense a press with the so-called “flexible Retina display” would allow for context-specific menus, much like the powerful “right-click” interaction in many computer interfaces. This could declutter the touchscreen interface because the user would have a way of calling up menus that does not involve tapping on menu buttons or toolbars.

This decluttering could be useful for many people but may also be problematic for some. Context menus hide functionality, which some users may then be unable to find. In a poorly designed interface, people may have to “hard-press” many kinds of onscreen objects and elements to discover which ones have the right hidden menu. In a well-designed interface, users would know just by looking (or by convention) which onscreen elements have associated context menus.

An Accessibility Use of Pressure-sensitive Touchscreens

A pressure-sensitive touchscreen also allows for a very interesting and useful method of access for people who are blind or who have low vision: pressing firmly to activate an onscreen element. Currently, iOS and Android have built-in screen-reading software. One way of using these screen readers is an “exploration mode,” where a user touches elements on the screen to have them read aloud through text-to-speech. Because the system cannot differentiate between a touch that means “what is this?” and a touch that means “activate this,” a separate gesture is needed to activate the desired element once it is found. With VoiceOver on iOS, for example, the person double-taps anywhere on the screen to activate the item that was last touched. This second gesture can be somewhat awkward and involves extra motions.

With a pressure-sensing touchscreen, this activation gesture could be much more intuitive. Instead of having to double-tap the screen, press an Enter button, draw a circle on the screen, or perform some other extra gesture, the user could simply press harder. In the touchscreen exploration mode, a light or regular touch would mean “what am I touching?” and a firm press would mean “activate this.” This would be very powerful and intuitive for people who cannot see touchscreens well enough to read them.
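
On the web, something similar can be sketched with the force value that some force-sensitive touch hardware reports through the Touch API. This is a rough illustration of the interaction concept, not an Apple Watch API, and the 0.75 threshold is an assumption:

// Rough illustration of "light touch explores, firm press activates" using
// the Web Touch API's force property (only reported on force-sensitive
// hardware). The threshold value below is an assumption.
const FIRM_PRESS_THRESHOLD = 0.75; // force is reported in the range 0 to 1

let exploredElement = null;

function speak(text) {
  // Minimal text-to-speech via the Web Speech API.
  window.speechSynthesis.cancel();
  window.speechSynthesis.speak(new SpeechSynthesisUtterance(text || ''));
}

function handleTouch(event) {
  const touch = event.changedTouches[0];
  const element = document.elementFromPoint(touch.clientX, touch.clientY);
  if (!element) return;

  if (touch.force >= FIRM_PRESS_THRESHOLD) {
    element.click();             // firm press means "activate this"
  } else if (element !== exploredElement) {
    exploredElement = element;
    speak(element.textContent);  // light touch means "what am I touching?"
  }
}

document.addEventListener('touchstart', handleTouch, { passive: true });
document.addEventListener('touchmove', handleTouch, { passive: true });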

“Happy” by Pharrell in American Sign Language

The CM7 Deaf Film Camp recently posted a well-done American Sign Language (ASL) version of Pharrell’s hit song “Happy.” ASL is its own language, quite different from spoken or written English. With its movements and facial expressions, ASL can be very artful and dance-like. Here is the music video, which will hopefully put a smile on your face:

I know some ASL (with a current vocabulary of around 200 words or concepts), so I can tell you that the video is not just a verbatim translation of the lyrics. The folks at the CM7 Deaf Film Camp added a lot to the music video with some side stories going on. Thanks to Billboard for posting this yesterday and bringing it to the attention of many.

Patent: Slide-to-read

I was recently notified of a newly awarded US patent on which I worked. The patent, entitled “Method for increased accessibility to a human machine interface,” was awarded about a month ago. We commonly call this accessibility feature “Slide-to-read.” The patent abstract reads:

A method is defined for providing an individual increased accessibility to a touch screen displaying first and second elements. The individual initially engages the touch screen with a contact point at a first location. The contact point is dragged across the touch screen into engagement with the first element and the first element is highlighted in response thereto. Thereafter, the individual may drag the contact point across the touch screen from the first element into engagement with the second element whereby the second element is highlighted on the touch screen and the highlight is removed from the first element. Audible announcements may accompany the contacting of the first or second elements with the contact point.

Figure from patent.
How slide-to-read works (from US Patent 8,760,421).

Patent language is kind of a language of its own, so here is what it really means…

In layman’s terms

Slide-to-read allows a person to get speech output for anything on the screen over which he or she drags a finger. This is very useful for people who have low vision or who cannot read some or all of the words on the display.

To use slide-to-read, the user must touch the screen and slide a finger into any other element on the screen. That other element and any subsequent element that the user touches while dragging are read out loud with text-to-speech. A visible focus highlight follows the user’s finger so that users know what item is being read. When the finger is lifted, the focus highlight stays on the last-touched item, so that the user can easily find that element to activate it. If the user taps the screen or does not drag into a different element, the system works as normal (e.g., activates a touchscreen button).
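
On a web-based kiosk, the interaction might be sketched as follows. This is a simplified illustration of the slide-to-read behavior described above, not the patented EZ Access implementation, and the highlight class name is made up:

// Simplified illustration of slide-to-read: dragging a finger highlights and
// speaks the element under it; the highlight stays on the last-touched
// element when the finger is lifted. A plain tap behaves normally.
let startElement = null;   // element first touched
let highlighted = null;    // element currently highlighted

function setHighlight(element) {
  if (highlighted) highlighted.classList.remove('str-highlight');
  highlighted = element;
  highlighted.classList.add('str-highlight'); // 'str-highlight' is an assumed CSS class
}

function speak(text) {
  window.speechSynthesis.cancel(); // interrupt any earlier announcement
  window.speechSynthesis.speak(new SpeechSynthesisUtterance(text || ''));
}

document.addEventListener('touchstart', function (event) {
  const touch = event.changedTouches[0];
  startElement = document.elementFromPoint(touch.clientX, touch.clientY);
}, { passive: true });

document.addEventListener('touchmove', function (event) {
  const touch = event.changedTouches[0];
  const element = document.elementFromPoint(touch.clientX, touch.clientY);
  if (!element || element === highlighted) return;
  // Reading mode only starts once the finger slides into a different element.
  if (startElement && element === startElement) return;
  startElement = null;
  setHighlight(element);
  speak(element.textContent);
}, { passive: true });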

This invention is a new feature for EZ Access, which is a set of cross-disability access techniques that can be used on electronic devices, particularly for information and transaction kiosks. The EZ Access techniques were created by the Trace Research & Development Center at the University of Wisconsin-Madison. While intended for use with EZ Access, the invention does not require EZ Access. Note that slide-to-read is a particular solution for people who cannot read text on a touchscreen. As a single-finger sliding gesture, it may conflict with panning and scrolling gestures that are used in some touchscreen systems. This is not typically an issue with kiosk systems where screens are designed for simplicity and ease of use.

Reference

Jordan, J. B., Vanderheiden, G. C., & Kelso, D. P. (2014). Method for increased accessibility to a human machine interface. US Patent 8,760,421.

Add-on tactile buttons for mobile devices

Just launched today as an Indiegogo campaign (and over a quarter of the way to its goal as I write this) is an interesting product idea called Dimple from dimple.io. Dimple is basically a sticker with four buttons that can be added to many Android 4.0+ devices that support Near Field Communication (NFC) and the Dimple app. It requires no batteries because it uses energy radiated by the NFC reader.

Photo of Dimple button prototype.
DIMPLE buttons on the back of a phone. (Photo from DIMPLE.io press release kit.)

This prototype product is not made specifically for people with disabilities; judging from the response on Indiegogo, many people are interested in having customizable, programmable tactile buttons on their smartphones and tablets.

Dimple has potential accessibility uses. Such buttons would be useful for people who cannot see, because the buttons can be tactilely discerned and located. These users could associate some of their most commonly used functions with the buttons and access them directly, instead of having to navigate through menus with the TalkBack screen reader.

I do not know the exact capabilities of Dimple, NFC, or Android TalkBack, but it would be very useful if the Dimple buttons could be used to control the TalkBack screen reader or to generate virtual keyboard keystrokes. If that is possible, dedicated buttons could be used to navigate through applications and make selections, which would be less fatiguing than making many swiping gestures.