Web Accessibility & Keyboard Access

Last week, Marieke McCloskey posted a web accessibility article on the Nielsen Norman Group site: “Keyboard-Only Navigation for Improved Accessibility.” The article is very good, and I want to comment briefly on its main points:

  1. Test using the site with the keyboard. This is good advice, because you often cannot find some usability issues unless you actually try things yourself.
  2. Make sure that the keyboard focus is visually obvious. This is important because otherwise people who are using the keyboard (perhaps because they cannot physically use a mouse) cannot tell what they are doing. Browsers show a visible highlight by default, but web site designers often override the styling to achieve a particular look and feel. I find myself irritated by this problem from time to time (when I don’t feel like switching from the keyboard to the mouse) and cannot tell where tabbing went. Do not override the highlight unless you provide one that is at least as obvious as the default when interactive elements are focused (a sketch illustrating this point and the next follows the list).
  3. Make sure all interactive elements are navigable. If an element is not navigable, then the person will not be able to tab to or otherwise focus on the element in order to interact with it. The element just sits there, taunting them. Native HTML elements are navigable by default, but scripted interactive elements based on generic HTML elements (such as a span or div) should have a tabindex or some other mechanism to make them navigable.
  4. Consider a link to skip navigation. On sites with navigation links and sidebars, a user may have to tab through many things before reaching the main content. While it is important to be able to get to the navigation, it is very inefficient to have to tab through material that repeats on most pages. The article suggests using “Skip Navigation” links; I make a few additional comments on this suggestion below.
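
To make points 2 and 3 concrete, here is a minimal sketch (the selectors, colors, and handlers are my own illustration, not from the article). The CSS restores an obvious focus indicator, and the tabindex puts a scripted control into the tab order:

<style>
  /* Give focused interactive elements a clearly visible outline. */
  a:focus, button:focus, input:focus, [tabindex]:focus {
    outline: 3px solid #005fcc;
    outline-offset: 2px;
  }
</style>

<!-- A scripted control built on a generic element: tabindex="0" makes it
     keyboard-navigable, and role="button" gives it semantics. -->
<span role="button" tabindex="0"
      onclick="alert('Sent!')"
      onkeydown="if (event.key === 'Enter' || event.key === ' ') this.click()">
  Send
</span>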

Alternative to Skip Links

The Web Content Accessibility Guidelines (WCAG) 2.0 are accessibility guidelines for web sites (and they also provide good guidance for documents and software). Success Criterion 2.4.1 of WCAG says:

Bypass Blocks
2.4.1 A mechanism is available to bypass blocks of content that are repeated on multiple Web pages. (Level A)

Skip links are one way to do this, but they are somewhat old fashioned: they clutter up the page and the navigation order. A better technique, which now has fairly good support in modern browsers and screen readers, is to use landmark roles, which add semantics to a web page.
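
For comparison, a typical skip link is simply an anchor that comes first in the tab order and jumps to the main content; the class name and hiding technique below are one common approach, not the article’s own code:

<style>
  /* Keep the link offscreen until it receives keyboard focus. */
  .skip-link { position: absolute; left: -10000px; }
  .skip-link:focus { left: 0; }
</style>

<a class="skip-link" href="#content">Skip to main content</a>
<!-- ... navigation and sidebars ... -->
<main id="content">...</main>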

Landmark roles were introduced with Accessible Rich Internet Applications (ARIA). To use ARIA landmark roles:

  1. Ensure that the page has all information in containing sections. These sections might be <div>s or HTML5 sectioning tags such as <section>, <main>, <nav>, <article>, <aside>, <header>, <footer>, and <address>.
  2. Make sure there is only one <main> section.
  3. Add the role attribute to each containing section, with a value from the list of ARIA landmark roles and other useful roles:
    • application
    • banner
    • complementary
    • contentinfo
    • form
    • main
    • navigation
    • search
    • article (non-landmark role)
  4. Make sure that each containing section tag is paired with the matching landmark role. ARIA makes finer distinctions than HTML5 does. So you would have sections like:
    • <main role="main">...</main>
    • <nav role="navigation">...</nav>
    • <aside role="complementary">...</aside>
    • <article role="article">...</article>
    • <header role="banner">...</header>
    • <footer role="contentinfo">...</footer>
    • <address role="contentinfo">...</address>

    and potentially a number of <div>s or <section>s with other landmark roles.
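
Putting the steps above together, a minimal sketch of such a page skeleton might look like this (the content placeholders are only illustrative):

<header role="banner">Site title and masthead</header>
<nav role="navigation">Site navigation links</nav>
<main role="main">
  <article role="article">The primary content of the page</article>
</main>
<aside role="complementary">Sidebar material</aside>
<footer role="contentinfo">Copyright and contact information</footer>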

A screen reader user can navigate from landmark to landmark on a web page. This allows them to more easily navigate a page and go back and forth to the content in which they are interested.

Web-based handwriting recognition

Today, Google announced handwriting recognition for Gmail and for documents on Google Drive. I am writing at least some of this post with the feature.

"Hello world" written in Google's handwriting recognition system.
The handwriting input box in Gmail and documents with Google Drive.

Some of my first reactions while trying Google’s handwriting system in English:

  • The digital “ink” lags significantly behind my touches or mouse drags. This makes my handwriting look very poor.
  • Despite the atrocious handwriting that I produce with the system, the recognition is very good.
  • The recognition engine will fix spelling mistakes.
  • Because both the ink and my fingertip are relatively thick, I write in large print in the input area.
  • Sometimes I run out of space to write a word, especially because my writing is so large. This means I have to go back and make corrections.
  • It might not be able to read a signature, but it fares reasonably well with carefully written cursive.
  • I am amazed at how well this works in a browser using web applications.
  • Typing on a real keyboard is much faster for me.

For most people who use computers regularly, handwriting is slower than typing. Even so, there are some users who may find handwriting to be a good or even preferable text entry option, especially those writing in languages which are not so easy to type.

Publication: Modality Independent Interaction

A couple of months ago, I presented a paper at the HCI International conference. The paper outlines the modality-independent interaction framework. The abstract:

People with disabilities often have difficulty using ICT and similar technologies because of a mismatch between their needs and the requirements of the user interface. The wide range of both user abilities and accessibility guidelines makes it difficult for interface designers, who need a simpler accessibility framework that still works across disabilities. A modality-independent interaction framework is proposed to address this problem. We define modality-independent input as non-time-dependent encoded input (such as that from a keyboard) and modality-independent output as electronic text. These formats can be translated to provide a wide range of input and output forms as well as support for assistive technologies. We identify three interface styles that support modality-independent input/output: command line, single-keystroke command, and linear navigation interfaces. Tasks that depend on time, complex paths, simultaneity, or experience are identified as presenting barriers to full cross-disability accessibility. The framework is posited as a simpler approach to wide cross-disability accessibility.

This paper makes several contributions. It outlines user interfaces, along with input and output methods, that are nearly universally accessible (in this case, accessible to people with mild to severe sensory and/or physical impairments). The paper also describes several task-inherent barriers to such nearly-universal accessibility, that is, tasks that interface-only changes cannot make fully accessible.

The authoritative version of the paper is available from the publisher.

You can also read the freely available authors’ version from this web site. This version is more accessible than the publisher’s version.

The Bradley Tactile watch

The Bradley timepiece project is on Kickstarter until August 15, 2013. It is a wristwatch with tactile features that allow it to be used by people who are blind. It has an elegant design and has greatly surpassed its fundraising goals.

The Bradley Timepiece tactile wristwatch. (Image credit: Kickstarter project by Eone)

Before this design, there were two general approaches to accessible wristwatches: talking watches and tactile watches like the one pictured below. Talking watches may be hard to hear in noisy environments, and traditional tactile watches require users to lift the cover that protects the delicate hands of the watch face.

A tactile watch with the cover open. (Image available without copyright from Wikimedia Commons)

The Bradley does away with hands; instead, two ball bearings are magnetically attached to the movement, which makes the watch more durable. The ball bearing on the face denotes the minutes, while the one that travels around the periphery of the watch denotes the hours.

The Bradley timepiece is a good example of a design which incorporates Universal Design or Design for All principles. Its popularity on Kickstarter is more a testament to the attractiveness of the design than strictly to its utility for people who are blind. Such a watch can be used by people with and without visual impairments. The titanium watch has a very elegant appearance. My only worry is how much the titanium would scratch with everyday use, but that is not an accessibility topic!

See more information at the Eone Timepieces website. The site has a very elegant design, but unfortunately and ironically, the site is not very accessible in its current form (and is thus not an example of Universal Design).

Publication: Virtual jog wheel

People who use screen readers frequently need to navigate between different elements on a screen. As the user navigates to an element, information about that element is provided as speech output. On touchscreen devices, screen reading software typically uses swiping/flicking gestures for navigation. For example, when using VoiceOver on iOS by Apple, one swipes to the right to go to the next element and to the left to go to the previous element (see more details about iOS VoiceOver). The problem is that these gestures are relatively slow and fatiguing. Using buttons to control navigation may be easier, but is still relatively slow.

I recently invented and described a software technique for quickly and easily navigating a user interface on a touchscreen device. The user activates a virtual jog wheel mode and then makes arc or circular gestures on it. By just making a single circular gesture, the user can navigate through a number of elements to get to the one they want.

The virtual jog wheel in use. The person makes a generally circular gesture on the virtual jog wheel to navigate through the elements on the screen.
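
To illustrate the general mechanics (this sketch is my own illustration in HTML and JavaScript, not the implementation from the disclosure), the core of such a wheel is converting accumulated arc angle into discrete navigation steps:

<div id="wheel" style="width: 200px; height: 200px; border-radius: 50%; background: #ddd;"></div>
<script>
  var wheel = document.getElementById('wheel');
  var STEP_DEGREES = 30;  // one navigation step per 30 degrees of arc (arbitrary)
  var lastAngle = null;
  var accumulated = 0;

  // Angle of the pointer around the center of the wheel, in degrees.
  function angleAt(event) {
    var rect = wheel.getBoundingClientRect();
    var cx = rect.left + rect.width / 2;
    var cy = rect.top + rect.height / 2;
    return Math.atan2(event.clientY - cy, event.clientX - cx) * 180 / Math.PI;
  }

  wheel.addEventListener('pointerdown', function (e) {
    lastAngle = angleAt(e);
    accumulated = 0;
    wheel.setPointerCapture(e.pointerId);
  });

  wheel.addEventListener('pointermove', function (e) {
    if (lastAngle === null) return;
    var delta = angleAt(e) - lastAngle;
    // Normalize so that crossing the +/-180 degree boundary does not jump.
    if (delta > 180) delta -= 360;
    if (delta < -180) delta += 360;
    accumulated += delta;
    lastAngle = angleAt(e);
    // Emit one navigation step for each full increment of arc traveled.
    while (accumulated >= STEP_DEGREES) { accumulated -= STEP_DEGREES; navigate(1); }
    while (accumulated <= -STEP_DEGREES) { accumulated += STEP_DEGREES; navigate(-1); }
  });

  wheel.addEventListener('pointerup', function () { lastAngle = null; });

  // Hypothetical hook: in a real screen reader, this would move the
  // reading cursor to the next or previous element and speak it.
  function navigate(direction) {
    console.log(direction > 0 ? 'next element' : 'previous element');
  }
</script>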

More details are available in the virtual jog wheel disclosure [PDF]. This invention is free for anyone to use, but those using it will still need to ensure that the technology does not infringe any other patents.

The defensive disclosure was published today in the IP.com prior art database (IP.com database entry, requires subscription for full content). The publication was reviewed and sponsored by the Linux Defenders program through the Defensive Publications web site.

Low vision with the old and new iOS icons

Last week, Apple announced the largest user interface overhaul to iOS since the iPhone was first introduced (gallery and basic description of iOS 7 features). A number of recent posts and articles compare the new look with the old one and with that of competitors.

In this post, I am going to take a different approach. I will not talk about the aesthetics, but instead about the accessibility for people with vision impairments. Flickr user nielsboey posted an excellent image comparing the icons for native applications in iOS 6 and iOS 7. With his permission, I have copied it here and made a blurred version to “simulate” low vision. Click on the links for a larger version.

Comparison of iOS 6 and iOS 7 native application icons.
Blurred comparison of iOS 6 and iOS 7 native application icons.

Accessibility

To see some of the usability barriers one might face with low vision, blurring the display is a good test. Note that blurring is not a true simulation of low vision: what a person with low vision sees varies depending upon the cause of the impairment.
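
If the interface being checked is a web page, one quick way to apply such a blur is a CSS filter; a minimal sketch (the 3px radius is an arbitrary choice):

<style>
  /* Blur the whole page as a rough check of legibility under blurred vision. */
  body {
    filter: blur(3px);
  }
</style>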

There are other visual tests one can try. I used a Colour Blindness Simulator to look for egregious problems for people with color blindness. There was nothing particularly problematic with either the old or new icons.
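
A related quick check, which again is not a true simulation of color blindness, is to strip all hue with a CSS grayscale filter and see whether the icons still look distinct (the img selector here is only illustrative):

<style>
  /* Remove hue entirely to check that icons do not rely on color alone. */
  img {
    filter: grayscale(1);
  }
</style>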

Some improved icons

Blurred Camera icons.

Blurred Safari browser icons.

Blurred Contacts icons.

Several of the new icons (on the right) are improvements for people with low or blurred vision.

For example:

  • The Camera app icon now takes the shape of a traditional camera rather than just an image of the circular lens found on iOS devices. The camera shape is more easily recognized as a camera than the dark circle was.
  • The Contacts app icon has improved in iOS 7 because the silhouette of the man is much larger, even though the contrast is somewhat lower than in the previous version.
  • The new Safari mobile browser icon is a little easier to find simply because it is a blue circle that looks like a compass rather than just a rounded rectangular blue field with small points or arrows.

Some old icons were better

Blurred compass icons.
Blurred calendar icons.

There are also examples where the new icons (right) are worse for people with low vision.

  • The Compass app icon used to be a light-colored compass face on a darker background. The new version is a line drawing of a compass on a dark background. The lines on the icon are so thin that they are difficult to see when blurred. The icon looks like a black square with rounded corners.
  • The new calendar icon uses a font with a much thinner weight or stroke width. While one can perhaps make it out while blurred, the old calendar icon, with a much heavier font, is much easier to read.

There is a current design trend toward fonts with very light weights and thin strokes, and toward icons that are simple line drawings with very thin lines. This trend has become possible because screens on mobile devices now have very high resolutions: thin lines and very light fonts are much easier to display at high resolutions, while at lower resolutions such thin lines would have to become larger and more block-like. Regardless of resolution, the thin strokes of current design trends are more difficult to see and read with low vision.

“Invisible” buttons as patented by Apple

Patently Apple reported yesterday on a recently granted patent for invisible backside buttons and slider controls. The abstract of the patent (U.S. Patent No. 8,436,816):

An input device includes a deflection based capacitive sensing input. Deflection of a metal frame of the input device causes a change in capacitance that is used to control a function of an electrical device. The input appears selectively visible because it is made of the same material as the housing it is contained in and because it is selectively backlit through tiny holes.

A figure from the patent showing example media controls in the palm-rest area of a laptop computer. (Credit: Apple & U.S. Patent & Trademark Office)

Apple has been using indicator lights that shine through micro-perforations in several devices, including its Bluetooth keyboard. The result is a very elegant design unmarred by visible through-holes for LEDs. This patent describes a way to also allow user input, through touch controls that are selectively illuminated.

Apple Bluetooth keyboard with the micro-perforation indicator light off and on. (Credit: Flickr user DeclanTM; CC BY 2.0)

Accessibility

The buttons are certainly inaccessible to everyone while in their invisible state; that is the point: the buttons disappear into the case of the device when they are not needed.

When active or lit up, however, the buttons remain “invisible”, or imperceptible, to those who cannot see or who are in circumstances where they cannot look at the device (say, while driving). This would make a non-personal device with such technology inaccessible to these users. On a personal device, the buttons may still be usable if the user can find them by other landmarks, such as the distance from a corner, an edge, or another feature of the device.

As another example of tactilely imperceptible controls, I have a laptop with a capacitive, light-up touch area containing non-tactile media and volume controls. When they are not lit up, I cannot easily find the control I want; I often end up touching several of them while hunting for, say, the mute button. Some texture would make it much easier to find the control I need.

Accessible Graphical Buttons in HTML

Picture of a push button, by Flickr user chrisinplymouth.

For a recent project, I have been working in HTML again, trying to write semantic and accessible markup for interfaces. One common type of user interface control is a graphical button, which has no text for a label.

My first thought was simply to include alt text on the image that serves as the button, like so:

<button type="button">
<img src="email.png" alt="E-mail" />
</button>

This is short and semantically correct. In my first test it worked just fine: the screen reader I was using read the alt text of the image. It turns out, though, that VoiceOver on iOS 6 did not do so well with this code; all it read was “Button.”

A Better Way

After doing a little research, I found the recent article “Making Accessible Icon Buttons” by Nicholas C. Zakas. He suggests using the aria-label attribute on the button, which provides more reliable accessibility across different screen reader setups. So this is better code to use for graphical buttons:

<button type="button" aria-label="E-mail">
<img src="email.png" alt="E-mail" />
</button>

This works fine in VoiceOver. I was a little wary about double-labeling the element in case both pieces of text were read, but no screen reader I have tested reads “E-mail” twice from the alt text and the aria-label.
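
If the double labeling is still a concern, one variation I have seen elsewhere (not from Zakas’s article) is to take the image out of the accessibility tree entirely, since the button itself now carries the label:

<button type="button" aria-label="E-mail">
<img src="email.png" alt="" aria-hidden="true" />
</button>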

Protection of Intellectual Property

Image representing intellectual property law.

Some software innovations have a big impact and others a small one. Either way, if another organization were able to secure a patent on the idea or something very similar, you might be barred from using your own innovation. This is particularly bad for innovations in the field of accessibility, which ideally should be available to everyone who needs the technology. To protect an innovation, there are three general options:

  • Obtain patent protection. This option takes several years and is expensive, with attorney fees alone likely costing over $5,000 for even a very basic patent (Quinn, 2011).
  • Keep the innovation secret (i.e., as a trade secret). If your innovation is relatively easy to reverse engineer or figure out, this offers no protection, and any organization that figures it out could potentially patent it.
  • Defensively publish information about it. In patent law, anything published as prior art prevents someone else from obtaining a similar patent. Defensive publication allows you to keep using your innovation and allows others to benefit from the work. Even large companies and organizations use the defensive publication strategy to protect intellectual property that may not be worth the expense of full patent protection.

More about Defensive Publications

Defensive publications or technical disclosures are a relatively inexpensive way to prevent others from obtaining a patent that covers your innovation. Any type of publication (including publication in a conference proceeding, a journal, or even an online posting) should theoretically prevent someone else from obtaining a similar patent. However, more obscure publications may be missed by patent examiners because of the relatively small amount of time they have to search for prior art (Edge, 2012). Once a patent has been awarded, it is much harder to break because the burden of proof has shifted.

There are several options for more visible publication strategies to establish prior art:

  • Potentially free: http://www.defensivepublications.org/ – Linux Defenders has a program to publicly disclose software to help protect open source software from patent litigation. Submissions are reviewed and edited at no cost to the contributor. If the program decides to publish the submission, it will pay the fees for inclusion in the prior art database kept by IP.com.
  • $225 per submission: http://ip.com/ – IP.com keeps a prior art database that is purported to be regularly checked by patent examiners. For a fee, IP.com will include your submission in this database.
  • $120 (€90) per page: http://www.researchdisclosure.com/ – Research Disclosure keeps a prior art database and publication that has PCT Minimum Documentation status (WIPO, 2005), which means that (by international treaty) patent examiners are to search this database. The company charges a fee per page for publication.

References

  1. Quinn, G. (2011, Jan. 28) The Cost of Obtaining a Patent in the US. http://www.ipwatchdog.com/2011/01/28/the-cost-of-obtaining-patent/id=14668/
  2. Edge, J. (2012, July 4) Akademy: Defensive publications. https://lwn.net/Articles/505030/
  3. WIPO (2005) PCT Minimum Documentation: Comprehensive Review. Geneva: World Intellectual Property Organization. http://www.wipo.int/edocs/mdocs/pct/en/pct_mia_11/pct_mia_11_6.pdf

Note: A version of this article is also posted on the GPII wiki.

Tactus tactile touchscreen prototype

Touchscreens can be particularly challenging to use for people who cannot see. All they feel is a featureless surface. Accessibility can be provided through speech using a number of different strategies, but finding onscreen buttons through audio is slower than finding tactile buttons.

Photograph of tactile bumps.

An edge view of a tactile layer that could be used as part of a touchscreen. (From Tactus Technologies.)

Tactus Technologies has been developing a touchscreen with areas that can rise from and lower into the surface. Sumi Das of CNET posted a video and hands-on impressions a couple of days ago. The technology uses microfluidics: for the buttons to rise up, liquid in the system is put under pressure, and the higher the pressure, the taller and firmer the keys. The current version offers only one arrangement of buttons.

Currently, the technology is limited in that it’s a fixed single array. You wouldn’t be able to use the Tactus keyboard in both portrait and landscape mode, for example. But the goal is to make the third generation of the product dynamic. “The vision that we had was not just to have a keyboard or a button technology, but really to make a fully dynamic surface,” says cofounder Micah Yairi, “So you can envision the entire surface being able to raise and lower depending on what the application is that’s driving it.”

Accessibility

The current generation offers only a single array of raised buttons that works in a single orientation. This would be helpful for letting users find the keyboard tactilely, for example, but it offers no support for other applications: the user cannot tactilely find buttons or other controls for other applications on a smartphone or tablet.

Future versions may fix this limitation. Ideally, the microfluidic tactile “pixels” will be small enough that various tactile shapes can be made. To make seamless shapes, the tactile pixels should not have significant gaps between them, but this may be technically difficult to achieve. With gaps between the tactile pixels, the device could still be useful for tactile exploration, but the layout would likely be constrained to a grid of bumps. An onscreen button might have a single bump associated with it (multiple separate bumps should not be used to designate a single large key, because that would feel like several separate controls). The bumps would allow people to locate controls, but without distinct shapes it would be harder to identify the controls by touch alone.

From the video, it also looks as though the current technology is not well suited to braille: spots small enough for braille would not be sharp or tall enough for effective reading. (For more information, Tiresias.org has a table of braille dimensions under different standards.)