The pressure-sensitive screen on the Apple Watch

A rendering of the Apple Watch by Justin14 [CC-BY-SA-4.0], via Wikimedia Commons.
Joseph Flaherty at Wired published an article today suggesting that the Apple Watch’s pressure-sensitive touchscreen might be a bigger deal than the Apple Watch itself. While I’ll let the market determine the ultimate success of the product, I thought that Flaherty raised an interesting point with regard to the touchscreen user interface. Flaherty states that being able to sense a press with the so-called “flexible Retina display” would allow for context-specific menus, much like the powerful “right-click” interaction in many computer interfaces. This could declutter the touchscreen interface because the user would have a way of calling up menus that does not involve tapping on menu buttons or toolbars.

This decluttering could be useful for many, but it may also be problematic for some. Context menus hide functionality, which some users may then be unable to find. In a poorly designed interface, people might have to “hard-press” many kinds of onscreen objects and elements to discover which ones have hidden menus. In a well-designed interface, users would know just by looking (or by convention) which onscreen elements have associated context menus.
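To make the interaction concrete, here is a minimal sketch of how an app might distinguish a light tap from a firm press, assuming an iOS device whose screen reports touch pressure (as later iPhones do via UITouch’s force property). The 0.5 threshold and the showContextMenu/performDefaultAction handlers are placeholders of my own, not part of any real API.

```swift
import UIKit

// A sketch of a pressure-aware view: a light touch performs the ordinary tap
// action, while a firm press summons a context menu instead. The threshold
// and both handler methods are illustrative placeholders.
class PressAwareView: UIView {

    private let firmPressThreshold: CGFloat = 0.5
    private var firmPressTriggered = false

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        super.touchesMoved(touches, with: event)
        guard traitCollection.forceTouchCapability == .available,
              let touch = touches.first, !firmPressTriggered else { return }

        // Normalize the reported force to 0...1 so the threshold is device-independent.
        let normalized = touch.force / touch.maximumPossibleForce
        if normalized >= firmPressThreshold {
            firmPressTriggered = true
            showContextMenu(at: touch.location(in: self))
        }
    }

    override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
        super.touchesEnded(touches, with: event)
        if !firmPressTriggered, let touch = touches.first {
            // No firm press occurred, so treat this as an ordinary tap.
            performDefaultAction(at: touch.location(in: self))
        }
        firmPressTriggered = false
    }

    // Placeholder hooks; a real app would present its own menu or action here.
    func showContextMenu(at point: CGPoint) { print("context menu at \(point)") }
    func performDefaultAction(at point: CGPoint) { print("tap at \(point)") }
}
```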

An Accessibility Use of Pressure-sensitive Touchscreens

Having a pressure-sensitive touchscreen also allows for a very interesting and useful method of access for people who are blind or have low vision: pressing firmly to activate an onscreen element. Currently, iOS and Android have built-in screen-reading software. One way of using these screen readers is an “exploration mode,” in which a user touches elements on the screen to have them read aloud via text-to-speech. Because the system cannot differentiate between a touch that means “what is this?” and a touch that means “activate this,” a separate gesture is needed to activate the desired element once it has been found. With VoiceOver on iOS, for example, the person double-taps anywhere on the screen to activate the item that was last touched. This second gesture can be somewhat awkward and involves extra motions.

With a pressure-sensing touchscreen, this activation gesture could be much more intuitive. Instead of having to double-tap the screen, press an Enter button, draw a circle on the screen, or make any other extra gesture, the user could simply press harder. In the touchscreen exploration mode, a light or regular touch would mean “what am I touching?” and a firm press would mean “activate this.” This would be very powerful and intuitive for people who cannot see touchscreens well enough to read them.
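As a rough illustration of that two-level interaction, the sketch below speaks the label of whatever is under the finger on a light touch and activates it on a firm press. This only models the interaction logic; a real screen reader such as VoiceOver runs at the system level, and the ExplorableItem type, the hit-testing, and the 0.5 threshold are hypothetical choices of mine.

```swift
import UIKit
import AVFoundation

// A hypothetical onscreen element: something with a spoken label, a location,
// and an action to run when it is activated.
struct ExplorableItem {
    let label: String
    let frame: CGRect
    let activate: () -> Void
}

// Exploration-mode sketch: light touch = "what am I touching?" (speak it),
// firm press = "activate this."
class ExplorationView: UIView {

    var items: [ExplorableItem] = []
    private let synthesizer = AVSpeechSynthesizer()
    private let firmPressThreshold: CGFloat = 0.5
    private var lastSpokenLabel: String?
    private var didActivate = false

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        super.touchesBegan(touches, with: event)
        didActivate = false
        handle(touches)
    }

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        super.touchesMoved(touches, with: event)
        handle(touches)
    }

    private func handle(_ touches: Set<UITouch>) {
        guard !didActivate,
              let touch = touches.first,
              let item = items.first(where: { $0.frame.contains(touch.location(in: self)) })
        else { return }

        // Normalize force to 0...1; screens without pressure sensing report 0.
        let normalized = touch.maximumPossibleForce > 0
            ? touch.force / touch.maximumPossibleForce
            : 0

        if normalized >= firmPressThreshold {
            didActivate = true
            item.activate()                      // firm press: "activate this"
        } else if item.label != lastSpokenLabel {
            lastSpokenLabel = item.label         // light touch: "what am I touching?"
            _ = synthesizer.stopSpeaking(at: .immediate)
            synthesizer.speak(AVSpeechUtterance(string: item.label))
        }
    }
}
```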


About J. Bern Jordan

Bern is a Ph.D. candidate and researcher in accessibility, usability, user interfaces, and technology, interested in extending usability to all people, including people with disabilities and those who are aging. He currently works at the Trace R&D Center in Biomedical Engineering at the University of Wisconsin-Madison.
