
Topic 7 – Devices for display and interaction

September 23, 2009

Exercise 7.1: Smart screen interface study

Dan Saffer provides an introduction to the mechanics of touchscreen devices in his text Designing gestural interfaces (2009, pp. 12-16).  He begins with a general architecture for gestural systems, drawn from systems theory, made up of a sensor, a comparator and an actuator.  The sensor is the part described in the HowStuffWorks article given in the OLR question, which for the iPod touch is a capacitive sensor panel.  When the user touches the screen, a portion of the charge stored in the coating material is transferred to the user, reducing the charge on the panel’s capacitive layer and triggering a touch event (Saffer, 2009, p. 15).  Saffer also mentions other ways of implementing touchscreen sensors, such as resistive screens, which register a touch when two layers are pressed together, and surface acoustic wave screens, which detect the interruption of ultrasonic waves.

The activity detected by the sensor is then passed on to the comparator, which in the iPod touch is the gesture software mentioned in the HowStuffWorks article.  The comparator compares the current state of the system with the previous state and decides what needs to be done as a result of the input.

Finally, this decision is fed through to the actuator, which performs the appropriate action.
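
To make this sensor → comparator → actuator split a little more concrete, here is a minimal sketch of the pipeline in TypeScript.  The names and logic (TouchState, Comparator, Actuator, the tap/drag rules) are entirely my own illustration of the idea, not Saffer’s code or Apple’s actual gesture software.

```typescript
// Illustrative sketch only: hypothetical names and rules, not Apple's
// gesture software or Saffer's own code.

// What the sensor reports: the set of points currently touching the panel.
interface TouchState {
  points: { x: number; y: number }[];
}

// What the comparator can decide on.
type Action = "tap" | "drag" | "none";

class Comparator {
  private previous: TouchState = { points: [] };

  // Compare the new sensor reading with the previous state
  // and decide what should happen as a result of the input.
  update(current: TouchState): Action {
    let action: Action = "none";
    if (this.previous.points.length === 0 && current.points.length === 1) {
      action = "tap"; // a finger has just come down
    } else if (this.previous.points.length === 1 && current.points.length === 1) {
      action = "drag"; // the finger is still down and may be moving
    }
    this.previous = current;
    return action;
  }
}

class Actuator {
  // Carry out whatever the comparator decided.
  perform(action: Action): void {
    if (action !== "none") {
      console.log(`Performing: ${action}`);
    }
  }
}

// Wiring the three parts together: sensor reading -> comparator -> actuator.
const comparator = new Comparator();
const actuator = new Actuator();
actuator.perform(comparator.update({ points: [{ x: 10, y: 20 }] })); // "tap"
```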

One of the things that struck me most about the iPhone the first time I used it was the smoothness of the interaction when browsing websites.  I worked with iPAQ PDAs in 2002/2003 and, although these represented the state of the art in mobile browsing at the time, it was still an excruciating experience.  The multi-touch ‘pinch to shrink’ and ‘stretch to zoom’ gestures allow the user to get their bearings on a page and then easily zoom in on relevant content without becoming disoriented.  This is a simple but powerful use of natural gestures, and it is what made the iPod touch/iPhone so superior to the competition of the time.
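
As a side note on how a pinch/stretch gesture can actually be detected: the sketch below uses the standard DOM touch events, assuming a browser environment and an element with id="page" (both assumptions of mine).  The idea is simply that the ratio of the current distance between two fingers to the distance when they first touched gives a zoom scale.  This illustrates the general technique only; it is not how the iPhone’s own software is implemented.

```typescript
// Hypothetical pinch-to-zoom sketch using standard DOM touch events.
// Assumes a browser environment and an element with id="page".
const page = document.getElementById("page")!;
let startDistance = 0;

// Distance between the first two touch points.
function distance(touches: TouchList): number {
  const dx = touches[0].clientX - touches[1].clientX;
  const dy = touches[0].clientY - touches[1].clientY;
  return Math.hypot(dx, dy);
}

page.addEventListener("touchstart", (e: TouchEvent) => {
  if (e.touches.length === 2) {
    startDistance = distance(e.touches); // remember the starting finger spread
  }
});

page.addEventListener("touchmove", (e: TouchEvent) => {
  if (e.touches.length === 2 && startDistance > 0) {
    // Fingers moving apart -> scale > 1 (zoom in); together -> scale < 1.
    const scale = distance(e.touches) / startDistance;
    page.style.transform = `scale(${scale})`;
  }
});
```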

Exercise 7.2: New devices, aged care and people with disabilities

This question is very topical for me at the moment.  I am currently redesigning Barclays’ online banking system and we will be spending an entire sprint (a four-week chunk of effort in the Scrum methodology) updating and testing the system for accessibility.  This includes designing for, and running testing sessions with, users who have visual impairments, motor impairments and cognitive difficulties (such as severe dyslexia).  The participants we will be testing with use a range of assistive technologies, including:

  • Screen readers
  • Text-to-speech systems
  • Keyboard-only navigation
  • Voice-recognition software

I’ve had some experience with most of these technologies, but voice recognition is not something I’ve explicitly designed for before.  WebAIM is a source I often refer to for accessibility resources, and its article Motor disabilities: Assistive technologies (WebAIM, n.d.) gives an excellent introduction to these devices.  It defines voice recognition software as “software that allows a person to control the computer by speaking”.  The article also makes the point that although disabilities, and the assistive technologies used to support them, are extremely varied (numbering in the thousands), almost all assistive technologies for motor disabilities work through, or emulate, the keyboard.  This means that by designing interactions that support keyboard accessibility, you are creating a system that will be accessible to users with motor disabilities.  It finishes by noting that allowing navigation in as few keystrokes as possible is an important part of this.  The Web Content Accessibility Guidelines define what is considered keyboard accessible (W3C, 2008).
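
To show what keyboard accessibility can mean at the code level, here is a small sketch of my own (not taken from WebAIM or WCAG): a custom control, assumed to be an element with id="pay-button", is put into the tab order and made to respond to Enter and Space as well as the mouse, so users working through the keyboard, or through technologies that emulate it, can reach and activate it.

```typescript
// Illustrative sketch: making a custom control keyboard accessible.
// Assumes a browser environment and an element with id="pay-button".
const payButton = document.getElementById("pay-button")!;

// Put the control in the tab order and give it a role for assistive tech.
payButton.setAttribute("tabindex", "0");
payButton.setAttribute("role", "button");

function activate(): void {
  // Whatever the control does when triggered, by mouse OR keyboard.
  console.log("Payment flow started");
}

payButton.addEventListener("click", activate);

payButton.addEventListener("keydown", (e: KeyboardEvent) => {
  // Native buttons respond to Enter and Space; a custom control has to
  // wire this up itself to be keyboard accessible.
  if (e.key === "Enter" || e.key === " ") {
    e.preventDefault(); // stop Space from scrolling the page
    activate();
  }
});
```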

Saffer, D. (2009). Designing gestural interfaces. Sebastopol, CA: O’Reilly Media.

WebAIM. (n.d.). Motor disabilities: Assistive technologies. Retrieved October 1, 2009, from http://www.webaim.org/articles/motor/assistive.php

W3C. (2008). Web Content Accessibility Guidelines 2.0. Retrieved October 1, 2009, from http://www.w3.org/TR/WCAG/