
Wednesday, February 08, 2006

G3 Interfaces and Deaf-Blind Users

Yesterday, Chris Westbrook, a fellow I know from various mailing lists and one who I think quite often has interesting things to say, asked about research into, and the potential efficacy of, a 3D audio interface for people with both vision and hearing impairments.  Until then, I hadn't considered deaf-blind people in my analysis of how to improve the efficiency of screen reader users.  Deaf-blindness is not my area of expertise and, fortunately, is a relatively low-incidence disability.  Our deaf-blind friends deserve the best access technology that the research and AT world can develop for their use, and I will try to take a stab at addressing some issues that deaf-blind people might encounter and how their screen reading experience can be improved.  As I said, though, I cannot speak with much authority on this subject, so please send me comments and pointers to articles so I can learn more.

Before I jump into a pontification on my views of technology for deaf-blind users in the future, I want to relate an amusing anecdote about an incident involving me at CSUN 2004.

CSUN takes place every March at the convention center hotels at the Los Angeles Airport (LAX).  That year, Freedom Scientific demonstrated some of the first PAC Mate features designed for use by deaf-blind people.  I stayed at the Marriott that year and, as my daily routine dictated, I stood in line at the Starbucks in the lobby seeking my triple-shot venti latte.  While waiting and chatting with Jamal Nazrui, who was in line in front of me, I felt a tap on my shoulder and turned to face the person who wanted my attention.  As soon as I turned around, a pair of arms enveloped me in a very nice hug.  By the location of the anatomical parts of this affectionate person, I could tell immediately that she was very definitely a woman.  Then, to my surprise, a very deep male voice with a Scottish accent started talking.  I, somewhat startled, thought that I was either in the embrace of the recipient of the most successful sex change operation ever or that I had underestimated the depth of Scottish women's voices.  Then my human skill for understanding context kicked in as I, still in this very pleasant embrace, heard the speaker tell me how "she" greatly appreciated the terrific effort we made in the PAC Mate to add features that deaf-blind people could use to communicate with each other.  The recognition that it was an interpreter talking changed my perspective greatly.

The day before, Brad Davis, VP of Hardware Product Management at FS, brought four deaf-blind people into the Freedom Scientific booth on the trade show floor.  He gave each of them a PAC Mate that had a wireless connection to the network.  He showed these people how to launch MS Messenger, and they started a chat among themselves.  Ordinarily, blind people with profound hearing impairments communicate with each other by signing by touch into the palm of the other person's hand.  Thus, it remains very difficult for more than two deaf-blind people to hold an efficient conversation together.  With the PAC Mates in the Freedom Scientific booth that day, four deaf-blind people held the first-ever conversation of its kind.  The happiness all of them displayed with this new tool gave everyone from FS who was around one of the greatest senses of satisfaction with our work that I can remember experiencing.

Freedom Scientific has, since that day, gone on to add a number of additional PAC Mate-related products designed for use by deaf-blind people to its catalogue.  I don't know much about these products, so go to the FS web site to learn more about them.

Now, back to the topic at hand: how can a G3 interface be designed to improve the efficiency of deaf-blind users?

Again, I'm pretty much guessing here, but I do have a few ideas.  I'll start with the work my friends at ViewPlus are doing with tactile imaging and add a bit of Will Pearson's work on haptics.  Today, these solutions carry fairly hefty price tags, as is the case for most AT hardware.  They can, however, deliver a lot more information through their two- and three-dimensional expressions of semantic information than can be delivered through a single-line Braille display.

A person can use a tactile image with both hands and, therefore, by knowing the distance between their hands, can determine the information inherent in the different sizes of objects, the placement of objects in relation to each other, and the "density" of an object by feeling the various attributes it can contain.

Thus, a person using a tactile pie chart can feel the different sizes of the items in the graphic and, far more easily than by listening to JAWS read the labels and values sequentially, learn the information in the chart through the use of more than one dimension.  This idea can also be applied to document maps, road maps and far more types of information than I can imagine at this moment.
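As a rough illustration only, and not anything JAWS or ViewPlus actually does, here is a minimal Python sketch of how the numbers behind a pie chart might be turned into slice angles for a tactile rendering, so that relative size becomes something the fingers can compare directly.  The data, function name and quarterly labels are invented for the example.

import math

# Hypothetical sketch: map pie-chart values to the angular extents a
# tactile embosser or haptic renderer would need to draw each slice.

def pie_slices(values, labels):
    """Return (label, start_angle, end_angle) tuples in degrees."""
    total = sum(values)
    slices = []
    start = 0.0
    for label, value in zip(labels, values):
        sweep = 360.0 * value / total   # slice size is proportional to its value
        slices.append((label, start, start + sweep))
        start += sweep
    return slices

# Example: four values the user could explore by touch instead of
# hearing each label/value pair read out sequentially.
for label, a0, a1 in pie_slices([40, 25, 20, 15], ["Q1", "Q2", "Q3", "Q4"]):
    print(f"{label}: {a0:.1f} deg to {a1:.1f} deg ({a1 - a0:.1f} deg wide)")

The point of the sketch is simply that once the values are expressed as physical extents, the comparison between items happens in the reader's hands rather than in working memory.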

Here, however, is where my ignorance stops me entirely from moving any further.  I can make some wild guesses as to how a force-feedback haptic device might be applied to improve efficiency, but I cannot do so with any authority at all.  I can't even recall reading a single article on the topic of deaf-blind people and next generation interfaces.  Sorry for my lack of knowledge in this area and, as I stated above, please send me pointers to places where I can learn more.

Why is JAWS Used for My Examples?

JAWS is the screen reader I know best.  It is the screen reader I use on a daily basis and, in my opinion, it is the best and most comprehensive screen reader available today.  No other screen access tool can deliver anywhere near the amount of contextual information that JAWS can.  Most other screen readers also entirely ignore the more advanced features of programs like Word, Excel, PowerPoint, Project and others, which I need to do my job.  Without the broad range of access that JAWS delivers, many blind professionals would never have been able to get promotions to better positions, as they could not use the same tools that their sighted counterparts do in the same workplace.

I can be very critical of all of the G2 screen readers because I study all of them and because I rely on them to do my job.  I hope that my criticism is seen as constructive, as I think the screen reader companies (FS, Dolphin, GW Micro, Serotek and Code Factory), as well as those who work on GNOME accessibility, GNU/Linux accessibility, Java accessibility and that peculiar little screen reader for Macintosh, are all pushing access for us blinks in the right direction.  If I find fault with the products or businesses, I will say so, as I feel that is my duty to my readers and to the blind community at large.  I do so with the intent that these fine organizations make improvements, not with any desire to tear them down.

1 Comment:

Anonymous said...

I think you've made a good start in describing some of the benefits haptic interfaces could bring to deafblind users, an area that I think has had scant attention in the past. The cornerstone of interface design in any modality is that the interface is really intended to convey encoded semantic content. Untangling the relationship between encoding and semantics is a vital starting point. This leaves the raw semantics, which can then be encoded in any haptic form, potentially enabling anything to be communicated to the user.

Haptics, at least in terms of the kinesthetic sense, allows encoding of semantics in all of the core aspects of a waveform: its x, y and z location, its temporal timing, its frequency, which takes the form of vibratory sensation, and its amplitude, which takes the form of force. Compared to Braille, this is a larger-capacity communications channel, which should lead to more efficient communication through the user interface for the user.
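To make that encoding space concrete, here is a minimal, hypothetical Python sketch of a "haptic cue" record that bundles those channels together. The class name, field names and example values are assumptions for illustration only, not any real haptics API or device driver.

from dataclasses import dataclass

# Hypothetical sketch of the channels described above: each semantic item
# gets a cue with a spatial location, a timing, a vibratory frequency and
# a force amplitude.

@dataclass
class HapticCue:
    x: float              # spatial location within the device workspace
    y: float
    z: float
    onset_s: float        # temporal timing: when the cue starts, in seconds
    duration_s: float
    frequency_hz: float   # felt as vibratory sensation
    amplitude_n: float    # felt as force, in newtons

# Example: encode "heading" versus "body text" by frequency and force,
# while x/y mirrors where the element sits on the page.
heading_cue = HapticCue(x=0.02, y=0.15, z=0.0,
                        onset_s=0.0, duration_s=0.3,
                        frequency_hz=250.0, amplitude_n=1.5)
body_text_cue = HapticCue(x=0.02, y=0.10, z=0.0,
                          onset_s=0.3, duration_s=0.3,
                          frequency_hz=80.0, amplitude_n=0.6)
print(heading_cue, body_text_cue, sep="\n")

Because several independent parameters are available at once, different semantic attributes can be carried in parallel rather than queued up one after another as they are on a single-line Braille display.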

To give an idea of a possible encoding strategy, I'll use Chris's example of a map. A user could run the haptic probe along the line representing a road. This would give them an idea of the direction, shape and distance of the road and its various directional parts, which is information conveyed by spatial relationships. To get a more detailed idea of the directions and spatial relationships of the road, the user could zoom the view. To ensure the user followed the road, a snap constraint force model could be employed, tying the haptic probe to the road. Through exploration, either freehand by the user or guided by the system, the user could explore what items were located near the road, gathering information about their distance and direction from other objects by the direction and distance the haptic probe travelled. Using different shapes and frequencies of vibratory oscillation, the user could identify different objects on the map, say roads, hills, buildings, etc., and explore their shape. A 3D model could even be built up to allow the user to gain a better idea of the height of hills and other geographic objects that usually have their height indicated on maps. This is complex information that isn't usually suitable for communicating via a textual description, as describing all the key characteristics needed to gain a full understanding of a map either takes a long time or results in a buffer overflow in a person's short-term memory. Similar encoding strategies could allow applications such as Microsoft Office Visio to be made accessible to deafblind people, although the concept of drag and drop would have to be modified slightly unless the user used two haptic probes, as several haptic probes would allow for real-time observation of the spatial relationships between two points.
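As one hedged illustration of the snap constraint idea, the following Python sketch computes a spring-like force that pulls a probe position toward the nearest point on a road polyline, so the user can trace the road without slipping off it. The stiffness value, the road coordinates and the function names are invented for the example and are not tied to any particular haptic device.

import math

# Hypothetical snap-constraint sketch: a spring force toward the nearest
# point on the road polyline keeps the probe tied to the road.

def closest_point_on_segment(p, a, b):
    """Closest point to p on the segment a-b (2D tuples)."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    length_sq = dx * dx + dy * dy
    if length_sq == 0:
        return a
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / length_sq))
    return (ax + t * dx, ay + t * dy)

def snap_force(probe, road, stiffness=200.0):
    """Force (fx, fy) pulling the probe toward the nearest point on the road."""
    nearest = min((closest_point_on_segment(probe, a, b)
                   for a, b in zip(road, road[1:])),
                  key=lambda q: math.dist(probe, q))
    return (stiffness * (nearest[0] - probe[0]),
            stiffness * (nearest[1] - probe[1]))

# Example: a probe slightly above a simple two-segment road.
road = [(0.0, 0.0), (0.1, 0.0), (0.1, 0.1)]
print(snap_force((0.05, 0.01), road))  # force pushes the probe back onto the road

In a real device loop this force would be recomputed every servo cycle, and its stiffness would set how strongly the road "grabs" the probe versus how freely the user can break away to explore nearby objects.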

We're still some way off understanding the optimal characteristics for haptic user interface design. Some work still needs to be done on sensory and short-term memory and their role in object identification within haptic interfaces, and work also needs to be done on absolute and differential thresholds, but once this has been done we should better understand the physical characteristics of an optimal design for haptic interfaces. After that, it's just a case of designing a set of mappings between conceptual semantic knowledge and the physical forms used to represent those concepts.

5:34 PM  

