
Thursday, February 09, 2006

Interesting Articles and a Little More on Accessibility APIs

I didn't have much time to think of a topic for today's post, so instead I'll provide pointers to a few articles I've read recently that I found of particular interest. At the bottom, I'll add a few comments in response to Will's post in our accessibility API discussion as well.

These articles are in no particular order:

The first is about elders and their use of technology products, the Internet and Pocket PC devices. It appears in the UI Design Newsletter and is called "Selling older users short." It debunks various myths about older people and their use of technology and, in my opinion, points to a very large and mostly untapped market for the Code Factory Mobile Speak Pocket product.

Next is an article about a very interesting motor sports event in India, titled "Blind navigators show the way," which could be a good application for StreetTalk from Freedom Scientific, Wayfinder with either MSP or Talks, or one of the other talking GPS solutions. The page containing this article isn't especially accessible, but you can find the important parts with a little poking around.

In my sighted days, I truly loved the visual arts. Those of you who know me well know that I remain active in less visual fine art media like literature, poetry and music, and, most recently, I've added the tactile and audio arts to my interests. Touch tours are becoming increasingly popular at museums, so here are a couple of articles about them: "No longer impossible: blind embrace art and museums welcome blind" and "Museums make art accessible to blind."

Remaining in the art world, here's a pretty interesting article, on a rather difficult web page, about a blind artist: "Blind artist overcomes challenges." I am fairly certain this is the first article I've ever read in the Pocono Record, a publication I never thought I would read. Isn't the Internet swell?

Here's an item from Japan (in English) about a new plastic sheet technology for displaying Braille. I don't know any more about this than what is on the web page, including how recent the innovation may be, but I thought it was apropos to the discussion about haptics going on here lately.

Special thanks to Professor William Mann, Eric Hicks and, most especially, Lisa Yayla, the owner and unofficial research librarian of the Adaptive Graphics mailing list hosted on freelists.org, for sending me these pointers.

Back to APIs

Yesterday, Will Pearson posted two well-considered comments. As I had guessed, he had some very valuable things to add to my deaf-blind posting, and his ideas on accessibility APIs are also well founded.

I agree that, for generic information, building accessibility into the user interface library would solve many, even most, accessibility problems. Microsoft did not build MSAA into MFC (the popular C++ class library); instead, they chose to put it at a lower level, in the common control layer. This decision produced some very good outcomes, but only in applications that used standard controls. Putting MSAA a level up, in MFC, would have solved the problem for some custom controls used in MFC applications but would have done nothing for Win32 applications or for programs whose UI was built with a different set of foundation classes that employed standard controls. So, Microsoft solved part of the problem by supporting all applications that used standard controls, written with MFC or not, but relied upon application developers to add MSAA to controls that diverged from the standard.
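
For readers who write code, here is a minimal sketch, in C++ against the Win32 API, of the hook a custom control author has to implement for MSAA: answering the WM_GETOBJECT message. The window procedure is hypothetical, and for brevity it wraps the standard accessible object where a real custom control would hand back its own IAccessible implementation.

    #include <windows.h>
    #include <oleacc.h>
    #pragma comment(lib, "oleacc.lib")

    // Window procedure for a hypothetical custom control. When MSAA (or a
    // screen reader) sends WM_GETOBJECT, the control must return an
    // IAccessible describing itself; controls that ignore this message are
    // invisible to the accessibility layer.
    LRESULT CALLBACK MyControlProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
    {
        if (msg == WM_GETOBJECT && (LONG)lParam == OBJID_CLIENT)
        {
            IAccessible* pAcc = NULL;
            // For brevity we wrap the standard client-area object; a real
            // custom control would supply its own IAccessible here.
            if (SUCCEEDED(CreateStdAccessibleObject(hwnd, OBJID_CLIENT,
                                                    IID_IAccessible,
                                                    (void**)&pAcc)))
            {
                LRESULT lr = LresultFromObject(IID_IAccessible, wParam, pAcc);
                pAcc->Release(); // LresultFromObject keeps its own reference
                return lr;
            }
        }
        return DefWindowProc(hwnd, msg, wParam, lParam);
    }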

Unfortunately, most Windows applications, whether written using MFC, WTL or some other library, use anywhere from a few to many inaccessible custom controls. Another major problem for accessibility APIs, as we look to the future, is applications that use proprietary, cross-platform UI libraries.

Tom Tom, the popular GPS program, is one example of how a proprietary, cross-platform UI library can render an application completely inaccessible. If someone installs Tom Tom on an iPAQ running MSP or on a PAC Mate, they will find that the screen reader can only "see" some window titles and an occasional control. To maintain a uniform visual look and feel across all of the platforms it supports (Tom Tom runs on Windows Mobile, Palm OS, Symbian and the iPod, to name a few), the company has created its own, completely inaccessible UI library. Tom Tom doesn't even load standard fonts from the OS but, rather, builds a font library into its software. This permits the company to keep its trademark appearance consistent on all platforms but completely destroys the possibility of any screen reader gaining access to its information. (Off topic: if you need a portable talking GPS solution, buy Wayfinder or StreetTalk, as they work very well. Wayfinder, from the mainstream, is much cheaper than Tom Tom, and StreetTalk is less expensive than the other solutions designed specifically for blind users.) So, even if an accessibility API existed on the platforms where Tom Tom runs, and it sat at the class library or user interface level, it wouldn't work.
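
Why does an embedded font engine matter so much? Screen readers of this generation build their off-screen models largely by hooking the text drawing calls made through the OS. This hypothetical Win32 fragment contrasts text the OS draws, which those hooks can see, with pre-rendered glyph pixels an application blits itself, which they cannot:

    #include <windows.h>

    // hdc is the control's device context; glyphAtlas is a hypothetical
    // memory DC holding text the application rendered with its own
    // embedded font engine.
    void DrawLabels(HDC hdc, HDC glyphAtlas, const wchar_t* text)
    {
        // Visible to a screen reader's display hooks: the OS renders the
        // string, so the text itself passes through the GDI call.
        TextOutW(hdc, 10, 10, text, lstrlenW(text));

        // Invisible to those hooks: only pixels are copied, so no text
        // ever crosses an interceptable API boundary.
        BitBlt(hdc, 10, 40, 200, 20, glyphAtlas, 0, 0, SRCCOPY);
    }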

The combination of cross-platform development and the desire for a unique look and feel is the source of two of my lasting fears for the next generation of accessibility APIs, especially when we factor in the labor cost of retrofitting a new, even if cross-platform, user interface library onto the billions of lines of code already deployed around the world.

Moving on from the pragmatic and returning to the delivery of contextually interesting semantic information, I have yet to see how a generic control can have enough knowledge of its purpose to deliver truly useful information about what it is doing at any given point in time. Button controls, table controls, list boxes and tree views, to name a few, don't understand what they contain or why they contain it.

I'll return to our Visio organization chart example. Let's imagine a very simple chart with five names in it: Will, Chris, Peter, Eric and Ted. Because Ted is a hall of famer, we'll put him at the top, and because Eric and Chris are managers, we'll have them report to Ted. So our Ted box has two arrows coming from it: one to the Chris box and the other to the Eric box. Because Will is a hacker, he will report to Chris directly, so we'll add an arrow from Chris to Will. As Peter is an ideas guy and a hacker, he will report directly to Eric but indirectly to Chris and Ted, so we'll add a solid arrow from Eric to Peter and dotted arrows from Ted and Chris to Peter as well. Now, just to make matters interesting, we've decided that the ideas guys get to set priorities, so Peter and Eric will have dotted lines pointing to Chris, as he must have the engineers build what they design.


Our organization chart has six boxes: one for each person, plus the bounding box that contains the members. If we assume that our accessibility API is extensive enough to include a rectangle control that understands it might also be a container and a line control that knows its attributes (dotted, solid, etc.), we still do not have enough information to describe the relationships between the boxes unless the application itself provides a lot of supplementary information about the meaning of boxes and lines as they are used in said application. We can derive this information from the Visio object model but not from a generic collection of controls at any level below the application itself.
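
A small sketch may make the gap plainer. The types below are hypothetical and belong to no real API; they contrast what a generic drawing-level control can report with what the application actually means by the same line:

    #include <string>

    // What a generic control layer knows about a connector: geometry and
    // stroke style, nothing more. A screen reader querying this can only
    // announce something like "dotted line from (120,40) to (120,90)".
    struct LineControl {
        int x1, y1, x2, y2;
        bool dotted;
    };

    // What only the application knows: the meaning this chart assigns to
    // solid and dotted arrows.
    enum class ReportingKind { Direct, Indirect, SetsPrioritiesFor };

    struct ReportingEdge {
        std::string from;    // e.g. "Eric"
        std::string to;      // e.g. "Peter"
        ReportingKind kind;  // solid arrow -> Direct, dotted -> Indirect
    };

    // The mapping from LineControl to ReportingEdge lives in Visio's object
    // model, not in the line control; without the application supplying it,
    // no generic accessibility API can say "Peter reports directly to Eric."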


Peter suggested that some hybrid might also be a good idea, where the AT product gets most of its information from the accessibility API and the truly application-specific information from the actual application. I still think this requires the application developer to do a fair amount of work to expose the information in a usable manner.
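
As a rough illustration of that hybrid, with entirely invented interfaces, the AT would read generic structure from the accessibility API and fall back to the application itself for semantics:

    #include <string>

    // Generic structure, as an accessibility API might report it.
    struct AccessibleNode {
        std::string role;  // "rectangle", "line", ...
        std::string name;  // "Peter", "Eric", ...
    };

    // An optional, application-supplied interface for domain semantics.
    struct AppSemantics {
        virtual std::string Describe(const AccessibleNode& from,
                                     const AccessibleNode& to) const = 0;
        virtual ~AppSemantics() {}
    };

    // The AT's rendering logic: prefer the application's description,
    // otherwise fall back to the generic structure.
    std::string Render(const AccessibleNode& a, const AccessibleNode& b,
                       const AppSemantics* app)
    {
        if (app)
            return app->Describe(a, b);            // "Peter reports to Eric"
        return a.name + " connected to " + b.name; // all the API can say
    }

    // Either way, someone at the application level still has to write the
    // Describe() implementation, which is exactly the developer effort in
    // question.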



3 Comments:

Anonymous said...

Chris, you cite Tom Tom as an app that potentially can't do the right thing with respect to accessibility APIs because (a) it is cross-platform (and no cross-platform accessibility APIs exist) and (b) it uses its own custom UI.

But Tom Tom isn't unique in this. Java Swing has (or perhaps better said, *is*) this problem. And both StarOffice and OpenOffice.org likewise have their own cross-platform UI library. However, both of these expose an accessibility API (which is respected to a greater or lesser degree by different AT applications). StarOffice/OpenOffice.org use the UNO graphics library (developed by the StarOffice team in Hamburg), which exposes the UNO Accessibility API; this is translated into the Java Accessibility API on Windows and UNIX (and soon will be translated directly to the GNOME Accessibility API on UNIX for a performance improvement). See http://ui.openoffice.org/accessibility/ for details.


Later on, you raise the challenge of a complex reporting structure represented graphically with direct and indirect reporting relationships. In fact, some of the Accessibility API designers are already ahead of you on this one! The Java and GNOME Accessibility APIs have long had ways of encoding arbitrary relationships among user interface elements, which are picked up by assistive technologies and then rendered appropriately to the end user. Besides the obvious "label_for/labeled_by" pair of relations for things like static text labels of editable text fields, we are already actively using the FLOWS relations (to indicate text flow in compound documents) and the node_child_of relation for tracing the hierarchy of tree-tables (for noting indentation levels in an otherwise flat, but indented, list in GNOME/GTK+). See http://developer.gnome.org/doc/API/2.0/atk/AtkRelation.html for the full list of defined relations. We can extend this list arbitrarily, and so could do so for your reporting relationship case.
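
To make that concrete, here's a minimal sketch of attaching such a relation with the ATK C API; the two widget variables are made up for the example, but the calls are real ATK entry points:

    #include <atk/atk.h>

    /* Expose "peter_box reports to eric_box" to assistive technologies.
     * ATK ships built-in relation types (ATK_RELATION_LABELLED_BY,
     * ATK_RELATION_NODE_CHILD_OF, ATK_RELATION_FLOWS_TO, ...), and new
     * ones can be registered at runtime for cases like this one. */
    void expose_reporting_relation(AtkObject *peter_box, AtkObject *eric_box)
    {
        AtkRelationType reports_to = atk_relation_type_register("reports-to");

        /* An AT can later discover this via atk_object_ref_relation_set(). */
        atk_object_add_relationship(peter_box, reports_to, eric_box);
    }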

In fact, relations are one of the strongest arguments in my mind for going with Accessibility APIs. There is no other way we can think of to convey this kind of information other than with programmatic relations.

6:09 PM  
BlindChristian said...

Anonymous: I stand corrected. I didn't know about the expression of relationships within the GNOME accessibility layer. The last conversation I had about this was with Peter at an Access Forum meeting on the Apple campus. I described how I would like it to be and, at the time, I assume it didn't exist yet, as he, if I remember correctly, wrote some kind of little utility for the OpenOffice.org spreadsheet to show how it might be done. I'm excited to hear that the GNOME accessibility people have been designing this, and I can't wait until I can get a demo of this facility working in a spreadsheet or another kind of application where the relationships are not obvious.

I'm going to follow the links you provided to read more about this relationship handling. I am curious how much of these complex relationships can be derived from an application without having the application developer do a lot of extra work to expose the information. I'll plead ignorance until I read about it, but, without the application developer specifically going in and telling the API the meaning of said relationships, how does the accessibility layer know? If the purpose is to remove the burden from application developers who don't care about accessibility, I am curious as to how this information moves from the app through the accessibility layer and into the AT.

On the proprietary cross-platform UI libraries, I did know about Java and OpenOffice.org but forgot to mention them as exceptions to the rule. This was a solid brain freeze and I admit it was an oversight. The point remains mostly the same, however: for a mainstream developer to maintain their cross-platform look and feel, they will need to retrofit their proprietary UI library with at least two and probably three different accessibility protocols. I'll return to the economic argument here: what is their motivation to go back into tens to hundreds of thousands of lines of source code to make this work? The process will be very expensive and time-consuming and will, as any programmer can tell you, potentially introduce unexpected problems.

7:00 AM  
Anonymous said...

"we still do not have enough information to describe the relationships between the boxes unless the application itself provides a lot of supplementary information
about the meaning of boxes and lines as they are used in said application."

I agree that this is the case if you want to convert between the encoding schemes used to encode the original semantic content, e.g. from spatial relationships to a spoken description. However, I believe, at least in the case of spatial relationships, this conversion may be the wrong approach from a usability viewpoint. Serialising the spatial encodings into a speech stream would degrade efficiency. As neurological studies suggest that listening to speech is already slower than reading text, further serialising encoding schemes into speech would impose a more significant efficiency deficit on blind people than may otherwise be the case. The output media used, both audio and tactual, are capable of producing spatialised output, and therefore capable of supporting spatially encoded semantics. It would be a shame to sacrifice efficiency unless there were an overwhelming need to do so.

1:10 PM  
