Window-Eyes 7, Macintosh, Heading South
If you haven’t already read it, I highly recommend that you go to Darrell Shandro’s Blind Access Journal blog and read the article he posted yesterday about the Window-Eyes 7.0 release. Darrell raises the journalistic standards for blogs in the blinkosphere while providing a well written and highly informative article about the latest from GW Micro. For more of my opinion of Darrell’s piece, read the comment I left on BAJ about it, once Darrell gets around to letting it through the moderation process.
I have continued learning more and more about my new Macintosh, VoiceOver and the user experience for people with vision impairment on the Apple platform. I have not yet checked out the new accessible interface on the iPod Nano but have heard some fairly positive things about it from various people who send me bits of information, their opinion on matters and other random ideas.
As I delve further into VoiceOver (VO), I find that some of its behaviors which I had complained about in some of my earlier posts on the topic actually have advantages over the more traditional screen reader user interfaces and, once one grows accustomed to the VO way of doing things, these improvements become more obvious. Although I often rant and rave about the lack of innovation in UI concepts in access technology, I am also an old fart stuck in his ways who is a bit lazy when it comes to learning new ways of doing things – even when they provide improvements to the status quo.
Specifically, I wrote that the need to hit a keystroke to interact with HTML content was a dumb idea. As I’ve used VO more, though, I have learned that their web content interaction mode with its sense of object navigation actually provides a greater sense of context than the linear, “virtual buffer” interfaces that the Windows screen readers expose.
In general, the object navigation interface that VO provides offers a sense of context about all sorts of items one may encounter in all sorts of different applications. This manner of navigation takes some practice to appreciate but, when one makes the transition, it really shows its worth.
Because VO is a purely API-based screen access utility, the applications it supports tend to work very well. Some programs could certainly see accessibility improvements, but those that comply with the newer Cocoa Macintosh API tend to work with VO right out of the box and perform very, very well in situations that longtime Windows screen reader users might expect to be problematic.
I suppose I should spend this paragraph tipping my hat to Peter Korn of Sun Microsystems. He and I have debated the relative merits of OS hooking and/or COM methods of gathering application information versus a purely API-driven solution. I conceded to Peter that an API system would cause fewer stability problems than seem to be inherent in OSM solutions, but I also argued that no API-based system could provide a good enough level of context (either through brute force “review cursor” methods or by hand-coded COM solutions for each different application). It seems that Apple, with VO, has found a middle ground and can provide a decent level of contextual information without either requiring custom work for each application or inserting instabilities into the entire system. Customized communication with specific applications will, using today’s technology (I can already hear Will jumping in with a comment about a future with a synthesized vision approach being superior), definitely provide the greatest ability for an access technology to communicate very specific contextual information to its users. Excepting very complex interfaces, though, such extra work needn’t be done to provide a very usable interface with an above average level of information about the items that surround the point of focus.
While I have not tried to use AppleScript and Apple Events with VO, people more familiar with the software and with Macintosh OS X Leopard tell me that these components built into the OS can be used to gather information from applications with more complex interfaces, much in the manner of the COM methods of doing things in Windows, and can, therefore, provide detailed contextual information where necessary. As these technologies are built into the Macintosh operating system, they are likely to be far less kludgerous than the proprietary scripting techniques seen in other access technology products.
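To make the idea concrete, here is a minimal sketch of how one might pull contextual information out of a running application using Apple Events, driven from Python through the stock `osascript` command-line tool. “System Events” is the real scriptable process OS X provides for reaching into other applications’ UI hierarchies; the helper function names here are my own invention, and I have not tested this against VO itself.

```python
# Sketch: asking System Events (via AppleScript/Apple Events) for the
# window titles of another application -- the same kind of contextual
# query an access technology could make without OS hooking.
import subprocess
import sys

def build_ui_query(process_name: str) -> list:
    """Build an osascript command that asks System Events for the
    names of every window belonging to the named process."""
    script = (
        'tell application "System Events" to '
        f'get name of every window of process "{process_name}"'
    )
    return ["osascript", "-e", script]

def window_titles(process_name: str) -> list:
    """Run the query (Mac OS X only) and return the window titles."""
    if sys.platform != "darwin":
        raise RuntimeError("Apple Events are only available on Mac OS X")
    result = subprocess.run(build_ui_query(process_name),
                            capture_output=True, text=True, check=True)
    # osascript prints AppleScript lists as comma-separated text
    return [t.strip() for t in result.stdout.strip().split(",") if t.strip()]
```

A real screen reader would, of course, go far deeper than window titles, but the same pattern — a scripted query into another process’s object hierarchy, using machinery that ships with the OS — is what distinguishes this approach from the hand-rolled per-application scripting common in the Windows products.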
I still have a number of suggestions for things Apple can do to improve VO substantially:
First, the five finger keystrokes that a user needs to hit if using a laptop really must go and be replaced by a set of key bindings designed specifically for the less comprehensive keyboards. Next, allow the user to select the Caps Lock and perhaps some other mostly useless keys as the VO key modifier. I hate the Caps Lock key and feel strongly that one should be able to use it for something other than typing in all capital letters like we did back in the PDP-8 days. Continuing in the same set of ideas, something equivalent to the key binding editors available in most other popular screen readers is a must for VO in the future.
Second, Apple should jump on the IAccessible2 bandwagon and get Firefox working really well. In its current incarnation, VO doesn’t work with Firefox without the FireVox plug-in and, in Safari, the native browser shipped with the Macintosh, it works poorly with the more complex web pages often described under the sweeping name “Web 2.0,” which tend to rely on heavy scripting.
Lastly (I may have more applause and complaints in the future but this is the final one I can think of today), VO should be released under a GPL or other libertyware license with its source code as soon as possible. There are a lot of hackers with vision impairment who have a ton of great ideas for the future of screen readers and can make them possible with something like VO as a starting point.
My annual sadness caused by the looming date on which we must return to
Since coming to the
Of course, I will enjoy the