A year or so ago, I was meeting with a group of disability coordinators from the state college system, speaking about new and exciting assistive technologies and services that were just over the horizon. I mentioned some “chatter” I had been reading about, something Google was working on: the ability to automatically determine the content of images posted on the web. I explained that the number one web accessibility failure was the absence of Alternative Descriptions for images – the proverbial “ALT tags” (more precisely, the alt attribute on an image element).
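To make that failure concrete, here is a minimal sketch of how one might scan a snippet of HTML for images that lack alt text, using Python's standard html.parser module. The sample markup and the checker itself are illustrative assumptions, not any real auditing tool; a production checker would also account for decorative images, which legitimately carry an empty alt="".

```python
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    """Collects <img> tags that lack a non-empty alt attribute."""

    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attr_map = dict(attrs)
            # An absent (or empty) alt attribute leaves a screen reader
            # with nothing to announce for this image.
            if not attr_map.get("alt"):
                self.missing.append(attr_map.get("src", "(no src)"))

# Hypothetical sample: one accessible image, one missing its alt text.
sample = (
    '<p><img src="logo.png" alt="Company logo">'
    '<img src="photo.jpg"></p>'
)
checker = MissingAltChecker()
checker.feed(sample)
print(checker.missing)  # -> ['photo.jpg']
```

Tools of roughly this shape are what automated accessibility audits run against a page; the point of the article is that Facebook's system instead generates the missing description itself.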

The major reason for this “violation” is that – as I phrased it – everyone is a content creator now. In the good ole days of “web masters,” nearly all content found on websites had to go through one person, or a small team, before it was published on the web. If those web masters were effectively trained in the guidelines for web accessibility, the number of accessibility violations would be reduced. But in this new world – where everyone is a content creator – there is no vetting process. Hence, an increase in web accessibility “violations.”

And “effectively training” everyone in Accessible Web Design simply is not realistic.

I postulated that, with the new system being developed by Google, this issue would be eliminated because, if the content creator forgot to add the Alternative Description, Google would do it for them.

Well, lo and behold, this functionality is now a reality. But the amazing part is that it did not come from Google, but from – of all places – Facebook.

Several announcements in the trades over the last few days have described the new “object recognition technology” developed by Facebook’s Accessibility Team. Officially announced last Tuesday, the “AI (artificial intelligence)-powered tool … is part of what Paul Schroeder, VP of programs and policy at the American Foundation for the Blind, described as a ‘tipping point with accessibility.’ The same technology that some scoff at or even fear today — artificial intelligence, self-driving cars, voice-powered personal assistants and robotics — could fundamentally transform the lives of the visually impaired in the coming years” [quoted from ‘Facebook’s first blind engineer is revolutionizing social media as we know it’ – Mashable].

In an article on ZDNet, the new feature is explained as:

Until now, when blind users were checking their Facebook newsfeed and came across an image, they would only hear the word “photo” and the name of the person who shared it, which left the user still dependent on friends and family to interpret an image.

To improve the experience for blind people, Facebook has used its vast trove of user images to train a deep neural network that drives a computer vision system built to recognize objects in images.

Right now, the new accessibility feature is available only in English, for Facebook users in the United States, Canada, the United Kingdom, New Zealand and Australia. The service will no doubt expand, and Mashable noted that Microsoft and Twitter have similar efforts in development.

Hey, where’s Google’s version?




Image credit: Graphic licensed through Creative Commons by WikiMedia Commons