CopyPress
January 28, 2021 (Updated: February 8, 2023)
This week, Facebook updated its alt text process to better serve the visually impaired. The update identifies more objects within posted photos, improving how screen readers describe images and giving vision-impaired users a better experience across Facebook’s apps.
The AAT (Automated Alt Text) program first launched in 2016, and it uses machine learning to automatically identify objects within posted images. Whenever a photo is uploaded without a manual alt-text description, the tool generates one, so that a screen reader can read aloud a description of what is in the photo for someone who cannot see it. The initial version of this process was limited, but Facebook has been steadily upgrading it ever since.
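The core idea is a simple fallback: use the uploader’s own description when one exists, and only otherwise generate one with the model. The Python sketch below illustrates that flow under stated assumptions; the function names and the stubbed detector are hypothetical, not Facebook’s actual API.

```python
# Minimal sketch of the alt-text fallback idea.
# detect_concepts() is a hypothetical stand-in for the ML model;
# here it just returns example labels for illustration.
from typing import List, Optional


def detect_concepts(image_bytes: bytes) -> List[str]:
    """Placeholder for an object/concept detector over the image."""
    return ["a selfie of 2 people", "outdoors"]  # illustrative output only


def alt_text_for(image_bytes: bytes, manual_alt: Optional[str]) -> str:
    # Prefer the uploader's own description when one was provided.
    if manual_alt:
        return manual_alt
    # Otherwise fall back to machine-generated concepts,
    # hedged with "Maybe" because the model can be wrong.
    concepts = detect_concepts(image_bytes)
    if not concepts:
        return "No description available"
    return "Maybe " + ", ".join(concepts)


print(alt_text_for(b"...", None))  # -> "Maybe a selfie of 2 people, outdoors"
```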
When it released this update earlier this week, Facebook made this statement:
‘First and foremost, we’ve expanded the number of concepts that AAT can reliably detect and identify in a photo by more than 10x, which in turn means fewer photos without a description. Descriptions are also more detailed, with the ability to identify activities, landmarks, types of animals, and so forth – for example, “Maybe a selfie of 2 people, outdoors, the Leaning Tower of Pisa.”’
This new feature helps Facebook describe not only what is seen in a given photo but also where each object appears within it.
Facebook goes on to say, ‘So instead of describing the contents of a photo as “Maybe an image of 5 people,” we can specify that there are two people in the center of the photo and three others scattered toward the fringes, implying that the two in the center are the focus. Or, instead of simply describing a lovely landscape with “Maybe a house and a mountain,” we can highlight that the mountain is the primary object in a scene based on how large it appears in comparison with the house at its base.’
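In other words, the description can be built from where detected objects sit in the frame and how large they are relative to each other. The sketch below shows one way that composition could work; the data structures, thresholds, and example values are assumptions for illustration, not Facebook’s implementation.

```python
# Hedged sketch: composing a positional description from detected bounding boxes.
from dataclasses import dataclass
from typing import List


@dataclass
class Detection:
    label: str
    x: float  # left edge, as a fraction of image width
    y: float  # top edge, as a fraction of image height
    w: float  # width, as a fraction of image width
    h: float  # height, as a fraction of image height


def describe(detections: List[Detection]) -> str:
    if not detections:
        return "No description available"
    # Treat the largest detection by area as the primary object in the scene.
    primary = max(detections, key=lambda d: d.w * d.h)
    # Rough horizontal position of the primary object's center.
    cx = primary.x + primary.w / 2
    position = "in the center" if 0.33 < cx < 0.67 else "toward the edge"
    others = [d.label for d in detections if d is not primary]
    description = f"Maybe {primary.label} {position} of the photo"
    if others:
        description += ", with " + ", ".join(others)
    return description


# Example: a mountain dominating the frame with a house at its base.
print(describe([
    Detection("a mountain", 0.1, 0.0, 0.8, 0.9),
    Detection("a house", 0.45, 0.7, 0.1, 0.2),
]))
# -> "Maybe a mountain in the center of the photo, with a house"
```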
In the earlier stages of AAT, Facebook shared that the same type of identification would eventually come to videos, though that capability is not yet available. Such an expansion would boost the social platform’s ability to cater to visually impaired users. Ultimately, Facebook would also gain insights into what is in posted content, what users are watching, and what they are engaging with.
In building this upgrade, Facebook used Instagram images and hashtags to map content, which underlines a potential for data collection that could go far beyond helping the visually impaired. It is thought that Facebook could look to help advertisers reach users who like or comment on specific content by serving them similar products or services.