A responsibility we have as researchers is to disseminate the results of our research widely. A primary way we do this is through research publications. When these publications are not accessible to everyone, some readers will be excluded and the impact of our research limited. In this paper, we explore this problem in two ways. First, we report on the accessibility of 1,811 papers in the technical program of several top conferences related to accessibility and human-computer interaction. Second, we reflect on our experience making papers accessible for any CHI 2015 author who requested it. We offer thoughts on research challenges and future work that may make our community's research more accessible.
4. Video by NCSU IT Accessibility, available at https://www.youtube.com/watch?v=GaNwnsT4B5s
Without Tags Reading Can Be Confusing
5. What is the state of accessibility for conference PDFs?
Automated check of all papers from 2011-2014: 1,811 papers
Manual checks on ASSETS, W4A papers from 2014: 26 papers
6. Automated Checks
More documents with tags
[Bar chart: Percent of Documents with Tags, 2011-2014 (y-axis 0-100%)]
9. Automated Checks
• Automated checks show improvement
– drops may be due to expanding community, prevalence of Mac Word
– W4A now requires accessible PDFs
• However, automated checks are measuring
metadata presence, not correctness
11. Manual Checks: Room to Improve
26 papers in 2014 (ASSETS, W4A technical track)
• 62% passed Acrobat’s full accessibility check
• Also examined specific metrics:
– 73% had alt-text for all images
– 85% had tab order specified
– 11.5% had the title tagged as an H1
12. Making CHI 2015 Accessible
• Tagged 25 papers for other authors
• Process could be done by a non-author
– structural tags, image and figure descriptions
• Time costs were mostly front-loaded
13. Making PDFs of research accessible is important, but expensive and difficult.
18. Embedding Tagged Fonts
1. Open Preflight
2. Run “Embed fonts (even if text is invisible)”
3. Preflight lists unembedded fonts
4. Locate each instance of an unembedded font
5. Delete the artifact
20. Discussion
• Automated measures are improving
• Papers are not fully accessible
• How can we keep improving?
– Better tools
– Alternative document formats
– Include in publication process
21. Creating Accessible PDFs for
Conference Proceedings
Erin Brady (University of Rochester/IUPUI)
Yu Zhong (University of Rochester/Google)
Jeffrey P. Bigham (Carnegie Mellon University)
Editor’s Notes
Our research is disseminated as PDFs. This is what appears on the ACM digital library, and it’s what appears on our web pages.
People with disabilities are under-represented in science. Papers that cannot be easily accessed and used by people with disabilities cannot help to change that.
But perhaps this is okay, because a PDF can be made accessible by adding structural tags and metadata. While a screenreader can make basic automatic assumptions about how to read an untagged document, it cannot generate information like alternative text for images or figures, and its guesses are often incorrect.
This is a video created by NCSU’s accessibility team, which shows how a screenreader reads an untagged two-column document. The visual reading order for the text is [read paragraphs], but due to the two-column format, the screenreader reads each line as a whole, rather than reading each column separately.
In order to see how to improve accessibility of conferences, we first wanted to check how accessible current conference proceedings are. We examined the proceedings of three conferences - CHI, W4A, and ASSETS – from 2011 to 2014, looking at a total of 1,811 papers.
We chose these conferences specifically – CHI gave us access to a large amount of data, with close to 400 papers per year, while the communities of W4A and ASSETS are focused around accessibility, and may be more representative of what accessible conference proceedings could look like when information is generated by the authors.
We ran both automated and manual checks. The automated checks mostly revealed improvements, with more documents being tagged each year.
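A crude version of such an automated tag check can be done without Acrobat: a tagged PDF declares `/MarkInfo << /Marked true >>` in its document catalog. The sketch below is our own heuristic, not a real PDF parser (the function name is ours, and it will miss flags stored inside compressed object streams), but it illustrates the kind of metadata-presence test the automated checks perform.

```python
import re

def looks_tagged(pdf_bytes: bytes) -> bool:
    """Heuristic: a tagged PDF's catalog contains /MarkInfo with /Marked true.

    Scans raw bytes, so it cannot see flags inside compressed object
    streams -- a real checker would parse the catalog dictionary.
    """
    # Prefer an inline /MarkInfo dictionary and check it sets /Marked true.
    m = re.search(rb"/MarkInfo\s*<<(.*?)>>", pdf_bytes, re.DOTALL)
    if m:
        return re.search(rb"/Marked\s+true", m.group(1)) is not None
    # /MarkInfo may be an indirect reference; fall back to a global scan.
    return re.search(rb"/Marked\s+true", pdf_bytes) is not None

# Example on minimal synthetic catalog fragments:
tagged = b"1 0 obj << /Type /Catalog /MarkInfo << /Marked true >> >> endobj"
untagged = b"1 0 obj << /Type /Catalog >> endobj"
print(looks_tagged(tagged), looks_tagged(untagged))  # True False
```

As the slides note, a check like this measures only whether the metadata exists, not whether it is correct.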
We also performed manual accessibility checks on the 26 papers from ASSETS 2014 and the W4A 2014 technical track. We ran the full accessibility check in Adobe Acrobat, which 62% of papers passed; this check is usually the first indicator of whether a document is accessible.
We then checked for specific indicators of accessibility: alternative text on all images, a specified tab order, and correct use of structural tags. Alt-text and tab order, two items commonly mentioned in ACM accessibility guides, were relatively well covered, with higher compliance rates than the full-check pass rate.
However, structural tags were much less consistent: only 12% of papers had the title tagged as an H1. There is clearly room to improve on these kinds of manual accessibility tags.
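The presence of these structural tags can be probed with the same raw-byte approach. The sketch below is again our own heuristic (it will miss tags inside compressed streams and cannot judge whether an alt text is actually meaningful, which is exactly why manual review is needed); it counts `/Figure` elements, `/Alt` entries, and looks for an `/H1` structure element.

```python
import re

def tag_summary(pdf_bytes: bytes) -> dict:
    """Rough counts of accessibility-related structure tags found in a
    PDF's uncompressed object definitions (heuristic, not a parser)."""
    return {
        "figures": len(re.findall(rb"/S\s*/Figure\b", pdf_bytes)),
        "alt_texts": len(re.findall(rb"/Alt\s*\(", pdf_bytes)),
        "has_h1": re.search(rb"/S\s*/H1\b", pdf_bytes) is not None,
    }

sample = (b"2 0 obj << /S /Figure /Alt (Bar chart of tagged papers) >> endobj "
          b"3 0 obj << /S /H1 >> endobj")
print(tag_summary(sample))  # {'figures': 1, 'alt_texts': 1, 'has_h1': True}
```

A mismatch between `figures` and `alt_texts` flags images that lack alternative text; whether the alt text is correct still requires a human.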
In order to explore this in practice, we began an effort to improve the accessibility of CHI 2015. Authors could send us their camera-ready papers, which we tagged to make them more accessible. We tagged 25 papers, and while we did not record any metadata about the papers to analyze, we can reflect on the experience here.
For the most part, the process of tagging could be done by a non-author of the paper – structural tags are clear from the ACM format, and image and figure descriptions were (for the most part) easy to generate, sometimes requiring a little reading through the text of the paper. The biggest time cost was the process of learning the intricacies of Acrobat in order to tag the documents well.
So while we want to make sure our conference proceedings are accessible, we know it’s a resource-intensive process and that many people may struggle to tag their documents correctly.
First is the financial cost – it’s easiest to add additional metadata to a PDF using Adobe Acrobat, which is a relatively expensive piece of software for an individual. While Microsoft Word users can add metadata before exporting their documents to PDFs, this option is not available for LaTeX or other authoring tools, and there’s no way to verify or edit the accessibility tags without Acrobat.
PDF accessibility can also result in time costs. It takes a long time to learn how to make PDFs accessible – there are long online guides, and many 100+ page books exist to try to teach the principles of accessible PDFs.
It’s also time-intensive to verify that a document is accessible, since automated checkers can only catch simple errors (e.g., an image has no alternative text) but cannot verify the correctness of tags or metadata.
It’s also hard to master these skills – even for this paper on PDF accessibility, we didn’t get the tagging completely correct, and had to do some last-minute corrections.