Virtual worlds are not accessible to users who are visually impaired, as they lack any textual representation that can be read with a screen reader. We developed an interface, modeled after text-based adventure games such as Zork, that allows a screen reader user to iteratively interact with the popular virtual world Second Life.
TextSL: A Screen Reader Accessible Virtual World Client for Second Life
1. TextSL: A Command-Based Virtual World Interface for the Visually Impaired
Eelke Folmer, Bei Yuan, Manjari Sapre, Dave Carr - ASSETS 2009, Pittsburgh
Human Computer Interaction Research
University of Nevada, Reno
4. Virtual Worlds
Second Life, World of Warcraft, Habbo Hotel
Control avatar through 3rd-person interface
Commercial success
80% of internet users will have a VW account [Gartner]
Potential to replace the web
5. VW Taxonomy
Game VW: story, goals, score, combat, death
Social VW: no game elements, user-generated content, social communities, plethora of different usages & experiences
6. Virtual World Viewer
Education, museums, communities
"browser"-like functionality:
» exploration/navigation
» communication
» interaction
» content creation
8. Software Interaction
natural interaction
more immersive experience
9. Barriers to Access
Switch / scanning: high degree of interaction required
Screen reader: no textual representation (location? objects? avatars?)
Audio: provided audio inadequate
10. Research / Motivations
Second Life, no functional vision
Education (Section 508)
Socialization opportunities
Disabled communities (Virtual Ability)
Include users with VI in our Information Society
14. Requirements
Exploration
Communication (most important!)
Interaction
Content creation
Usable
Accessible
15. Solution Strategies
Second Life: 3rd-person interface
VI-accessible games: Shades of Doom, AudioQuake, Terraformers, Powerup
Output: synthetic speech, audio cues, earcons, sonar
Input: arrow keys, shortcuts
16. VW Limitations
Output: applicable to Second Life?
» Audio cues: no (we don't control the content; lots of different objects)
» Earcons / sound radar: no (no combat)
» Synthetic speech: yes (objects have a name field)
Input: applicable to Second Life?
» Arrow keys: maybe (difficult to navigate)
» Shortcuts: no (a generic "use" is too limited for interactions such as buy, take, sit on, drive car, open door)
17. Multi-User Dungeon Games
Zork: text only, command based, iterative, natural language, screen reader accessible!
TextSL: extracts text from SL, screen reader customization, VW agnostic, runs on a low-end machine
18. Interpreter / Commands
Synonyms (walk, go) map onto one internal command (move), e.g. "move to the chair"
Natural language: prepositions, adjectives, e.g. "give my flower to jane"
Exploration (move, teleport, describe, where)
Communication (say, whisper, mute)
Interaction (sit on, touch)
Content creation (not yet)
Support (help / tutorial)
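The synonym mapping and preposition handling above can be sketched in a few lines. This is an illustrative Python sketch, not TextSL's actual code; the verb table and filler-word list here are assumptions for the example.

```python
# Map synonym verbs onto one internal command, and strip filler words
# so "move to the chair" becomes ("move", ["chair"]).
SYNONYMS = {
    "walk": "move", "go": "move", "move": "move",
    "say": "say", "talk": "say",
    "sit": "sit_on",
}
FILLER = {"to", "on", "at", "the", "a", "an", "my"}

def interpret(line):
    """Return (internal_command, arguments) for a typed command."""
    words = line.lower().split()
    if not words or words[0] not in SYNONYMS:
        return ("unknown", words)
    verb = SYNONYMS[words[0]]
    args = [w for w in words[1:] if w not in FILLER]
    return (verb, args)
```

With this sketch, `interpret("walk to the chair")` yields `("move", ["chair"])`; the same internal command fires for "go" and "move".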
19. TextSL Output
Spatial queries: "you see a chair, a dog, a fire and jill." / "the chair is to your right"
Interaction: "sit on chair."
Communication: "say hi to jill"
» Direction agnostic, 360-degree range
» Spatial queries, interact & communicate via screen reader
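A phrase like "the chair is to your right" can be derived from the avatar's heading and the object's position. A minimal sketch, assuming a 2D plane with heading 0 = north and 360-degree range; this is not TextSL's actual implementation.

```python
import math

def relative_direction(avatar_xy, heading_deg, object_xy):
    """Describe where an object is relative to the avatar's facing
    direction, e.g. for output such as 'the chair is to your right'.
    Heading 0 means facing +y (north), measured clockwise."""
    dx = object_xy[0] - avatar_xy[0]
    dy = object_xy[1] - avatar_xy[1]
    # Bearing of the object, clockwise from north.
    bearing = math.degrees(math.atan2(dx, dy))
    # Angle of the object relative to the avatar's heading, in -180..180.
    rel = (bearing - heading_deg + 180) % 360 - 180
    if -45 <= rel <= 45:
        return "ahead of you"
    if 45 < rel < 135:
        return "to your right"
    if -135 < rel < -45:
        return "to your left"
    return "behind you"
```

Because the description is computed from the avatar's own heading, the output stays direction-agnostic: the same object reads as "ahead of you" or "to your right" depending on which way the avatar faces.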
20. Second Life Content
[region map: avatars (jill, moe, jack, curly, larry) among many objects, some with very long names, many unnamed "?"]
Densely populated with objects
» Overwhelm users with feedback
» Difficult to navigate collision-free
Lack of meta data (40%)
» Underwhelm users with feedback
21. Summarizer & Path Finding
"you see 5 people and 12 objects."
>describe objects
"you see a tree, a dog, a cat..."
>describe tree
"this is a green spruce tree."
Summarizer:
» Cull non-descriptive objects
» Rank objects on distance and name length
Pathfinding: >move north 20
» Mental mapping? Normalization [Nirje]
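The summarizer's two rules (cull non-descriptive objects, rank on distance and name length) can be sketched as below. This is a hypothetical sketch; the tuple representation and the `limit` parameter are assumptions for the example, not TextSL's actual data model.

```python
# Cull objects left at the default name "object" (they carry no
# information) and rank the rest: nearer objects first, and among
# equally near ones, longer names first, on the assumption that a
# longer name implies a more accurate description.

def summarize(objects, limit=5):
    """objects: list of (name, distance) tuples; returns the names
    most worth reading aloud, best first."""
    described = [(n, d) for (n, d) in objects
                 if n.strip().lower() != "object"]
    described.sort(key=lambda nd: (nd[1], -len(nd[0])))
    return [n for (n, _) in described[:limit]]
```

Feeding it a cluttered scene like `[("object", 1.0), ("green spruce tree", 4.0), ("dog", 2.0)]` drops the unnamed prim and reads out the dog before the more distant tree.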
27. Demo
29. How to Evaluate a VW?
Education: supports learning? Business: how profitable? Games: how much fun?
Usage defined by content
Focus on evaluating browser functionality:
» exploration
» interaction
» communication
30. Hypotheses
TextSL: command-based interaction, screen reader output
Accessibility H0: TextSL allows exploration, communication, and interaction with the same success rate as the SL viewer
Usability H1: TextSL allows exploration, communication, and interaction with the same learnability, efficiency, memorability, errors & satisfaction [Nielsen] as the SL viewer
31. User Study Design
8 sighted users (SL Viewer), 8 screen reader users (TextSL); video and logs recorded on a tutorial island with an automated agent
1. Tutorial (pass)
2. (explore, interact, talk)+
3. Teleport to new location
4. Repeat 2
5. Play with client (5-20 minutes)
6. Questionnaire
32. Task Completion Rates
[bar chart: task completion counts for exploration, communication, and interaction; Second Life vs TextSL]
Accessibility: accept H0 (Fisher's exact probability test, P=1.0, α=0.01)
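The P=1.0 above comes from Fisher's exact test on a 2x2 completion table. The following is an independent sketch of the two-sided test (not the authors' analysis code), using only the standard library:

```python
from math import comb

def fisher_exact_p(a, b, c, d):
    """Two-sided Fisher's exact test p-value for the 2x2 table
    [[a, b], [c, d]], e.g. tasks completed vs failed per client.
    Sums hypergeometric probabilities of all tables with the same
    margins that are at most as probable as the observed one."""
    row1, row2, col1 = a + b, c + d, a + c
    n = row1 + row2

    def prob(x):  # P(cell (1,1) == x) with margins held fixed
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = prob(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    return sum(prob(x) for x in range(lo, hi + 1)
               if prob(x) <= p_obs + 1e-12)
```

With 8/8 completions on both clients the table is [[8, 0], [8, 0]]: only one table with those margins exists, so the p-value is 1.0, consistent with the slide.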
34. Questionnaire
[bar chart of Likert-scale ratings, 1 (good) to 5 (bad), for the command interface, screen reader output, efficiency, and feedback; means ranged from 1.6 to 2.8]
35. Usability
Attribute    | Metric                 | TextSL        | SL Viewer
Learnability | successful completion  | 1.0 (SD=0.0)  | 1.0 (SD=0.0)
Efficiency   | task completion times  | exploration, communication, interaction | exploration, communication, interaction
Memorability | help / help menu       | 1.0 (SD=0.7)  | 0.0 (SD=0.0)
Errors       | unrecognized commands  | 0.75 (SD=0.7) | 0.38 (SD=0.2)
Satisfaction | questionnaire          | < 2.5         | -
Reject H1
36. Conclusions
TextSL is:
Accessible ✔
Usability: Slower ✖
»but acceptable (questionnaire) ✔
»Communication is efficient ✔
Command based approach feasible
»content creation
»object interaction
38. Meta Data / Feedback
Lack of meta data
» Raise awareness among VW developers
» Enforce names for objects
» Post hoc automatic labeling
Feedback
» Audio for objects (e.g. "miaw" for a cat)
» Voice over IP
» Taxonomy: "you see 3 animals"
39. Content Creation / Interaction
Interaction with scripted objects?
» "the billboard started playing a video."
» "the object turned red."
Command-based 3D object creation?
» >create a green cube
» >create a dog
40. Acknowledgements
IIS-0917362: HCC-Small: TextSL: A Virtual World Interface
for Visually Impaired. (Eelke Folmer / George Bebis)
IIS-0738921: HCC-SGER: Developing an Accessible Client
for Secondlife (Eelke Folmer)
feedback? download?
more info?
HTTP://www.textsl.org
contribute? collaborate?
Dave Carr Manjari Sapre Bei Yuan
41. Questions?
Editor's Notes
Hello, My name is Eelke Folmer and I am an assistant professor at the University of Nevada, in Reno.
I am here today to present the results on developing a command based interface that allows users who are visually impaired to access popular virtual worlds such as Second life using a screen reader.
My talk will have the following structure: First I am going to introduce virtual worlds for those in the audience that are unfamiliar with them.
Second, I am going to explain some of the barriers that users who are visually impaired face when trying to access virtual worlds. Then I'm going to present TextSL, an interface that allows screen reader users to access the popular world of Second Life. Then I'm going to present the results of a user study we conducted, and I'll finish this talk by discussing some areas for future research.
Second Life, Habbo Hotel, and World of Warcraft are some of the most popular virtual worlds that offer rich three-dimensional experiences. Users control a virtual representation of themselves, called an avatar, with human-like capabilities such as walking and gesturing, through a game-like third-person interface. Users can explore vast worlds and interact with objects in those worlds or with other avatars.
Virtual worlds have experienced significant commercial success. World of Warcraft has more than 12 million registered accounts, and Second Life more than 16 million. World of Warcraft generated more than 1 billion dollars in revenue, and Second Life's economy is bigger than that of a small African country.
Information technology research company Gartner estimates that 80% of active internet users will have some virtual world account in 2012, and estimates the current number of virtual citizens to be around 50 million. Because of their success, virtual worlds may someday replace the world wide web.
Two types of virtual worlds can be distinguished that are fundamentally different from each other. On one hand we have game-like virtual worlds, such as World of Warcraft or Star Wars Galaxies, that have all the typical elements found in games: a story line, goals to achieve, score, combat, and the ability for your character to die. These game-like virtual worlds are also commonly referred to as massively multiplayer online games. On the other hand we have social virtual worlds, such as Second Life, that have the same third-person interaction mechanism but lack typical game elements such as story, score, and combat. Social virtual worlds are significantly different in that they allow their users to create content; consequently everything in the virtual world is created by its users, which allows for a wide variety of experiences.
When we talk about virtual worlds replacing the world wide web, social virtual worlds have much higher potential because of their large degree of customization, which is similar to how the world wide web works. Consequently the focus of our research is on social virtual worlds, which from now on I will refer to simply as virtual worlds.
Because of their high degree of customization, social virtual worlds such as Second Life have successfully been used for numerous purposes. For example, many universities in the US have used Second Life as a virtual classroom. The Tech Museum in San Jose provides artists the opportunity to exhibit their virtual art.
Second Life has hundreds of communities; for example, Brigadoon is a community for individuals with Asperger syndrome.
Second Life is being used for numerous other purposes, such as business, politics, religion, research, and health care.
Users interact with virtual worlds through a client application called a viewer. This viewer is similar to a browser and supports a number of basic functions, such as navigating your avatar by walking, flying, or teleporting. Communication is supported through a text-based chat interface or, recently, using voice over IP. The viewer supports interacting with interactive objects that allow users to drive a car, play a game, or watch a video. The creation of content is an important function of social virtual worlds, as it drives their expansion as well as the development of virtual economies; in Second Life numerous users make a living selling virtual content.
The popularity of 3D immersive environments such as Second Life, video games, or Google Earth is evidence that software is increasingly modeled after how we interact with the real world, as such interaction is the most natural to us. For example, you can visit Wikipedia to read about the Eiffel Tower, but in a virtual world such as Second Life you can visit a 3D model of the Eiffel Tower with your avatar, which allows for real-life interaction such as climbing the tower and hence a more immersive experience.
Over the past decade, a lot of assistive technology has been developed that allows users with disabilities to access information technology such as the internet or email. For example, browser plugins exist that allow users with severe motor impairments to access the internet using switch access. Users who are visually impaired use screen readers such as JAWS or Window-Eyes, which turn text into synthetic speech. Various other assistive technologies exist for other types of impairments.
Virtual worlds, however, despite the social interaction opportunities they could offer to users with disabilities, have been identified as inaccessible. Navigating an avatar in 3D space requires a significantly higher degree of interaction than what can be provided through switch input. Users who are visually impaired are excluded as well, because virtual worlds are entirely visual and lack any textual representation that can be read with a screen reader or tactile display. Though some objects in virtual worlds provide sound, it is not provided in such a way that someone who is visually impaired can meaningfully navigate, interact, or communicate within Second Life.
Though different barriers exist for different types of impairments, our research focuses on how users who are visually impaired can interact with virtual worlds. Due to its popularity and the fact that its viewer is open source, we specifically focus on Second Life. Our research also focuses on accommodating those with the most severe vision impairments, e.g. users who have no functional vision. Our research is further motivated by the notion that virtual worlds allow you to be someone completely different from who you are in real life, and as such they are agnostic to race, gender, age, and disability. Virtual worlds are also increasingly used as education tools; to make their use comply with Section 508 of the Rehabilitation Act, we need to investigate how to make them accessible. Virtual worlds could also offer users with vision impairments, who are often isolated and lonely, a place to meet new people.
So, based on the functionality offered by virtual world browsers, we identified that an accessible virtual world client must support navigation, communication, interaction, and content creation, and it must do so in a usable and accessible way. Of these four functions, communication is actually the most important, as several studies of virtual worlds indicate that the majority of time spent in virtual worlds is spent on social interaction.
Rather than reinvent the wheel, we looked at games for users who are visually impaired that have interaction mechanisms similar to those of virtual worlds. A number of first-person shooters exist that use various forms of audio output, such as synthetic speech, audio cues, or sonification-based techniques such as earcons or sonar. Terraformers, for example, provides you with an audio description of where you are; audio cues indicate enemies, and a sonar-like mechanism is used to estimate distances.
For input, these games use the same controls as sighted users, e.g. arrow keys and hotkeys to interact with objects or other characters.
For each of these audio solutions, we determined whether it could be applied to Second Life, and we identified the following limitations.
If you develop an audio game, then you as the developer own the content, and you can easily augment objects with audio cues or earcons. In Second Life everything is owned by the users, and consequently you don't have the rights to change existing objects. Also, where audio cues make sense when you walk around in a dungeon with monsters, in Second Life there are simply too many objects, which are difficult to represent using audio or earcons. Techniques such as earcons or a sound radar have the goal of letting a user quickly identify enemies, but as there is no combat in Second Life, there is no need for such mechanisms.
Synthetic speech seems to be the only applicable solution, as objects do have a name field and sometimes a description.
With regard to input, although it would be possible to navigate a character with arrow keys, we found that virtual worlds like Second Life are very densely populated with objects, which makes navigation difficult without getting stuck or bumping into objects all the time. The biggest problem we found, however, is interacting with objects and avatars. A shortcut-based approach doesn't scale to support numerous types of interactions with large numbers of objects and avatars. Avatars can interact with objects in several ways, but objects themselves can be scripted to provide new functionality: you could open a car door or drive a car. Something like that is difficult to support through hotkeys, as you would run out of keys on your keyboard, not to mention that the user must memorize all these interactions. We needed something more flexible.
To support various interactions we needed something more flexible, and for the design of TextSL we were inspired by the interaction of multi-user dungeon games. Multi-user dungeon games are the precursors of virtual worlds; they allow multiple users to interact with each other and the game through a command-based text interface. Multi-user dungeon games support all the functionality we need in a virtual world browser, namely exploration, navigation, interaction, and communication.
To make virtual worlds accessible, we developed an interface called TextSL. This is a standalone application that can extract a textual description from a virtual world that can be read with a screen reader. We went for a screen-reader-based approach because users who are visually impaired use them a lot, and they allow for detailed customizations, often more than what can be done with synthetic speech provided by an API. We use the LibSecondLife library, which is an API for connecting to the Second Life servers. We encapsulated this library so we can easily connect to other virtual worlds, allowing TextSL to be used as a virtual-world-agnostic research platform. Because we don't do any rendering, it can run on a low-end machine, possibly even on a smartphone.
Users interact with Second Life using a number of commands for exploration, communication, and interaction; we don't support content creation yet. We also offer a help command and a tutorial. An interpreter allows us to use natural language: a number of synonym verbs are mapped onto the same internal command. The interpreter also allows for prepositions and adjectives, which makes it possible to efficiently support a large number of different interactions with avatars and objects, e.g. "give my flower to jane", something that would be hard to do using shortcuts.
Output works as follows: once your avatar is logged in, TextSL retrieves a list of objects and avatars around it and then tries to answer any spatial queries from the user with a meaningful descriptive narrative. Once the names of objects and avatars are known, users can interact or communicate with them.
We identified two big problems when trying to compile meaningful narratives from object names.
1) Second Life is very densely populated with objects. We built a bot that acts like a spider, which analyzed large regions of Second Life, and we found that on average you can find 13 objects within a 10-meter radius around the user. Some object names may be really long, so you can easily overwhelm the user with feedback when you put it all through a screen reader. It also makes it very hard to navigate anywhere within Second Life without running into things.
2) Another problem, actually the opposite of the first, is that many objects lack meta data. When you create an object in Second Life you can give it a name, but as most content creators figure that you can see what the object is, almost 40% of the objects in Second Life are called "object". This is a problem: you hear "object object object object".
We developed two solutions for these problems.
The first is a summarizer that tries to synthesize information. It culls non-descriptive objects and ranks the remaining objects based on their distance and name, assuming that a longer name implies a more accurate description. We implemented a generic describe (or look) command that returns the number of objects and avatars around you, which you can then iteratively invoke on subsets to get more information.
The second solution is a more sophisticated pathfinding technique that ensures you go where you want to go without having to find the way yourself. When you use arrow keys, you inherently need to do the pathfinding yourself, which is good in audio games as it helps with mental mapping of spaces. Those spaces are typically small, but Second Life is vast and people build structures in the sky, which makes mental mapping a lot harder. Also, with pathfinding your avatar moves like any other avatar, so visually impaired users are not characterized by lower mobility as they often are in real life.
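Collision-free navigation of the kind described here can be sketched as a breadth-first search over an occupancy grid. This is an assumed representation for illustration only; the talk does not specify how TextSL's pathfinding is implemented.

```python
from collections import deque

def find_path(grid, start, goal):
    """Breadth-first search on an occupancy grid (True = blocked).
    Returns a shortest list of (row, col) cells from start to goal,
    or None if the goal is unreachable. A sketch of collision-free
    movement for a command like '>move north 20'."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}          # also serves as the visited set
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []             # walk predecessors back to start
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and not grid[nr][nc] and (nr, nc) not in prev:
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None
```

Because the search routes around blocked cells, the avatar steps past obstacles instead of bumping into them, which is exactly the burden arrow-key navigation puts back on the user.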
Here's a small demo.
We conducted a user study.
So how do we evaluate a virtual world? Virtual worlds have many different usages, from virtual classrooms to business to playing games. We are tempted to evaluate Second Life as a game or an education tool, but essentially that is a function of the content and not of the browser; when we evaluate an education website, we are not evaluating the browser. Therefore for TextSL we must compare how well it supports the basic browser functionality, which is exploration, interaction, and communication.
So we are interested in evaluating TextSL's features, which are its command-based interface and its screen reader output.
We are interested in its accessibility: can users explore, communicate, and interact with the same success rate as with the SL viewer? The beyond-accessibility-to-efficiency principle states that accessible software must be more ambitious than just providing access; therefore we evaluate the usability of TextSL using Nielsen's usability attributes, specifically whether users can explore, communicate, and interact with the same learnability, efficiency, memorability, error rate, and satisfaction as with the SL viewer.
For our user study we recruited 8 sighted individuals and 8 screen reader users. We rented an island so we did not have any interference from other users and could build a controlled environment. Each user first received a tutorial on how their respective client worked. None of the sighted users had accessed Second Life, though some had played first-person shooters. The tutorial asks each user to perform certain tasks with regard to exploration, interaction, and communication. When the tutorial finishes, the user is teleported to a location on our island where they have to perform a number of tasks related to exploration, interaction, and communication. Failing one task does not affect the outcome of others. These tasks are repeated at a different location. Both groups of users took the exact same test with their respective clients. After the test, screen reader users could play with the client and were then interviewed using a questionnaire to collect their subjective experiences. We actually built an automated system for conducting remote and local user tests, but for those details I refer to our paper.
Here are the task completion results for the different types of tasks, and they show that both groups of users were able to successfully complete all their assigned tasks. Based on these results we accept H0.
Evaluating H1 is more difficult. This chart shows the average task performance times for the tutorial and the different tasks. Unfortunately, exploration and interaction are significantly slower. Communication is the same, which is possibly explained by the iterative nature of communication: for exploration and interaction, users first need to query the names of the objects before they can interact, whereas in the Second Life viewer you can do this at the same time.
After the user study, screen reader users filled in a questionnaire where we asked them to rate the various features of TextSL using a Likert scale. Overall the feedback was positive. Only with regard to the amount of feedback provided did users give conflicting answers: some found the feedback too little and others overwhelming.
Summarizing, for each of Nielsen's attributes we got the following results.
Some conclusions: we verified that TextSL is accessible. Interaction is slower, but according to the questionnaire users found this acceptable. Communication is just as efficient as in the SL viewer, which is good as it is the most important function to support.
We are confident that the benefits of a command-based approach will become evident once we add more object interaction and content creation functions.