



          The User as a Sensor:
Navigating Users with Visual Impairments
 in Indoor Spaces using Tactile Landmarks




      Navid Fallah, Ilias Apostolopoulos, Kostas Bekris, Eelke Folmer
                            Human Computer Interaction Lab
                                University of Nevada, Reno
Navigation




Humans navigate using:
 »Path integration
 »Landmark identification
Sighted people primarily rely on vision

Users with Visual Impairments




Navigate using compensatory senses (touch,
 sounds, smell)
Landmark identification is significantly slower
Reduced mobility & lower quality of life.

Navigation Systems
GPS works outdoors ✅ but GPS signals cannot be received indoors
Indoor Localization Techniques

»Dead reckoning (compass + step counter): +cheap, -inaccurate
»Beacons: +accurate, -expensive
»Sensor-based: +accurate, -poor usability




Can we develop
a better system?
Veering




outdoors vs. indoors




Dead Reckoning Localization
Step counter + compass
Error accumulates over time → sync with known landmarks (e.g., a door or a hallway)


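To make the dead-reckoning idea concrete, here is a minimal sketch (not the authors' implementation) of a position update driven by a step counter and compass, re-synchronized when the user confirms a known landmark. The fixed step length and noise-free heading are simplifying assumptions that the particle filter later relaxes.

import math

STEP_LENGTH = 0.7  # assumed average step length in meters (illustrative value)

def dead_reckon(position, heading, steps):
    """Advance the position estimate by `steps` steps along `heading` (radians)."""
    x, y = position
    d = steps * STEP_LENGTH
    return (x + d * math.cos(heading), y + d * math.sin(heading))

def sync_to_landmark(position, landmark_position):
    """When the user confirms a landmark, snap the estimate to its known location."""
    return landmark_position

# Example: walk 10 steps heading east, then confirm a hallway intersection at (7.2, 0.1).
pos = dead_reckon((0.0, 0.0), 0.0, 10)
pos = sync_to_landmark(pos, (7.2, 0.1))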
[Slide sequence: a cane user follows the environment and identifies tactile landmarks along the way: wall, door, hallway intersection]
Combining techniques

Dead reckoning (compass + step counter): +cheap
The user acts as the sensor, confirming tactile landmarks (e.g., doors): +accuracy
Representation




KML 3D model → geometry parser → navigable map


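As a rough illustration of what a geometry parser might produce, the sketch below rasterizes wall segments from a floor plan into a 2D occupancy grid and keeps landmarks as labeled points; the data format, cell size and landmark names are assumptions for illustration, not the paper's actual format.

import math

def rasterize_walls(wall_segments, cell=0.25, width=40.0, height=20.0):
    """Return a 2D occupancy grid (True = blocked by a wall) from 2D wall segments."""
    nx, ny = int(width / cell), int(height / cell)
    grid = [[False] * ny for _ in range(nx)]
    for (x1, y1), (x2, y2) in wall_segments:
        samples = max(1, int(math.hypot(x2 - x1, y2 - y1) / cell) * 2)
        for i in range(samples + 1):
            t = i / samples
            ix = int((x1 + t * (x2 - x1)) / cell)
            iy = int((y1 + t * (y2 - y1)) / cell)
            if 0 <= ix < nx and 0 <= iy < ny:
                grid[ix][iy] = True
    return grid

# Landmarks (doors, hallway intersections, water coolers) are kept as labeled points.
landmarks = {"door_231": (12.5, 3.0), "hallway_intersection_A": (18.0, 3.0)}
grid = rasterize_walls([((0.0, 0.0), (20.0, 0.0)), ((20.0, 0.0), (20.0, 6.0))])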
Direction provision
Shortest path using A*
Generate directions:
 1. Move to a landmark
 2. Turn direction
 3. Action on a landmark




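A compact sketch of this pipeline, assuming the navigable map has already been reduced to a graph whose nodes are landmarks with known coordinates; the graph format, node names and the simple "move to landmark" phrasing are illustrative, and the turn/action directions are omitted.

import heapq, math

def a_star(graph, coords, start, goal):
    """Shortest landmark-to-landmark path. graph: {node: [neighbors]}, coords: {node: (x, y)}."""
    def h(n):  # straight-line distance heuristic
        return math.dist(coords[n], coords[goal])
    frontier = [(h(start), 0.0, start, [start])]
    best_g = {}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if best_g.get(node, float("inf")) <= g:
            continue
        best_g[node] = g
        for nxt in graph[node]:
            g2 = g + math.dist(coords[node], coords[nxt])
            heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None

def to_directions(path):
    """Turn a landmark path into simple 'move to landmark' instructions."""
    return [f"Move to {b}" for b in path[1:]]

graph = {"office_door": ["hallway_A"], "hallway_A": ["office_door", "water_cooler"],
         "water_cooler": ["hallway_A"]}
coords = {"office_door": (0, 0), "hallway_A": (6, 0), "water_cooler": (6, 9)}
print(to_directions(a_star(graph, coords, "office_door", "water_cooler")))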
Interface
“Follow the wall to your right until you reach a hallway intersection”
(user finds the hallway and taps the screen)
“Turn right into the Hallway”




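A hedged sketch of the confirm-by-tap loop shown above; speak() and wait_for_tap() are placeholders standing in for the phone's text-to-speech output and touch input, not real API calls.

def speak(text):
    print(f"[TTS] {text}")          # placeholder for synthetic speech output

def wait_for_tap():
    input("(tap = press Enter) ")   # placeholder for a screen-tap event

def guide(directions):
    """Speak each direction and advance only after the user confirms with a tap."""
    for d in directions:
        speak(d)
        wait_for_tap()              # user confirms the landmark was reached
    speak("You have arrived.")

guide(["Follow the wall to your right until you reach a hallway intersection",
       "Turn right into the hallway"])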
Particle Filters




x particles, each with a location and a weight
Locations are updated using the distribution of error in steps & compass
Weights are updated using map information and user input
Multiple filters are used to estimate step length
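A simplified sketch of the particle filter described here: each particle carries a position, a per-particle step-length estimate and a weight; the motion update perturbs particles with step and compass noise, and the measurement update reweights them using the map and the user's landmark confirmations. Folding step length into each particle is a simplification of the paper's multiple filters, and all parameter values and helper names are illustrative.

import math, random

class Particle:
    def __init__(self, x, y, step_length):
        self.x, self.y = x, y
        self.step_length = step_length      # per-particle step-length estimate (m)
        self.weight = 1.0

def motion_update(particles, steps, heading, step_noise=0.05, heading_noise=0.1):
    """Move every particle according to detected steps and compass heading plus noise."""
    for p in particles:
        d = steps * (p.step_length + random.gauss(0.0, step_noise))
        h = heading + random.gauss(0.0, heading_noise)
        p.x += d * math.cos(h)
        p.y += d * math.sin(h)

def measurement_update(particles, inside_map, landmark_xy=None, sigma=2.0):
    """Reweight: kill particles that walked through a wall; favor those near a confirmed landmark."""
    for p in particles:
        if not inside_map(p.x, p.y):
            p.weight = 0.0
        elif landmark_xy is not None:
            dist = math.dist((p.x, p.y), landmark_xy)
            p.weight *= math.exp(-(dist ** 2) / (2 * sigma ** 2))

def resample(particles):
    """Replace low-weight particles by drawing new ones proportionally to weight."""
    weights = [p.weight for p in particles]
    if sum(weights) == 0.0:
        weights = [1.0] * len(particles)
    chosen = random.choices(particles, weights=weights, k=len(particles))
    return [Particle(p.x, p.y, p.step_length) for p in chosen]

def estimate(particles):
    """Weighted mean of particle positions = current location estimate."""
    total = sum(p.weight for p in particles) or 1.0
    return (sum(p.x * p.weight for p in particles) / total,
            sum(p.y * p.weight for p in particles) / total)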
Prior studies

[landmark-based directions (intersection, door) vs. metric directions (20 steps, 15 steps)]

Feasibility study with 10 blindfolded users
Follow-up study with 8 blindfolded users
 »runtime computation of directions
 »multiple filters for estimating step length
User Study




Engineering Building (hallways/labs/offices)
11 paths over two floors
Landmarks: hallway intersections, doors, water coolers, floor transitions
Participants



Six users with visual impairments
3 female, average age 51.8 (SD = 18.2)
3 totally blind, 2 legally blind, 1 low vision
All used a cane for navigation
No prior cognitive map of the Engineering Building
Users followed 11 paths, holding the phone in hand

Ground Truth



StarGazer: $2,000, 3 days to install
vs.
SketchUp model: took 3 hours to create
Results
Quantitative
 »85% of paths completed successfully
 »Average error: 1.85 meters
 »Door counting had the lowest success rate
Qualitative
 »Directions were easy to follow
 »Allowed for efficient navigation
 »Users liked the system
 »Useful feedback on direction provision
Future work




Improve step detection / avoid scuttling
Evaluation in more complex environments
Planning more reliable paths.
questions?






Editor's notes

  1. Hi, I am Eelke Folmer and I'm here to present an indoor navigation system for users who are visually impaired, which I developed with two of my graduate students, Navid Fallah and Ilias Apostolopoulos, and my colleague Kostas Bekris.
  3. In navigation, humans basically use the following two techniques: 1) path integration, where users update their current position using proprioceptive data, and 2) landmark-based identification, where users locate themselves by recognizing landmarks stored on a map. Using both techniques together allows exploring new environments and building a cognitive map of the environment by observing landmarks. Sighted people primarily rely on vision to recognize landmarks.
  4. But people with visual impairments must rely on their compensatory senses such as touch, sounds and smell. Though no significant differences in path integration abilities between sighted and blind users have been found, landmark identification and cognitive mapping are significantly slower, which leads to reduced mobility and a lower quality of life for users with visual impairments.
  5. Several human navigation systems have been developed, which can be distinguished into outdoor and indoor systems. Whereas outdoor systems typically use GPS for localizing the user, indoor navigation systems must use a different technique, as GPS signals cannot be retrieved indoors.
  6. Existing indoor navigation systems typically use one of the following three techniques: 1) dead reckoning localization uses low-cost sensors, such as a compass and pedometer, to update the user's position based on observed motion; these techniques are cheap but inaccurate, as error in the location estimate propagates over time. 2) Beacon-based systems embed identifiers such as RFID tags in the environment, where a sensor senses a particular identifier upon which the user can be localized. These systems are accurate but often prohibitively expensive to install: though tags themselves are cheap, installing them in a large environment such as an airport is not. 3) Sensing-based approaches typically equip the user with a number of sensors, such as a camera, and locate the user by detecting pre-existing features of indoor spaces that have been recorded on a prior map. Though this technique can be accurate, from a usability point of view it requires the user to carry a number of sensors and computing equipment. For users with visual impairments this is undesirable, as they often already carry a cane and assistive devices such as a braille reader.
  7. Given that few of these systems have been implemented at a large scale, we set out to investigate: can we design a navigation system that is cheap to implement and easy to use for users with visual impairments? We started our project by analyzing navigation in indoor environments more closely.
  9. You need accurate localization to be able to avoid veering, e.g. when a user is navigating and deviating from the provided path. But you can argue that veering is less of a problem in indoor environments than in outdoor environments, as navigation is naturally constrained by the physical environment, such as walls and doors. So you could argue that for indoor navigation super precise localization may not be required.
  10. Of the three indoor localization techniques, dead reckoning is not very precise, but it can be implemented using features that are present in most smartphones, e.g., accelerometers and a compass. So you don't have to hook up the user with a bunch of sensors or add RFID tags to your environment. A problem with dead reckoning is that errors in estimating the user's location propagate and accumulate over time, but this can be avoided by periodically synchronizing the user's location with known landmarks in the environment, which could be hallway intersections, a door or a water cooler.
  11. Now let's look at how blind people navigate familiar environments using the cognitive map they have of that environment. The user navigates and senses the environment with a cane: the user finds a wall, then a hallway intersection. So the identification of landmarks, in this case tactile landmarks such as walls, hallway intersections and doors that can be recognized with a cane, already plays a major role in how blind people navigate.
  15. So we propose a navigation system that seamlessly integrates with how users with visual impairments already navigate, and we do this by combining elements of existing indoor navigation systems. We're using dead reckoning because it can be implemented on a smartphone, but we increase the accuracy of dead reckoning using a sensor/beacon based approach, where we turn the user into a sensor by having them confirm the presence of anticipated tactile landmarks along their path.
  16. For representing our environment we use a 3D model, as that can convey information such as ramps and low ceilings that are impediments to a blind user. Models are created in Sketchup and a simple geometry parser turns this into a navigable 2D map, which is more suitable for path planning. The parser extracts navigable space and identifies landmarks such as doors, slopes, and hallway intersections. Other types of landmarks are manually added to the 2D map. The use of 3D models as opposed to 2D models is further motivated by the fact that such models are becoming increasingly available on Google Earth.
  17. Using the 2D map and a start and target location we compute the shortest path using A-star. We then parse this path into directions, which are of the form: move to a landmark, where we use strategies such as wall following and door counting; turn directions; or an action on a landmark, such as opening a door.
  18. These directions are provided to the user using synthetic speech. Users provide input to our system by confirming the successful execution of the command by tapping their screen, upon which the next direction is provided.
  23. A problem with this approach is that we are dealing with a lot of uncertainty: the sensors we're using are noisy and there is uncertainty in the user input. So we are using a technique from the field of robotics, a Bayesian filtering technique called particle filters. Each particle contains an estimate of where the user is located and how probable this estimate is. The particle locations are updated based on a distribution of error in the steps detected and compass information. Weights are updated based on map information and on the user's confirmation of landmarks. Particles with low weight are replaced with new ones to improve the localization. The particle filters also allow for self-correction, e.g., we can detect whether a user has passed a landmark and generate new directions. We use multiple filters so as to also incorporate different estimates of the user's step length. Though this technique is fairly computationally expensive, it can run on most recent smartphones.
  32. Prior to the user study I present today we did two studies. The first study was conducted with blindfolded users and evaluated the effectiveness of metric directions versus landmark-based directions; we found that landmark-based directions containing fewer but more distinguishable landmarks gave much better performance. As the localization was performed offline, a second study with blindfolded users explored runtime direction provision and localization on the phone, where we also explored the use of multiple filters for estimating the user's step length.
  33. In this study we tested our system with the intended target demographic. Studies were performed in our Engineering Building, a multi-floor building with hallways, labs and offices. We created 11 paths over two different floors. Landmarks included doors, hallway intersections, water coolers and floor transitions. This image shows the map of the second floor.
  34. We recruited six users with visual impairments through the local chapter of the National Federation of the Blind. 3 were female and the average age was 51 years. 3 of these users were totally blind, 2 legally blind and one had low vision. All used a cane for navigation. Most importantly, none of the subjects had been in the Engineering Building before.
  35. For ground truth measurement we used a commercial beacon-based localization system called StarGazer. Reflective tags are installed in the ceiling and the user wears a camera and computing equipment on a belt to localize the user. This system has an accuracy of about 2 cm. From a cost perspective it is interesting to note that the beacon-based system took two students 3 days to install and cost $3,000, whereas our 3D model took one student about 3 hours to create and annotate.
  36. Users were able to complete 85% of the paths successfully, which we defined as being able to reach the target destination with their cane. We found an average localization error of 1.85 meters, which seems large but is within the sensing range of a blind user. Qualitative results were acquired through interviews and a questionnaire. Overall, users found the directions easy to follow. Paths that involved counting doors had the lowest success rate, as it becomes more difficult to pick up steps.
  37. One thing we would like to investigate in future work is testing our system in more complex environments; the Engineering Building has narrow hallways, so it is difficult to get lost. Other environments, for example a library or an airport, contain more open spaces where veering will become a larger problem. One promising area to address this is path planning: currently the system computes the shortest path using A*, which may not always be the most reliable path.