We describe how Radio Frequency Identification
(RFID) can be used in robot-assisted indoor navigation for
the visually impaired. We present a robotic guide for the
visually impaired that was deployed and tested both with
and without visually impaired participants in two indoor
environments. We describe how we modified the standard
potential fields algorithms to achieve navigation at moderate
walking speeds and to avoid oscillation in narrow spaces.
The experiments illustrate that passive RFID tags deployed
in the environment can act as reliable stimuli that trigger local
navigation behaviors to achieve global navigation objectives.
(a) RG (b) An RFID Tag (c) Navigation
Fig. 1. Robot-Assisted Navigation.
wheel chair equipped with a vision system, sonars, a differential GPS, and a portable GIS. While the wheel chair is superior to the guide dog in its knowledge of the environment, the experiments run by the HARUNOBU-6 research team demonstrated that the wheel chair is inferior to the guide dog in mobility and obstacle avoidance. The major source of problems was vision-based navigation, because the recognition of patterns and landmarks was greatly influenced by the time of day, weather, and season. Additionally, HARUNOBU-6 is a highly customized piece of equipment, which negatively affects its portability across a broad spectrum of environments.

Several research efforts in mobile robotics are similar to the research described in this paper in that they also use RFID technology for robot navigation. Kantor and Singh used RFID tags for robot localization and mapping[8]. Once the positions of the RFID tags are known, their system uses time-of-arrival information to estimate the distance from detected tags. Tsukiyama[9] developed a navigation system for mobile robots using RFID tags; the system assumes perfect signal reception and measurement and does not deal with uncertainty. Hähnel et al.[10] developed a robotic mapping and localization system to analyze whether RFID can be used to improve the localization of mobile robots in office environments. They proposed a probabilistic measurement model for RFID readers that accurately localizes RFID tags in a simple office environment.

III. A ROBOTIC GUIDE FOR THE VISUALLY IMPAIRED

In May 2003, the Department of Computer Science of Utah State University (USU) and the USU Center for Persons with Disabilities launched a collaborative project whose objective is to build an indoor robotic guide for the visually impaired. In this paper, we describe a prototype we have built and deployed in two indoor environments. Its name is RG, which stands for "robotic guide." RG is shown in Figure 1(a). We refer to our approach as unintrusive instrumentation of environments. Our current research objective is to alleviate the localization and navigation problems of purely autonomous approaches by instrumenting environments with inexpensive and reliable sensors that can be placed in and out of environments without disrupting any indigenous activities. Effectively, the environment becomes a distributed tracking and guidance system[11]. Additional requirements are: 1) that the instrumentation be fast, e.g., two to three hours, and require only commercial off-the-shelf (COTS) hardware components; 2) that sensors be inexpensive, reliable, easy to maintain (no external power supply), and provide accurate localization; 3) that all computation run onboard the robot; 4) that robot navigation be smooth (few sideways jerks and abrupt velocity changes) and keep pace with a moderate walking speed; and 5) that human-robot interaction be both reliable and intuitive from the perspective of the visually impaired users.

The first two requirements make the systems that satisfy them replicable, maintainable, and robust. The third requirement eliminates the necessity of running substantial off-board computation to keep the robot operational. In emergency situations, e.g., computer security breaches, power failures, and fires, off-board computers are likely to become dysfunctional and paralyze the robot if it depends on them. The last two requirements explicitly consider the needs of the target population and make our project different from the RFID-based robot navigation systems mentioned above.

A. Hardware

RG is built on top of the Pioneer 2DX commercial robotic platform [12] (see Figure 1(a)). The platform has three wheels and 16 ultrasonic sonars, 8 in front and 8 in the back, and is equipped with three rechargeable onboard Power Sonic PS-1270 batteries that can operate for up to two hours at a time.

What turns the platform into a robotic guide is a Wayfinding Toolkit (WT) mounted on top of the platform and powered from the onboard batteries. As can be seen in Figure 1(a), the WT currently resides in a PVC pipe structure attached to the top of the platform. The WT's core component is a Dell laptop connected to the platform's microcontroller. The laptop has a Pentium 4 mobile 1.6 GHz processor with 512 MB of RAM. Communication between the laptop and the microcontroller is done through a USB-to-serial cable. The laptop interfaces to a radio-frequency identification (RFID) reader through another USB-to-serial cable. The TI Series 2000 RFID reader is connected to a square 200mm × 200mm antenna. The arrow in Figure 1(b) points to a TI RFID Slim Disk tag attached to a wall. Only these RFID tags are currently used by the system. These tags can be attached to any objects in the environment or worn on clothing. They do not require any external power source or direct line of sight
(a) Empty Spaces (b) RG's Grid
Fig. 2. Potential Fields and Empty Spaces.
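In software, the reader side of the RFID pipeline described in Section III-A reduces to pulling tag IDs out of a serial byte stream and de-duplicating the repeated reports a reader emits while a tag sits in the antenna's field. The sketch below illustrates that structure only: the ASCII frame format (`TAG,<id>`) is hypothetical, not the TI Series 2000 wire protocol, and all helper names are ours, not RG's actual code.

```python
# Sketch of a tag-reading loop for a serial RFID reader. The frame format
# (ASCII lines like b"TAG,00421\r\n") is HYPOTHETICAL -- it is not the
# TI Series 2000 wire protocol -- but the structure (poll, parse,
# de-duplicate, hand the tag ID to the navigation layer) matches the role
# the reader plays in the Wayfinding Toolkit.

from typing import Iterable, Iterator, Optional

def parse_tag_frame(frame: bytes) -> Optional[int]:
    """Return the tag ID encoded in one (hypothetical) reader frame, or None."""
    try:
        text = frame.decode("ascii").strip()
    except UnicodeDecodeError:
        return None
    if not text.startswith("TAG,"):
        return None
    payload = text[4:]
    return int(payload) if payload.isdigit() else None

def tag_events(frames: Iterable[bytes]) -> Iterator[int]:
    """Yield a tag ID once per detection, suppressing the repeated frames a
    reader typically emits while the same tag stays inside the field."""
    last = None
    for frame in frames:
        tag = parse_tag_frame(frame)
        if tag is None:          # noise or a partial frame
            last = None          # a gap ends the current detection
            continue
        if tag != last:          # a new detection begins
            yield tag
            last = tag
```

With a real reader, `frames` would be the lines read from the USB-to-serial port; each yielded ID would then be looked up in the Map Server's tag-to-destination mappings.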
to be detected by the RFID reader. They are activated by the spherical electromagnetic field generated by the RFID antenna, which has a radius of approximately 1.5 meters. Each tag is programmatically assigned a unique ID.

A dog leash is attached to the battery bay handle on the back of the platform. The upper end of the leash is hung on a PVC pole next to the RFID antenna's pole. As shown in Figure 1(c), visually impaired individuals follow RG by holding onto that leash.

B. Navigation

Since RG's objective is to assist the visually impaired in navigating unknown environments, we had to pay close attention to three navigational features. First, RG should move at moderate walking speeds. For example, the robot developed by Hähnel et al.[10] travels at an average speed of 0.223 m/s, which is too slow for our purposes because it is slower than a moderate walking speed (0.7 m/s) by almost half a meter per second. Second, the motion must be smooth, without sideways jerks or abrupt speed changes. Third, RG should be able to avoid obstacles.

RG navigates in indoor environments using potential fields (PF) and by finding empty spaces around itself. PFs have been widely used in navigation and obstacle avoidance[13]. The basic concept behind the PF approach is to populate the robot's sensing grid with a vector field in which the robot is repulsed away from obstacles and attracted towards the target. Thus, the walls and obstacles around RG generate a PF in which RG acts like a moving particle[14]. The desired direction of travel is the direction of the maximum empty space around RG, which, when found, becomes the target direction to guide RG through the PF. This simple strategy takes explicit advantage of the way human indoor environments are organized. For example, if the maximum empty space is in front, the navigator can keep moving forward; if the maximum empty space is on the left, a left turn can be made, etc. This strategy allows RG to follow hallways, avoid obstacles, and turn without using any orientation sensors, such as digital compasses or inertia cubes.

To find the maximum empty space, RG uses a total of 90 laser range finder readings, taken every 2 degrees. The readings are taken every millisecond. An initial threshold of 3000 mm is used. If no empty space is found, this threshold is iteratively decreased by 100 mm. In Figure 2(a), laser readings R1 and R2 are the boundary readings of the maximum empty space. All readings between R1 and R2 are greater than the threshold. The next step is to find the target direction. For that, we find the midway point between R1 and R2, and the direction to that point is the target direction αt.

RG's PF is a 10 × 30 egocentric grid. Each cell in the grid is 200mm × 200mm. The grid covers an area of 12 square meters (2 meters in front and 3 meters on each side) in front of RG. Each cell Cij holds a vector that contributes to calculating the resultant PF vector. The direction to the cell from RG's center is αij. There are three types of cells: 1) occupied cells, which hold the repulsive vectors generated by walls and obstacles; 2) free cells, which hold the vector pointing in the target direction obtained by finding the maximum empty space; and 3) unknown cells, the contents of which are unknown, since they lie beyond detected obstacles. Unknown cells do not hold any vectors. In Figure 2(b), dark gray cells are occupied, white cells are free, and light gray cells are unknown. If d(Cij) is the distance from the robot's center to the cell Cij in the grid, L(αij) is the laser reading in the cell's direction, and T is a tolerance constant, then the occupation of Cij is computed by the function ζ(i, j):

             1   if |d(Cij) − L(αij)| < T
  ζ(i, j) =  0   if L(αij) − d(Cij) > T        (1)
            −1   if d(Cij) − L(αij) > T

In Equation 1, the constants 1, 0, −1 denote occupied, free, and unknown, respectively. By default, all vectors in occupied and free cells are unit vectors. However, since closer obstacles have more effect on the robot, the vector magnitude increases with the proximity of the cell to the robot. Therefore, the vector magnitude in a cell is a function of the cell's row and column.

A repulsive vector in Cij is denoted as Rij(mij, −αij), where mij is the vector's magnitude and −αij is its direction. The magnitude is inversely proportional to the distance of the occupied cell from the robot. For the left-side vectors, mij = Magn(i, j) ∗ P1; for the right-side vectors, mij = Magn(i, j) ∗ P2, where Magn(i, j) is the magnitude of the corresponding vector and P1 and P2
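The maximum-empty-space search described above can be sketched in a few lines. The scan parameters (90 readings 2 degrees apart, a 3000 mm initial threshold lowered by 100 mm when no run clears it, and the target direction taken midway between the boundary readings R1 and R2) follow the text; the helper names, the right-to-left scan ordering, and the tie-breaking are our assumptions, not RG's actual source code.

```python
# Sketch of RG's maximum-empty-space search (Section III-B): find the widest
# run of laser readings above a threshold, lowering the threshold by 100 mm
# whenever no run exists. The target direction alpha_t is the bearing of the
# midpoint between the run's boundary readings R1 and R2.

from typing import List, Optional

READINGS = 90      # one reading every 2 degrees, per the text
STEP_DEG = 2.0

def bearing(i: int) -> float:
    """Bearing of reading i in degrees: 0 = straight ahead, positive = left."""
    return (i - (READINGS - 1) / 2.0) * STEP_DEG

def max_empty_space(scan: List[float],
                    threshold: float = 3000.0,
                    decrement: float = 100.0) -> Optional[float]:
    """Return the target direction alpha_t in degrees, or None if the scan
    never clears even a fully decayed threshold (e.g., a dead-end wall)."""
    while threshold > 0:
        best = None  # (run_width, first_index R1, last_index R2)
        i = 0
        while i < len(scan):
            if scan[i] > threshold:
                j = i
                while j + 1 < len(scan) and scan[j + 1] > threshold:
                    j += 1               # extend the run of open readings
                if best is None or (j - i) > best[0]:
                    best = (j - i, i, j)
                i = j + 1
            else:
                i += 1
        if best is not None:
            _, r1, r2 = best
            # the direction to the midway point between R1 and R2
            return (bearing(r1) + bearing(r2)) / 2.0
        threshold -= decrement           # no empty space: relax and retry
    return None
```

For a hallway scan that is open straight ahead, the returned αt is near 0 degrees; RG then stores this direction in the free cells of its potential-field grid.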
are constants that vary the repulsion vectors on the robot's left and right sides, respectively. Thus, one can adjust the distance maintained by RG from the right or left wall. Since RG's localization relies on RFID tags placed in hallways, it has to navigate closer to the right wall, which is achieved by increasing the repulsive force of the left-side vectors. Repulsive vectors for occupied cells are summed up to get the resultant repulsive vector Rr = Σij Rij.

The target vector in a free cell Cij is denoted as Tij(M, αt), where M is the vector's magnitude. In our implementation, all vectors in the unoccupied cells are unit vectors. The resultant target vector is Tr = Σij Tij = Tr(M ∗ N, αt), where N is the number of unoccupied cells. The resultant vector, RES, is the sum of the repulsive and target vectors: RES(mr, αr) = Rr + Tr, where mr is the magnitude and αr is the direction.

To ensure smooth turns and avoid abrupt speed changes, RG never stops and turns in place. Instead, RG sets the left (V1) and right (V2) wheel velocities to produce a smooth turn. V1 and V2 are functions of mr and αr: V1 = v − (αr ∗ S)/mr and V2 = v + (αr ∗ S)/mr, where v is the robot's velocity and S is a constant that determines the sharpness of turns; αr is positive for left turns and negative for right turns. The robot's velocity v is a function of the front distance. Thus, if mr is large, the turns are less sharp. This is precisely why RG follows a smooth, straight path even in narrow hallways without oscillating, which has been a problem for some PF algorithms[15]. Given this implementation, RG maintains, at most times, a moderate walking speed of 0.7 m/s without losing smoothness or robustness.

C. Ethology and Spatial Semantic Hierarchy

As a software system, RG is based on Kuipers' Spatial Semantic Hierarchy (SSH)[16] and Tinbergen's ethology[17]. The SSH is a framework for representing spatial knowledge. It divides the spatial knowledge of autonomous agents into four levels: control, causal, topological, and metric. The control level consists of low-level mobility laws, e.g., trajectory following and aligning with a surface. The causal level represents the world in terms of views and actions. A view is a collection of data items that an agent gathers from its sensors. Actions move agents from view to view. The topological level represents the world's connectivity, i.e., how different locations are connected. The metric level adds distances between locations.

In RG, the control level is implemented with the PF methods described above and includes the following behaviors: follow-wall, turn-left, turn-right, avoid-obstacles, go-thru-doorway, pass-doorway, and make-u-turn. These behaviors are coordinated and controlled through Tinbergen's release mechanisms[17]. RFID tags are viewed as stimuli that trigger or disable specific behaviors. To ensure portability, all these behaviors are written in the behavior programming language of the ActivMedia Robotics Interface for Applications (ARIA) system from ActivMedia Robotics, Inc. The routines run on the WT laptop. In addition, the WT laptop runs three other software components: 1) a map server, 2) a path planner, and 3) a speech recognition and synthesis engine.

The Map Server realizes the causal and topological levels of the SSH. The server's knowledge base represents a connectivity graph of the environment in which RG operates. No global map is assumed. In addition, the knowledge base contains tag-to-destination mappings and simple behavior trigger/disable scripts associated with specific tags. The Map Server continuously registers the latest location of RG on the connectivity graph. The location is updated as soon as RG detects an RFID tag. Given the connectivity graph, the Path Planner uses the standard breadth-first search algorithm to find a path from one location to another. A path plan is a sequence of tag numbers and behavior scripts at each tag. Thus, RG's trips are sequences of locally triggered behaviors that achieve global navigation objectives. The SSH metric level is not implemented because, as studies in mobile robotics show[16], [14], odometry, from which metric information is typically obtained, is not reliable in robotic navigation.

D. Human-Robot Interaction

Human-robot interaction in RG is described in detail elsewhere[18], [19]. Here we give a brief summary only for the sake of completeness. Visually impaired users can interact with RG through speech and wearable keyboards. Speech is received by RG through a wireless microphone placed on the user's clothing. Speech is recognized and synthesized with Microsoft Speech API (SAPI) 5.1. RG interacts with its users and people in the environment through speech and audio icons, i.e., non-verbal sounds that are readily associated with specific objects, e.g., the sound of water bubbles associated with a water cooler. When RG is passing a water cooler, it can either say "water cooler" or play an audio file with sounds of water bubbles. We added audio icons to the system because, as recent research findings indicate [20], speech perception can be slow and prone to block ambient sounds from the environment. To other people in the environment, RG is personified as Merlin, a Microsoft software character, always present on the WT laptop's screen.

IV. EXPERIMENTS

We deployed our system for a total of approximately seventy hours in two indoor environments: the Assistive Technology Laboratory (ATL) of the USU Center for Persons with Disabilities and the USU CS Department. The ATL occupies part of a floor in a building on the USU North Campus. The floor has an area of approximately 4,270 square meters and contains 6 laboratories, two bathrooms, two staircases, and an elevator. The CS Department occupies an entire floor in a multi-floor building. The floor's area is 6,590 square meters; the floor contains 23 offices, 7 laboratories, a conference room, a student lounge, a tutor room, two elevators, several bathrooms, and two staircases.

Forty RFID tags were deployed at the ATL and one hundred tags were deployed at the CS Department. It took one
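The Path Planner described in Section III-C is a standard breadth-first search over the Map Server's connectivity graph. A minimal sketch follows; the example graph, tag numbers, and behavior-script strings are illustrative inventions of ours, and only the output format (a sequence of tags with the scripts attached to each) follows the text.

```python
# Sketch of the Path Planner (Section III-C): breadth-first search over the
# Map Server's connectivity graph. A path plan is returned, as in the text,
# as a sequence of (tag, behavior scripts) pairs.

from collections import deque
from typing import Dict, List, Optional, Tuple

def plan_path(graph: Dict[int, List[int]],
              scripts: Dict[int, List[str]],
              start: int, goal: int) -> Optional[List[Tuple[int, List[str]]]]:
    """BFS from the start tag to the goal tag; returns the path plan as
    [(tag, scripts), ...] or None if the goal is unreachable."""
    parent = {start: None}
    queue = deque([start])
    while queue:
        tag = queue.popleft()
        if tag == goal:
            path = []
            while tag is not None:          # walk parents back to the start
                path.append((tag, scripts.get(tag, [])))
                tag = parent[tag]
            return path[::-1]
        for nxt in graph.get(tag, []):
            if nxt not in parent:           # first visit fixes the BFS parent
                parent[nxt] = tag
                queue.append(nxt)
    return None

# Illustrative connectivity graph: tags at hallway junctions and doorways.
GRAPH = {1: [2], 2: [1, 3, 4], 3: [2], 4: [2, 5], 5: [4]}
SCRIPTS = {2: ["turn-left"], 4: ["follow-wall"], 5: ["pass-doorway"]}
```

For example, `plan_path(GRAPH, SCRIPTS, 1, 5)` returns `[(1, []), (2, ["turn-left"]), (4, ["follow-wall"]), (5, ["pass-doorway"])]`: a trip is exactly the sequence of locally triggered behaviors that achieves the global navigation objective.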
(a) Narrow (1 m wide) Hallway Runs (b) Medium (1.5 m wide) Hallway Runs (c) Wide (2.5 m wide) Hallway Runs
Fig. 3. Path Deviations in Hallways.
(a) Narrow (1 m wide) Hallway Runs (b) Medium (1.5 m wide) Hallway Runs (c) Wide (2.5 m wide) Hallway Runs
Fig. 4. Velocity Changes in Hallways.
person 20 minutes to deploy the tags and about 10 minutes to remove them at the ATL. The same measurements at the CS Department were 30 and 20 minutes, respectively. As Figure 1(b) indicates, the tags were placed on small pieces of cardboard to insulate them from the walls and were attached to the walls with regular scotch tape. The creation of the connectivity graphs took one hour at the ATL and about 2 hours at the CS Department. One administrator first walked around the areas with a laptop and recorded tag-destination associations, and then associated behavior scripts with tags.

RG was first repeatedly tested in the ATL, the smaller of the two environments, and then deployed for pilot experiments at the USU CS Department. We ran two sets of pilot experiments. The first set did not involve visually impaired participants; the second set did. In the first set of experiments, we had RG navigate three types of hallways at the CS Department: narrow (1 m), medium (1.5 m), and wide (2.5 m), and evaluated its navigation in terms of two variables: path deviations and abrupt speed changes. We also wanted to test how well RG's RFID reader detected the tags.

To estimate path deviations, in each experiment we first computed the ideal distance that the robot has to maintain from the right wall in a certain type of hallway (narrow, medium, or wide). The ideal distance was computed by running the robot in a hallway of that type with all doors closed and no obstacles en route. During the run, the distance between the robot and the right wall read by the laser range finder was recorded every 50 milliseconds. In recording the distance, the robot's orientation was taken into account from two consecutive readings. The ideal distance was computed as the average of the distances taken during the run. Once the ideal distances were known, we ran the robot three times in each type of hallway. The hallways in which the robot ran were different from the hallways in which the ideal distances were computed. Obstacles, e.g., humans walking by and open doors, were allowed during the test runs. Figure 3 gives the distance graphs of the three runs compared in each hallway type. The vertical bars in each graph represent the robot's width. As can be seen from Figure 3(a), there is almost no deviation from the ideal distance in narrow hallways, nor is there any oscillation. Figure 3(b) and Figure 3(c) show some insignificant deviations from the ideal distance. The deviations were caused by people walking by and by open doors. However, there is no oscillation, i.e., no sharp movements in different directions. In both environments, we observed several tag detection failures, particularly in metallic door frames. However, after we insulated the tags with small pieces of cardboard (see Figure 1(b)), the tag detection failures stopped.

Figure 4 gives the velocity graphs for each hallway type (the x-axis is time in seconds, the y-axis is velocity in mm/sec). The graphs show that narrow hallways cause short abrupt changes in velocity. This is because in narrow hallways even a slight disorientation of the robot, e.g., 3 degrees, causes changes in velocity, because less free space is detected in the grid. In medium and wide hallways, the velocity remains mostly smooth. Several speed changes occur when the robot passes or navigates through doorways or avoids obstacles.

The second set of pilot experiments involved five visually impaired participants, one participant at a time,
over a period of two months. Three participants were completely blind and two participants could perceive only light. The participants had no speech impediments, hearing problems, or cognitive disabilities. Two participants were guide dog users, and the other three used white canes. The participants were asked to use RG to navigate to three distinct locations (an office, a lounge, and a bathroom) at the USU CS Department. All participants were new to the environment and had to navigate approximately 40 meters to get to all destinations. Thus, in the experiments with visually impaired participants, the robot navigated approximately 200 meters. All participants reached their destinations without a problem. In their exit interviews, the participants complained mostly about the human-robot interaction aspects of the system. For example, all of them had problems with the speech recognition system[21], [19]. The participants especially liked the fact that they did not have to give up their white canes and guide dogs to use RG.

V. LIMITATIONS

In addition to the velocity changes in narrow hallways, RG has three other limitations. First, the robot cannot create a connectivity graph for a given environment once the RFID tags are deployed. We are currently working on creating connectivity graphs and behavior scripts in a semi-automatic fashion. Second, the robot cannot detect route blockages. If the route is blocked, the robot first slows down to a stop and then starts turning in order to find some free space. In this fashion, RG makes a gradual u-turn by looking for the maximum free space around itself. Since RG has no orientation sensor, currently the only way it can detect a detour is by detecting an RFID tag that is not on the path to the current destination. Finally, while several visually impaired participants told us that it would be helpful if RG could guide them in and out of elevators, RG cannot negotiate elevators yet.

VI. CONCLUSION

In this paper, we showed how Radio Frequency Identification (RFID) can be used in robot-assisted indoor navigation for the visually impaired. We presented a robotic guide for the visually impaired that was deployed and tested both with and without visually impaired participants in two indoor environments. The experiments illustrate that passive RFID tags can act as reliable stimuli that trigger local navigation behaviors to achieve global navigation objectives.

ACKNOWLEDGMENT

The authors would like to thank the visually impaired participants for generously volunteering their time for the pilot experiments. The authors would also like to thank Marty Blair, Director of the Utah Assistive Technology Program, for his administrative assistance and support. The first author would like to acknowledge that this research has been supported, in part, through the NSF Universal Access Career Grant (IIS-0346880), a Community University Research Initiative (CURI) grant from the State of Utah, and a New Faculty Research grant from Utah State University.

REFERENCES

[1] M. P. LaPlante and D. Carlson, Disability in the United States: Prevalence and Causes. Washington, DC: U.S. Department of Education, National Institute of Disability and Rehabilitation Research, 2000.
[2] S. Shoval, J. Borenstein, and Y. Koren, "Mobile Robot Obstacle Avoidance in a Computerized Travel Aid for the Blind," in IEEE International Conference on Robotics and Automation, San Diego, CA, 1994.
[3] D. Ross and B. Blasch, "Development of a Wearable Computer Orientation System," IEEE Personal and Ubiquitous Computing, vol. 6, pp. 49–63, 2002.
[4] H. Mori and S. Kotani, "Robotic Travel Aid for the Blind: HARUNOBU-6," in Second European Conference on Disability, Virtual Reality, and Assistive Technology, Skövde, Sweden, 1998.
[5] I. Horswill, "Polly: A Vision-Based Artificial Agent," in Proceedings of the 11th Conference of the American Association for Artificial Intelligence (AAAI-93), Washington, DC, July 1993.
[6] S. Thrun, M. Bennewitz, W. Burgard, A. B. Cremers, F. Dellaert, D. Fox, D. Hähnel, C. Rosenberg, N. Roy, J. Schulte, and D. Schulz, "Minerva: A Second Generation Mobile Tour-Guide Robot," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA-99), Antwerp, Belgium, June 1999.
[7] W. Burgard, A. Cremers, D. Fox, D. Hähnel, G. Lakemeyer, D. Schulz, W. Steiner, and S. Thrun, "Experiences with an Interactive Museum Tour-Guide Robot," Artificial Intelligence, no. 114, pp. 3–55, 1999.
[8] G. Kantor and S. Singh, "Preliminary Results in Range-Only Localization and Mapping," in Proceedings of the IEEE Conference on Robotics and Automation, Washington, DC, May 2002.
[9] T. Tsukiyama, "Navigation System for Mobile Robots using RFID Tags," in Proceedings of the IEEE Conference on Advanced Robotics, Coimbra, Portugal, June–July 2003.
[10] D. Hähnel, W. Burgard, D. Fox, K. Fishkin, and M. Philipose, "Mapping and Localization with RFID Technology," Intel Research Institute, Seattle, WA, Tech. Rep. IRS-TR-03-014, December 2003.
[11] V. Kulyukin and M. Blair, "Distributed Tracking and Guidance in Indoor Environments," in Conference of the Rehabilitation Engineering and Assistive Technology Society of North America (RESNA-2003), Atlanta, GA, June 2003.
[12] ActivMedia Robotics, Inc., ActivMedia Robotic Platforms, http://www.activmedia.com.
[13] J. H. Chuang and N. Ahuja, "An Analytically Tractable Potential Field Model of Free Space and its Application in Obstacle Avoidance," IEEE Transactions on Systems, Man, and Cybernetics, vol. 28, no. 5, pp. 729–736, 1998.
[14] R. Murphy, Introduction to AI Robotics. Cambridge, MA: The MIT Press, 2000.
[15] Y. Koren and J. Borenstein, "Potential Field Methods and their Inherent Limitations for Mobile Robot Navigation," in Proceedings of the IEEE Conference on Robotics and Automation, Sacramento, CA, April 1991.
[16] B. Kuipers, "The Spatial Semantic Hierarchy," Artificial Intelligence, no. 119, pp. 191–233, 2000.
[17] N. Tinbergen, The Animal in Its World: Laboratory Experiments and General Papers. Cambridge, MA: Harvard University Press, 1976.
[18] V. Kulyukin, "Towards Hands-Free Human-Robot Interaction through Spoken Dialog," in AAAI Spring Symposium on Human Interaction with Autonomous Systems in Complex Environments, Palo Alto, CA, March 2003.
[19] ——, "Human-Robot Interaction through Gesture-Free Spoken Dialogue," Autonomous Robots, vol. 16, no. 3, 2004.
[20] T. V. Tran, T. Letowski, and K. S. Abouchacra, "Evaluation of Acoustic Beacon Characteristics for Navigation Tasks," Ergonomics, vol. 43, no. 6, pp. 807–827, 2000.
[21] V. Kulyukin, C. Gharpure, and N. De Graw, "Human-Robot Interaction in a Robotic Guide for the Visually Impaired," in AAAI Spring Symposium on Interaction between Humans and Autonomous Systems over Extended Operation, Palo Alto, CA, March 2004.