SlideShare ist ein Scribd-Unternehmen logo
1 von 8
Downloaden Sie, um offline zu lesen
A Probabilistic Approach to Inference with Limited
                    Information in Sensor Networks

                       Rahul Biswas                                   Sebastian Thrun                    Leonidas J. Guibas
                   Stanford University                               Stanford University                   Stanford University
                Stanford, CA 94309 USA                            Stanford, CA 94309 USA                Stanford, CA 94309 USA
               rahul@cs.stanford.edu                            thrun@cs.stanford.edu                 guibas@cs.stanford.edu


ABSTRACT                                                                            good balance between increasing network longevity by de-
We present a methodology for a sensor network to answer                             creasing costs on the one hand and providing useful answers
queries with limited and stochastic information using prob-                         to queries on the other.
abilistic techniques. This capability is useful in that it al-                         One aspect in which algorithms used for sensor networks
lows sensor networks to answer queries effectively even when                         must differ from traditional algorithms is that they must
present information is partially corrupt and when more in-                          be able to reason with incomplete and noisy information.
formation is unavailable or too costly to obtain. We use a                          Some areas of interest may lie outside sensor range. Any
Bayesian network to model the sensor network and Markov                             information, if available, is not free but has a sensing cost
Chain Monte Carlo sampling to perform approximate infer-                            and comes at the expense of decreased network longevity.
ence. We demonstrate our technique on the specific problem                           Also, real sensors do not always operate as they should and
of determining whether a friendly agent is surrounded by en-                        any algorithm that deals with sensor information must be
emy agents and present simulation results for it.                                   able to handle partial corruption of the input data.
                                                                                       We present an algorithm that leverages both limited and
                                                                                    stochastic sensor data to probabilistically answer queries
1.     INTRODUCTION                                                                 normally requiring complete world state to answer correctly.
   Recent advances in sensor and communication technolo-                            By answering probabilistically, i.e. by assigning a probabil-
gies have enabled the construction of miniature nodes that                          ity to each possible answer, we provide the inquirer with
combine one or more sensors, a processor and memory, plus                           more useful information than only one answer whose cer-
one or more wireless radio links [9, 12]. Large numbers of                          tainty is not known. This technique is applicable to a broad
these sensors can then be jointly deployed to form what                             range of queries and we demonstrate its effectiveness on
has come to be known as a sensor network. Such net-                                 an open problem in sensor networks, that of determining
works enable decentralized monitoring applications and can                          whether a friendly agent with known location is surrounded
be used to sense large-area phenomena that may be impos-                            by enemy agents with unknown but stochastically sensable
sible to monitor in any other way. Sensor networks can be                           locations [7].
used in a variety of civilian and military contexts, including                         We structure the probabilistic dependencies between sen-
factory automation, inventory management, environmental                             sor measurements, enemy agent locations, and the query of
and habitat monitoring, health-care delivery, security and                          interest as a Bayesian network. We then enter the sensor ob-
surveillance, and battlefield awareness.                                             servations into the Bayesian network and use Markov Chain
   In some settings, sensors enjoy the use of external power                        Monte Carlo (MCMC) methods to perform approximate in-
sources and need not factor the energy costs of computation,                        ference on the Bayesian network and extract a probabilistic
sensing, and communication into their decisions. Informa-                           answer to our original query.
tion theory tells us that more information is always bet-                              Our experimental results demonstrate the effectiveness of
ter [4] and any algorithm analyzing sensing results should                          our inference algorithm on the problem of determining sur-
be able to perform at least the same and hopefully better                           roundedness on a wide variety of sensor measurements. The
with more information. In most actual deployment scenar-                            inference algorithm consistently provides accurate estimates
ios however, sensors must operate with limited power sources                        of the posterior probability of being surrounded. It demon-
and carefully ration their energy utilization. In this world,                       strates the ability to incorporate both positive and negative
computation, sensing, and communication comes at a cost                             information, to use additional information to disambiguate
and algorithms designed for sensor networks must strike a                           almost colinear points, to cope effectively with sensor failure,
                                                                                    and provide useful prior probabilities of being surrounded
                                                                                    without any sensor measurements at all.

Permission to make digital or hard copies of all or part of this work for
personal or classroom use is granted without fee provided that copies are           2. PROBLEM DEFINITION
not made or distributed for profit or commercial advantage and that copies
bear this notice and the full citation on the first page. To copy otherwise, to      2.1 Overview
republish, to post on servers or to redistribute to lists, requires prior specific     We seek to determine whether an agent F is surrounded
permission and/or a fee.
IPSN’04, April 26–27, 2004, Berkeley, California, USA.                              by N enemy agents E1 . . . EN . Let the location of agent
Copyright 2004 ACM 1-58113-846-6/04/0004 ...$5.00.                                  F be given as LF and the locations of agents E1 . . . EN be
given by LE1 . . . LEN respectively. Let the assertion D refer
to the condition that LF lies in the convex hull generated
by points LE1 . . . LEN , which we will henceforth refer to in
aggregate as simply LE.
  We assume that LF and N are known. We have no prior
knowledge over the LE but form posteriors over them from
the evidence gathered by the sensor measurements.

2.2 Sensor Model
   Sensors can detect the presence of enemy agents in a cir-
cular region surrounding the sensor. We have a total of M
sensors S1 . . . SM at our disposal. Let the sensor locations
                                                        ˆ
be given by LS1 . . . LSM . For a given sensor Si , let di equal
1 if there exists one or more enemy agents inside the circle
with its center collocated with the sensor and radius R, and
                               ˆ
0 otherwise. In other words, di is a boolean variable that is
the response sensor Si should obtain if it is operating cor-
rectly. Let di be the value sensor Si actually obtains and         Figure 1: Bayesian Network with 3 sensor measure-
shares with other sensors. The variable di depends stochas-        ments and 4 enemy agents
           ˆ
tically on di as follows:

                                                                   algorithms exist that compute the convex hull of N points
                       ˆ
                    




               ¡   ¢
                       di   :   with probability   ζ               in the plane in Θ(N · logN ) time [16].
                                                   1−ζ
        di =            0   :   with probability    2
                                                            (1)       In fact, convex hull inclusion can be tested directly in
                                                   1−ζ
                        1   :   with probability    2
                                                       .           O(N ) time. Let CH represent the convex hull of LE and
                                                                   let CH1 . . . CHS represent the enemy agents, in order, lying
  Setting ζ to 1 is equivalent to having perfect sensors and       on this hull. S is less than or equal to N . LF lies within
setting ζ to 0 is equivalent to having no sensors. Typically,      CH if and only if either of the following conditions holds:
ζ will be quite high since sensors usually tend to work cor-
rectly.
                                                                              ∀s ∈ S, CCW (CHs , CHs+1 , LF )                 (2)
2.3 Objective Function
  Our objective is to determine, given LF and a set of                                           or
P stochastic sensor measurements R1 . . . RP , the posterior                   ∀s ∈ S, CW (CHs , CHs+1 , LF ).                (3)
probability of D. The set of sensor measurements may be
empty.                                                                Here, CHS+1 wraps around to CH1 , CCW (a, b, c) refers
  We do not seek to address the issues of network control          to the assertion that point c is strictly counterclockwise to
and communication in this paper. While these are impor-            ray ab, and CW (a, b, c) refers to the assertion that point c
tant topics and are revisited in Section 6.2, our inference al-    is strictly clockwise to ray ab.
gorithm works independently of the control algorithm that             Closest in spirit to our effort are techniques that try to
decides who will sense and manages inter-sensor communi-           approximate a convex body using a number of probes [1],
cation.                                                            or those that try to maintain approximate convex hulls in
                                                                   a data stream of points, using bounded storage [8]. These
                                                                   works both address similar problems but their methods can-
3.   PRIOR WORK                                                    not be applied to our problem at hand.
   The traditional approach to solving this sort of problem
is to determine the values of each of the variables of inter-
est and then use standard techniques to answer the query           4. OUR APPROACH
given the full world state. The potency of our approach lies          We show in this section how the query, in this case, whether
in its ability to take partial world state and estimate the        the friendly agent is surrounded by enemy agents can be rep-
complete world state in expectation. By doing so, we can           resented by a variable in a Bayesian network. The proba-
use the same off-the-shelf techniques the other algorithms          bilistic relationships between sensors, enemy agent locations,
can. In fact, in the example we have chosen to showcase,           and the query are faithfully represented by the Bayesian net-
that of determining whether a friendly agent is surrounded,        work. Thus the problem of answering the query reduces to
we use an unmodified traditional computational geometry             computing the posterior probability of the variable in the
algorithm and simply plug it into our inference. This allows       network representing our query. We use MCMC sampling
us to probabilistically answer any other query just as easily      to perform this inference. We start with the formal defini-
by just plugging in existing algorithms requiring full world       tion of our Bayesian network, identify why exact inference is
state into the core of our MCMC sampler.                           not possible, and then discuss our MCMC implementation.
   For the specific problem at hand, we can determine whether
the friendly agent lies within the convex hull of enemy agents     4.1 Bayesian Network
by starting with the exact values of LE, computing their             A Bayesian network [11] is characterized by its variables
convex hull, and then determining whether LF lies within           and the prior and conditional probabilities relating the vari-
this convex hull. Many well-known computational geometry           ables to one another. We start with the variables and what
they represent and then move on to their conditional prob-         containing each R variable along with each of the LE vari-
abilities. Figure 1 shows a Bayesian Network for the case of       ables. The inadequacy of multivariate Gaussians would force
three sensor measurements and four enemy agents. In prac-          us to perform discretization of the joint space. The expo-
tice, the network structure will vary depending on the num-        nential blowup that arises from this enormous state space
ber of sensor measurements and number of enemy agents.             would render the problem computationally intractable.

4.1.1 Variable 1: Sensor Measurements                              4.3 MCMC for Approximate Inference
  Let there be P sensor measurement variables R1 . . . RP in          Fortunately, we can get an accurate estimate of the poste-
our Bayesian network. These variables are identical to the         rior probability of interest, P (D|R1 . . . RP ). We do this with
variables mentioned in Section 2 and contain both the index        the Metropolis algorithm, an instance of a MCMC method
of the sensor it originated from and the stochastic value d        that performs approximate inference on Bayesian networks.
that the sensor perceived. These variables are observed in         It is equipped to handle the continuous nature of this dis-
the Bayesian network.                                              tribution as it samples evenly over the entire joint space.
                                                                      We follow the treatment of Metropolis and use the termi-
4.1.2 Variable 2: Enemy Agent Locations                            nology as presented in [5]. The function f (X) we want to
  Let there be N enemy agent location variables LE1 . . . LEN      estimate is the variable D and the space Xt we sample from
in our Bayesian network. These variables are also identical        is the joint distribution over LE. Let us choose to draw T
to the variables mentioned in Section 2. The variables are         samples using the Metropolis algorithm and let the hypothe-
not observed in the Bayesian network, i.e., their values are       sis over LE while taking sample t be LE t . The starting state
not known to the inference algorithm. By doing inference           often systematically biases the initial samples so we discard
however, it is possible to form posterior probability distri-      the first ι · T samples and estimate our posterior probability
butions over them.                                                 of interest as the mean of the remaining samples:

4.1.3 Prior Probability: Enemy Agent Locations                                                                   T
                                                                                                   1          




  We assume that the enemy agents are independently and                 P (D|R1 . . . RP ) =                         P (D|LE t )   (4)
uniformly distributed across a bounded rectangular region.                                     (1 − ι) · T   t=ι·T


4.1.4 Variable 3: Whether the Friendly Agent is Sur-               where ι is the fraction of samples discarded until the Markov
      rounded                                                      chain converges.
                                                                      There are four aspects to evaluating this expression. The
  Let there be a variable D in our Bayesian network. Like
                                                                   first three deal with the inner workings of Metropolis. First,
the variables that came before it, this variable is identical to
                                                                   where do we start the chain? Second, how does LE t tran-
the variable mentioned in Section 2. Indeed, the potency of
                                                                   sition to LE t+1 ? Third, how do we decide whether to ac-
our Bayesian network lies in the simplicity of its conception.
                                                                   cept the proposed transition? Fourth, how do we evaluate
This variable is also not observed in the Bayesian network
                                                                   P (D|LE t )?
but it is possible to form a posterior probability distribution
over it as well.                                                   4.3.1 Starting the Chain
4.1.5 Conditional Probability 1: How Enemy Agent                     We place each of the enemy agents at a random location
      Locations Affect Sensor Measurements                         uniformly distributed over the rectangular region where en-
                                                                   emy agents are known to be located.
  Sensor measurements are stochastically determined by en-
emy agent locations as presented in Section 2.                     4.3.2 Proposal Distribution
4.1.6 Conditional Probability 2: How Enemy Agent                     One of the enemy agents is chosen at random and is moved
      Locations Affect the Query                                   to a random location uniformly chosen over the same rect-
                                                                   angular region as before. Thus, each component of LE t+1 ,
  The value of D is deterministic given LE1 . . . LEN . We         that is, each enemy agent location, is conditionally indepen-
use the off-the-shelf convex hull generation and containment        dent, given LE t . This condition is required for Metropolis
algorithm described in section 3 to establish this relation        to work correctly.
between D and LE1 . . . LEN .
                                                                   4.3.3 Determining Proposal Acceptance
4.2 Problems with Exact Inference
                                                                     We use the Metropolis proposal acceptance criterion:
   Inference in Bayesian networks refers to the process of
discovering posterior probabilities of unobserved variables
given the values of observed variables. A family of tech-                                           P (LE t+1 , R1 . . . RP )
                                                                      α(LE t , LE t+1 ) = min(1,                              ).   (5)
niques known as clique tree methods is used for this pur-                                            P (LE t , R1 . . . RP )
pose.
   Unfortunately, our Bayesian network has both continu-             The proposal is accepted with probability α(LE t , LE t+1 )
ous (LE) and discrete (R and D) variables. In fact, this           as given above. This is easily done by generating a ran-
is the hardest class of Bayesian networks to perform infer-        dom number U uniformly drawn from between 0 and 1 and
ence on [10]. Moreover, the nature of the problem preempts         accepting the proposal if U ≤ α(LE t , LE t+1 ).
the use of multivariate Gaussians to approximate the LE              We can simplify the expression
variables while maintaining the rich hypotheses inference
required to form accurate posteriors. Faced with such a                              P (LE t+1 , R1 . . . RP )
network, clique tree methods would form an immense clique                                                                          (6)
                                                                                      P (LE t , R1 . . . RP )
substantially. We discover from the Bayesian network that                                           4.4 Alternate Queries
the expression P (LE t , R1 . . . RP ) decomposes as follows:                                          Up to this point, we have explored our algorithm for the
                                                                                                    specific query of determining whether a friendly agent is
                       
                          N                               
                                                             P                                      surrounded by enemy agents. This problem dealt with de-
                                           t
                                      P (LEi )                    P (Rj |LE t )               (7)   termining properties of the convex hull of a set of stochasti-
                      i=1                                j=1                                        cally sensable points. While we present the scenario in 2D,
          t                                                                                         it generalizes directly to higher dimensional spaces.
where LEi refers to the hypothesized location of enemy                                                 We briefly note two other scenarios where we can use our
agent i when Metropolis takes sample t.                                                             algorithm to discover useful geometric properties of a set
  Similarly, the expression P (LE t+1 , R1 . . . RP ) decomposes                                    of points. The first scenario explores discovering both the
as:                                                                                                 extent and area of a set of stochastically sensable points.
                                                                                                    The second deals with finding a point maximally distant
               
                  N                                   
                                                             P                                      in expectation from all other points. These scenarios were
                               t+1
                          P (LEi )                                P (Rj |LE t+1 ).            (8)   chosen to explore different areas of computational geometry
              i=1                                j=1                                                but our algorithm works on any query where small changes
  Thus, we can rewrite 6 as:                                                                        in the input induce small changes in the output.
                                                                                                    4.4.1 Extent and Area of a Point Set
              N          t+1                                     P
                                                                       P (Rj |LE t+1 )
          ¡                                      ¡




              i=1 P (LEi     )                                   j=1                                   The problem of finding the farthest point along a cer-
                N          t                                     P
                                                                                         .    (9)   tain direction in a point set is more generally known as the
                                                                       P (Rj |LE t )
              ¡                                  ¡




                i=1  P (LEi )                                    j=1
                                                                                                    extremal polytope query and can be solved efficiently by
                       t
  If we recall that LEi is uniformly distributed and that                                           Kirkpatrick’s algorithm [16]. Let us define a new variable B
each of the LE terms cancel out in both the numerator and                                           that contains both the minimal and maximal point in each
denominator, this expression simplifies to:                                                          direction. We replace the last step of our algorithm, comput-
                                                                                                    ing P (D|LEt ), with computing P (B|LEt ). This conditional
                              ¡




                                           P             t+1                                        probability is deterministic, as its predecessor was. The ex-
                                           j=1 P (Rj |LE     )
                                                                                                    pected value of the extents are the mean of the components
                                            P
                                                               .                             (10)
                                                          t)
                                       ¡




                                            j=1 P (Rj |LE                                           of the individual samples. The expected value of the area
                                                                                                    is the mean of the areas stemming from the individual sam-
   Our Bayesian network does not directly allow us to sim-                                          ples and will typically not equal the area stemming from the
plify this expression further but since sensors are not af-                                         mean extents. Using only discovered points in this scenario
fected, even stochastically, by enemy agents outside their                                          will tend to underestimate the extent of the point set.
sensing range, we can exploit context-specific independence
to reduce this expression further. Specifically, only one of                                         4.4.2 Maximally Distant Point
the LEi variables has changed value from step t to step t+1.                                           The problem of finding a point that is maximally distant
Only sensors for which this enemy agent is within range will                                        from its nearest neighbor stems from the Voronoi diagram
be affected by this transition. Let there be K such sensors                                          of a point set and can be computed as described in [6]. Let
and let Kk refer to the index of the k-th such sensor.                                              us define a new variable E that uses a discretized grid to
   The sensor measurements that are not affected by the                                              represent the distance of each cell to its nearest neighbor in
change in LEi cancel out in the numerator and denominator,                                          the point set. By replacing the last step with computing
leaving us with:                                                                                    P (E|LEt ) (again deterministic), we calculate the expected
                                                                                                    distance from each cell and choose the one with the highest
                                   
                                      K
                                            P (RKk |LE t+1 )                                        expected distance. Using only discovered points in this sce-
                                                             .                               (11)   nario will tend to ignore the effects of undiscovered points
                                             P (RKk |LE t )
                              k=1                                                                   on the solution.
   Computing this expression is extremely fast and is further
sped up in our implementation by discretizing the rectangu-                                         5. EXPERIMENTAL RESULTS
lar region and associating with each grid cell a list of sensor                                       We show in this section some examples of the sort of re-
measurements that affect that cell.                                                                  sults the inference algorithm produces. First, we have some
   The conditional probabilities of the sensor measurements                                         specific networks and show the posterior probabilities pro-
given the enemy agent locations can be computed directly                                            duced conditioned on different sets of sensor measurements.
from the expressions of the conditional probability. This                                           Second, we show how the average posterior probabilities
formulation of the problem, which relies only on the con-                                           vary as different parameters of the sensor network varies.
ditional probabilities as they are stated, i.e. the forward                                         Third, we examine how many iterations MCMC requires to
model, and not on evidential reasoning make computation                                             converge.
of likelihoods and thus inference easier to implement, faster,
and more accurate [17].                                                                             5.1 Specific Inference Examples
                                                                                                       In Figure 2, we see a sensor network. Here, LF corre-
4.3.4 Computing P (D|LEt )                                                                          sponds to the friendly agent and is designated by a plus sign,
  This conditional probability can be computed directly from                                        LEi corresponds to the ith enemy agent and is designated
the definition as presented in 4.1.6. The locations of the en-                                       by a circle, and LSj the jth sensor and is designated by an
emy agents are given by LEt . The expression will yield a                                           asterisk. The circle surrounding a sensor shows the extent
probability that is either 0 or 1 but not in between.                                               of its sensing range. The labels and circles for only a subset
LE 2


                                LS 68
                                                            LE 3                          LS 56
                                                                   LS 2                              LE 4




                                                                                             LE 0
                                                                             LF

                                                                          +       LS 36
                                                                                             LS 37



                                                                              LS 41
                                                                                  LE 1




                                          Figure 2: An Example Sensor Network


of sensors are drawn to reduce clutter in the diagram. The            each line passing through LF but without double counting.
following table provides the posterior probabilities resulting        For example, the probability of all of the enemies lying to
from inference when different sets of sensors are chosen to            one side of the north-south meridian is also approximately
sense. None of the sensors failed during the simulation runs          9% because of the approximate symmetry of the situation.
but the algorithm runs with ζ set to 0.01.                            We do not seek to compute the prior via this method but
                                                                      only to show where the prior comes from and demonstrate
          Posterior Probabilities for Figure 2                        the difficulty of computing it.
          Sensors Sensing    Posterior Probability                       We next consider the situation where sensors 56 and 68
          56, 68                       44                             have sensed and found enemy agents but the posterior has
          none                         51                             decreased to 44%. This may seem counterintuitive that
          2, 41                        80                             we have useful information and our posterior has dropped
          41                           81                             but with a fixed number of enemies, finding two of them
          41, 56, 68                  86                              in inessential locations decreases the probability of enemies
          36, 37, 41, 56, 68          93                              elsewhere. Specifically, the probability of enemy agents to
                                                                      the south and east of the friendly agent, where they were
   We analyze our results from a geometric perspective. It is         already less likely to be, is further reduced. Contrast this
important to note that our inference algorithm approaches             with the situation where only sensor 41 has sensed. Finding
the problem in a fundamentally different way than the dis-             an enemy where it was unlikely but more potentially useful
cussion that follows.                                                 increases the posterior probability substantially, in this case
   Let us start with the prior probability that the agent is          to 81%.
surrounded without any sensor measurements. In this in-                  Let us next consider the case where sensors 2 and 41 have
stance, the entire field is 100 by 100 and the friendly agent          both sensed. The posterior probability is lower than the sce-
is at 58, 40. Ignoring collinearity, which is of measure 0 any-       nario where only 41 has sensed. Intuitively, the sensor is al-
way, the friendly agent is only surrounded if all of the enemy        most collinear to the previously detected sensor and friendly
agents do not lie to one side of it. Let us analyze the case          agent and the discovery of this enemy is neither immensely
that all the enemy agents are to the north of the friendly            helpful nor it useless. It does however reduce the number of
agent. The probability of any one of them being there is              unknown enemies that might have been used to expand the
60% and the probability of all five of them being there is             convex hull in a useful direction.
approximately 8%.                                                        The next scenario we consider, where sensors 41, 56, and
   The alternative scenario is that each of the agents is to the      68 have sensed, is perhaps the most intuitive. We can see
south. The probability of any one of them being there is 40%          that the discovered enemies form a triangle that just encloses
and the probability of all five of them being there is approx-         the friendly agent. The sensors have less information than
imately 1%. Thus the probability of not being surrounded              we do though – they only know that enemy agents lie within
because all the agents lie to one side of the east-west merid-        their sensing radius. It may be that both enemy agents lie
ian is approximately 9%. We need to extend this case to               north of the sensors that have located them. If this were the
LS 35
                          LE 4
                                                                   LS 58
                                 LS 36                                        LE 3



                                                                  LS 75


                                                             LF

                                                             +
                                                 LS 86
                                                                              LE 1                  LE 2
                                          LS 8                                            LS 50
                             LE 0

                                                                                         LS 46



                                          Figure 3: Another Example Sensor Network


case, the friendly agent would not longer be surrounded.               Third, when there is evidence from multiple sensors sup-
   The last scenario, where sensors 36, 37, 41, 56, and 68 have      porting the presence of an enemy agent, as in the case where
sensed, shows the powerful effect of negative information.            sensors 8, 35, 36, 46, 50, 58, 75, and 86 have all sensed and
The addition of results from sensors 36 and 37 demonstrate           each enemy agent is covered by at least two sensors, the
that the enemy agent that sensor 41 found is not in the              posterior probability increases substantially.
sensable region of either sensor 36 and 37 and therefore must
lie farther south, where it is more likely to participate in         5.2 Effects of Changing Network Parameters
surrounding the friendly agent.                                         We now move on to aggregate results of how different
   Another example sensor network and inference results fol-         parameters of the scenario affect the posterior probability of
low. In this case, none of the sensors failed during the sim-        being able to detect that the friendly agent is surrounded.
ulation runs but the algorithm now runs with ζ set to 0.2.           It is possible to perform such analysis for any query and can
We will see how this difference affects inference.                     provide valuable insights into how the parameters of the
                                                                     sensor network help or hinder in answering specific types of
         Posterior Probabilities for Figure 3                        queries.
    Sensors Sensing               Posterior Probability                 Figure 4 shows the aggregate of two sets of ground truth
    none                                    67                       scenarios. The Surrounded case shows examples where the
    35, 50, 58                             70                        agent is actually surrounded. The Not Surrounded case
    8, 36, 58                               74                       shows examples where the agent is actually not surrounded.
    8, 36, 46                               75                       The distinction is determined by the simulator and not through
    8, 35, 50, 58                          78                        sensor measurement. The chart thus shows how the aver-
    8, 35, 36, 46, 50, 58, 75, 86          91                        age posterior of the inference algorithm across a variety of
   There is much to be learned from these results but we fo-         scenarios evolves. The number of enemies is set to four and
cus on the three essential points that were not demonstrated         the sensing radius is set to 12. As we see, more informa-
by the previous example. First, the prior probability of be-         tion brings the posterior probability closer and closer to the
ing surrounded is much higher. This is due solely to the fact        ground truth.
that the friendly agent is closer to the center of the region           Figure 5 shows the same scenario but with the number
where enemy agents are distributed.                                  of enemies varying now. The number of sensors is set to
   Second, even when the sensor evidence supports the as-            twenty-five and the sensing radius remains at twelve. We
sertion that the friendly agent is surrounded, as in the case        see that it is easier to locate enemies and believe that one is
where sensors 8, 35, 50, and 58 have sensed, the posterior           surrounded when there are more enemies present but that
probability is not as high as one may expect. The sensor             the effect is not strong.
network realizes due to the relatively high value of ζ that it          Figure 6 shows the same scenario but with the range of
is likely that one or more of these sensor measurements are          the sensor varying now. The number of sensors is set to
inaccurate and considers these possibilities in its inference.       twenty-five and the number of enemies is set at four. While
                                                                     it is easier to locate enemies with increased sensing range,
Posterior Probability Evolution as Number of Sensors Varies                                                           Posterior Probability Evolution as Number of Enemies Varies
                          1                                                                                                                     1
                                   Surrounded                                                                                                            Surrounded
                                   Not Surrounded                                                                                                        Not Surrounded
                         0.9                                                                                                                   0.9


                         0.8                                                                                                                   0.8


                         0.7                                                                                                                   0.7
Probability Surrounded




                                                                                                                      Probability Surrounded
                         0.6                                                                                                                   0.6


                         0.5                                                                                                                   0.5


                         0.4                                                                                                                   0.4


                         0.3                                                                                                                   0.3


                         0.2                                                                                                                   0.2


                         0.1                                                                                                                   0.1


                          0                                                                                                                     0
                               0    5        10       15        20      25       30          35        40   45   50                                  3       4            5         6          7          8              9        10   11
                                                                 Number of Sensors                                                                                                      Number of Enemies



                         Figure 4: Posterior Probability Evolution as Num-                                                                     Figure 5: Posterior Probability Evolution as Num-
                         ber of Sensors Varies                                                                                                 ber of Enemies Varies

                         the uncertainity associated with such information makes this                                                          information to the friendly agents to determine where en-
                         information less useful. These contrary forces appear to bal-                                                         emies may or may not be. These sensors will be subject
                         ance out and varying the sensor range appears to neither                                                              to noise as they will sometimes miss a robot going by or
                         help nor hurt inference.                                                                                              mistake a person walking by for an enemy robot.
                         5.3 MCMC Convergence                                                                                                  6.2 Intelligent Sensor Selection
                            Although MCMC converges in the limit to the exact pos-
                                                                                                                                                 While our algorithm can effectively leverage very limited
                         terior, it is useful to know how many iterations are required
                                                                                                                                               information, to run on an autonomous sensor network, it
                         in practice to obtain accurate results. It is unfortunately
                                                                                                                                               needs to be paired with a decentralized algorithm that runs
                         only feasible to compute analytical bounds for MCMC al-
                                                                                                                                               on each sensor node that decides when to sense and also
                         gorithms for only the simplest distributions though and the
                                                                                                                                               when and what type of information to communicate to its
                         conditions required by most bounding theorems are not sat-
                                                                                                                                               neighbors.
                         isfied by general distributions. However, it often suffices in
                                                                                                                                                 Such an algorithm will need to make tradeoffs between
                         practice to answer this question heuristically through ex-
                                                                                                                                               energy conservation and answering queries more effectively.
                         perimentation [15]. Figure 7 shows the standard deviation
                                                                                                                                               One way to address this problem is a cost-based scheme
                         of the MCMC posterior estimate errors before convergence
                                                                                                                                               that casts utility and battery depletion into a single cur-
                         occurs. Regardless of the number of queries, we find that
                                                                                                                                               rency [3]. While the cost of sensing, asking another node
                         the posterior estimate errors level off after approximately
                                                                                                                                               to sense, or sending information to another node will fall
                         15,000 steps of MCMC, a computation that takes only a
                                                                                                                                               directly out of a coherent energy-aware sensor model, the
                         few hundred milliseconds to perform.
                                                                                                                                               value of such information is intricately tied to the inference
                                                                                                                                               algorithm. Specifically, to prevent bias, the value of sensing
                         6.        FUTURE WORK                                                                                                 should be the expected reduction in entropy stemming from
                           While we feel that we have made significant progress with                                                            the acquisition of this information.
                         the methods presented in this paper, there are two major                                                                Constructing such an algorithm requires answering several
                         directions we plan to proceed with this work. First, we want                                                          difficult questions. First, how can sensors with only partial
                         to use this algorithm with a set of actual sensors. Second, we                                                        world state display utility-maximizing emergent behavior?
                         want the sensor network to guide its own sensing activities                                                           Second, what is the tradeoff between expending more energy
                         and direct sensing in such a way to minimize both sensing                                                             and acquiring a more accurate answer? Third, how can
                         and communication cost. We discuss these ideas in turn.                                                               sensors distribute work in a way that takes into account the
                                                                                                                                               effects of remaining power at the granularity of individual
                         6.1 Moving from Simulation to Actual Sensors                                                                          nodes?
                           We plan to use this algorithm to help mobile robots play-
                         ing laser tag with one another [14] determine where their
                         enemy agents may or may not be. For example, if a robot                                                               7. CONCLUSION
                         knew that a certain corridor was most likely free of enemy                                                              We have presented an inference algorithm for a sensor
                         agents, it could focus its searching efforts elsewhere.                                                                network to effectively answer generalized queries. Using
                           Specifically, we plan to use vibration sensors that will de-                                                         probabilistic techniques, the algorithm is able to answer
                         tect the absence or presence of enemy robots and route this                                                           these queries effectively with only partial information over
Posterior Probability Convergence as MCMC Progresses
                                                  Posterior Probability Evolution as Sensor Range Varies                                                           0.055
                         0.9                                                                                                                                                                                                                     0 queries
                                    Surrounded                                                                                                                                                                                                   2 queries
                                    Not Surrounded                                                                                                                  0.05                                                                         4 queries
                                                                                                                                                                                                                                                 6 queries
                         0.8
                                                                                                                                                                                                                                                 8 queries
                                                                                                                                                                   0.045




                                                                                                                     Standard Deviation of Posterior Probability
                         0.7
                                                                                                                                                                    0.04

                         0.6
Probability Surrounded




                                                                                                                                                                   0.035

                         0.5
                                                                                                                                                                    0.03

                         0.4
                                                                                                                                                                   0.025

                         0.3
                                                                                                                                                                    0.02


                         0.2                                                                                                                                       0.015


                         0.1                                                                                                                                        0.01


                          0                                                                                                                                        0.005
                               0              5                  10                   15                   20   25                                                         0     5                10                 15                     20               25
                                                                      Sensing Range                                                                                                         Number of MCMC Steps, in Thousands



        Figure 6: Posterior Probability Evolution as Sensing                                                                                                        Figure 7: Posterior Probability Convergence as
        Range Varies                                                                                                                                                MCMC Converges


        the world state. Moreover, it can reason coherently despite                                                                                                 [6] L. Guibas, and J. Stolfi. “Primitives for the
        the stochastic nature of the sensor data. The results shown                                                                                                     manipulation of general subdivisions and the
        demonstrate that the inference algorithm is highly accurate                                                                                                     computation of Voronio diagrams”. ACM Trans.
        in the scenario presented and is able to handle a wide range                                                                                                    Graph. 4, 74-123. 1985.
        of types of sensor information.                                                                                                                             [7] L. Guibas. “Sensing, Tracking and Reasoning with
                                                                                                                                                                        Relations”. IEEE Signal Processing Magazine. Volume:
                                                                                                                                                                        19 Issue: 2, March 2002.
        8.                         ACKNOWLEDGEMENTS                                                                                                                 [8] J. Hershberger and S. Suri. “Convex Hulls and Related
          This research is supported in part by DARPA’s MARS                                                                                                            Problems in Data Streams”. Proc. ACM
        Program (Contract number N66001-01-C-6018), DARPA’s                                                                                                             SIGMOD/PODS Workshop on Management and
        Sensor Information Technology Program (Contract number                                                                                                          Processing of Data Stream, 2003.
        F30602-00-C-0139), the National Science Foundation (CA-                                                                                                     [9] J. Kahn, R. Katz, and K. Pister. “Mobile networking
        REER grant number IIS-9876136 and regular grant number                                                                                                          for smart dust” Proc. ACM/IEEE Intl. Conf. on
        IIS-9877033), Intel Corporation, the Stanford Networking                                                                                                        Mobile Computing and Networking, 1999.
        Research Center, ONR MURI grant N00014-02-1-0720, and
                                                                                                                                                                    [10] U. Lerner. Hybrid Bayesian Networks for Reasoning
        a grant from the DARPA Software for Distributed Robotics
                                                                                                                                                                        about Complex Systems Ph.D. Thesis, Stanford
        program under a subcontract from SRI, all of which is grate-
                                                                                                                                                                        University, October 2002.
        fully acknowledged. The views and conclusions contained in
                                                                                                                                                                    [11] J. Pearl. Probabilistic Reasoning in Intelligent
        this document are those of the authors and should not be
                                                                                                                                                                        Systems: Networks of Plausible Inference. Morgan
        interpreted as necessarily representing official policies or en-
                                                                                                                                                                        Kaufmann Publishers, Inc., 1988.
        dorsements, either expressed or implied, of the United States
        Government or any of the sponsoring institutions.                                                                                                           [12] G. Pottie and W. Kaiser. “Wireless integrated
                                                                                                                                                                        network sensors.” Proc. ACM Communications, 2000.
                                                                                                                                                                    [13] T. Richardson. “Approximation of planar convex sets
        9.                         REFERENCES                                                                                                                           from hyperplane probes”. Proc. Discrete and
        [1] G. Battista, R. Tamassia, and I. Tollis. Constrained                                                                                                        Computational Geometry, 1997.
            Visibility Representation of Graphics Proc. IPL, 1992.                                                                                                  [14] M. Rosencrantz, G. Gordon, and S. Thrun. “Locating
        [2] M. Berg, M. Kreveld, M. Overmars, and                                                                                                                       moving entities in indoor environments with teams of
            O. Schwarzkopf. Computational Geometry: Algorithms                                                                                                          mobile robots.” Proc. AAMAS-2003.
            and Applications. Springer-Verlag, 2000.                                                                                                                [15] J. Rosenthal. “Minorization Conditions and
        [3] J. Byers and G. Nasser. “Utility-based decision-making                                                                                                      Convergence Rates for Markov Chain Monte Carlo.”
            in wireless sensor networks” Proc. IEEE MobiHOC                                                                                                             Proc. Journal of the American Statistical Association,
            Symposium, 2000                                                                                                                                             1995.
        [4] T. Cover and J. Thomas Elements of Information                                                                                                          [16] J. Rourke Computational Geometry in C Cambridge
            Theory John Wiley and Sons, Inc., 1991.                                                                                                                     University Press, 1998.
        [5] W. Gilks, S. Richardson, and D. Spiegelhalter. Markov                                                                                                   [17] S. Thrun. “Learning occupancy grids with forward
            Chain Monte Carlo in Practice. Chapman and Hall /                                                                                                           models”. IEEE Conference on Intelligent Robots and
            CRC, 1996.                                                                                                                                                  Systems. 2001.

Weitere ähnliche Inhalte

Was ist angesagt?

Iaetsd security and privacy enhancement using jammer in
Iaetsd security and privacy enhancement using jammer inIaetsd security and privacy enhancement using jammer in
Iaetsd security and privacy enhancement using jammer inIaetsd Iaetsd
 
Ballpark Figure Algorithms for Data Broadcast in Wireless Networks
Ballpark Figure Algorithms for Data Broadcast in Wireless NetworksBallpark Figure Algorithms for Data Broadcast in Wireless Networks
Ballpark Figure Algorithms for Data Broadcast in Wireless NetworksEditor IJCATR
 
On Multihop Distances in Wireless Sensor Networks with Random Node Locations
On Multihop Distances in Wireless Sensor Networks with Random Node LocationsOn Multihop Distances in Wireless Sensor Networks with Random Node Locations
On Multihop Distances in Wireless Sensor Networks with Random Node Locationsambitlick
 
Simulation of Direct Sequence Spread Spectrum for Wireless Communication Syst...
Simulation of Direct Sequence Spread Spectrum for Wireless Communication Syst...Simulation of Direct Sequence Spread Spectrum for Wireless Communication Syst...
Simulation of Direct Sequence Spread Spectrum for Wireless Communication Syst...ijtsrd
 
Energy-Efficient Protocol for Deterministic and Probabilistic Coverage in Sen...
Energy-Efficient Protocol for Deterministic and Probabilistic Coverage in Sen...Energy-Efficient Protocol for Deterministic and Probabilistic Coverage in Sen...
Energy-Efficient Protocol for Deterministic and Probabilistic Coverage in Sen...ambitlick
 
FUZZY-CONTROLLED POWER-AWARE PROACTIVE-ACKNOWLEDGEMENT BASED BROADCASTING (FP...
FUZZY-CONTROLLED POWER-AWARE PROACTIVE-ACKNOWLEDGEMENT BASED BROADCASTING (FP...FUZZY-CONTROLLED POWER-AWARE PROACTIVE-ACKNOWLEDGEMENT BASED BROADCASTING (FP...
FUZZY-CONTROLLED POWER-AWARE PROACTIVE-ACKNOWLEDGEMENT BASED BROADCASTING (FP...cscpconf
 
Energy efficient routing_in_wireless_sensor_networks
Energy efficient routing_in_wireless_sensor_networksEnergy efficient routing_in_wireless_sensor_networks
Energy efficient routing_in_wireless_sensor_networksGr Patel
 
Z02417321735
Z02417321735Z02417321735
Z02417321735IJMER
 
Neural network based energy efficient clustering and routing
Neural network based energy efficient clustering and routingNeural network based energy efficient clustering and routing
Neural network based energy efficient clustering and routingambitlick
 
QRC-ESPRIT Method for Wideband Signals
QRC-ESPRIT Method for Wideband SignalsQRC-ESPRIT Method for Wideband Signals
QRC-ESPRIT Method for Wideband SignalsIDES Editor
 
Final year project list for the year 2012
Final year project list for the year 2012Final year project list for the year 2012
Final year project list for the year 2012Muhammad Farhan
 
Efficient Belief Propagation in Depth Finding
Efficient Belief Propagation in Depth FindingEfficient Belief Propagation in Depth Finding
Efficient Belief Propagation in Depth FindingSamantha Luber
 
Lab Seminar 2009 07 08 Message Drop Reduction
Lab Seminar 2009 07 08  Message Drop ReductionLab Seminar 2009 07 08  Message Drop Reduction
Lab Seminar 2009 07 08 Message Drop Reductiontharindanv
 
Iaetsd game theory and auctions for cooperation in
Iaetsd game theory and auctions for cooperation inIaetsd game theory and auctions for cooperation in
Iaetsd game theory and auctions for cooperation inIaetsd Iaetsd
 
Energy Aware Shortest Path Minded SPIN
Energy Aware Shortest Path Minded SPINEnergy Aware Shortest Path Minded SPIN
Energy Aware Shortest Path Minded SPINPrajwal Panchmahalkar
 

Was ist angesagt? (19)

Iaetsd security and privacy enhancement using jammer in
Iaetsd security and privacy enhancement using jammer inIaetsd security and privacy enhancement using jammer in
Iaetsd security and privacy enhancement using jammer in
 
Ballpark Figure Algorithms for Data Broadcast in Wireless Networks
Ballpark Figure Algorithms for Data Broadcast in Wireless NetworksBallpark Figure Algorithms for Data Broadcast in Wireless Networks
Ballpark Figure Algorithms for Data Broadcast in Wireless Networks
 
On Multihop Distances in Wireless Sensor Networks with Random Node Locations
On Multihop Distances in Wireless Sensor Networks with Random Node LocationsOn Multihop Distances in Wireless Sensor Networks with Random Node Locations
On Multihop Distances in Wireless Sensor Networks with Random Node Locations
 
Ks3618311836
Ks3618311836Ks3618311836
Ks3618311836
 
Simulation of Direct Sequence Spread Spectrum for Wireless Communication Syst...
Simulation of Direct Sequence Spread Spectrum for Wireless Communication Syst...Simulation of Direct Sequence Spread Spectrum for Wireless Communication Syst...
Simulation of Direct Sequence Spread Spectrum for Wireless Communication Syst...
 
1004.0152v1
1004.0152v11004.0152v1
1004.0152v1
 
Energy-Efficient Protocol for Deterministic and Probabilistic Coverage in Sen...
Energy-Efficient Protocol for Deterministic and Probabilistic Coverage in Sen...Energy-Efficient Protocol for Deterministic and Probabilistic Coverage in Sen...
Energy-Efficient Protocol for Deterministic and Probabilistic Coverage in Sen...
 
FUZZY-CONTROLLED POWER-AWARE PROACTIVE-ACKNOWLEDGEMENT BASED BROADCASTING (FP...
FUZZY-CONTROLLED POWER-AWARE PROACTIVE-ACKNOWLEDGEMENT BASED BROADCASTING (FP...FUZZY-CONTROLLED POWER-AWARE PROACTIVE-ACKNOWLEDGEMENT BASED BROADCASTING (FP...
FUZZY-CONTROLLED POWER-AWARE PROACTIVE-ACKNOWLEDGEMENT BASED BROADCASTING (FP...
 
47 50
47 5047 50
47 50
 
Energy efficient routing_in_wireless_sensor_networks
Energy efficient routing_in_wireless_sensor_networksEnergy efficient routing_in_wireless_sensor_networks
Energy efficient routing_in_wireless_sensor_networks
 
Z02417321735
Z02417321735Z02417321735
Z02417321735
 
Neural network based energy efficient clustering and routing
Neural network based energy efficient clustering and routingNeural network based energy efficient clustering and routing
Neural network based energy efficient clustering and routing
 
QRC-ESPRIT Method for Wideband Signals
QRC-ESPRIT Method for Wideband SignalsQRC-ESPRIT Method for Wideband Signals
QRC-ESPRIT Method for Wideband Signals
 
Final year project list for the year 2012
Final year project list for the year 2012Final year project list for the year 2012
Final year project list for the year 2012
 
Efficient Belief Propagation in Depth Finding
Efficient Belief Propagation in Depth FindingEfficient Belief Propagation in Depth Finding
Efficient Belief Propagation in Depth Finding
 
Lab Seminar 2009 07 08 Message Drop Reduction
Lab Seminar 2009 07 08  Message Drop ReductionLab Seminar 2009 07 08  Message Drop Reduction
Lab Seminar 2009 07 08 Message Drop Reduction
 
Iaetsd game theory and auctions for cooperation in
Iaetsd game theory and auctions for cooperation inIaetsd game theory and auctions for cooperation in
Iaetsd game theory and auctions for cooperation in
 
Ijcnc050207
Ijcnc050207Ijcnc050207
Ijcnc050207
 
Energy Aware Shortest Path Minded SPIN
Energy Aware Shortest Path Minded SPINEnergy Aware Shortest Path Minded SPIN
Energy Aware Shortest Path Minded SPIN
 

Ähnlich wie Probablistic Inference With Limited Information

International Journal of Engineering Research and Development (IJERD)
International Journal of Engineering Research and Development (IJERD)International Journal of Engineering Research and Development (IJERD)
International Journal of Engineering Research and Development (IJERD)IJERD Editor
 
Volume 2-issue-6-2102-2107
Volume 2-issue-6-2102-2107Volume 2-issue-6-2102-2107
Volume 2-issue-6-2102-2107Editor IJARCET
 
Volume 2-issue-6-2102-2107
Volume 2-issue-6-2102-2107Volume 2-issue-6-2102-2107
Volume 2-issue-6-2102-2107Editor IJARCET
 
Detection of Replica Nodes in Wireless Sensor Network: A Survey
Detection of Replica Nodes in Wireless Sensor Network: A SurveyDetection of Replica Nodes in Wireless Sensor Network: A Survey
Detection of Replica Nodes in Wireless Sensor Network: A SurveyIOSR Journals
 
Distributed fault tolerant event
Distributed fault tolerant eventDistributed fault tolerant event
Distributed fault tolerant eventijscai
 
International Journal of Engineering Research and Development
International Journal of Engineering Research and DevelopmentInternational Journal of Engineering Research and Development
International Journal of Engineering Research and DevelopmentIJERD Editor
 
On-line Fault diagnosis of Arbitrary Connected Networks
On-line Fault diagnosis of Arbitrary Connected NetworksOn-line Fault diagnosis of Arbitrary Connected Networks
On-line Fault diagnosis of Arbitrary Connected NetworksIDES Editor
 
Node detection technique for node replication attack in mobile sensor network
Node detection technique for node replication attack in mobile sensor networkNode detection technique for node replication attack in mobile sensor network
Node detection technique for node replication attack in mobile sensor networkRamesh Patriotic
 
Iisrt 6-node detection technique for node replication attack in mobile sensor...
Iisrt 6-node detection technique for node replication attack in mobile sensor...Iisrt 6-node detection technique for node replication attack in mobile sensor...
Iisrt 6-node detection technique for node replication attack in mobile sensor...IISRTJournals
 
Node detection technique for node replication attack in mobile sensor network
Node detection technique for node replication attack in mobile sensor networkNode detection technique for node replication attack in mobile sensor network
Node detection technique for node replication attack in mobile sensor networkIISRT
 
Energy efficient protocol in wsn WITH ACO
Energy efficient protocol in wsn WITH ACOEnergy efficient protocol in wsn WITH ACO
Energy efficient protocol in wsn WITH ACONeelam Choudhary
 
Support Recovery with Sparsely Sampled Free Random Matrices for Wideband Cogn...
Support Recovery with Sparsely Sampled Free Random Matrices for Wideband Cogn...Support Recovery with Sparsely Sampled Free Random Matrices for Wideband Cogn...
Support Recovery with Sparsely Sampled Free Random Matrices for Wideband Cogn...IJMTST Journal
 
A NEW SECURE LOCALIZATION APPROACH OF WIRELESS SENSOR NODES INTHE PRESENCE OF...
A NEW SECURE LOCALIZATION APPROACH OF WIRELESS SENSOR NODES INTHE PRESENCE OF...A NEW SECURE LOCALIZATION APPROACH OF WIRELESS SENSOR NODES INTHE PRESENCE OF...
A NEW SECURE LOCALIZATION APPROACH OF WIRELESS SENSOR NODES INTHE PRESENCE OF...ijcseit
 
A new secure localization approach of wireless sensor nodes in the presence o...
A new secure localization approach of wireless sensor nodes in the presence o...A new secure localization approach of wireless sensor nodes in the presence o...
A new secure localization approach of wireless sensor nodes in the presence o...ijcseit
 

Ähnlich wie Probablistic Inference With Limited Information (20)

D0952126
D0952126D0952126
D0952126
 
International Journal of Engineering Research and Development (IJERD)
International Journal of Engineering Research and Development (IJERD)International Journal of Engineering Research and Development (IJERD)
International Journal of Engineering Research and Development (IJERD)
 
Volume 2-issue-6-2102-2107
Volume 2-issue-6-2102-2107Volume 2-issue-6-2102-2107
Volume 2-issue-6-2102-2107
 
Volume 2-issue-6-2102-2107
Volume 2-issue-6-2102-2107Volume 2-issue-6-2102-2107
Volume 2-issue-6-2102-2107
 
Detection of Replica Nodes in Wireless Sensor Network: A Survey
Detection of Replica Nodes in Wireless Sensor Network: A SurveyDetection of Replica Nodes in Wireless Sensor Network: A Survey
Detection of Replica Nodes in Wireless Sensor Network: A Survey
 
P017129296
P017129296P017129296
P017129296
 
Distributed fault tolerant event
Distributed fault tolerant eventDistributed fault tolerant event
Distributed fault tolerant event
 
International Journal of Engineering Research and Development
International Journal of Engineering Research and DevelopmentInternational Journal of Engineering Research and Development
International Journal of Engineering Research and Development
 
On-line Fault diagnosis of Arbitrary Connected Networks
On-line Fault diagnosis of Arbitrary Connected NetworksOn-line Fault diagnosis of Arbitrary Connected Networks
On-line Fault diagnosis of Arbitrary Connected Networks
 
Node detection technique for node replication attack in mobile sensor network
Node detection technique for node replication attack in mobile sensor networkNode detection technique for node replication attack in mobile sensor network
Node detection technique for node replication attack in mobile sensor network
 
Iisrt 6-node detection technique for node replication attack in mobile sensor...
Iisrt 6-node detection technique for node replication attack in mobile sensor...Iisrt 6-node detection technique for node replication attack in mobile sensor...
Iisrt 6-node detection technique for node replication attack in mobile sensor...
 
Node detection technique for node replication attack in mobile sensor network
Node detection technique for node replication attack in mobile sensor networkNode detection technique for node replication attack in mobile sensor network
Node detection technique for node replication attack in mobile sensor network
 
10.1.1.118.4231
10.1.1.118.423110.1.1.118.4231
10.1.1.118.4231
 
Ijetr012022
Ijetr012022Ijetr012022
Ijetr012022
 
Energy efficient protocol in wsn WITH ACO
Energy efficient protocol in wsn WITH ACOEnergy efficient protocol in wsn WITH ACO
Energy efficient protocol in wsn WITH ACO
 
Support Recovery with Sparsely Sampled Free Random Matrices for Wideband Cogn...
Support Recovery with Sparsely Sampled Free Random Matrices for Wideband Cogn...Support Recovery with Sparsely Sampled Free Random Matrices for Wideband Cogn...
Support Recovery with Sparsely Sampled Free Random Matrices for Wideband Cogn...
 
Bz02516281633
Bz02516281633Bz02516281633
Bz02516281633
 
A NEW SECURE LOCALIZATION APPROACH OF WIRELESS SENSOR NODES INTHE PRESENCE OF...
A NEW SECURE LOCALIZATION APPROACH OF WIRELESS SENSOR NODES INTHE PRESENCE OF...A NEW SECURE LOCALIZATION APPROACH OF WIRELESS SENSOR NODES INTHE PRESENCE OF...
A NEW SECURE LOCALIZATION APPROACH OF WIRELESS SENSOR NODES INTHE PRESENCE OF...
 
A new secure localization approach of wireless sensor nodes in the presence o...
A new secure localization approach of wireless sensor nodes in the presence o...A new secure localization approach of wireless sensor nodes in the presence o...
A new secure localization approach of wireless sensor nodes in the presence o...
 
O010528791
O010528791O010528791
O010528791
 

Kürzlich hochgeladen

Real Time Object Detection Using Open CV
Real Time Object Detection Using Open CVReal Time Object Detection Using Open CV
Real Time Object Detection Using Open CVKhem
 
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...Miguel Araújo
 
Why Teams call analytics are critical to your entire business
Why Teams call analytics are critical to your entire businessWhy Teams call analytics are critical to your entire business
Why Teams call analytics are critical to your entire businesspanagenda
 
Automating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps ScriptAutomating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps Scriptwesley chun
 
GenAI Risks & Security Meetup 01052024.pdf
GenAI Risks & Security Meetup 01052024.pdfGenAI Risks & Security Meetup 01052024.pdf
GenAI Risks & Security Meetup 01052024.pdflior mazor
 
AXA XL - Insurer Innovation Award Americas 2024
AXA XL - Insurer Innovation Award Americas 2024AXA XL - Insurer Innovation Award Americas 2024
AXA XL - Insurer Innovation Award Americas 2024The Digital Insurer
 
Apidays New York 2024 - The value of a flexible API Management solution for O...
Apidays New York 2024 - The value of a flexible API Management solution for O...Apidays New York 2024 - The value of a flexible API Management solution for O...
Apidays New York 2024 - The value of a flexible API Management solution for O...apidays
 
Apidays New York 2024 - Accelerating FinTech Innovation by Vasa Krishnan, Fin...
Apidays New York 2024 - Accelerating FinTech Innovation by Vasa Krishnan, Fin...Apidays New York 2024 - Accelerating FinTech Innovation by Vasa Krishnan, Fin...
Apidays New York 2024 - Accelerating FinTech Innovation by Vasa Krishnan, Fin...apidays
 
A Year of the Servo Reboot: Where Are We Now?
A Year of the Servo Reboot: Where Are We Now?A Year of the Servo Reboot: Where Are We Now?
A Year of the Servo Reboot: Where Are We Now?Igalia
 
DBX First Quarter 2024 Investor Presentation
DBX First Quarter 2024 Investor PresentationDBX First Quarter 2024 Investor Presentation
DBX First Quarter 2024 Investor PresentationDropbox
 
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...Connector Corner: Accelerate revenue generation using UiPath API-centric busi...
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...DianaGray10
 
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...Drew Madelung
 
EMPOWERMENT TECHNOLOGY GRADE 11 QUARTER 2 REVIEWER
EMPOWERMENT TECHNOLOGY GRADE 11 QUARTER 2 REVIEWEREMPOWERMENT TECHNOLOGY GRADE 11 QUARTER 2 REVIEWER
EMPOWERMENT TECHNOLOGY GRADE 11 QUARTER 2 REVIEWERMadyBayot
 
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law DevelopmentsTrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law DevelopmentsTrustArc
 
Apidays Singapore 2024 - Modernizing Securities Finance by Madhu Subbu
Apidays Singapore 2024 - Modernizing Securities Finance by Madhu SubbuApidays Singapore 2024 - Modernizing Securities Finance by Madhu Subbu
Apidays Singapore 2024 - Modernizing Securities Finance by Madhu Subbuapidays
 
"I see eyes in my soup": How Delivery Hero implemented the safety system for ...
"I see eyes in my soup": How Delivery Hero implemented the safety system for ..."I see eyes in my soup": How Delivery Hero implemented the safety system for ...
"I see eyes in my soup": How Delivery Hero implemented the safety system for ...Zilliz
 
MINDCTI Revenue Release Quarter One 2024
MINDCTI Revenue Release Quarter One 2024MINDCTI Revenue Release Quarter One 2024
MINDCTI Revenue Release Quarter One 2024MIND CTI
 
FWD Group - Insurer Innovation Award 2024
FWD Group - Insurer Innovation Award 2024FWD Group - Insurer Innovation Award 2024
FWD Group - Insurer Innovation Award 2024The Digital Insurer
 
2024: Domino Containers - The Next Step. News from the Domino Container commu...
2024: Domino Containers - The Next Step. News from the Domino Container commu...2024: Domino Containers - The Next Step. News from the Domino Container commu...
2024: Domino Containers - The Next Step. News from the Domino Container commu...Martijn de Jong
 

Kürzlich hochgeladen (20)

Real Time Object Detection Using Open CV
Real Time Object Detection Using Open CVReal Time Object Detection Using Open CV
Real Time Object Detection Using Open CV
 
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
 
Why Teams call analytics are critical to your entire business
Why Teams call analytics are critical to your entire businessWhy Teams call analytics are critical to your entire business
Why Teams call analytics are critical to your entire business
 
Automating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps ScriptAutomating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps Script
 
GenAI Risks & Security Meetup 01052024.pdf
GenAI Risks & Security Meetup 01052024.pdfGenAI Risks & Security Meetup 01052024.pdf
GenAI Risks & Security Meetup 01052024.pdf
 
AXA XL - Insurer Innovation Award Americas 2024
AXA XL - Insurer Innovation Award Americas 2024AXA XL - Insurer Innovation Award Americas 2024
AXA XL - Insurer Innovation Award Americas 2024
 
Apidays New York 2024 - The value of a flexible API Management solution for O...
Apidays New York 2024 - The value of a flexible API Management solution for O...Apidays New York 2024 - The value of a flexible API Management solution for O...
Apidays New York 2024 - The value of a flexible API Management solution for O...
 
Apidays New York 2024 - Accelerating FinTech Innovation by Vasa Krishnan, Fin...
Apidays New York 2024 - Accelerating FinTech Innovation by Vasa Krishnan, Fin...Apidays New York 2024 - Accelerating FinTech Innovation by Vasa Krishnan, Fin...
Apidays New York 2024 - Accelerating FinTech Innovation by Vasa Krishnan, Fin...
 
A Year of the Servo Reboot: Where Are We Now?
A Year of the Servo Reboot: Where Are We Now?A Year of the Servo Reboot: Where Are We Now?
A Year of the Servo Reboot: Where Are We Now?
 
DBX First Quarter 2024 Investor Presentation
DBX First Quarter 2024 Investor PresentationDBX First Quarter 2024 Investor Presentation
DBX First Quarter 2024 Investor Presentation
 
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
 
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...Connector Corner: Accelerate revenue generation using UiPath API-centric busi...
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...
 
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
 
EMPOWERMENT TECHNOLOGY GRADE 11 QUARTER 2 REVIEWER
EMPOWERMENT TECHNOLOGY GRADE 11 QUARTER 2 REVIEWEREMPOWERMENT TECHNOLOGY GRADE 11 QUARTER 2 REVIEWER
EMPOWERMENT TECHNOLOGY GRADE 11 QUARTER 2 REVIEWER
 
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law DevelopmentsTrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
 
Apidays Singapore 2024 - Modernizing Securities Finance by Madhu Subbu
Apidays Singapore 2024 - Modernizing Securities Finance by Madhu SubbuApidays Singapore 2024 - Modernizing Securities Finance by Madhu Subbu
Apidays Singapore 2024 - Modernizing Securities Finance by Madhu Subbu
 
"I see eyes in my soup": How Delivery Hero implemented the safety system for ...
"I see eyes in my soup": How Delivery Hero implemented the safety system for ..."I see eyes in my soup": How Delivery Hero implemented the safety system for ...
"I see eyes in my soup": How Delivery Hero implemented the safety system for ...
 
MINDCTI Revenue Release Quarter One 2024
MINDCTI Revenue Release Quarter One 2024MINDCTI Revenue Release Quarter One 2024
MINDCTI Revenue Release Quarter One 2024
 
FWD Group - Insurer Innovation Award 2024
FWD Group - Insurer Innovation Award 2024FWD Group - Insurer Innovation Award 2024
FWD Group - Insurer Innovation Award 2024
 
2024: Domino Containers - The Next Step. News from the Domino Container commu...
2024: Domino Containers - The Next Step. News from the Domino Container commu...2024: Domino Containers - The Next Step. News from the Domino Container commu...
2024: Domino Containers - The Next Step. News from the Domino Container commu...
 

Probablistic Inference With Limited Information

  • 1. A Probabilistic Approach to Inference with Limited Information in Sensor Networks Rahul Biswas Sebastian Thrun Leonidas J. Guibas Stanford University Stanford University Stanford University Stanford, CA 94309 USA Stanford, CA 94309 USA Stanford, CA 94309 USA rahul@cs.stanford.edu thrun@cs.stanford.edu guibas@cs.stanford.edu ABSTRACT good balance between increasing network longevity by de- We present a methodology for a sensor network to answer creasing costs on the one hand and providing useful answers queries with limited and stochastic information using prob- to queries on the other. abilistic techniques. This capability is useful in that it al- One aspect in which algorithms used for sensor networks lows sensor networks to answer queries effectively even when must differ from traditional algorithms is that they must present information is partially corrupt and when more in- be able to reason with incomplete and noisy information. formation is unavailable or too costly to obtain. We use a Some areas of interest may lie outside sensor range. Any Bayesian network to model the sensor network and Markov information, if available, is not free but has a sensing cost Chain Monte Carlo sampling to perform approximate infer- and comes at the expense of decreased network longevity. ence. We demonstrate our technique on the specific problem Also, real sensors do not always operate as they should and of determining whether a friendly agent is surrounded by en- any algorithm that deals with sensor information must be emy agents and present simulation results for it. able to handle partial corruption of the input data. We present an algorithm that leverages both limited and stochastic sensor data to probabilistically answer queries 1. INTRODUCTION normally requiring complete world state to answer correctly. Recent advances in sensor and communication technolo- By answering probabilistically, i.e. by assigning a probabil- gies have enabled the construction of miniature nodes that ity to each possible answer, we provide the inquirer with combine one or more sensors, a processor and memory, plus more useful information than only one answer whose cer- one or more wireless radio links [9, 12]. Large numbers of tainty is not known. This technique is applicable to a broad these sensors can then be jointly deployed to form what range of queries and we demonstrate its effectiveness on has come to be known as a sensor network. Such net- an open problem in sensor networks, that of determining works enable decentralized monitoring applications and can whether a friendly agent with known location is surrounded be used to sense large-area phenomena that may be impos- by enemy agents with unknown but stochastically sensable sible to monitor in any other way. Sensor networks can be locations [7]. used in a variety of civilian and military contexts, including We structure the probabilistic dependencies between sen- factory automation, inventory management, environmental sor measurements, enemy agent locations, and the query of and habitat monitoring, health-care delivery, security and interest as a Bayesian network. We then enter the sensor ob- surveillance, and battlefield awareness. servations into the Bayesian network and use Markov Chain In some settings, sensors enjoy the use of external power Monte Carlo (MCMC) methods to perform approximate in- sources and need not factor the energy costs of computation, ference on the Bayesian network and extract a probabilistic sensing, and communication into their decisions. Informa- answer to our original query. tion theory tells us that more information is always bet- Our experimental results demonstrate the effectiveness of ter [4] and any algorithm analyzing sensing results should our inference algorithm on the problem of determining sur- be able to perform at least the same and hopefully better roundedness on a wide variety of sensor measurements. The with more information. In most actual deployment scenar- inference algorithm consistently provides accurate estimates ios however, sensors must operate with limited power sources of the posterior probability of being surrounded. It demon- and carefully ration their energy utilization. In this world, strates the ability to incorporate both positive and negative computation, sensing, and communication comes at a cost information, to use additional information to disambiguate and algorithms designed for sensor networks must strike a almost colinear points, to cope effectively with sensor failure, and provide useful prior probabilities of being surrounded without any sensor measurements at all. Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are 2. PROBLEM DEFINITION not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to 2.1 Overview republish, to post on servers or to redistribute to lists, requires prior specific We seek to determine whether an agent F is surrounded permission and/or a fee. IPSN’04, April 26–27, 2004, Berkeley, California, USA. by N enemy agents E1 . . . EN . Let the location of agent Copyright 2004 ACM 1-58113-846-6/04/0004 ...$5.00. F be given as LF and the locations of agents E1 . . . EN be
  • 2. given by LE1 . . . LEN respectively. Let the assertion D refer to the condition that LF lies in the convex hull generated by points LE1 . . . LEN , which we will henceforth refer to in aggregate as simply LE. We assume that LF and N are known. We have no prior knowledge over the LE but form posteriors over them from the evidence gathered by the sensor measurements. 2.2 Sensor Model Sensors can detect the presence of enemy agents in a cir- cular region surrounding the sensor. We have a total of M sensors S1 . . . SM at our disposal. Let the sensor locations ˆ be given by LS1 . . . LSM . For a given sensor Si , let di equal 1 if there exists one or more enemy agents inside the circle with its center collocated with the sensor and radius R, and ˆ 0 otherwise. In other words, di is a boolean variable that is the response sensor Si should obtain if it is operating cor- rectly. Let di be the value sensor Si actually obtains and Figure 1: Bayesian Network with 3 sensor measure- shares with other sensors. The variable di depends stochas- ments and 4 enemy agents ˆ tically on di as follows: algorithms exist that compute the convex hull of N points ˆ   ¡ ¢ di : with probability ζ in the plane in Θ(N · logN ) time [16]. 1−ζ di = 0 : with probability 2 (1) In fact, convex hull inclusion can be tested directly in 1−ζ 1 : with probability 2 . O(N ) time. Let CH represent the convex hull of LE and let CH1 . . . CHS represent the enemy agents, in order, lying Setting ζ to 1 is equivalent to having perfect sensors and on this hull. S is less than or equal to N . LF lies within setting ζ to 0 is equivalent to having no sensors. Typically, CH if and only if either of the following conditions holds: ζ will be quite high since sensors usually tend to work cor- rectly. ∀s ∈ S, CCW (CHs , CHs+1 , LF ) (2) 2.3 Objective Function Our objective is to determine, given LF and a set of or P stochastic sensor measurements R1 . . . RP , the posterior ∀s ∈ S, CW (CHs , CHs+1 , LF ). (3) probability of D. The set of sensor measurements may be empty. Here, CHS+1 wraps around to CH1 , CCW (a, b, c) refers We do not seek to address the issues of network control to the assertion that point c is strictly counterclockwise to and communication in this paper. While these are impor- ray ab, and CW (a, b, c) refers to the assertion that point c tant topics and are revisited in Section 6.2, our inference al- is strictly clockwise to ray ab. gorithm works independently of the control algorithm that Closest in spirit to our effort are techniques that try to decides who will sense and manages inter-sensor communi- approximate a convex body using a number of probes [1], cation. or those that try to maintain approximate convex hulls in a data stream of points, using bounded storage [8]. These works both address similar problems but their methods can- 3. PRIOR WORK not be applied to our problem at hand. The traditional approach to solving this sort of problem is to determine the values of each of the variables of inter- est and then use standard techniques to answer the query 4. OUR APPROACH given the full world state. The potency of our approach lies We show in this section how the query, in this case, whether in its ability to take partial world state and estimate the the friendly agent is surrounded by enemy agents can be rep- complete world state in expectation. By doing so, we can resented by a variable in a Bayesian network. The proba- use the same off-the-shelf techniques the other algorithms bilistic relationships between sensors, enemy agent locations, can. In fact, in the example we have chosen to showcase, and the query are faithfully represented by the Bayesian net- that of determining whether a friendly agent is surrounded, work. Thus the problem of answering the query reduces to we use an unmodified traditional computational geometry computing the posterior probability of the variable in the algorithm and simply plug it into our inference. This allows network representing our query. We use MCMC sampling us to probabilistically answer any other query just as easily to perform this inference. We start with the formal defini- by just plugging in existing algorithms requiring full world tion of our Bayesian network, identify why exact inference is state into the core of our MCMC sampler. not possible, and then discuss our MCMC implementation. For the specific problem at hand, we can determine whether the friendly agent lies within the convex hull of enemy agents 4.1 Bayesian Network by starting with the exact values of LE, computing their A Bayesian network [11] is characterized by its variables convex hull, and then determining whether LF lies within and the prior and conditional probabilities relating the vari- this convex hull. Many well-known computational geometry ables to one another. We start with the variables and what
  • 3. they represent and then move on to their conditional prob- containing each R variable along with each of the LE vari- abilities. Figure 1 shows a Bayesian Network for the case of ables. The inadequacy of multivariate Gaussians would force three sensor measurements and four enemy agents. In prac- us to perform discretization of the joint space. The expo- tice, the network structure will vary depending on the num- nential blowup that arises from this enormous state space ber of sensor measurements and number of enemy agents. would render the problem computationally intractable. 4.1.1 Variable 1: Sensor Measurements 4.3 MCMC for Approximate Inference Let there be P sensor measurement variables R1 . . . RP in Fortunately, we can get an accurate estimate of the poste- our Bayesian network. These variables are identical to the rior probability of interest, P (D|R1 . . . RP ). We do this with variables mentioned in Section 2 and contain both the index the Metropolis algorithm, an instance of a MCMC method of the sensor it originated from and the stochastic value d that performs approximate inference on Bayesian networks. that the sensor perceived. These variables are observed in It is equipped to handle the continuous nature of this dis- the Bayesian network. tribution as it samples evenly over the entire joint space. We follow the treatment of Metropolis and use the termi- 4.1.2 Variable 2: Enemy Agent Locations nology as presented in [5]. The function f (X) we want to Let there be N enemy agent location variables LE1 . . . LEN estimate is the variable D and the space Xt we sample from in our Bayesian network. These variables are also identical is the joint distribution over LE. Let us choose to draw T to the variables mentioned in Section 2. The variables are samples using the Metropolis algorithm and let the hypothe- not observed in the Bayesian network, i.e., their values are sis over LE while taking sample t be LE t . The starting state not known to the inference algorithm. By doing inference often systematically biases the initial samples so we discard however, it is possible to form posterior probability distri- the first ι · T samples and estimate our posterior probability butions over them. of interest as the mean of the remaining samples: 4.1.3 Prior Probability: Enemy Agent Locations T 1   We assume that the enemy agents are independently and P (D|R1 . . . RP ) = P (D|LE t ) (4) uniformly distributed across a bounded rectangular region. (1 − ι) · T t=ι·T 4.1.4 Variable 3: Whether the Friendly Agent is Sur- where ι is the fraction of samples discarded until the Markov rounded chain converges. There are four aspects to evaluating this expression. The Let there be a variable D in our Bayesian network. Like first three deal with the inner workings of Metropolis. First, the variables that came before it, this variable is identical to where do we start the chain? Second, how does LE t tran- the variable mentioned in Section 2. Indeed, the potency of sition to LE t+1 ? Third, how do we decide whether to ac- our Bayesian network lies in the simplicity of its conception. cept the proposed transition? Fourth, how do we evaluate This variable is also not observed in the Bayesian network P (D|LE t )? but it is possible to form a posterior probability distribution over it as well. 4.3.1 Starting the Chain 4.1.5 Conditional Probability 1: How Enemy Agent We place each of the enemy agents at a random location Locations Affect Sensor Measurements uniformly distributed over the rectangular region where en- emy agents are known to be located. Sensor measurements are stochastically determined by en- emy agent locations as presented in Section 2. 4.3.2 Proposal Distribution 4.1.6 Conditional Probability 2: How Enemy Agent One of the enemy agents is chosen at random and is moved Locations Affect the Query to a random location uniformly chosen over the same rect- angular region as before. Thus, each component of LE t+1 , The value of D is deterministic given LE1 . . . LEN . We that is, each enemy agent location, is conditionally indepen- use the off-the-shelf convex hull generation and containment dent, given LE t . This condition is required for Metropolis algorithm described in section 3 to establish this relation to work correctly. between D and LE1 . . . LEN . 4.3.3 Determining Proposal Acceptance 4.2 Problems with Exact Inference We use the Metropolis proposal acceptance criterion: Inference in Bayesian networks refers to the process of discovering posterior probabilities of unobserved variables given the values of observed variables. A family of tech- P (LE t+1 , R1 . . . RP ) α(LE t , LE t+1 ) = min(1, ). (5) niques known as clique tree methods is used for this pur- P (LE t , R1 . . . RP ) pose. Unfortunately, our Bayesian network has both continu- The proposal is accepted with probability α(LE t , LE t+1 ) ous (LE) and discrete (R and D) variables. In fact, this as given above. This is easily done by generating a ran- is the hardest class of Bayesian networks to perform infer- dom number U uniformly drawn from between 0 and 1 and ence on [10]. Moreover, the nature of the problem preempts accepting the proposal if U ≤ α(LE t , LE t+1 ). the use of multivariate Gaussians to approximate the LE We can simplify the expression variables while maintaining the rich hypotheses inference required to form accurate posteriors. Faced with such a P (LE t+1 , R1 . . . RP ) network, clique tree methods would form an immense clique (6) P (LE t , R1 . . . RP )
  • 4. substantially. We discover from the Bayesian network that 4.4 Alternate Queries the expression P (LE t , R1 . . . RP ) decomposes as follows: Up to this point, we have explored our algorithm for the specific query of determining whether a friendly agent is   N   P surrounded by enemy agents. This problem dealt with de- t P (LEi ) P (Rj |LE t ) (7) termining properties of the convex hull of a set of stochasti- i=1 j=1 cally sensable points. While we present the scenario in 2D, t it generalizes directly to higher dimensional spaces. where LEi refers to the hypothesized location of enemy We briefly note two other scenarios where we can use our agent i when Metropolis takes sample t. algorithm to discover useful geometric properties of a set Similarly, the expression P (LE t+1 , R1 . . . RP ) decomposes of points. The first scenario explores discovering both the as: extent and area of a set of stochastically sensable points. The second deals with finding a point maximally distant   N   P in expectation from all other points. These scenarios were t+1 P (LEi ) P (Rj |LE t+1 ). (8) chosen to explore different areas of computational geometry i=1 j=1 but our algorithm works on any query where small changes Thus, we can rewrite 6 as: in the input induce small changes in the output. 4.4.1 Extent and Area of a Point Set N t+1 P P (Rj |LE t+1 ) ¡ ¡ i=1 P (LEi ) j=1 The problem of finding the farthest point along a cer- N t P . (9) tain direction in a point set is more generally known as the P (Rj |LE t ) ¡ ¡ i=1 P (LEi ) j=1 extremal polytope query and can be solved efficiently by t If we recall that LEi is uniformly distributed and that Kirkpatrick’s algorithm [16]. Let us define a new variable B each of the LE terms cancel out in both the numerator and that contains both the minimal and maximal point in each denominator, this expression simplifies to: direction. We replace the last step of our algorithm, comput- ing P (D|LEt ), with computing P (B|LEt ). This conditional ¡ P t+1 probability is deterministic, as its predecessor was. The ex- j=1 P (Rj |LE ) pected value of the extents are the mean of the components P . (10) t) ¡ j=1 P (Rj |LE of the individual samples. The expected value of the area is the mean of the areas stemming from the individual sam- Our Bayesian network does not directly allow us to sim- ples and will typically not equal the area stemming from the plify this expression further but since sensors are not af- mean extents. Using only discovered points in this scenario fected, even stochastically, by enemy agents outside their will tend to underestimate the extent of the point set. sensing range, we can exploit context-specific independence to reduce this expression further. Specifically, only one of 4.4.2 Maximally Distant Point the LEi variables has changed value from step t to step t+1. The problem of finding a point that is maximally distant Only sensors for which this enemy agent is within range will from its nearest neighbor stems from the Voronoi diagram be affected by this transition. Let there be K such sensors of a point set and can be computed as described in [6]. Let and let Kk refer to the index of the k-th such sensor. us define a new variable E that uses a discretized grid to The sensor measurements that are not affected by the represent the distance of each cell to its nearest neighbor in change in LEi cancel out in the numerator and denominator, the point set. By replacing the last step with computing leaving us with: P (E|LEt ) (again deterministic), we calculate the expected distance from each cell and choose the one with the highest   K P (RKk |LE t+1 ) expected distance. Using only discovered points in this sce- . (11) nario will tend to ignore the effects of undiscovered points P (RKk |LE t ) k=1 on the solution. Computing this expression is extremely fast and is further sped up in our implementation by discretizing the rectangu- 5. EXPERIMENTAL RESULTS lar region and associating with each grid cell a list of sensor We show in this section some examples of the sort of re- measurements that affect that cell. sults the inference algorithm produces. First, we have some The conditional probabilities of the sensor measurements specific networks and show the posterior probabilities pro- given the enemy agent locations can be computed directly duced conditioned on different sets of sensor measurements. from the expressions of the conditional probability. This Second, we show how the average posterior probabilities formulation of the problem, which relies only on the con- vary as different parameters of the sensor network varies. ditional probabilities as they are stated, i.e. the forward Third, we examine how many iterations MCMC requires to model, and not on evidential reasoning make computation converge. of likelihoods and thus inference easier to implement, faster, and more accurate [17]. 5.1 Specific Inference Examples In Figure 2, we see a sensor network. Here, LF corre- 4.3.4 Computing P (D|LEt ) sponds to the friendly agent and is designated by a plus sign, This conditional probability can be computed directly from LEi corresponds to the ith enemy agent and is designated the definition as presented in 4.1.6. The locations of the en- by a circle, and LSj the jth sensor and is designated by an emy agents are given by LEt . The expression will yield a asterisk. The circle surrounding a sensor shows the extent probability that is either 0 or 1 but not in between. of its sensing range. The labels and circles for only a subset
  • 5. LE 2 LS 68 LE 3 LS 56 LS 2 LE 4 LE 0 LF + LS 36 LS 37 LS 41 LE 1 Figure 2: An Example Sensor Network of sensors are drawn to reduce clutter in the diagram. The each line passing through LF but without double counting. following table provides the posterior probabilities resulting For example, the probability of all of the enemies lying to from inference when different sets of sensors are chosen to one side of the north-south meridian is also approximately sense. None of the sensors failed during the simulation runs 9% because of the approximate symmetry of the situation. but the algorithm runs with ζ set to 0.01. We do not seek to compute the prior via this method but only to show where the prior comes from and demonstrate Posterior Probabilities for Figure 2 the difficulty of computing it. Sensors Sensing Posterior Probability We next consider the situation where sensors 56 and 68 56, 68 44 have sensed and found enemy agents but the posterior has none 51 decreased to 44%. This may seem counterintuitive that 2, 41 80 we have useful information and our posterior has dropped 41 81 but with a fixed number of enemies, finding two of them 41, 56, 68 86 in inessential locations decreases the probability of enemies 36, 37, 41, 56, 68 93 elsewhere. Specifically, the probability of enemy agents to the south and east of the friendly agent, where they were We analyze our results from a geometric perspective. It is already less likely to be, is further reduced. Contrast this important to note that our inference algorithm approaches with the situation where only sensor 41 has sensed. Finding the problem in a fundamentally different way than the dis- an enemy where it was unlikely but more potentially useful cussion that follows. increases the posterior probability substantially, in this case Let us start with the prior probability that the agent is to 81%. surrounded without any sensor measurements. In this in- Let us next consider the case where sensors 2 and 41 have stance, the entire field is 100 by 100 and the friendly agent both sensed. The posterior probability is lower than the sce- is at 58, 40. Ignoring collinearity, which is of measure 0 any- nario where only 41 has sensed. Intuitively, the sensor is al- way, the friendly agent is only surrounded if all of the enemy most collinear to the previously detected sensor and friendly agents do not lie to one side of it. Let us analyze the case agent and the discovery of this enemy is neither immensely that all the enemy agents are to the north of the friendly helpful nor it useless. It does however reduce the number of agent. The probability of any one of them being there is unknown enemies that might have been used to expand the 60% and the probability of all five of them being there is convex hull in a useful direction. approximately 8%. The next scenario we consider, where sensors 41, 56, and The alternative scenario is that each of the agents is to the 68 have sensed, is perhaps the most intuitive. We can see south. The probability of any one of them being there is 40% that the discovered enemies form a triangle that just encloses and the probability of all five of them being there is approx- the friendly agent. The sensors have less information than imately 1%. Thus the probability of not being surrounded we do though – they only know that enemy agents lie within because all the agents lie to one side of the east-west merid- their sensing radius. It may be that both enemy agents lie ian is approximately 9%. We need to extend this case to north of the sensors that have located them. If this were the
  • 6. LS 35 LE 4 LS 58 LS 36 LE 3 LS 75 LF + LS 86 LE 1 LE 2 LS 8 LS 50 LE 0 LS 46 Figure 3: Another Example Sensor Network case, the friendly agent would not longer be surrounded. Third, when there is evidence from multiple sensors sup- The last scenario, where sensors 36, 37, 41, 56, and 68 have porting the presence of an enemy agent, as in the case where sensed, shows the powerful effect of negative information. sensors 8, 35, 36, 46, 50, 58, 75, and 86 have all sensed and The addition of results from sensors 36 and 37 demonstrate each enemy agent is covered by at least two sensors, the that the enemy agent that sensor 41 found is not in the posterior probability increases substantially. sensable region of either sensor 36 and 37 and therefore must lie farther south, where it is more likely to participate in 5.2 Effects of Changing Network Parameters surrounding the friendly agent. We now move on to aggregate results of how different Another example sensor network and inference results fol- parameters of the scenario affect the posterior probability of low. In this case, none of the sensors failed during the sim- being able to detect that the friendly agent is surrounded. ulation runs but the algorithm now runs with ζ set to 0.2. It is possible to perform such analysis for any query and can We will see how this difference affects inference. provide valuable insights into how the parameters of the sensor network help or hinder in answering specific types of Posterior Probabilities for Figure 3 queries. Sensors Sensing Posterior Probability Figure 4 shows the aggregate of two sets of ground truth none 67 scenarios. The Surrounded case shows examples where the 35, 50, 58 70 agent is actually surrounded. The Not Surrounded case 8, 36, 58 74 shows examples where the agent is actually not surrounded. 8, 36, 46 75 The distinction is determined by the simulator and not through 8, 35, 50, 58 78 sensor measurement. The chart thus shows how the aver- 8, 35, 36, 46, 50, 58, 75, 86 91 age posterior of the inference algorithm across a variety of There is much to be learned from these results but we fo- scenarios evolves. The number of enemies is set to four and cus on the three essential points that were not demonstrated the sensing radius is set to 12. As we see, more informa- by the previous example. First, the prior probability of be- tion brings the posterior probability closer and closer to the ing surrounded is much higher. This is due solely to the fact ground truth. that the friendly agent is closer to the center of the region Figure 5 shows the same scenario but with the number where enemy agents are distributed. of enemies varying now. The number of sensors is set to Second, even when the sensor evidence supports the as- twenty-five and the sensing radius remains at twelve. We sertion that the friendly agent is surrounded, as in the case see that it is easier to locate enemies and believe that one is where sensors 8, 35, 50, and 58 have sensed, the posterior surrounded when there are more enemies present but that probability is not as high as one may expect. The sensor the effect is not strong. network realizes due to the relatively high value of ζ that it Figure 6 shows the same scenario but with the range of is likely that one or more of these sensor measurements are the sensor varying now. The number of sensors is set to inaccurate and considers these possibilities in its inference. twenty-five and the number of enemies is set at four. While it is easier to locate enemies with increased sensing range,
  • 7. Posterior Probability Evolution as Number of Sensors Varies Posterior Probability Evolution as Number of Enemies Varies 1 1 Surrounded Surrounded Not Surrounded Not Surrounded 0.9 0.9 0.8 0.8 0.7 0.7 Probability Surrounded Probability Surrounded 0.6 0.6 0.5 0.5 0.4 0.4 0.3 0.3 0.2 0.2 0.1 0.1 0 0 0 5 10 15 20 25 30 35 40 45 50 3 4 5 6 7 8 9 10 11 Number of Sensors Number of Enemies Figure 4: Posterior Probability Evolution as Num- Figure 5: Posterior Probability Evolution as Num- ber of Sensors Varies ber of Enemies Varies the uncertainity associated with such information makes this information to the friendly agents to determine where en- information less useful. These contrary forces appear to bal- emies may or may not be. These sensors will be subject ance out and varying the sensor range appears to neither to noise as they will sometimes miss a robot going by or help nor hurt inference. mistake a person walking by for an enemy robot. 5.3 MCMC Convergence 6.2 Intelligent Sensor Selection Although MCMC converges in the limit to the exact pos- While our algorithm can effectively leverage very limited terior, it is useful to know how many iterations are required information, to run on an autonomous sensor network, it in practice to obtain accurate results. It is unfortunately needs to be paired with a decentralized algorithm that runs only feasible to compute analytical bounds for MCMC al- on each sensor node that decides when to sense and also gorithms for only the simplest distributions though and the when and what type of information to communicate to its conditions required by most bounding theorems are not sat- neighbors. isfied by general distributions. However, it often suffices in Such an algorithm will need to make tradeoffs between practice to answer this question heuristically through ex- energy conservation and answering queries more effectively. perimentation [15]. Figure 7 shows the standard deviation One way to address this problem is a cost-based scheme of the MCMC posterior estimate errors before convergence that casts utility and battery depletion into a single cur- occurs. Regardless of the number of queries, we find that rency [3]. While the cost of sensing, asking another node the posterior estimate errors level off after approximately to sense, or sending information to another node will fall 15,000 steps of MCMC, a computation that takes only a directly out of a coherent energy-aware sensor model, the few hundred milliseconds to perform. value of such information is intricately tied to the inference algorithm. Specifically, to prevent bias, the value of sensing 6. FUTURE WORK should be the expected reduction in entropy stemming from While we feel that we have made significant progress with the acquisition of this information. the methods presented in this paper, there are two major Constructing such an algorithm requires answering several directions we plan to proceed with this work. First, we want difficult questions. First, how can sensors with only partial to use this algorithm with a set of actual sensors. Second, we world state display utility-maximizing emergent behavior? want the sensor network to guide its own sensing activities Second, what is the tradeoff between expending more energy and direct sensing in such a way to minimize both sensing and acquiring a more accurate answer? Third, how can and communication cost. We discuss these ideas in turn. sensors distribute work in a way that takes into account the effects of remaining power at the granularity of individual 6.1 Moving from Simulation to Actual Sensors nodes? We plan to use this algorithm to help mobile robots play- ing laser tag with one another [14] determine where their enemy agents may or may not be. For example, if a robot 7. CONCLUSION knew that a certain corridor was most likely free of enemy We have presented an inference algorithm for a sensor agents, it could focus its searching efforts elsewhere. network to effectively answer generalized queries. Using Specifically, we plan to use vibration sensors that will de- probabilistic techniques, the algorithm is able to answer tect the absence or presence of enemy robots and route this these queries effectively with only partial information over
  • 8. Posterior Probability Convergence as MCMC Progresses Posterior Probability Evolution as Sensor Range Varies 0.055 0.9 0 queries Surrounded 2 queries Not Surrounded 0.05 4 queries 6 queries 0.8 8 queries 0.045 Standard Deviation of Posterior Probability 0.7 0.04 0.6 Probability Surrounded 0.035 0.5 0.03 0.4 0.025 0.3 0.02 0.2 0.015 0.1 0.01 0 0.005 0 5 10 15 20 25 0 5 10 15 20 25 Sensing Range Number of MCMC Steps, in Thousands Figure 6: Posterior Probability Evolution as Sensing Figure 7: Posterior Probability Convergence as Range Varies MCMC Converges the world state. Moreover, it can reason coherently despite [6] L. Guibas, and J. Stolfi. “Primitives for the the stochastic nature of the sensor data. The results shown manipulation of general subdivisions and the demonstrate that the inference algorithm is highly accurate computation of Voronio diagrams”. ACM Trans. in the scenario presented and is able to handle a wide range Graph. 4, 74-123. 1985. of types of sensor information. [7] L. Guibas. “Sensing, Tracking and Reasoning with Relations”. IEEE Signal Processing Magazine. Volume: 19 Issue: 2, March 2002. 8. ACKNOWLEDGEMENTS [8] J. Hershberger and S. Suri. “Convex Hulls and Related This research is supported in part by DARPA’s MARS Problems in Data Streams”. Proc. ACM Program (Contract number N66001-01-C-6018), DARPA’s SIGMOD/PODS Workshop on Management and Sensor Information Technology Program (Contract number Processing of Data Stream, 2003. F30602-00-C-0139), the National Science Foundation (CA- [9] J. Kahn, R. Katz, and K. Pister. “Mobile networking REER grant number IIS-9876136 and regular grant number for smart dust” Proc. ACM/IEEE Intl. Conf. on IIS-9877033), Intel Corporation, the Stanford Networking Mobile Computing and Networking, 1999. Research Center, ONR MURI grant N00014-02-1-0720, and [10] U. Lerner. Hybrid Bayesian Networks for Reasoning a grant from the DARPA Software for Distributed Robotics about Complex Systems Ph.D. Thesis, Stanford program under a subcontract from SRI, all of which is grate- University, October 2002. fully acknowledged. The views and conclusions contained in [11] J. Pearl. Probabilistic Reasoning in Intelligent this document are those of the authors and should not be Systems: Networks of Plausible Inference. Morgan interpreted as necessarily representing official policies or en- Kaufmann Publishers, Inc., 1988. dorsements, either expressed or implied, of the United States Government or any of the sponsoring institutions. [12] G. Pottie and W. Kaiser. “Wireless integrated network sensors.” Proc. ACM Communications, 2000. [13] T. Richardson. “Approximation of planar convex sets 9. REFERENCES from hyperplane probes”. Proc. Discrete and [1] G. Battista, R. Tamassia, and I. Tollis. Constrained Computational Geometry, 1997. Visibility Representation of Graphics Proc. IPL, 1992. [14] M. Rosencrantz, G. Gordon, and S. Thrun. “Locating [2] M. Berg, M. Kreveld, M. Overmars, and moving entities in indoor environments with teams of O. Schwarzkopf. Computational Geometry: Algorithms mobile robots.” Proc. AAMAS-2003. and Applications. Springer-Verlag, 2000. [15] J. Rosenthal. “Minorization Conditions and [3] J. Byers and G. Nasser. “Utility-based decision-making Convergence Rates for Markov Chain Monte Carlo.” in wireless sensor networks” Proc. IEEE MobiHOC Proc. Journal of the American Statistical Association, Symposium, 2000 1995. [4] T. Cover and J. Thomas Elements of Information [16] J. Rourke Computational Geometry in C Cambridge Theory John Wiley and Sons, Inc., 1991. University Press, 1998. [5] W. Gilks, S. Richardson, and D. Spiegelhalter. Markov [17] S. Thrun. “Learning occupancy grids with forward Chain Monte Carlo in Practice. Chapman and Hall / models”. IEEE Conference on Intelligent Robots and CRC, 1996. Systems. 2001.