
Adaptive Localization in a Dynamic WiFi Environment Through Multi-view Learning∗

Sinno Jialin Pan, James T. Kwok, Qiang Yang, and Jeffrey Junfeng Pan
Department of Computer Science and Engineering
Hong Kong University of Science and Technology, Hong Kong
{sinnopan,jamesk,qyang,panjf}@cse.ust.hk

Abstract

Accurately locating users in a wireless environment is an important task for many pervasive computing and AI applications, such as activity recognition. In a WiFi environment, a mobile device can be localized using signals received from various transmitters, such as access points (APs). Most localization approaches build a map between the signal space and the physical location space in an offline phase, and then use this received-signal-strength (RSS) map to estimate locations in an online phase. However, the map can become outdated when the signal-strength values change with time due to environmental dynamics, and it is infeasible or expensive to repeat the data calibration needed to reconstruct it. In such a case, it is important to adapt the model learnt in one time period to another time period without too much re-calibration. In this paper, we present a location-estimation approach based on Manifold Co-Regularization, a machine learning technique for building a mapping function between data. We describe LeManCoR, a system for adapting the mapping function between the signal space and the physical location space over different time periods based on Manifold Co-Regularization. We show that LeManCoR can effectively transfer knowledge between two time periods without requiring too much new calibration effort, and we illustrate its effectiveness in a real 802.11 WiFi environment.

Introduction

Localizing users or mobile nodes in wireless networks using received-signal-strength (RSS) values has attracted much attention in several research communities, especially for activity recognition in AI. In recent years, different statistical machine learning methods have been applied to the localization problem (Nguyen, Jordan, & Sinopoli 2005; Ferris, Haehnel, & Fox 2006; Pan et al. 2006; Ferris, Fox, & Lawrence 2007).

However, many previous localization methods assume that the RSS map between the signal space and the physical location space is static, so that an RSS map learnt in one time period can be applied to location estimation in later time periods directly, without adaptation. A complex indoor WiFi environment, however, is dynamic in nature, owing to unpredictable movements of people, radio interference and signal propagation. Thus, the distributions of RSS values in the training and application periods may be significantly different. For example, Figure 1 shows RSS distributions at the same location in two time periods collected in our indoor environment. As a result, location estimation based on a static radio map may be grossly inaccurate.

However, collecting RSS values together with their locations is a very expensive process. Thus, it is important to transfer as much knowledge as possible from an early time period to later time periods; if we can do this effectively, we can reduce the need to obtain new labeled data. In this paper we present LeManCoR, a localization approach that adapts a previously learned RSS mapping function to later time periods while requiring only a small amount of new calibration data. Our approach extends the Manifold Co-Regularization of (Sindhwani, Niyogi, & Belkin 2005), a recently developed technique for semi-supervised learning with multiple views. We consider the RSS values received in different time periods at the same location as multiple views of this location. The intuition behind our approach is that, although the signal data in the multiple views are multi-dimensional and differently distributed, they should share a common underlying low-dimensional manifold structure, which can be interpreted as the physical location space. This geometric property is the intrinsic knowledge of the localization problem in a WiFi environment.

Figure 1: Variations of signal-strength histograms in two time periods at the same location from one access point. (a) Time Period 1; (b) Time Period 2. (Histogram panels omitted; axes are Signal Strength Values (unit: dB) vs. Frequency.)

In this paper, we extend the Manifold Co-Regularization framework to location estimation in a dynamic WiFi environment. We empirically demonstrate that this framework is effective in reducing the calibration effort when adapting the learnt mapping function between time periods. This paper contributes to machine learning by extending the Manifold Co-Regularization framework to a more general case, in which the numbers of data in different views can be different and only a subset of the data have pairwise correspondence. We also contribute to location estimation in pervasive computing by developing a dynamic localization approach that adapts RSS mapping functions effectively.

∗Sinno J. Pan, Qiang Yang and Jeffrey J. Pan are supported by NEC China Lab (NECLC05/06.EG01).
Copyright © 2007, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
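The failure mode of a static radio map can be sketched numerically. The following is an illustrative sketch only, not the paper's data: the location names (room_A, room_B) and all dB values are invented. RSS readings from one AP at a single location drift by a few dB between two time periods, and a nearest-fingerprint map calibrated in the first period then mislocates second-period readings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic RSS readings (dBm) from one AP at the same physical location,
# sampled in two time periods. The ~6 dB mean shift mimics the kind of
# drift shown in Figure 1 (all numbers here are invented).
period1 = rng.normal(loc=-62.0, scale=2.0, size=200)
period2 = rng.normal(loc=-68.0, scale=2.5, size=200)

shift = period1.mean() - period2.mean()
print(f"mean RSS shift between periods: {shift:.1f} dB")

# A static radio map calibrated in period 1: mean RSS per location.
radio_map = {"room_A": -62.0, "room_B": -68.0}

# Nearest-mean matching mislocates the period-2 readings taken in room_A,
# because their distribution has drifted toward room_B's fingerprint.
estimate = min(radio_map, key=lambda loc: abs(radio_map[loc] - period2.mean()))
print("static-map estimate for room_A signals:", estimate)  # -> room_B
```

This is exactly the situation the paper targets: the map itself is not wrong, but its calibration has gone stale.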
Related Work

Existing approaches to RSS-based localization fall into two main categories. The first uses radio propagation models, which rely on knowledge of the access point locations (Bahl, Balachandran, & Padmanabhan 2000; LaMarca et al. 2005). In recent years, statistical machine learning methods have been applied to the localization problem: (Ferris, Haehnel, & Fox 2006) apply Gaussian process models and (Nguyen, Jordan, & Sinopoli 2005) apply kernel methods for location estimation. To reduce the calibration effort, (Ferris, Fox, & Lawrence 2007) apply Gaussian Process Latent Variable Models (GP-LVMs) to construct the RSS mapping function in an unsupervised framework. (Pan et al. 2006) show how to apply manifold regularization to mobile-node tracking in sensor networks to reduce the calibration effort.

However, few previous works consider dynamic environments, with some exceptions. The LEASE system (Krishnan et al. 2004) uses additional hardware to solve this problem: it employs a number of stationary emitters and sniffers to obtain up-to-date RSS values for updating the maps. The localization accuracy can only be guaranteed when this additional hardware is deployed at high density. To reduce the need for additional equipment, (Yin, Yang, & Ni 2005) apply a model-tree based method, called LEMT, to adapt RSS maps using only a few reference points, which are additional sensors for recording RSS values over time. LEMT needs to build a model tree at each location to capture the global relationship between the RSS values received at various locations and those received at the reference points. Another related work is (Haeberlen et al. 2004), which adapts a static RSS map by calibrating new RSS samples at a few known locations and fitting a linear function between these values and the corresponding values from the RSS map.

Problem Statement

Consider a two-dimensional localization problem in a dynamic indoor environment.¹ A location can be represented by ℓ = (x, y). Assume that there are m access points (APs) in the indoor environment, which periodically send out wireless signals. The locations of the APs are not necessarily known. A mobile node can measure the RSS sent by the m APs, so each signal vector can be represented by s = (s_1, s_2, ..., s_m)^T ∈ R^m. The signal data of known locations are called labeled data, while those of unknown locations are called unlabeled data. We also assume there are l reference points placed at various locations for obtaining real-time RSS values over different time periods. There are several time periods in each day, {t_i}_{i=1}^N, where t_1 is the offline time period, in which we can collect many labeled and unlabeled data, while the others are online time periods, in which we can only collect a few labeled and unlabeled data. For each online time period t_j, j ≥ 2, the inputs are: (1) l_1 labeled data {(s_i^{(1)}, ℓ_i)}_{i=1}^{l_1} and u_1 unlabeled data {s_i^{(1)}}_{i=l_1+1}^{l_1+u_1} collected in time period t_1; (2) a mapping function f^{(1)}, which maps signal vectors to locations, learnt in time period t_1; and (3) l labeled data {(s_i^{(j)}, ℓ_i)}_{i=1}^{l} and u_j unlabeled data {s_i^{(j)}}_{i=l+1}^{l+u_j} collected in time period t_j, where ℓ_i is the location of the i-th reference point. Our objective is to adapt the mapping function f^{(1)} to a new mapping function f^{(j)} for each online time period t_j.

¹Extension to the three-dimensional case is straightforward.

Overall Approach

Our LeManCoR approach makes the following assumptions:

1. Physical locations → RSS values: physical locations that are close to each other should receive similar RSS values.
2. RSS values → physical locations: the physical locations corresponding to similar RSS values should be close to each other.
3. RSS values can vary greatly over different time periods, while within the same time period they change little.

The first two assumptions have been shown to be mostly true for indoor localization in wireless and sensor networks (Pan et al. 2006; Ferris, Fox, & Lawrence 2007), over different time periods. For example, in Figure 2, RSS S_A should be more similar to RSS S_B than to RSS S_C in the old signal space, and RSS S'_A should be more similar to RSS S'_B than to RSS S'_C in the new signal space, since location A is closer to location B than to location C in the physical location space. The pair {S_A, S'_A} is called an RSS corresponding pair. The third assumption is also often true in the real world.

Figure 2: Correlation between the location space and two different signal spaces: locations A, B, C in the 2-dimensional location space map to signal vectors S_A, S_B, S_C in the old m-dimensional signal space and S'_A, S'_B, S'_C in the new one. (Diagram omitted; in the original figure only signals from two access points are shown.)

We now explain the motivation behind LeManCoR. For each online time period t_j, where j ≥ 2, the mapping function f^{(1)} learnt in time period t_1 may be grossly inaccurate in t_j. Furthermore, in the new time period t_j we only have l labeled data collected from reference points and u_j unlabeled data, where l ≪ l_1 and u_j ≪ u_1. This means that in an online period t_j, we only collect a tiny amount of new data for the purpose of adapting f^{(1)} to t_j. These new data are far from sufficient to rebuild a mapping function f^{(j)} with high accuracy. Thus, neither f^{(1)} nor f^{(j)}, learnt independently, can solve the dynamic localization problem. Our motivation is that, since RSS values from different time periods are multiple views of locations, we can learn a new function that takes both sets of data into account by optimizing a pair of functions over the multiple views. This adapted mapping function f'^{(j)} can then be used for location estimation in time period t_j.
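The geometry assumed above can be checked on a toy instance. This sketch uses invented numbers (m = 2 APs; three locations A, B, C, with A physically closer to B than to C): it verifies that the neighbour ordering of assumption 2 holds in both signal spaces, and shows how reference-point readings yield the RSS corresponding pairs.

```python
import numpy as np

# Toy version of the Problem Statement setup: signal vectors s in R^m
# for three locations, observed in the old view (t1) and a new view (tj).
# All dB values are invented for illustration.
old_view = {"A": np.array([-60.0, -70.0]),   # S_A  (time period t1)
            "B": np.array([-62.0, -68.0]),   # S_B
            "C": np.array([-75.0, -55.0])}   # S_C
new_view = {"A": np.array([-66.0, -74.0]),   # S'_A (time period tj)
            "B": np.array([-67.5, -72.0]),   # S'_B
            "C": np.array([-80.0, -60.0])}   # S'_C

def closer_to_B(view):
    """Assumption 2: similar RSS <-> close locations, so within each view
    the reading at A should be nearer to B's reading than to C's."""
    d_ab = np.linalg.norm(view["A"] - view["B"])
    d_ac = np.linalg.norm(view["A"] - view["C"])
    return bool(d_ab < d_ac)

print(closer_to_B(old_view), closer_to_B(new_view))  # -> True True

# Reference points observed in both periods give the corresponding
# pairs {S_i, S'_i} that tie the two views together.
corresponding_pairs = [(old_view[k], new_view[k]) for k in ("A", "B", "C")]
```

Even though every raw value drifted between the views, the relative geometry survives; this shared structure is what the multi-view formulation exploits.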
From Figure 2, we can see that this pair of functions in multiple views should agree on the same location for each RSS corresponding pair. Thus, the question is how to learn pairs of mapping functions subject to the constraints mentioned above. In the following, we show how to solve this problem under our extended Manifold Co-Regularization framework.

Manifold Co-Regularization

Before introducing our extended Manifold Co-Regularization approach to the dynamic localization problem, we first give a brief introduction to Manifold Regularization and its extension to multi-view learning, Manifold Co-Regularization.

Manifold Regularization

The standard regularization framework (Schölkopf & Smola 2002) for supervised learning solves the following minimization problem:

    f^* = \arg\min_{f \in \mathcal{H}_K} \frac{1}{l} \sum_{i=1}^{l} V(x_i, y_i, f) + \gamma \|f\|_K^2,    (1)

where \mathcal{H}_K is a Reproducing Kernel Hilbert Space (RKHS) of functions with kernel function K, \{(x_i, y_i)\}_{i=1}^{l} is the labeled training data drawn from a probability distribution P, V is the loss function (such as the squared loss for Regularized Least Squares or the hinge loss for Support Vector Machines), and \|f\|_K^2 is a penalty term that reflects the complexity of the model. (Belkin, Niyogi, & Sindhwani 2005) extended this framework to manifold regularization by incorporating additional information about the geometric structure of the marginal distribution P_X of P. To achieve that, they added an additional regularizer:

    f^* = \arg\min_{f \in \mathcal{H}_K} \frac{1}{l} \sum_{i=1}^{l} V(x_i, y_i, f) + \gamma_A \|f\|_K^2 + \gamma_I \|f\|_I^2,    (2)

where \|f\|_I^2 is an appropriate penalty term that reflects the intrinsic structure of P_X. Here \gamma_A controls the complexity of the function in the ambient space while \gamma_I controls its complexity in the intrinsic geometry of P_X. In (Belkin, Niyogi, & Sindhwani 2005), the third term in Equation (2) is approximated on the basis of labeled and unlabeled data using the graph Laplacian associated with the data. Thus, given a set of l labeled data \{(x_i, y_i)\}_{i=1}^{l} and a set of unlabeled data \{x_j\}_{j=l+1}^{l+u}, the optimization problem can be reformulated as:

    f^* = \arg\min_{f \in \mathcal{H}_K} \frac{1}{l} \sum_{i=1}^{l} V(x_i, y_i, f) + \gamma_A \|f\|_K^2 + \frac{\gamma_I}{(u+l)^2} \mathbf{f}^T L \mathbf{f},

where \mathbf{f} = [f(x_1), ..., f(x_{l+u})]^T, and L is the graph Laplacian given by L = D − W, where W_{ij} are the edge weights in the data adjacency graph and D is a diagonal matrix with D_{ii} = \sum_{j=1}^{l+u} W_{ij}. Since the localization problem is a regression problem, in the rest of the paper we focus only on the squared loss function V(x_i, y_i, f) = (y_i − f(x_i))^2.

Manifold Co-Regularization

(Sindhwani, Niyogi, & Belkin 2005) extended the manifold regularization framework to multi-view learning, giving Manifold Co-Regularization. In this framework, we attempt to learn a function pair that correctly classifies the labeled examples and is smooth with respect to the similarity structures in both views. These structures may be encoded as graphs on which regularization operators may be defined and then combined to form a multi-view regularizer.

(Sindhwani, Niyogi, & Belkin 2005) construct a multi-view regularizer by taking a convex combination L = (1 − α)L_1 + αL_2, where α ≥ 0 is a parameter that controls the influence of the two views, and L_s (s = 1, 2) is the graph Laplacian matrix of each view. Thus, learning the pair f = (f^{(1)*}, f^{(2)*}) is equivalent to solving the following optimization problem for s = 1, 2:

    f^{(s)*} = \arg\min_{f^{(s)} \in \mathcal{H}_{K_s}} \frac{1}{l} \sum_{i=1}^{l} \big[y_i - f^{(s)}(x_i^{(s)})\big]^2 + \gamma_A^{(s)} \|f^{(s)}\|_{\mathcal{H}_{K_s}}^2 + \gamma_I^{(s)} \mathbf{f}^{(s)T} L \mathbf{f}^{(s)},    (3)

where \mathbf{f}^{(s)} denotes the vector (f^{(s)}(x_1^{(s)}), ..., f^{(s)}(x_{l+u}^{(s)}))^T, and the regularization parameters \gamma_A^{(s)}, \gamma_I^{(s)} control the influence of the unlabeled data relative to the RKHS norm. The resulting algorithm is termed Co-Laplacian RLS.

Our Extension to Manifold Co-Regularization

So far, we have reviewed how to learn a pair of classifiers under the Manifold Co-Regularization framework. However, the standard approach requires all examples (both labeled and unlabeled) to be corresponding pairs; that is, the numbers of labeled and of unlabeled data in the two views must each be the same. In many cases, however, the number of examples in each view can differ, as in our indoor dynamic localization problem, and only a subset of the data form corresponding pairs. For example, in our dynamic localization problem, we can collect more signal data in time period t_1 (the offline phase), including l_1 labeled data and u_1 unlabeled data, while in time period t_j we can only collect a few labeled data l from reference points and a few unlabeled data u_j, where l ≪ l_1 and u_j ≪ u_1. To cope with this problem, we propose an extended Manifold Co-Regularization approach for regression.

We assume that there are l_1 + u_1 data in the first view, where l_1 are labeled and u_1 are unlabeled, and l_2 + u_2 data in the second view, where l_2 are labeled and u_2 are unlabeled. l_1 and u_1 can differ from l_2 and u_2, respectively. Furthermore, there are l corresponding pairs between the two views, where l ≤ min(l_1, l_2).
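The extension that follows leads to a pair of coupled linear systems in α and β (Equations (6)-(9) of this section), solved by block elimination. That algebra can be sanity-checked on a generic system of the same shape; the matrices below are random, well-conditioned stand-ins, not the paper's kernel and Laplacian quantities.

```python
import numpy as np

rng = np.random.default_rng(1)
n1, n2 = 5, 4
# Generic stand-ins for the blocks A1, B1, A2, B2 of Eqs. (6)-(9);
# diagonally dominant so every inverse below exists.
A1 = np.eye(n1) * 3 + rng.normal(size=(n1, n1)) * 0.1
B2 = np.eye(n2) * 3 + rng.normal(size=(n2, n2)) * 0.1
B1 = rng.normal(size=(n1, n2)) * 0.1
A2 = rng.normal(size=(n2, n1)) * 0.1
c1 = rng.normal(size=n1)   # plays the role of (mu/l1) Y1
c2 = rng.normal(size=n2)   # plays the role of (1/l2) Y2

# Coupled stationarity conditions:  A1 a - B1 b = c1,  -A2 a + B2 b = c2.
# Eliminating b gives the Schur-complement solve of Eq. (8);
# back-substitution recovers b as in Eq. (9).
a = np.linalg.solve(A1 - B1 @ np.linalg.solve(B2, A2),
                    c1 + B1 @ np.linalg.solve(B2, c2))
b = np.linalg.solve(B2, c2 + A2 @ a)

# The block solution satisfies the original coupled system.
print(np.allclose(A1 @ a - B1 @ b, c1), np.allclose(-A2 @ a + B2 @ b, c2))  # -> True True
```

The same elimination applied to the actual blocks defined after Equation (9) yields the eManCoR solution.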
Instead of Equation (3), we use the following optimization form to learn a pair of mapping functions f = (f^{(1)*}, f^{(2)*}):

    (f^{(1)*}, f^{(2)*}) = \arg\min_{f^{(1)} \in \mathcal{H}_{K_1},\, f^{(2)} \in \mathcal{H}_{K_2}} \Big\{ \frac{\mu}{l_1} \sum_{i=1}^{l_1} V(x_i^{(1)}, y_i^{(1)}, f^{(1)}) + \gamma_A^{(1)} \|f^{(1)}\|_{\mathcal{H}_{K_1}}^2 + \gamma_I^{(1)} \|f^{(1)}\|_I^2
        + \frac{1}{l_2} \sum_{i=1}^{l_2} V(x_i^{(2)}, y_i^{(2)}, f^{(2)}) + \gamma_A^{(2)} \|f^{(2)}\|_{\mathcal{H}_{K_2}}^2 + \gamma_I^{(2)} \|f^{(2)}\|_I^2
        + \frac{\gamma_I}{l} \sum_{i=1}^{l} \big(f^{(1)}(x_i^{(1)}) - f^{(2)}(x_i^{(2)})\big)^2 \Big\}.    (4)

In Equation (4), the first three terms make the mapping function f^{(1)} fit the labeled examples and be smooth both in the function space and in the intrinsic geometry; the next three terms do the same for f^{(2)}. The last term makes f^{(1)} and f^{(2)} agree on the labels (in our case, locations) of the corresponding pairs. μ is a parameter that balances data fitting in the two views, and \gamma_I is a parameter that regularizes the pair of mapping functions.

As mentioned above, we use the graph Laplacian to approximate the term \|f\|_I. Thus the terms \gamma_I^{(1)} \|f^{(1)}\|_I^2 and \gamma_I^{(2)} \|f^{(2)}\|_I^2 in Equation (4) can be replaced by \frac{\gamma_I^{(1)}}{l_1+u_1} \mathbf{f}^{(1)T} L_1 \mathbf{f}^{(1)} and \frac{\gamma_I^{(2)}}{l_2+u_2} \mathbf{f}^{(2)T} L_2 \mathbf{f}^{(2)}, respectively.

Using f^{(1)*} = \sum_{i=1}^{l_1+u_1} \alpha_i^* K_1(x^{(1)}, x_i^{(1)}) and f^{(2)*} = \sum_{i=1}^{l_2+u_2} \beta_i^* K_2(x^{(2)}, x_i^{(2)}) to substitute for f^{(1)} and f^{(2)} in Equation (4), we arrive at the following objective in the (l_1+u_1)-dimensional variable \alpha = [\alpha_1, ..., \alpha_{l_1+u_1}] and the (l_2+u_2)-dimensional variable \beta = [\beta_1, ..., \beta_{l_2+u_2}]:

    \alpha^*, \beta^* = \arg\min_{\alpha \in \mathbb{R}^{l_1+u_1},\, \beta \in \mathbb{R}^{l_2+u_2}} \Big\{ \frac{\mu}{l_1} (Y_1 - J_1 K_1 \alpha)^T (Y_1 - J_1 K_1 \alpha) + \gamma_A^{(1)} \alpha^T K_1 \alpha + \frac{\gamma_I^{(1)}}{l_1+u_1} \alpha^T K_1 L_1 K_1 \alpha
        + \frac{1}{l_2} (Y_2 - J_2 K_2 \beta)^T (Y_2 - J_2 K_2 \beta) + \gamma_A^{(2)} \beta^T K_2 \beta + \frac{\gamma_I^{(2)}}{l_2+u_2} \beta^T K_2 L_2 K_2 \beta
        + \frac{\gamma_I}{l} (J_1' K_1 \alpha - J_2' K_2 \beta)^T (J_1' K_1 \alpha - J_2' K_2 \beta) \Big\},    (5)

where K_s is the kernel matrix over the labeled and unlabeled examples in view s; Y_s is an (l_s+u_s)-dimensional label vector given by Y_s = [y_1, ..., y_{l_s}, 0, ..., 0], in which the first l_s entries correspond to the labeled examples and the rest to the unlabeled ones (in our case, Y_s is a vector of locations, one such vector per coordinate); J_s is an (l_s+u_s) × (l_s+u_s) diagonal matrix with J_s(i,i) = 1 for the labeled examples i = 1, ..., l_s and 0 otherwise; and J_s' is an l × (l_s+u_s) matrix with J_s'(i,i) = 1 for i = 1, ..., l and all other entries 0, where l is the number of corresponding pairs. In the above, s = 1 and 2, respectively.

The derivatives of the objective function vanish at the pair of minimizers (\alpha, \beta):

    \Big( \frac{\mu}{l_1} J_1 K_1 + \gamma_A^{(1)} I_1 + \frac{\gamma_I}{l} J_1'^T J_1' K_1 + \frac{\gamma_I^{(1)}}{l_1+u_1} L_1 K_1 \Big) \alpha - \frac{\gamma_I}{l} J_1'^T J_2' K_2 \beta = \frac{\mu}{l_1} Y_1,    (6)

    \Big( \frac{1}{l_2} J_2 K_2 + \gamma_A^{(2)} I_2 + \frac{\gamma_I}{l} J_2'^T J_2' K_2 + \frac{\gamma_I^{(2)}}{l_2+u_2} L_2 K_2 \Big) \beta - \frac{\gamma_I}{l} J_2'^T J_1' K_1 \alpha = \frac{1}{l_2} Y_2,    (7)

which leads to the following solution:

    \alpha^* = (A_1 - B_1 B_2^{-1} A_2)^{-1} \Big( \frac{\mu}{l_1} Y_1 + \frac{1}{l_2} B_1 B_2^{-1} Y_2 \Big),    (8)

    \beta^* = (B_2 - A_2 A_1^{-1} B_1)^{-1} \Big( \frac{\mu}{l_1} A_2 A_1^{-1} Y_1 + \frac{1}{l_2} Y_2 \Big),    (9)

where

    A_1 = \frac{\mu}{l_1} J_1 K_1 + \gamma_A^{(1)} I_1 + \frac{\gamma_I}{l} J_1'^T J_1' K_1 + \frac{\gamma_I^{(1)}}{l_1+u_1} L_1 K_1;    B_1 = \frac{\gamma_I}{l} J_1'^T J_2' K_2;
    A_2 = \frac{\gamma_I}{l} J_2'^T J_1' K_1;    B_2 = \frac{1}{l_2} J_2 K_2 + \gamma_A^{(2)} I_2 + \frac{\gamma_I}{l} J_2'^T J_2' K_2 + \frac{\gamma_I^{(2)}}{l_2+u_2} L_2 K_2.

We call the resulting algorithm extended Co-Manifold RLS (eManCoR). eManCoR extends Co-LapRLS to the situation in which the number of training data differs across views and only a subset of the data have pairwise correspondence among the views. We can therefore apply eManCoR to learn a pair of mapping functions in the dynamic localization problem. The localization version of eManCoR is called LeManCoR.

The LeManCoR Algorithm

Our dynamic location-estimation algorithm LeManCoR, based on our extension of Manifold Co-Regularization above, has three phases: an offline training phase, an online adaptation phase, and an online localization phase.

• Offline Training Phase
1. Collect l_1 labeled signal-location pairs {(s_i^{(1)}, ℓ_i^{(1)})}_{i=1}^{l_1} at various locations (the first l pairs are collected from reference points) and u_1 unlabeled signal examples {s_j^{(1)}}_{j=l_1+1}^{l_1+u_1}.
2. For each signal vector s_i^{(1)} ∈ S^{(1)}, build an edge to its k nearest neighbors under the Euclidean distance. The resulting adjacency graph and weight matrix form the graph Laplacian L_1.
3. Apply LapRLS to learn the mapping functions f_x^{(1)} and f_y^{(1)} for the x and y coordinates, respectively.

• Online Adaptation Phase
1. For each time period t_j, j ≥ 1: if j = 1, the mapping functions f_x^{(1)} and f_y^{(1)} can be applied to estimate locations directly, i.e., f_x'^{(j)} = f_x^{(1)} and f_y'^{(j)} = f_y^{(1)}. If j > 1, randomly collect a few unlabeled data u_j around the environment and obtain l labeled data {(s_i^{(j)}, ℓ_i)}_{i=1}^{l} from the reference points, where ℓ_i is the location of the i-th reference point. Each pair {(s_i^{(1)}, s_i^{(j)})}, i = 1, ..., l, is then a corresponding pair.
2. Apply eManCoR to learn the pairs of mapping functions {f_x^{*(1)}, f_x^{*(j)}} and {f_y^{*(1)}, f_y^{*(j)}}.
We use f_x^{*(1)} and f_y^{*(1)} as the new mapping functions in time period t_j instead of f_x^{*(j)} and f_y^{*(j)}, because the data in t_j are insufficient to build a strong estimator; the learning process of f_x^{*(j)} and f_y^{*(j)} serves to adapt the parameters of f_x^{*(1)} and f_y^{*(1)} with the new data in t_j. Thus f_x'^{(j)} = f_x^{*(1)} and f_y'^{(j)} = f_y^{*(1)}.

• Online Localization Phase
1. For each time period t_j and each signal vector s̃ collected by a mobile node, ℓ̃ = (x̃, ỹ) = (f_x'^{(j)}(s̃), f_y'^{(j)}(s̃)) is the location estimate.
2. Optionally, we can apply a Bayes filter (Fox et al. 2003; Hu & Evans 2004) to smooth the trajectory and enhance the localization performance.

Experimental Results

We evaluate the performance of LeManCoR in an office environment in an academic building of about 30 × 40 m², over six time periods: 12:30am-1:30am, 08:30am-09:30am, 12:30pm-1:30pm, 04:30pm-05:30pm, 08:30pm-09:30pm and 10:30pm-11:30pm. The distributions of RSS values in three different time periods are shown in Figure 3. For every time period, we collect a total of 1635 WiFi examples in 81 grid cells, each about 1.5 × 1.5 m²; we thus obtained six data sets of WiFi signal data. We installed several USB wireless network adapters on various PCs around the environment; these adapters act as reference points that collect labeled signal data over the different time periods. We treat the midnight period 12:30am-1:30am as the offline training phase, in which we randomly choose 70% of the 1635 examples as training data, of which only 30% have labels (i.e., their corresponding locations are known). The other five periods are treated as online phases; for each of them, we randomly choose 20% of the examples as the unlabeled data used in the adaptation phase.

Figure 3: Variations of signal-strength histograms over different time periods at the same location. (a) 12:30-1:30 (am); (b) 8:30-9:30 (am); (c) 12:30-1:30 (pm). (Histogram panels omitted; axes are Signal Strength Values (unit: dB) vs. Frequency.)

We use a Gaussian kernel exp(−‖x_i − x_j‖²/(2σ²)) for Equations (6) and (7), which is widely used in localization problems for accommodating the noisy characteristics of radio signals (Nguyen, Jordan, & Sinopoli 2005); σ is set to 0.5. For γ_A^{(1)}, γ_A^{(2)}, γ_I^{(1)}, γ_I^{(2)}, μ and γ_I in Equations (6) and (7), we set γ_A^{(1)} l_1/(l_1+u_1) = γ_A^{(2)} l_2/(l_2+u_2) = 0.05 and γ_I^{(1)} l_1 = γ_I^{(2)} l_2 = 0.045, with μ = 1 and γ_I l/l_1 = 0.3, where l = l_2 in our case.

Dynamic Location Estimation Accuracy

To evaluate the adaptability of LeManCoR, we compare it with a static mapping method, LeMan (Pan et al. 2006). Since LeMan is also based on a manifold setting, the comparison shows the adaptability of LeManCoR clearly. We first show the comparison results in Figure 4, where the number of reference points is fixed at 10; the impact of different numbers of reference points on LeManCoR is shown in Figure 5. In Figure 4, LeManCoR denotes our model, while LeMan denotes the static mapping function learnt in time period 12:30am-1:30am by LapRLS. LeMan2 denotes, for each online time period, a mapping function reconstructed by LapRLS from the up-to-date readings of the reference points and a few new unlabeled data. LeMan3 denotes a mapping function constructed from the RSS data collected in 12:30am-1:30am, the up-to-date readings of the reference points, and a few unlabeled data. Figure 4(a) shows that when the training and test data are from the same time period, 12:30am-1:30am, LeMan achieves high location-estimation accuracy. However, when this static mapping function is applied to different time periods, the accuracy drops quickly; see Figures 4(b)-4(f). The curves of LeMan2 and LeMan3 show that several labeled and a few unlabeled data alone are far from sufficient to build a new estimator. On the contrary, our LeManCoR system can adapt the mapping function effectively with only 10 reference points and 300 unlabeled examples in the new time period.

Robustness to the Number of Reference Points

In this experiment, we study the impact of the number of reference points on LeManCoR. We vary the number of reference points from 5 to 20, and compare the accuracy at an error distance of 3.0m. The result is shown in Figure 5: LeManCoR is robust to the number of reference points.

Figure 5: Varying the number of reference points in 12:30pm-1:30pm, where accuracy is the cumulative probability at an error distance of 3m. (Plot omitted; the compared methods are LeManCoR, LeMan, LeMan2 and LeMan3.)

Due to space limitations, several other issues are not discussed. For example, we also varied the number of unlabeled examples in the online time periods, and observed that LeManCoR is likewise robust to the number of unlabeled data used in the online adaptation phase. We also tested the running time of LeManCoR on a Pentium 4 PC: for each time period, learning an adapted mapping function from 800 old signal examples and 300 new signal examples took only 10 seconds.
Conclusions and Future Work

In this paper, we describe LeManCoR, a dynamic localization approach that transfers geometric knowledge of WiFi data across different time periods through a modified manifold co-regularization. Our model is based on the observations that, within each time period, similar signal strengths imply close locations, and that pairwise signals from different time periods should correspond to the same locations. The mapping function between the signal space and the physical location space is adapted dynamically using several labeled data from reference points and a few unlabeled data in the new time period. Experimental results show that our model can adapt the static mapping function effectively and is robust to the number of reference points. In the future, we will extend LeManCoR to continuous time instead of discrete time periods.

Figure 4: Comparison of accuracy (cumulative probability vs. error distance) over different time periods when the number of reference points is 10. (a) 12:30am-1:30am; (b) 8:30am-9:30am; (c) 12:30pm-1:30pm; (d) 4:30pm-5:30pm; (e) 8:30pm-9:30pm; (f) 10:30pm-11:30pm. (Plot panels omitted; the compared methods are LeManCoR, LeMan, LeMan2 and LeMan3.)

References

Bahl, P.; Balachandran, A.; and Padmanabhan, V. 2000. Enhancements to the RADAR user location and tracking system. Technical report, Microsoft Research.
Belkin, M.; Niyogi, P.; and Sindhwani, V. 2005. On Manifold Regularization. In Proceedings of AISTATS 2005.
Ferris, B.; Fox, D.; and Lawrence, N. 2007. WiFi-SLAM Using Gaussian Process Latent Variable Models. In Proceedings of IJCAI 2007.
Ferris, B.; Haehnel, D.; and Fox, D. 2006. Gaussian Processes for Signal Strength-Based Location Estimation. In Proceedings of Robotics: Science and Systems 2006.
Fox, D.; Hightower, J.; Liao, L.; Schulz, D.; and Borriello, G. 2003. Bayesian filtering for location estimation. IEEE Pervasive Computing 2(3):24–33.
Haeberlen, A.; Flannery, E.; Ladd, A. M.; Rudys, A.; Wallach, D. S.; and Kavraki, L. E. 2004. Practical Robust Localization over Large-Scale 802.11 Wireless Networks. In Proceedings of MobiCom 2004.
Hu, L., and Evans, D. 2004. Localization for mobile sensor networks. In Proceedings of MobiCom 2004.
Krishnan, P.; Krishnakumar, A. S.; Ju, W.-H.; Mallows, C.; and Ganu, S. 2004. A System for LEASE: Location Estimation Assisted by Stationary Emitters for Indoor RF Wireless Networks. In Proceedings of INFOCOM 2004.
LaMarca, A.; Hightower, J.; Smith, I.; and Consolvo, S. 2005. Self-Mapping in 802.11 Location Systems. In Proceedings of Ubicomp 2005.
Nguyen, X.; Jordan, M. I.; and Sinopoli, B. 2005. A kernel-based learning approach to ad hoc sensor network localization. ACM Transactions on Sensor Networks 1(1):134–152.
Pan, J. J.; Yang, Q.; Chang, H.; and Yeung, D.-Y. 2006. A Manifold Regularization Approach to Calibration Reduction for Sensor-Network Based Tracking. In Proceedings of AAAI 2006.
Schölkopf, B., and Smola, A., eds. 2002. Learning with Kernels: Support Vector Machines, Regularization, Optimization and Beyond. Cambridge, MA: MIT Press.
Sindhwani, V.; Niyogi, P.; and Belkin, M. 2005. A Co-Regularization Approach to Semi-supervised Learning with Multiple Views. In Workshop on Learning with Multiple Views at ICML 2005.
Yin, J.; Yang, Q.; and Ni, L. 2005. Adaptive temporal radio maps for indoor location estimation. In Proceedings of PerCom 2005.