Kohonen Neural Networks for Interval-valued Data Clustering

Chantal Hajjar & Hani Hamdan

Chantal Hajjar and Hani Hamdan, SUPELEC, Department of Signal Processing and Electronic Systems, France (ChantalHajjar@Supelec.fr, HaniHamdan@Supelec.fr)

International Journal of Advanced Computer Science, Vol. 2, No. 11, pp. 412-419, Nov., 2012.

Manuscript
Received: 28 Jan. 2012
Revised: 6 Jun. 2012
Accepted: 3 Aug. 2012
Published: 15 Dec. 2012

Keywords: Self-Organizing Maps, Clustering, Interval-valued data, Meteorological stations data.


Abstract. Kohonen neural networks have been widely used as multidimensional unsupervised classifiers. The aim of this paper is to develop a Kohonen network for interval data. Due to the increasing use of such data in Data Mining, many clustering methods for interval data have been proposed in the last decade. In this paper, we propose an algorithm to train the Kohonen network to cluster interval data while preserving the topology of the data: similar data vectors are allocated to the same neuron or to neighboring neurons on the Kohonen network. We use an extension of the Euclidean distance to compare two vectors of intervals. To show the usefulness of our approach, we apply the proposed algorithm to real interval data from Chinese and Lebanese meteorological stations.


1. Introduction
In real-world applications, data may not be formatted as single values, but may instead be represented by lists, intervals, distributions, etc. This type of data is called symbolic data.
Interval data are a kind of symbolic data that typically
reflect the variability and uncertainty in the observed
measurements. Many data analysis tools have already been extended to handle interval data in a natural way: principal component analysis (see for example [1]), factor analysis [2], regression analysis [3], multilayer perceptron [4], etc.
Within the clustering framework, several authors presented
clustering algorithms for interval data. Chavent and
Lechevallier [5] proposed a dynamic clustering algorithm
for interval data where the prototypes are elements of the
representation space of objects to classify, that is to say
vectors whose components are intervals. In this approach,
prototypes are defined by the optimization of an adequacy
criterion based on Hausdorff distance [6], [7]. Bock [8]
constructed a self-organizing map (SOM) based on the
vertex-type distance for visualizing interval data. Hamdan
and Govaert developed a theory on mixture model-based
clustering for interval data. In this context, they proposed
two interval data-based maximum likelihood approaches:
the mixture approach [9], [10], [11] and the classification
approach [12], [13]. In the mixture approach, a partition of
interval data can be directly derived from the interval
data-based maximum likelihood estimates of the mixture
model parameters by assigning each individual (i.e. a vector
of intervals) to the component which provides the greatest
conditional probability that this individual arises from it. In
the classification approach, a partition is derived by
maximizing the interval data-based likelihood over the
mixture model parameters and over the identifying labels
for mixture component origin of each vector of intervals.
Chavent [14] presented an algorithm similar to that presented in [5] by providing the Hausdorff distance between two vectors of intervals. De Souza and De
Carvalho [15] proposed two dynamic clustering methods for
interval data. The first method uses an extension for interval
data of the city-block distance. The second method is
adaptive and has been proposed in two variants. In the first
variant, the adaptive distance has a single component,
whereas it has two components in the second variant. De
Souza et al. [16] proposed two dynamic clustering methods,
based on Mahalanobis distance, for interval data. In both
methods, the prototypes are defined by optimizing an
adequacy criterion based on an extension for interval data of
the Mahalanobis distance. In the first method, the used
distance is adaptive and common to all classes, and
prototypes are vectors of intervals. In the second method,
each class has its own adaptive distance, and the prototype
of each class is then composed of an interval vector and an
adaptive distance. El Golli et al. [17] proposed an
adaptation of the self-organizing map to interval-valued
dissimilarity data by implementing the SOM algorithm on
interval-valued dissimilarity measures rather than on
individuals-variables interval data. De Carvalho et al. [18]
applied the dynamic clustering algorithm based on the L2 distance to interval data and presented three standardization
techniques for interval-type variables.
Recently, Hajjar and Hamdan proposed a self-organizing map for individuals-variables interval data using the following distances: the L2 distance [19], [20], the Hausdorff distance [21] and the city-block distance [22].
In this paper, we propose a Kohonen network based on an extension of the L2 distance for interval data. In Section 2, we define Kohonen networks and their training algorithms. In Section 3, we present the Kohonen network algorithm for interval data. In Section 4, we show the results of applying our approach to real interval data observed at Chinese and Lebanese meteorological stations. Finally, in Section 5, we give our conclusion.
2. Kohonen Networks
A Kohonen network, also called a self-organizing map (SOM), is a kind of artificial neural network trained using unsupervised learning to map high-dimensional data into a low-dimensional space. The concept of the self-organizing map was developed by Kohonen in 1984 [23], [24].
A. Architecture
From an architectural point of view, a SOM consists of
a grid (usually one or two-dimensional) made of
components called neurons. Each neuron is related to a
prototype vector that represents a group of similar data
vectors in the input space.
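
For concreteness, here is a minimal NumPy sketch of this structure (names and sizes are our own illustrative choices, not the paper's): a two-dimensional grid of K neurons, each holding one prototype row of a matrix W.

```python
import numpy as np

lines, cols, p = 4, 4, 12   # illustrative grid dimensions and input dimension
K = lines * cols            # number of neurons
# Grid coordinates (row, column) of each neuron, indexed 0..K-1.
grid = np.array([(r, c) for r in range(lines) for c in range(cols)])
# One prototype vector per neuron, stored as a row of W (random init, for illustration).
W = np.random.rand(K, p)
```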

Fig. 1. Mapping the data space to the self-organizing map.
In Figure 1, the prototype w_k represents an area of the data space. It is related to the neuron k of the grid, positioned at the second row of the third column.
In a self-organizing map, the prototype vectors provide
a discrete representation of the input space. They are
positioned in a way that they retain the topological
properties of the input space. In this paper, we use the self-organizing map for clustering, where each neuron represents a class, while preserving the topology of the data.


Fig. 2. SOM prototypes update.
B. Learning Algorithms
After initializing the prototype vectors, the
self-organizing map is usually trained using either
incremental or batch training.
In incremental training, each iteration consists of presenting a randomly chosen input vector to the network and computing its Euclidean distance to all prototype vectors. The neuron whose prototype vector is closest to the input vector is called the Best Matching Unit (BMU). If we denote by c the BMU of the input vector x, and by w_c the prototype vector of this BMU, then c is defined as the neuron for which:

\[
d(\mathbf{x}, \mathbf{w}_c) = \min_{k=1,\ldots,K} d(\mathbf{x}, \mathbf{w}_k) \qquad (1)
\]

where d is the Euclidean distance.
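
As an illustrative sketch of the BMU search (the helper name and array layout are ours, not the paper's):

```python
import numpy as np

def best_matching_unit(x, W):
    """Return the index of the neuron whose prototype is closest to x.

    x: input vector, shape (p,); W: prototype matrix, shape (K, p).
    """
    distances = np.linalg.norm(W - x, axis=1)  # Euclidean distance to each prototype
    return int(np.argmin(distances))
```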
After finding the BMU, the prototypes of the BMU
and of the neurons belonging to a certain neighborhood
around it are updated to get closer to the input vector. The
neighborhood size is broad at the beginning of the training
and then shrinks with time. A neighborhood function is
required to determine the closeness between neurons. The closer a neuron is to the BMU, the more its prototype is pulled toward the input vector (see Figure 2). In this paper, we use the Gaussian function as the neighborhood function.

\[
h_{ck}(\sigma(t)) = \exp\!\left(-\frac{d^2(\mathbf{r}_c, \mathbf{r}_k)}{2\sigma^2(t)}\right) = \exp\!\left(-\frac{\lVert \mathbf{r}_c - \mathbf{r}_k \rVert^2}{2\sigma^2(t)}\right) \qquad (2)
\]
where r_c and r_k are respectively the locations of neuron c and neuron k on the grid, and σ(t) is the neighborhood size at time t, which also defines the width of the Gaussian function. Figure 3 represents a Gaussian neighborhood function with a neighborhood size equal to 2.


Fig. 3. Gaussian neighborhood function.
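
Continuing the NumPy sketch above, a small version of this neighborhood function, with grid locations as integer (row, column) coordinates (names are ours):

```python
def gaussian_neighborhood(r_c, r_k, sigma):
    """Gaussian neighborhood (Equation (2)) between the BMU at grid
    location r_c and the neuron at grid location r_k, for size sigma."""
    sq_dist = np.sum((np.asarray(r_c) - np.asarray(r_k)) ** 2)
    return np.exp(-sq_dist / (2.0 * sigma ** 2))
```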

Equation (3) shows the updating of the prototype vectors at iteration t + 1:

\[
\mathbf{w}_k(t+1) = \mathbf{w}_k(t) + \alpha(t)\, h_{ck}(\sigma(t))\, \big(\mathbf{x}(t) - \mathbf{w}_k(t)\big), \qquad k = 1,\ldots,K \qquad (3)
\]

where x(t) is the input vector, h_{ck}(σ(t)) is the neighborhood function, c is the BMU of x, σ(t) is the neighborhood size and α(t) is the learning rate. The learning rate takes large values at the beginning of the training and decreases with time.
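
A hedged sketch of one incremental step combining Equations (1)-(3), reusing the helpers above (function name is ours):

```python
def incremental_step(x, W, grid, sigma, alpha):
    """One incremental SOM update (Equation (3)): pull every prototype
    toward x, weighted by its Gaussian neighborhood to the BMU of x."""
    c = best_matching_unit(x, W)
    for k in range(W.shape[0]):
        h = gaussian_neighborhood(grid[c], grid[k], sigma)
        W[k] += alpha * h * (x - W[k])
    return W
```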
In the batch algorithm, the prototype vectors are updated after presenting the whole data set to the map, according to Equation (4). Each prototype vector is replaced by a weighted mean over the data vectors, where the weights are the neighborhood function values:

\[
\mathbf{w}_k(t+1) = \frac{\sum_{i=1}^{n} h_{kc(i)}(\sigma(t))\, \mathbf{x}_i}{\sum_{i=1}^{n} h_{kc(i)}(\sigma(t))}, \qquad k = 1,\ldots,K \qquad (4)
\]

where h_{kc(i)}(σ(t)) is the neighborhood function between the neuron k and the BMU c(i) of the input vector x_i, σ(t) is the neighborhood size, and c(i) is the neuron whose prototype vector is closest to x_i in terms of Euclidean distance. All the input vectors associated with the same BMU have the same value of the neighborhood function.
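
A hedged sketch of one batch pass implementing Equation (4), reusing the helpers above (names are ours):

```python
def batch_update(X, W, grid, sigma):
    """One batch SOM pass (Equation (4)): each prototype becomes the
    neighborhood-weighted mean of all data vectors."""
    n = X.shape[0]
    bmus = [best_matching_unit(X[i], W) for i in range(n)]
    for k in range(W.shape[0]):
        h = np.array([gaussian_neighborhood(grid[b], grid[k], sigma) for b in bmus])
        W[k] = (h[:, None] * X).sum(axis=0) / h.sum()
    return W
```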
In the last few iterations of the algorithm, when the neighborhood size tends to zero, the neighborhood function h_{kc(i)}(σ(t)) equals 1 only if k = c(i) (k is the BMU of the input vector x_i) and 0 otherwise. The input data set is then clustered into K classes. The center of each class C_k is the neuron k whose prototype vector w_k is a mean of the data vectors belonging to that class. This implies that the updating formula of Equation (4) will minimize, at convergence of the algorithm, the L2 distance clustering criterion:

\[
G = \sum_{k=1}^{K} \sum_{\mathbf{x}_i \in C_k} d(\mathbf{x}_i, \mathbf{w}_k) \qquad (5)
\]
In addition, using the values of the neighborhood
function as weights in the weighted mean defined in
Equation (4) will preserve the topology of the map. We
notice that this method is similar to the dynamical clustering
method (in this case K-means) with the advantage that
clusters that are close to each other are mapped to
neighboring neurons on the map grid.
3. Kohonen Networks for Interval Data

Let R = {R_1, ..., R_n} be a set of n symbolic data objects described by p interval variables. Each object R_i is represented by a vector of intervals

\[
\mathbf{R}_i = ([a_i^1, b_i^1], \ldots, [a_i^p, b_i^p])^T
\]

where [a_i^j, b_i^j] \in I = \{[a, b] : a, b \in \mathbb{R},\ a \le b\}.
A. L2 distance between two vectors of intervals

The distance between two vectors of intervals R_i = ([a_i^1, b_i^1], ..., [a_i^p, b_i^p])^T and R_{i'} = ([a_{i'}^1, b_{i'}^1], ..., [a_{i'}^p, b_{i'}^p])^T is defined by the Minkowski-like distance of type L2 given by:

\[
d(\mathbf{R}_i, \mathbf{R}_{i'}) = \sum_{j=1}^{p} \left[(a_i^j - a_{i'}^j)^2 + (b_i^j - b_{i'}^j)^2\right] \qquad (6)
\]
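
A minimal sketch of this distance, representing each vector of intervals by two NumPy arrays of lower and upper bounds (a representation we assume for illustration, not the paper's notation):

```python
def interval_l2_distance(a1, b1, a2, b2):
    """L2-type distance of Equation (6) between two interval vectors.

    a1, b1 (resp. a2, b2): lower/upper bounds of the first (resp. second)
    vector of intervals, each an array of shape (p,).
    """
    return np.sum((a1 - a2) ** 2 + (b1 - b2) ** 2)
```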
B. Learning Algorithm for Interval Data
The set R is used to train a self-organizing rectangular map of K neurons. Each neuron k of the map has an associated p-dimensional prototype consisting of a vector of intervals

\[
\mathbf{w}_k = ([u_k^1, v_k^1], \ldots, [u_k^p, v_k^p])^T.
\]
The proposed algorithm is based on the batch training
algorithm explained in Section 2.B. The L2 distance defined
in Equation (6) is used to measure the proximity between
two vectors of intervals. The Gaussian neighborhood
function presented in Equation (2) is used to determine the
closeness between neurons.
The algorithm is listed as follows:

1. Initialization: t = 0
- Choose the map dimensions (lines, cols). The number of neurons is K = lines × cols.
- Choose the initial value σ_i and the final value σ_f of the neighborhood size.
- Choose the total number of iterations (totalIter).
- Take the first K input vectors as the initial prototype vectors.

2. Allocation:
- For i = 1 to n, compute the Best Matching Unit c(i) of the input vector R_i = ([a_i^1, b_i^1], ..., [a_i^p, b_i^p])^T. c(i) is the neuron whose prototype vector is closest to the data vector R_i in terms of the L2 distance:

\[
d(\mathbf{R}_i, \mathbf{w}_{c(i)}) = \min_{k=1,\ldots,K} d(\mathbf{R}_i, \mathbf{w}_k) \qquad (7)
\]

where d is the L2 distance defined in Equation (6).
- For k = 1 to K, compute the values of the neighborhood function h_{kc(i)}(σ(t)), i = 1, ..., n.

3. Training:
- Update the prototype vectors of the map. For k = 1 to K, compute the prototype vectors as follows:

\[
u_k^j(t+1) = \frac{\sum_{i=1}^{n} h_{kc(i)}(\sigma(t))\, a_i^j}{\sum_{i=1}^{n} h_{kc(i)}(\sigma(t))}, \qquad j = 1,\ldots,p
\]

\[
v_k^j(t+1) = \frac{\sum_{i=1}^{n} h_{kc(i)}(\sigma(t))\, b_i^j}{\sum_{i=1}^{n} h_{kc(i)}(\sigma(t))}, \qquad j = 1,\ldots,p
\]

4. Increment t and reduce the neighborhood size σ(t) according to Equation (8). Repeat from step 2 until t reaches the maximum number of iterations (totalIter).

\[
\sigma(t) = \sigma_i + \frac{t}{totalIter}\,(\sigma_f - \sigma_i) \qquad (8)
\]
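
Putting Steps 1-4 together, the following is a hedged end-to-end sketch of the batch training loop for interval data, reusing the interval_l2_distance and gaussian_neighborhood helpers sketched above (all names are ours; A and B hold the lower and upper bounds of the n data vectors):

```python
def train_interval_som(A, B, lines, cols, sigma_i=3.0, sigma_f=0.1, total_iter=200):
    """Batch-train a rectangular SOM on interval data (Section 3.B).

    A, B: (n, p) arrays with the lower/upper bounds of each interval.
    Returns the prototype bounds U, V of shape (K, p) and the final BMUs.
    """
    A = np.asarray(A, dtype=float)
    B = np.asarray(B, dtype=float)
    n, p = A.shape
    K = lines * cols
    grid = np.array([(r, c) for r in range(lines) for c in range(cols)])
    # Step 1: initialize prototypes with the first K input vectors.
    U, V = A[:K].copy(), B[:K].copy()
    for t in range(total_iter):
        sigma = sigma_i + (t / total_iter) * (sigma_f - sigma_i)  # Equation (8)
        # Step 2: allocation - BMU of each data vector under the L2 interval distance.
        bmus = [min(range(K), key=lambda k: interval_l2_distance(A[i], B[i], U[k], V[k]))
                for i in range(n)]
        # Step 3: training - neighborhood-weighted means of the interval bounds.
        for k in range(K):
            h = np.array([gaussian_neighborhood(grid[b], grid[k], sigma) for b in bmus])
            denom = h.sum()
            if denom > 0:  # guard against numerical underflow for empty neurons
                U[k] = (h[:, None] * A).sum(axis=0) / denom
                V[k] = (h[:, None] * B).sum(axis=0) / denom
    return U, V, bmus
```

As the section explains, when σ(t) approaches σ_f in the last iterations, each data vector effectively contributes only to its own BMU, which yields the K-cluster partition described in Section 2.B.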
4. Experimental Results
Two data sets are used for experiments. The first one
consists of Chinese interval meteorological data. The
second one consists of Lebanese interval meteorological
data.
A. Chinese Meteorological Data
The Chinese data set concerns monthly minimal and
maximal temperatures observed in 60 meteorological
stations mounted all over China. These data are provided by
the Institute of Atmospheric Physics of the Chinese
Academy of Sciences in Beijing, China and can be
downloaded at http://dss.ucar.edu/datasets/ds578.5/
Table 1 shows an example of the interval data set. The
lower bound and the upper bound of each interval are
respectively the monthly minimal and maximal temperature
recorded by a station for the year 1988.
TABLE 1
CHINESE STATIONS MINIMAL AND MAXIMAL MONTHLY TEMPERATURES


Num Station    January       February        ...  December
1   AnQing     [1.8, 7.1]    [2.1, 7.2]      ...  [4.3, 11.8]
...
30  NenJiang   [-27.9, -16]  [-27.7, -12.9]  ...  [-26.1, -13.8]
...
60  ZhiJiang   [2.7, 8.4]    [2.7, 8.7]      ...  [5.1, 13.3]

The data set consists of n = 60 vectors of intervals, each of dimension p = 12. The self-organizing map is a square grid composed of K = 16 neurons. The training of the map is performed according to the algorithm described in Section 3.B. The initial neighborhood size is σ_i = 3 and the final neighborhood size is σ_f = 0.1. The total number

of iterations is totalIter = 200. The initial prototype vectors are equal to the first K input vectors.
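
Under the sketch above, this setup would correspond to a call such as the following (A_china and B_china are hypothetical 60 × 12 arrays of the monthly minimal and maximal temperatures):

```python
U, V, clusters = train_interval_som(A_china, B_china, lines=4, cols=4,
                                    sigma_i=3.0, sigma_f=0.1, total_iter=200)
# clusters[i] is the index of the neuron (cluster) to which station i is assigned.
```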

Fig. 4. PCA projection of the prototype vectors and the data for the Chinese data set (plot legend: Data, Prototypes, Prototype Centers).

1) Visualization of the map and the data in a two-dimensional subspace: In order to visualize the map and the data, we used interval principal component analysis (method of centers) [1], [19] to project the prototype vectors and the data vectors on a subspace spanned by the two eigenvectors of the data with the greatest eigenvalues. Figure 4 shows the results of the PCA projection of the data vectors and the prototype vectors connected with their centers. We notice a good degree of map deployment over the data and a good degree of map topology preservation.
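
As a rough reading of the method of centers, one could project the interval midpoints with ordinary PCA and map the prototypes with the same loadings; a hedged sketch (our own simplification, not the interval PCA of [1], [19]):

```python
def project_centers(A, B, U, V):
    """Project data and prototypes on the first two principal components
    of the interval midpoints (a simplified 'method of centers')."""
    centers = (A + B) / 2.0                  # midpoints of the data intervals
    mean = centers.mean(axis=0)
    _, _, Vt = np.linalg.svd(centers - mean, full_matrices=False)
    axes = Vt[:2].T                          # two leading principal directions
    data_2d = (centers - mean) @ axes
    proto_2d = ((U + V) / 2.0 - mean) @ axes
    return data_2d, proto_2d
```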
2) Clustering results and interpretation: The trained
network leads us to a partition of 16 clusters.


Fig. 5. SOM grid and stations clustering results for the Chinese data set.
Figure 5 shows the distribution of the data vectors over the 16 neurons. Figure 6 represents the map of China containing the 60 stations. All stations of the same cluster are drawn with the same colour. We can conclude that stations located geographically near each other or situated at approximately the same latitude tend to be assigned to the same cluster or to a neighbouring cluster. The first neuron of the grid (first row, first column) contains stations installed in the warmest regions of China (south coast). The last neuron of the grid (fourth row, fourth column) contains stations installed in the coldest regions of China. We can conclude that colder regions are found as we move down and to the right on the SOM grid. Appendix 1 contains the list of the 60 stations.


Fig. 6. Cluster distribution on the geographical map of China.
B. Lebanese Meteorological Data
The second data set concerns the monthly averages of daily minimal and daily maximal temperatures observed at 42 meteorological stations spread all over Lebanon for the year 2010. These stations are installed and managed by the Lebanese Agricultural Research Institute (LARI).
Table 2 shows an example of the interval data set. The
lower bound and the upper bound of each interval are
respectively the monthly minimal and maximal temperature
recorded by a station for the year 2010.
TABLE 2
LEBANESE STATIONS MINIMAL AND MAXIMAL MONTHLY TEMPERATURES


Num Station     January       February      ...  December
1   Tal Amara   [2.3, 13.5]   [2.3, 14.2]   ...  [2.1, 16.0]
...
21  Akoura      [4.0, 11.0]   [3.7, 11.0]   ...  [6.0, 13.2]
...
42  Aabdeh      [11.6, 20.3]  [11, 20.6]    ...  [13.2, 23.6]

The data set consists of n = 42 vectors of intervals, each of dimension p = 12. The self-organizing map is a rectangular grid (lines = 3, cols = 4) composed of K = 12 neurons. The training of the map is performed according to the algorithm described in Section 3.B. The initial neighborhood size is σ_i = 3 and the final neighborhood size is σ_f = 0.1. The total number of iterations is totalIter = 200. The initial prototype vectors are equal to the first K input vectors.
1) Visualization of the map and the data in a two-dimensional subspace: Figure 7 shows the results of the interval PCA projection of the data vectors and the prototype vectors connected with their centers. We notice a good degree of map deployment over the data and a good degree of map topology preservation.
Fig. 7. PCA projection of the prototype vectors and the data for the Lebanese data set (plot legend: Data, Prototypes, Prototype Centers).
2) Clustering results and interpretation: The trained network leads us to a partition of 12 clusters. Figure 8 shows the distribution of the data vectors over the 12 neurons or clusters. Figure 9 represents the map of Lebanon containing the 42 stations. All stations of the same cluster are connected together with the same colour. We notice that stations located geographically near each other or installed in regions having a similar climate tend to be assigned to the same neuron or to a neighbouring neuron on the SOM grid. The first neuron of the grid (first row, first column) contains stations installed in the coldest regions of Lebanon. The neurons located at the third row of the first and second columns of the grid contain stations mounted in the Bekaa plain. Stations of the coastal regions, having the highest temperatures, are allocated to the last neuron of the grid (third row, fourth column). We can conclude that warmer regions are found as we move down and to the right on the SOM grid. Appendix 2 contains the list of the 42 stations.


Fig. 8. SOM grid and stations clustering results for the Lebanese data set.

Fig. 9. Cluster distribution on the geographical map of Lebanon.

5. Conclusion
We proposed an algorithm to train the self-organizing
map for interval data. We used the L2 distance to compare two vectors of intervals. We applied our method to real
interval temperature data provided by Chinese and
Lebanese meteorological stations. We obtained good
clustering results while preserving the topology of the data.
Stations installed in regions with similar climate were
allocated to the same neuron or to a neighbor neuron on the
SOM grid.
A prospect of this work might be to develop a self-organizing map for interval-valued data with adaptive distances, which would allow the recognition of clusters of different shapes and sizes.
References

[1] P. Cazes, A. Chouakria, E. Diday & Y. Schektman, "Extension de l'analyse en composantes principales à des données de type intervalle", (1997) Revue de Statistique Appliquée, vol. 45, no. 3, pp. 5-24.

[2] A. Chouakria, "Extension des méthodes d'analyse factorielle à des données de type intervalle", (1998) Ph.D. dissertation, Université Paris 9 Dauphine.

[3] L. Billard & E. Diday, "Regression analysis for interval-valued data", (2000) Data Analysis, Classification, and Related Methods, Proc. IFCS'2000, Namur, Belgium, H. Kiers, P. Groenen, J.-P. Rasson & M. Schader, Eds. Springer Verlag, 11-14 July.

[4] F. Rossi & B. Conan-Guez, "Multi-layer perceptron on interval data", (2002) Classification, Clustering, and Data Analysis, K. Jajuga, A. Sokolowski & H. H. Bock, Eds. Berlin, Germany: Springer, pp. 427-436.

[5] M. Chavent & Y. Lechevallier, "Dynamical clustering of interval data: Optimization of an adequacy criterion based on Hausdorff distance", (2002) Classification, Clustering and Data Analysis, K. Jajuga, A. Sokolowski & H. H. Bock, Eds. Berlin, Germany: Springer, pp. 53-60, also in the proceedings of IFCS'2002, Poland.

[6] M. Barnsley, (1993) Fractals Everywhere, Second Edition. Academic Press.

[7] M. Chavent, "Analyse des données symboliques. Une méthode divisive de classification", (1997) Ph.D. dissertation, Université de Paris-IX Dauphine.

[8] H. H. Bock, "Clustering methods and Kohonen maps for symbolic data", (2003) Journal of the Japanese Society of Computational Statistics, vol. 15, no. 2, pp. 217-229.

[9] H. Hamdan & G. Govaert, "Classification de données de type intervalle via l'algorithme EM", (2003) XXXVèmes Journées de Statistique, SFdS, Lyon, France, 2-6 June, pp. 549-552.

[10] H. Hamdan & G. Govaert, "Modélisation et classification de données imprécises", (2003) CIMNA'03, premier congrès international sur les modélisations numériques appliquées, Beyrouth, Liban, 14-15 November, pp. 16-19.

[11] H. Hamdan & G. Govaert, "Mixture model clustering of uncertain data", (2005) IEEE International Conference on Fuzzy Systems, Reno, Nevada, USA, 22-25 May, pp. 879-884.

[12] H. Hamdan & G. Govaert, "CEM algorithm for imprecise data. Application to flaw diagnosis using acoustic emission", (2004) IEEE International Conference on Systems, Man and Cybernetics, The Hague, The Netherlands, 10-13 October, pp. 4774-4779.

[13] H. Hamdan & G. Govaert, "Int-EM-CEM algorithm for imprecise data. Comparison with the CEM algorithm using Monte Carlo simulations", (2004) IEEE International Conference on Cybernetics and Intelligent Systems, Singapore, 1-3 December, pp. 410-415.

[14] M. Chavent, "An Hausdorff distance between hyper-rectangles for clustering interval data", (2004) Classification, Clustering and Data Mining Applications, D. Banks, L. House, F. R. McMorris, P. Arabie & W. Gaul, Eds. Springer, pp. 333-340.

[15] R. M. C. R. De Souza & F. A. T. De Carvalho, "Clustering of interval data based on city-block distances", (2004) Pattern Recognition Letters, vol. 25, no. 3, pp. 353-365.

[16] R. M. C. R. De Souza, F. A. T. De Carvalho, C. P. Tenório & Y. Lechevallier, "Dynamic cluster methods for interval data based on Mahalanobis distances", (2004) Proceedings of the 9th Conference of the International Federation of Classification Societies. Chicago, USA: Springer-Verlag, pp. 351-360.

[17] A. El Golli, B. Conan-Guez & F. Rossi, "Self-organizing maps and symbolic data", (2004) JSDA Electronic Journal of Symbolic Data Analysis, vol. 2, no. 1.

[18] F. A. T. De Carvalho, P. Brito & H. H. Bock, "Dynamic clustering for interval data based on L2 distance", (2006) Computational Statistics, vol. 21, no. 2, pp. 231-250.

[19] C. Hajjar & H. Hamdan, "Self-organizing map based on L2 distance for interval-valued data", (2011) IEEE International Symposium on Applied Computational Intelligence and Informatics, Timisoara, Romania, 19-21 May, pp. 317-322.

[20] H. Hamdan & C. Hajjar, "A neural networks approach to interval-valued data clustering. Application to Lebanese meteorological stations data", (2011) IEEE Workshop on Signal Processing Systems, Beirut, Lebanon, 4-7 October, pp. 373-378.

[21] C. Hajjar & H. Hamdan, "Self-organizing map based on Hausdorff distance for interval-valued data", (2011) IEEE International Conference on Systems, Man, and Cybernetics, Anchorage, Alaska, 9-12 October, pp. 1747-1752.

[22] C. Hajjar & H. Hamdan, "Self-organizing map based on city-block distance for interval-valued data", (2011) Complex Systems Design & Management, Paris, France, 7-9 December, pp. 281-292.

[23] T. Kohonen, (1984) Self-Organization and Associative Memory, Second Edition. Springer-Verlag.

[24] T. Kohonen, (2001) Self-Organizing Maps, Third Edition. Springer.

Appendix 1 List of Chinese Stations

Table 3 lists the 60 Chinese meteorological stations.

TABLE 3
CHINESE METEOROLOGICAL STATIONS.

Num Station Num Station
1 AnQing 31 QingDao
2 BaoDing 32 QingJiang
3 BeiJing 33 QiQiHaEr
4 BoKeTu 34 QuZhou
5 ChangChun 35 ShangHai
6 ChangSha 36 ShanTou
7 ChengDu 37 ShenYang
8 ChongQing 38 TaiYuan
9 DaLian 39 TengChong
10 FuZhou 40 TianJin
11 GuangZhou 41 TianShui
12 GuiYang 42 WenZhou
13 HaErBin 43 WuHan
14 HaiKou 44 WuLuMuQi
15 Hailaer 45 WuZhou
16 HaMi 46 XiaMen
17 HangZhou 47 XiAn
18 HanZhong 48 XiChang
19 HuHeHaoTe 49 XiNing
20 JiNan 50 XuZhou
21 Jiu Quan 51 YanTai
22 KunMing 52 Yi Ning
23 LanZhou 53 YiChang
24 LaSa 54 YinChuan
25 LiuZhou 55 YongAn
26 MuDanJiang 56 YuLin
27 NanChang 57 ZhangYe
28 NanJing 58 ZhanJiang
29 NanNing 59 ZhengZhou
30 NenJiang 60 ZhiJiang

Appendix 2 List of Lebanese Stations

Table 4 lists the 42 Lebanese meteorological stations.

TABLE 4
LEBANESE METEOROLOGICAL STATIONS.

Num Station Num Station
1 Tel Amara 22 Baakline
2 Fanar 23 Btedii
3 Ghazir 24 Byr Hassan
4 Hawch Ammik 25 Derdgheiya
5 Kaa 26 Ehden
6 Kfarchakhna 27 Fnaydek
7 Lebaa 28 Hermil
8 Nabatyeh 29 Hoch el Omara
9 Shheem 30 Bar Elias
10 Rachaya el Fakhar 31 Jabouleh
11 Rmeich 32 Kherbit Kanafar
12 Saida 33 Machghara
13 Tyr 34 Markaba
14 Sir el Donieh 35 Mayrouba
15 Talia 36 Mchaytiyeh
16 Tarchich 37 Mymis
17 Watahoub 38 Terbol
18 Aamatour 39 Kfarhata
19 Aarsal 40 Kferden
20 Ain el Abou 41 Kheyem
21 Akoura 42 Aabdeh
