
Combining Shape Moments Features for Improving the Retrieval Performance

Mohamed Eisa, Computer Science Department, Port Said University
Amira Eletrebi, Computer Science Department, Mansoura University
Ebrahim Elhenawy, Computer Science Department, Zagazig University


Abstract: Content-Based Image Retrieval (CBIR) is a fast-growing technology concerned with the use of computers to retrieve images from digital libraries. CBIR has recently become a hot topic, and its techniques have developed considerably. In the medical field, the objective of CBIR is to allow radiologists to retrieve images whose features are similar to those of the input image and therefore lead to a similar diagnosis; this differs from other fields, where the objective is to find the nearest image from the same category as the query. This paper presents novel methods based on image moments. Different kinds of moments have been used to retrieve images and object shapes in many applications, and moments are important features for recognizing different types of images. In this paper, three kinds of moments (Geometric, Zernike, and Legendre) are evaluated for image retrieval using a Nearest Neighbor classifier.

Index Terms: Feature Extraction, Geometric Moments, Zernike Moments, Legendre Moments, Nearest Neighbor Classifier



1 INTRODUCTION
In image analysis, it is of utmost importance to look for
pattern features that are invariant with respect to change
of size, translation, and/or rotation [1]. There are two
different moment approaches to this problem: (i) direct
description by moment invariants and (ii) image normali-
zation. The direct description of moment invariants was
first introduced by Hu, showing how they can be derived
from algebraic invariants in his fundamental theorem of
moment invariants [2]. He used geometric moments to
generate a set of invariants that were then widely used in
pattern recognition, ship identification [3], aircraft identi-
fication [4], pattern matching, scene matching [5], image
analysis, object representation [6], edge detection [7], and
texture analysis [8]. Hu's invariants became classical and, despite their drawbacks, they have found numerous successful applications in various areas.
A major weakness of Hu's theory is that it does not provide for the possibility of any generalization [9]. Examples
of moment-based feature descriptors include Cartesian
geometrical moments, rotational moments, orthogonal
moments, and complex moments. Moments with an or-
thogonal basis set (e.g., Legendre and Zernike polynomi-
als) can be used to represent the image with a minimum
amount of information redundancy [10].
These orthogonal moments and their inverse transforms
have been used in the field of pattern representation [11],
image analysis [12], and image reconstruction [13] with
some success. As is well known, the difficulty in the use
of moments is due to their high computational complexi-
ty, especially when a higher order of moments is used.
Teague proposed Zernike moments based on the basis set
of orthogonal Zernike polynomials [14]. Other orthogonal
moments are Legendre and pseudo-Zernike moments
which are derived from Legendre and pseudo-Zernike
polynomials, respectively. Zernike moments have been
proven to be more robust in the presence of noise. They
are able to achieve a near-zero value of redundancy
measure in a set of moment functions where the moments
correspond to independent characteristics of the image
[15].
Since their moment functions are defined using a polar
coordinate representation of the image space, Zernike
moments are commonly used in recognition tasks requir-
ing rotation invariance. However, this coordinate repre-
sentation does not easily yield translation invariant func-
tions, which are also sought after in pattern recognition
applications. Since the Zernike and Legendre polynomials
are defined only inside the unit circle, the computation of
those moments requires a coordinate transformation and
suitable approximation of the continuous moment inte-
grals [16].
Many computer vision applications rely on retrieving desired images from a large collection on the basis of features that can be extracted automatically from the images themselves. Systems that do this are called CBIR (Content-Based Image Retrieval) systems. The algorithms they use perform three tasks [16]: feature extraction, feature selection, and classification.


The extraction task transforms rich content of images into
various content features. Feature extraction is the process
of generating features to be used in the selection and clas-
sification tasks. Feature selection reduces the number of
features provided to the classification task. Those features
which are likely to assist in discrimination are selected
and used in the classification task. Features which are not
selected are discarded [17]. Image classification helps the
selection of proper features and descriptors for the index-
ing and retrieval purpose. It enhances not only the re-
trieval accuracy but also the retrieval speed, since a large
image database can be organized according to the classifi-
cation rule and search can be performed within relevant
classes [17]. In this paper, three kinds of moments (Geometric, Zernike, and Legendre) are used for feature extraction, while a Nearest Neighbor classifier is used for classifying the contours of 3D images.
2 FEATURES EXTRACTION
The feature is defined as a function of one or more meas-
urements, each of which specifies some quantifiable
property of an object, and is computed such that it quanti-
fies some significant characteristics of the object. In pat-
tern recognition and in image processing, feature extrac-
tion is a special form of dimensionality reduction. When the input data to an algorithm is too large to be processed and is suspected to be highly redundant (much data, but not much information), the input data is transformed into a reduced representation set of features (also called a feature vector).
Transforming the input data into the set of features is
called features extraction. If the features extracted are
carefully chosen it is expected that the features set will
extract the relevant information from the input data in
order to perform the desired task using this reduced rep-
resentation instead of the full size input [18]. Feature ex-
traction involves simplifying the amount of resources
required to describe a large set of data accurately. When
performing analysis of complex data one of the major
problems stems from the number of variables involved.
Analysis with a large number of variables generally requires a large amount of memory and computational power, or a classification algorithm that overfits the training sample and generalizes poorly to new samples. Feature extraction is a general term for methods of constructing combinations of the variables that get around these problems while still describing the data with sufficient accuracy.
There are various features currently employed [18]:
1- General features: application-independent features such as color, texture, and shape. According to the abstraction level, they can be further divided into:
   - Pixel-level features: features calculated at each pixel, e.g. color, location.
   - Local features: features calculated over the results of subdividing the image, based on image segmentation or edge detection.
   - Global features: features calculated over the entire image or a regular sub-area of an image.
2- Domain-specific features: application-dependent features such as human faces, fingerprints, and conceptual features. These features are often a synthesis of low-level features for a specific domain.
Moment functions of the two-dimensional image intensi-
ty distribution are used in a variety of applications, as
descriptors of shape. Image moments that are invariant
with respect to the transformations of scale, translation,
and rotation find applications in areas such as pattern
recognition, object identification and template matching
[19]. Orthogonal moments have additional properties of
being more robust in the presence of image noise, and
having a near-zero redundancy measure in a feature set.
Zernike moments, which are proven to have very good
image feature representation capabilities, are based on the
orthogonal Zernike radial polynomials. They are effec-
tively used in pattern recognition since their rotational
invariants can be easily constructed. Legendre moments
form another orthogonal set, defined on the Cartesian
coordinate space. Orthogonal moments also permit the
analytical reconstruction of an image intensity function
from a finite set of moments, using the inverse moment
transform. Both Legendre and Zernike moments are de-
fined as continuous integrals over a domain of normal-
ized coordinates [20]. In retrieval applications, a small set
of lower order moments is used to discriminate among
different images. The most common moments are: Geo-
metrical moments, Zernike moments and Legendre mo-
ments.
2.1 GEOMETRICAL MOMENTS
The shape of an object is a very important characteristic in human perception, recognition, and comprehension. Because geometric shape represents the essential character of an object and is invariant with respect to translation, scale, and orientation, geometric analysis and discrimination are of great significance in computer vision. Historically, Hu published the first significant paper on the use of image moment invariants for two-dimensional pattern recognition applications [21]. His approach is based on the work of the 19th-century mathematicians Boole, Cayley, and Sylvester, and on the theory of algebraic forms.

The seven invariants, written in terms of the normalized central moments $\eta_{pq}$, are:

$M_1 = \eta_{20} + \eta_{02}$  (1)

$M_2 = (\eta_{20} - \eta_{02})^2 + 4\eta_{11}^2$  (2)

$M_3 = (\eta_{30} - 3\eta_{12})^2 + (3\eta_{21} - \eta_{03})^2$  (3)

$M_4 = (\eta_{30} + \eta_{12})^2 + (\eta_{21} + \eta_{03})^2$  (4)

$M_5 = (\eta_{30} - 3\eta_{12})(\eta_{30} + \eta_{12})\left[(\eta_{30} + \eta_{12})^2 - 3(\eta_{21} + \eta_{03})^2\right] + (3\eta_{21} - \eta_{03})(\eta_{21} + \eta_{03})\left[3(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2\right]$  (5)

$M_6 = (\eta_{20} - \eta_{02})\left[(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2\right] + 4\eta_{11}(\eta_{30} + \eta_{12})(\eta_{21} + \eta_{03})$  (6)

$M_7 = (3\eta_{21} - \eta_{03})(\eta_{30} + \eta_{12})\left[(\eta_{30} + \eta_{12})^2 - 3(\eta_{21} + \eta_{03})^2\right] - (\eta_{30} - 3\eta_{12})(\eta_{21} + \eta_{03})\left[3(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2\right]$  (7)
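As an illustration, the invariants of equations (1)-(7) can be computed directly from the pixel grid. The following NumPy sketch (our own illustration, not code from the original system) first forms the normalized central moments $\eta_{pq}$ and then the seven invariants:

```python
import numpy as np

def hu_moments(img):
    # Hu's seven moment invariants M1..M7 of a 2-D grayscale image,
    # built from the normalized central moments eta_pq.
    img = np.asarray(img, dtype=float)
    H, W = img.shape
    y, x = np.mgrid[0:H, 0:W]
    m00 = img.sum()
    xc, yc = (x * img).sum() / m00, (y * img).sum() / m00

    def eta(p, q):
        # normalized central moment of order (p, q)
        mu = (((x - xc) ** p) * ((y - yc) ** q) * img).sum()
        return mu / m00 ** (1 + (p + q) / 2.0)

    e20, e02, e11 = eta(2, 0), eta(0, 2), eta(1, 1)
    e30, e03, e21, e12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)
    M1 = e20 + e02
    M2 = (e20 - e02) ** 2 + 4 * e11 ** 2
    M3 = (e30 - 3 * e12) ** 2 + (3 * e21 - e03) ** 2
    M4 = (e30 + e12) ** 2 + (e21 + e03) ** 2
    M5 = ((e30 - 3 * e12) * (e30 + e12)
          * ((e30 + e12) ** 2 - 3 * (e21 + e03) ** 2)
          + (3 * e21 - e03) * (e21 + e03)
          * (3 * (e30 + e12) ** 2 - (e21 + e03) ** 2))
    M6 = ((e20 - e02) * ((e30 + e12) ** 2 - (e21 + e03) ** 2)
          + 4 * e11 * (e30 + e12) * (e21 + e03))
    M7 = ((3 * e21 - e03) * (e30 + e12)
          * ((e30 + e12) ** 2 - 3 * (e21 + e03) ** 2)
          - (e30 - 3 * e12) * (e21 + e03)
          * (3 * (e30 + e12) ** 2 - (e21 + e03) ** 2))
    return np.array([M1, M2, M3, M4, M5, M6, M7])
```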

Geometric moments of a 1-D signal $S(x)$ are defined by [21]:

$M_n(x) = \int_{-e}^{e} S(x+t)\, t^n\, dt, \qquad n = 0, 1, 2, \ldots$  (8)

where $M_n(x)$ is the moment of order $n$ calculated from a window of size $(2e+1)$ pixels centered at the point $x$. Geometric moments of a 2-D image $I(x, y)$ are defined by [21]:

$M_{m,n}(x, y) = \int_{-\ell_2}^{\ell_2} \int_{-\ell_1}^{\ell_1} I(x+u,\, y+v)\, u^m v^n\, du\, dv, \qquad m, n = 0, 1, 2, \ldots$  (9)

where $M_{m,n}(x, y)$ is the moment of order $(m, n)$ calculated from a window of size $(2\ell_1 + 1) \times (2\ell_2 + 1)$ pixels centered at the pixel $(x, y)$.
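A minimal discrete version of equation (9), assuming the window lies entirely inside the image (boundary handling is omitted in this sketch):

```python
import numpy as np

def local_geometric_moment(img, x, y, m, n, l1, l2):
    # Moment of order (m, n) over a (2*l1+1) x (2*l2+1) window
    # centered at pixel (x, y); rows index y, columns index x.
    img = np.asarray(img, dtype=float)
    window = img[y - l2:y + l2 + 1, x - l1:x + l1 + 1]
    u = np.arange(-l1, l1 + 1)          # horizontal offsets
    v = np.arange(-l2, l2 + 1)          # vertical offsets
    U, V = np.meshgrid(u, v)            # U varies along columns, V along rows
    return (window * U ** m * V ** n).sum()
```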


2.2 ZERNIKE MOMENTS
Teague first introduced the use of Zernike moments to
overcome the shortcomings of information redundancy
present in the popular geometric moments [22]. Zernike
moments are a class of orthogonal moments and have
been shown effective in terms of image representation.
Zernike moments, a type of moment function, are the
mapping of an image onto a set of complex Zernike poly-
nomials. As these Zernike polynomials are orthogonal to
each other, Zernike moments can represent the properties
of an image with no redundancy or overlap of infor-
mation between the moments [23].
Due to these characteristics, Zernike moments have been
utilized as feature sets in applications such as pattern
recognition and content-based image retrieval [24]. To
calculate the Zernike moments, the image (or region of
interest) is first mapped to the unit disc using polar coor-
dinates, where the centre of the image is the origin of the
unit disc. Those pixels falling outside the unit disc are not
used in the calculation. The coordinates are then de-
scribed by the length of the vector from the origin to the
coordinate point. An important attribute of the geometric
representations of Zernike polynomials is that lower or-
der polynomials approximate the global features of the
shape/surface, while the higher ordered polynomial
terms capture local shape/surface features.
Zernike moments have the following advantages [23, 24]:
- Rotation invariance: the magnitude of a Zernike moment is invariant under rotation.
- Robustness: they are robust to noise and to minor variations in shape.
- Expressiveness: since the basis is orthogonal, they have minimum information redundancy.
- Effectiveness: an image can be described better by a small set of its Zernike moments than by any other type of moments, such as geometric moments.
- Multilevel representation: a relatively small set of Zernike moments can characterize the global shape of a pattern; lower-order moments represent the global shape and higher-order moments represent the detail.
- Ease of image reconstruction from the moments.
The computation of Zernike moments from an input image consists of three steps [24]: computation of the radial polynomials, computation of the Zernike basis functions, and computation of the Zernike moments by projecting the image onto the basis functions.
The procedure for obtaining Zernike moments from an input image begins with the computation of the Zernike radial polynomials. The real-valued 1-D radial polynomial $R_{nm}(\rho)$ is defined as [25]:

$R_{nm}(\rho) = \sum_{s=0}^{(n-|m|)/2} c(n, m, s)\, \rho^{n-2s}$  (10)

where

$c(n, m, s) = (-1)^s\, \dfrac{(n-s)!}{s!\, \left(\frac{n+|m|}{2}-s\right)!\, \left(\frac{n-|m|}{2}-s\right)!}$  (11)
In equation (10), n and m are generally called the order and the repetition, respectively. The order n is a non-negative integer, and the repetition m is an integer satisfying the conditions that $n - |m|$ is even and $|m| \le n$. The radial polynomials satisfy the following orthogonality property for the same repetition [25]:

$\int_{0}^{1} R_{nm}(\rho)\, R_{n'm}(\rho)\, \rho\, d\rho = \dfrac{1}{2(n+1)}\, \delta_{nn'}$  (12)
Using the radial polynomials, complex-valued 2-D Zernike basis functions, which are defined within a unit circle, are formed as [25]:

$V_{nm}(\rho, \theta) = R_{nm}(\rho)\, \exp(jm\theta), \qquad \rho \le 1$  (13)

where $j = \sqrt{-1}$.


Zernike basis functions are orthogonal, and the Zernike moments are obtained by projecting the image onto these basis functions [26].
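The three steps can be sketched as follows; the $(n+1)/\pi$ normalization and the pixel-grid sampling of the unit disc follow the standard continuous definition of Zernike moments, and the helper names are our own:

```python
import numpy as np
from math import factorial

def radial_poly(n, m, rho):
    # R_nm(rho) from equations (10)-(11); assumes n - |m| even, |m| <= n
    m = abs(m)
    R = np.zeros_like(rho)
    for s in range((n - m) // 2 + 1):
        c = ((-1) ** s * factorial(n - s)
             / (factorial(s)
                * factorial((n + m) // 2 - s)
                * factorial((n - m) // 2 - s)))
        R = R + c * rho ** (n - 2 * s)
    return R

def zernike_moment(img, n, m):
    # Map a square image onto the unit disc and project it onto the
    # basis V_nm of equation (13); pixels outside the disc are ignored.
    img = np.asarray(img, dtype=float)
    N = img.shape[0]
    c = (2 * np.arange(N) - N + 1) / (N - 1)    # pixel centers in [-1, 1]
    X, Y = np.meshgrid(c, c)
    rho, theta = np.hypot(X, Y), np.arctan2(Y, X)
    inside = rho <= 1.0
    V = radial_poly(n, m, rho) * np.exp(1j * m * theta)
    dA = (2.0 / (N - 1)) ** 2                   # area of one pixel
    return (n + 1) / np.pi * (img[inside] * np.conj(V[inside])).sum() * dA
```

The magnitude abs(zernike_moment(img, n, m)) is the rotation-invariant quantity typically stored as a feature.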

2.3 LEGENDRE MOMENTS
Moments with Legendre polynomials as kernel function,
denoted as Legendre moments, were first introduced by
Teague [26]. Legendre moments belong to the class of
orthogonal moments, and they were used in several pat-
tern recognition applications. They can be used to attain a
near zero value of redundancy measure in a set of mo-
ment functions, so that the moments correspond to inde-
pendent characteristics of the image [27]. By convention,
the translation and scale invariant functions of Legendre
moments are achieved by using a combination of the cor-
responding invariants of geometric moments. They can
also be accomplished by normalizing the translated
and/or scaled images using complex or geometric mo-
ments.
However, the derivation of these functions is not based
on Legendre polynomials. This is mainly due to the fact
that it is difficult to extract a common displacement or
scale factor from Legendre polynomials. The two-dimensional Legendre moments of order $(p+q)$ of an image with intensity function $f(x, y)$ are defined as:

$L_{pq} = \dfrac{(2p+1)(2q+1)}{4} \int_{-1}^{1} \int_{-1}^{1} P_p(x)\, P_q(y)\, f(x, y)\, dx\, dy, \qquad x, y \in [-1, 1]$  (14)
where the Legendre polynomial $P_p(x)$ of order $p$ is given by:

$P_p(x) = \sum_{\substack{k=0 \\ p-k \,\mathrm{even}}}^{p} (-1)^{(p-k)/2}\, \dfrac{1}{2^p}\, \dfrac{(p+k)!}{\left(\frac{p-k}{2}\right)!\, \left(\frac{p+k}{2}\right)!\, k!}\, x^k$  (15)
The recurrence relation of the Legendre polynomials $P_p(x)$ is given as follows:

$P_p(x) = \dfrac{(2p-1)\, x\, P_{p-1}(x) - (p-1)\, P_{p-2}(x)}{p}$  (16)

where $P_0(x) = 1$, $P_1(x) = x$, and $p > 1$.
Since the region of definition of the Legendre polynomials is the interior of $[-1, 1]$, a square image of $N \times N$ pixels with intensity function $f(i, j)$, $0 \le i, j \le N-1$, is scaled to the region $-1 < x, y < 1$.
3 NEAREST NEIGHBOR CLASSIFIER
The nearest neighbor classifier relies on a metric or distance function between points. For all points x, y, and z, a metric $D(\cdot, \cdot)$ must satisfy the following properties:

1- Non-negativity: $D(x, y) \ge 0$.
2- Reflexivity: $D(x, y) = 0$ if and only if $x = y$.
3- Symmetry: $D(x, y) = D(y, x)$.
4- Triangle inequality: $D(x, y) + D(y, z) \ge D(x, z)$.

The nearest neighbor classifier compares the feature vector of the prototype (query) image with the feature vectors stored in the database, by finding the distance between the prototype image and each entry in the database. Let $C_1, C_2, C_3, \ldots, C_k$ be the $k$ clusters in the database. The class is found by measuring the distance $d(x(q), C_k)$ between the query vector $x(q)$ and the $k$-th cluster $C_k$. The feature vector with the minimum difference is the closest matching vector, given by:

$d(x(q), C_k) = \min \{\, \|x(q) - x\| : x \in C_k \,\}$

Nearest neighbor classifiers provide good image classification when the query image is similar to one of the labeled images in its class.
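A minimal sketch of this minimum-distance rule, assuming the database is held as a dictionary mapping class labels to lists of feature vectors (a representation chosen here purely for illustration):

```python
import numpy as np

def nearest_class(query, clusters):
    # d(x(q), C_k) = min ||x(q) - x|| over x in C_k; return the label
    # of the cluster containing the closest matching vector.
    query = np.asarray(query, dtype=float)
    best_label, best_dist = None, np.inf
    for label, vectors in clusters.items():
        d = min(np.linalg.norm(query - np.asarray(v)) for v in vectors)
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label, best_dist

# usage: nearest_class(features, {'C1': [v1, v2], 'C2': [v3]})
```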
3.1 Pre-Processing
Image segmentation is the process of separating or group-
ing an image into different parts. These parts normally
correspond to something that humans can easily separate
and view as individual objects. The segmentation process
is based on various features found in the image. The goal of image segmentation is to cluster pixels into salient image regions, that is, regions corresponding to individual surfaces, objects, or natural parts of objects.
Segmentation can be used for object recognition, occlusion boundary estimation within motion or stereo systems, image compression, image editing, or image databases. Segmentation is an important procedure in medical image analysis and classification [28]. Image segmentation methods can be divided into three types: boundary-based techniques, region-based techniques, and pixel-based direct classification methods.
Region-based methods are used most often; they are typically fast and easy to manipulate, as described in [28]. To improve the performance of region-based methods and their results, preprocessing techniques are required.
The result of image segmentation is a set of segments that
collectively cover the entire image, or a set of contours
extracted from the image. Each of the pixels in a region is
similar with respect to some characteristic or computed
property, such as color, intensity, or texture.
The segmentation process is carried out as a pre-processing step. It is used to separate a particular region of the image, since there are typically some clearly defined areas within the image. Here the input image is subdivided into blocks, whose number varies with the block size, as in [29]. Each block then becomes the smallest unit for further processing.


For an image, Img is defined as:

$\mathrm{Img} = \{\, p_{ij},\ 0 \le i < H,\ 0 \le j < W \,\}$  (17)

where $p_{ij}$ is the pixel at location $(i, j)$, and $H$ and $W$ are respectively the height and the width of the image. The subdivision of Img into blocks can be expressed as:

$\mathrm{Img} = \{\, b_{IJ},\ 0 \le I < H/h,\ 0 \le J < W/w \,\}$  (18)

where $b_{IJ}$ is the block at the $I$-th row and $J$-th column, and $h$ and $w$ are respectively the height and the width of the blocks. A block $b_{IJ}$ is defined as:

$b_{IJ} = \{\, p_{ij},\ hI \le i < h(I+1),\ wJ \le j < w(J+1) \,\}$  (19)
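A small sketch of this subdivision, assuming H and W are divisible by h and w:

```python
def subdivide(img, h, w):
    # Split an H x W image (a 2-D NumPy array) into h x w blocks b_IJ
    # as in equations (18)-(19); returns a 2-D list of blocks.
    H, W = img.shape
    return [[img[h * I:h * (I + 1), w * J:w * (J + 1)]
             for J in range(W // w)]
            for I in range(H // h)]
```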
The pre-processing method is depicted in Fig. 1.

Fig 1. Sample pre-processed image (input image and output image)

4 EXPERIMENTS AND EVALUATIONS
When the input data is too large to be processed and is redundant, it is transformed into a reduced representation set of features. Transforming the input data into the set of features is called feature extraction, which involves simplifying the amount of resources required to describe a large set of data accurately; when analyzing complex data, one of the major problems is the number of variables involved. Here, the feature extraction process is carried out using moment invariants: Geometric, Zernike, and Legendre moments. Moment invariants were first introduced by Hu and are also known in the literature as Hu's moment invariants [29]. They were derived from the theory of algebraic invariants. The concept of moment in mathematics evolved from the concept of moment in physics. Moment theory provides an integrated framework for both the contour and the region of a shape, so one can use it to analyze an object.

4.1 Algorithm for Invariant Moments
Step 1: The input image is first converted into a gray-level image.
Step 2: The obtained gray-scale image is then analyzed at different resolutions, and the value of the moments at each level is calculated.
Step 3: There are seven moments in total, M1 to M7, each calculated as given in equations (1)-(7).

Fig 2. Sample Geometrical Moments values for an image (input image and output values)
TABLE 1: VALUES OF GEOMETRICAL MOMENTS

Geometrical Moment    Value
M1                     6.2648e+010
M2                     3.2645e+019
M3                     7.9648e+023
M4                     6.5765e+023
M5                     2.9417e+045
M6                    -2.6081e+037
M7                     1.1383e+046



Fig 3. Sample Geometrical, Zernike, and Legendre Moments values for an image
4.2 Euclidean Distance Method
For retrieval of images from the database, the Euclidean method is used; it calculates the distance between the images, as in Jie Chen et al. [29]. The formula for the Euclidean method is:

$d(x, y) = \sqrt{\sum_{i=1}^{n} (x_i - y_i)^2}$  (20)

where $x = \langle x_1, x_2, \ldots, x_n \rangle$ and $y = \langle y_1, y_2, \ldots, y_n \rangle$. Thus, in two-dimensional Euclidean space, the distance $AB$ between $A = (a_1, a_2)$ and $B = (b_1, b_2)$ is $\sqrt{(a_1 - b_1)^2 + (a_2 - b_2)^2}$.
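In code, equation (20) and the two-dimensional example reduce to a few lines:

```python
import numpy as np

def euclidean(x, y):
    # d(x, y) = sqrt(sum_i (x_i - y_i)^2), equation (20)
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return float(np.sqrt(((x - y) ** 2).sum()))

# e.g. the distance between A = (1, 2) and B = (4, 6) is 5.0
assert euclidean([1, 2], [4, 6]) == 5.0
```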
4.3 Retrieval Efficiency
For retrieval efficiency, the traditional measures of precision and recall were computed on 1000 real medical images, using the standard formulas. Precision (P) is the ratio of the number of relevant images retrieved to the total number of images retrieved:

$P = \dfrac{r}{n_1}$

where $r$ is the number of relevant images retrieved and $n_1$ is the total number of images retrieved. Recall (R) is the ratio of the number of relevant images retrieved to the total number of relevant images in the database:

$R = \dfrac{r}{n_2}$

where $n_2$ is the total number of relevant images in the database.
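A minimal sketch of both measures, treating the retrieved and relevant sets as collections of image identifiers:

```python
def precision_recall(retrieved, relevant):
    # P = r / n1 and R = r / n2, where r is the number of relevant
    # images retrieved, n1 the total retrieved, n2 the total relevant
    r = len(set(retrieved) & set(relevant))
    return r / len(retrieved), r / len(relevant)

# example: 3 of 4 retrieved images are relevant, out of 6 relevant
# images in the database -> precision 0.75, recall 0.5
p, rec = precision_recall(['a', 'b', 'c', 'd'], ['a', 'b', 'c', 'e', 'f', 'g'])
```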
Tables 2 and 3 give the precision and recall results recorded for various test images, and Figures 4, 5, 6, and 7 show sample snapshots of the user interface of the system. The experimental results show that the retrieval rate of the nearest neighbor classifier based on Legendre moments is higher than that based on Geometrical and Zernike moments.

Table 2: Precision and recall results recorded for various test images

Result      Img1  Img2  Img3  Img4  Img5  Img6  Img7
Precision    48    12    20    10    60    43    35
Recall       55    95    85   100    15    65    75

Table 3: Retrieval rate of Geometrical, Zernike, and Legendre Moments using the Nearest Neighbor classifier

Result  Img1  Img2  Img3  Img4  Img5  Img6  Img7  Average
Geo.     85%   65%   65%   65%   65%   85%   75%    71%
Zer.     75%   65%   65%   65%   75%   75%   65%    69%
Leg.     95%   65%   65%   85%   85%   95%   95%    81%
Ave.     85%   65%   65%   72%   75%   85%   78%    74%

Figs 4, 5, 6, 7: Sample user interface screens

Fig 4: CBIR using Geometrical Moments for a medical image




Fig 5: CBIR using Zernike Moments for a medical image


Fig 6: CBIR using Legendre Moments for a medical image


Fig 7: CBIR using combined Geometrical, Zernike, and Legendre Moments for a medical image



5 CONCLUSIONS
This paper proposed a model for a Content-Based Medical Image Retrieval system using invariant moments (Geometrical, Zernike, and Legendre). The local features were used together for better retrieval accuracy and efficiency. For the retrieval process, Euclidean distances are calculated between the feature vectors of the two images being matched. The results are quite good for most of the query images, and they can be further improved by tuning the threshold and adding other low-level features.
REFERENCES
[1] C.-W. Chong, P. Raveendran, and R. Mukundan, (2003). Translation invariants of Zernike moments, Pattern Recognition, 36, pp. 1765-1773.
[2] B. Ye and J.-X. Peng, (2012). Invariance analysis of improved Zernike moments, J. Opt. A: Pure Appl. Opt., 4(6), pp. 606-614.
[3] Ch. Srinivasa Rao, S. Srinivas Kumar, and B. Chandra Mohan, (May 2010). Content based image retrieval using exact Legendre moments and support vector machine, The International Journal of Multimedia and its Applications (IJMA), Vol. 2, No. 2.
[4] R. S. Dubey, Rajnish Choubey, and J. B., (2010). Multi feature content based image retrieval, International Journal on Computer Science and Engineering (IJCSE), Vol. 02, No. 06, pp. 2145-2149.
[5] B., A. J., V. P., and T. P., (2011). Efficient multimodal biometric authentication using fast fingerprint verification and enhanced iris features, J. Comput. Sci., 7, pp. 698-706. DOI: 10.3844/jcssp.2011.698.706.
[6] M., L., H., A., P., F., Matuszewski, B. J., and Carreiras, F., (2012). Fractional entropy based active contour segmentation of cell nuclei in actin-tagged confocal microscopy images, Proceedings of the 16th BMVA Medical Image Understanding and Analysis Conference, July, pp. 117-123.
[7] Jan Flusser, (February 2006). Moment invariants in image analysis, Transactions on Engineering, Computing and Technology, Vol. 11, ISSN 1305-5313.
[8] H. S., L. L., X. B., W. Yu, and G. H., (2011). An efficient method for computation of Legendre moments, Graphical Models, 62, pp. 237-262.
[9] M. E. and Y. C., (November 2005). Fast extraction of edge histogram in DCT domain based on MPEG7, Proceedings of the World Academy of Science, Engineering & Technology, Volume 9, ISSN 1307-6884, pp. 209-212.
[10] R. Mukundan and K. R. Ramakrishnan, (2011). Moment Functions in Image Analysis: Theory and Applications, World Scientific, Singapore.
[11] R. Mukundan, (August 2004). Some computational aspects of discrete orthogonal moments, IEEE Transactions on Image Processing, Vol. 13, No. 8, pp. 1055-1059.
[12] Ryszard S. Choras, (2007). Image feature extraction techniques and their applications for CBIR and biometrics systems, International Journal of Biology and Biomedical Engineering, Issue 1, Vol. 1, pp. 6-16.
[13] Zijun Yang and C.-C. Jay Kuo, (1999). Survey on image content analysis, indexing, and retrieval techniques and status report of MPEG-7, Tamkang Journal of Science and Engineering, Vol. 2, No. 3, pp. 101-118.
[14] Li Zongmin, Kunpeng Hou, Liu Yujie, Diao Luhong, and Li Hua, (2006). The shape recognition based on structure moment invariants, International Journal of Information Technology, Vol. 12, No. 2.
[15] Jun Shen, Wei Shen, and Danfei Shen, (2000). On geometric and orthogonal moments, International Journal of Pattern Recognition and Artificial Intelligence, Vol. 14, No. 7, pp. 875-894.
[16] A. Khotanzad, (March 1990). Invariant image recognition by Zernike moments, IEEE Trans. Pattern Anal. Mach. Intell., Vol. 12, pp. 489-497.
[17] R. Mukundan, S. H. Ong, and P. A. Lee, (2000). Discrete vs. continuous orthogonal moments for image analysis, CISST'01 International Conference, pp. 23-29.
[18] Sun-Kyoo Hwang and Whoi-Yul Kim, (2006). A novel approach to the fast computation of Zernike moments, Pattern Recognition, 39, pp. 2065-2076.
[19] S. L. Phung and Abdesselam Bouzerdoum, (2007). Detecting people in images: an edge density approach, IEEE ICASSP, pp. 1229-1232.
[20] Son Lam Phung and Abdesselam Bouzerdoum, (2007). A new image feature for fast detection of people in images, International Journal of Information & Systems Sciences, Volume 3, Number 3, pp. 383-391.
[21] Preeti Aggarwal, H. K. Sardana, and Gagandeep Jindal, (April 2009). Content based medical image retrieval: theory, gaps and future directions, ICGST-GVIP Journal, Volume 9, Issue II, ISSN 1687-398X.
[22] Chee-Way Chong, P. Raveendran, and R. Mukundan, (2004). Translation and scale invariants of Legendre moments, Pattern Recognition, 37, pp. 119-129.
[23] H. B. Kekre, Tanuja K. Sarode, and Saylee M. Gharge, (2009). Detection and determination of tumor using vector quantization, International Journal of Engineering Science and Technology, Vol. 1(2), pp. 59-66.
[24] O. Boiman, E. Shechtman, and M. Irani, (June 2008). In defense of nearest-neighbor based image classification, IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[25] A. Opelt, M. Fussenegger, A. Pinz, and P. Auer, (2004). Weak hypotheses and boosting for generic object detection and recognition, in ECCV.
[26] G. Wang, Y. Zhang, and L. Fei-Fei, (2006). Using dependent regions for object categorization in a generative framework, in CVPR.
[27] A. Bosch, A. Zisserman, and X. Munoz, (2007). Image classification using random forests and ferns, in ICCV.
[28] P. Felzenszwalb and D. Huttenlocher, (2005). Pictorial structures for object recognition, IJCV, 61.
[29] S. K. Sangam, R. J. Ramteke, and RajKumar Benne, (2009). Recognition of isolated handwritten Kannada vowels, Advances in Computational Research, ISSN: 0975-3273, Volume 1, Issue 2, pp. 52-55.
