

CHAPTER 1
INTRODUCTION

1.1 OVERVIEW OF MEMS TECHNOLOGY AND ITS APPLICATIONS
MEMS (Micro Electro Mechanical Systems) refers to the technology that integrates electrical and mechanical components with feature sizes of 1 to 1000 microns. Due to their small size, low cost, low power consumption and high efficiency, MEMS have been widely used in many fields, with a diverse range of applications in bio-engineering, automotive engineering, telecommunications, environmental monitoring and space exploration [1]. Micro mechanisms are also referred to as MEMS [2].
MEMS are a processing technology used to create tiny integrated devices
or systems that combine mechanical and electrical components. They are fabricated
using integrated circuit (IC) batch processing techniques and can range in size from
a few micrometers to a millimeter [3]. The reduction in size and power usage of MEMS devices has enabled the development of fully implantable medical devices [4].
These devices (or systems) have the ability to sense, control and actuate on a micro
scale, and generate effects on macro scale. If semiconductor micro fabrication was
seen to be the first micro manufacturing revolution, MEMS would be the second
revolution [5].
MEMS range from simple beams and electrostatic gaps to more complex sensors and actuators that include fluidic, magnetic and thermal systems. The modern methodology of MEMS design implies that the entire MEMS can be investigated only at higher abstraction levels, such as the schematic and system levels, where accurate macro models can be used [6]. On the other hand, at the component or device levels the physical behavior of three-dimensional continua is described by partial differential equations (PDE) solvable by the Finite Element or Finite Difference Methods (FEM or FDM) [3]. Micro-Electro-Mechanical Systems consist of mechanical elements, sensors, actuators and electrical and electronic devices on a common silicon substrate [7]. The sensors in MEMS gather information from the environment by measuring mechanical, thermal, biological, chemical, optical, and magnetic phenomena. The electronics then process the information derived from the sensors and, through some decision-making capability, direct the actuators to respond by moving, positioning, regulating, pumping and filtering, thereby controlling the environment for some desired outcome or purpose [3].
The advantages of semiconductor IC manufacturing, such as low cost, mass production and reliability, are also integral to MEMS devices. The size of MEMS sub-components is in the range of 1 to 100 micrometers and the MEMS device itself measures in the range of 20 micrometers to a millimeter [8]. These have been used as sensors for pressure, temperature, mass flow, velocity, sound and chemical composition, as actuators for linear and angular motions and as simple components for complex systems such as robots, lab-on-a-chip, micro heat engines and micro heat pumps [6]. The lab-on-a-chip in particular promises to automate biology and chemistry to the same extent that the integrated circuit has allowed large-scale automation of computation.
Accelerometers for automobile airbags, keyless entry systems, dense
arrays of micro mirrors for high definition optical displays, scanning electron
microscope tips to image single atoms, micro heat exchangers for cooling of
electronic circuits, reactors for separating biological cells, blood analyzers and
pressure sensors for catheter tips are but a few of the current usages. Micro ducts are
used in infrared detectors, diode lasers, miniature gas chromatographs and high-frequency fluidic control systems. Micro pumps are used for ink jet printing, environmental testing and electronic cooling. Potential medical applications for small pumps include controlled delivery and monitoring of minute amounts of medication, manufacturing of nanoliters of chemicals, and the development of an artificial pancreas [5]. Commercial tools like MEMCAD (Microcosm Technologies) [7] and MEMS Modeler (MEMSCAP) use parametric curve-fitting of simulation data to obtain macro models [9]. The primary drawback of these methods is that they do not generate scalable macro models.
However, the greatest potential for MEMS devices lies in new
applications within telecommunications (optical and wireless), biomedical and
process control areas. Military use of MEMS such as triggers for weapons, microgyros, micro-surety systems, and micro-navigation devices give another dimension
to the importance of reliability of these devices [8]. Any accidental triggering may
claim many lives and, if in a warehouse, may have a domino effect. Initial air bag
technology used conventional mechanical ball and tube type devices which were
relatively complex, weighed several pounds and cost several hundred dollars. They
were usually mounted in the front of the vehicle with separate electronics near the
airbag. MEMS have enabled the same function to be accomplished by integrating
an accelerometer and the electronics into a single silicon chip. Another example of
an extremely successful MEMS application is the miniature disposable pressure
sensor used to monitor blood pressure in hospitals. These sensors connect to a patient's intravenous (IV) line and monitor the blood pressure through the IV solution. For a fraction of the cost (about $10), hospitals have replaced the earlier external blood pressure sensors, which cost over $600 and had to be sterilized and recalibrated for reuse, with MEMS sensors [3].
MEMS have several distinct advantages as a manufacturing technology
[6]. In the first place, the interdisciplinary nature of MEMS technology and its
micromachining techniques, as well as its diversity of applications has resulted in an
unprecedented range of devices and synergies across previously unrelated fields (for example, biology and microelectronics). Secondly, MEMS with its batch fabrication techniques enables components and devices to be manufactured with increased performance and reliability, combined with the obvious advantages of reduced physical size, volume, weight and cost. Thirdly, MEMS provides the basis for the manufacture of products that cannot be made by other methods. These factors make MEMS potentially a far more pervasive technology than integrated circuit microchips. However, there are many challenges and technological obstacles associated with the miniaturization of MEMS that need to be addressed and overcome before it can realize its full potential. MEMS is a manufacturing technology; a paradigm for designing and creating complex mechanical devices and systems, as well as their integrated electronics, using batch fabrication techniques.
A number of actuators operate by thermal actuation, which imposes relatively high temperatures and requires resistance to thermal cycling, high-temperature fatigue and creep. All these issues necessitate rigorous mechanical tests at the MEMS scale, to examine the effect of various processing parameters, types of loading, service environments and temperatures for different materials [5].
Components such as micro mirrors that have several levels of alignment controls
operating at high frequencies suffer from cyclic fatigue accumulation and may fail
by crack initiation and propagation under cyclic loading [3].
MEMS accelerometers sense the acceleration experienced by a system and are frequently used in airbag deployment systems in automobiles. Here, the negative acceleration (deceleration) of the vehicle is sensed by these accelerometers. A processor examines the magnitude of the acceleration and determines whether the airbags in the vehicle are to be deployed. The conventional accelerometers for airbag deployment systems in automobiles are rapidly being replaced by MEMS accelerometers [10].
The growing popularity of MEMS accelerometers is mainly due to their small size, light weight and increased reliability. In addition, the cost of manufacturing a MEMS accelerometer is found to be only a fraction of the cost of constructing a conventional, massive accelerometer. Numerous novel micromachining approaches work jointly to achieve a commercially available accelerometer for low-g (gravity acceleration) applications [10].
MEMS accelerometers are employed in a wide range of applications beyond automobile airbag systems. Other familiar applications include consumer products such as computer games, cell phones, pagers, PDAs, advanced robotics, laptop computers, computer input devices, camcorders, digital cameras and SD card accessories. Size and accuracy are the most essential features that decide the performance of the sensor in each of the above-mentioned applications [11].
In recent years, CMOS micromachining has evolved as a chief fabrication technology for VLSI MEMS, and it has been utilized for the design, fabrication and characterization of lateral accelerometers. More attention is being paid to integrated microsystems, such as inertial measurement units. Integrated microsystems yield improved performance with an array of devices of similar topology, like accelerometers and gyroscopes, which possess dissimilar performance specifications [12].
1.2 INTRODUCTION TO OPTIMIZATION
But ask the animals, and they will teach you, or the birds of the air, and they will tell you; or speak to the earth, and it will teach you, or let the fish of the sea inform you.
-Job 12:7 (adapted from [179])
Optimization is the procedure (or procedures) used to make a system or design as effective or functional as possible, especially the mathematical techniques involved; in short, making the best of anything. It is a mathematical technique for finding a maximum or minimum value of a function of several variables subject to a set of constraints, as in linear programming or systems analysis.
1.3 TYPES OF OPTIMIZATION

The types of optimization techniques are unconstrained optimization, constrained optimization, multi-objective optimization, multimodal optimization, combinatorial optimization, hill climbing and intelligence.

1.3.1 Unconstrained Optimization

An unconstrained optimization problem is one in which only the objective function being optimized needs to be considered; none of the variables in the objective function are constrained.
1.3.2 Constrained Optimization

Constrained optimization is the process of optimizing an objective function with respect to some variables in the presence of constraints. It is the minimization of an objective function subject to constraints on the possible values of the independent variables. Constraints can be either equality constraints or inequality constraints.
1.3.3 Multi-Objective Optimization

Multi-objective optimization, also known as multi-objective programming, vector optimization, multi-criteria optimization, multi-attribute optimization or Pareto optimization, is an area of multiple criteria decision making that is concerned with mathematical optimization problems involving more than one objective function to be optimized simultaneously. It has been applied in many fields of science, including engineering, economics and logistics, where optimal decisions need to be taken in the presence of trade-offs between two or more conflicting objectives.
1.3.4 Multimodal Optimization

A multimodal optimization problem is a problem that has more than one local minimum; multimodal optimization deals with optimization tasks that involve finding all or most of the multiple solutions (as opposed to a single best solution).

1.3.5 Combinatorial Optimization

Combinatorial optimization is a branch of optimization in applied mathematics and computer science, related to operations research, algorithm theory and computational complexity theory. There are many optimization problems for which the independent variables are restricted to a set of discrete values. These types of problems are called combinatorial optimization problems.
1.3.6 Hill Climbing

Hill climbing is a graph search algorithm in which the current path is extended with a successor node that is closer to the solution than the end of the current path. In simple hill climbing, the first closer node is chosen, whereas in steepest-ascent hill climbing all successors are compared and the one closest to the solution is chosen.
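As an illustration of the steepest-ascent variant described above (this sketch is not part of the original work), a minimal Python example is given below; the one-dimensional objective function and the fixed step size are hypothetical choices made only for demonstration.

```python
import random

def steepest_ascent_hill_climb(f, x0, step=0.1, max_iter=1000):
    """Steepest-ascent hill climbing: at each iteration evaluate all
    neighbours of the current point and move to the best one, stopping
    when no neighbour improves the objective."""
    x = x0
    for _ in range(max_iter):
        neighbours = [x + step, x - step]
        best = max(neighbours, key=f)
        if f(best) <= f(x):          # no improving neighbour -> local optimum
            break
        x = best
    return x

# Example: maximize f(x) = -(x - 3)^2, whose optimum is at x = 3.
f = lambda x: -(x - 3.0) ** 2
print(steepest_ascent_hill_climb(f, x0=random.uniform(-10, 10)))
```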
1.3.7 Intelligence

Intelligence has been defined in many different ways, such as in terms of one's capacity for logical and abstract thought, understanding, self-awareness, communication, learning, emotional knowledge, memory, planning, creativity and problem solving. Some of its characteristics are adaptation, randomness, communication, feedback, exploration, and exploitation.
1.4 INTRODUCTION TO CLASSIC EVOLUTIONARY ALGORITHMS

There are various optimization algorithms that can produce an optimized design of MEMS. Table 1.1 gives the details of various classic evolutionary algorithms developed for optimization.

Table 1.1 A List of Evolutionary Algorithms

Swarm Intelligence Based Algorithms
  Algorithm                                      Author                   Reference
  Ant Colony Optimization (1992)                 Dorigo                   [13]
  Artificial Bee Colony (2005)                   Karaboga and Basturk     [16]
  Artificial Fish Swarm (2003)                   Li et al.                [23]
  Bacterial Foraging Optimization (2002)         Passino                  [25,26]
  Firefly Algorithm (2008)                       Yang                     [28]
  Particle Swarm Optimization (1995)             Kennedy and Eberhart     [42]

Bio-inspired (not SI based) Algorithms
  Genetic Algorithm (1970)                       John Holland             [50]
  Biogeography-Based Optimization (2006)         Simon                    [58]
  Differential Evolution (1996)                  Storn and Price          [66]
  Group Search Optimizer (2009)                  He et al.                [69]
  Shuffled Frog Leaping Algorithm (2003)         Eusuff and Lansey        [71]

Physics and Chemistry Based Algorithms
  Gravitational Search (2009)                    Rashedi et al.           [79]
  Harmony Search (2001)                          Geem et al.              [97]
  Simulated Annealing (1983)                     Kirkpatrick et al.       [111]

Other Algorithms
  Cultural Algorithm (2004)                      Reynolds                 [114]
  Estimation of Distribution (2002)              Larranaga                [117]
  Opposition Based Learning (2005)               Tizhoosh                 [124]
  Tabu Search (1986)                             Glover and McMillan      [130]
  Teaching Learning Based Optimization (2011)    Rao                      [134]

1.4.1 Ant Colony Optimization (ACO)

Go to the ant, you sluggard; consider its ways, and be wise!
-Proverbs 6:6 (adapted from [179])

ACO was initially proposed by Marco Dorigo in 1992 in his Ph.D. thesis [13]. Ant colony optimization is a way to solve optimization problems based on the way ants indirectly communicate directions to each other. It is a probabilistic technique for solving computational problems which can be reduced to finding good paths through graphs. Each ant tries to find a route between its nest and a good food source [14].
The behavior of each ant in nature:
(i) Wander randomly at first, laying down a pheromone trail.
(ii) If food is found, return to the nest laying down a pheromone trail.
(iii) If pheromone is found, with some increased probability follow the pheromone trail.
(iv) Once back at the nest, go out again in search of food.

However, pheromones evaporate over time, such that unless they are reinforced by ants, the pheromones will disappear. Applications of ACO include scheduling problems (project scheduling, job shop scheduling, open shop, agent based dynamic scheduling), routing problems (TSP, vehicle routing, connection-oriented and connectionless network routing), assignment problems (quadratic assignment problems, course timetabling, graph coloring), the sequential ordering problem, the shortest common supersequence problem, constraint satisfaction, classification rules, Bayesian networks, protein folding, protein-ligand docking, set problems and the device sizing problem in nanoelectronics physical design, machine learning, the dynamic problem of data network routing, shortest path problems where properties of the system such as node availability vary over time, continuous optimization and parallel processing implementations, digital image processing and classification problems in data mining [15].
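A minimal sketch of the basic Ant System on a small travelling salesman problem is given below to make the pheromone-laying and evaporation mechanism concrete. It is an illustrative Python example only, not code from the cited applications; the 4-city distance matrix and the parameter values (alpha, beta, evaporation rate rho) are assumptions chosen for demonstration.

```python
import random

def ant_colony_tsp(dist, n_ants=10, n_iter=100, alpha=1.0, beta=2.0,
                   rho=0.5, q=1.0):
    """Basic Ant System for the TSP: ants build tours probabilistically from
    pheromone (tau) and heuristic visibility (1/distance); pheromone then
    evaporates and is reinforced along good tours."""
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]            # initial pheromone level
    best_tour, best_len = None, float("inf")

    def tour_length(tour):
        return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

    for _ in range(n_iter):
        tours = []
        for _ in range(n_ants):
            start = random.randrange(n)
            tour, unvisited = [start], set(range(n)) - {start}
            while unvisited:                        # construct a tour
                i = tour[-1]
                weights = [(j, (tau[i][j] ** alpha) * ((1.0 / dist[i][j]) ** beta))
                           for j in unvisited]
                total = sum(w for _, w in weights)
                r, acc = random.uniform(0, total), 0.0
                for j, w in weights:                # roulette-wheel choice
                    acc += w
                    if acc >= r:
                        tour.append(j)
                        unvisited.remove(j)
                        break
            tours.append(tour)
        for i in range(n):                          # evaporation
            for j in range(n):
                tau[i][j] *= (1.0 - rho)
        for tour in tours:                          # reinforcement
            length = tour_length(tour)
            if length < best_len:
                best_tour, best_len = tour, length
            for i in range(n):
                a, b = tour[i], tour[(i + 1) % n]
                tau[a][b] += q / length
                tau[b][a] += q / length
    return best_tour, best_len

# Hypothetical 4-city symmetric distance matrix.
D = [[0, 2, 9, 10],
     [2, 0, 6, 4],
     [9, 6, 0, 3],
     [10, 4, 3, 0]]
print(ant_colony_tsp(D))
```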
1.4.2 Artificial Bee Colony (ABC) Algorithm

The ABC algorithm is a swarm-based meta-heuristic algorithm that was introduced by Karaboga in 2005 for optimizing numerical problems [16]. It
was inspired by the intelligent foraging behavior of honey bees. The model consists
of three essential components: employed and unemployed foraging bees, and food
sources. The first two components, the employed and unemployed foraging bees, search for rich food sources (the third component) close to their hive. The model also defines two leading modes of behavior which are necessary for self-organization and collective intelligence: recruitment of foragers to rich food sources, resulting in positive feedback, and abandonment of poor food sources by foragers, causing negative feedback. ABC was developed based on observing the behaviors of real bees
in finding nectar and sharing the information of food sources to the bees in the hive.
ABC algorithm is used to solve unconstrained and constrained optimization
problems, multidimensional and multimodal optimization problems [17,18].
Applications of ABC include decoder-encoder and 3-bit parity benchmark problems
[19], clustering [20], scheduling problems, resource-constrained project scheduling
problem [21], image segmentation, capacitated vehicle routing problem, WSNs,
assembly line balancing problem, solving reliability redundancy allocation problem,
training neural networks, XOR, pattern classification, p-center problem [22].
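The following Python sketch illustrates the employed, onlooker and scout bee phases described above on a simple benchmark (the sphere function). It is a minimal illustrative example, not the implementation used in this research; the colony size, abandonment limit and variable bounds are assumed values.

```python
import random

def abc_minimize(f, dim, bounds, n_food=10, limit=20, max_cycles=200):
    """Minimal ABC sketch: employed bees exploit food sources, onlookers
    choose sources in proportion to fitness, and scouts replace sources
    abandoned after `limit` failed improvement trials."""
    lo, hi = bounds
    foods = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_food)]
    costs = [f(x) for x in foods]
    trials = [0] * n_food

    def fitness(c):                      # standard ABC fitness transform
        return 1.0 / (1.0 + c) if c >= 0 else 1.0 + abs(c)

    def neighbour(i):
        """Perturb one random dimension towards/away from another source."""
        k = random.choice([j for j in range(n_food) if j != i])
        d = random.randrange(dim)
        phi = random.uniform(-1, 1)
        x = foods[i][:]
        x[d] = min(max(x[d] + phi * (x[d] - foods[k][d]), lo), hi)
        return x

    def greedy_update(i):
        x = neighbour(i)
        c = f(x)
        if c < costs[i]:
            foods[i], costs[i], trials[i] = x, c, 0
        else:
            trials[i] += 1

    for _ in range(max_cycles):
        for i in range(n_food):          # employed bee phase
            greedy_update(i)
        fits = [fitness(c) for c in costs]
        total = sum(fits)
        for _ in range(n_food):          # onlooker bee phase
            r, acc = random.uniform(0, total), 0.0
            for i, w in enumerate(fits):
                acc += w
                if acc >= r:
                    greedy_update(i)
                    break
        for i in range(n_food):          # scout bee phase
            if trials[i] > limit:
                foods[i] = [random.uniform(lo, hi) for _ in range(dim)]
                costs[i], trials[i] = f(foods[i]), 0

    best = min(range(n_food), key=lambda i: costs[i])
    return foods[best], costs[best]

# Example: minimize the sphere function in 4 dimensions.
sphere = lambda x: sum(v * v for v in x)
print(abc_minimize(sphere, dim=4, bounds=(-5.0, 5.0)))
```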
1.4.3 Artificial Fish Swarm Algorithm (AFSA)

AFSA is a novel method for searching for the global optimum, and is typically an application of behaviorism in artificial intelligence. AFSA is one of the swarm intelligence approaches that works based on population and stochastic search. Fish show very intelligent and social behavior. This algorithm is one of the
best approaches of the swarm intelligence method with considerable advantages like
high convergence speed, flexibility, error tolerance and high accuracy. Basic idea of
AFSA is to imitate fish behavior such as preying, swarming and following with local
search of fish individual for reaching the global optimum. It is a random and parallel
search algorithm [23]. Applications of AFSA include automated design, multi-robot
task scheduling, UCAV path planning, fault diagnosis in mine hoist, optimum steel
making charge plan, target area on simulation robots, function optimization, parameter estimation, combinatorial optimization, least squares support vector machine and geotechnical engineering problems [24].
1.4.4 Bacterial Foraging Optimization

BFOA was introduced by Passino in 2002. The bacterial foraging optimization algorithm (BFOA) has been widely accepted as a global optimization algorithm of current interest for distributed optimization and control. BFOA is a non-gradient, bio-inspired, self-organizing and efficient optimization technique. In BFOA, the social foraging behavior of Escherichia coli, commonly known as E. coli, is mimicked. Applying the group foraging strategy of E. coli bacteria to multi-optimal function optimization is the key idea of the algorithm. Here, natural selection tends to eliminate animals with poor foraging strategies and favor those having successful foraging strategies. The foraging strategy of E. coli can be explained by chemotaxis, swarming, reproduction, and elimination and dispersal [25, 26].
Applications of BFOA include global optimization, adaptive control,
harmonic estimation, optimum power system stabilizers, optimal power flow,
optimization over continuous surfaces, PID controller tuning, active power filter for
load optimization, optimizing power loss and voltage stability limits, fuzzy
controller construction/tuning, neural network training, job shop scheduling,
electromagnetics, stock market prediction, motor control, system identification,
temperature control, energy efficiency optimization for buildings and distributed
energy generation, highly nonlinear and nonconvex problem, inverse airfoil design,
transmission loss reduction, parameter estimation of a nonlinear system model (NSM) for heavy oil thermal cracking, evaluation of independent
components to work with mixed signals, solve constrained economic load dispatch
problems, application in the null steering of linear antenna array by controlling the
element amplitudes, applications in multi objective optimization [24,25 and 27].
1.4.5 The Firefly Algorithm (FA)

The firefly algorithm was introduced in the year 2007 by Xin-She Yang at Cambridge University [28]. It is based on the attraction of fireflies to one another. Attraction is based on the perceived brightness of a firefly, which decreases exponentially with distance. A firefly is attracted only to those fireflies that are brighter than it. The firefly algorithm can be described using the following idealized rules:
(i) All fireflies are unisex, so that one firefly will be attracted to other fireflies regardless of their sex.
(ii) Attractiveness is proportional to their brightness; thus for any two flashing fireflies, the less bright one will move towards the brighter one. If there is no firefly brighter than a particular firefly, it will move randomly. The brightness of a firefly is affected or determined by the landscape of the objective function. For a maximization problem, the brightness can simply be proportional to the value of the objective function.

Applications of FA include digital image compression with reduced computation time [29,30], feature selection [31], efficiently solving highly nonlinear, multimodal design problems [32,33], antenna design optimization [34], efficiently solving NP-hard scheduling problems [35], scheduling and the travelling salesman problem [36-38], clustering [39,40], training neural networks [41], wireless network design, dynamic market pricing, mobile robotics, image segmentation, real-time complex image analysis problems, real-time systems and heartbeat synchronization [24].
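A minimal Python sketch of the firefly movement rule is given below, using the commonly used attractiveness model beta = beta0*exp(-gamma*r^2). It is an illustrative example only; the sphere objective and the parameter values (beta0, gamma, alpha) are assumptions made for demonstration.

```python
import math
import random

def firefly_minimize(f, dim, bounds, n=15, beta0=1.0, gamma=1.0,
                     alpha=0.2, max_gen=100):
    """Minimal firefly algorithm sketch: each firefly moves towards every
    brighter (lower-cost) firefly with attractiveness beta0*exp(-gamma*r^2),
    plus a small random step."""
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    cost = [f(x) for x in pop]
    for _ in range(max_gen):
        for i in range(n):
            for j in range(n):
                if cost[j] < cost[i]:                # firefly j is brighter
                    r2 = sum((a - b) ** 2 for a, b in zip(pop[i], pop[j]))
                    beta = beta0 * math.exp(-gamma * r2)
                    for d in range(dim):
                        step = beta * (pop[j][d] - pop[i][d]) \
                               + alpha * random.uniform(-0.5, 0.5)
                        pop[i][d] = min(max(pop[i][d] + step, lo), hi)
                    cost[i] = f(pop[i])
    best = min(range(n), key=lambda i: cost[i])
    return pop[best], cost[best]

# Example: minimize the sphere function (brightness corresponds to lower cost).
sphere = lambda x: sum(v * v for v in x)
print(firefly_minimize(sphere, dim=3, bounds=(-5.0, 5.0)))
```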

1.4.6 Particle Swarm Optimization (PSO)

The particle swarm algorithm imitates human social behavior.
- James Kennedy and Russell Eberhart [42]
PSO is a population-based stochastic optimization algorithm used to find a solution to an optimization problem in a search space. It is a simple, computationally efficient optimization method based on a social-psychological model of social influence and social learning.
Inspired by the flocking and schooling patterns of birds and fish, Particle
Swarm Optimization (PSO) was invented by Russell Eberhart and James Kennedy
in 1995. Originally, these two started out developing computer software simulations
of birds flocking around food sources, and then later realized how well their
algorithms worked on optimization problems.
Particle Swarm Optimization might sound complicated, but it's really a
very simple algorithm. Over a number of iterations, a group of variables have their
values adjusted closer to the member whose value is closest to the target at any
given moment. Imagine a flock of birds circling over an area where they can smell a
hidden source of food. The one who is closest to the food chirps the loudest and the
other birds swing around in his direction. If any of the other circling birds comes
closer to the target than the first, it chirps louder and the others veer over toward
him. This tightening pattern continues until one of the birds happens upon the food.
It's an algorithm that's simple and easy to implement [43-46].
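The following Python sketch makes the velocity and position update underlying this description concrete. It is an illustrative example, not the code used in this work; the sphere objective and the parameter values (inertia weight w, acceleration coefficients c1 and c2) are assumptions chosen for demonstration.

```python
import random

def pso_minimize(f, dim, bounds, n_particles=30, w=0.7, c1=1.5, c2=1.5,
                 max_iter=200):
    """Minimal PSO sketch: each particle is pulled towards its own best
    position (pbest) and the swarm's best position (gbest)."""
    lo, hi = bounds
    x = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in x]
    pbest_cost = [f(p) for p in x]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]

    for _ in range(max_iter):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                v[i][d] = (w * v[i][d]
                           + c1 * r1 * (pbest[i][d] - x[i][d])
                           + c2 * r2 * (gbest[d] - x[i][d]))
                x[i][d] = min(max(x[i][d] + v[i][d], lo), hi)
            cost = f(x[i])
            if cost < pbest_cost[i]:                 # update personal best
                pbest[i], pbest_cost[i] = x[i][:], cost
                if cost < gbest_cost:                # update global best
                    gbest, gbest_cost = x[i][:], cost
    return gbest, gbest_cost

# Example: minimize the sphere function in 5 dimensions.
sphere = lambda x: sum(v * v for v in x)
print(pso_minimize(sphere, dim=5, bounds=(-10.0, 10.0)))
```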
PSO has been successfully applied in many areas: function optimization,
artificial neural network training, fuzzy system control, and other areas where GA
can be applied. The various application areas of PSO include power systems operation and control, hard combinatorial problems, job scheduling problems, vehicle routing problems, mobile networking, modeling of optimized parameters, batch process scheduling, multi-objective optimization problems, image processing and pattern recognition problems, multimodal biomedical image registration and the iterated prisoner's dilemma, classification of instances in multiclass databases, feature selection, web service composition, course composition, power system optimization problems, edge detection in noisy images, finding optimal machining parameters, the assembly line balancing problem in production and operations management, anomaly detection, color image segmentation, the sequential ordering problem, the constrained portfolio optimization problem, selective particle regeneration for data clustering, machinery fault detection, unit commitment computation, and signature verification [24, 47-49].
1.4.7 Genetic Algorithm (GA)

GAs are NOT function optimizers
- Kenneth De Jong (adapted from [179])
Genetic Algorithms in particular became popular through the work of John Holland in the early 1970s [50]. Genetic Algorithms are inspired by Darwin's theory of evolution: a solution to a problem solved by a Genetic Algorithm is evolved. Genetic Algorithms and evolutionary strategies mimic the principles of natural genetics and natural selection to construct search and optimization procedures. GAs belong to the larger class of EAs, which generate solutions to optimization problems using techniques inspired by natural evolution, such as inheritance, mutation, selection and crossover.

The algorithm starts with a set of solutions (represented by chromosomes) called a population. Solutions from one population are taken and used to form a new population. This is motivated by the hope that the new population will be better than the old one. Solutions which are selected to form new solutions (offspring) are selected according to their fitness: the more suitable they are, the more chances they have to reproduce. This is repeated until some condition is satisfied [51, 52].
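A minimal real-coded GA loop corresponding to the steps described above (selection according to fitness, crossover, mutation, repetition until a stopping condition) is sketched below in Python. It is illustrative only; the tournament selection, the sphere objective and the probability values are assumptions and not the operators used later in this thesis.

```python
import random

def ga_minimize(f, dim, bounds, pop_size=40, pc=0.9, pm=0.1, max_gen=200):
    """Minimal real-coded GA sketch: tournament selection, single-point
    crossover and uniform mutation, repeated until the generation limit."""
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]

    def tournament():
        a, b = random.sample(pop, 2)
        return a if f(a) < f(b) else b              # lower cost = fitter

    for _ in range(max_gen):
        new_pop = []
        while len(new_pop) < pop_size:
            p1, p2 = tournament()[:], tournament()[:]
            if random.random() < pc and dim > 1:    # single-point crossover
                cut = random.randrange(1, dim)
                p1, p2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            for child in (p1, p2):                  # uniform mutation
                for d in range(dim):
                    if random.random() < pm:
                        child[d] = random.uniform(lo, hi)
                new_pop.append(child)
        pop = new_pop[:pop_size]

    best = min(pop, key=f)
    return best, f(best)

# Example: minimize the sphere function in 4 dimensions.
sphere = lambda x: sum(v * v for v in x)
print(ga_minimize(sphere, dim=4, bounds=(-5.0, 5.0)))
```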
GA can even be faster in finding global maxima than conventional methods, in particular when derivatives provide misleading information. The enormous potential of GA lies elsewhere, in the optimization of non-differentiable or even discontinuous functions. Table 1.2 gives the details of various applications of Genetic Algorithms.
Table 1.2 Applications of Genetic Algorithm

S.No  Domain            Application types
1     Control           Gas pipeline, pole balancing, missile evasion, pursuit.
2     Design            Semiconductor layout, aircraft design, keyboard configuration, communication networks.
3     Scheduling        Manufacturing, facility scheduling, resource allocation.
4     Robotics          Trajectory planning.
5     Machine Learning  Designing neural networks, improving classification algorithms, classifier systems.
6     Others            Mechanical sector, electrical engineering, civil engineering, image processing, data mining, wireless networks, VLSI, production planning, air traffic problems, automobile, signal processing, communication networks, environmental engineering, bioinformatics, phylogenetics, computational science, economics, chemistry, manufacturing, mathematics, physics, pharmacometrics and other fields.

Reference: [53-57]

1.4.8 Biogeography Based Optimization (BBO)


BBO was introduced by Simon in the year 2006. BBO is based on the science and study of species movement from one habitat to another. BBO is an evolutionary algorithm that optimizes a function by stochastically and iteratively improving candidate solutions with regard to a given measure of quality or Fitness function (F). BBO belongs to the class of metaheuristics since it includes many variations, and since it does not make any assumptions about the problem it can therefore be applied to a wide class of problems [58]. It is modeled after the emigration and immigration of species between habitats to achieve information sharing. The two key concepts involved in the optimization are the habitat suitability index (HSI) and the suitability index variables (SIV). A habitat with a high HSI supports a large number of species, while a habitat with a low HSI tends to have a low number of species. Features that correlate with HSI include rainfall, vegetative diversity, topographic diversity, land area, temperature, and others. Here, SIV is an independent variable and HSI is the dependent variable. Species will immigrate to, and emigrate from, a habitat with a probability that is determined by the HSI. A habitat with a high HSI will tend to have a low immigration rate and a high emigration rate, while a habitat with a low HSI will tend to have a high immigration rate and a low emigration rate. Nature will optimize the number of species living in each habitat to achieve equilibrium. Figure 1.1 represents the BBO migration curves.

[Figure: immigration and emigration rates plotted against the number of species in a habitat, from zero up to Smax]
Figure 1.1 BBO Migration Curves (Adapted from [179])
In Figure 1.1, the immigration rate and the emigration rate are functions of the number of species in a habitat. Here, each habitat is a candidate solution to an optimization problem and each species is an independent variable of that candidate solution.

In BBO, each candidate solution shares its features with other candidate solutions, and this sharing process is analogous to migration in biogeography. If migration occurs for many iterations, the habitats become more suitable for their species, which corresponds to candidate solutions providing increasingly better solutions to the optimization problem.
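The migration mechanism described above is illustrated by the minimal Python sketch below, in which habitats are ranked by cost and SIVs migrate from habitats with high emigration rates into habitats with high immigration rates. This is an illustrative example only; the linear migration curves, the mutation rate and the sphere objective are assumed for demonstration.

```python
import random

def bbo_minimize(f, dim, bounds, n_habitats=20, pm=0.05, max_gen=200):
    """Minimal BBO sketch: habitats are ranked by cost; good habitats get
    high emigration and low immigration rates, and SIVs (decision
    variables) migrate from good habitats into poorer ones."""
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_habitats)]

    for _ in range(max_gen):
        pop.sort(key=f)                                  # best habitat first
        n = n_habitats
        mu = [(n - k) / n for k in range(n)]             # emigration rates
        lam = [1.0 - m for m in mu]                      # immigration rates
        new_pop = [h[:] for h in pop]
        for i in range(n):
            for d in range(dim):
                if random.random() < lam[i]:             # immigrate this SIV
                    # choose source habitat with probability ~ emigration rate
                    r, acc = random.uniform(0, sum(mu)), 0.0
                    for j in range(n):
                        acc += mu[j]
                        if acc >= r:
                            new_pop[i][d] = pop[j][d]
                            break
                if random.random() < pm:                 # mutation
                    new_pop[i][d] = random.uniform(lo, hi)
        new_pop[0] = pop[0][:]                           # keep the elite habitat
        pop = new_pop

    best = min(pop, key=f)
    return best, f(best)

# Example: minimize the sphere function.
sphere = lambda x: sum(v * v for v in x)
print(bbo_minimize(sphere, dim=4, bounds=(-5.0, 5.0)))
```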
Applications of BBO include the Economic Load Dispatch (ELD) problem [59] with generator constraints in power plants. BBO has also solved real-world application problems such as block-based motion estimation in video coding [60], color image segmentation [61], color image quantization [62], face recognition [63], feature selection [64], ECG signal classification, power system
optimization, ground water detection, and satellite image classification [65], general
benchmark functions, constrained optimization, the sensor selection problem for
aircraft engine health estimation, web based BBO graphical user interface, global
numerical optimization, and optimal meter placement for security constrained state
estimation [24].
1.4.9 Differential Evolution (DE)

Compared to several existing EAs, DE is much simpler and straightforward to implement. Simplicity of programming is important for practitioners from other fields, since they may not be experts in programming.
-S. Das, P. Suganthan, and C. Coello Coello (adapted from [179])
Global optimization is necessary in the fields of engineering, statistics and finance. But many practical problems have objective functions that are non-differentiable, non-continuous, non-linear, noisy, flat or multi-dimensional, or have many local minima, constraints or stochasticity. Such problems are difficult if not impossible to solve analytically. DE can be used to find approximate solutions to such problems. DE was developed by Rainer Storn and Kenneth V. Price around 1995 [66]. It is a stochastic, population-based optimization algorithm developed to optimize real-parameter, real-valued functions. It is a method that optimizes a problem by iteratively trying to improve a candidate solution with regard to a given measure of quality. Such methods are commonly known as metaheuristics as they make few or no assumptions about the problem being optimized and can search very large spaces of candidate solutions.
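A minimal sketch of the classical DE/rand/1/bin scheme (mutation, binomial crossover and greedy selection) is given below in Python; it is illustrative only, and the control parameters F and CR, the population size and the sphere objective are assumed values.

```python
import random

def de_minimize(f, dim, bounds, np_=30, F=0.8, CR=0.9, max_gen=300):
    """Minimal DE/rand/1/bin sketch: for each target vector, build a mutant
    from three other random vectors, apply binomial crossover, and keep
    the better of target and trial (greedy selection)."""
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(np_)]
    cost = [f(x) for x in pop]

    for _ in range(max_gen):
        for i in range(np_):
            a, b, c = random.sample([j for j in range(np_) if j != i], 3)
            j_rand = random.randrange(dim)          # ensure at least one gene changes
            trial = pop[i][:]
            for d in range(dim):
                if random.random() < CR or d == j_rand:
                    v = pop[a][d] + F * (pop[b][d] - pop[c][d])   # mutation
                    trial[d] = min(max(v, lo), hi)
            c_trial = f(trial)
            if c_trial <= cost[i]:                  # greedy selection
                pop[i], cost[i] = trial, c_trial

    best = min(range(np_), key=lambda i: cost[i])
    return pop[best], cost[best]

# Example: minimize the sphere function.
sphere = lambda x: sum(v * v for v in x)
print(de_minimize(sphere, dim=5, bounds=(-10.0, 10.0)))
```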
Applications of DE include the optimal operation of a multipurpose reservoir [67], design of digital filters [68], optimization of strategies for checkers,
maximization of profit in a model of a beef property, optimization of fermentation of
alcohol, unsupervised image classification, clustering, optimization of non-linear
functions, global optimization of non-linear chemical engineering processes and
multi-objective optimization [24].
1.4.10 Group Search Optimizer (GSO)

GSO was proposed by He et al. in 2009. GSO was inspired by animal behavior, especially animal searching (foraging) behavior. Animals normally search for food in groups, and they benefit from sharing information among themselves. The framework is mainly based on the producer-scrounger model, which assumes that group members search either for finding or for joining opportunities. Based on this framework, concepts from animal searching behavior are employed metaphorically to design optimum searching strategies for solving continuous optimization problems [69]. The population of GSO is called a group and each individual in the population is called a member. In the search space, each member knows its own position, its head angle and a head direction, which can be calculated from the head angle via a polar to Cartesian coordinate transformation.
A group consists of three types of members: producers, scroungers and rangers. Producers perform producing strategies, searching for food; scroungers perform scrounging strategies, joining resources uncovered by others; and rangers perform random walk motions and will be dispersed from their current positions.
Applications of GSO include truss structure design [70], benchmark functions applied to optimal power flow problems, mechanical design optimization problems, multi-objective optimization, optimal placement of FACTS devices, machine condition monitoring, and optimal location and capacity of distributed generation [24].
1.4.11 Shuffled Frog Leaping Algorithm (SFLA)

SFLA was introduced by Muzaffar Eusuff and Kevin Lansey in 2003.

SFLA is a memetic meta-heuristic and a population based cooperative search


metaphor inspired by natural memetics. The algorithm contains elements of local
search and global information exchange. The SFLA consists of a set of interactive
virtual population of frogs partitioned into different memeplexes. The virtual frogs
act as hosts or carriers of memes where meme is a unit of cultural evolution. The
algorithm performs simultaneously an independent local search in each memeplex.
The local search is completed using a particle swarm optimization-like method
adapted for discrete problems but emphasizing a local search. To ensure global
exploration, the virtual frogs are periodically shuffled and reorganized into new
memeplexes in a technique similar to that used in the shuffled complex evolution
algorithm. In addition, to provide the opportunity for random generation of
improved information, random virtual frogs are generated and substituted in the
population [71]. The steps in SFLA include the following: initial population, sorting
and distribution, memeplex evolution, shuffling and terminal condition.
Applications of SFLA include color image segmentation [72], solving the TSP [73], fuzzy controller design [74], mobile robot path planning [75], grid task scheduling [76], combined economic emission dispatch [77], job scheduling [78], automatic recognition of speech emotion, water distribution, the unit commitment problem, optimal viewpoint selection for volume rendering, multi-user detection in DS-CDMA, optimal reactive power flow, web document classification, classification rule mining, groundwater model calibration problems and multicast routing optimization [24].
1.4.12 Gravitational Search Algorithm (GSA)

GSA was introduced by E. Rashedi et al. in 2009. GSA is constructed based on the law of gravity and the notion of mass interactions. The GSA algorithm uses the theory of Newtonian physics and its searcher agents are a collection of masses. In GSA, we have an isolated system of masses. Using the gravitational force,
every mass in the system can see the situation of other masses. The gravitational
force is therefore a way of transferring information between different masses. In
GSA, agents are considered as objects and their performance is measured by their
masses.
All these objects attract each other by a gravity force, and this force
causes a movement of all objects globally towards the objects with heavier masses.
The heavy masses correspond to a good solution of the problem. The position of the
agent corresponds to a solution of the problem, and its mass is determined using a
Fitness function (F) [79].
With the lapse of time, masses are attracted by the heaviest mass. We hope that
this mass would present an optimum solution in the search space. The GSA could be
considered as an isolated system of masses. It is like a small artificial world of
masses obeying the Newtonian laws of gravitation and motion.
Applications of GSA include the following fields: NN training [80], robotics [81], optics [82], bioinformatics [83], software engineering [84],
networking [85], image processing [86], classification [87], clustering [88],
scheduling [89], business [90], computer engineering [91], civil engineering [92],
control engineering [93], mechanical engineering [94], power engineering [95],
telecommunication engineering [96].

1.4.13 Harmony Search (HS)

Harmony search (HS) was introduced in 2001 by Geem and further explained by Lee [97]. HS is based on musical processes. Each musician in a choir or band sounds a note within some allowable domain. If all the notes result in good harmony, the positive experience is saved in the choir's collective memory and the possibility of achieving continued good harmony is increased. In HS, a choir or band is analogous to a candidate problem solution, and a musician is analogous to an independent variable or candidate solution feature.
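The improvisation step described above is illustrated by the minimal Python sketch below, in which each new harmony is assembled note by note from the harmony memory, with occasional pitch adjustment, or at random. The harmony memory size, HMCR, PAR and bandwidth values, and the sphere objective, are assumptions made for demonstration.

```python
import random

def harmony_search(f, dim, bounds, hms=20, hmcr=0.9, par=0.3, bw=0.1,
                   max_improv=2000):
    """Minimal harmony search sketch: each new harmony is built note by
    note, either from the harmony memory (with optional pitch adjustment)
    or at random, and replaces the worst stored harmony if it is better."""
    lo, hi = bounds
    memory = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    costs = [f(h) for h in memory]

    for _ in range(max_improv):
        new = []
        for d in range(dim):
            if random.random() < hmcr:                  # take a note from memory
                note = random.choice(memory)[d]
                if random.random() < par:               # pitch adjustment
                    note += random.uniform(-bw, bw)
            else:                                       # random note
                note = random.uniform(lo, hi)
            new.append(min(max(note, lo), hi))
        c = f(new)
        worst = max(range(hms), key=lambda i: costs[i])
        if c < costs[worst]:                            # replace worst harmony
            memory[worst], costs[worst] = new, c

    best = min(range(hms), key=lambda i: costs[i])
    return memory[best], costs[best]

# Example: minimize the sphere function.
sphere = lambda x: sum(v * v for v in x)
print(harmony_search(sphere, dim=4, bounds=(-5.0, 5.0)))
```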
Applications of HS include school bus routing problems [98], the Sudoku puzzle [99], water distribution network design [100], satellite heat pipe design [101], structural design [102], ecological conservation [103], multiple dam operation [104], music composition [105], vehicle routing [106], university course timetabling [107], oceanic oil structure mooring [108], hydrologic parameter calibration [109], heat exchanger design [110], the six-hump camelback function, multimodal functions, ANN, web-based parameter calibration, robotics, internet searching, visual tracking, management science, project scheduling, medical physics and bioinformatics [24].
1.4.14 Simulated Annealing (SA)

We conjecture that the analogy with thermodynamics can offer new insight into optimization problems and can suggest efficient algorithms for solving them.
-Vlado Cerny 1985 (adapted from [179])
Simulated annealing was independently described by Scott Kirkpatrick,
C. Daniel Gelatt and Mario P. Vecchi in 1983 [111]. Simulated annealing is an
optimization algorithm that is based on the cooling and crystallizing behavior of
chemical substances. Simulated annealing is a single individual stochastic algorithm.
Simulated annealing mimics the cooling phenomenon of molten metals to constitute a search procedure: slowly cool down a heated solid, so that all particles arrange themselves in the ground energy state, and at each temperature wait until the solid reaches its thermal equilibrium. The probability of being in a state with energy E can be represented by Equation (1.1),

Pr{E = E} = (1 / Z(T)) exp(-E / (kB T))                                  (1.1)

where,
E     - Energy
T     - Temperature
kB    - Boltzmann constant
Z(T)  - Normalization factor

SA is a good solution method that is easily applicable to a large number of problems. Tuning of its parameters is relatively easy. The quality of the results of SA is good, although it takes a lot of time. Results are generally not reproducible, and SA can leave an optimal solution and not find it again. It is proven to find the optimum under certain conditions; one of these conditions is that it must run forever.
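A minimal Python sketch of the SA procedure is given below; worse moves are accepted with probability exp(-dE/T), which corresponds to the Boltzmann form of Equation (1.1) with the constant absorbed into the temperature. The geometric cooling schedule, the step size and the sphere objective are assumed values used only for illustration.

```python
import math
import random

def simulated_annealing(f, x0, step=0.5, t0=10.0, t_min=1e-3, alpha=0.95,
                        iters_per_temp=50):
    """Minimal SA sketch: random neighbour moves are always accepted if
    better, and accepted with probability exp(-dE/T) if worse, while the
    temperature T is slowly cooled (geometric schedule)."""
    x, fx = x0[:], f(x0)
    best, fbest = x[:], fx
    t = t0
    while t > t_min:
        for _ in range(iters_per_temp):
            cand = [v + random.uniform(-step, step) for v in x]
            fc = f(cand)
            de = fc - fx
            if de <= 0 or random.random() < math.exp(-de / t):
                x, fx = cand, fc                 # accept the move
                if fx < fbest:
                    best, fbest = x[:], fx
        t *= alpha                               # cooling schedule
    return best, fbest

# Example: minimize the sphere function starting from a random point.
sphere = lambda x: sum(v * v for v in x)
x0 = [random.uniform(-5, 5) for _ in range(4)]
print(simulated_annealing(sphere, x0))
```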
Its applications include combinatorial optimization problem with
permutation property [112], basic problems, engineering design, facilities layout,
image processing and code design in information theory, structural optimization,
fluid transportation systems, water distribution systems, chip floor planning, GPP
(Graph Partitioning Problem), GCP (Graph Coloring Problem), NPP (Number
Partitioning Problem), TSP (Travelling Salesman Problem) [113].
1.4.15 Cultural Algorithm (CA)

Culture optimizes cognition
- James Kennedy (adapted from [179])

Cultural algorithms (CA) are a branch of evolutionary computation


where there is a knowledge component that is called the belief space in addition to
the population component. It was derived from models of cultural evolution in
anthropology. It provides a framework to accumulate and communicate knowledge
so as to allow self-adaptation in an evolving model. Cultural algorithms can be seen as an extension to a conventional Genetic Algorithm. CA was introduced by Reynolds [114]. Figure 1.2 shows the cultural algorithm framework.

[Figure: the belief space and the population space, linked by an acceptance function (vote), an influence function (promote), inheritance, and reproduce/modify/variation and performance functions]
Figure 1.2 Cultural Algorithm Framework (Adapted from [135])


The cultural algorithm components consist of a belief space and a population space. The components interact through a communication protocol. Applications of cultural algorithms include various optimization problems, social simulation, constrained optimization in ammonia synthesis, real-world applications, cloud computing applications, multi-objective optimization, bioinformatics, ecosystem modeling and virtual worlds, and distributed computing applications [115,116].
1.4.16 Estimation of Distribution Algorithm (EDA)

That is what learning is. You suddenly understand something you've understood all your life, but in a new way.
- Doris Lessing (adapted from [179])

EDAs, or PMBGAs (Probabilistic Model Building Genetic Algorithms), are stochastic optimization methods that guide the search for the optimum by building and sampling explicit probabilistic models of promising candidate solutions. Optimization is viewed as a series of incremental updates of a probabilistic model, starting with the model encoding the uniform distribution over admissible solutions and ending with the model that generates only global optima [117]. EDA is a newer class of EA that does not use conventional crossover and mutation operators; instead, it estimates the distribution of the selected parent population and uses a sampling step for offspring generation.
Applications of EDA include power system controller design [118],
linear and combinatorial optimization [119], design of process sensor networks
[120], HW/SW partition [121], graph matching problems [122] and bioinformatics
[123].
1.4.17 Opposition Based Learning (OBL)
The concept of OBL was first introduced by Tizhoosh. The main idea

behind OBL is the simultaneous consideration of an estimate and its corresponding


opposite estimate in order to achieve a better approximation of the current candidate
solution [124]. OBL was first utilized to improve learning and back-propagation in neural networks. OBL includes three different kinds of concepts, namely the opposite point, the quasi-opposite point and the quasi-reflected point. OBL has been applied to many
evolutionary algorithms such as DE [125], BBO [126], PSO [127], ACO [128], and
GA [129].
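The opposite-point idea can be stated very compactly: for a variable x in the interval [lo, hi], its opposite is lo + hi - x. The Python sketch below shows a hedged example of opposition-based initialization, in which a random population and its opposites are generated and the better half is retained; the population size and the objective function are assumptions used only for illustration.

```python
import random

def opposite(x, lo, hi):
    """Opposite point of x in [lo, hi]: x_opp = lo + hi - x (per dimension)."""
    return [lo + hi - v for v in x]

def obl_initial_population(f, dim, bounds, n):
    """Opposition-based initialization sketch: generate n random candidates
    and their opposites, then keep the n best of the combined set."""
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    combined = pop + [opposite(x, lo, hi) for x in pop]
    combined.sort(key=f)                 # lower cost is better
    return combined[:n]

# Example: opposition-based initialization for the sphere function.
sphere = lambda x: sum(v * v for v in x)
print(obl_initial_population(sphere, dim=3, bounds=(-5.0, 5.0), n=5)[0])
```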
1.4.18 Tabu Search

Tabu search was introduced in 1986 by Glover and McMillan [130]. It is a meta-heuristic superimposed on another heuristic. The overall approach is to avoid entrainment in cycles by forbidding or penalizing moves which take the solution, in the next iteration, to points in the solution space previously visited. The method is still actively researched, and is continuing to evolve and improve. Tabu, or taboo, means forbidden, banned, or not allowed. Forbidden items, speech, or practices can be based on culture, religion, morality or politics. Tabu search is a higher-level heuristic procedure for solving optimization problems, designed to guide other methods to escape the trap of local optimality [131,132]. Tabu search has obtained optimal and near-optimal solutions to a wide variety of classical and practical problems in applications ranging from scheduling to telecommunications and from character recognition to neural networks.
Applications of Tabu search include employee scheduling, maximum
satisfiability problems, character recognition, space planning and architectural
development, telecommunications path assignment, probabilistic logic problems,
NN pattern recognition, quadratic assignment problems, network topology design,
computer channel balancing, TSP, graph coloring, graph partitioning, nonlinear
covering, maximum stable set problems, flow shop sequencing, design, location and
allocation, logic and artificial intelligence, technology, general combinatorial
optimization, graph optimization, routing, production, inventory and investment
[133].
1.4.19 Teaching-Learning-Based Optimization (TLBO)

TLBO was introduced in the year 2011 by R.V. Rao. TLBO is based on the teaching and learning process in a classroom. Teaching-learning is an important process in which every individual tries to learn something from other individuals to improve themselves. The algorithm simulates two fundamental modes of learning: through the teacher (teacher phase) and by interacting with other learners (learner phase). In the TLBO analogy, a group of students is considered as the population, different subjects as different design variables, the result scores as the fitness values of the problem, and the teacher as the best solution [134]. The applications of TLBO include clustering
[135], multi objective optimization [136], optimal power flow [137], discrete
optimization of truss structure [138], global optimization problems [139], economic
dispatch problems [140], design of IIR based digital hearing aids [141],
reconfiguration in radial distribution systems for loss reduction [142], optimal
scheduling [143].
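The teacher and learner phases described above are illustrated by the minimal Python sketch below; the teaching factor chosen randomly from {1, 2}, the class size and the sphere objective are assumed values, and the sketch is not the implementation used in this thesis.

```python
import random

def tlbo_minimize(f, dim, bounds, n_learners=20, max_gen=200):
    """Minimal TLBO sketch: in the teacher phase every learner moves
    towards the best learner (the teacher) relative to the class mean;
    in the learner phase pairs of learners learn from each other."""
    lo, hi = bounds
    clip = lambda v: min(max(v, lo), hi)
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_learners)]
    cost = [f(x) for x in pop]

    for _ in range(max_gen):
        teacher = pop[min(range(n_learners), key=lambda i: cost[i])]
        mean = [sum(x[d] for x in pop) / n_learners for d in range(dim)]
        for i in range(n_learners):
            # Teacher phase: move towards the teacher, away from the mean.
            tf = random.choice([1, 2])                  # teaching factor
            new = [clip(pop[i][d] + random.random()
                        * (teacher[d] - tf * mean[d])) for d in range(dim)]
            c = f(new)
            if c < cost[i]:
                pop[i], cost[i] = new, c
            # Learner phase: interact with a random other learner.
            j = random.choice([k for k in range(n_learners) if k != i])
            sign = 1.0 if cost[i] < cost[j] else -1.0
            new = [clip(pop[i][d] + sign * random.random()
                        * (pop[i][d] - pop[j][d])) for d in range(dim)]
            c = f(new)
            if c < cost[i]:
                pop[i], cost[i] = new, c

    best = min(range(n_learners), key=lambda i: cost[i])
    return pop[best], cost[best]

# Example: minimize the sphere function.
sphere = lambda x: sum(v * v for v in x)
print(tlbo_minimize(sphere, dim=4, bounds=(-5.0, 5.0)))
```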

1.5 RESEARCH MOTIVATION
The motivation of this research work is to optimize the parameters L1, L2, L3, ym and the Fitness function (F) or Die Area (DA) values of a MEMS accelerometer using the Artificial Bee Colony (ABC) algorithm combined with the Particle Swarm Optimization (PSO) algorithm. The main problem in MEMS is to obtain an optimal design. The significance of MEMS optimization with respect to performance, power utilization, and reliability is increasing. There are various optimization algorithms that can produce an optimized design of MEMS.
Some of the optimization algorithms are methods without derivatives (e.g. the Nelder-Mead simplex) and methods using derivatives (e.g. Conjugate Gradient or Quasi-Newton). These methods have various disadvantages: with a rising number of parameters the objective surfaces become more and more complicated and it is almost impossible to compute them.
An additional universal difficulty is the typically complex relationship between the structure parameters and the structure performance. Probably the most important drawback is finding a global optimum. FEM simulation is another important method to achieve an optimized design of MEMS. Here, a simplified spring-mass model is used to predict the device sensitivity. This method also will not give an efficient optimized design of MEMS. The spring constant of the beams should be further reduced by using more compliant flexure structures (e.g. a four-fold beam) to get a better optimized design. All the drawbacks mentioned above can be overcome by using a Genetic Algorithm. By using a Genetic Algorithm, a MEMS design can be optimized in a better way. But GA has some drawbacks concerning representation, population size and mutation rate, selection and deletion policies, crossover and mutation operators, and the termination criterion. The disadvantages of GA can be overcome by using the ABC algorithm.
In general, the cost and the Die Area (DA) of the accelerometer are
directly proportional. Thus, the cost associated with the design of accelerometer
increases slowly with the increase in the Die Area (DA) of the accelerometer. This behavior has led to the need for minimizing the Die Area (DA) along with the design parameter force (N) and thereby, the optimization of these parameters comes into effect.
The optimization algorithm is centered on objective function or Fitness function (F).
The proposed method uses a combination of Artificial Bee Colony (ABC)
optimization algorithm and Particle Swarm Optimization (PSO) algorithm to
optimize the design parameters of MEMS accelerometer.
1.6 RESEARCH OBJECTIVES

MEMS based accelerometers used for airbag deployment in the automobile industry as an alternative to conventional accelerometers provide advantages in terms of size, weight and cost. Therefore, the problem was identified as optimizing the Die Area (DA) of a MEMS based accelerometer, whose value can range between 90000 and 160000 µm2 and can be relaxed up to 240000 µm2 (adopted from [11]).
The research objectives were to:
(a) Suggest the best optimization algorithm to optimize the parameters of a MEMS based accelerometer.
(b) By applying the Genetic Algorithm we have obtained the optimal
parameters L1, L2, L3, ym and Fitness function (F) or Die Area (DA)
of a MEMS accelerometer.
(c) In the second method we have utilized the ABC optimization
algorithm for optimizing the parameters Z1, Z2, Z3, yk and Fitness
function (F) or Die Area (DA) of MEMS Accelerometer.
(d) The major intention of this work is to focus mainly on the optimization of the design parameter Fitness function (F) or Die Area (DA) along with a new parameter, force (N). For the optimization of these parameters we have incorporated two optimization algorithms, namely ABC and PSO. The primary optimization is done using ABC and
the resultant fitness solution from the ABC is further optimized

using the PSO algorithm. By combining the two algorithms we can get better optimized parameters, which help in the efficient design of the
MEMS accelerometer.
(e) The optimization of parameters of a MEMS accelerometer using
GA, ABC, and ABC with PSO is carried out in MATLAB 7.12
environment.
(f) Based on the three different types of optimization techniques the
obtained parameter values of a MEMS accelerometer are compared
and optimal parameters are reported.
1.7 LITERATURE REVIEW

The various works related to MEMS design optimization are presented here.
John K. Sakellaris [144] has presented the design of a vibration control mechanism for a beam with bonded piezoelectric sensors and actuators, and an application of the resulting smart structure for vibration suppression. The mechanical modeling of the structure and the subsequent finite element approximation were based on Hamilton's principle and the classical engineering theory of bending of beams, in connection with simplified modeling of piezoelectric sensors and actuators. Two control schemes, LQR and H2, were considered.
Aniket Singh et al. [145] have presented a study of MEMS RF Power
Sensors. An optimized sensor with low reflection loss parameters was identified.
Two designs, one with a cantilever bridge and another with a fixed bridge were
compared in terms of reflection and transmission losses. The designs were simulated
with different dielectric layers and varied thickness to get a series of results. A fairly
optimized design was realized with minimum reflection losses.
Xiaolin Chen et al. [146] have presented a study on multi-level
simulation that proved to be an efficient way to accelerate the design process and
improve the device performance. It can be used effectively to optimize the


gyroscope system. The device design was automatically generated based on mask
layout and fabrication process restrictions. Design verification was performed at the
device-level for detailed analysis and at the system-level for behaviour
characterization.
Chaitanya Chandrana et al. [147] have presented the structure for an
integrated transducer that used a non-conductive epoxy for mechanical backing of
the transducer and a thin film electrode for backside contact as part of the
integrated process for the transducer. The desired outcome was a single integrated
MEMS PVDF transducer chip, combining a high input impedance preamplifier and
focused transducer. It showed an approach for building integrated PVDF transducers
with minimal parasitics that could be widely used in clinical IVUS applications.
Adam Długosz [148] has presented the MOOPTIM algorithm that had
been used for multiobjective shape optimization of MEMS structures. The
effectiveness of MOOPTIM had been compared to NSGA-II on several benchmark
test problems. The obtained results showed the effectiveness of MOOPTIM for both
un-constrained and constrained optimization tasks. To reduce the time of the
optimization, parallel computation or approximate surrogate evaluations were used.
Rohit Pathak and Satyadhar Joshi [149] have presented a novel way to
approach reliability calculations and shown how properties at different levels and
types needed to be linked up in a multi scale analysis, where HPC can benefit
reliability calculations for MEMS devices. They have calculated various parameters
of different scale for a MEMS device and proceeded with the analysis of reliability
using the MATLAB Distributed Computing Toolbox.
Sujata N. Naduvinamani et al. [150] have demonstrated the design of a cantilever-based switch using CoSolve-EM, and it was observed that the pull-in voltage of the RF MEMS switch varies for different dimensions. They have recently developed
CoSolve-EM, a coupled solver for 3D quasi-static electro-mechanics. With the help
of CoventorWare, the switch was designed. Initially in the process editor,
required gap (i.e. between the beam and the substrate) was set to some desired value. Then the 2-D layout was drawn, and with the layout editor, the 3-D layout was drawn. Then the meshing of the structure was done. After meshing, the MemMech setup was done for CoSolve-EM and existing loads were removed. It was solved with CoSolve-EM and the mechanical deflections that take place solely due to the electrostatic force were observed. Then different voltages were applied and the corresponding deflections towards the substrate were noted down.
Prince Nagpal et al. [151] have described the capacitive pressure
sensor design for biomedical applications like blood pressure measurement. The
described pressure sensors provided high sensitivity even at low pressure. This
makes it suitable for biomedical applications. Effects of varying different parameters
on the pressure sensor performance have been studied. From the results, the pressure
sensors with compatible parameters can be selected for specific requirements. These
compact pressure sensors are made up of biocompatible materials and can be
implanted easily inside body to be used for RF telemetry purpose.
Shveta Jain et al. [152] have presented the Performance Study of RF
MEMS Ohmic Series Switch. The effect of different geometrical parameters was
studied and simulated using CoventorWare. It showed that varying the anchor length improves the contact force, thereby reducing the insertion loss. Thus the trade-off between the parameters and switch performance can be enhanced by
maintaining the parameters.
Zhang et al. [153] implemented a hierarchical MEMS synthesis and
optimization architecture, integrating an object-oriented data structure with SUGAR
and two types of optimization: Genetic Algorithms (GA) and local gradient-based
refinement. They noted that the MOGA approach needed a means for automating the
starting populations for MOGA that would enable a larger sampling of the solution
space of MEMS design.
Jain et al. [154] developed a MEMS transducer for an ultrasonic flaw
detection system. This experiment appears to be the first attempt to detect ultrasonic
signal by MEMS transducers in direct contact with solids.


Attoh-Okine et al. [155] proposed the potential applications of MEMS in


pavement engineering and highlighted some of the potential applications. They
highlighted both the advantages and disadvantages of MEMS within pavement
engineering applications. They also developed an experimental protocol for the use
of MEMS resonator sensors in monitoring micro cracking in concrete.
Kamalian et al. [156] extended Zhou's work to more advanced MEMS
problems and explored interactive evolutionary computation (IEC), integrating
human expertise into the synthesis loop to lever the strength of human expertise with
computational efficiencies.
Li and Antonsson [157] applied GAs to the mask-layout aspect of MEMS synthesis. Ma and Antonsson [158] also used GAs for automated mask-layout
synthesis, but extended their work to include process synthesis for MEMS. Given a
desired MEMS device topology and fabrication process, their tool could produce
mask layouts and associated fabrication steps for a particular MEMS device. GAs
was used to evolve an optimal mask layout given a user-defined shape. Li et al.
[159] also concentrated on the development of automatic fabrication process;
planning for MEMS devices for the later stages of product development.
Obadat et al. [160] developed full-scale MEMS-based biaxial strain
transducers for monitoring the fatigue state of railway tracks. A unique feature of
this work involves the combined use of finite element method (FEM) and MEMS.
FEM analysis was used to determine the critical fatigue locations where the MEMS
transducers were to be attached.
Mourad et al. [161] have designed a micro machined accelerometer that
relies on area variation capacitive sensing. This can be used in many applications to
enhance the efficacy and sensitivity of a capacitive accelerometer. The capacitive accelerometer based on area-variation sensing is regarded as a micro electro mechanical system (MEMS) that is existing and realizable. MATLAB software was used for simulation. Optimization of selected accelerometer parameters for a single sensing direction, with movable and fixed fingers, and the treatment of the system damping were carried out using MATLAB.
Hamid et al. [162] have suggested a capacitive micro machined MEMS
acceleration sensor that was immune to normal-to-plane shock. Compared with the springs of the structure, the suspended cantilevers were more beneficial because they limit the spring length for normal motion of the proof mass after it covers a certain distance. Thus, the springs were made stiffer to avoid normal movements and dangerous failures. Optimization of the consumed area, a significant parameter in determining the cost of on-chip device fabrication, was performed using a Genetic Algorithm.
M. S. Allen et al. [163] have addressed optimization under uncertainty (OUU) of the input waveform for a highly nonlinear, electrostatically actuated RF MEMS switch. The Monte Carlo simulation, which helps in predicting the maximum impact velocity experienced by an ensemble of switches subjected to an input waveform, utilizes a reduced-order model of the switch that incorporates an uncertainty model based on experimental data and expert opinion. The contact velocity for the ensemble of switches can be reduced considerably by optimizing the shape of the waveform. Compared with the unshaped waveform, the overall contact velocity was reduced by about 50% with the shape optimization. The optimization steps also help in predicting the amount of contact-velocity reduction produced by a change in the design of the switch.
1.8  ORGANIZATION OF THESIS

Chapter 1 discusses the overview of MEMS technology and its applications, and the MEMS accelerometer sensor for airbag deployment in the automobile industry. The various types of optimization techniques and classical evolutionary algorithms are reviewed. The research motivation and research objectives are also discussed, and a literature review of various works related to MEMS design optimization is presented.

The optimization of the parameters of the MEMS accelerometer using the Genetic Algorithm is discussed in Chapter 2. The optimization of the parameters of the MEMS accelerometer using the Artificial Bee Colony (ABC) algorithm is described in Chapter 3, where the Fitness function (F) or Die Area (DA) values obtained with the Genetic Algorithm and the ABC algorithm are also compared.
Chapter 4 discusses the optimization of the parameters of the MEMS accelerometer using the ABC algorithm combined with the Particle Swarm Optimization (PSO) algorithm. Chapter 5 compares the optimal parameters L1, L2, L3, ym and the Fitness function (F) or Die Area (DA) values of the accelerometer obtained using the GA, the ABC algorithm, and ABC with PSO. Finally, Chapter 6 presents the highlights of the work, conclusions, and suggestions for further research.
1.9  SUMMARY

The significance of MEMS technology and its applications is discussed. Among the various evolutionary optimization techniques, GA, ABC and PSO are widely used, and the advantages and disadvantages of these techniques are discussed. The motivation for optimizing the parameter values of the MEMS-based accelerometer and the objectives of the research work are presented. Literature reviews of various works related to MEMS design optimization are carried out, and the algorithms to be used for optimization are identified.

CHAPTER 2
MEMS ACCELEROMETER DESIGN OPTIMIZATION
USING GENETIC ALGORITHM

2.1  INTRODUCTION

This chapter explains the biological background of the Genetic Algorithm and the optimization of the parameters L1, L2, L3, ym and the Die Area (DA) or Fitness function (F) of the MEMS accelerometer using the Genetic Algorithm. The optimization of the parameters of the MEMS accelerometer is carried out in the MATLAB 7.12 environment, and the simulation results are tabulated and discussed. Based on the simulation results, the optimized Die Area (DA) is reported.
2.2  DESIGN PARAMETERS OF MEMS ACCELEROMETER

The design of the MEMS accelerometer (which employs a double folded beam structure) involves parameters such as beam length, beam width, beam depth, beam mass, proof mass, etc. The parameters and their specifications are represented as follows:

Beam Length   L = {L1, L2, L3, Lb}
Beam Width    W = {W1, W2, W3, Wb}
Beam Depth    B = {B1, B2, B3, Bm}
Beam Mass     a = {xa, ya}
Proof Mass    m = {xm, ym}

Among these parameters, L1, L2, L3 and ym should be optimized to produce the optimal design of the MEMS. The remaining parameters are assigned constant values as follows, using Equation (2.1) and Equation (2.2):

W1 = W2 = W3 = Wb = M        (2.1)
B1 = B2 = B3 = Bm = N        (2.2)

where M = N = 1.8 µm. Figure 2.1, the MEMS accelerometer diagram, specifies all the parameters that can be used to obtain an optimal design of the MEMS. It makes use of a folded beam structure. The structure is specified as follows, using Equation (2.3) to Equation (2.6):

Lb = O        (2.3)
ya = P        (2.4)
xa = Lb + 2W1        (2.5)
xm = Lb + 2L2 + 2W1 + 2W3        (2.6)

where O = 150 µm and P = 100 µm.

Figure 2.1 MEMS Accelerometer Diagram

L1 = Q        (2.7)
L2 = R        (2.8)
L3 = S        (2.9)
ym = T        (2.10)

To apply the Genetic Algorithm, L1, L2, L3 and ym are assigned to Q, R, S and T using Equation (2.7) to Equation (2.10), and random values are taken for the design variables within the following ranges: 20 µm ≤ Q ≤ 500 µm, 20 µm ≤ R ≤ 100 µm, 100 µm ≤ S ≤ 500 µm, 100 µm ≤ T ≤ 500 µm. These ranges have been chosen based on the minimum size constraints and maximum area constraints, in addition to general observation and intuition about the final design's optimal geometry [11]. The parameters are optimized using the Genetic Algorithm. The process of the Genetic Algorithm is explained below.
2.3  GENETIC ALGORITHM

"Genetic Algorithms are good at taking large, potentially huge search spaces and navigating them, looking for optimal combinations of things, solutions you might not otherwise find in a lifetime."
Salvatore Mangano (adapted from [164])

2.3.1  GA Definitions
A Genetic Algorithm is an iterative procedure maintaining a population of structures that are candidate solutions to specific domain challenges. During each temporal increment (called a generation), the structures in the current population are rated for their effectiveness as domain solutions, and on the basis of these evaluations, a new population of candidate solutions is formed using specific genetic operators such as reproduction, crossover, and mutation (adapted from [165]).
Genetic Algorithms combine survival of the fittest among string structures with a structured yet randomized information exchange to form a search algorithm with some of the innovative flair of human search. In every generation, a new set of artificial creatures (strings) is created using bits and pieces of the fittest of the old; an occasional new part is tried for good measure. While randomized, Genetic Algorithms are no simple random walk; they efficiently exploit historical information to speculate on new search points with expected improved performance [51].
2.3.2  Biological Background of GA

The science that deals with the mechanisms responsible for similarities and differences in species is called genetics. Genetics helps us to differentiate between heredity and variation, and seeks to account for resemblances and differences among organisms, their sources and their development. The concepts of GA are directly derived from natural evolution.
Genetic Algorithms are search and optimization techniques based on Darwin's Principle of Natural Selection [166-168].
Darwin's Principle of Natural Selection: IF there are organisms that reproduce, and IF offspring inherit traits from their progenitors, and IF there is variability of traits, and IF the environment cannot support all members of a growing population, THEN those members of the population with less-adaptive traits (determined by the environment) will die out, and THEN those members with more-adaptive traits (determined by the environment) will thrive; the result is the evolution of new species.
The basic idea of the Principle of Natural Selection: "Select the Best, Discard the Rest".
2.3.3  Main Terminologies Involved in the Biological Background of Species

The important terminologies involved in the biological background of species are the cell, chromosomes, reproduction and natural selection.
2.3.3.1  The cell

Every animal or human cell is a complex of many small factories that work together. At the center of all this is the cell nucleus, which contains the genetic information. Figure 2.2 and Figure 2.3 show the anatomy of the animal cell and of the animal nucleus [166].

Figure 2.2 Anatomy of Animal Cell (Adapted from [166])


Figure 2.3 Anatomy of Animal Nucleus (Adapted from [166])


2.3.3.2  Chromosomes

A chromosome is a set of genes; it contains the solution in the form of genes. The genetic information is stored in the chromosomes. Each chromosome is built of DNA (deoxyribonucleic acid). Chromosomes in humans occur in pairs; there are 23 pairs in the human cell. The chromosome is divided into several parts called genes. Genes code the properties of a species. The possible variants of a gene for one property are called alleles. Each gene has a unique position on the chromosome called its locus [166]. Figure 2.4 shows the model of a chromosome.

Figure 2.4 Model of Chromosome (Adapted from [166])


2.3.3.3  Reproduction

Genetic reproduction is divided into two types: mitosis and meiosis. Mitosis copies the same genetic information to the new offspring; there is no exchange of information. It is the normal way of growth of multicellular structures, such as organs. The mitosis and meiosis forms of reproduction are shown in Figure 2.5.

Figure 2.5 Mitosis and Meiosis form of Reproduction (Adapted from [166])

Meiosis is the basis of sexual reproduction. When meiotic division takes place, two gametes appear in the process. When reproduction occurs, these two gametes conjugate to form a zygote, which becomes the new individual. Hence the genetic information is shared between the parents in order to create new offspring.
During reproduction, errors occur, and because of these errors genetic variation exists. The most important errors are recombination (crossover) and mutation. Figure 2.6 and Figure 2.7 show the recombination (crossover) and mutation processes.

Figure 2.6 Recombination of Chromosomes (Adapted from [166])

Figure 2.7 Mutation of Chromosomes (Adapted from [166])


2.3.3.4  Natural selection

The origin of species is based on the preservation of favorable variations and the rejection of unfavorable variations. More individuals are born than can survive, so there is a continuous struggle for life. Individuals with an advantage have a greater chance to survive: survival of the fittest [166, 167].
2.3.4  Genetic Algorithm Life Cycle

The Genetic Algorithm process is explained through the GA cycle shown in Figure 2.8, which links the population (chromosomes), selection, reproduction (mating), the genetic operations and evaluation in a loop that produces each new generation.

Figure 2.8 Genetic Algorithm Cycle (Adapted from [166])


Reproduction is the process by which the genetic material in two or more
parents is combined to obtain one or more offspring. In the fitness evaluation step,
the individuals quality is assessed. Mutation is performed to one individual to
produce a new version of it where some of the original genetic material has been
randomly changed. Selection process helps to decide which individuals are to be
used for reproduction and mutation in order to produce new search points. The
Figure 2.9 (a) explains the basic GA operation and (b) explains the basic flow chart
of GA.

43

Initially the GA processes start with initialization of population and next


evaluate the fitness value of the given function. If optimal solution found iteration
process is stopped otherwise process continue via genetic operators (crossover,
mutation, selection) until an optimal solution is found.

Figure 2.9 (a) Basic GA Operation (b) Flowchart of Genetic Algorithm (Adapted from [166])

2.3.4.1  Simple GA

The following basic program outlines the simple Genetic Algorithm. The GA-based optimization of the parameters was carried out in the MATLAB 7.12 environment.


Simple_Genetic_Algorithm()
{
    Initialize the Population;
    Calculate Fitness Function;
    While (Fitness Value != Optimal Value)
    {
        Selection;   // natural selection, survival of the fittest
        Crossover;   // reproduction
        Mutation;    // random alteration of genes
        Calculate Fitness Function;
    }
}
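For illustration, a minimal Python sketch of the same loop is given below (the optimization in this work was actually implemented in MATLAB 7.12). The helper functions evaluate, select, crossover and mutate are placeholders for the operators described in Sections 2.3.5 to 2.3.7, and the population size (100) and generation count (50) follow the settings used later in this chapter.

import random

# Design-variable ranges for [L1, L2, L3, ym] in metres (Section 2.2)
BOUNDS = [(20e-6, 500e-6), (20e-6, 100e-6), (100e-6, 500e-6), (100e-6, 500e-6)]
POP_SIZE, GENERATIONS = 100, 50

def random_string():
    # One chromosome: random values for L1, L2, L3, ym within their ranges
    return [random.uniform(lo, hi) for lo, hi in BOUNDS]

def simple_genetic_algorithm(evaluate, select, crossover, mutate):
    population = [random_string() for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        fitness = [evaluate(ind) for ind in population]        # Die Area, Eq. (2.16)
        parents = select(population, fitness)                  # rank selection, Section 2.3.7
        children = crossover(parents)                          # blend crossover, Eqs. (2.17)-(2.18)
        population = mutate(parents, children, random_string)  # rebuild the next generation
    fitness = [evaluate(ind) for ind in population]
    best = min(range(POP_SIZE), key=lambda i: fitness[i])
    return population[best], fitness[best]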
2.3.4.2  Various steps of GA

Step 1: Represent the problem variable domain as a chromosome of a fixed length; choose the size of the chromosome population N, the crossover probability pc and the mutation probability pm.

Step 2: Define a Fitness function (F) to measure the performance, or fitness, of an individual chromosome in the problem domain. The Fitness function (F) establishes the basis for selecting chromosomes that will be mated during reproduction.

Step 3: Randomly generate an initial population of chromosomes of size N: x1, x2, ..., xN.

Step 4: Calculate the fitness of each individual chromosome: f(x1), f(x2), ..., f(xN).

Step 5: Select a pair of chromosomes for mating from the current population. Parent chromosomes are selected with a probability related to their fitness.

Step 6: Create a pair of offspring chromosomes by applying the genetic operators crossover and mutation.

Step 7: Place the created offspring chromosomes in the new population.

Step 8: Repeat Step 5 until the size of the new chromosome population becomes equal to the size of the initial population, N.

Step 9: Replace the initial (parent) chromosome population with the new (offspring) population.

Step 10: Go to Step 4, and repeat the process until the termination criterion is satisfied.
A chromosome (also sometimes called a genome) is a set of parameters which defines a proposed solution to the problem that the Genetic Algorithm is trying to solve. The chromosome is often represented as a simple string, although a wide variety of other data structures are also used.
In a Genetic Algorithm, a population of strings (called chromosomes), which encode candidate solutions (called individuals) to an optimization problem, evolves toward better solutions. The design parameters that should be optimized are specified by the following Equation (2.11):

K = {L1, L2, L3, ym}        (2.11)

The Genetic Algorithm begins with a population of 100 random strings (chromosomes), each within the ranges specified above. For each string, the Fitness function (F) is evaluated using the Die Area (DA) given by Equation (2.16). Based on the fitness, the four design parameter values are evaluated. The four design parameters L1, L2, L3, ym of the MEMS accelerometer should satisfy the conditions given in Equation (2.12) to Equation (2.14):

L1 ≤ Q        (2.12)
L2 ≤ R        (2.13)
L3 ≥ W2 + ya + L1        (2.14)

If a string does not satisfy the above conditions, its evaluated values are thrown out by assigning the Fitness function (F) a value of infinity. If it satisfies them, the values of the Fitness function (F) are sorted to obtain the 10 smallest values (parents).
A Fitness function (F) is a particular type of objective function that is used to summarize, as a single figure of merit, how close a given design solution is to achieving the set aims. The objective function or Fitness function (F) for the MEMS design is represented by Equation (2.15),

F = DA        (2.15)

where DA is the Die Area and F is the objective function. The Die Area (DA) can be evaluated by the following Equation (2.16),

DA = (xm + 2L3 + 2W2) ym        (2.16)

The Die Area (DA) value can be between 90,000 µm² and 160,000 µm², and it can be relaxed up to 240,000 µm² [11].
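As a concrete illustration of Equations (2.11) to (2.16), a short Python sketch of the Die Area fitness with the infinity penalty is given below (the thesis implementation was in MATLAB; the constraint form follows the reconstruction above, and the closing example uses the optimum values of Table 2.6 purely for illustration).

W1 = W2 = W3 = 1.8e-6          # beam widths, Eq. (2.1), in metres
Lb, ya = 150e-6, 100e-6        # Lb = O and ya = P, Eqs. (2.3)-(2.4)

def fitness_die_area(chromosome):
    """F = Die Area DA, Eq. (2.16); infeasible designs are penalized with infinity."""
    L1, L2, L3, ym = chromosome                 # K = {L1, L2, L3, ym}, Eq. (2.11)
    xm = Lb + 2 * L2 + 2 * W1 + 2 * W3          # proof mass x-dimension, Eq. (2.6)
    if L3 < W2 + ya + L1:                       # geometric condition, Eq. (2.14)
        return float("inf")
    return (xm + 2 * L3 + 2 * W2) * ym          # DA in m^2, Eq. (2.16)

# Example: the GA optimum of Table 2.6 gives roughly 1.12e-7 m^2 (about 112,000 µm²)
print(fitness_die_area([308.5e-6, 60.19e-6, 410.7e-6, 101.9e-6]))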

2.3.5  Crossover

A crossover is a genetic operator used to vary the programming of a chromosome or chromosomes from one generation to the next. It is analogous to reproduction and biological crossover, upon which Genetic Algorithms are based. A crossover is a process of taking more than one parent solution and producing a child solution from them. There are several methods for the selection of the chromosomes; the rank selection method can be used here. Crossover can be done by using the following Equation (2.17) and Equation (2.18),

ci = β(I) pi + (1 - β(I)) pi+1        (2.17)
ci+1 = β(II) pi + (1 - β(II)) pi+1        (2.18)

where ci and ci+1 are the children design values, pi and pi+1 are the parent design values, and β(I), β(II) are random values between zero and one. Thus the children design variables can be produced from the parent design variables, and the obtained values can be used, together with further random strings, to create a new generation or population [168].
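A short Python sketch of this blend crossover, using the reconstructed Equations (2.17) and (2.18), is given below; the variable names are illustrative only.

import random

def blend_crossover(parent_i, parent_next):
    """Produce two children from two parents, gene by gene, per Eqs. (2.17)-(2.18)."""
    child_a, child_b = [], []
    for p_i, p_next in zip(parent_i, parent_next):
        beta1 = random.random()    # beta(I) in [0, 1]
        beta2 = random.random()    # beta(II) in [0, 1]
        child_a.append(beta1 * p_i + (1.0 - beta1) * p_next)
        child_b.append(beta2 * p_i + (1.0 - beta2) * p_next)
    return child_a, child_b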
2.3.6  Mutation

Mutation is a genetic operator used to maintain genetic diversity from one generation of a population of chromosomes to the next. It is analogous to biological mutation. Mutation alters one or more gene values in a chromosome from its initial state; the solution may therefore change entirely from the previous solution, which helps the GA reach a better solution. The ten parents and ten children obtained by the above process are then combined with new random design variables to create the second generation of 100 strings. For these strings the Fitness function (F) is again calculated, checked against the conditions, sorted, and so forth. The process is repeated for 50 iterations to obtain the best solutions [168].
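The assembly of one new generation, as described above, could be sketched as follows in Python (the 100-string population size is the one used in this chapter; the helper random_string is assumed to generate a fresh random design).

def next_generation(parents, children, random_string, pop_size=100):
    """Combine the ten best parents and their ten children with new random strings."""
    new_population = list(parents) + list(children)    # 10 parents + 10 children
    while len(new_population) < pop_size:              # fill the remainder with random designs
        new_population.append(random_string())
    return new_population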
2.3.7  Selection

Selection is the stage of a Genetic Algorithm in which individual genomes are chosen from a population for later breeding (recombination or crossover). Ranking is a parent selection method based on the rank of the chromosomes. Each chromosome is ranked by its fitness value: Rank 1 is assigned to the worst, Rank 2 to the second worst, and so on, so that a better fitness value receives a higher rank and is chosen with higher probability. The sum of the ranks, Rsum, is calculated, and a parent is selected by generating a random number between 0 and Rsum. At the end of 50 iterations of the above process, the top ranked design variables give the values for L1, L2, L3, ym. Of these, the values that minimize the Die Area (DA) are chosen as the best optimal values for L1, L2, L3, ym [168].
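A small Python sketch of this rank-based selection is given below; since the Die Area is minimized, a smaller Fitness function (F) value is treated as better (an assumption consistent with the sorting described above).

import random

def rank_select(population, fitness):
    """Select one parent by rank: worst gets rank 1, best gets rank N (F minimized)."""
    order = sorted(range(len(population)), key=lambda i: fitness[i], reverse=True)
    rank_of = {idx: rank for rank, idx in enumerate(order, start=1)}  # worst -> 1
    r_sum = sum(rank_of.values())
    pick = random.uniform(0.0, r_sum)            # random number between 0 and Rsum
    running = 0.0
    for idx in order:
        running += rank_of[idx]
        if running >= pick:
            return population[idx]
    return population[order[-1]]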
2.4  RESULTS AND DISCUSSION

By applying the Genetic Algorithm, the optimal values for L1, L2, L3, ym and the Fitness function (F) or Die Area (DA) have been obtained. Fifty iterations were used to obtain the optimal design. The iterative genetic process minimized the Die Area (DA), represented by the objective or Fitness function (F), while satisfying the design criteria. The best performing design was saved for each successive starting population to converge on the optimum values. The results are displayed for every 10 iterations, and the optimum values obtained by the Genetic Algorithm are shown in Table 2.1 to Table 2.5. After 1,000 starting populations of 50 generations (iterations) have been computed, the best performing optimized design parameters are given as the output.
After 10 iterations using the GA method, the top 5 ranks are displayed in Table 2.1. Among these five ranked values, the rank selection method picks Rank 1, which produces the optimized parameter values L1 = 2.826 × 10^-4, L2 = 8.454 × 10^-5, L3 = 3.984 × 10^-4, ym = 1.006 × 10^-4 and F = 113309.10072 µm². The minimized objective function or Fitness function (F) at the 10th iteration is F = 113309.10072 µm². Figure 2.10 represents the convergence of the objective function or Fitness function (F) at the 10th iteration.

Table 2.1 Five Optimal Values Obtained in the 10th Iteration by GA

Rank     L1 (m)      L2 (m)      L3 (m)      ym (m)      F (µm²)
Rank 1   2.826e-04   8.454e-05   3.984e-04   1.006e-04   113309.10072
Rank 2   3.204e-04   4.655e-05   4.341e-04   1.011e-04   113378.55933
Rank 3   2.915e-04   8.160e-05   4.000e-04   1.010e-04   113508.45544
Rank 4   2.745e-04   8.714e-05   3.839e-04   1.033e-04   113899.40005
Rank 5   2.894e-04   6.075e-05   4.290e-04   1.001e-04   114141.35451

Figure 2.10 Minimization of Objective Function in the 10th Iteration


After 20 iterations using the GA method, the top 5 ranks are displayed in Table 2.2. Among these five ranked values, the rank selection method picks Rank 1, which produces the optimized parameter values L1 = 2.837 × 10^-4, L2 = 8.604 × 10^-5, L3 = 3.924 × 10^-4, ym = 1.010 × 10^-4 and F = 112903.54656 µm². The minimized objective function or Fitness function (F) at the 20th iteration is F = 112903.54656 µm². Figure 2.11 represents the convergence of the objective function or Fitness function (F) at the 20th iteration.

Table 2.2 Five Optimal Values Obtained in the 20th Iteration by GA

Rank     L1 (m)      L2 (m)      L3 (m)      ym (m)      F (µm²)
Rank 1   2.837e-04   8.604e-05   3.924e-04   1.010e-04   112903.54656
Rank 2   2.808e-04   9.071e-05   3.928e-04   1.004e-04   113180.96991
Rank 3   2.902e-04   9.063e-05   3.984e-04   1.000e-04   113920.92759
Rank 4   2.953e-04   6.791e-05   4.146e-04   1.014e-04   114188.22835
Rank 5   2.630e-04   9.141e-05   3.824e-04   1.031e-04   114232.92758

Figure 2.11 Minimization of Objective Function in the 20th Iteration


After 30 iterations using the GA method, the top 5 ranks are displayed in Table 2.3. Among these five ranked values, the rank selection method picks Rank 1, which produces the optimized parameter values L1 = 2.963 × 10^-4, L2 = 6.568 × 10^-5, L3 = 4.139 × 10^-4, ym = 1.001 × 10^-4 and F = 112072.49052 µm². The minimized objective function or Fitness function (F) at the 30th iteration is F = 112072.49052 µm². Figure 2.12 represents the convergence of the objective function or Fitness function (F) at the 30th iteration.

Table 2.3 Five Optimal Values Obtained in the 30th Iteration by GA

Rank     L1 (m)      L2 (m)      L3 (m)      ym (m)      F (µm²)
Rank 1   2.963e-04   6.568e-05   4.139e-04   1.001e-04   112072.49052
Rank 2   2.717e-04   9.759e-05   3.792e-04   1.014e-04   112972.01599
Rank 3   2.981e-04   6.259e-05   4.218e-04   1.002e-04   113230.17173
Rank 4   3.158e-04   5.038e-05   4.313e-04   1.012e-04   113824.23060
Rank 5   2.809e-04   8.586e-05   4.019e-04   1.003e-04   113917.35371

Figure 2.12 Minimization of Objective Function in the 30th Iteration


After 40 iterations using the GA method, the top 5 ranks are displayed in Table 2.4. Among these five ranked values, the rank selection method picks Rank 1, which produces the optimized parameter values L1 = 3.249 × 10^-4, L2 = 4.827 × 10^-5, L3 = 4.318 × 10^-4, ym = 1.006 × 10^-4 and F = 112754.92528 µm². The minimized objective function or Fitness function (F) at the 40th iteration is F = 112754.92528 µm². Figure 2.13 represents the convergence of the objective function or Fitness function (F) at the 40th iteration.

Table 2.4 Five Optimal Values Obtained in the 40th Iteration by GA

Rank     L1 (m)      L2 (m)      L3 (m)      ym (m)      F (µm²)
Rank 1   3.249e-04   4.827e-05   4.318e-04   1.006e-04   112754.92528
Rank 2   2.898e-04   8.056e-05   3.944e-04   1.016e-04   112892.53307
Rank 3   2.794e-04   9.760e-05   3.824e-04   1.008e-04   112922.54112
Rank 4   3.159e-04   5.661e-05   4.259e-04   1.005e-04   113114.38445
Rank 5   2.995e-04   6.010e-05   4.144e-04   1.019e-04   113129.17799

Figure 2.13 Minimization of Objective Function in the 40th Iteration


After 50 iterations using the GA method, the top 5 ranks are displayed in Table 2.5. Among these five ranked values, the rank selection method picks Rank 1, which produces the optimized parameter values L1 = 3.085 × 10^-4, L2 = 6.019 × 10^-5, L3 = 4.107 × 10^-4, ym = 1.019 × 10^-4 and F = 112297.95163 µm². The minimized objective function or Fitness function (F) at the 50th iteration is F = 112297.95163 µm². Figure 2.14 represents the convergence of the objective function or Fitness function (F) at the 50th iteration.

Table 2.5 Five Optimal Values Obtained in the 50th Iteration by GA

Rank     L1 (m)      L2 (m)      L3 (m)      ym (m)      F (µm²)
Rank 1   3.085e-04   6.019e-05   4.107e-04   1.019e-04   112297.95163
Rank 2   3.108e-04   5.896e-05   4.198e-04   1.009e-04   112864.70542
Rank 3   3.061e-04   5.763e-05   4.127e-04   1.028e-04   113203.34803
Rank 4   2.832e-04   6.901e-05   4.114e-04   1.012e-04   113470.35106
Rank 5   2.972e-04   6.460e-05   4.223e-04   1.001e-04   113525.79582

Figure 2.14 Minimization of Objective Function in the 50th Iteration


After 50 iterations, the GA-optimized parameter values are L1 = 3.085 × 10^-4, L2 = 6.019 × 10^-5, L3 = 4.107 × 10^-4, ym = 1.019 × 10^-4 and F = 112297.95163 µm², as displayed in Table 2.6. The final minimized objective function or Fitness function is F = 112297.95163 µm².

Table 2.6 Results of the GA Optimization Method

Property                 Optimum Design Value
L1                       3.085 × 10^-4 m = 308.5 µm
L2                       6.019 × 10^-5 m = 60.19 µm
L3                       4.107 × 10^-4 m = 410.7 µm
ym                       1.019 × 10^-4 m = 101.9 µm
Fitness function (F)     112297.95163 µm²

Clearly, the Genetic Algorithm succeeds in progressively finding designs with smaller design areas. Additionally, the Genetic Algorithm appears to converge to the best design asymptotically as the starting population count increases.
2.5  SUMMARY

In this chapter, the optimal design of the MEMS accelerometer has been discussed. The parameters have been optimized to obtain the minimized Die Area (DA) or Fitness function (F). A Genetic Algorithm developed in MATLAB 7.12 has been used for the optimization, and it has produced a minimized Die Area (DA) based on the Fitness function (F). Finally, the optimal parameter values are obtained by means of this algorithm. Thus, the simulation results have shown the optimal design with minimized Die Area (DA) and optimized parameters.


CHAPTER 3
DESIGN PARAMETER OPTIMIZATION BASED ON
ARTIFICIAL BEE COLONY (ABC) ALGORITHM
FOR MEMS ACCELEROMETERS

3.1  INTRODUCTION

The parameters Z1, Z2, Z3, yk and the Die Area (DA) or Fitness function (F) of a double folded beam MEMS accelerometer are considered for optimization. The biological background of the Artificial Bee Colony (ABC) algorithm is studied. The optimization of the parameters of the MEMS accelerometer using the ABC algorithm has been carried out in the MATLAB 7.12 environment. The simulation results are tabulated and discussed, and based on them the optimized Die Area (DA) is reported. The Die Area (DA) or Fitness function (F) values obtained using the ABC algorithm and the Genetic Algorithm are compared.
3.2  OPTIMIZED PARAMETER DESIGN OF MEMS ACCELEROMETER

There are various optimization algorithms that can produce an optimized design of the MEMS. The optimization algorithm mainly focuses on the objective function or Fitness function (F). In the proposed method, the Artificial Bee Colony optimization algorithm is utilized for optimizing the design of the MEMS accelerometer. The various parameters required for the accelerometer design include the beam length, beam width, beam depth, beam mass, proof mass, etc. Figure 3.1 shows a block diagram of the proposed design optimization of the MEMS accelerometer: the various design parameters of the accelerometer are selected, the optimization process is carried out on these parameters using the ABC algorithm, and the final optimized parameters are obtained as the final solution.

Figure 3.1 Block Diagram for Artificial Bee Colony (ABC) Optimization of MEMS Accelerometer
3.3  DESIGN PARAMETERS OF MEMS ACCELEROMETER

The MEMS accelerometer design in the proposed method is carried out with the aid of various design parameters such as beam length, beam width, beam depth, beam mass, proof mass, etc. The parameters and their specifications are represented as follows:

Beam Length   Z = {Z1, Z2, Z3, Zj}
Beam Width    H = {H1, H2, H3, Hj}
Beam Depth    D = {D1, D2, D3, Dj}
Beam Mass     m = {xm, ym}
Proof Mass    k = {xk, yk}

Among these parameters, Z1, Z2, Z3 and yk should be optimized to produce the optimal design of the MEMS. The remaining parameters are assigned constant values as follows, using Equation (3.1) and Equation (3.2):

H1 = H2 = H3 = Hj = A        (3.1)
D1 = D2 = D3 = Dj = B        (3.2)

where A = B = 1.8 µm. Figure 3.2, the MEMS accelerometer diagram, specifies all the parameters that can be used to obtain an optimal design of the MEMS. It makes use of a folded beam structure. The structure is specified as follows, using Equation (3.3) to Equation (3.6):

Figure 3.2 MEMS Accelerometer with Design Parameters

The parameter values are specified as follows:

Zj = P        (3.3)
ym = Q        (3.4)
xm = Zj + 2H1        (3.5)
xk = Hj + 2Z2 + 2H1 + 2H3        (3.6)

where P = 150 µm and Q = 100 µm. The other parameters Z1, Z2, Z3 and yk are assigned to R, S, T and U using Equation (3.7) to Equation (3.10):

Z1 = R        (3.7)
Z2 = S        (3.8)
Z3 = T        (3.9)
yk = U        (3.10)

Random values are taken for the design variables within the following ranges: 20 µm ≤ R ≤ 500 µm, 20 µm ≤ S ≤ 100 µm, 100 µm ≤ T ≤ 500 µm, 100 µm ≤ U ≤ 500 µm [11]. The parameters are optimized using the ABC algorithm. The process of the ABC algorithm is explained below.

3.4  ABC ALGORITHM FOR OPTIMIZATION OF DESIGN PARAMETERS

This section explains the basic working concept of the Artificial Bee Colony (ABC) algorithm and the different phases of the ABC algorithm. A simple Artificial Bee Colony algorithm is given as an example.

3.4.1  Basic Behavior Characteristics of Foragers

The basic behavioral characteristics of foragers can be understood from Figure 3.3, which shows the behavior of honey bees foraging for nectar. In Figure 3.3, we assume that there are two discovered food sources, A and B. At the very beginning, a potential forager will start as an unemployed forager, and that forager bee will have no knowledge about the food sources around the nest. There are two possible options for such a bee:

Figure 3.3 Behavior of Honeybee Foraging for Nectar (Adapted from [169])
(Legend: A, B = discovered food sources; UF = uncommitted follower; EF1 = sharing information; EF2 = continue work alone; the onlooker and scout roles are also marked.)

(i) It can be a scout and start searching around the nest spontaneously for food due to some internal motivation or a possible external clue (S in Figure 3.3).
(ii) It can be a recruit after watching the waggle dances and start searching for a food source (R in Figure 3.3).

After finding the food source, the bee uses its own capability to memorize the location and then immediately starts exploiting it; hence, the bee becomes an employed forager. The foraging bee takes a load of nectar from the source and returns to the hive, unloading the nectar to a food store. After unloading the food, the bee has the following options:
(i) It might become an uncommitted follower after abandoning the food source (UF).
(ii) It might dance and then recruit nest mates before returning to the same food source (EF1).
(iii) It might continue to forage at the food source without recruiting other bees (EF2).

It is important to note that not all bees start foraging simultaneously. Experiments have confirmed that new bees begin foraging at a rate proportional to the difference between the eventual total number of bees and the number of bees presently foraging [169].
Artificial Bee Colony (ABC) is a novel optimization algorithm inspired by the natural behavior of honey bees in their search process for the best food sources [170].
In the ABC algorithm, the colony of artificial bees contains three groups: employed bees, onlookers and scouts [171, 172]. At the initialization step, a set of food source positions is randomly produced and the values of the control parameters of the algorithm are assigned [173]. The nectar amount retrievable from a food source corresponds to the quality of the solution represented by that food source, so the nectar amounts of the food sources existing at the initial positions are determined [174].
A bee waiting in the dance area to obtain information about food sources is called an onlooker, a bee going to a food source is called an employed bee, and a bee carrying out a random search is called a scout [175]. The goal of the bees in the ABC model is to find the best solution. The position of a food source represents a possible solution to the optimization problem, and the nectar amount of a food source corresponds to the quality (fitness) of the associated solution [176].
After sharing its information with the onlookers, every employed bee goes to the food source area it visited in the previous cycle, since that food source exists in her memory, and then chooses a new food source by means of visual information in the neighborhood of the one in her memory and evaluates its nectar amount [169, 177]. At the third stage, an onlooker prefers a food source area depending on the nectar information distributed by the employed bees in the dance area [179]. When a source is abandoned, the employed bee becomes a scout and starts searching for a new source in the vicinity of the hive.
In the ABC algorithm, the possible solutions of the optimization problem are denoted by the positions of food sources, and the fitness of a solution is given by the nectar quantity of the corresponding food source. The number of solutions in the population is equal to the number of employed bees or onlooker bees. In the first cycle, ABC produces a randomly distributed initial population of solutions.
After initialization, the population of solutions is subjected to repeated cycles of the search process of the employed bees, the onlooker bees and the scout bees. An employed bee generates an alteration of the solution in her memory depending on local knowledge and checks the nectar quantity of the new solution. If the nectar quantity of the new solution is better than that of the earlier one, the bee memorizes the new position and forgets the old one; otherwise she keeps the earlier position in her memory. The employed bees, after completing their search process, share the nectar and position knowledge of the food sources with the onlooker bees. The onlooker bees then evaluate the knowledge from the employed bees and select a food source with a probability proportional to its quality. If the solution is satisfactory, it is memorized and the process stops; if not, the process is repeated until a new solution satisfies the eligibility criteria. The working process of the ABC algorithm is based on the fitness value of the solution.
Figure 3.4 shows the overall process that is carried out in the ABC algorithm. The first stage is the initialization of the population, which is followed by the fitness calculation. The fitness calculation procedure is carried out for the employed bees as well as the onlooker bees. The algorithm is terminated only when the termination criteria are met. Using the ABC algorithm, the design parameters are adjusted and an improved solution is obtained. The simple ABC algorithm is explained below.
3.4.2  Simple ABC Algorithm (adapted from [179])

The following listing outlines an artificial bee colony (ABC) algorithm for optimizing an n-dimensional function f(x), where xi is the i-th candidate solution (adapted from [179]).

N: population size
Initialize the positive integer L, which is the stagnation limit
Initialize the forager population size Pf < N
Initialize the onlooker population size Po = N - Pf
Initialize a random population of foragers xi for i = 1, ..., Pf
Initialize the forager trial counters T(xi) = 0 for i = 1, ..., Pf
while not (termination criterion)
    Forager Bees
    For each forager xi, i = 1, ..., Pf
        k <- random integer in [1, N] such that k != i
        s <- random integer in [1, n]
        r <- U[-1, 1]
        vi(s) <- xi(s) + r (xi(s) - xk(s))
        If f(vi) is better than f(xi) then
            xi <- vi
            T(xi) <- 0
        else
            T(xi) <- T(xi) + 1
        End if
    Next forager
    Onlooker Bees
    For each onlooker vi, i = 1, ..., Po
        Select a forager xj, where Pr(xj) is proportional to fitness(xj), for j = 1, ..., Pf
        k <- random integer in [1, Pf] such that k != j
        s <- random integer in [1, n]
        r <- U[-1, 1]
        vi(s) <- xj(s) + r (xj(s) - xk(s))
        If f(vi) is better than f(xj) then
            xj <- vi
            T(xj) <- 0
        else
            T(xj) <- T(xj) + 1
        End if
    Next onlooker
    Scout Bees
    For each forager xi, i = 1, ..., Pf
        If T(xi) > L then
            xi <- randomly-generated individual
            T(xi) <- 0
        End if
    Next forager
Next generation

In this work, the positions of the employed bees are first initialized by generating new solutions; the randomly generated design parameter values serve as the initial food sources. Therefore Xi, i = 1, 2, 3, ..., N, is an initial food source (solution), where Xi is a D-dimensional vector. After finding the initial food sources, the Fitness function (F) is calculated for each new food source (new solution). The best Fitness function (F) value is obtained through Equation (3.11).
The objective function or Fitness function (F) for the MEMS design is represented by Equation (3.11),

F = DA        (3.11)

where DA is the Die Area. The Die Area (DA) can be evaluated by the following Equation (3.12),

DA = (xm + 2Z2 + (Z3 - Z1) + 2H2) ym        (3.12)

The Die Area (DA) value can be between 90,000 µm² and 160,000 µm², and it can be relaxed up to 240,000 µm². The food source with the best fitness value is considered as the best food source, and keeping this Fitness function (F) as the initial stage, the search process using the employed bees, onlooker bees and scouts is started. The initial stage of the Fitness function (F) is thus calculated.

Figure 3.4 Optimization Process in ABC


3.4.3  Employed Bees

The employed bee searches the neighborhood of its current food source (solution) to find a new food source (new solution) using Equation (3.13),

Cf = Xf + φf (Xi - Xl)        (3.13)

where i and l are arbitrarily selected indices, Cf is the candidate (new) food source produced from Xf, and φf is a random number between -1 and 1. After creating the new solution (food source), its quality is determined and a greedy choice process is applied. If the quality of the new food source (solution) is better than that of the existing position, the employed bee abandons the old position and moves towards the new solution (food source); that is, if the fitness of the new solution (food source) is equal to or better than that of Xi, the new solution takes the place of Xi in the population and becomes a new solution.
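A compact Python sketch of the employed-bee neighborhood search and greedy replacement in Equation (3.13) is given below; the index names follow the reconstruction above, and the fitness function is assumed to be the Die Area to be minimized.

import random

def employed_bee_step(population, fitness_fn, i):
    """Generate a neighbor of solution i per Eq. (3.13) and keep it if it is no worse."""
    dims = len(population[i])
    l = random.choice([j for j in range(len(population)) if j != i])   # random partner X_l
    f = random.randrange(dims)                                         # random dimension f
    phi = random.uniform(-1.0, 1.0)                                    # phi in [-1, 1]
    candidate = list(population[i])
    candidate[f] = population[i][f] + phi * (population[i][f] - population[l][f])
    if fitness_fn(candidate) <= fitness_fn(population[i]):             # greedy choice
        population[i] = candidate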


3.4.4  Onlooker Bees

The onlooker bees select a food source by evaluating the knowledge obtained from all of the employed bees. The probability Pf of choosing a solution (food source) is evaluated by Equation (3.14),

Pf = Ff / Σ(f=1 to n) Ff        (3.14)

where Ff is the fitness value of the solution (food source) Xi. After choosing a food source (solution), the onlooker bee generates a new food source using Equation (3.13), and a greedy selection is applied, the same as in the case of the employed bees. If the food source associated with a solution cannot be improved by these trials, that food source is considered abandoned, and the employed bee related to that solution (food source) becomes a scout. The scout randomly produces a new food source; in the proposed method, the improved food source (solution) is obtained before the scout process.
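The onlooker's probabilistic choice in Equation (3.14) can be sketched in Python as follows; because the Die Area F is minimized in this work, the reciprocal 1/F is used here as the nectar amount (this inversion is an assumption made only for the sketch).

import random

def onlooker_choice(population, fitness_fn):
    """Pick the index of a food source with probability proportional to its quality."""
    nectar = [1.0 / fitness_fn(x) for x in population]      # better (smaller F) -> more nectar
    total = sum(nectar)
    weights = [a / total for a in nectar]                   # Eq. (3.14) form: F_f / sum(F_f)
    return random.choices(range(len(population)), weights=weights, k=1)[0]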
3.5  RESULTS AND DISCUSSION

By applying the ABC algorithm, the optimal values for Z1, Z2, Z3, yk and the Fitness function (F) or Die Area (DA) have been obtained. One thousand iterations were used to obtain the optimal design. The optimization process minimized the Die Area (DA), represented by the objective or Fitness function (F), while satisfying the design criteria. The best performing design was saved for each successive starting population to converge on the optimum values.
The results are displayed every 100 iterations up to 500 iterations, and the optimum values obtained by the ABC algorithm are given in Table 3.1 to Table 3.5. After 1000 iterations have been computed, the best performing optimized design parameters are given as the output. The Fitness function (F) values obtained using ABC and GA are compared, and the optimized Fitness function (F) is reported.
After 100 iterations using the Artificial Bee Colony algorithm, the top 5 ranks are displayed in Table 3.1. Among these five ranked values, the rank selection method picks Rank 1, which produces the optimized parameter values Z1 = 2.814 × 10^-5, Z2 = 5.817 × 10^-5, Z3 = 4.746 × 10^-4, yk = 1.009 × 10^-4 and F = 111739.81521 µm². The minimized objective function or Fitness function (F) at the 100th iteration is F = 111739.81521 µm². Figure 3.5 represents the convergence of the objective function or Fitness function (F) at the 100th iteration.

Table 3.1 Five Optimal Values Obtained in the 100th Iteration by ABC Algorithm

Rank     Z1 (m)      Z2 (m)      Z3 (m)      yk (m)      F (µm²)
Rank 1   2.814e-05   5.817e-05   4.746e-04   1.009e-04   111739.81521
Rank 2   2.592e-05   6.620e-05   4.654e-04   1.004e-04   111820.19521
Rank 3   2.592e-05   6.620e-05   4.654e-04   1.004e-04   111913.99414
Rank 4   2.592e-05   6.620e-05   4.654e-04   1.004e-04   111913.99414
Rank 5   2.592e-05   6.620e-05   4.654e-04   1.004e-04   111913.99414

Figure 3.5 Minimization of Objective Function in the 100th Iteration

After 200 iterations using the Artificial Bee Colony algorithm, the top 5 ranks are displayed in Table 3.2. Among these five ranked values, the rank selection method picks Rank 1, which produces the optimized parameter values Z1 = 2.100 × 10^-5, Z2 = 5.446 × 10^-5, Z3 = 4.897 × 10^-4, yk = 1.010 × 10^-4 and F = 111913.99414 µm². The minimized objective function or Fitness function (F) at the 200th iteration is F = 111913.99414 µm². Figure 3.6 represents the convergence of the objective function or Fitness function (F) at the 200th iteration.

Table 3.2 Five Optimal Values Obtained in the 200th Iteration by ABC Algorithm

Rank     Z1 (m)      Z2 (m)      Z3 (m)      yk (m)      F (µm²)
Rank 1   2.100e-05   5.446e-05   4.897e-04   1.010e-04   111913.99414
Rank 2   2.100e-05   5.446e-05   4.897e-04   1.010e-04   111979.77470
Rank 3   2.100e-05   5.446e-05   4.897e-04   1.010e-04   111979.77470
Rank 4   2.100e-05   5.446e-05   4.897e-04   1.010e-04   111979.77470
Rank 5   2.315e-05   6.079e-05   4.800e-04   1.007e-04   112358.53887

Figure 3.6 Minimization of Objective Function in the 200th Iteration


After 300 iterations using the Artificial Bee Colony algorithm, the top 5 ranks are displayed in Table 3.3. Among these five ranked values, the rank selection method picks Rank 1, which produces the optimized parameter values Z1 = 2.100 × 10^-5, Z2 = 5.446 × 10^-5, Z3 = 4.897 × 10^-4, yk = 1.010 × 10^-4 and F = 110409.08721 µm². The minimized objective function or Fitness function (F) at the 300th iteration is F = 110409.08721 µm². Figure 3.7 represents the convergence of the objective function or Fitness function (F) at the 300th iteration.

Table 3.3 Five Optimal Values Obtained in the 300th Iteration by ABC Algorithm

Rank     Z1 (m)      Z2 (m)      Z3 (m)      yk (m)      F (µm²)
Rank 1   2.100e-05   5.446e-05   4.897e-04   1.010e-04   110409.08721
Rank 2   2.100e-05   5.446e-05   4.897e-04   1.010e-04   110409.08721
Rank 3   2.150e-05   5.476e-05   4.832e-04   1.052e-04   110809.08721
Rank 4   2.172e-05   5.482e-05   4.869e-04   1.066e-04   110909.08721
Rank 5   2.315e-05   6.079e-05   4.800e-04   1.007e-04   111158.53887

Figure 3.7 Minimization of Objective Function in the 300th Iteration


After 400 iterations using the Artificial Bee Colony algorithm, the top 5 ranks are displayed in Table 3.4. Among these five ranked values, the rank selection method picks Rank 1, which produces the optimized parameter values Z1 = 2.283 × 10^-5, Z2 = 5.174 × 10^-5, Z3 = 4.805 × 10^-4, yk = 1.019 × 10^-4 and F = 111418.71974 µm². The minimized objective function or Fitness function (F) at the 400th iteration is F = 111418.71974 µm². Figure 3.8 represents the convergence of the objective function or Fitness function (F) at the 400th iteration.

Table 3.4 Five Optimal Values Obtained in the 400th Iteration by ABC Algorithm

Rank     Z1 (m)      Z2 (m)      Z3 (m)      yk (m)      F (µm²)
Rank 1   2.283e-05   5.174e-05   4.805e-04   1.019e-04   111418.71974
Rank 2   2.500e-05   5.777e-05   4.727e-04   1.017e-04   111751.39832
Rank 3   2.500e-05   5.777e-05   4.727e-04   1.017e-04   111751.39832
Rank 4   2.500e-05   5.777e-05   4.727e-04   1.017e-04   111751.39832
Rank 5   3.786e-05   5.071e-05   4.866e-04   1.003e-04   112075.01720

Figure 3.8 Minimization of Objective Function in the 400th Iteration


After 500 iterations using the Artificial Bee Colony algorithm, the top 5 ranks are displayed in Table 3.5. Among these five ranked values, the rank selection method picks Rank 1, which produces the optimized parameter values Z1 = 2.602 × 10^-5, Z2 = 5.274 × 10^-5, Z3 = 4.625 × 10^-4, yk = 1.019 × 10^-4 and F = 115826.60541 µm². The minimized objective function or Fitness function (F) at the 500th iteration is F = 115826.60541 µm². Figure 3.9 represents the convergence of the objective function or Fitness function (F) at the 500th iteration.

Table 3.5 Five Optimal Values Obtained in the 500th Iteration by ABC Algorithm

Rank     Z1 (m)      Z2 (m)      Z3 (m)      yk (m)      F (µm²)
Rank 1   2.602e-05   5.274e-05   4.625e-04   1.019e-04   115826.60541
Rank 2   2.602e-05   5.274e-05   4.625e-04   1.019e-04   115826.60541
Rank 3   2.602e-05   5.274e-05   4.625e-04   1.019e-04   115826.60541
Rank 4   2.500e-05   5.777e-05   4.732e-04   1.017e-04   120826.60541
Rank 5   3.486e-05   5.071e-05   4.825e-04   1.003e-04   128075.01720

Figure 3.9 Minimization of Objective Function in the 500th Iteration


After 1000 iterations, the ABC-optimized parameter values are Z1 = 2.364 × 10^-5, Z2 = 5.356 × 10^-5, Z3 = 4.938 × 10^-4, yk = 1.015 × 10^-4 and F = 110969.1602 µm², as displayed in Table 3.6. The final minimized objective function or Fitness function is F = 110969.1602 µm².

Table 3.6 Results of the ABC Algorithm Optimization Method

Property                 Optimum Design Value
Z1                       2.364 × 10^-5 m = 23.64 µm
Z2                       5.356 × 10^-5 m = 53.56 µm
Z3                       4.938 × 10^-4 m = 493.8 µm
yk                       1.015 × 10^-4 m = 101.5 µm
Fitness function (F)     110969.1602 µm²

Table 3.7 shows the comparison of the fitness value between the ABC algorithm and the existing Genetic Algorithm method. The Fitness function (F) or Die Area (DA) was reduced from 112297.95 µm² to 110969.16 µm² using the ABC method, so the fitness value obtained is better than with the existing (GA) method.

Table 3.7 Comparison of Fitness Function (F) Value using GA and ABC Methods

Method          Fitness function (F) in µm²
GA method       112297.9516
ABC method      110969.1602

The graphical representation of the comparison of the two methods is given in Figure 3.10. As shown in Figure 3.10, the Fitness function (F) value of the ABC method is improved when compared with that of the Genetic Algorithm.

Figure 3.10 Comparison of Fitness Function (F) Value using GA and ABC Methods
(bar chart of the Fitness function (F) value in µm² for the GA and ABC algorithms)

3.6  SUMMARY
In this chapter, a system to perform the optimization of the design parameters of the MEMS accelerometer has been proposed. For this, the Artificial Bee Colony (ABC) algorithm, which provides an efficient optimization technique, has been employed. From the simulation study, it is observed that the ABC algorithm delivers better results in terms of the fitness value when compared to other optimization techniques such as the Genetic Algorithm.
This algorithm overcomes some issues of the GA, and the ABC-based design parameter optimization technique is helpful in designing the MEMS accelerometer architecture. The fitness is based on the Die Area (DA) parameter within a specified range.


CHAPTER 4
OPTIMIZATION OF PARAMETERS FOR
MEMS ACCELEROMETER WITH COMBINATION
OF ARTIFICIAL BEE COLONY (ABC) ALGORITHM
AND PARTICLE SWARM OPTIMIZATION (PSO)

4.1  INTRODUCTION

The optimization of the parameters of the MEMS accelerometer is discussed in this chapter. The Artificial Bee Colony (ABC) optimization algorithm and the Particle Swarm Optimization (PSO) algorithm are used to optimize the parameters L1, L2, L3, bf and the Fitness function (F) or Die Area (DA) values of the MEMS accelerometer. ABC performs the primary optimization, and PSO then optimizes the fitness solution resulting from the execution of the ABC algorithm. The Die Area (DA), along with the design parameter force (N), is minimized. This optimization process is carried out in the MATLAB 7.12 environment; the simulation results are discussed and the optimized parameters are reported. The Fitness function (F) value obtained using ABC with PSO is compared with that of ABC alone.
4.2  PARAMETER SELECTION FOR DESIGNING THE ACCELEROMETER THROUGH OPTIMIZATION PROCESS

Beam length, beam width, beam depth, beam mass, proof mass and so on are some of the parameters involved in the design of the MEMS accelerometer, but the Die Area (DA) and the force form the main parameters of the design. Figure 4.1 represents the various steps involved in the optimization of the parameters of the MEMS accelerometer.

Figure 4.1 ABC with PSO Optimization of MEMS Accelerometer
(flowchart: the design parameters are first optimized using ABC; the particles are then initialized for the PSO algorithm, the fitness is evaluated, and the velocity and position are updated until the termination criteria are met, giving the final optimized solution)

4.3  DESIGN PARAMETERS OF MEMS ACCELEROMETER


The proposed method for the MEMS accelerometer design is accomplished through the selection of various design parameters such as beam length, beam width, beam depth, beam mass, proof mass, etc. The MEMS design process starts with the MEMS designer approaching the system and selecting the concept domain (e.g., accelerometer design), because only then can the design specifications be formulated. The designer then inputs the design specifications for these various parameters. The parameters and their specifications can be represented as follows:

Beam Length   L = {L1, L2, L3, Lj}
Beam Width    W = {W1, W2, W3, Wj}
Beam Depth    G = {G1, G2, G3, Gj}
Beam Mass     M = {aM, bM}
Proof Mass    f = {af, bf}

The optimal design of the MEMS can be obtained by optimizing the parameters L1, L2, L3 and bf. Figure 4.2 represents the various parameters of the MEMS accelerometer. The rest of the parameters are assigned constant values as follows, using Equation (4.1) and Equation (4.2):

W1 = W2 = W3 = Wj = P        (4.1)
G1 = G2 = G3 = Gj = Q        (4.2)

where P = Q = 1.8 µm. The MEMS accelerometer diagram indicates all the parameters that can be utilized to yield an optimal design of the MEMS. A folded beam structure is used, and the parameter values are described as in Equation (4.3) to Equation (4.6):

Lj = R        (4.3)
bM = T        (4.4)
aM = Lj + 2W1        (4.5)
af = Wj + 2L2 + 2W1 + 2W3        (4.6)

The value of R is 150 µm and the value of T is 100 µm. The other parameters L1, L2, L3 and bf are assigned to B, D, E and F using Equation (4.7) to Equation (4.10):

L1 = B        (4.7)
L2 = D        (4.8)
L3 = E        (4.9)
bf = F        (4.10)

where 20 µm ≤ B ≤ 500 µm, 20 µm ≤ D ≤ 100 µm, 100 µm ≤ E ≤ 500 µm, 100 µm ≤ F ≤ 500 µm [11]. Optimization of the parameters is done with both the ABC algorithm and the PSO algorithm.

Figure 4.2 MEMS Accelerometer with Design Parameters


4.4

ABC FOR OPTIMIZATION OF DESIGN PARAMETERS


Artificial Bee Colony (ABC) is a new optimization algorithm, which is

motivated by the natural behavior of honey bees in finding their best food resources.
The colony of artificial bees in the ABC algorithm has three clusters of bees,
namely, employed bees, onlookers and scouts. The initialization procedure consists
of random generation of a set of food source positions as well as the assignment of
values to the control parameters of the algorithm.
The quantity of nectar obtained from the food source denotes the quality
of solution embodied in that food source. Hence, the nectar amounts of the food
sources available at the initial positions are determined. A bee that remains in the
dance area to gather the information regarding food sources is termed as an

84

onlooker. A bee that visits the food source is called as an employed bee. A scout bee
is the one that performs random search.
The bees in the ABC model aim at discovering the best solution. The
location of a food source indicates a possible solution to the optimization problem
and the amount of nectar specifies the quality (fitness) of the solution related to the
food source. On delivering the information regarding the food source to the
onlookers, the employed bee would visit the food source area that was visited by her
previously in the past cycle using the food source information residing in her
memory and then, continues to select a new food source that lies in the
neighborhood of the previously visited food source through visual information and
evaluates its nectar amount.
The same process is repeated by the employed bee in the second stage
after giving food source information resulting from first stage to onlookers, except
that the new food source will now lie in the neighborhood of the food source visited
at the first stage. During the third stage, an onlooker uses the nectar information
given by the employed bees in the dance area to choose the food source area. If the
source gets discarded, the working bee becomes a survey bee and begins to hunt a
novel starting place in the region seal of the colony.
The amount of working bees or the observer bees is corresponding to the
number of results in the inhabitants. In the first cycle, ABC generates an arbitrarily
spread early inhabitants of results. Sometimes ago compute gets finished, the
inhabitants of the result will be subjected to frequent cycles of the hunt procedure
handled by the working bees, the observer bees and the survey bees. A working bee
is the one that modifies the solution in its memory with the help of area information
and then, checks the quantity of nectar in the new solution.
The employed bee would remember the new position, if the nectar
quantity of the present food source is larger than in the previous food source and
discards information regarding the previous food source. Else if the nectar quantity
in the present food source is smaller than in the previous food source, the

85

information of the previous food source is retained in the memory of the bee. On
completion of the search process, the employed bees share the information related to
the position of the food source and the nectar amount of the food source to the
onlookers.
The onlookers would then make an estimate of the information obtained
from the employed bees and chooses a food source that has a probability identical to
its quantity. If the solution achieved is satisfactory, memorize the solution and end
up the process or else, continue the process until the new solution attains the suitable
criteria. The working procedure of the ABC algorithm given in Figure 4.3.
Figure 4.3 shows the complete process of the ABC algorithm. Initialization of the population forms the first stage of the ABC algorithm, which is then followed by the fitness calculation. The fitness calculation is performed for both the employed bees and the onlooker bees. The algorithm terminates only when the termination criteria are satisfied. Through the use of the ABC algorithm, the design parameters are adjusted and an optimized solution is obtained. The proposed ABC algorithm is explained below.

[Figure 4.3 is a flowchart: Start → Initialization → Evaluate new solution(s) → Generate new solutions for employed bees → Calculate fitness → Generate new solutions for onlooker bees → Calculate fitness → if the termination criteria are not satisfied, send scouts for new solutions and repeat; otherwise memorize the best solution → Stop.]

Figure 4.3 Optimization Process in ABC

In this work, the positions of the employed bees are first initialized by generating new solutions; the candidate design parameter values are used as the initial food sources. Therefore, Ri (i = 1, 2, 3, ..., n) is the initial food source (solution), where each Ri is a D-dimensional vector. After finding the initial food sources, the Fitness function (F) is calculated for each new food source (new solution), and the optimal value of the Fitness function (F) is obtained from Equation (4.11).
The objective function or Fitness function (F) for the MEMS design is represented by Equation (4.11),

F = DA + N                                                          (4.11)

where DA is the Die Area and N is the force. The Die Area (DA) and the force N can be evaluated by Equation (4.12) and Equation (4.13) respectively,

DA = (am + 2L2 + (L3 + L1) + 2W2) bm                                (4.12)

N = m a                                                             (4.13)

where m is the beam mass and a is the acceleration.

The Die Area (DA) value can range between 90,000 and 160,000 µm2, and it can be relaxed up to 240,000 µm2 [11].
The food source with the best fitness value is taken to be the best food source and, keeping this Fitness function (F) value as the initial stage, the searching process that employs the employed bees, onlooker bees and scouts is initiated. The initial value of the Fitness function (F) is thus calculated.
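As an illustration only, the following Python sketch evaluates the objective in the reconstructed form of Equations (4.11)-(4.13); the geometry inputs a_m, W2 and b_m, the beam mass m and the acceleration a are placeholders for values not restated here, and the additive combination of Die Area and force follows the reconstruction given above rather than a formula confirmed elsewhere in the text.

def die_area(L1, L2, L3, a_m, W2, b_m):
    # Die Area DA, Equation (4.12) as reconstructed above (lengths in consistent units).
    return (a_m + 2.0 * L2 + (L3 + L1) + 2.0 * W2) * b_m

def force(m, a):
    # Force N, Equation (4.13): beam mass times acceleration.
    return m * a

def fitness_F(L1, L2, L3, a_m, W2, b_m, m, a):
    # Fitness function (F), Equation (4.11): Die Area combined with the force term.
    # The Die Area itself is constrained to 90,000-160,000 um2 (relaxable to 240,000 um2).
    return die_area(L1, L2, L3, a_m, W2, b_m) + force(m, a)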


The employed bee searches the neighborhood of its current food source (solution) to find a new food source (new solution) using Equation (4.14),

Kf = Rf + φf (Rf − Rl)                                              (4.14)

where f and l are randomly chosen indices with l different from f, Rl is a neighboring food source and φf is a random number between −1 and +1. After creating the new solution (food source), its quality is determined and a greedy selection process is applied. If the quality of the new food source (solution) is better than that of the present position, the employed bee abandons the present position and moves to the new solution (food source); the new solution then takes the place of Rf in the population. Otherwise, the present food source Rf is retained.
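A minimal Python sketch of this employed-bee move and greedy replacement is given below. It assumes the form of Equation (4.14) reconstructed above, perturbs one randomly chosen dimension (the usual ABC convention, an assumption here), and treats the objective F as a callable to be minimised, matching the minimisation reported in Section 4.6; it is illustrative rather than the code used for the reported results.

import random

def employed_bee_step(R, f, F):
    # R: list of food sources (each a D-dimensional list of design parameters),
    # f: index of the food source updated by this bee, F: fitness to be minimised.
    D = len(R[f])
    l = random.choice([i for i in range(len(R)) if i != f])   # random partner index, l != f
    j = random.randrange(D)                                    # randomly chosen dimension
    phi = random.uniform(-1.0, 1.0)                            # random number in [-1, +1]
    K = list(R[f])
    K[j] = R[f][j] + phi * (R[f][j] - R[l][j])                 # neighbourhood move, Equation (4.14)
    if F(K) < F(R[f]):                                         # greedy selection: keep the better source
        R[f] = K
    return R[f]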
The onlooker bees estimate the knowledge gathered from all of the employed bees to make a selection of the food source. The probability Sf of choosing a solution (food source) is evaluated by Equation (4.15),

Sf = F(t)f / Σ(f = 1 to n) F(t)f                                    (4.15)

where F(t)f is the fitness value of the f-th solution (food source). After choosing a

food source (solution), the onlooker bees generate a new food source using Equation (4.14). Once the new food source is created, a greedy selection is applied in the same way as for the employed bees. A food source is considered abandoned if it cannot be improved within a predetermined number of trials, and the employed bee associated with that solution (food source) then becomes a scout. The scout creates a new food source in a random manner; in the proposed method, an improved food source (solution) is therefore obtained prior to the scout process. In this way, optimal design parameters for the accelerometer are obtained for the purpose of improving the design process of the MEMS accelerometer. The fitness values obtained for the design parameters are optimized again using the PSO algorithm to get a better optimized solution.
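To show how the three phases fit together, a hedged outline of the ABC loop is sketched below. Equation (4.15) makes the selection probability proportional to the fitness value; because F is minimised here, the sketch weights each source by 1/(1 + F), a common ABC convention and an assumption rather than something stated above. The helper employed_bee_step is the previous sketch, and new_random_source and the abandonment limit are illustrative placeholders.

import random

def abc_search(R, F, cycles, limit, new_random_source):
    # R: initial food sources (solutions), F: fitness to minimise,
    # limit: number of unsuccessful trials before a source is abandoned to a scout.
    trials = [0] * len(R)
    best = min(R, key=F)
    for _ in range(cycles):
        for f in range(len(R)):                       # employed-bee phase
            old = F(R[f])
            employed_bee_step(R, f, F)
            trials[f] = 0 if F(R[f]) < old else trials[f] + 1
        weights = [1.0 / (1.0 + F(r)) for r in R]     # onlooker selection weights, cf. Equation (4.15)
        for _ in range(len(R)):                       # onlooker phase
            f = random.choices(range(len(R)), weights=weights)[0]
            old = F(R[f])
            employed_bee_step(R, f, F)
            trials[f] = 0 if F(R[f]) < old else trials[f] + 1
        for f in range(len(R)):                       # scout phase
            if trials[f] > limit:
                R[f] = new_random_source()
                trials[f] = 0
        best = min(R + [best], key=F)
    return best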

4.5     FINAL OPTIMIZATION USING PSO

Here the resulting Fitness function (F) from the ABC method is optimized using the PSO algorithm. This part presents an introduction to swarm intelligence and PSO, a simple PSO algorithm and the various steps involved in PSO.
4.5.1   Introduction to Swarm Intelligence


Swarm Intelligence is a branch of artificial intelligence based on the collective behavior of decentralized, self-organized systems. Swarm Intelligence systems are typically made up of a population of simple agents interacting locally with one another and with their environment. The agents follow very simple rules, and although there is no centralized control structure dictating how individual agents should behave, local and to a certain degree random interactions between such agents lead to the emergence of intelligent global behavior, unknown to the individual agents. Natural examples of swarm intelligence include ant colonies, bird flocking, animal herding, bacterial growth and fish schooling.


4.5.2   Introduction to PSO

PSO is an evolutionary computational technique based on the movement and intelligence of swarms looking for the most fertile feeding location. It is a simple algorithm that is easy to implement and has few parameters to adjust, mainly the velocity.
A swarm is an apparently disorganized collection (population) of moving individuals
that tend to cluster together while each individual seems to be moving in a random
direction. It uses a number of particles that constitutes a swarm moving around in
the search space looking for the best solution. Each particle is treated as a point in a
D-dimensional space which adjusts its flying according to its own flying
experience as well as the flying experience of other particles. Each particle keeps
track of its coordinates in the problem space which are associated with the best
solution (fitness) that has achieved so far. This value is called pbest. Another best
value that is tracked by the PSO best value obtained so far by any particle in the
neighbors of the particle is called gbest. The PSO concept consists of changing the
velocity of each particle toward its pbest and the gbest position at each time step.
Each particle tries to modify its current position and velocity according to the
distance between its current position and the pbest, and the distance between its
current position and gbest [166]. The velocity and position update can be done by

Equation (4.16) and Equation (4.17):

Vn+1 = Vn + C1 rand1( ) x (pbest,n − Current position n)
            + C2 rand2( ) x (gbest,n − Current position n)          (4.16)

Current position [n+1] = Current position [n] + Vn+1                (4.17)

where
Vn+1 = velocity of the particle at the (n+1)th iteration
Vn = velocity of the particle at the nth iteration
C1 = acceleration factor related to pbest
C2 = acceleration factor related to gbest
rand1( ), rand2( ) = random numbers between 0 and 1
gbest = gbest position of the swarm
pbest = pbest position of the particle
Current position [n+1] = position of the particle at the (n+1)th iteration
Current position [n] = position of the particle at the nth iteration

4.5.3   Simple PSO Algorithm (adapted from [179])

Initialize a random population of individuals xi, i = 1, ..., N
Initialize each individual's n-element velocity vector vi, i = 1, ..., N
Initialize the best-so-far position of each individual: bi ← xi, i = 1, ..., N
Define the neighborhood size σ (σ < N)
Define the maximum influence values φ1,max and φ2,max
Define the maximum velocity vmax
While not (termination criterion)
    For each individual xi, i = 1, ..., N
        Hi ← the σ nearest neighbors of xi
        hi ← arg min { f(x) : x ∈ Hi }
        Generate a random vector φ1 with φ1(k) ~ U[0, φ1,max] for k = 1, ..., n
        Generate a random vector φ2 with φ2(k) ~ U[0, φ2,max] for k = 1, ..., n
        vi ← vi + φ1(bi − xi) + φ2(hi − xi)   (element-wise)
        If |vi| > vmax then
            vi ← vi vmax / |vi|
        End if
        xi ← xi + vi
        bi ← arg min { f(xi), f(bi) }
    Next individual
Next generation
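For concreteness, a small self-contained Python version of PSO is sketched below. It implements the global-best update of Equations (4.16) and (4.17) rather than the nearest-neighbour variant in the pseudocode above, and the objective f, the bounds and the parameter values are placeholders rather than the settings used for the reported results.

import random

def pso_minimize(f, dim, lo, hi, n_particles=30, n_iter=1000, c1=2.0, c2=2.0, v_max=None):
    # Initialise positions, velocities, personal bests (pbest) and the global best (gbest).
    x = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [list(p) for p in x]
    pbest_val = [f(p) for p in x]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = list(pbest[g]), pbest_val[g]
    for _ in range(n_iter):
        for i in range(n_particles):
            for k in range(dim):
                r1, r2 = random.random(), random.random()
                # Velocity update, Equation (4.16).
                v[i][k] += c1 * r1 * (pbest[i][k] - x[i][k]) + c2 * r2 * (gbest[k] - x[i][k])
                if v_max is not None:                  # optional velocity clamping
                    v[i][k] = max(-v_max, min(v_max, v[i][k]))
                # Position update, Equation (4.17), kept inside the search bounds.
                x[i][k] = max(lo, min(hi, x[i][k] + v[i][k]))
            val = f(x[i])
            if val < pbest_val[i]:                     # update the personal best
                pbest[i], pbest_val[i] = list(x[i]), val
                if val < gbest_val:                    # update the global best
                    gbest, gbest_val = list(x[i]), val
    return gbest, gbest_val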

4.5.4   Steps in PSO

The general steps involved in Particle Swarm Optimization are as follows. In Step 1, initialize the swarm from the solution space. In Step 2, evaluate the fitness of the individual particles; if the termination condition is met, report the best solution. Otherwise, in Step 3, update pbest, gbest and the velocities. In Step 4, move each particle to its new position. In Step 5, go to Step 2 and repeat until convergence or the stopping condition is satisfied.
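The text does not spell out exactly how the ABC result is handed to PSO, so the driver below is only a hypothetical illustration of the two-stage flow of Sections 4.4 and 4.5: it runs the ABC sketch first, then the PSO sketch on the same objective, and keeps whichever result gives the smaller Fitness function (F). The names abc_search and pso_minimize refer to the earlier sketches, and all numeric settings are placeholders.

import random

def hybrid_abc_pso(F, dim, lo, hi, n_sources=20):
    # Stage 1: ABC over a randomly generated initial population of food sources.
    init = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_sources)]
    new_source = lambda: [random.uniform(lo, hi) for _ in range(dim)]
    abc_best = abc_search(init, F, cycles=500, limit=20, new_random_source=new_source)
    # Stage 2: PSO on the same objective, as in Section 4.5.
    pso_best, _ = pso_minimize(F, dim, lo, hi, n_particles=30, n_iter=500)
    # Report the better of the two stages according to the Fitness function (F).
    return min([abc_best, pso_best], key=F)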


4.6     RESULTS AND DISCUSSION


MATLAB 7.12 is used for implementing the ABC with PSO technique. The application of the ABC and PSO algorithms has produced optimal values for L1, L2, L3 and bf. A total of one thousand iterations was used to accomplish the optimal design. The optimization process has minimized the Die Area (DA) along with the force parameter, represented by the objective or Fitness function (F), while satisfying the design criteria. Tables 4.1 to 4.10 depict the optimal values obtained at the 50th, 100th, 150th, 200th, 250th, 300th, 350th, 400th, 450th and 500th iterations respectively; the five best values found at each of these 50-iteration checkpoints are tabulated. The corresponding graphical representations, showing the minimized objective function, are given in Figures 4.4 to 4.13. The Fitness function (F) values using ABC with PSO and ABC alone are compared and the optimized Die Area (DA) is reported.
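As a small illustrative helper that is not taken from the text, the snippet below shows one way the five best solutions could be extracted at each 50-iteration checkpoint for tabulation, as done in Tables 4.1 to 4.10; population and F stand for the current solutions and the Fitness function.

def top_ranks(population, F, k=5):
    # Sort the current solutions by the Fitness function (F) and return the k best,
    # i.e. Rank 1 to Rank k for the current checkpoint.
    return sorted(population, key=F)[:k]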
After 50 iterations of the ABC with PSO method, the top five ranks are displayed in Table 4.1. Among these five ranked values, rank selection gives Rank 1 as the optimized parameter set: L1 = 2.180 x 10-05, L2 = 4.695 x 10-05, L3 = 4.900 x 10-04, bf = 1.022 x 10-04 and F = 115777.9014 µm2. The minimized objective function or Fitness function (F) at the 50th iteration is therefore F = 115777.9014 µm2. Figure 4.4 shows the convergence of the objective function or Fitness function (F) at the 50th iteration.
Table 4.1 Five Optimal Values Obtained in the 50th Iteration after Optimization

Rank      L1          L2          L3          bf          F (µm2)
Rank 1    2.180e-05   4.695e-05   4.900e-04   1.022e-04   115777.9014
Rank 2    2.180e-05   4.695e-05   4.900e-04   1.022e-04   116188.00965
Rank 3    2.986e-05   4.830e-05   4.844e-04   1.027e-04   116578.33164
Rank 4    2.491e-05   6.257e-05   4.642e-04   1.023e-04   116588.48896
Rank 5    2.491e-05   6.257e-05   4.642e-04   1.023e-04   116588.48896

Figure 4.4 Minimization of Objective Function in the 50th Iteration


After 100 iterations of the ABC with PSO method, the top five ranks are displayed in Table 4.2. Among these five ranked values, rank selection gives Rank 1 as the optimized parameter set: L1 = 2.283 x 10-05, L2 = 5.174 x 10-05, L3 = 4.805 x 10-04, bf = 1.019 x 10-04 and F = 115826.6054 µm2. The minimized objective function or Fitness function (F) at the 100th iteration is therefore F = 115826.6054 µm2. Figure 4.5 shows the convergence of the objective function or Fitness function (F) at the 100th iteration.
Table 4.2 Five Optimal Values Obtained in the 100th Iteration after Optimization

Rank      L1          L2          L3          bf          F (µm2)
Rank 1    2.283e-05   5.174e-05   4.805e-04   1.019e-04   115826.6054
Rank 2    2.500e-05   5.777e-05   4.727e-04   1.017e-04   119751.3983
Rank 3    2.500e-05   5.777e-05   4.727e-04   1.017e-04   121751.3983
Rank 4    2.500e-05   5.777e-05   4.727e-04   1.017e-04   125751.3983
Rank 5    3.786e-05   5.071e-05   4.866e-04   1.003e-04   127075.0172

Figure 4.5 Minimization of Objective Function in the 100th Iteration


After 150 iterations of the ABC with PSO method, the top five ranks are displayed in Table 4.3. Among these five ranked values, rank selection gives Rank 1 as the optimized parameter set: L1 = 2.679 x 10-05, L2 = 4.533 x 10-05, L3 = 4.980 x 10-04, bf = 1.007 x 10-04 and F = 110743.1452 µm2. The minimized objective function or Fitness function (F) at the 150th iteration is therefore F = 110743.1452 µm2. Figure 4.6 shows the convergence of the objective function or Fitness function (F) at the 150th iteration.
Table 4.3 Five Optimal Values Obtained in the 150th Iteration after Optimization

Rank      L1          L2          L3          bf          F (µm2)
Rank 1    2.679e-05   4.533e-05   4.980e-04   1.007e-04   110743.1452
Rank 2    2.679e-05   4.533e-05   4.980e-04   1.007e-04   118053.07598
Rank 3    2.679e-05   4.533e-05   4.980e-04   1.007e-04   121053.07598
Rank 4    2.679e-05   4.533e-05   4.980e-04   1.007e-04   185053.07598
Rank 5    3.070e-05   5.371e-05   4.838e-04   1.005e-04   224669.52840

Figure 4.6 Minimization of Objective Function in the 150th Iteration


After 200 iterations of the ABC with PSO method, the top five ranks are displayed in Table 4.4. Among these five ranked values, rank selection gives Rank 1 as the optimized parameter set: L1 = 3.015 x 10-05, L2 = 4.464 x 10-05, L3 = 4.925 x 10-04, bf = 1.020 x 10-04 and F = 110706.21331 µm2. The minimized objective function or Fitness function (F) at the 200th iteration is therefore F = 110706.21331 µm2. Figure 4.7 shows the convergence of the objective function or Fitness function (F) at the 200th iteration.
Table 4.4 Five Optimal Values Obtained in the 200th Iteration after Optimization

Rank      L1          L2          L3          bf          F (µm2)
Rank 1    3.015e-05   4.464e-05   4.925e-04   1.020e-04   110706.21331
Rank 2    3.015e-05   4.464e-05   4.925e-04   1.020e-04   110706.21331
Rank 3    3.863e-05   4.527e-05   4.914e-04   1.015e-04   110706.21331
Rank 4    3.863e-05   4.527e-05   4.914e-04   1.015e-04   110706.21331
Rank 5    3.863e-05   4.527e-05   4.914e-04   1.015e-04   110706.21331

Figure 4.7 Minimization of Objective Function in the 200th Iteration


After 250 iterations of the ABC with PSO method, the top five ranks are displayed in Table 4.5. Among these five ranked values, rank selection gives Rank 1 as the optimized parameter set: L1 = 2.367 x 10-05, L2 = 4.889 x 10-05, L3 = 4.950 x 10-04, bf = 1.004 x 10-04 and F = 110457.56342 µm2. The minimized objective function or Fitness function (F) at the 250th iteration is therefore F = 110457.56342 µm2. Figure 4.8 shows the convergence of the objective function or Fitness function (F) at the 250th iteration.
Table 4.5 Five Optimal Values Obtained in the 250th Iteration after Optimization

Rank      L1          L2          L3          bf          F (µm2)
Rank 1    2.367e-05   4.889e-05   4.950e-04   1.004e-04   110457.56342
Rank 2    2.206e-05   4.814e-05   4.817e-04   1.036e-04   110592.78178
Rank 3    2.206e-05   4.814e-05   4.817e-04   1.036e-04   110692.78178
Rank 4    2.206e-05   4.814e-05   4.817e-04   1.036e-04   110792.78178
Rank 5    2.206e-05   4.814e-05   4.817e-04   1.036e-04   110892.78178

Figure 4.8 Minimization of Objective Function in the 250th Iteration


After 300 iterations of the ABC with PSO method, the top five ranks are displayed in Table 4.6. Among these five ranked values, rank selection gives Rank 1 as the optimized parameter set: L1 = 2.904 x 10-05, L2 = 4.905 x 10-05, L3 = 4.901 x 10-04, bf = 1.001 x 10-04 and F = 110119.3144 µm2. The minimized objective function or Fitness function (F) at the 300th iteration is therefore F = 110119.3144 µm2. Figure 4.9 shows the convergence of the objective function or Fitness function (F) at the 300th iteration.
Table 4.6 Five Optimal Values Obtained in the 300th Iteration after Optimization

Rank      L1          L2          L3          bf          F (µm2)
Rank 1    2.904e-05   4.905e-05   4.901e-04   1.001e-04   110119.3144
Rank 2    2.422e-05   4.210e-05   4.948e-04   1.022e-04   110119.3144
Rank 3    2.412e-05   4.954e-05   4.922e-04   1.014e-04   110119.3144
Rank 4    2.414e-05   3.907e-05   4.975e-04   1.045e-04   110119.3144
Rank 5    3.009e-05   5.555e-05   4.876e-04   1.006e-04   110119.3144

Figure 4.9 Minimization of Objective Function in the 300th Iteration


After 350 iterations of the ABC with PSO method, the top five ranks are displayed in Table 4.7. Among these five ranked values, rank selection gives Rank 1 as the optimized parameter set: L1 = 2.703 x 10-05, L2 = 5.563 x 10-05, L3 = 4.763 x 10-04, bf = 1.016 x 10-04 and F = 110602.9662 µm2. The minimized objective function or Fitness function (F) at the 350th iteration is therefore F = 110602.9662 µm2. Figure 4.10 shows the convergence of the objective function or Fitness function (F) at the 350th iteration.
Table 4.7 Five Optimal Values Obtained in the 350th Iteration after Optimization

Rank      L1          L2          L3          bf          F (µm2)
Rank 1    2.703e-05   5.563e-05   4.763e-04   1.016e-04   110602.9662
Rank 2    3.989e-05   4.607e-05   4.961e-04   1.005e-04   112555.50022
Rank 3    2.019e-05   6.605e-05   4.604e-04   1.030e-04   116171.31011
Rank 4    2.019e-05   6.605e-05   4.604e-04   1.030e-04   118171.31011
Rank 5    2.019e-05   6.605e-05   4.604e-04   1.030e-04   124171.31011

Figure 4.10 Minimization of Objective Function in the 350th Iteration


After 400 iterations of the ABC with PSO method, the top five ranks are displayed in Table 4.8. Among these five ranked values, rank selection gives Rank 1 as the optimized parameter set: L1 = 2.151 x 10-05, L2 = 4.177 x 10-05, L3 = 4.939 x 10-04, bf = 1.037 x 10-04 and F = 109618.2632 µm2. The minimized objective function or Fitness function (F) at the 400th iteration is therefore F = 109618.2632 µm2. Figure 4.11 shows the convergence of the objective function or Fitness function (F) at the 400th iteration.
Table 4.8 Five Optimal Values Obtained in the 400th Iteration after Optimization

Rank      L1          L2          L3          bf          F (µm2)
Rank 1    2.151e-05   4.177e-05   4.939e-04   1.037e-04   109618.2632
Rank 2    2.151e-05   4.177e-05   4.939e-04   1.037e-04   109618.2632
Rank 3    2.044e-05   7.413e-05   4.514e-04   1.019e-04   109618.2632
Rank 4    2.044e-05   7.413e-05   4.514e-04   1.019e-04   109618.2632
Rank 5    2.315e-05   3.750e-05   4.885e-04   1.066e-04   109618.2632

Figure 4.11 Minimization of Objective Function in the 400th Iteration


After 450 iterations of the ABC with PSO method, the top five ranks are displayed in Table 4.9. Among these five ranked values, rank selection gives Rank 1 as the optimized parameter set: L1 = 2.754 x 10-05, L2 = 4.534 x 10-05, L3 = 4.919 x 10-04, bf = 1.014 x 10-04 and F = 110165.74376 µm2. The minimized objective function or Fitness function (F) at the 450th iteration is therefore F = 110165.74376 µm2. Figure 4.12 shows the convergence of the objective function or Fitness function (F) at the 450th iteration.
Table 4.9 Five Optimal Values Obtained in the 450th Iteration after Optimization

Rank      L1          L2          L3          bf          F (µm2)
Rank 1    2.754e-05   4.534e-05   4.919e-04   1.014e-04   110165.74376
Rank 2    2.754e-05   4.534e-05   4.919e-04   1.014e-04   111053.33436
Rank 3    2.754e-05   4.534e-05   4.919e-04   1.014e-04   111053.33436
Rank 4    2.169e-05   5.984e-05   4.793e-04   1.004e-04   111468.25048
Rank 5    3.417e-05   5.371e-05   4.862e-04   1.001e-04   112047.32099

Figure 4.12 Minimization of Objective Function in the 450th Iteration


After 500 iterations of the ABC with PSO method, the top five ranks are displayed in Table 4.10. Among these five ranked values, rank selection gives Rank 1 as the optimized parameter set: L1 = 2.754 x 10-05, L2 = 4.534 x 10-05, L3 = 4.919 x 10-04, bf = 1.014 x 10-04 and F = 110830.8728 µm2. The minimized objective function or Fitness function (F) at the 500th iteration is therefore F = 110830.8728 µm2. Figure 4.13 shows the convergence of the objective function or Fitness function (F) at the 500th iteration.
Table 4.10 Five Optimal Values Obtained in the 500th Iteration after Optimization

Rank      L1          L2          L3          bf          F (µm2)
Rank 1    2.754e-05   4.534e-05   4.919e-04   1.014e-04   110830.8728
Rank 2    2.754e-05   4.534e-05   4.919e-04   1.014e-04   111053.33436
Rank 3    2.754e-05   4.534e-05   4.919e-04   1.014e-04   111053.33436
Rank 4    2.169e-05   5.984e-05   4.793e-04   1.004e-04   111468.25048
Rank 5    3.417e-05   5.371e-05   4.862e-04   1.001e-04   112047.32099

Figure 4.13 Minimization of Objective Function in the 500th Iteration


After 1000 iterations, the ABC with PSO optimized parameter values are L1 = 2.114 x 10-05, L2 = 4.995 x 10-05, L3 = 4.854 x 10-04, ym = 1.013 x 10-04 and F = 110409.08701 µm2. The final minimized objective function or Fitness function is F = 110409.08701 µm2. Table 4.11 compares the Fitness function (F) value obtained using the ABC with PSO method with that of the existing method that uses ABC alone for optimization. The Fitness function (F) was reduced from 110969.1602 µm2 to 110409.08701 µm2 using the ABC with PSO method. It is apparent that the fitness value has improved with the ABC with PSO method.
Table 4.11 Comparison of Fitness function (F) value using ABC and ABC with PSO method

Method                      Fitness function (F) in µm2
ABC method                  110969.1602
ABC with PSO method         110409.08701

The graphical representation of the comparison of the two methods is given in Figure 4.14. As revealed by the graph, the Fitness function (F) value is improved by incorporating PSO along with the ABC algorithm.

[Figure 4.14 is a bar chart comparing the Fitness function (F) value in µm2 (Y-axis) for the ABC and ABC with PSO algorithms (X-axis).]

Figure 4.14 Comparison of Fitness Function (F) Value using ABC and ABC with PSO Method

4.7     SUMMARY
This chapter deals with a system that optimizes the design parameters of a MEMS accelerometer. Highly efficient optimization is obtained with the proposed method, which incorporates both the Artificial Bee Colony optimization algorithm and the Particle Swarm Optimization algorithm to produce optimal design parameters. It is evident from the results that the proposed method outperforms other optimization techniques such as the Genetic Algorithm in terms of fitness values and hence provides improved optimization. Some of the shortcomings of utilizing GA are eliminated with the proposed method, which therefore serves well for designing the MEMS accelerometer architecture. The fitness is based on two design quantities, namely the Die Area (DA) and the force, which must lie within a particular range.


CHAPTER 5
COMPARISON OF OPTIMAL PARAMETER VALUES
OF MEMS ACCELEROMETER USING THREE DIFFERENT
OPTIMIZATION TECHNIQUES

5.1     INTRODUCTION

In this chapter, the optimized Beam Length values (L1, L2 and L3), Proof Mass (ym) and Fitness function (F) values of the MEMS accelerometer obtained using the Genetic Algorithm, the Artificial Bee Colony algorithm and the Artificial Bee Colony (ABC) with Particle Swarm Optimization (PSO) algorithm methods are compared. Based on this comparison, the optimized parameters of the MEMS accelerometer are reported.
5.2     COMPARISON OF RESULTS FOR PARAMETERS OF MEMS ACCELEROMETER

Table 5.1 presents the various optimized parameters L1, L2, L3, Proof Mass (ym) and Fitness function (F) values obtained using the GA, ABC and ABC with PSO algorithm methods.

Table 5.1 Comparison of Beam Length Values (L1, L2, L3), Proof Mass (ym) and Fitness Function (F) Value using the GA, ABC and ABC with PSO Algorithm Methods

Optimized       Property Value            Property Value            Property Value
Design Value    Using GA                  Using ABC                 Using ABC & PSO
L1              3.085 x 10-04 = 308.5 µm  2.364 x 10-05 = 23.64 µm  2.114 x 10-05 = 21.14 µm
L2              6.019 x 10-05 = 60.19 µm  5.356 x 10-05 = 53.56 µm  4.995 x 10-05 = 49.95 µm
L3              4.107 x 10-04 = 410.7 µm  4.938 x 10-04 = 493.8 µm  4.854 x 10-04 = 485.4 µm
ym              1.019 x 10-04 = 101.9 µm  1.015 x 10-04 = 101.5 µm  1.013 x 10-04 = 101.3 µm
F               112297.95163 µm2          110969.1602 µm2           110409.08701 µm2

5.3     COMPARISON OF RESULTS AND DISCUSSION FOR BEAM LENGTH (L1, L2, L3) VALUES USING GA, ABC, ABC with PSO ALGORITHM METHODS
The Beam Length L1 values obtained using GA, ABC and ABC with PSO are L1 = 3.085 x 10-04 (GA), L1 = 2.364 x 10-05 (ABC) and L1 = 2.114 x 10-05 (ABC with PSO). Here the Beam Length L1 was minimized from 308.5 µm (GA) to 21.14 µm (ABC with PSO). Hence, the optimized Beam Length (L1) reported in Table 5.1 is L1 = 21.14 µm, obtained using the ABC with PSO method.
Figure 5.1 shows the comparison of the Beam Length (L1) values graphically, with the different algorithms on the X-axis and the Beam Length (L1) in µm on the Y-axis. It shows that the ABC with PSO algorithm optimized the Beam Length (L1) better than the GA and ABC methods.

[Figure 5.1 is a bar chart comparing the Beam Length (L1) value in µm (Y-axis) for the GA, ABC and ABC with PSO algorithms (X-axis).]

Figure 5.1 Comparison of Beam Length (L1) Value using the GA, ABC and ABC with PSO Algorithm Methods
The Beam Length L2 values obtained using GA, ABC and ABC with PSO are L2 = 6.019 x 10-05 (GA), L2 = 5.356 x 10-05 (ABC) and L2 = 4.995 x 10-05 (ABC with PSO). Here the Beam Length L2 was minimized from 60.19 µm (GA) to 49.95 µm (ABC with PSO). Hence, the optimized Beam Length (L2) reported in Table 5.1 is L2 = 49.95 µm, obtained using the ABC with PSO method.
Figure 5.2 shows the comparison of the Beam Length (L2) values graphically, with the different algorithms on the X-axis and the Beam Length (L2) in µm on the Y-axis. It shows that the ABC with PSO algorithm optimized the Beam Length (L2) better than the GA and ABC methods.

[Figure 5.2 is a bar chart comparing the Beam Length (L2) value in µm (Y-axis) for the GA, ABC and ABC with PSO algorithms (X-axis).]

Figure 5.2 Comparison of Beam Length (L2) Value using the GA, ABC and ABC with PSO Algorithm Methods
The Beam Length L3 values obtained using GA, ABC and ABC with PSO are L3 = 4.107 x 10-04 (GA), L3 = 4.938 x 10-04 (ABC) and L3 = 4.854 x 10-04 (ABC with PSO). Here the Beam Length L3 was not reduced; it increases from 410.7 µm (GA) to 485.4 µm (ABC with PSO). Even though L3 itself is not minimized, it helps to optimize the Fitness function (F) in the ABC with PSO method. Hence, the Beam Length (L3) reported in Table 5.1 is L3 = 485.4 µm, obtained using the ABC with PSO method.
Figure 5.3 shows the comparison of the Beam Length (L3) values graphically, with the different algorithms on the X-axis and the Beam Length (L3) in µm on the Y-axis.

[Figure 5.3 is a bar chart comparing the Beam Length (L3) value in µm (Y-axis) for the GA, ABC and ABC with PSO algorithms (X-axis).]

Figure 5.3 Comparison of Beam Length (L3) Value using the GA, ABC and ABC with PSO Algorithm Methods
5.4     COMPARISON OF RESULTS AND DISCUSSION FOR PROOF MASS (ym) VALUES USING GA, ABC, ABC with PSO ALGORITHM METHODS

The Proof Mass (ym) values obtained using GA, ABC and ABC with PSO are ym = 1.019 x 10-04 (GA), ym = 1.015 x 10-04 (ABC) and ym = 1.013 x 10-04 (ABC with PSO). Here the Proof Mass (ym) was minimized from 101.9 µm (GA) to 101.3 µm (ABC with PSO). Hence, the optimized Proof Mass (ym) reported in Table 5.1 is ym = 101.3 µm, obtained using the ABC with PSO method. Figure 5.4 shows the comparison of the Proof Mass (ym) values graphically, with the different algorithms on the X-axis and the Proof Mass (ym) in µm on the Y-axis. It shows that the ABC with PSO algorithm optimized the Proof Mass (ym) better than the GA and ABC methods.

[Figure 5.4 is a bar chart comparing the Proof Mass (ym) value in µm (Y-axis) for the GA, ABC and ABC with PSO algorithms (X-axis).]

Figure 5.4 Comparison of Proof Mass (ym) Value using the GA, ABC and ABC with PSO Algorithm Methods
5.5     COMPARISON OF RESULTS AND DISCUSSION FOR FITNESS FUNCTION (F) VALUES USING GA, ABC, ABC with PSO ALGORITHM METHODS

The Fitness function (F) values obtained using GA, ABC and ABC with PSO are F = 112297.95163 µm2 (GA), F = 110969.1602 µm2 (ABC) and F = 110409.08701 µm2 (ABC with PSO). Here the Fitness function (F) was minimized from 112297.95163 µm2 (GA) to 110409.08701 µm2 (ABC with PSO). Hence, the optimized Fitness function (F) reported in Table 5.1 is F = 110409.08701 µm2, obtained using the ABC with PSO method. Figure 5.5 shows the comparison of the Fitness function (F) values graphically, with the different algorithms on the X-axis and the Fitness function (F) in µm2 on the Y-axis. It shows that the ABC with PSO algorithm optimized the Fitness function (F) better than the GA and ABC methods.

[Figure 5.5 is a bar chart comparing the Fitness function (F) value in µm2 (Y-axis) for the GA, ABC and ABC with PSO algorithms (X-axis).]

Figure 5.5 Comparison of Fitness Function (F) Value using the GA, ABC and ABC with PSO Algorithm Methods
5.6     SUMMARY

The optimized parameters of the MEMS accelerometer were compared, and the comparison of the optimized parameter values was also presented graphically. Based on this comparison, the optimal values of L1, L2, L3, ym and the Fitness function (F) of the MEMS accelerometer were reported. The final optimized parameter values are L1 = 21.14 µm, L2 = 49.95 µm, L3 = 485.4 µm, ym = 101.3 µm and Fitness function (F) or Die Area (DA) = 110409.09 µm2. These final optimized parameter values are obtained using the Artificial Bee Colony (ABC) with Particle Swarm Optimization (PSO) algorithm method. The proposed ABC with PSO method overcomes the drawbacks of the GA and ABC algorithm methods.


CHAPTER 6
CONCLUSION

6.1     INTRODUCTION

The present work deals with the optimization of the parameters of a MEMS accelerometer. The optimization of the parameters of a MEMS accelerometer using the Genetic Algorithm (GA), the Artificial Bee Colony (ABC) algorithm, and the Artificial Bee Colony (ABC) algorithm with Particle Swarm Optimization (PSO) has been carried out in the MATLAB 7.12 environment. The optimized parameters L1, L2, L3, ym and the Fitness function (F) or Die Area (DA) have been identified. The optimal parameter values of the MEMS accelerometer obtained using the Genetic Algorithm (GA), the Artificial Bee Colony (ABC) algorithm, and the Artificial Bee Colony (ABC) algorithm with Particle Swarm Optimization (PSO) have been compared.
Initially, the parameters of the MEMS accelerometer were optimized using the Genetic Algorithm. The simulation results of this algorithm show the Fitness function (F) or Die Area (DA) to be equal to 112297.95 µm2, and this value is within the constrained range of 90,000 to 160,000 µm2.
In order to overcome the issues of the Genetic Algorithm (GA), the Artificial Bee Colony (ABC) algorithm has been introduced. The optimized parameter values L1 = 2.364 x 10-05 = 23.64 µm, L2 = 5.356 x 10-05 = 53.56 µm, L3 = 4.938 x 10-04 = 493.8 µm, ym = 1.015 x 10-04 = 101.5 µm and Fitness function (F) or Die Area (DA) = 110969.16 µm2 obtained using the Artificial Bee Colony (ABC) method are better than those of the Genetic Algorithm method. The time taken by the Artificial Bee Colony (ABC) algorithm to complete the iterations is also much less than that of the Genetic Algorithm (GA) method.


To reduce the Fitness function (F) or Die Area (DA) still further, a new algorithm (a combination of ABC with PSO) has been introduced. Here, force was introduced along with the Die Area (DA) in the Fitness function (F). The optimized parameter values L1 = 2.114 x 10-05 = 21.14 µm, L2 = 4.995 x 10-05 = 49.95 µm, L3 = 4.854 x 10-04 = 485.4 µm, ym = 1.013 x 10-04 = 101.3 µm and Fitness function (F) or Die Area (DA) = 110409.09 µm2 are obtained from simulation. By comparing these values of L1, L2, L3, ym and the Fitness function (F) or Die Area (DA) with the parameter values obtained using the Artificial Bee Colony (ABC) algorithm and the Genetic Algorithm (GA), we conclude that the parameter values obtained using the Artificial Bee Colony (ABC) algorithm with the Particle Swarm Optimization (PSO) algorithm are better than those obtained using the Genetic Algorithm (GA) and the Artificial Bee Colony (ABC) algorithm.
6.2     RESEARCH CONTRIBUTION

Based on the simulation results, the optimized parameter values of the MEMS accelerometer are L1 = 2.114 x 10-05 = 21.14 µm, L2 = 4.995 x 10-05 = 49.95 µm, L3 = 4.854 x 10-04 = 485.4 µm, ym = 1.013 x 10-04 = 101.3 µm and Fitness function (F) or Die Area (DA) = 110409.09 µm2. The reasons are as follows. The beam length (L1, L2) values obtained using the ABC with PSO algorithm method are better than those of the GA and ABC methods. The Beam Length (L3) was not reduced compared with the values obtained using GA and ABC, but it helps to reach the optimized Fitness function (F) = 110409.09 µm2 using the ABC with PSO algorithm method. The time taken for the iteration process by the ABC with PSO algorithm method was very low compared to the GA and ABC methods. In the ABC with PSO algorithm method there is no overlapping or mutation, the calculation is very simple and it allows faster convergence.
For the targeted Fitness function (F) or Die Area (DA), the Artificial Bee Colony (ABC) algorithm with Particle Swarm Optimization (PSO) method, which provides a Fitness function (F) or Die Area (DA) of 110409.09 µm2, is suggested for the MEMS accelerometer design.
6.3     SUGGESTIONS FOR FUTURE WORK

This work can be extended further as follows. The optimization of the MEMS accelerometer parameters can be extended to other evolutionary algorithms. Using the optimized parameters, a double folded beam MEMS accelerometer can be designed in any MEMS CAD tool and thereafter analyzed.
On completion of the design, analysis and validation, the MEMS accelerometer can be fabricated and used for airbag deployment in the automobile industry.


REFERENCES

[1] Zhang, Y., Kamalian, R., Agogino, A.M., and Squin, C.H., Hierarchical
MEMS Synthesis and Optimization, Proc. of SPIE, Bellingham, WA,
5763, pp. 96-106, 2005.
[2] Rubio, W.M., Godoy, P.H.de., and Silva, E.C.N., Design of
Electrothermomechanical MEMS, ABCM Symposium Series in
Mechatronics, 2, pp. 469-476, 2006.
[3] Neukermans, A., and Ramaswami, R., MEMS technology for optical
networking applications, Communications Magazine, IEEE., 39(1),
pp. 62-69, 2001.
[4] Albano, F., et al., Design of an implantable power supply for an intraocular
sensor, using POWER (power optimization for wireless energy
requirements), Journal of Power Sources, 170, pp. 216-224, 2007.
[5] Udina, S., et al., A MEMS based compact natural gas analyzer
implementing IEEE-1451.2 and BS-7986 smart sensor standards, The 14th
International Meeting on Chemical Sensors, Tagungsband, pp. 1213-1216,
2012.
[6] Kim, M., Hacker, J. B., Mihailovich, R. E., and DeNatale, J. F., A DC-to40 GHz four-bit RF-MEMS true-time delay network, Microwave and
Wireless Components Letters, IEEE, 11(2), pp. 56-58, 2001.
[7] Cetiner, B. A., et al., Multifunctional reconfigurable MEMS integrated
antennas for adaptive MIMO systems, Communications Magazine,
IEEE, 42(12), pp. 62-70, 2004.
[8] Walter, H., Dudek, R., and Michel, B., Fracture and Fatigue Behavior of
MEMS Related Micro Materials, Proc. of 11th International Conference on
Fracture (ICXI), Italy, pp. 365-384, 2005.
[9] Deng, L., et al., Electrical calibration of spring-mass MEMS capacitive
accelerometers, Proc. of the Conference on Design, Automation and Test in
Europe, Grenoble, France, pp. 571-574, 2013.
[10] Design Optimization of MEMS Comb Accelerometer., Available:
http://www.asee.org/ documents/ zones/zone1/ 2008/ student/
ASEE12008_0050_paper.pdf

[11] Moura, S., (2006), Parametric Design of a MEMS Accelerometer, A Project Report, pp. 1-27, UC Berkeley, Berkeley, USA.
[12] Layout Synthesis of CMOS MEMS Accelerometer., Available:
http://www.nsti.org/publications/MSM/2000/pdf/ T23.06.pdf.
[13] Dorigo, M., Stutzle, T., (2004), Ant Colony Optimization, The MIT
press, Cambridge, MA.
[14] Blum, C., Ant Colony Optimization: Introduction and recent trends,
Phys. Life Rev., pp. 353-373, 2005.
[15] Ant Colony Optimization Available: http://en.wikipedia.org/wiki/ Ant_
colony_ optimization_ algorithms
[16] Karaboga, D., and Basturk, B., On the performance of artificial bee
colony (ABC) algorithm, Appl. Soft. Comput., 8(1), pp. 687-697, 2008.
[17] Akay, B., and Karaboga, D., Artificial bee colony algorithm for largescale problems and engineering design optimization, J. Intell. Manuf.,
23(4), pp. 1001-1014, 2012.
[18] Karaboga, D., (2005), An Idea Based on Honey Bee Swarm for
Numerical Optimization, Technical Report-TR06, Erciyes Univ.,
Engineering Faculty, Computer Engineering Department, Kayseri/Turkey.
[19] Singh, A., An artificial bee colony algorithm for the leaf-constrained
minimum spanning tree problem, Appl. Soft Comput., 9(2), pp. 625-631,
2009.
[20] Karaboga, D., and Ozturk, C., A novel clustering approach: Artificial
bee colony (ABC) algorithm, Appl. Soft Comput., 11(1), pp. 652-657,
2011.
[21] Ajorlou, S., Shams, I., and Aryanezhad, M.G., Optimization of a
Multiproduct CONWIP-based Manufacturing System using Artificial Bee
Colony Approach, Proc. of the International Multi-Conference of
Engineers and Computer Scientists(IMECS),Kowloon, Hong Kong, 2,
pp. 1385-1389, 2011.
[22] Artificial Bee Colony Algorithm., Available: http://en.wikipedia.org/wiki/
Artificial_ bee_colony_ algorithm
[23] Li, X.L., Shao, Z.J., and Qian, J.X., An Optimizing Method based on
Autonomous Animates: Fish-Swarm Algorithm, In.Proc.Systems
Engineering Theory and Practice, 22(11), pp. 32-38, 2002.


[24] Binitha, S., and Siva Sathya, S., A Survey of Bio inspired Optimization
Algorithms, International Journal of Soft Computing and Engineering,
2(2), pp. 137-151, 2012.
[25] Passino, K.M., Bacterial Foraging Optimization, International Journal
of Swarm Intelligence Research, 1(1), pp. 1-16, 2010.
[26] Thomas, R.M., Survey of Bacterial Foraging Optimization Algorithm,
International Journal of Innovative Science and Modern Engineering, 1(4),
pp. 11-12, 2013.
[27] Vipul Sharma, Pattnaik, S. S., and Tanuj Garg., A Review of Bacterial
Foraging Optimization and Its Applications, IJCA Proc. on National
Conference on Future Aspects of Artificial Intelligence in Industrial
Automation (NCFAAIIA), (1), pp. 9-12, 2012.
[28] Yang, X.S., Firefly Algorithm, Stochastic Test Functions and Design
Optimization, Int.J. Bio-Inspired Computation, 2(2), pp. 78-84, 2010.
[29] Horng, M.H., Lee, Y.X., Lee, M.C., and Liou, R.J., Firefly
metaheuristic algorithm for training the radial basis function network for
data classification and disease diagnosis, In: Parpinelli, R., Lopes, H.S.,
(eds) Theory and New Applications of Swarm Intelligence, pp. 115-132,
2012.
[30] Horng, M.H., Vector quantization using the firefly algorithm for image
compression, Expert Syst. Appl., 39, pp. 1078-1091, 2012.
[31] Banati, H., and Bajaj, M., Firefly based feature selection approach, Int.
J. Computer Science Issues, 8(2), pp. 473-480, 2011.
[32] Azad, S.K., and Azad, S.K., Optimum Design of Structures Using an
Improved Firefly Algorithm, Int. J. Optim.Civil Eng., 1(2), pp. 327-340,
2011.
[33] Gandomi, A. H., Yang, X. S., and Alavi, A. H., Cuckoo search
algorithm: a metaheuristic approach to solve structural optimization
problems, Engineering with Computers, 27, pp. 1-19, 2011.
[34] Basu, B., and Mahanti, G.K., Firefly and artificial bees colony
algorithm for synthesis of scanned and broadside linear array antenna,
Progress In Electromagnetics Research B, 32, pp. 169-190, 2011.
[35] Sayadi, M. K., Ramezanian, R., and Ghaffari-Nasab, N., A discrete
firefly meta-heuristic with local search for make span minimization in
permutation flow shop scheduling problems, Int. J. of Industrial
Engineering Computations, 1, pp. 1-10, 2010.


[36] Palit, S., et al., A cryptanalytic attack on the knapsack cryptosystem


using binary Firefly algorithm, In 2nd Int. Conference on Computer and
Communication Technology (ICCCT), India, pp. 428-432, 2011.
[37] Jati, G.K., and Suyanto, S., Evolutionary discrete firefly algorithm for
travelling sales-man problem, ICAIS2011, Lecture Notes in Artificial
Intelligence (LNAI 6943), pp. 393-403, 2011.
[38] Yousif, A., Abdullah, A.H., Nor, S. M., and abdelaziz, A. A., Scheduling
jobs on grid computing using firefly algorithm, J. Theoretical and Applied
Information Technology, 33(2), pp. 155-164, 2011.
[39] Senthilnath, J., Omkar, S. N., and Mani, V., Clustering using firefly
algorithm: performance study, Swarm and Evolutionary Computation,
1(3), pp. 164-171, 2011.
[40] Rajini, A., and David, V. K., A hybrid metaheuristic algorithm for
classification using micro array data, Int. J. Scientific & Engineering
Research, 3(2), pp. 1-9, 2012.
[41] Nandy, S., Sarkar, P. P., and Das, A., Analysis of nature-inspired firefly
algorithm based back-propagation neural network training, Int. J.
Computer Applications, 43(22), pp. 8-16, 2012.
[42] Kennedy, J., and Eberhart, R.C., Particle Swarm Optimization, In.
Proc. of the IEEE International Conference on Neural Networks IV, IEEE
Press, Piscataway, NJ, pp. 1942-1948, 1995.
[43] Clerc, M., (2006), Particle Swarm Optimization, ISTE Publishing
Company, London, UK.
[44] He, Q., and Wang, L., An effective co-evolutionary PSO for constrained
engineering design problems, Eng. Appl. Artif. Intell., 20, pp. 89-99, 2007.
[45] Liu, H., Cai, Z., and Wang, Y., Hybridizing PSO with differential
evolution for constrained numerical and engineering optimization, Appl.
Soft Comput., 10, pp. 629-640, 2010.
[46] Mandal, S., Ghosjal, S.P., Kar, R., and Mandal, D., Design of optimal
linear phase FIR high pass filter using craziness based PSO technique,
J.King Saud Univ. Comput. Inf. Sci., 24, pp. 83-92, 2012.
[47] Particle Swarm Optimization., Available: http://en.wikipedia.org/wiki/
Particle_swarm_ optimization
[48] Particle Swarm Optimization., Available: http://www.scholarpedia.or
g/article/Particle_swarm_ optimization


[49] Kao, C.C., and Torres, G.L., Applications of PSO to optimal power
systems, Inter. J. of Inno. Comp, Information and control, 8(3A),
pp. 1705-1716, 2012.
[50] Holland, J.H., Genetic algorithms and the optimal allocation of trials,
SIAM J. Comput., 2(2), pp. 88-105, 1973.
[51] Goldberg, D.E., (1989), Genetic algorithms in Search, Optimization,
and Machine learning, Addison-Wesley, Newyork.
[52] Goswami, B., and Mandal, D., A genetic algorithm for the level control
of nulls and side lobes in linear antenna arrays, J.King Saud Univ. Comput.
Inf. Sci., 25, pp. 117-126, 2013.
[53] Applications of Genetic Algorithm., Available: http://en.wikipedia.org/
wiki/List_of_genetic_algorithm_ applications
[54] Applications of Genetic Algorithm., Available: http://www.doc.ic.ac.uk/
~nd/surprise_96/journal/vol1/tcw2/article1.html
[55] Hoang Pham, (2006), Springer Handbook of Engineering Statistics,
pp. 749-773, Springer-Verlag, London.
[56] Applications of Genetic Algorithm., Available: http://neo.lcc.uma.es/
TutorialEA/semEC/cap03/ cap_3.html
[57] Man, K.F., Tang, K.S., and Kwong, S., Genetic Algorithms: Concepts
and Applications, IEEE Trans. on Industrial Electronics, 43(5), pp. 519-534, 1996.
[58] Simon, D., Biogeography -Based Optimization, IEEE Transactions on
Evolutionary Computation, 12(6), pp. 702-713, 2008.
[59] Khokhar, B., Parmar, K.P.S., and Dahiya, S., Application of
Biogeography-based Optimization for Economic Dispatch Problems,
International Journal of Computer Applications, 47(13), pp. 25-30, 2012.
[60] Lohokare, M.R., et al., Biogeography based optimization technique for
block based motion estimation in video coding, National Conference on
Computational Instrumentation, CSIO Chandigarh, India, pp. 67-71, 2010.

[61] Gupta, S., Bhuchar, K., and Sandhu, P., Implementing Color Image Segmentation Using Biogeography Based Optimization, International Conference on Software and Computer Applications, Kathmandu, Nepal, pp. 79-86, 2011.
[62] Kaur, R., and Khanna, R., Medical image quantization using
biogeography based optimization, International Journal of Computer
Applications, 48(12), pp. 8-11, 2012.
[63] Johal, N.K., Gupta, P., and Kaur, A., Face Recognition Using
biogeography based optimization, International Journal of Computer
Science and Information Security, 9(5), pp. 126-131, 2011.
[64] Nikumbh, S., Ghosh, S., and Jayaraman, V., Biogeography based
informative gene Selection and cancer classification using SVM and
random forests, IEEE World Congress on Computational Intelligence,
Brisbane, Australia, pp. 187-192, 2012.
[65] Panchal, V., Singh, P., Kaur, N., and Kundra, H., Biogeography based
satellite image classification, International Journal of Computer Science
and Information Security, 6(2), pp. 269-274, 2009.
[66] Storn, R., and Price, K., Differential Evolution - A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces, Journal of Global Optimization, 11(4), pp. 341-359, 1997.
[67] Regulwar, D.G., Choudhari, S.A., and Raj, P.A., DE Algorithm with
Application to optimal operation of Multipurpose Reservoir, J. Water
Resource and protection, 2, pp. 560-578, 2010.
[68] Chattopadhyay, S., Sanyal, S.K., and Chandra, A., Design of FIR Filter
using DE Optimization and to study its effect as a pulse-shaping filter in a
QPSK Modulated system, International Journal of Computer Science and
Network Security, 10(1), pp. 313-320, 2010.
[69] He, S., Wu, Q.H., and Saunders, J.R., Group Search Optimizer: An
Optimization Algorithm Inspired by Animal Searching Behavior, IEEE
Transactions on Evolutionary Computation, 13(5), pp. 973-990, 2009.
[70] Liu, F., Xu, X.T., Li, L.J., and Wu, Q.H., The GSO and its Application
on Truss Structure Design, The 4th International Conference on Natural
Computation, Jinan, China, pp. 688-692, 2008.
[71] Ensuff, M.M., and Lansey, K.E., Optimization of water distribution
network design using the shuffled frog leaping algorithm, Journal of water
resources planning and management, 129(3), pp. 210-225, 2003.
[72] Bhaduri, A., and Bhaduri, A., Color Image Segmentation Using Clonal Selection Based SFLA, In. Proc. of the International Conference on Advances in Recent Technologies in Communication and Computing, Kottayam, Kerala, pp. 517-520, 2009.
[73] Hui, L.X., Ye, Y., and Xia. L., Solving TSP with SFLA, Eighth
International Conference on Intelligent Systems Design and Applications,
Kaohsiung, 3, pp. 228-232, 2008.
[74] Huynh, T.H., and Nguyen, D.H., Fuzzy Controller Design using a new
SFLA, IEEE International Conference on Industrial Technology (ICIT),
Gippsland, VIC, pp. 1-6, 2009.
[75] Kundu, S., and Parth, D.R., Modified SFLA based 6DOF Motion for
Underwater Mobile Robot, First International Conference on
Computational Intelligence: Modeling Techniques and Applications
(CIMTA), Kolkata, India, 10, pp. 295-303, 2013.
[76] Yang, Wu., and Sun, Y., An Improved SFLA for Grid Task Scheduling,
International Conference on Network Computing and Information Security
(NICS), Guilin, China, 1, pp. 342-346, 2011.
[77] Chen, G., Combined Economic Emission Dispatch using SFLA,
International Conference on Information Engineering and Computer
Science(ICIECS), Wuhan, China, pp. 1-4, 2009.
[78] Li, J., Pan, Q., and Xie, S., An effective SFLA for multi-objective
flexible job shop scheduling problems, Applied Mathematics and
Computation, 218(18), pp. 9353-9371, 2012.
[79] Rashedi, E., Pour, H.N., and Saryazdi, S., GSA: A Gravitational Search
Algorithm, Information Sciences, 179(13), pp. 2232-2248, 2009.
[80] Ojugo, A. A., Emudianughe, Yoro, R. E., Okonta, E. O., and Eboka, A.
O., A Hybrid Artificial Neural Network Gravitational Search Algorithm for
Rainfall Runoffs Modeling and Simulation in Hydrology, Progress in
Intelligent Computing and Applications, 2(1), pp. 22-33, 2013.
[81] Seljanko, F., Hexapod Walking Robot Gait Generation Using GeneticGravitational Hybrid Algorithm, In the 15th International Conference on
Advanced Robotics, Tallinn, Estonia, pp. 253-258, 2011.
[82] Saucer, T. W., and Sih, V., Optimizing Nanophotonic Cavity Designs
with the Gravitational Search Algorithm, Optics Express, 21(18),
pp. 20831-20836, 2013.
[83] Davarynejad, M., Forghany, Z., and Berg, J.V.D., Mass-Dispersed Gravitational Search Algorithm for Gene Regulatory Network Model Parameter Identification, In. Proc. of the 9th International Conference, Simulated Evolution and Learning, Hanoi, Vietnam, 7673, pp. 62-72, 2012.
[84] Palanikkumar, D., Anbuselvan, P., and Rithu, B., A Gravitational Search
Algorithm for effective Web Service Selection for Composition with
enhanced QoS in SOA, International Journal of Computer Applications,
42(8), pp. 12-15, 2012.
[85] Han, X., and Chang, X., Chaotic secure communication based on a
gravitational search algorithm filter, Engineering Applications of Artificial
Intelligence, 25(4), pp. 766-774, 2012.
[86] Sun, G., and Zhang, A., A Hybrid Genetic Algorithm and Gravitational
Search Algorithm for Image Segmentation using Multilevel Thresholding,
Pattern Recognition and Image Analysis, 7887, pp. 707-714, 2013.
[87] Shafigh, P., Hadi, S.Y., and Sohrab, E., Gravitation based
classification, Information Sciences, 220, pp. 319-330, 2013.
[88] Li, C., et al., T-S Fuzzy Model Identification with a Gravitational
Search-Based Hyper plane Clustering Algorithm, IEEE Transactions on
Fuzzy Systems, 20(2), pp. 305-317, 2012.
[89] Pei, J., et al., Application of an Effective Modified Gravitational Search
Algorithm for the Coordinated Scheduling Problem in a Two-stage Supply
Chain, Int.J. Adv. Manuf. Technol., 70, pp. 335-348, 2013.
[90] Vijaya Kumar, J., Vinod Kumar, D. M., and Edukondalu, K., Strategic
bidding using fuzzy adaptive gravitational search algorithm in a pool based
electricity market, Applied Soft Computing, 13(5), pp. 2445-2455, 2012.
[91] Qasem, R. A., and Eldos, T., An Efficient Cell Placement Using
Gravitational Search Algorithms, J. Comput.Sci., 9(8), pp. 943948, 2013.
[92] Ganesan, T., Elamvazuthi, I., Shaari, K.Z.K., and Vasant, P., Swarm
intelligence and gravitational search algorithm for multi-objective
optimization of synthesis gas production, Applied Energy, 103,
pp. 368-374, 2013.
[93] Oliveira, P. B. D. M., Pires, E. J. S., and Novais, P., Gravitational
Search Algorithm Design of Posicast PID Control Systems, 7th
International Conference on Soft Computing Models in Industrial and
Environmental Applications (SOCO), Ostrava, Czech Republic, 188, pp. 191-199, 2013.

[94] Li, C., and Zhou, J., Parameters identification of hydraulic turbine
governing system using improved gravitational search algorithm, Energy
Conversion and Management, 52(1), pp. 374-381, 2011.
[95] Li, C., Zhou, J., Xiao, J., and Xiao, H., Hydraulic Turbine Governing
System Identification using T- S Fuzzy Model Optimized by Chaotic
Gravitational Search Algorithm, Engineering Applications of Artificial
Intelligence, 260(9), pp. 2073-2082, 2013.
[96] Parvin, J. R., and Vasanthanayaki, C., Gravitational Search Algorithm
Based Mobile Aggregator Sink Nodes for Energy Efficient Wireless Sensor
Networks, In International Conference on Circuits, Power and Computing
Technologies (ICCPCT-2013), Nagarcoil, India, pp. 1052-1058, 2013.
[97] Lee, K.S., and Geem, Z.W., A new meta-heuristic algorithm for
continuous engineering optimization: harmony search theory and practice,
Comput. Methods Appl. Mech. Engrg., 194(36-38), pp. 3902-3933, 2005.
[98] Geem, Z., School bus routing using harmony search, In Genetic and
Evolutionary Computation Conference (GECCO), Washington DC, USA,
pp. 1-6, 2005.
[99] Geem, Z.W., Harmony search algorithm for solving sudoku, In:
Apolloni, B., Howlett, R.J., Jain, L., [eds] KES, Part I.LNCN (LNAI),
Springer, Heidelberg, 4692, pp. 371-378, 2007.
[100] Geem, Z., Harmony search algorithm for the optimal design of largescale water distribution network, In Proc. 7th International IWA
Symposium on Systems Analysis and Integrated Assessment in Water
Management, IWA, Washington DC, USA, 2007,(CD-ROM).
[101] Geem, Z., and Hwangbo, H., Application of harmony search to multiobjective optimization for satellite heat pipe design, In Proc. US-Korea
Conference on Science, Technology, and Entrepreneurship , Teaneck, NJ,
USA, pp. 1-3, 2007.
[102] Geem, Z., Lee, K., and Tseng, C., Harmony search for structural
design, In Proc. 2005 conference on Genetic and evolutionary
computation, ACM New York, NY, USA, pp. 651- 652, 2005.
[103] Geem, Z., and Williams, J., Harmony search and ecological
optimization, International Journal of Energy and Environment, 1, pp. 150-154, 2007.
[104] Geem, Z. W., Optimal scheduling of multiple dam system using
harmony search algorithm, Lecture Notes in Computer Science, 4507, pp.
316-323, 2007.


[105] Geem, Z.W., and Choi, J.Y., Music composition using harmony search
algorithm, Lecture Notes in Computer Science, 4507, pp. 593-600, 2007.
[106] Geem, Z., Lee, K., and Park, Y., Application of harmony search to
vehicle routing, American Journal of Applied Sciences, 2(12), pp. 1552-1557, 2005.
[107] Al-Betar, M., Khader, A., and Gani, T., A harmony search algorithm for
university course timetabling, In 7th International Conference on the
Practice and Theory of Automated Timetabling (PATAT 2008), Montreal,
Canada, 194(1), pp. 3-31, 2008.
[108] Ryu, S., Duggal, A.S., Heyl, C. N., and Geem, Z. W., Mooring Cost
Optimization via Harmony Search, Proc. 26th ASME International
Conference on Offshore Mechanics and Arctic Engineering (OMAE 2007),
San Diego, CA, USA, 1, pp. 355-362, 2007.
[109] Karahan, H., Gurarslan, G., and Geem, Z.W., Parameter Estimation of
the nonlinear Muskingum flood routing model using a hybrid harmony
search algorithm, J. Hydrol. Eng., 18(3), pp. 352-360, 2012.
[110] Fesanghary, M., Damangir, E., and Soleimani, I., Design optimization of
shell and tube heat exchangers using global sensitivity analysis and
harmony search, Applied Thermal Engineering, 29, pp. 1026-1031, 2009.
[111] Kirkpatrick, S., Gelatt, C.D., and Vecchi, M.P., Optimization by simulated annealing, Science, 220(4598), pp. 671-680, 1983.
[112] Tian, P., Ma, J., and Zhang, D.M., Application of the simulated
annealing algorithm to the combinatorial optimization problem with
permutation property: An investigation of generation mechanism,
European Journal of Operational Research, 118(1), pp. 81-94, 1999.
[113] Van Laarhoven, J.P., and Aarts, E.H., (1987), Simulated Annealing:
Theory and Applications, 37, pp. 77-98, Kluwer Academic Publishers,
Norwell, USA.
[114] Reynolds, R.G., Introduction to cultural algorithms, Third Annual
Conference on Evolutionary Computing, Antony V.Sebald et al., eds., World
Scientific Press, Singapore, pp. 131-139, 1994.
[115] Cultural Algorithms., Available: http://en.wikipedia.org/wiki/
Cultural_algorithm
[116] Reynolds, R. G., (1999) Cultural Algorithms: Theory and Applications,
New Ideas in Optimization, McGraw-Hill Ltd, Maidenhead, UK,
pp. 367-378.

[117] Laarranaga, P., and Lozano, J. A., (2002), Estimation of Distribution


Algorithms, A New Tool for Evolutionary Computation, Kluwer
Academic Publishers, Boston, USA.
[118] Folly, K.A., and Sheetekela, S.P., Application of simple EDA to Power
System Controller design, 45th International Universities Power
Engineering Conference (UPEC), Wales, pp. 1-6, 2010.
[119] Ceberio, J., Irurozki, E., Mendiburu, A., and Lozano, A.J., A review of
EDA in permutation-based combinatorial optimization problems, Progress
in Artificial Intelligence, 1(1), pp. 103-117, 2012.
[120] Carnero, M., Hernandez, J.L., and Sanchez, M., EDA: Applications to
the Design of Process Sensor Networks, CLEI Electronic Journal, 12(3),
pp. 1-7, 2009.
[121] Hardware/Software Partitioning using EDA., Available:
http://www.ieee.org.hk/icspcc2014/paper2014/ 2142final paper.pdf.
[122] Bengoetxea, E., Larranaga, P., Bloch, L., and Perchant, A., EDA: A New
Evolutionary Computation Approach for Graph Matching Problems, Third
International Workshop EMMCVPR, Sophia Antipolis, France,
pp. 454-469, 2001.
[123] Armananzas, R., et al., A review of EDA in bioinformatics, BioData
Mining, 1(6), pp. 1-12, 2008.
[124] Tizhoosh, H.R., Opposition Based Learning: A New Scheme for
Machine Intelligence, International Conference on Computational
Intelligence for Modelling, Control and Automation /International
Conference on Intelligent Agents, Web Technologies and Internet
Commerce [CIMCA / IAWTIC], Vienna, Austria, pp. 695-701, 2005.
[125] Rahnamayan, S., (2007), Opposition Based Differential Evolution,
Ph.D. thesis, Department of Systems Design Engineering, University of
Waterloo, Waterloo, Canada, pp. 49-60.
[126] Ergezer, M., and Simon, D., Oppositional Biogeography-based
optimization for combinatorial problems, IEEE Congress on Evolutionary
Computation (CEC), Orleans, LA, pp. 1496-1503, 2011.
[127] Opposition-based Particle Swarm Algorithm., Available:
http://www.cs.le.ac.uk/people/cl160/Papers/CEC07.pdf
[128] Haiping, Ma., Xieyong, R., and Baogen, J., Oppositional ant colony
optimization algorithm and its application to fault monitoring, 29th Chinese
Control Conference (CCC), Beijing, pp. 3895-3898, 2010.


[129] Iqbal, M.A., Khan, N.K., Mujtaba, H., and Baig, R.A., A Novel
Function Optimization Approach Using Opposition based GA with Gene
Excitation, International Journal of Innovative Computing, Information
and Control, 7(7B), pp. 4263-4276, 2011.
[130] Glover, F., and McMillan, C., The general employee scheduling
problem: an integration of MS and AI, Computers and Operations
Research, 13(5), pp. 563-573, 1986.
[131] Glover, F., Tabu Search Part I, ORSA Journal on Computing, 1(3),
pp. 190-206, 1989.
[132] Glover, F., Tabu Search Part II, ORSA Journal on Computing, 2(1),
pp. 4-32, 1990.
[133] Glover, F., Tabu Search: A Tutorial, Interfaces, 20(4), pp. 74-94, 1990.
[134] Rao, R.V., Savsani, V.J., and Vakharia, D.P., Teaching-learning-based
optimization: A novel method for constrained mechanical design
optimization problems, Computer-Aided Design, 43, pp. 303-315, 2011.
[135] Satapathy, S.C., and Naik, A., Data Clustering Based on TLBO,
Swarm, Evolutionary, and Memetic Computing Lecture Notes in Computer
Science, 7077, pp. 148-156, 2011.
[136] Zou, F., et al., Multiobjective optimization using TLBO algorithm,
Engineering Applications of Artificial Intelligence, 26(4), pp. 1291-1300,
2013.
[137] Nayak, M.R., Nayak, C.K., and Rout, P.K., Application of
Multiobjective TLBO algorithm to optimal power flow problems, 2nd
International Conference on Communication, Computing & Security
[ICCCS], Rourkela, India, 6, pp. 255-264, 2012.
[138] Baghlani, A., and Makiabdi, M.H., TLBO Algorithm for Shape & Size
Optimization of Truss structure with dynamic frequency constraints, IJST
Transactions of Civil Engineering, 37, pp. 409-421, 2013.
[139] Satapathy, S.C., Naik, A., and Parvathi, K., A TLBO based on
orthogonal design for solving global optimization problems, Springer Plus,
2:130, pp. 1-12, 2013.
[140] Ganesh, S.B., and Reddy, S., TLBO for EDP with Valve point loading
effect, International Journal of Education and Applied Research, 4(1),
pp. 9-15, 2014.
[141] Kaur, D., and Kaur, R., A design of IIR based digital hearing aids using
TLBO, International Journal of Computer Engineering & Applications,
3(2/3), pp. 182-190, 2013.
[142] Lakshmi Reddy, Y., and Sydulu, M., TLBO Algorithm for
Reconfiguration in Radial Distribution Systems for Loss Reduction,
International Journal of Advanced Engineering and Global Technology,
2(4), pp. 622-626, 2014.
[143] Roy, P.K., Sur, A., and Pradhan, D.K., Optimal short-term hydro-thermal scheduling using quasi-oppositional teaching learning based
optimization, Engineering Applications of Artificial Intelligence, 26(10),
pp. 2516-2524, 2013.
[144] Sakellaris, J.K., Finite Element Analysis of Micro Electro
Mechanical Systems: Towards the integration of MEMS in design and
robust optimal control schemes of smart microstructures, WSEAS
Transactions on Applied and Theoretical Mechanics, 3, pp. 114-124, 2008.
[145] Singh, A., Prince, A.A., and Agrawal, V.P., Design Optimization &
Comparison of RF Power Sensors based on MEMS, International Journal
of Recent Trends in Engineering, 1(4), pp. 64-67, 2009.
[146] Chen, X., Cui, W., and Xue, W., Process Modeling and Device-Package
Simulation for Optimization of MEMS Gyroscopes, Computer-Aided
Design and Applications, 6(3), pp. 375-386, 2009.
[147] Chandrana, C., et al., Design and Analysis of MEMS Based PVDF
Ultrasonic Transducers for Vascular Imaging, Journal on Sensors, 10, pp.
8740-8750, 2010.
[148] Długosz, A., Multiobjective evolutionary optimization of MEMS
structures, Computer Assisted Mechanics and Engineering Sciences, 17(1),
pp. 41-50, 2010.
[149] Pathak, R., and Joshi, S., Optimizing reliability modeling of MEMS
devices based on their applications, World Journal of Modeling and
Simulation, 7(2), pp. 139-154, 2011.
[150] Naduvinamani, S.N., Sheeparamatti, B.G., and Kalalbandi, S.V.,
Simulation of Cantilever Based RF-MEMS Switch Using CoventorWare,
World Journal of Science and Technology, 1(8), pp. 149-153, 2011.
[151] Nagpal, P., Mehta, Rangra, K., and Aggarwal, R., Optimization of
Capacitive MEMS Pressure Sensor for RF Telemetry, International Journal
of Scientific & Engineering Research, 2(10), pp. 1-4, 2011.
[152] Jain, S., Chechi, D., and Chawla, P., Performance Study of RF MEMS
Ohmic Series Switch, International Journal of Advanced Research in
Computer Science and Software Engineering, 2(8), pp. 485-488, 2012.
[153] Zhang, Y., Kamalian, R., Agogino, A.M., and Séquin, C.H., Design
Synthesis of MicroElectroMechanical Systems Using Genetic Algorithms
with Component-based Genotype Representation, Proc. of the 8th Annual
Conference on Genetic and Evolutionary Computation, ACM Press, New
York, USA, pp. 731-738, 2006.
[154] Jain, A., Greve, D., and Oppenheim, J., A MEMS Transducer for
Ultrasonic Flow Detection, ISARC, Washington, USA, pp. 375-386, 2002.
[155] Attoh-Okine, N. O., and Mensah, S., MEMS Application in Pavement
Condition Monitoring-Challenges, ISARC, Washington, USA,
pp. 387-391, 2002.
[156] Kamalian, R.H., Agogino, A.M., and Takagi, H., The Role of
Constraints and Human Interaction in Evolving MEMS Designs: Micro
resonator Case Study, In Proc. of ASME Design Engineering Technical
Conferences and Computers and Information in Engineering Conference,
Salt Lake City, Utah, USA, pp. 1-9, 2004.
[157] Li, H., and Antonsson, E.K., Evolutionary Techniques in MEMS
Synthesis, 25th Biennial Mechanisms Conf., ASME Design Engineering
Technical Conf., #DETC98/MECH-5840, Atlanta, Georgia, 1998.
[158] Ma, L., and Antonsson, E.K., Automated Mask-layout and Process
Synthesis for MEMS, Proc. of the Modeling and Simulation of
Microsystems Conference, San Diego, pp. 20-23, 2000.
[159] Li, J., Gao, S., and Liu, Y., Solid-based CAPP for Surface
Micromachined MEMS Devices, Computer-Aided Design, 39(3), pp. 190-201, 2007.

[160] Obadat, M., Hosin, L., Bhatti, M. A., and Mclean, B., Full Scale Field
Evaluation of MEMS Based Bi-axial Transducer, Presented at the 82nd
Annual Meeting of the Transportation Research Board, Washington, D.C,
pp. 1-17, 2003.
[161] Benmessaoud, M., and Nasreddine, M.M., Optimization of MEMS
capacitive accelerometer, Microsystem Technologies, 19(5), pp. 713-720, 2013.
[162] Sabouhi, H.R., and Baghelani, M., Design of a Shock Immune MEMS
Acceleration Sensor and Optimization by Genetic Algorithm, J. Basic.
Appl. Sci. Res., 2(10), pp. 10480-10488, 2012.
[163] Allen, M. S., Massad, J. E., Field, R. V., and Dyck, C. W., Input and
Design Optimization Under Uncertainty to Minimize the Impact Velocity of
an Electrostatically Actuated MEMS Switch, J. Vib. Acoust., 130(2),
pp. 1-9, 2008.
[164] Genetic Algorithms: A Tutorial., Available:
www.sau.ac.in/~vivek/softcomp/ga-tutorial.ppt.
[165] Grefenstette, J., GENESIS, Navy Center for Applied Research in
Artificial Intelligence, Naval Research Laboratory, Washington, D.C. 20375-5000, 1993.
[166] Sivanandam, S.N., and Deepa, S.N., (2008), Introduction to Genetic
Algorithms, Springer-Verlag, Berlin Heidelberg.
[167] Holland, J. H., (1992), Adaptation in Natural and Artificial Systems: An
Introductory Analysis with Applications to Biology, Control, and Artificial
Intelligence, The MIT Press, Cambridge, MA, USA.
[168] Deb, K., An Introduction to Genetic Algorithms, Sadhana, 24
(4-5), pp. 293-315, 1999.
[169] Karaboga, D., and Akay, B., A comparative study of Artificial Bee
Colony algorithm, Applied Mathematics and Computation, 214,
pp. 108-132, 2009.
[170] Shokouhifar, M., and Abkenar, G.S., An Artificial Bee Colony
Optimization for MRI Fuzzy Segmentation of Brain Tissue, In Proc. of
International Conference on Management and Artificial Intelligence,
Indonesia, 6, pp. 6-10, 2011.
[171] Hadidi, A., Azad, S.K., and Azad, S.K., Structural optimization using
artificial bee colony algorithm, In Proc. of 2nd International Conference on
Engineering Optimization, Lisbon, Portugal, 2010, (CD-ROM).
[172] Karaboga, N., and Cetinkaya, M.B., A novel and efficient algorithm for
adaptive filtering: Artificial bee colony algorithm, Turk. J. Elec. Eng. &
Comp Sci., 19(1), pp. 175-190, 2011.
[173] Stanarevic, N., Tuba, M., and Bacanin, N., Modified artificial bee
colony algorithm for constrained problems optimization, International
Journal of Mathematical Models and Methods in Applied Sciences, 5(3),
pp. 644-651, 2011.
[174] Vijayarani, S., and Sathiya prabha, M., Association Rule Hiding using
Artificial Bee Colony Algorithm, International Journal of Computer
Applications, 33(2), pp. 41-47, 2011.
[175] Ma, M., et al., SAR image segmentation based on Artificial Bee Colony
algorithm, Applied Soft Computing, 11, pp. 5205-5214, 2011.
[176] Karaboga, D., and Ozturk, C., Fuzzy clustering with artificial bee
colony algorithm, Scientific Research and Essays, 5(14), pp. 1899-1902,
2010.
[177] Karaboga, D., and Basturk, B., A powerful and efficient algorithm for
numerical function optimization: artificial bee colony (ABC) algorithm, J.
Glob. Optim., 39(3), pp. 459-471, 2007.
[178] Srinivasa Rao, R., Narasimham, S.V.L., and Ramalingaraju, M.,
Optimization of Distribution Network Configuration for Loss Reduction
Using Artificial Bee Colony Algorithm, World Academy of Science,
Engineering and Technology, 2, pp. 620-626, 2008.
[179] Simon, D., (2013), Evolutionary Optimization Algorithms, John Wiley
& Sons, Hoboken, NJ, USA.

LIST OF PUBLICATIONS
In connection with this Thesis

I. JOURNAL

1. Krushnasamy, V.S., and Vimala Juliet, A., MEMS Accelerometer Design
Optimization Using Genetic Algorithm, Advanced Materials Research, 705,
pp. 288-294, 2013.
DOI: http://dx.doi.org/10.4028/www.scientific.net/AMR.705.288
(SNIP-0.377). Indexed in Scopus, EI Compendex, and Google Scholar

2. Krushnasamy, V.S., and Vimala Juliet, A., Design Parameter Optimization
Based on Artificial Bee Colony Algorithm for MEMS Accelerometers,
Journal of Theoretical and Applied Information Technology, 60(2),
pp. 274-283, 2014.
URL: http://www.jatit.org/volumes/Vol60No2/11Vol60No2.pdf
(SNIP-0.592). Indexed in Scopus, DOAJ and Google Scholar

3. Krushnasamy, V.S., and Vimala Juliet, A., Optimization of MEMS
Accelerometer Parameter with Combination of Artificial Bee Colony (ABC)
Algorithm and Particle Swarm Optimization, Journal of Artificial
Intelligence, 7(2), pp. 69-81, 2014.
DOI: http://dx.doi.org/10.3923/jai.2014.69.81
(SNIP-1.749). Indexed in Scopus, DOAJ and Google Scholar

4. Venkatesh, M., and Krushnasamy, V.S., Design and Analysis of Double
Folded Beam MEMS Accelerometer, International Journal of Advanced
Research in Electrical, Electronics and Instrumentation Engineering, 3(3),
pp. 8193-8199, 2014.
URL: http://www.ijareeie.com/upload/2014/march/60_Design.pdf
(IF-1.686). Indexed in Index Copernicus, DOAJ and Google Scholar

II. INTERNATIONAL AND NATIONAL CONFERENCES

1. Krushnasamy, V.S., and Vimala Juliet, A., MEMS Accelerometer Design
Optimization Using Genetic Algorithm, International Conference on MEMS
and Mechanics, Wuhan, China, 2013.

2. Venkatesh, M., and Krushnasamy, V.S., Design and Analysis of Double
Folded Beam MEMS Accelerometer, 2nd National Conference on Recent
Trends in Instrumentation Control & Automation (NCRTICA14), Chennai,
India.

VITAE


KRUSHNASAMY V.S. received his Bachelor's degree in Electronics and
Instrumentation Engineering from Madras University in the summer of 2000 and his
M.Tech. degree in Industrial Engineering from Dr. M.G.R. University, Chennai, in 2006.
He has been an Assistant Professor (Senior Grade) in the Department of
Instrumentation and Control Engineering at S.R.M University, Kattankulathur,
Chennai, India, since 2007.

He is presently pursuing his Ph.D. degree in Instrumentation and Control
Engineering at S.R.M University in the field of Micro Electro Mechanical Systems
(MEMS). His research interests are soft computing, MEMS, control systems, and
transducer engineering. He is a life member of the Indian Society for Technical
Education, India (MISTE), and The Institution of Engineering and Technology (IET),
India.
