
OPTIMIZATION TECHNIQUE

IN
ELECTRICAL MACHINE DESIGN
COMPUTER AIDED DESIGNING

• The application of computers in electrical machine design was introduced in 1950.

• Initially, the use of computers was limited to transformer design, but after 1956 designers
started using computers in rotating machine design.
• The digital computer has completely revolutionized the field of machine design.
• Computers eliminate the tedious and time-consuming hand calculations and thus accelerate
the design process.
• A high rate of performing calculations at reasonable cost and the ability to carry out logical
decisions are the most important qualities of present-generation computers.
• In 1959, two commonly accepted approaches to machine design were introduced: the
analysis method and the synthesis method. A hybrid method combines the two.
Analysis Method
Synthesis Method
Hybrid Method
ANALYSIS METHOD

• The analysis method means using the computer only for the purpose of analysis,
leaving all exercise of judgement to the designer.
• In this method, the choices of dimensions, material, and type of construction are
made by the designer and fed to the computer as input.
• The performance is calculated by the computer and is returned to the designer for
examination.
• The designer examines the performance and makes changes if required.
• This process is repeated until a satisfactory result is obtained.
FLOWCHART FOR ANALYSIS METHOD
SYNTHESIS METHOD

• In this method, the desired performance is given as input to the computer.

• The logical decisions required to modify the values of the variables to arrive at the desired
performance are incorporated in the program as a set of instructions.
• The program run is not interrupted for the designer to take logical decisions.
FLOWCHART FOR SYNTHESIS METHOD
HYBRID METHOD

• This method incorporates both the analysis and the synthesis method.
• Since the synthesis method involves greater cost, the major part of the design is
based on the analysis method.
• Only a limited portion of the design is done using the synthesis method.
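The synthesis loop described above can be illustrated with a toy sketch. The performance model, variable names, and numbers below are hypothetical, not a real machine model; the point is only that the logical decision ("increase the conductor area while efficiency is below target") is coded as instructions, so the run is never interrupted for the designer.

```python
def analyze(conductor_area):
    # Toy performance model (hypothetical): copper loss falls as area grows.
    copper_loss = 50.0 / conductor_area                 # W, illustrative
    output_power = 1000.0                               # W, illustrative
    return output_power / (output_power + copper_loss)  # efficiency

def synthesize(target_efficiency, area=1.0, step=0.1, max_iter=10000):
    # Synthesis method: the logical decision is part of the program itself.
    for _ in range(max_iter):
        if analyze(area) >= target_efficiency:
            return area
        area += step
    raise RuntimeError("desired performance not reachable")
```

In the analysis method, by contrast, only `analyze` would run on the computer, and the designer would choose each new `area` by hand.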
Various objective parameters for optimisation in an
electrical machine:
• Higher efficiency
• Lower weight for a given kVA output (kg/kVA)
• Lower temperature rise
• Lower cost
• Any other parameter, like higher PF for an induction motor, higher reactance, etc.
FLOWCHART FOR COMPUTER AIDED DESIGN OF TRANSFORMER
FLOWCHART FOR COMPUTER AIDED DESIGN OF 3-PH INDUCTION MOTOR
WHAT IS OPTIMIZATION?
• Optimization - an act, process, or methodology of making something (such as a design, system, or
decision) as fully perfect, functional, or effective as possible.
• In the simplest case, an optimization problem consists of maximizing or minimizing a real function by
systematically choosing input values from within an allowed set and computing the value of the
function.
• More generally, optimization includes finding "best available" values of some objective function
given a defined domain (or input), including a variety of different types of objective functions and
different types of domains.
• It is primarily used in those design activities in which the goal is not only to achieve a
feasible design, but also a design which is fully functional and effective and takes maximum
advantage of the available resources.
• An optimization algorithm is a procedure which is executed iteratively by comparing various
solutions till the optimum or a satisfactory solution is found.
• With the advent of computers, optimization has become a part of computer- aided design activities.
• In most engineering design activities, the design objective could be simply to minimize the cost
of production or to maximize the efficiency of production.
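The idea of "comparing various solutions iteratively until a satisfactory solution is found" can be sketched in its simplest possible form as a random search. This is illustrative only; the objective function below is a stand-in, not a machine-design objective.

```python
import random

def random_search(f, bounds, iters=5000, seed=0):
    # Iteratively compare candidate solutions, keeping the best seen so far.
    rng = random.Random(seed)
    best_x, best_val = None, float("inf")
    for _ in range(iters):
        x = [rng.uniform(lo, hi) for lo, hi in bounds]
        value = f(x)
        if value < best_val:          # compare against the current optimum
            best_x, best_val = x, value
    return best_x, best_val
```

Real optimization algorithms replace the blind sampling with systematic rules, but the compare-and-keep-the-best loop is the same.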
WHERE IS OPTIMIZATION USED:

Optimization is utilized in almost every field. For example:-


• All engineering designs use optimization for choosing design
parameters to improve some objective.
• Much of data analysis is also optimization: extracting some
model parameters from data while minimizing some error
measure (e.g. fitting).
• Most business decisions are optimization: varying some decision
parameters to maximize profit (e.g. investment portfolios, supply
chains, etc.).
WHY OPTIMIZATION:

Optimization is done because:-


• It increases efficiency.
• It makes the product cheaper.
• It increases reliability.
• It makes the size more compact.
• It minimizes the power input.
• Overall, it gives the best output from the available input.
TERMS USED IN OPTIMISATION TECHNIQUES
DESIGN VARIABLES
•A design problem usually involves many design parameters, of which some are highly
sensitive to the proper working of the design.
• These parameters are called design variables in the parlance of the optimization
procedures.
•The choice of the important parameters in an optimization problem largely depends on
the user. However, it is important to understand that the efficiency and speed of
optimization algorithms depend, to a large extent, on the number of chosen
design variables.
• The first thumb rule of the formulation of an optimization problem is to choose as
few design variables as possible.
• The second thumb rule in the formulation of optimization problems is that the
number of complex equality constraints should be kept as low as possible.
CONSTRAINTS

•The constraints represent some functional relationships among the design variables
and other design parameters satisfying certain physical phenomenon and certain
resource limitations.
•There are usually two types of constraints that emerge from most considerations:
1. INEQUALITY CONSTRAINTS: state that the functional relationships among
design variables must be greater than, smaller than, or equal to a resource value.
2. EQUALITY CONSTRAINTS: state that the functional relationships should exactly
match a resource value.
OBJECTIVE FUNCTION
•The designer chooses the most important objective as the objective function of the
optimization problem; the other objectives are included as constraints by
restricting their values within a certain range.

•The objective function can be of two types: either it is to be
maximized or it is to be minimized.
VARIABLE BOUNDS
•The final task of the formulation is to set the minimum and maximum bounds on
each design variable.
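The formulation elements above (design variables, objective function, constraints, variable bounds) can be collected into a small sketch. The variable names, cost surface, and numbers are purely illustrative, not taken from a real machine design.

```python
# Design variables (illustrative names): x = [current_density, flux_density]

def objective(x):
    # objective function to be minimized (hypothetical cost surface)
    return (x[0] - 3.0) ** 2 + (x[1] - 1.5) ** 2

def inequality_constraint(x):
    # g(x) >= 0 form: an illustrative resource limit on the first variable
    return 4.0 - x[0]

# variable bounds: (minimum, maximum) for each design variable
bounds = [(1.0, 6.0), (0.5, 2.0)]

def feasible(x):
    in_bounds = all(lo <= xi <= hi for xi, (lo, hi) in zip(x, bounds))
    return in_bounds and inequality_constraint(x) >= 0.0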
OPTIMAL PROBLEM FORMULATION
The objective in a design problem and the associated design parameters vary from product
to product.
Different techniques need to be used in different problems.
The purpose of the formulation procedure is to create a mathematical model of the
optimal problem, which then can be solved using an optimization algorithm.
FLOWCHART FOR OPTIMIZATION PROCESS
MACHINE OPTIMIZATION FACTORS:

Machine design is influenced by the following factors:


• Economics:- Typically, machines are to be designed to have a minimum material cost and
manufacturing cost. On the other hand, the trade-off between capital cost and operational
cost should also be considered, especially for large machines.
• Material limitations:- The physical limits of materials generally determine the performance
and dimensions of the machine.
• Specifications and standards:- The design, performance and materials used are often
subject to specifications issued by NEMA or similar bodies.
• Special factors:- In some applications, special considerations may exist that dominate the
design. For example, aerospace motors require a design of minimum weight with maximum
reliability.
For the design of traction motors the emphasis is usually on reliability and the ability to
satisfy a torque-speed curve.
MACHINE OPTIMIZATION PARTS
Design features for electrical machines can be broken down into:
• Electrical design:- The supply voltage, frequency, and number of phases of the
machine are typically specified, and sometimes even the phase connection and
number of poles. The slot numbers, winding layouts, turns per phase, and wire sizes
are determined by the designer.
• Magnetic design:- The airgap diameter and machine stack length, or active length, are
calculated based on output power, speed, pole number, and cooling type. The slot
sizing, core height, and external stator diameter are also calculated depending on
various criteria.
• Insulation design:- Insulation material choice and its thickness in various parts of the
machine are designed based on supply voltage, thermal management, and the
environment in which the machine operates.
MACHINE OPTIMIZATION PARTS(contd)

• Thermal design:- Extracting the heat caused by losses from the
machine is imperative to keep the windings, core, and frame
temperatures within safe limits.
• Depending on the application or power level, various types of cooling
are used. Calculating the loss and temperature distribution and the
cooling system represents the thermal design.
• Mechanical design:- Mechanical design refers to critical rotating
speed, noise, and vibration modes, mechanical stress in the shaft and
frame, and its deformation displacement, bearing design, inertia
calculation, and forces on the winding end coils during the most
severe current transients.
OPTIMIZATION CRITERIA
•To state the conditions for a point to be an optimal point, we define three different
types of optimal points.
Local optimal point
Global optimal point
Inflection point

• Local optimal point: A point or solution x* is said to be a local optimal point if
there exists no point in the neighborhood of x* which is better than x*.
• In the parlance of minimization problems, a point x* is a locally minimal point if
no point in its neighborhood has a function value smaller than f(x*).
OPTIMIZATION CRITERIA (contd)
• Global optimal point: A point or solution x** is said to be a global
optimal point if there exists no point in the entire search space which is
better than the point x**.
Similarly, a point x** is a global minimal point if no point in
the entire search space has a function value smaller than f(x**).

• Inflection point: A point x* is said to be an inflection point if the function
value increases locally as x* increases and decreases locally as x*
decreases, or if the function value decreases locally as x* increases and
increases locally as x* decreases.
Optimization model

• Development of an optimization model can be divided into five major
phases:
– Collection of data
– Problem definition and formulation
– Model development
– Model validation and evaluation of performance
– Model application and interpretation of results
Data collection
– may be time consuming but is the fundamental basis of the model-building process
– extremely important phase of the model-building process
– the availability and accuracy of data can have considerable effect on the accuracy of the
model and on the ability to evaluate the model.
Problem definition and formulation, steps involved:
– identification of the decision variables;
– formulation of the model objective(s);
– the formulation of the model constraints.
In performing these steps one must consider the following.
– Identify the important elements that the problem consists of.
– Determine the number of independent variables, the number of equations required to
describe the system, and the number of unknown parameters.
– Evaluate the structure and complexity of the model
– Select the degree of accuracy required of the model
Model development includes:
– the mathematical description,
– parameter estimation,
– input development, and
– software development.
The model development phase is an iterative process that may require returning
to the model definition and formulation phase.
Model Validation and Evaluation
▪This phase checks the model as a whole.
Model validation consists of validating the assumptions and parameters of
the model.
The performance of the model is evaluated using standard performance
measures such as the root mean squared error (RMSE).
▪Sensitivity analysis is performed to test the model inputs and parameters.
▪This phase also is an iterative process and may require returning to
the model definition and formulation phase.
One important aspect of this process is that in most cases data used in the
formulation process should be different from that used in validation.
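As a small sketch, the root mean squared error measure mentioned above can be computed over a held-back validation set as follows (the sample numbers are illustrative):

```python
import math

def rmse(predicted, observed):
    # root mean squared error between model predictions and held-back data
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(predicted, observed))
                     / len(observed))
```

A lower RMSE on data that was not used in formulation indicates a better-validated model.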
Modeling Techniques
• Different modeling techniques have been developed to meet the
requirements of different types of optimization problems. The major
categories of modeling approaches are:
– classical optimization techniques,
– linear programming,
– nonlinear programming,
– geometric programming,
– dynamic programming,
– integer programming,
– stochastic programming,
– evolutionary algorithms, etc.
OPTIMIZATION ALGORITHMS

1. Single-variable optimization algorithms:
These algorithms are classified into two categories:
a) direct methods
b) gradient-based methods
2. Multi-variable optimization algorithms
3. Constrained optimization algorithms
4. Specialized optimization algorithms
5. Non-traditional optimization algorithms
SINGLE VARIABLE OPTIMIZATION ALGORITHM
•These algorithms can be used to solve minimization problems of the following type:
Minimize f(x),
where f(x) is the objective function and x is a real variable.
•The purpose of an optimization algorithm is to find a solution x for
which the function f(x) is minimum.
Two distinct types of algorithms are presented:
• Direct search methods use only objective function values to locate
the minimum point, and
• Gradient-based methods use the first and/or the second-order
derivatives of the objective function to locate the minimum point.
There also exist gradient-based optimization methods which do not really require
the function f(x) to be differentiable or continuous, since the derivatives can be
computed numerically.
SINGLE VARIABLE OPTIMIZATION ALGORITHMS

These algorithms provide a good understanding of the properties of
minimum and maximum points of a function and of how optimization
algorithms work iteratively to find the optimum point of a problem.
The different methods among single-variable optimization algorithms are:
1. Bracketing Methods
2. Region-Elimination Methods
3. Point-Estimation Methods
4. Gradient-based Methods
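As an illustration of a region-elimination method, here is a minimal golden-section search. This is a standard textbook algorithm, sketched here with an illustrative tolerance rather than taken from the slides:

```python
import math

def golden_section_search(f, a, b, tol=1e-6):
    # Region elimination: repeatedly shrink [a, b] while keeping the
    # bracketed minimum inside the interval.
    invphi = (math.sqrt(5.0) - 1.0) / 2.0   # 1/golden-ratio, about 0.618
    x1 = b - invphi * (b - a)
    x2 = a + invphi * (b - a)
    f1, f2 = f(x1), f(x2)
    while b - a > tol:
        if f1 < f2:                  # minimum lies in [a, x2]
            b, x2, f2 = x2, x1, f1
            x1 = b - invphi * (b - a)
            f1 = f(x1)
        else:                        # minimum lies in [x1, b]
            a, x1, f1 = x1, x2, f2
            x2 = a + invphi * (b - a)
            f2 = f(x2)
    return (a + b) / 2.0
```

Only function values are used, never derivatives, which is what distinguishes this direct method from the gradient-based ones.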
MULTI-VARIABLE OPTIMIZATION ALGORITHMS

These algorithms demonstrate how the search for the optimum point progresses in
multiple dimensions.
The different methods among multi-variable optimization algorithms are:
1. Unidirectional Search Method
2. Direct Search Methods
3. Gradient-based Methods
MULTIVARIABLE OPTIMIZATION ALGORITHMS
•In a multivariable function, the gradient of a function is not a scalar quantity;
instead it is a vector quantity.
•The optimality criteria can be derived by using the definition of a local optimal
point and Taylor's series expansion of a multivariable function.
•Without going into the details of the analysis, we state the optimality criteria for a
multivariable function below.
•Assume that the objective function is a function of N variables represented by x1,
x2, ..., xN.
•The gradient vector at any point x(t) is represented by ∇f(x(t)), which is an
N-dimensional vector given as follows:

∇f(x(t)) = (∂f/∂x1, ∂f/∂x2, ∂f/∂x3, ..., ∂f/∂xN)T | x(t)

•The first-order partial derivatives can be calculated numerically. However,
the second-order derivatives of a multivariable function form a matrix,
∇²f(x(t)) (better known as the Hessian matrix), given as follows:

             | ∂²f/∂x1²        ∂²f/(∂x1 ∂x2)   ...   ∂²f/(∂x1 ∂xN) |
             | ∂²f/(∂x2 ∂x1)   ∂²f/∂x2²        ...   ∂²f/(∂x2 ∂xN) |
∇²f(x(t)) =  |      .               .                      .       |
             | ∂²f/(∂xN ∂x1)   ∂²f/(∂xN ∂x2)   ...   ∂²f/∂xN²      |

•The second-order partial derivatives can also be calculated numerically. By
defining these derivatives, the optimality criteria can be presented.
•A point x̄ is a stationary point if ∇f(x̄) = 0. Furthermore, the point is a minimum if
∇²f(x̄) is positive-definite, a maximum if it is negative-definite, and an inflection
(saddle) point otherwise.
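The numerical calculation of the gradient and Hessian mentioned above can be sketched with central differences; the step sizes here are illustrative defaults:

```python
def gradient(f, x, h=1e-5):
    # central-difference approximation of the gradient vector at x
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        g.append((f(xp) - f(xm)) / (2.0 * h))
    return g

def hessian(f, x, h=1e-4):
    # central-difference approximation of the Hessian matrix at x
    n = len(x)
    H = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            xpp, xpm, xmp, xmm = list(x), list(x), list(x), list(x)
            xpp[i] += h; xpp[j] += h
            xpm[i] += h; xpm[j] -= h
            xmp[i] -= h; xmp[j] += h
            xmm[i] -= h; xmm[j] -= h
            H[i][j] = (f(xpp) - f(xpm) - f(xmp) + f(xmm)) / (4.0 * h * h)
    return H
```

For f(x) = x1² + 3·x2², for example, the gradient is (2x1, 6x2) and the Hessian is diag(2, 6), which is positive-definite, so the stationary point at the origin is a minimum.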
Unidirectional Search

•Many multivariable optimization techniques use successive unidirectional
searches to find the minimum point along a particular search direction.
•A unidirectional search is a one-dimensional search performed along a specified
direction from a given point, reducing the multivariable problem to a
single-variable problem along that line.
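A unidirectional search can be sketched as a one-dimensional scan along a given direction; the scan range and step count below are illustrative, and a practical implementation would use a proper single-variable method instead of a dense scan:

```python
def unidirectional_search(f, x, d, lo=-5.0, hi=5.0, steps=10001):
    # Scan alpha over [lo, hi], minimizing g(alpha) = f(x + alpha * d):
    # a multivariable problem reduced to one variable along direction d.
    best_alpha, best_val = lo, float("inf")
    for k in range(steps):
        alpha = lo + (hi - lo) * k / (steps - 1)
        point = [xi + alpha * di for xi, di in zip(x, d)]
        value = f(point)
        if value < best_val:
            best_alpha, best_val = alpha, value
    return [xi + best_alpha * di for xi, di in zip(x, d)], best_val
```

Multivariable algorithms repeat such line searches along successively chosen directions.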
Constrained Optimization Algorithm
•The constrained optimization algorithms can be grouped into direct and gradient-based
methods.
•Some of these algorithms employ single variable and multivariable unconstrained
optimization algorithms .
•A constrained optimization problem comprises an objective function together with a
number of equality and inequality constraints.
•Often, lower and upper bounds on the design or decision variables are also specified.
•Considering that there are N decision or design variables, we write a constrained
problem as follows:
Minimize f(x)

subject to
gj(x) ≥ 0,            j = 1, 2, ..., J;
hk(x) = 0,            k = 1, 2, ..., K;
xi(L) ≤ xi ≤ xi(U),   i = 1, 2, ..., N.

•The function f(x) is the objective function, which is to be minimized.
•The functions gj(x) and hk(x) are the inequality and equality constraint
functions, respectively.
•There are J inequality constraints and K equality constraints in the
above problem.
•Inequality constraints are expressed in the greater-than-or-equal-to form.
•A less-than-or-equal-to constraint can be transformed into
this form by multiplying the constraint by -1.
•Since none of the above functions (objective function or constraints) are assumed to
be linear, the above problem is also known as a Nonlinear Programming Problem or
simply an NLP problem.
• Even though the above problem is written for minimization, maximization
problems can also be solved by using the duality principle.
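One standard way to attack such an NLP problem (not detailed on these slides, but widely used) is a penalty method that folds constraint violations into the objective, turning the constrained problem into an unconstrained one. The penalty parameter R, the grid-search minimizer, and the example problem are all illustrative:

```python
def penalized(f, ineqs, eqs, R):
    # Bracket-operator penalty: for g(x) >= 0 constraints only violations
    # (g < 0) are penalized; equality constraints h(x) = 0 are always squared.
    def F(x):
        violation = sum(min(0.0, g(x)) ** 2 for g in ineqs)
        violation += sum(h(x) ** 2 for h in eqs)
        return f(x) + R * violation
    return F

# Example: minimize f(x) = x^2 subject to g(x) = x - 1 >= 0 (optimum at x = 1).
F = penalized(lambda x: x[0] ** 2, [lambda x: x[0] - 1.0], [], R=1e4)
best = min(([k / 1000.0] for k in range(0, 3001)), key=F)  # crude grid search
```

Any unconstrained single- or multivariable algorithm can then minimize F; as R grows, the minimizer of F approaches the constrained optimum.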
OTHER OPTIMISATION TECHNIQUES

•The conventional optimisation techniques have problems normally
associated with local solutions rather than global solutions, and with dependence on
the starting design vector.
• In this respect, the newer generation of methods, such as:
• Genetic algorithms
• Particle swarm optimisation
circumvent these issues, usually at the expense of increased search times.
Differential evolution
• Differential evolution (DE) is a method that optimizes a problem by iteratively trying to
improve a candidate solution with regard to a given measure of quality. Such methods
are commonly known as metaheuristics as they make few or no assumptions about the
problem being optimized and can search very large spaces of candidate solutions.
• DE is used for multidimensional real-valued functions but does not use the gradient of
the problem being optimized, which means DE does not require the optimization
problem to be differentiable, as is required by classic optimization methods such as
gradient descent and quasi-Newton methods. DE can therefore also be used on
optimization problems that are not even continuous, are noisy, change over time, etc.
• DE optimizes a problem by maintaining a population of candidate solutions and
creating new candidate solutions by combining existing ones according to its simple
formulae, and then keeping whichever candidate solution has the best score or
fitness on the optimization problem at hand.
• In this way the optimization problem is treated as a black box that merely provides a
measure of quality given a candidate solution and the gradient is therefore not
needed.
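A minimal sketch of the classic DE/rand/1/bin scheme follows; the population size, scale factor F, crossover rate CR, and generation count are illustrative defaults, not prescribed values:

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9,
                           generations=200, seed=1):
    rng = random.Random(seed)
    dim = len(bounds)
    # maintain a population of candidate solutions
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    scores = [f(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            # combine three existing candidates: mutant = a + F * (b - c)
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)  # force at least one mutated component
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == jrand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                else:
                    v = pop[i][j]
                lo, hi = bounds[j]
                trial.append(min(max(v, lo), hi))  # clip to the bounds
            ft = f(trial)
            if ft <= scores[i]:       # keep whichever has the better score
                pop[i], scores[i] = trial, ft
    best = min(range(pop_size), key=scores.__getitem__)
    return pop[best], scores[best]
```

Note that `f` is treated as a black box: only its value is ever used, never its gradient.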
Evolutionary algorithms

• An evolutionary algorithm (EA) is a subset of evolutionary computation, a generic
population-based metaheuristic optimization algorithm. An EA uses mechanisms inspired by
biological evolution, such as reproduction, mutation, recombination, and selection.
• Evolutionary algorithms often perform well approximating solutions to all types of problems
because they ideally do not make any assumption about the underlying fitness landscape.
• Techniques from evolutionary algorithms applied to the modeling of biological evolution are
generally limited to explorations of micro evolutionary processes and planning models based
upon cellular processes.
• In most real applications of EAs, computational complexity is a prohibiting factor. In fact, this
computational complexity is due to fitness function evaluation. Fitness approximation is one
of the solutions to overcome this difficulty. However, a seemingly simple EA can often solve
complex problems; therefore, there may be no direct link between algorithm complexity and
problem complexity.
IMPLEMENTATION

Step One: Generate the initial population of individuals randomly. (First generation)

Step Two: Evaluate the fitness of each individual in that population.

Step Three: Repeat the following regenerational steps until termination (time limit,
sufficient fitness achieved, etc.):

Select the best-fit individuals for reproduction. (Parents)
Breed new individuals through crossover and mutation operations to give birth to offspring.
Evaluate the individual fitness of the new individuals.
Replace the least-fit individuals of the population with the new individuals.
GENETIC ALGORITHM
• Genetic algorithms are commonly used to generate high-quality solutions to optimization and search
problems by relying on bio-inspired operators such as mutation, crossover and selection.
• In a genetic algorithm, a population of candidate solutions (called individuals, creatures, or phenotypes) to an
optimization problem is evolved toward better solutions.
• Each candidate solution has a set of properties (its chromosomes or genotype) which can be mutated and
altered;
• The evolution usually starts from a population of randomly generated individuals, and is an iterative process,
with the population in each iteration called a generation.
• In each generation, the fitness of every individual in the population is evaluated; the fitness is usually the
value of the objective function in the optimization problem being solved.
• The more fit individuals are stochastically selected from the current population, and each individual's genome
is modified (recombined and possibly randomly mutated) to form a new generation.
• The new generation of candidate solutions is then used in the next iteration of the algorithm.
• Commonly, the algorithm terminates when either a maximum number of generations has been produced, or a
satisfactory fitness level has been reached for the population.
Genetic Algorithm

The Genetic Algorithm can be summarised as follows:
i) Build a fitness function from the objective function and constraints.
ii) Create an initial population (usually randomly generated strings). The population
consists of N strings or chromosomes, which in turn are composed of
sub-strings or genes describing the design variables.
iii) Evaluate all of the individuals according to the fitness function
iv) Select a new population from the old population based on the fitness of the
individuals.
v) Apply genetic operators (mutation & crossover) to members of the population
to create new solutions.
vi) Evaluate these newly created individuals.
vii) Repeat steps iv to vi until the termination criterion has been satisfied.
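The steps above can be sketched as a minimal binary-coded GA. The onemax fitness used in the demo (count of 1-bits, to be maximized) and all parameter values are illustrative:

```python
import random

def genetic_algorithm(fitness, n_bits, pop_size=30, pc=0.9, pm=0.02,
                      generations=100, seed=3):
    rng = random.Random(seed)
    # ii) initial population of random bit-strings (chromosomes)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        # iii) evaluate all individuals with the fitness function
        scores = [fitness(ind) for ind in pop]

        # iv) selection: binary tournament on fitness
        def select():
            i, j = rng.randrange(pop_size), rng.randrange(pop_size)
            return pop[i] if scores[i] >= scores[j] else pop[j]

        children = []
        while len(children) < pop_size:
            p1, p2 = select(), select()
            # v) single-point crossover ...
            if rng.random() < pc:
                cut = rng.randrange(1, n_bits)
                c1 = p1[:cut] + p2[cut:]
                c2 = p2[:cut] + p1[cut:]
            else:
                c1, c2 = p1[:], p2[:]
            # ... and bit-flip mutation
            for c in (c1, c2):
                for k in range(n_bits):
                    if rng.random() < pm:
                        c[k] = 1 - c[k]
                children.append(c)
        pop = children[:pop_size]   # vii) next generation, repeat
    return max(pop, key=fitness)

# demo: maximize the number of 1-bits in a 20-bit string
best = genetic_algorithm(sum, 20)
```

A real design application would replace `sum` with a fitness built from the objective function and constraints (step i).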
GENETIC ALGORITHM
PARTICLE SWARM OPTIMIZATION
• In computer science, particle swarm optimization (PSO) is a computational method
that optimizes a problem by iteratively trying to improve a candidate solution with regard
to a given measure of quality.
• It solves a problem by having a population of candidate solutions, here dubbed particles,
and moving these particles around in the search-space according to
simple mathematical formulae over the particle's position and velocity.
• Each particle's movement is influenced by its local best known position, but is also
guided toward the best known positions in the search-space, which are updated as
better positions are found by other particles. This is expected to move the swarm
toward the best solutions.
• PSO is a metaheuristic, as it makes few or no assumptions about the problem being
optimized and can search very large spaces of candidate solutions.
• Also, PSO does not use the gradient of the problem being optimized, which means PSO
does not require that the optimization problem be differentiable, as is required by classic
optimization methods such as gradient descent and quasi-Newton methods.
PSO and GA TECHNIQUES
PSO shares many similarities with evolutionary computation techniques such as
Genetic Algorithms (GA).
• The system is initialized with a population of random solutions and searches for
optima by updating generations.
• However, unlike GA, PSO has no evolution operators such as crossover and
mutation. In PSO, the potential solutions, called particles, fly through the problem
space by following the current optimum particles.
• Each particle keeps track of its coordinates in the problem space which are
associated with the best solution (fitness) it has achieved so far. (The fitness value
is also stored.) This value is called pbest.
• Another "best" value that is tracked by the particle swarm optimizer is the best
value obtained so far by any particle in the neighbourhood of that particle. This
location is called lbest.
• When a particle takes all the population as its topological neighbors, the best value
is a global best and is called gbest.
• The particle swarm optimization concept consists of, at each time step, changing
the velocity of (accelerating) each particle toward its pbest and lbest locations
(local version of PSO).
• Acceleration is weighted by a random term, with separate random numbers being
generated for acceleration toward pbest and lbest locations.
• In the past several years, PSO has been successfully applied in many research and
application areas. It has been demonstrated that PSO gets better results in a faster, cheaper
way compared with other methods.
• Another reason that PSO is attractive is that there are few parameters to adjust. One
version, with slight variations, works well in a wide variety of applications.
• Particle swarm optimization has been used for approaches that can be used across a
wide range of applications, as well as for specific applications focused on a specific
requirement.
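The velocity-update concept described above can be sketched as follows. This is the global-best variant (it tracks gbest rather than lbest), and the inertia weight w and acceleration coefficients c1, c2 are illustrative values:

```python
import random

def pso(f, bounds, n_particles=20, w=0.7, c1=1.5, c2=1.5, iters=200, seed=7):
    rng = random.Random(seed)
    dim = len(bounds)
    X = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]               # each particle's best-known position
    pbest_val = [f(x) for x in X]
    g = min(range(n_particles), key=pbest_val.__getitem__)
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # swarm's best-known position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # acceleration toward pbest and gbest, each weighted by its
                # own random number
                r1, r2 = rng.random(), rng.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (pbest[i][d] - X[i][d])
                           + c2 * r2 * (gbest[d] - X[i][d]))
                X[i][d] += V[i][d]
            value = f(X[i])
            if value < pbest_val[i]:
                pbest[i], pbest_val[i] = X[i][:], value
                if value < gbest_val:
                    gbest, gbest_val = X[i][:], value
    return gbest, gbest_val
```

As in DE, no crossover or mutation operators appear: particles simply fly through the search space, pulled toward the best positions found so far.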
Summary
•Among the direct search methods, the evolutionary optimization method compares
2^N (where N is the number of design variables) function values at each iteration to
find the current best point.
•This algorithm usually requires a large number of function evaluations to find the
minimum point.
•The simplex search method also uses a number of points ((N+1) of them) at each
iteration to find one new point, which is subsequently used to replace the worst point
in the simplex.
• If the size of the simplex is large, this method has a tendency to wander about the
minimum.
CONCLUSION
•Optimization is slowly becoming an important and unavoidable part of the modern
electrical machine design process.
•Very often, design engineers primarily rely on their experience to obtain a machine
design suited for some particular purpose. This "classical" approach guarantees only
that a fully functional design will be accomplished.
•It does not ensure that this design will be accomplished with a minimum amount
of material, that it will consume a minimum amount of energy in its
operation, or that its initial cost will be the smallest possible.
•At the same time, these are very important factors that need to be considered to make
a machine more competitive on the market. If properly utilized, optimization will
lead to a design that satisfies all imposed requirements.
