
Expert Systems With Applications 106 (2018) 183–196


A selection method for evolutionary algorithms based on the Golden Section

Erik Cuevas a,b,∗, Luis Enríquez a, Daniel Zaldívar a,b, Marco Pérez-Cisneros a

a Departamento de Electrónica, Universidad de Guadalajara, CUCEI, Av. Revolución 1500, Guadalajara, Mexico
b Centro Tapatío Educativo A.C., Av. Juárez 340, Colonia Centro, Guadalajara, Mexico

Article history: Received 5 March 2017; Revised 5 March 2018; Accepted 31 March 2018; Available online 9 April 2018

Keywords: Evolutionary algorithms; Golden Section; Selection methods; Genetic algorithms (GA); Evolutionary strategies (ES); Genetic programming (GP); Evolutionary computation

Abstract

Over millions of years, nature has developed patterns and processes with interesting characteristics. They have been used as inspiration for a significant number of innovative models that can be extended to solve complex engineering and mathematical problems. One of the most famous patterns present in nature is the Golden Section (GS). It defines a special proportion that governs the adequate formation, selection, partition, and replication in several natural phenomena. On the other hand, evolutionary algorithms (EAs) are stochastic optimization methods based on the model of natural evolution. One important process in these schemes is the selection operation, which exerts a strong influence on the performance of their search strategy. Different selection methods have been reported in the literature. However, all of them present an unsatisfactory performance as a consequence of the deficient balance between elitism and diversity in their selection procedures. In this paper, a new selection method for evolutionary computation algorithms is introduced. In the proposed approach, the population is segmented into several groups. Each group involves a certain number of individuals and a probability of being selected, both determined according to the GS proportion. Therefore, the individuals are divided into categories where each group contains individuals of similar quality regarding their fitness values. Since every element inside a group has the same possibility of being chosen, the probability of selecting an individual depends exclusively on the group to which it belongs. Under these conditions, the proposed approach defines a better balance between elitism and diversity in the selection strategy. Numerical simulations show that the proposed method outperforms other selection algorithms in terms of solution quality and convergence speed.

© 2018 Elsevier Ltd. All rights reserved.

∗ Corresponding author at: Departamento de Electrónica, Universidad de Guadalajara, CUCEI, Av. Revolución 1500, Guadalajara, Mexico.
E-mail addresses: erik.cuevas@cucei.udg.mx, cuevas@inf.fu-berlin.de (E. Cuevas), daniel.zaldivar@cucei.udg.mx (D. Zaldívar), marco.perez@cucei.udg.mx (M. Pérez-Cisneros).
https://doi.org/10.1016/j.eswa.2018.03.064
0957-4174/© 2018 Elsevier Ltd. All rights reserved.

1. Introduction

Expert systems (Jackson, 1998) are approaches commonly adopted to support decision-making processes and problem-solving applications. Some of their main characteristics include their ability to solve complex problems and their capacity to produce consistent decisions. One of the most critical operations in the decision-making process is the evaluation or ranking of all possible alternatives of action in order to find the best solution for a particular problem. Therefore, optimization algorithms work cooperatively with expert system schemes in the efficient search for potential solutions in the decision-making process.

Evolutionary algorithms (EAs) (De Jong, 2006) are optimization methods based on the model of natural evolution. In general, Genetic Algorithms (GA) (Goldberg, 1989) are the most popular representatives of such techniques. EAs operate on a population P^k ({p_1^k, p_2^k, …, p_N^k}) of N candidate solutions (individuals) that evolve from the initial point (k = 0) to a total gen number of iterations (k = gen). A candidate solution p_i^k (i ∈ [1, …, N]) represents a d-dimensional vector {p_{i,1}^k, p_{i,2}^k, …, p_{i,d}^k} where each dimension corresponds to a decision variable of the optimization problem. In each iteration, a set of stochastic operations is applied over the population P^k to build the new population P^{k+1}. Such operations are mutation, recombination, and selection. The quality of each candidate solution p_i^k is evaluated by using an objective function f(p_i^k) whose final result represents the fitness value of p_i^k.

The effect of the evolutionary operators on the search strategy has been extensively demonstrated and documented (Hancock et al., 1994). Mutation incorporates modifications in the population as a mechanism to escape from local optima. Recombination interchanges essential characteristics of the search space among individuals. Selection (Bäck, 1992) conducts the search strategy towards promising regions of the search space by using the information currently available in the population. Mutation and recombination (Bäck, 1992) allow the exploration of the search space, while selection exploits the information already present in the population with the objective of improving it.

The balance between exploration and exploitation (Nandar & Ponnuthurai, 2015) in a search strategy can be interpreted as the conflicting action of increasing the solution diversity while simultaneously refining the already known solutions, which mainly maintain the best fitness values. In EAs, this balance is critical in order to achieve a good performance of the search strategy. Under such circumstances, the selection operator provides an important mechanism to modify the exploration-exploitation relation (Bäck & Hoffmeister, 1991). Increasing the intensity of selecting individuals with high fitness values augments the exploitation of the search strategy (Bäck, 1994). On the other hand, decreasing the emphasis on selecting such individuals permits the selection of low-quality solutions. Under these conditions, the exploration of the optimization strategy is promoted (Baker, 1987).

Different selection techniques have been proposed in the literature with different performance levels. The most popular methods are the roulette method (Holland, 1975), the tournament selection (Blickle, Thiele, & Eshelman, 1995) and the linear ranking (Darrell, 1989). The proportional method assigns a selection probability to each individual regarding its relative fitness value. This mechanism presents several flaws in case of negative objective function values or minimization tasks (Noraini & Geraghty, 2011). In the tournament selection technique, the best solution of a set of q different individuals, randomly obtained from the whole population, is selected. The standard size of the tournament set is q = 2. Under tournament, it has been demonstrated that the selective pressure is very low, favoring exploration extensively while adversely punishing the exploitation of the search strategy (Brad & Goldberg, 1995). Finally, the linear ranking method uses a linear function to map the ranking index of each solution to a selection probability. Although linear ranking maintains a good selective pressure, the method presents a critical difficulty. Since linear ranking assigns a selection probability to each solution depending on its respective ranking index, two solutions with the same fitness value can obtain very different selection probabilities. This inconsistency adversely affects the performance of the search strategy during the optimization (Blickle & Thiele, 1996). In general, most of the selection methods present an unsatisfactory performance as a consequence of the deficient balance between elitism and diversity of their selection procedures. To provide an overview of all methods, we summarize their comparative features in Table 1.

Table 1
Comparative overview of classical selection methods.

Method | Characteristic | Disadvantage
Proportional method (Holland, 1975) | It assigns a selection probability to each individual in terms of its relative fitness value. | Problems in case of negative objective function values or minimization tasks.
Tournament selection (Blickle et al., 1995) | It selects the best solution of a set of q different individuals. | Low selective pressure that promotes exploration extensively, but adversely punishes exploitation.
Linear ranking (Darrell, 1989) | It uses a linear function to map the ranking index of each solution to a selection probability. | Solutions with the same fitness value can obtain very different selection probabilities, affecting the search strategy.

A selection operator is entirely independent of the evolutionary algorithm (EA). In particular, any selection method can be modified or replaced, regardless of the global structure or the rest of the operators used for a specific EA (De la Maza & Tidor, 1993). In spite of the importance of the selection operation in the performance of EAs, research on the design or modification of selection operators has been practically overlooked.

The concept of selective pressure (Li et al., 2015; Pascal et al., 2011) has been extensively used in the literature to characterize the performance of a selection approach. The index of takeover time (Goldberg & Deb, 1991) allows evaluating the selective pressure of a given selection method adequately. It quantifies the number of generations required by a selection method to fill the complete population with copies of the best initial solution. Therefore, initially, the population presents only a single best individual g (g_1, g_2, …, g_d) and N−1 worse elements. Then, the selection method is operated until the whole population contains N copies of g.

On the other hand, nature is an exciting and inexhaustible source of solutions to several biological problems, which were solved as a result of natural selection during millions of years of evolution (Julian, Olga, Nikolaj, Bowyer, & Pahl, 2006; Sonya & William, 2010). Nature has suggested important patterns with interesting characteristics. They have been used as inspiration for a significant number of innovative models that can be extended to solve complex engineering and mathematical problems (Clark, Kok, & Lacroix, 1999). One of the most famous patterns present in nature is the Golden Section (GS) (Benavoli & Chisci, 2009; Newell & Pennybacker, 2013). It defines a special proportion that governs the formation, selection, partition, and replication of several natural phenomena (Walser, 2001). It appears in a variety of schemes, including the geometry of crystals, the spacing of stems in plants, the proportion of body parts in animals, and the proportion of feature sizes in the human face (Dunlap, 1997). The GS, sometimes known as the golden ratio or golden number, has been studied widely and has attracted the interest of many scientific communities. As a result, its use has been extended to several disciplines such as architecture (Krishnendra, 2015), arts (Loai, 2012), engineering, industrial design (Lu, 2003), biology (Ciucurel, Georgescu, & Iconaru, 2018) and quantum mechanics (Coldea et al., 2010).

As an alternative to traditional selection methods, in this paper a new simple selection method for EAs is introduced. In the proposed method, the population is segmented into several groups. The number of individuals and the selection probability of each group are determined so that the proportion of two consecutive groups maintains the GS. The proportions are assigned so that the group with the highest proportion of individuals corresponds to the smallest probability of being selected. This group assembles the elements with the worst fitness quality of the population. In contrast, the group with the lowest proportion of elements is associated with the highest probability of being chosen. Such a group contains the best individuals in the population. Since every element inside a group has the same possibility of being chosen, the probability of selecting an individual depends exclusively on the group to which it belongs. Numerical simulations show that the proposed method outperforms other selection algorithms with regard to solution quality and convergence speed.
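To make the trade-offs of the classical operators concrete, the three methods summarized in Table 1 can be sketched in a few lines of Python. This is our own minimal illustration, not code from the paper: the function names are ours, and maximization with non-negative fitness is assumed (exactly the limitation of the proportional method noted above).

```python
import random

def roulette(fitness):
    """Proportional (roulette-wheel) selection, Eq. (1):
    s_i = f(p_i) / sum_l f(p_l).  Assumes non-negative fitness and
    maximization -- the limitation listed in Table 1."""
    total = sum(fitness)
    r, cumulative = random.random(), 0.0
    for i, f in enumerate(fitness):
        cumulative += f / total            # A_i = s_1 + ... + s_i
        if r < cumulative:
            return i
    return len(fitness) - 1                # guard against rounding error

def tournament(fitness, q=2):
    """Tournament selection: best of q randomly drawn individuals."""
    contestants = random.sample(range(len(fitness)), q)
    return max(contestants, key=lambda i: fitness[i])

def linear_ranking(fitness, eta_plus=1.5):
    """Linear ranking, Eq. (2): rank 1 = worst, rank N = best,
    s_i = (1/N)(eta- + (eta+ - eta-)(i - 1)/(N - 1)),
    with eta- = 2 - eta+ so the probabilities sum to one."""
    n = len(fitness)
    eta_minus = 2.0 - eta_plus
    order = sorted(range(n), key=lambda i: fitness[i])   # worst ... best
    r, cumulative = random.random(), 0.0
    for rank, idx in enumerate(order):
        cumulative += (eta_minus + (eta_plus - eta_minus) * rank / (n - 1)) / n
        if r < cumulative:
            return idx
    return order[-1]
```

Note how the sketch mirrors the table: roulette breaks down for negative or minimization-style fitness, tournament needs no sorting at all, and linear ranking depends only on ranks, so equal fitness values still receive different probabilities.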

The main contributions of this research are:

(I) The design of a new selection technique based on the GS that defines a better balance between elitism and diversity in comparison to other selection methods currently in use.
(II) Its effective implementation in order to maintain a simple structure and fast performance.
(III) The integration of an experimental scheme for the evaluation of selection methods that includes benchmark functions and consistent performance indexes.
(IV) The application of the proposed selection method to improve the search strategy of Genetic Algorithms (GA).

The rest of this paper is organized as follows. In Section 2, the main selection methods are described. Section 3 introduces the concept of the Golden Section (GS). Section 4 explains the proposed selection algorithm. Section 5 shows the experimental results. Finally, some conclusions are discussed in Section 6.

2. Selection methods

EAs are stochastic methods that use a number of N candidate solutions (known as individuals p_i^k ∈ P^k, where P^k represents the entire population) of the optimization problem to produce new potential solutions. During the process, the population P^k is modified to build a new population P^{k+1}. In each execution, as a search strategy, EAs use the iterative application of three evolutionary operators: mutation, recombination, and selection. Such operations are executed until a determined termination criterion has been reached. Each iteration is called a generation and is denoted by k.

The selection operation aims to improve the solution quality of the population P^k by conferring on an individual of high quality p_i^k a high probability of being chosen (Bäck, 1992). Therefore, selection allows the proliferation of good quality solutions and avoids the election of bad quality solutions. After the operation of a selection method, a final solution denoted by z (z ∈ P^k) is chosen. Several selection methods have been proposed in the literature with different performance levels. The most popular methods are the proportional selection (Holland, 1975), the tournament selection (Blickle et al., 1995) and the linear ranking (Darrell, 1989).

2.1. Roulette method

The roulette method was introduced to genetic algorithms by Holland (Holland, 1975). It assigns to each element p_i^k a selection probability s_i according to its relative fitness value. s_i can be computed as follows:

s_i = f(p_i^k) / Σ_{l=1}^{N} f(p_l^k)    (1)

Once the selection probability s_i of each individual of the population P^k has been calculated, the final selected element z is obtained by using the proportional selection (PS) method, also known as the roulette-wheel method (RWM) (Holland, 1975). Under this approach, the cumulative probability A_i of each solution p_i^k is calculated (A_i = Σ_{j=1}^{i} s_j). Therefore, to choose the final result z, a random number between zero and one is produced. Thus, the solution whose cumulative probability range contains the chosen random number is selected. Although the proportional selection method maintains good selective pressure properties, its selection mechanism presents several flaws in case of negative objective function values or minimization tasks (Bäck, 1992). The pseudo code of the proportional selection technique is given by Algorithm 1.

2.2. Tournament selection

In tournament selection (Blickle et al., 1995), first, q different individuals from P^k are randomly chosen to produce a group of elements B. Then, the best element of B (B = {b_1, …, b_q}) is considered as the selection result z. Frequently, the selection is attained between only two individuals (q = 2), although it can also be performed using more elements. Tournament can be efficiently implemented as no sorting procedure for the population is required. The pseudo code of tournament is given by Algorithm 2.

Under tournament, it has been demonstrated that the selective pressure is very low, favoring exploration extensively and punishing the exploitation of the search strategy (Bäck, 1992).

2.3. Linear ranking

The ranking selection was first proposed by Baker (Grefenstette & Baker, 1989) to eliminate the flaws and disadvantages of the proportional selection method. In the linear ranking algorithm, the individuals of P^k are sorted according to their fitness values. Therefore, the rank N is assigned to the best element of P^k whereas the rank 1 is given to the worst individual. Then, a linear function is used to map the rank i of each individual to a selection probability s_i. Under such conditions, the selection probability of each solution is computed as follows:

s_i = (1/N) (η⁻ + (η⁺ − η⁻) (i − 1)/(N − 1))    (2)

In Eq. (2), (η⁻/N) represents the selection probability of the worst individual of P^k whereas (η⁺/N) represents the selection probability of the best element. The restriction Σ_{i=1}^{N} s_i = 1 requires that 1 ≤ η⁺ ≤ 2 and η⁻ = 2 − η⁺ are fulfilled. Under this approach, all individuals receive a different selection probability, even if they possess the same fitness value. This inconsistency adversely affects the performance of the search strategy. Once the selection probability s_i of each individual of the population P^k has been calculated, the final selected element z is obtained by using the PS method, as explained in Section 2.1. The pseudo code of linear ranking is schematized in Algorithm 3.

3. Golden Section (GS)

3.1. Golden Section

The Golden Section (GS), characterized by the Greek letter phi (φ), is also known as the Golden Ratio, Golden Number, Golden Mean, Divine Proportion and Divine Section. The GS is defined as an irrational number with the value of 1.61803398…. In general, the GS is characterized by a relative proportion between two elements. Therefore, two quantities A and B present a GS relation if their proportion (A/B) is the same as the ratio of their sum to the larger of the two quantities ((A + B)/A). Fig. 1 illustrates this geometric relationship.

Fig. 1. Graphical representation of the GS proportion.

According to Fig. 1, a line segment is divided into two parts A and B so that the ratio between the longer part A and the shorter one B is equal to the ratio between the whole line segment A + B and the longer part A. Such relations, expressed algebraically, can be modeled as follows:

A/B = (A + B)/A    (3)

Operating this relationship, it is found that A² = (A + B) · B. Then, assuming that B = 1, the following expression is obtained:

A² − A − 1 = 0    (4)

The positive root of Eq. (4) represents the solution of the geometric problem defined by Eq. (3). This solution is called the Golden Section (φ):

A = φ = (1 + √5)/2 ≈ 1.618    (5)

If the ratio between A (1.618) and B (1) is converted into percentages, such proportions can be established as A = 61.8% and B = 38.2%.

3.2. Mathematical framework of the Golden Section

3.2.1. General properties of the GS

The GS number (φ) presents interesting mathematical properties which have been exploited in several sciences such as physics, computer science, medicine and architecture (Iosa, Morone, & Paolucci, 2018). Most of these properties express the capacity of the GS number to replicate the same structure at different scales.

Eq. (4) can be formulated in terms of the GS number (φ) as follows:

φ = 1 + 1/φ    (6)

φ − 1 = 1/φ    (7)

Under such circumstances, when 1 is subtracted from φ = 1.618, the inverse of the GS is obtained (1/φ = 0.618). It has been confirmed that the GS is the single positive number holding this characteristic. Substituting the expression of Eq. (6) for φ multiple times, the GS can be expressed as a continued fraction containing only ones:

φ = 1 + 1/(1 + 1/(1 + 1/(1 + ···)))    (8)

Considering that Eq. (4) can be formulated in terms of the GS as φ² = φ + 1, the GS can also be defined as a continued nesting of square roots of ones:

φ = √(1 + √(1 + √(1 + √(1 + ···))))    (9)

Both characteristics specified by Eq. (8) and Eq. (9) exhibit the capacity of the GS of being self-similar. Under such conditions, the GS has the property of replicating the same pattern model at different scale levels.

Since the GS can be expressed as φ² = φ + 1, it can be generalized so that any power n of φ is equal to the sum of the two immediately preceding powers n−1 and n−2:

φⁿ = φⁿ⁻¹ + φⁿ⁻²    (10)

This representation manifests the property of the GS to repeat a specific spatial pattern at distinct magnitude levels, which is normally exploited in fractal geometry (Sigalotti & Mejias, 2006).

3.2.2. GS and Fibonacci numbers

Fibonacci numbers are a sequence of integers related to different phenomena in nature (Benavoli et al., 2009). Fibonacci numbers are a series of elements F_n of the type 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, …, where each term (except the first two) is the sum of the two immediately preceding elements, according to the following recurrence formulation:

F_n = F_{n−1} + F_{n−2}, n ≥ 2    (11)

In the Fibonacci sequence, it has been demonstrated (Esmaeili, Gulliver, & Kakhbod, 2009) that the proportion between two consecutive terms tends to the GS number, as follows:

φ = lim_{n→∞} F_n / F_{n−1}    (12)

3.2.3. Interconnection of the GS with fractals

A fractal is a complex mathematical macro-structure built from an identical pattern that is self-similar across arbitrarily small scales (Mandelbrot, 1982). Fractals are produced by repeating a simple construction process over and over in an ongoing feedback loop. In the construction process, two elements are necessary: a self-similar structure and a drawing rule (Kenneth, 2003). The GS appears in fractal geometry due to its self-similarity and repetition properties. This self-similar nature can be seen in a certain geometrical figure which can be divided into two parts according to the GS proportion. From this operation, two regions are obtained: A with 61.8% and B with 38.2% of the geometrical figure. If this procedure is infinitely repeated, always dividing the smaller part (B), then a fractal shape is generated. Several interesting fractals are constructed from the GS; some examples include the Fibonacci fractals (Ramírez, Rubiano, & De Castro, 2014) and the Golden trees (Bliss & Brown, 2009).

3.3. Division process with GS

The Golden Section (GS) is one of the most famous patterns present in nature. It has been used in a great variety of contexts. One of them is the division of a space into several sections or groups. With this division, the main objective is to separate the space in such a way that allows better functionality, spacing, and distribution (Walser, 2001). Fig. 2 exhibits examples of the GS presence in several natural structures. This approach has already been applied in different disciplines such as architecture (Krishnendra, 2015), industrial design and aeronautics (Lu, 2003). The idea behind the division process is to segment a space E into m different sections. Each section S_i (i ∈ 1, …, m) maintains a determined proportion c_i of the space E and conserves the GS proportion with the rest of the sections.

Fig. 2. Natural examples of the GS presence.

The process of dividing E into m groups begins with the initial separation into two sections A_1 (S_1) and B_1 (S_2). According to the GS proportion, such sections represent c_1 = 61.8% and c_2 = 38.2% of the total space, respectively. Then, the small section B_1 is divided into two different parts A_2 and B_2 maintaining the GS proportion between them. Therefore, the new sections A_2 and B_2 occupy 61.8% and 38.2% of the space corresponding to B_1 (whose area corresponds to 38.2% of the total area of E), respectively. After this procedure, the total space E is classified into three sections A_1 (S_1), A_2 (S_2) and B_2 (S_3), representing c_1 = 61.8%, c_2 = 23.6% and c_3 = 14.6% of E, respectively. In case of incorporating a fourth section or group, the process is repeated over B_2, generating the groups A_1 (S_1), A_2 (S_2), A_3 (S_3) and B_3 (S_4). With this configuration, the total space E is distributed as follows: c_1 = 61.8%, c_2 = 23.6%, c_3 = 9% and c_4 = 5.6%. Thereby, the division operation can be repeatedly executed until the number of necessary sections m has been reached. Table 2 shows the produced segmentation considering m = 2, m = 3, m = 4, and m = 5. Fig. 3 exhibits the division process of an initial space E into six sections (m = 6), also illustrating the percentages of each section with regard to the space that they cover.

Table 2
Produced segmentations considering m = 2, m = 3, m = 4 and m = 5.

m = 2 (S_1, S_2): c_1 = 61.8%, c_2 = 38.2%
m = 3 (S_1–S_3): c_1 = 61.8%, c_2 = 23.6%, c_3 = 14.6%
m = 4 (S_1–S_4): c_1 = 61.8%, c_2 = 23.6%, c_3 = 9%, c_4 = 5.6%
m = 5 (S_1–S_5): c_1 = 61.8%, c_2 = 23.6%, c_3 = 9%, c_4 = 3.4%, c_5 = 2.2%

Fig. 3. Division process of a space E in six sections considering the GS. (a) Initial space (E), (b) division with m = 2, (c) m = 3, (d) m = 4, (e) m = 5, (f) m = 6 and (g) final configuration.

4. The proposed GS selection method

As an alternative to traditional selection methods, in this paper a new simple selection method for EAs is introduced. In the proposed method, the population is segmented into several groups according to the GS. The groups are distributed so that each group contains individuals of a homogeneous quality in terms of the fitness function. The number of individuals in each group depends on the quality of the individuals that it contains. Therefore, the group that has the best individuals contains a reduced amount of them, whereas the groups that maintain low-quality elements incorporate an extensive quantity of them.

The main idea of a selection method is to allow the proliferation of good quality solutions and to avoid the accumulation of bad quality solutions in the new population P^{k+1}. Under such conditions, groups that store better individuals are selected with a higher probability than those that contain low-quality solutions.

In the proposed method, the existence of m different groups in the population is assumed. Therefore, the entire population P^k is segmented according to the GS proportion into m different groups or sections. With the division, each group g_i (i ∈ 1, …, m) maintains a number u_i of individuals (Section 4.1) and a probability w_i of being selected (Section 4.2).

4.1. Individuals of each group

In the approach, the population space of P^k is divided into m different groups considering the proportions of the GS. The groups are arranged so that the first group g_1 corresponds to the smallest proportion (c_m) of P^k whereas the last group g_m corresponds to the highest proportion (c_1) of P^k. Considering that P^k has N individuals, the number of elements u_i is determined by round(N · (c_{m+1−i}/100)), where round(·) symbolizes a function which delivers the closest integer to N · (c_{m+1−i}/100). In case of a segmentation of P^k into three sections, according to the GS, the groups g_1, g_2 and g_3 maintain the following distribution: c_3 = 14.6%, c_2 = 23.6% and c_1 = 61.8%. Therefore, assuming that P^k has 100 elements (N = 100), the individuals are distributed as follows: u_1 = 14, u_2 = 24 and u_3 = 62.

To select the individuals that correspond to each group, the elements of P^k are re-ordered regarding their fitness values, so that the best individual is in the first position p_1^k whereas the worst element is the last one p_N^k. Under such conditions, the first u_1 elements (p_1^k–p_{u_1}^k) of the sorted population correspond to the group g_1. Then, the next u_2 individuals (p_{u_1+1}^k–p_{u_1+u_2}^k) are included as part of group g_2, and so on. Considering as an example that the population P^k is distributed so that u_1 = 14, u_2 = 24 and u_3 = 62, the groups include the following intervals of individuals: g_1 (p_1^k–p_14^k), g_2 (p_15^k–p_38^k) and g_3 (p_39^k–p_100^k). Under this approach, the group g_1 contains the best 14 individuals found in the population, so that, as the group number increases, the quality of the involved individuals decreases. Therefore, the elements of the last group represent the worst individuals contained in P^k. Fig. 4 graphically illustrates the segmentation of the population space considering the proportions of the GS.

4.2. Selection probability of each group

The main idea of a selection method is to allow the proliferation of good quality solutions and to avoid the accumulation of low-quality individuals in the new population P^{k+1}.

In the proposed method, the probability space is divided into m different proportions according to the GS. With the division, each group g_i maintains a selection probability w_i which corresponds to a proportion of the probability space suggested by the GS. The proportions are assigned so that the groups with the best individuals maintain bigger probability proportions compared to groups with low-quality elements. Under such conditions, the biggest probability proportion (c_1) corresponds to the first group g_1 whereas the smallest probability (c_m) corresponds to the last group g_m. Each selection probability w_i can be calculated from the original proportions shown in Table 2 as:

w_i = c_i / 100    (13)

In case of a segmentation of P^k into three sections, the groups g_1, g_2 and g_3 maintain the following probability distribution: w_1 = 0.618 (61.8%), w_2 = 0.236 (23.6%) and w_3 = 0.146 (14.6%). Fig. 5 graphically illustrates the segmentation of the probability space considering the proportions of the GS.

Fig. 4. Segmentation of the population space considering three groups.

Fig. 5. Segmentation of the probability space considering three groups.

Finally, the selection of an individual is conducted in two steps. In the first step, one of the m possible groups is selected by using the probability selection method (PS). Once a group has been chosen, a uniformly distributed random individual is selected within it. This element is considered the final selection result z.

4.3. Computational procedure

The proposed scheme can be implemented without a significant computational cost. During the operation of the evolutionary algorithm, the number of groups remains fixed. As a consequence, several operations can be calculated only once. Then, they can be used as fixed constants throughout the optimization process.

4.4. Discussion about the GS method

In the proposed method, the population is segmented into several groups. The number of groups defines a particular relation between elitism and diversity of the selection strategy. Each group involves a number of individuals and a probability of being selected, which are determined according to the GS. Considering the basic division of the population P^k into two sections, two groups are produced, g_1^2 and g_2^2. In each group g_α^β, the super-index β represents the number of considered sections and the sub-index α the group number. The first group maintains an individual proportion B_1^I of 38.2% and a probability proportion A_1^P of 61.8% (where the super-index I or P symbolizes whether the proportion is in the space of the individuals or of the probability, respectively). Therefore, g_1^2 contains the best individuals of the population and possesses a higher probability of being selected. On the other hand, the last group g_2^2 has an individual proportion A_1^I of 61.8% and a probability proportion B_1^P of 38.2%. With these proportions, g_2^2 stores the worst elements of the population and involves an inferior probability of being chosen. With only two groups, their proportions allow the accumulation of individuals with different qualities in the same group. Under such conditions, the diversity increases and the elitism decreases as a consequence of the selection of such groups.

If the population P^k is divided into more sections, for example three, the spaces of individuals and probability are modified. Under three sections, the individuals of P^k are segmented into g_1^3, g_2^3 and g_3^3. Similar to the last group g_2^2 in the case of two sections, the last group g_3^3 maintains the worst individuals of the population. However, the groups g_1^3 and g_2^3 are obtained by the sub-division of the group g_1^2 (g_1^2 = g_1^3 + g_2^3). Therefore, the individuals of g_1^2 that represent the best elements of P^k are re-classified into two sections according to their fitness values. The first group g_1^3 stores the best elements of g_1^2 whereas g_2^3 maintains the rest. With this division, the groups g_1^3, g_2^3 and g_3^3 assume the following individual proportions: 14.6%, 23.6% and 61.8%, respectively. In case of the probability space, in comparison with the case of two sections, the first group g_1^3 has the highest probability of being selected. However, with the inclusion of another section, the selection probability of the last group (low-quality elements) is reduced to assign higher probabilities to groups with individuals of better qualities. Therefore, if several groups are defined, the elitism of the selection method is emphasized whereas the diversity is simultaneously diminished. To illustrate the division process of the individual space in P^k, a graphical example is shown in Fig. 6. The experiment presents a
The proposed algorithm is a selection method that allows determined distribution of the population Pk considering Fig. 6(b)
choosing an individual from a population Pk considering the pro- two, Fig. 6(c) three and Fig. 6(d) four sections regarding the objec-
portions established by the GS. As a first step, the number m of tive function presented in Fig. 6(a). Fig. 6(b) exhibits the distribu-
groups is defined. (I) The elements of Pk are re-ordered regard- tion of the population assuming two sections. Red dots represent
ing their fitness values, so that the best individual is in the first the best elements of Pk with a proportion of 38.2% whereas the yel-
position pk1 whereas the worst element is the last one pkN . After- low squares symbolize the worst individuals of Pk with a propor-
wards, (II) individuals of the ordered population Pk are classified in tion of 61.8%. In Fig. 6(c), the spatial localization of Pk under three
terms of their fitness values for the m proportions. Similarly, (III) section is shown. Analyzing Fig. 6(c), it is clear that the best in-
the probability portions are assigned to all the groups. Then, (IV) dividuals from Fig. 6(b) are sub-divided in two new groups: red
by using the RWM a group gi is chosen. Finally, (V) a uniformly dots and blue triangles. It is also remarkable that the worst ele-
distributed random individual is selected within gi as the final se- ments, represented by the yellow squares, remain without change.
lection z. The pseudo code of the GS method is schematized in Fig. 6(d) illustrates the distribution of Pk assuming four sections.
Algorithm 4. From this Figure, it can be seen that the best elements from
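Steps (I)-(V) can be sketched in Python as follows. The helper names and the closed-form rule used here to generate the GS proportions for an arbitrary m (wi = 0.618 · 0.382^(i−1) for i < m, with the last group taking the remainder 0.382^(m−1)) are illustrative assumptions, not part of the original algorithm; for m = 3 the rule reproduces the proportions 61.8%, 23.6% and 14.6% quoted above.

```python
import random

PHI_INV = 0.618   # 1/phi, the Golden Section proportion
REST = 0.382      # 1 - 1/phi

def gs_proportions(m):
    """Selection probabilities w_1..w_m following the GS:
    w_i = 0.618 * 0.382**(i - 1) for i < m, w_m = 0.382**(m - 1),
    so the probabilities always sum to 1."""
    return [PHI_INV * REST ** i for i in range(m - 1)] + [REST ** (m - 1)]

def gs_select(population, fitness, m=3):
    """Two-step GS selection for a minimization problem:
    (I) sort the population by fitness, (II) split it into m groups whose
    sizes follow the reversed GS proportions (the best group is smallest),
    (III)-(IV) choose a group by roulette over w_1..w_m, and
    (V) return a uniformly random individual from the chosen group."""
    ranked = sorted(population, key=fitness)          # best (lowest) first
    w = gs_proportions(m)
    sizes = [round(p * len(ranked)) for p in reversed(w)]
    groups, start = [], 0
    for s in sizes[:-1]:
        groups.append(ranked[start:start + s])
        start += s
    groups.append(ranked[start:])                     # last group: remainder
    r, acc = random.random(), 0.0                     # roulette over groups
    for group, wi in zip(groups, w):
        acc += wi
        if r <= acc and group:
            return random.choice(group)
    return random.choice(groups[-1])
```

With N = 100 and m = 3, the three groups hold 15, 24 and 61 individuals and are selected with probabilities 0.618, ≈0.236 and ≈0.146, respectively.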
According to Algorithm 4, the proposed approach incorporates, like other selection methods, a sorting process of the population. This operation represents the most expensive step in the method; the rest of the operations do not present a significant computational cost.

To illustrate the division process of the individual space in Pk, a graphical example is shown in Fig. 6. The experiment presents a determined distribution of the population Pk considering Fig. 6(b) two, Fig. 6(c) three and Fig. 6(d) four sections regarding the objective function presented in Fig. 6(a).

Fig. 6. Division process of the individual space of Pk considering 6(b) two, 6(c) three and 6(d) four sections in terms of the objective function presented in 6(a).

Fig. 6(b) exhibits the distribution of the population assuming two sections. Red dots represent the best elements of Pk with a proportion of 38.2% whereas the yellow squares symbolize the worst individuals of Pk with a proportion of 61.8%. In Fig. 6(c), the spatial localization of Pk under three sections is shown. Analyzing Fig. 6(c), it is clear that the best individuals from Fig. 6(b) are sub-divided into two new groups: red dots and blue triangles. It is also remarkable that the worst elements, represented by the yellow squares, remain without change. Fig. 6(d) illustrates the distribution of Pk assuming four sections. From this figure, it can be seen that the best elements from Fig. 6(c) are granulated into an extra group. Under such conditions, the original best individuals (red dots) from Fig. 6(b) are finally segmented into three groups: red dots, blue triangles, and green inverted triangles. The worst elements, represented by the

yellow squares, remain without change. With the distributions presented in Fig. 6(b)-(d), it is clear that the more groups are set, the more elitist the selection method becomes.

On the other hand, Fig. 7 presents the probability distribution of each group configuration. In Fig. 7(a) the selection probabilities of groups g_1^2 and g_2^2 are shown. From the figure, it is clear that the best individuals of g_1^2 maintain a high probability of being selected in comparison with g_2^2. Fig. 7(b) considers the probability distribution in the case of three sections. Under this configuration, the best individuals, stored in g_1^3, also have the maximal probability of being selected. However, the probabilities of groups g_2^3 and g_3^3 are obtained by the sub-division of the probability assigned to g_2^2 (w(g_2^2) = w(g_2^3) + w(g_3^3)). Therefore, the probability of g_2^2, which includes the worst elements of Pk, is sub-divided into two new probability sections. In case of segmenting Pk into four sections, the probability proportions are distributed according to Fig. 7(c). Similar to the cases of two and three groups, the first group g_1^4 maintains the best selection frequency. On the other hand, the selection probability of the last group g_4^4 is further reduced to assign higher probabilities to groups with individuals of better quality.

Different advantages and disadvantages of the proposed approach can be identified through the comparison of similarities and differences with other selection strategies. As a summary, we can enumerate the strengths (S) and weaknesses (W) of the proposed method as follows:

1. (S) The proposed method presents a better balance between elitism and diversity of the selection strategy.
2. (S) The structure of the algorithm is simple and fast. Once the number of groups is defined, the number of individuals and their selection probabilities are fixed, without requiring any other operation during the optimization process.
3. (W) The selection of an individual involves the use of two different probabilistic decisions: one for selecting the group and another for extracting the individual inside the group. This fact can be understood as a disadvantage considering that all other selection methods use only one probabilistic determination.
4. (W) The proposed approach incorporates a configuration parameter m which specifies the number of groups. Its definition can be considered a disadvantage since all the other selection methods operate automatically without specifying an extra parameter. (S) On the other hand, its use allows modifying the association between elitism and diversity of the selection strategy. Under such conditions, it can be employed to fulfill the exploration and exploitation requirements of specific optimization problems.

5. Experimental results

In this paper, the proposed method based on the GS is used to select individuals from a population. The method is also evalu-

Fig. 7. Division process of the probability space considering 7(a) two, 7(b) three and 7(c) four sections.

ated in comparison with other selection approaches popularly employed in evolutionary algorithms. In the experiments, we have applied the GS method to different selection schemes whereas its results are also compared to those produced by the roulette method (Holland, 1975) and the tournament selection approach (Blickle et al., 1995; Clark et al., 1999). In case of the roulette method, a positive constant is added to each fitness value in order to avoid negative probabilities. On the other hand, the tournament selection approach is set with q = 2. This configuration is considered the most popular for selection purposes.

The experimental results are divided into four sub-sections. In the first Section (5.1), the performance of the selection methods is compared regarding solution quality and convergence time. In the second Section (5.2), the takeover time index is used to characterize the selective pressure of each method. Then, in the third Section (5.3), the methods are analyzed using the concepts of differential selection and response to selection. Finally, in the fourth Section (5.4), an analysis of the experimental results is presented.

5.1. Solution quality and convergence

In this section, the performance of the selection methods regarding solution quality and convergence time is compared. In the comparisons, a generic Genetic Algorithm (GA) is employed to solve several optimization problems represented by a set of benchmark functions. In the experiments, each selection method is used as the selection operator in a genetic algorithm whereas its operations of crossover and mutation remain without change.

The employed GA is set according to the following parameters: the population size (N) is 100, the crossover probability is 0.60, and the mutation probability is 0.10. Such values present, according to Hamzaçebi (2008), an acceptable performance. As termination criterion, it is considered the maximum number of iterations, which has been set to 500 (gen = 500). This stop criterion has been selected to maintain compatibility with similar works reported in the literature (Shilane, Martikainen, Dudoit, & Ovaska, 2008).

In the comparison, a representative set of 7 functions, collected from Refs. (Ali et al., 1995; Chelouah & Siarry, 2000), has been considered to test the performance of the proposed selection method. Table 3 presents the test functions employed in the simulations. In Table 3, d represents the dimension of the function, f(x*) characterizes the minimum value of the function at location x*, and S is the search space. In the tests, all functions are operated in 30 dimensions (d = 30).

In the simulations, two versions of the proposed GS method are considered: with 3 sections (GS3) and with 4 sections (GS4). Such techniques are compared along with the roulette method (Holland, 1975) and the tournament selection approach (Blickle et al., 1995) considering the functions from Table 3. To minimize the stochastic effect of the results, each benchmark function is executed independently 30 times. The results, presented in Table 4, report the averaged best fitness values f̄ and the standard deviations (σf) obtained through the 30 executions.

The best results in Table 4 are highlighted. According to Table 4, the approach GS3 provides better performance than GS4, tournament, and roulette for all functions. These performance differences are directly related to a better trade-off between elitism and diversity produced by the formulated selection method. Fig. 8 presents graphically the evolution of the average results of each method for functions 8(a) f1, 8(b) f3, 8(c) f4 and 8(d) f7. With the visualization of these graphs, the objective is to evaluate the velocity with which a compared method reaches the optimum. An analysis of Fig. 8 shows that the proposed methods with 3 or 4 sections are faster in reaching their optimal fitness values than the tournament and the roulette techniques.

The comparison of the final fitness values cannot completely describe the performance of a selection algorithm. Therefore, in this part, a convergence test on the four compared algorithms has been conducted. In the experiments, the convergence time t is assessed as follows:

t = min(tb, gen),    (6)

Table 3
Benchmark functions used in the experimental study.

Function | S | dim | Optimum

f1(x) = −20 exp(−0.2 √((1/d) Σ_{i=1}^{d} x_i²)) − exp((1/d) Σ_{i=1}^{d} cos(2π x_i)) + 20 + e | [−32, 32]^d | d = 30 | x* = (0, ..., 0); f(x*) = 0

f2(x) = (1/4000) Σ_{i=1}^{d} x_i² − Π_{i=1}^{d} cos(x_i/√i) + 1 | [−600, 600]^d | d = 30 | x* = (0, ..., 0); f(x*) = 0

f3(x) = 0.1 { sin²(3π x_1) + Σ_{i=1}^{d} (x_i − 1)² [1 + sin²(3π x_i + 1)] + (x_d − 1)² [1 + sin²(2π x_d)] } + Σ_{i=1}^{d} u(x_i, 5, 100, 4), where u(x_i, a, k, m) = k(x_i − a)^m if x_i > a; 0 if −a < x_i < a; k(−x_i − a)^m if x_i < −a | [−10, 10]^d | d = 30 | x* = (1, ..., 1); f(x*) = 0

f4(x) = −Σ_{i=1}^{d} sin(x_i) sin^{20}(i x_i²/π) | [0, π]^d | d = 30 | f(x*) ≈ −29.63

f5(x) = Σ_{i=1}^{d} ( Σ_{j=1}^{d} (j^i + β) ((x_j/j)^i − 1) )² | [−d, d]^d | d = 30 | x* = (1, 2, ..., d); f(x*) = 0

f6(x) = Σ_{i=1}^{d−1} [100 (x_{i+1} − x_i²)² + (x_i − 1)²] | [−30, 30]^d | d = 30 | x* = (1, ..., 1); f(x*) = 0

f7(x) = Σ_{i=1}^{d} Σ_{j=1}^{i} x_j² | [−65.536, 65.536]^d | d = 30 | x* = (0, ..., 0); f(x*) = 0
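Two of the entries in Table 3 can be written directly, which also makes the stated optima easy to verify. This is a sketch; f2 is the Griewank function and f6 the Rosenbrock function, and the function names follow the table's numbering.

```python
import math

def f2(x):
    """Griewank function: global minimum f(0, ..., 0) = 0."""
    s = sum(xi * xi for xi in x) / 4000.0
    p = math.prod(math.cos(xi / math.sqrt(i + 1)) for i, xi in enumerate(x))
    return s - p + 1.0

def f6(x):
    """Rosenbrock function: global minimum f(1, ..., 1) = 0."""
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (x[i] - 1.0) ** 2
               for i in range(len(x) - 1))
```

For d = 30, f2([0.0] * 30) and f6([1.0] * 30) both return exactly 0.0, matching the optima listed in Table 3.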

Table 4
Comparison of solution quality among the selection methods, reported as f̄ (σf).

Function | GS4 | GS3 | Tournament | Roulette
f1 | 0.3825 (0.7481) | 0.0193 (0.0023) | 0.3901 (0.2097) | 0.4754 (0.1074)
f2 | 0.0159 (0.0522) | 0.0011 (0.0017) | 0.0174 (0.0141) | 0.0274 (0.0047)
f3 | 0.6202 (0.1505) | 0.2874 (0.1336) | 0.6284 (0.2410) | 0.7414 (0.0892)
f4 | −23.5395 (1.2346) | −28.3944 (0.6200) | −22.8714 (1.1017) | −20.7483 (1.8747)
f5 | 0.0213 (0.0452) | 0.0057 (0.0002) | 0.0317 (0.0107) | 0.04981 (0.0271)
f6 | 0.3261 (0.0341) | 0.08512 (0.0075) | 0.3514 (0.0711) | 0.4101 (0.0751)
f7 | 0.0431 (0.0032) | 0.0006 (0.0001) | 0.0482 (0.0085) | 0.0541 (0.0620)

where gen represents the maximum number of allowed generations and tb is the generation number in which the best fitness value has been found.

In the simulations, the algorithms are executed 30 times over the seven functions. Under such conditions, 210 different executions are conducted for each selection method. To evaluate the convergence of the selection methods, a study of the produced convergence times is conducted. In the study, the frequency of the resulting 210 convergence times is analyzed by using histograms. Fig. 9 presents the resulting histograms for 9(a) GS4, 9(b) GS3, 9(c) Tournament and 9(d) Roulette. In each histogram, the complete number of generations gen is divided into intervals of approximately 50 iterations. For the sake of clarity, the intervals that do not contain more than ten events have been deleted from the histogram. Fig. 9(a) exhibits the convergence time distribution for the case of the GS with four sections. According to this figure, the frequency peak is reached in the interval from 225 to 275 generations whereas the average convergence time is located at approximately 290 iterations. Fig. 9(b) presents the convergence time distribution for the case of the GS with three sections. An analysis of this histogram shows that the maximal frequency is attained in the interval from 170 to 210 generations whereas the average convergence time is located at 160 iterations. On the other hand, Fig. 9(c) and (d) show the convergence time distributions for the case of the Tournament and Roulette methods. From the figures, it is clear that both methods present average convergence times comparatively higher than those produced by the GS approaches.

5.2. Takeover time

The concept of takeover time, introduced by Goldberg and Deb (1991), has been used to evaluate the selective pressure and convergence properties of a selection method in the absence of any other evolutionary operation. Assuming a population Pk ({p_1^k, p_2^k, ..., p_N^k}) of N candidate solutions with a single best element pbest, the takeover time is defined as the number of generations GT spent by a selection method until the rest of the N−1 individuals consists of copies of pbest.

Therefore, considering that E(k) represents the number of copies of pbest in Pk at the k-th iteration:

E(k) = |{ p_Ne^k | p_Ne^k = pbest, Ne = 1, ..., N }|    (7)

The takeover time GT is calculated as follows:

GT = min { k | E(k) = N }    (8)

Small GT values define a high selective pressure while large values characterize a low selective pressure of the search strategy.

To evaluate the takeover time of each selection method, an experiment has been conducted. In the test, a population Pk of 400 (N = 400) individuals is randomly initialized, containing a single best individual pbest. Then, the selection method is applied to the population whereas the empirical takeover times GT are registered. It is important to remark that the process does not consider the effect of any other evolutionary operation such as crossover and mutation.

In the simulations, two versions of the proposed GS method are considered: with 3 sections (GS3) and with 4 sections (GS4). Such techniques are compared along with the roulette method (Holland, 1975) and the tournament selection approach (Blickle et al., 1995). In Table 5, the takeover times produced by each selection method are presented. In the table, the execution time in which GT takes place is also reported. To minimize the stochastic effect, the results consider the average values obtained from 30 executions of each method.
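The takeover experiment described above can be reproduced with a short sketch. The helper below is a hypothetical illustration (its name and the 0.5/1.0 fitness encoding are assumptions); any selection operator with the signature select(population, fitness) can be plugged in.

```python
import random

def takeover_time(select, n=400, max_gen=1000):
    """Empirical takeover time GT of Eq. (8): apply ONLY the selection
    operator, generation after generation, to a population containing a
    single best individual (fitness 1.0; all others 0.5), and count the
    generations needed until E(k) = N, i.e. all N slots hold copies of
    pbest. No crossover or mutation is applied."""
    population = [0.5] * (n - 1) + [1.0]
    fitness = lambda ind: ind
    for gen in range(1, max_gen + 1):
        population = [select(population, fitness) for _ in range(n)]
        if population.count(1.0) == n:
            return gen
    return max_gen  # pbest was lost, or takeover not reached in time
```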

Fig. 8. Fitness average results obtained during the evolution process for functions 8(a) f1, 8(b) f3, 8(c) f4 and 8(d) f7.

Fig. 9. Resulting histograms for 9(a) GS4, 9(b) GS3, 9(c) Tournament and 9(d) Roulette.

Table 5
Takeover times produced by each selection method.

Method | Takeover time (generations) | Execution time (seconds)
GS4 | 8.1 | 3.23
GS3 | 5.2 | 2.12
Tournament | 14.8 | 8.62
Roulette | 33.4 | 12.52

Algorithm 1. Pseudo code of the proportional selection method.
Algorithm 2. Pseudo code of the tournament selection method.
Algorithm 3. Pseudo code of the linear ranking selection method.
Algorithm 4. Pseudo code of the GS selection method.

An analysis of Table 5 suggests that the GS3 method maintains the best performance in comparison to the other selection methods. From the results, it is clear that the Tournament and Roulette methods present the worst indexes regarding their takeover and execution times.

Fig. 10 shows the comparative dynamics of E(k) of each method through the takeover time test. Each curve represents the averaged values obtained from a set of 30 independent executions. Since the takeover times of all approaches are located within the first 50 generations, Fig. 10 shows only 50 of the 500 generations used in the evolution process. From Fig. 10, it is clear that the GS3 method is the fastest in filling Pk with pbest (around 5 generations). On the other hand, the Roulette and Tournament approaches obtain the worst performance.



Fig. 10. Comparative dynamics of E(k) of each method through the takeover time test. For the sake of visualization, only 50 generations from 500 are shown.

Fig. 11. Response to selection R(k) values during the optimization of function f3.

5.3. Response to selection

Selection methods invariably use the fitness value of each individual to choose new elements from the population. Therefore, the state of a population Pk at iteration k can be approximately described by the mean fitness value of its individuals, M(k). Response to selection is an additional way to characterize the performance of a selection method, suggested by Mühlenbein and Schlierkamp-Voosen (1993). The response to selection R(k) is defined as the population mean fitness difference between two contiguous generations (R(k) = M(k + 1) − M(k)). In general, R(k) provides an alternative way to measure the progress of the population during the optimization process. Under such conditions, high positive values of R(k) express strong improvements in the population whereas values near zero indicate marginal improvements.
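The index defined above reduces to a difference of mean fitness values between contiguous generations; a minimal sketch (the helper names are illustrative):

```python
def mean_fitness(population, fitness):
    """M(k): mean fitness of the population at one generation."""
    return sum(fitness(p) for p in population) / len(population)

def response_to_selection(mean_history):
    """R(k) = M(k + 1) - M(k) for every pair of contiguous generations,
    given the recorded sequence of mean fitness values M(1), M(2), ..."""
    return [mean_history[k + 1] - mean_history[k]
            for k in range(len(mean_history) - 1)]
```

For example, response_to_selection([10.0, 7.0, 5.0, 4.5]) returns [-3.0, -2.0, -0.5].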
To produce R(k), each selection method is used as the selection operator in a genetic algorithm whereas its operations of crossover and mutation remain without change. This GA is set according to the following parameters: the population size (N) is 100, the crossover probability is 0.60, and the mutation probability is 0.10. Such values present, according to Hamzaçebi (2008), an acceptable performance. As termination criterion, it is also considered the maximum number of iterations, which has been set to 500 (gen = 500).

Fig. 11 exhibits the response to selection R(k) values during the optimization of function f3. Such results consider the average values from 20 different executions. An analysis of Fig. 11 indicates that the methods GS3 and GS4 maintain high values of R(k) as a consequence of a better population improvement in comparison to the tournament and roulette schemes. Such improvements take place in the first generations as a result of their better convergence properties. On the other hand, the methods of tournament and roulette present several oscillations as a consequence of an inappropriate balance between elitism and diversity during the optimization process.

5.4. Analysis of the experimental results

By comparing the experimental results of each selection method on the simulations, we can express the following conclusions:

1. Regarding solution quality, the approach GS3 provides better performance than the GS4, tournament and roulette approaches for all optimization problems. Such performance differences are directly related to a better trade-off between elitism and diversity produced by the division of the population into three segments.
2. The results of the convergence time for the case of the GS approach with four sections show that its frequency peak is reached in the interval from 225 to 275 generations whereas the average convergence time is located at approximately 290 iterations. For the case of the GS with three sections, the maximal frequency is attained in the interval from 170 to 210 generations whereas the average convergence time is located at 160 iterations. On the other hand, for the case of the Tournament and Roulette methods, it is clear that both present the worst convergence times in comparison with those produced by the GS. These results exhibit that the proposed selection method improves the performance of the search strategy. This enhancement is attributed to the selection of candidate solutions that allow increasing the diversity of the population during the optimization process.
3. With regard to the takeover time, the results suggest that the GS3 method maintains the best performance in comparison to the other selection methods. From the results, it is clear that the Tournament and Roulette methods present the worst indexes regarding their takeover and execution times. The takeover time is an index extensively used to evaluate the performance and convergence properties of a selection method. Under this perspective, the proposed approach demonstrates that its selection strategy can identify the best element of the population and assign it an appropriate probability for proliferating.
4. The results of the response index R(k) indicate that the proposed methods obtain the highest values as a consequence of a better population improvement in comparison to the other selection strategies. Such improvements are more evident in the first generations as a result of their better convergence properties.

6. Conclusions

In this paper, a new selection method for evolutionary computation algorithms is introduced. In the proposed approach, the population is segmented into several groups. Each group involves a certain number of individuals and a probability to be selected, which are determined according to the GS proportion. Therefore, the individuals are divided into categories where each group contains individuals of similar quality regarding their fitness values. Since the possibility of choosing an element inside a group is the same for all its members, the probability of selecting an individual depends exclusively on the group to which it belongs. Under these conditions, the proposed approach defines a better balance between elitism and diversity of the selection strategy.

To show the performance of the proposed approach, several experiments have been conducted. In the study, it has been compared to other selection methods such as the tournament and roulette schemes. The results demonstrated that the proposed GS method outperforms the other algorithms for most of the experiments in terms of solution quality and convergence properties.

In general, the main contributions of this paper can be summarized as follows:

• The design of a new selection technique based on the GS that defines a better balance between elitism and diversity in comparison to other selection methods currently in use.
• Its effective implementation in order to maintain a simple structure and fast performance.
• The integration of an experimental scheme for the evaluation of selection methods that includes benchmark functions and consistent performance indexes.
• The application of the proposed selection method to improve the search strategy of Genetic Algorithms (GA).

This paper shows an attempt to enhance the performance of evolutionary algorithms through the modification of their selection strategy, and much work remains as future research to improve the performance of the proposed selection method. The main directions that deserve further research include:

• To characterize the performance of the proposed method in other evolutionary approaches. In this paper, extensive work has been conducted to evaluate the GS selection strategy in Genetic Algorithms (GA). Further work is necessary to prove the capacities of the proposed technique in other evolutionary schemes such as Evolutionary Strategies and Genetic Programming. With the inclusion of the GS selection method in such algorithms, they could substantially improve their performance as search strategies.
• To examine the abilities of the GS selection technique in dynamic optimization problems. It is well known that dynamic optimization problems require an appropriate diversity of solutions due to the constant changes in the objective function. Under such conditions, the proposed method could provide good results through the definition of an adequate number of groups m.
• To design a mechanism for the adaptation of the proposed GS method during the evolution. In this paper, the selection strategy maintains its operation without changes during the optimization process. However, a further step is for the selection strategy to undergo changes in the association between elitism and diversity, as required by the optimization strategy.
• To define new performance indexes that allow the capacities of a selection method to be correctly evaluated. The takeover time and the response index R(k) do not directly reflect the effect of the selection method on the search strategy.

References

Ali, M., Khompatraporn, C., & Zabinsky, Z. (1995). A numerical evaluation of several stochastic algorithms on selected continuous global optimization test problems. Journal of Global Optimization, 31(4), 635–672.
Bäck, T. (1992). The interaction of mutation rate, selection, and self-adaptation within a genetic algorithm. PPSN, 87–96.
Bäck, T. (1994). Selective pressure in evolutionary algorithms: A characterization of selection mechanisms. In International conference on evolutionary computation (pp. 57–62).
Bäck, T., & Hoffmeister, F. (1991). Extended selection mechanisms in genetic algorithms. ICGA, 92–99.
Baker, J. (1987). Reducing bias and inefficiency in the selection algorithm. In J. J. Grefenstette (Ed.), Proceedings of the second international conference on genetic algorithms and their application (pp. 14–21). Hillsdale, NJ: L. Erlbaum Associates Inc.
Benavoli, A., & Chisci, L. F. (2009). Fibonacci sequence, Golden Section, Kalman filter and optimal control. Signal Processing, 89, 1483–1488.
Blickle, T., & Thiele, L. (1995). A mathematical analysis of tournament selection. In L. J. Eshelman (Ed.), Proceedings of the 6th international conference on genetic algorithms (pp. 9–16). San Francisco, CA: Morgan Kaufmann Publishers Inc.
Blickle, T., & Thiele, L. (1996). A comparison of selection schemes used in evolutionary algorithms. Evolutionary Computation, 4(4), 361–394. doi:10.1162/evco.1996.4.4.361.
Bliss, P. M., & Brown, D. A. (2009). Geometric properties of three-dimensional fractal trees. Chaos, Solitons and Fractals, 42, 119–124.
Brad, L. Miller, & Goldberg, D. (1995). Genetic algorithms, tournament selection and the effects of noise. Complex Systems, 9, 193–212.
Chelouah, R., & Siarry, P. (2000). A continuous genetic algorithm designed for the global optimization of multimodal functions. Journal of Heuristics, 6(2), 191–213.
Ciucurel, C., Georgescu, L., & Iconaru, E. (2018). ECG response to submaximal exercise from the perspective of Golden Ratio harmonic rhythm. Biomedical Signal Processing and Control, 40, 156–162.
Clark, G., Kok, R., & Lacroix, R. (1999). Mind and autonomy in engineered biosystems. Engineering Applications of Artificial Intelligence, 12, 389–399.
Coldea, R., Tennant, D., Wheeler, E., Wawrzynska, E., Prabhakaran, D., Telling, D., et al. (2010). Quantum criticality in an Ising chain: Experimental evidence for emergent E8 symmetry. Science, 327(5962), 177–180.
Darrell, L. W. (1989). The GENITOR algorithm and selection pressure: Why rank-based allocation of reproductive trials is best. In J. D. Schaffer (Ed.), Proceedings of the 3rd international conference on genetic algorithms (pp. 116–123). San Francisco, CA: Morgan Kaufmann Publishers Inc.
De Jong, K. (2006). Evolutionary computation: A unified approach. Cambridge, MA: MIT Press.
De la Maza, M., & Tidor, B. (1993). An analysis of selection procedures with particular attention paid to proportional and Boltzmann selection. In Proceedings of the 5th international conference on genetic algorithms (pp. 124–131).
Dunlap, R. A. (1997). The golden ratio and Fibonacci numbers. Singapore: World Scientific Publishing Co. Pte. Ltd.
Esmaeili, M., Gulliver, T. A., & Kakhbod, A. (2009). The Golden mean, Fibonacci matrices and partial weakly super-increasing sources. Chaos, Solitons and Fractals, 42, 435–440.
Goldberg, D. E. (1989). Genetic algorithms in search, optimization and machine learning. Addison-Wesley.
Goldberg, D. E., & Deb, K. (1991). A comparative analysis of selection schemes used in genetic algorithms. In G. J. E. Rawlins (Ed.), Foundations of genetic algorithms (pp. 69–93).
Grefenstette, J., & Baker, J. (1989). How genetic algorithms work: A critical look at implicit parallelism. In D. Schaffer (Ed.), Proceedings of the third international conference on genetic algorithms (pp. 20–27). Morgan Kaufmann Publishers.
Hamzaçebi, C. (2008). Improving genetic algorithms' performance by local search for continuous function optimization. Applied Mathematics and Computation, 196(1), 309–317.
Hancock, P. (1994). An empirical comparison of selection methods in evolutionary algorithms. In T. C. Fogarty (Ed.), Evolutionary computing: AISB workshop, Leeds, U.K., April 11–13, 1994, selected papers (pp. 80–94). Berlin Heidelberg: Springer.
Holland, J. (1975). Adaptation in natural and artificial systems. Ann Arbor, MI: The University of Michigan Press.
Iosa, M., Morone, G., & Paolucci, S. (2018). Phi in physiology, psychology and biomechanics: The golden ratio between myth and science. BioSystems, 165, 31–39.
Jackson, P. (1998). Introduction to expert systems (3rd ed.). Addison Wesley. ISBN 978-0-201-87686-4.
Julian, F. V. V., Olga, A. B., Nikolaj, R. B., Bowyer, A., & Pahl, A. K. (2006). Biomimetics: Its practice and theory. Journal of The Royal Society Interface, 3, 471–482.
Kenneth, F. (2003). Fractal geometry: Mathematical foundations and applications. John Wiley & Sons. ISBN 0-470-84862-6.
Shekhawat, K. (2015). Why golden rectangle is used so often by architects: A mathematical approach. Alexandria Engineering Journal, 54, 213–222.
Li, F. Z., Zhou, C. X., He, R., Xu, Y., & Yan, M. L. (2015). A novel fitness allocation algorithm for maintaining a constant selective pressure during GA procedure. Neurocomputing, 148, 3–16.
Loai, M. D. (2012). Geometric proportions: The underlying structure of design process for Islamic geometric patterns. Frontiers of Architectural Research, 1, 380–391.
Lu, Y. (2003). A Golden Section approach to optimization of automotive friction materials. Journal of Materials Science, 38(5), 1081–1085.
Mandelbrot, B. B. (1982). The fractal geometry of nature. New York: W. H. Freeman and Company.
Mühlenbein, H., & Schlierkamp-Voosen, D. (1993). Predictive models for the breeder genetic algorithm: Continuous parameter optimization. Evolutionary Computation, 1(1), 25–49.
Nandar, L., & Ponnuthurai, N. S. (2015). Heterogeneous comprehensive learning particle swarm optimization with enhanced exploration and exploitation. Swarm and Evolutionary Computation, 24, 11–24.
Newell, A. C., & Pennybacker, M. (2013). Fibonacci patterns: Common or rare? Procedia IUTAM, 9, 86–109.
Noraini, M. R., & Geraghty, J. (2011). Genetic algorithm performance with different selection strategies in solving TSP. In Proceedings of the World Congress on Engineering 2011: Vol. II. WCE 2011, July 6–8.
Pascal, C., Vassiliev, S., Houghten, S., & Bruce, D. (2011). Genetic algorithm with alternating selection pressure for protein side-chain packing and pK(a) prediction. Biosystems, 105(3), 263–270.
Ramírez, J. L., Rubiano, G. N., & De Castro, R. (2014). A generalization of the Fibonacci word fractal and the Fibonacci snowflake. Theoretical Computer Science, 528, 40–56.
Shilane, D., Martikainen, J., Dudoit, S., & Ovaska, S. (2008). A general framework
196 E. Cuevas et al. / Expert Systems With Applications 106 (2018) 183–196

for statistical performance comparison of evolutionary computation algorithms. Sonya, Q., & William, G. (2010). Bionics—An inspiration for intelligent manufacturing
Information Sciences, 178, 2870–2879. and engineering. Robotics and Computer-Integrated Manufacturing, 26, 616–621.
Sigalotti, L., & Mejias, A. (2006). The golden ratio in special relativity, Chaos,. Solitons Walser, H. (2001). The Golden Section. The Mathematical Association of America.
and Fractals, 30, 521–524.

You might also like