Optimization, Fitting
Differential evolution (DE) is a stochastic, population-based optimization algorithm, introduced by Storn and Price in 1996 [1,2]. It was developed to optimize real (float) parameters of a real-valued function. The general problem formulation is: for an objective function

f(x), x = (x_1, ..., x_D)   (1)

defined in the region

x_j^{min} <= x_j <= x_j^{max}, j = 1, ..., D   (2)

the minimization problem is to find x* such that

f(x*) <= f(x)   (3)

for any x in the region (2).
DE is an Evolutionary Algorithm (EA), related to Genetic Algorithms (GA) [3, 4]. The parameter vector x is called an individual (or chromosome, or genome). The objective function is also called the fitness function. The general scheme is the same for all EAs:
Fig. 1. General Evolutionary Algorithm scheme
One cycle in the above scheme is called a generation. The solution is found if some stopping criterion is met.
The canonical GAs work with strings of bits or integers (letters). Evolution Strategies (ES) and Differential Evolution both work with vectors of real numbers as representations of solutions. ES typically uses adaptive mutation rates for the vectors themselves, whereas DE builds its mutations from differences of the parameter vectors.
Definition:
x_i^g = (x_{1,i}^g, ..., x_{D,i}^g) are the parameters for the individual i (i = 1, ..., P) in the generation g (g = 1, ..., G_max).
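Before the evolution loop starts, the P individuals must be initialized inside region (2). A minimal sketch in Python/NumPy (the function name and the uniform sampling are illustrative assumptions, not taken from the text):

```python
import numpy as np

def init_population(lo, hi, P, seed=0):
    """Random initial population: P individuals x_i^0, each a
    D-vector drawn uniformly inside the box lo <= x <= hi."""
    rng = np.random.default_rng(seed)
    lo = np.asarray(lo, dtype=float)
    hi = np.asarray(hi, dtype=float)
    return lo + rng.random((P, len(lo))) * (hi - lo)
```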
Mutation:
v_i^g = x_a^g + F (x_b^g - x_c^g)

where:
  i, a, b, c are mutually different indexes of individuals,
  x_i^g is the target vector,
  v_i^g is the donor vector,
  F is the scaling (weighting) factor.
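The mutation step can be sketched as follows (a minimal NumPy version; the function name and the seeded generator are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def mutate(pop, i, F=0.8):
    """DE/rand/1 donor vector: v_i = x_a + F * (x_b - x_c).

    pop is a (P, D) array of individuals; a, b, c are mutually
    different indices, all different from the target index i.
    """
    P = len(pop)
    choices = [k for k in range(P) if k != i]
    a, b, c = rng.choice(choices, size=3, replace=False)
    return pop[a] + F * (pop[b] - pop[c])
```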
Recombination:
construct a trial vector u_i^g:

u_{j,i}^g = v_{j,i}^g   if rand_j < CR or j = n, where n = rand(D) + 1,
u_{j,i}^g = x_{j,i}^g   otherwise,

where CR is the Crossover Rate. Here rand generates random numbers in the interval [0, 1) and rand(D) generates integer numbers 0, 1, ..., D - 1. The forced position n guarantees that the trial vector differs from the target in at least one component.
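This binomial crossover can be sketched as below (0-based indices instead of the 1-based n above; names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def crossover(target, donor, CR=0.9):
    """Binomial crossover: take the donor component where
    rand_j < CR, and always at one forced position, so the
    trial vector differs from the target in at least one place."""
    D = len(target)
    n = rng.integers(D)            # forced donor position
    mask = rng.random(D) < CR
    mask[n] = True
    return np.where(mask, donor, target)
```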
Selection:
x_i^{g+1} = u_i^g   if f(u_i^g) <= f(x_i^g),
x_i^{g+1} = x_i^g   otherwise.
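The greedy selection step is a one-liner (sketch; the function name is an assumption):

```python
def select(f, target, trial):
    """Greedy selection: the trial vector replaces the target in
    the next generation only if it is at least as fit (here:
    has an objective value no larger than the target's)."""
    return trial if f(trial) <= f(target) else target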
Variants of mutation:
v_i^g = x_a^g + F (x_b^g - x_c^g)   (4)

v_i^g = x_i^g + K (x_best^g - x_i^g) + F (x_b^g - x_c^g)   (5)

v_i^g = x_i^g + K (x_a^g - x_i^g) + F (x_b^g - x_c^g)   (6)

Here K is the combination factor. For K = 1 Eq. (6) reduces to Eq. (4). Eq. (5) has the following limits:

K = 1:  v_i^g = x_best^g + F (x_b^g - x_c^g)   (7a)
K = 0:  v_i^g = x_i^g + F (x_b^g - x_c^g)   (7b)
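A sketch of these mutation variants, assuming the standard DE forms (DE/rand/1, the K-weighted target-to-best, and the K-weighted current-to-rand combination; function names are illustrative):

```python
import numpy as np

def de_rand_1(x_a, x_b, x_c, F):
    # v = x_a + F * (x_b - x_c)
    return x_a + F * (x_b - x_c)

def de_target_to_best(x_i, x_best, x_b, x_c, F, K):
    # v = x_i + K * (x_best - x_i) + F * (x_b - x_c)
    # K = 1 gives the best-based variant; K = 0 mutates around x_i.
    return x_i + K * (x_best - x_i) + F * (x_b - x_c)

def de_current_to_rand(x_i, x_a, x_b, x_c, F, K):
    # v = x_i + K * (x_a - x_i) + F * (x_b - x_c)
    # K = 1 reduces this to de_rand_1.
    return x_i + K * (x_a - x_i) + F * (x_b - x_c)
```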
Acceleration
In the case when the mutation and crossover operations no longer improve the best fitness, a steepest-descent step is applied to push the best individual towards a better point [5]:

x_best^{g+1} = x_best^g - alpha * grad f(x_best^g)

where alpha is a step size. The gradient can be estimated numerically.
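A sketch of this acceleration step, estimating the gradient by central finite differences (the step size alpha and the difference scheme are assumptions; the text does not specify them):

```python
import numpy as np

def num_grad(f, x, h=1e-6):
    """Central-difference estimate of the gradient of f at x."""
    g = np.zeros_like(x, dtype=float)
    for j in range(len(x)):
        e = np.zeros_like(x, dtype=float)
        e[j] = h
        g[j] = (f(x + e) - f(x - e)) / (2.0 * h)
    return g

def accelerate(f, x_best, alpha=0.1):
    """One steepest-descent push of the best individual:
    x_new = x_best - alpha * grad f(x_best)."""
    return x_best - alpha * num_grad(f, x_best)
```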
Recommendations
All control constants are problem-specific. According to [1], typical values are F = 0.8 for the weighting factor and CR = 0.9 for the crossover constant. In general, both should be chosen from the interval [0.5, 1].
In [6] it is recommended to choose CR from the interval [0, 1], with a typical value of CR = 0.8. The value of K should be chosen around 0.5. It is also reported that choosing F at random from the interval [0, 2] gives good results.
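Putting the pieces together, a complete DE/rand/1/bin loop with the typical settings F = 0.8 and CR = 0.9 might look like this (a minimal self-contained sketch, not the author's implementation; population size, generation count, and clipping to the box are illustrative choices):

```python
import numpy as np

def differential_evolution(f, bounds, P=20, F=0.8, CR=0.9,
                           G_max=200, seed=0):
    """Minimal DE/rand/1/bin: mutation, binomial crossover,
    greedy selection, repeated for G_max generations."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    D = len(lo)
    pop = lo + rng.random((P, D)) * (hi - lo)
    fit = np.array([f(x) for x in pop])
    for _ in range(G_max):
        for i in range(P):
            a, b, c = rng.choice([k for k in range(P) if k != i],
                                 size=3, replace=False)
            v = pop[a] + F * (pop[b] - pop[c])      # mutation
            mask = rng.random(D) < CR               # binomial
            mask[rng.integers(D)] = True            # crossover
            u = np.clip(np.where(mask, v, pop[i]), lo, hi)
            fu = f(u)
            if fu <= fit[i]:                        # selection
                pop[i], fit[i] = u, fu
    best = np.argmin(fit)
    return pop[best], fit[best]
```

For example, minimizing the sphere function f(x) = sum of x_j^2 over the box [-5, 5]^2 should drive the best fitness very close to zero.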
© Nikolai Shokhirev 2004-2005