Discussion
Genetic algorithms are a very flexible form of optimization and can be applied in a generic fashion to most problems. A small amount of work is required to choose a representation and write the conversion routines, but this is acceptable in most cases.
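To make this concrete, here is a minimal sketch in Python of what such conversion routines might look like. The bit-string representation and the `encode`/`decode` functions are assumptions chosen purely for illustration, mapping a genotype to a pair of real-valued problem parameters:

```python
# Sketch of genotype <-> phenotype conversion routines, assuming a
# hypothetical problem whose solution is a pair of real parameters,
# each encoded as a 16-bit unsigned integer.

BITS = 16
SCALE = (1 << BITS) - 1  # 65535, the largest 16-bit value

def decode(genotype, lo=-5.0, hi=5.0):
    """Convert a flat list of bits into floats in [lo, hi]."""
    params = []
    for i in range(0, len(genotype), BITS):
        raw = int("".join(str(b) for b in genotype[i:i + BITS]), 2)
        params.append(lo + (hi - lo) * raw / SCALE)
    return params

def encode(params, lo=-5.0, hi=5.0):
    """Convert floats in [lo, hi] back into a flat list of bits."""
    genotype = []
    for p in params:
        raw = round((p - lo) / (hi - lo) * SCALE)
        genotype.extend(int(b) for b in format(raw, f"0{BITS}b"))
    return genotype
```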
Genetic algorithms tend to do well at avoiding local minima, and given time can often find a near-optimal solution. This is due to their global nature; the search is not restricted to certain areas of the search space.
In many cases, it's possible to extend evolutionary algorithms by designing custom genetic operators. This allows AI engineers to inject their domain knowledge to improve the quality or speed of convergence, as sketched below.
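As an illustration, the following sketch shows a custom mutation operator for a hypothetical path-planning problem. Instead of randomizing genes blindly, it nudges waypoints toward their neighbors, a domain insight that favors smooth paths; the representation and names are assumptions for the example:

```python
import random

def smooth_mutation(path, rate=0.1):
    """Domain-aware mutation for a path represented as a list of
    (x, y) waypoints: rather than replacing a coordinate at random,
    nudge a waypoint toward the midpoint of its neighbors, which
    tends to produce smoother (and often fitter) candidate paths."""
    child = list(path)
    for i in range(1, len(child) - 1):          # keep endpoints fixed
        if random.random() < rate:
            (px, py), (nx, ny) = child[i - 1], child[i + 1]
            mx, my = (px + nx) / 2.0, (py + ny) / 2.0
            x, y = child[i]
            t = random.uniform(0.25, 0.75)      # move partway only
            child[i] = (x + t * (mx - x), y + t * (my - y))
    return child
```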
On the other hand, although genetic algorithms can be assisted by custom operators, they cannot exploit deeper properties of the problem (for instance, mathematical insights) to solve it more efficiently.
Genetic algorithms perform well on a global scale, but are not very efficient at local optimization (that is, making small changes to a solution to improve it). Mutation provides some of this capability, but the improvements generally accumulate slowly over many generations. Greedy hill-climbing techniques work better in such cases.
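For comparison, a greedy hill climber that polishes a single solution might look like the sketch below; `fitness` and `neighbor` are placeholders for problem-specific routines, not part of any particular library:

```python
def hill_climb(solution, fitness, neighbor, iterations=1000):
    """Greedy local search: repeatedly try a small change and keep
    it only if it improves fitness. Often used to polish the best
    individual found by a genetic algorithm."""
    best, best_score = solution, fitness(solution)
    for _ in range(iterations):
        candidate = neighbor(best)           # small local change
        score = fitness(candidate)
        if score > best_score:               # keep only improvements
            best, best_score = candidate, score
    return best
```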
Randomness is the essence of genetic algorithms, so their behavior can be hard to predict. In one run they may find the right solution in a few generations; in another, it may take entire dynasties! It's not a good idea to rely on randomness when a solution is required quickly.
Genetic algorithms require entire populations of solutions to work. Even with simple genotypes, this becomes costly very quickly, in both memory and computation. For complex problems, it's often infeasible to run the optimization fast enough for it to be interactive.
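To get a feel for the scale of the cost, consider a rough back-of-envelope estimate; every number below is an illustrative assumption, not a measurement:

```python
# Rough, illustrative cost estimate (all numbers are assumptions).
population   = 200          # individuals per generation
genome_bytes = 4 * 1024     # 4 KB per genotype
eval_ms      = 5.0          # milliseconds to evaluate one individual

memory_kb = population * genome_bytes / 1024
gen_ms    = population * eval_ms

print(f"Memory per generation: ~{memory_kb:.0f} KB")
print(f"Time per generation:   ~{gen_ms:.0f} ms "
      f"(~{1000 / gen_ms:.1f} generations/second)")
```

Even these modest assumptions yield only about one generation per second, which explains why interactive rates are so hard to reach for complex problems.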