Springer, 2013. — 454 p.
Evolutionary algorithms (EAs) are generic heuristics inspired by natural collective behavior that can be applied to solve very difficult optimization problems. Applications include engineering, scheduling, routing, and assignment problems, finance and investment problems, and the analysis of complex social behavior, to name a few.
One of the most important features of EAs is that they are population-based search algorithms, i.e., they work on one or more sets of candidate solutions (individuals). As a result, EAs deal well with the exploration vs. exploitation (EvE) trade-off and are ideally suited to massively parallel execution at the operator, individual, and population levels; they can therefore efficiently exploit general-purpose graphics processing units (GPGPUs) as well as massively parallel supercomputers. Because EAs are typically applied to complex problems, they may require a long time to obtain good solutions, or a large amount of computational resources to obtain high-quality solutions within a given time. For this reason, researchers have attempted to accelerate EAs through parallel execution since the earliest studies.
Many traditional parallel EAs run on multi-core machines, massively parallel cluster machines, or grid computing environments. However, recent advances in scientific computing have made it possible to use GPGPUs for parallel EAs. GPGPUs are low-cost, massively parallel, many-core processors that can execute thousands of threads concurrently in a single-program, multiple-data (SPMD) manner. With the low-cost GPGPUs found in ordinary PCs, it is becoming possible to use parallel EAs to solve optimization problems of all sizes.
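To make this mapping concrete, the following CUDA sketch (not taken from the book; the kernel name evaluateSphere, the sphere benchmark fitness, and parameters such as popSize and dim are assumptions chosen for illustration) shows the most common EA/GPGPU pattern, individual-level parallelism: the same kernel runs in thousands of threads in SPMD fashion, each thread evaluating the fitness of one candidate solution. Variation operators can be parallelized with the same one-thread-per-individual pattern, and whole sub-populations can be distributed across devices for population-level parallelism.

#include <cstdio>
#include <cuda_runtime.h>

// Illustrative fitness kernel: one thread evaluates one individual.
// Each individual is a vector of `dim` genes; the fitness here is the
// simple sphere function (sum of squares), used only as an example.
__global__ void evaluateSphere(const float *population, float *fitness,
                               int popSize, int dim) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // index of this individual
    if (i >= popSize) return;                       // guard for the last partial block
    float sum = 0.0f;
    for (int g = 0; g < dim; ++g) {
        float x = population[i * dim + g];
        sum += x * x;
    }
    fitness[i] = sum;
}

int main() {
    const int popSize = 100000, dim = 32;           // thousands of threads in flight
    float *dPop, *dFit;
    cudaMalloc(&dPop, popSize * dim * sizeof(float));
    cudaMalloc(&dFit, popSize * sizeof(float));
    cudaMemset(dPop, 0, popSize * dim * sizeof(float));  // placeholder population

    // SPMD launch: every thread runs the same kernel code on its own data.
    int threads = 256;
    int blocks = (popSize + threads - 1) / threads;
    evaluateSphere<<<blocks, threads>>>(dPop, dFit, popSize, dim);
    cudaDeviceSynchronize();

    cudaFree(dPop);
    cudaFree(dFit);
    return 0;
}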
Decreasing execution time by orders of magnitude opens new possibilities for the many researchers and EA users who have until now been constrained by the limited computing power of CPUs. At the time of writing, the computing power of current GPGPU cards is on the order of 10 TFLOPS, turning any PC into a high-performance parallel computer. Essentially, this gives anyone the processing power previously available only to those with access to expensive supercomputers. The benefit is not only in solving large problems: many small problems can also be run many times, for example to tune parameters.
The impact on optimization and inverse problem solving should be huge, and this book should be of interest to anyone involved with EAs, including researchers, scientists, professionals, teachers, and students in the field.
Although the parallelism of EAs is well suited to SPMD-based GPGPUs, many issues remain to be resolved in the design and implementation of EAs on GPUs. For example, EAs rely on a certain degree of randomness, which can introduce branch divergence that degrades performance on SIMD cores. Some chapters of this book propose solutions to such issues for solving large-scale optimization problems.
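The sketch below (again illustrative rather than taken from the book; the kernel names, the 0.3 mutation rate, and the step size are assumptions) shows how per-thread randomness causes divergence: in the first kernel, threads of the same warp take different sides of a data-dependent branch, so the SIMD hardware serializes both paths. A common mitigation, shown in the second kernel, is to make the update branchless so that all threads execute the same instruction stream.

#include <cuda_runtime.h>
#include <curand_kernel.h>

// One RNG state per thread/gene.
__global__ void initStates(curandState *states, int n, unsigned long long seed) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) curand_init(seed, i, 0, &states[i]);
}

// Divergent version: whether a gene is mutated depends on a per-thread
// random draw, so threads within one 32-wide warp may take different
// branches and the hardware serializes both paths.
__global__ void mutateDivergent(float *genes, curandState *states,
                                int n, float rate, float step) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    curandState s = states[i];
    if (curand_uniform(&s) < rate)                      // data-dependent branch
        genes[i] += step * (curand_uniform(&s) - 0.5f);
    states[i] = s;
}

// Branchless version: the random draw only scales the update, so all
// threads execute the same instructions (the ternary typically compiles
// to a select/predicated instruction rather than a divergent branch).
__global__ void mutateBranchless(float *genes, curandState *states,
                                 int n, float rate, float step) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    curandState s = states[i];
    float m = (curand_uniform(&s) < rate) ? 1.0f : 0.0f;
    genes[i] += m * step * (curand_uniform(&s) - 0.5f);
    states[i] = s;
}

int main() {
    const int n = 1 << 20;
    float *genes;
    curandState *states;
    cudaMalloc(&genes, n * sizeof(float));
    cudaMalloc(&states, n * sizeof(curandState));
    cudaMemset(genes, 0, n * sizeof(float));

    int threads = 256, blocks = (n + threads - 1) / threads;
    initStates<<<blocks, threads>>>(states, n, 1234ULL);
    mutateDivergent<<<blocks, threads>>>(genes, states, n, 0.3f, 0.1f);
    mutateBranchless<<<blocks, threads>>>(genes, states, n, 0.3f, 0.1f);
    cudaDeviceSynchronize();

    cudaFree(genes);
    cudaFree(states);
    return 0;
}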
Part I Tutorials
Why GPGPUs for Evolutionary Computation?
Understanding NVIDIA GPGPU Hardware
Automatic Parallelization of EC on GPGPUs and Clusters of GPGPU Machines with EASEA and EASEA-CLOUD
Part II Implementations of Various EAs
Generic Local Search (Memetic) Algorithm on a Single GPGPU Chip
arGA: Adaptive Resolution Micro-genetic Algorithm with Tabu Search to Solve MINLP Problems Using GPU
An Analytical Study of Parallel GA with Independent Runs on GPUs
Many-Threaded Differential Evolution on the GPU
Scheduling Using Multiple Swarm Particle Optimization with Memetic Features on Graphics Processing Units
ACO with Tabu Search on GPUs for Fast Solution of the QAP
New Ideas in Parallel Metaheuristics on GPU: Systolic Genetic Search
Genetic Programming on GPGPU Cards Using EASEA
Cartesian Genetic Programming on the GPU
Implementation Techniques for Massively Parallel Multi-objective Optimization
Data Mining Using Parallel Multi-objective Evolutionary Algorithms on Graphics Processing Units
Part III Applications
Large-Scale Bioinformatics Data Mining with Parallel Genetic Programming on Graphics Processing Units
GPU-Accelerated High-Accuracy Molecular Docking Using Guided Differential Evolution
Using Large-Scale Parallel Systems for Complex Crystallographic Problems in Materials Science
Artificial Chemistries on GPU
Acceleration of Genetic Algorithms for Sudoku Solution on Many-Core Processors