Springer, 2005. 290 p.
Randomization has become a standard approach in algorithm design. Efficiency and simplicity are the main features of randomized algorithms that have often made randomization a miraculous springboard for solving complex problems in various applications. Especially in the areas of communication, cryptography, data management, and discrete optimization, randomization tends to be an indispensable part of the development of software systems. We know several situations and algorithmic tasks for which any deterministic approach requires so much computer work that it cannot be performed in practice, but for which randomized methods can be applied successfully. This huge gain of going from an intractable amount of computer work (computational complexity) to the short computations of randomized algorithms is paid for by the risk of computing a wrong output. But the probability that a randomized algorithm executes an erroneous computation can often be reduced below the probability of winning the first prize in a lottery on the first attempt. This means that one pays for saving a huge amount of computing time with a very small loss in the degree of reliability. How to orchestrate such large quantitative jumps in computational complexity through small relaxations of the reliability (correctness) constraints, and why such big gains for small prices are possible at all, are the topics of this book.
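To make the scale of this trade-off concrete, here is a small worked estimate of our own (an illustration under stated assumptions, not a figure quoted from the book): suppose a randomized algorithm has one-sided error, i.e., a single run may output a wrong answer with probability at most 1/2, and its runs use independent random bits. Running it k times and taking the favorable answer whenever some run produces one, a wrong output survives only if all k runs err, so

\[
\Pr[\text{wrong output after } k \text{ independent runs}] \;\le\; \left(\tfrac{1}{2}\right)^{k},
\qquad \left(\tfrac{1}{2}\right)^{60} \approx 8.7 \cdot 10^{-19},
\]

which is far below typical first-prize lottery odds of roughly one in \(10^{7}\) to \(10^{8}\), while the overall running time grows only by the factor k.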
Despite many spectacular and surprising results and successful applications of randomized algorithms, the basic concepts of randomization are not sufficiently broadcast in academic education. One of the main reasons for this unfortunate circumstance is that textbooks for introductory courses lack simple explanations and transparent presentations of the discoveries in randomized computing that would be accessible even to non-scientists and beginners. This is usually the situation when the principal contributions and concepts of a scientific discipline are not yet recognized as paradigms in the broad community, or are even considered too difficult to be presented in basic courses. The aim of this textbook is to close this gap. We focus on a transparent explanation of the paradigms of randomized algorithm design. This first book of our series on the design and analysis of randomized algorithms provides a readable introduction to the topic rather than an exhaustive technical survey of the best and most important results achieved. By providing an accessible ticket to randomization, we would like to encourage colleagues who do not work in the area of randomization to present some randomized concepts in their courses, or even to offer courses devoted to randomized algorithm design. In this way we hope to contribute to speeding up the inclusion of paradigmatic results on randomized computing in the educational mainstream of computer science.
The didactic method of this book is similar to that of our textbook Theoretical Computer Science. The main strategies are simplicity, transparency, and "less is sometimes more." Especially in this first book of the series, clarity takes priority over presenting the current state of research and development. When a transparent argument for a weaker result conveys the idea succinctly, we prefer it to a strong but technically demanding and confusing argument for the best known result. Throughout the book we proceed systematically, in small steps, moving from the simple to the complicated, so as to avoid any gaps in the reasoning. We do not try to present as many results as possible; instead, we take the time and space to explain what we want to explain in detail, and also in the general context of our scientific discipline. We also dedicate time to the development of informal ideas and ways of thinking in this area.
Fundamentals
Foiling the Adversary
Fingerprinting
Success Amplification and Random Sampling
Abundance of Witnesses
Optimization and Random Rounding
Fundamentals of Mathematics