Springer, 2010. — 736 p.
Machine learning (ML) is currently one of the most fruitful fields of research, both in the proposal of new techniques and theoretical algorithms and in their application to real-life problems. From a technological point of view, the world has changed at an unexpected pace; one consequence is that high-quality, fast hardware is now available at a relatively low price. The development of systems that can adapt to their environment in a smart way has enormous practical value. Usually, such systems work by optimizing the performance of a certain algorithm or technique according to a maximization/minimization criterion, but using experimental data rather than a given “program” (in the classical sense). This way of tackling a problem usually stems from the characteristics of the task, which may be time-variant or too complex to be solved by a fixed sequence of instructions. Lately, the number of published papers, patents and practical applications related to ML has increased exponentially. One of the most attractive features of ML is that it brings together knowledge from different fields, such as pattern recognition (neural networks, support vector machines, decision trees, reinforcement learning, …), data mining (time series prediction, modeling, …), statistics (Bayesian methods, Monte Carlo methods, bootstrapping, …) and signal processing (Markov models), among others. ML therefore takes advantage of the synergy between these fields, providing robust solutions that draw on all of them.
ML is thus a multidisciplinary topic, and it needs a dedicated bibliography that gathers its different techniques. There are a number of excellent references for getting into ML, but the wide range of its applications has not been covered in any single reference book. This handbook discusses both theory and practical applications: different state-of-the-art techniques are analyzed in the first part, while a wide and representative range of practical applications is presented in the second part. The editors would like to thank the authors of the first part of the handbook for their collaboration; they accepted the suggestion of including a section on applications in their chapters.
A short introduction to the chapters is provided in the following. The first part of the handbook consists of eleven chapters.

In chapter 1, R. Xu and D. C. Wunsch II review clustering algorithms and provide some real applications; the well-chosen references reflect the authors’ deep knowledge of these techniques, confirmed by the book they published on the subject in 2008.

In chapter 2, “Principal Graphs and Manifolds,” Gorban and Zinovyev present machine learning approaches to the problem of dimensionality reduction with controlled complexity. They start with classical techniques, such as principal component analysis (PCA) and the k-means algorithm. There is a whole universe of approximants between the ‘most rigid’ linear manifolds (principal components) and the ‘most soft’ unstructured finite sets of k-means centroids. The chapter gives a brief practical introduction to methods for constructing general principal objects, i.e., objects embedded in the ‘middle’ of a multidimensional data set. The notions of self-consistency and coarse-grained self-consistency provide the general probabilistic framework for the construction of principal objects. The family of expectation-maximization algorithms, together with its nearest generalizations, is presented. Construction of principal graphs with controlled complexity is based on the graph grammar approach. In the theory of principal curves and manifolds, penalty functions were introduced to penalize deviation from linear manifolds; for branched principal objects, pluriharmonic embeddings (‘pluriharmonic graphs’) serve as the ‘ideal objects’ instead of planes, and deviation from this ideal form is penalized.

Chapter 3, “Learning Algorithms for RBF Functions and Subspace Based Functions,” by Lei Xu, overviews advances on normalized radial basis functions (RBF) and alternative mixtures-of-experts, as well as further developments towards subspace based functions (SBF) and temporal extensions. These studies are linked to a general statistical learning framework that covers not only maximum likelihood learning, featured by the EM algorithm, but also Bayesian Ying-Yang (BYY) harmony learning and rival penalized competitive learning (RPCL), featured by their automatic model selection nature, with a unified elaboration of the corresponding algorithms. Finally, remarks are made on possible trends.
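As a small illustration of the contrast chapter 2 draws between the two extremes of principal objects, the sketch below (not taken from the handbook; the toy data, the choice of k and the iteration count are illustrative assumptions) computes the first principal component of a point cloud and runs a few Lloyd iterations of k-means, whose centroid update is exactly the self-consistency step mentioned above.

```python
# Minimal sketch: PCA (the 'most rigid' linear principal object) vs. k-means
# centroids (the 'most soft' unstructured set of principal points).
# Assumptions: synthetic 2-D data, k = 3, 10 Lloyd iterations.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2)) @ np.array([[3.0, 0.0], [0.0, 0.5]])  # toy data

# First principal component via SVD of the centered data.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
pc1 = Vt[0]                # direction of maximal variance
coords = Xc @ pc1          # 1-D coordinates along the principal line

# k-means: a few Lloyd iterations; centroids act as "principal points".
k = 3
centroids = X[rng.choice(len(X), size=k, replace=False)]
for _ in range(10):
    # assign each point to its nearest centroid
    dists = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
    labels = dists.argmin(axis=1)
    # self-consistency step: move each centroid to the mean of its points
    centroids = np.array([X[labels == j].mean(axis=0)
                          if np.any(labels == j) else centroids[j]
                          for j in range(k)])

print("first principal direction:", pc1)
print("k-means centroids:\n", centroids)
```

Everything in between these two extremes (principal curves, principal graphs, elastic maps) trades off the rigidity of the linear manifold against the flexibility of unstructured centroids, which is the subject of the chapter.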
Exploring the Unknown Nature of Data: Cluster Analysis and Applications
Principal Graphs and Manifolds
Learning Algorithms for RBF Functions and Subspace Based Functions
Nature Inspired Methods for Multi-Objective Optimization
Artificial Immune Systems for Anomaly Detection
Calibration of Machine Learning Models
Classification with Incomplete Data
Clustering and Visualization of Multivariate Time Series
Locally Recurrent Neural Networks and Their Applications
Nonstationary Signal Analysis with Kernel Machines
Transfer Learning
Machine Learning in Personalized Anemia Treatment
Deterministic Pattern Mining On Genetic Sequences
Machine Learning in Natural Language Processing
Machine Learning Applications in Mega-Text Processing
FOL Learning for Knowledge Discovery in Documents
Machine Learning and Financial Investing
Applications of Evolutionary Neural Networks for Sales Forecasting of Fashionable Products
Support Vector Machine based Hybrid Classifiers and Rule Extraction thereof: Application to Bankruptcy Prediction in Banks
Data Mining Experiences in Steel Industry
Application of Neural Networks in Animal Science
Statistical Machine Learning Approaches for Sports Video Mining Using Hidden Markov Models
A Survey of Bayesian Techniques in Computer Vision
Software Cost Estimation using Soft Computing Approaches
Counting the Hidden Defects in Software Documents
Machine Learning for Biometrics
Neural Networks for Modeling the Contact Foot-Shoe Upper
Evolutionary Multi-Objective Optimization of Autonomous Mobile Robots in Neural-Based Cognition for Behavioural Robustness
Improving Automated Planning with Machine Learning