Springer, 1994. 322 pp.
The third volume of the Physics of Neural Networks series.
One of the most challenging and fascinating problems of the theory of neural nets is that of asymptotic behavior, of how a system behaves as time proceeds. This is of particular relevance to many practical applications. Here we focus on association, generalization, and representation. We turn to the last topic first.
The introductory chapter, "Global Analysis of Recurrent Neural Networks" by Andreas Herz, presents an in-depth analysis of how to construct a Lyapunov function for various types of dynamics and neural coding. It includes a review of his recent work with John Hopfield on integrate-and-fire neurons with local interactions.
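To make the notion of a Lyapunov function concrete, here is a minimal sketch, not Herz's construction and with purely illustrative parameters: for a classic Hopfield network with symmetric couplings and zero self-interaction, the energy E = -(1/2) s^T J s never increases under asynchronous sign updates.

    import numpy as np

    rng = np.random.default_rng(0)
    N = 50

    # Symmetric couplings with zero diagonal -- the classic conditions
    # under which asynchronous dynamics admit a Lyapunov function.
    J = rng.normal(size=(N, N))
    J = (J + J.T) / 2
    np.fill_diagonal(J, 0.0)

    def energy(s):
        # Hopfield energy E = -1/2 s^T J s
        return -0.5 * s @ J @ s

    s = rng.choice([-1.0, 1.0], size=N)
    E = energy(s)
    for _ in range(10 * N):
        i = rng.integers(N)              # asynchronous: one unit at a time
        s[i] = np.sign(J[i] @ s) or 1.0  # align with local field; sign(0) -> +1
        E_new = energy(s)
        assert E_new <= E + 1e-8         # the energy never increases
        E = E_new
    print("relaxed to a local minimum, E =", E)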
The chapter "Receptive Fields and Maps in the Visual Cortex: Models of Ocular Dominance and Orientation Columns," by Ken Miller, explains how the primary visual cortex may asymptotically gain its specific structure through a self-organization process based on Hebbian learning. His argument has since proven amenable to considerable generalization.
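As a cartoon of the mechanism, far simpler than Miller's actual model and with purely illustrative numbers, take two afferents (left and right eye) whose weights grow under a linear Hebbian rule, dw/dt = Cw, with subtractive normalization. Because within-eye correlations exceed between-eye correlations, the difference mode grows fastest and one eye takes over:

    import numpy as np

    # Within-eye correlation stronger than between-eye correlation.
    c_same, c_opp = 1.0, 0.2
    C = np.array([[c_same, c_opp],
                  [c_opp, c_same]])   # input correlation matrix

    w = np.array([0.51, 0.49])        # nearly balanced initial weights
    dt = 0.01
    for _ in range(2000):
        w += dt * C @ w               # linear Hebbian growth dw/dt = C w
        w -= (w.sum() - 1.0) / 2      # subtractive normalization: w_L + w_R fixed
        w = np.clip(w, 0.0, 1.0)      # keep weights in [0, 1]

    print("left eye:", w[0], "right eye:", w[1])   # one eye dominates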
Association has long been a key issue in the theory of neural nets. Local learning rules are quite convenient from the point of view of computer science, but they have a serious drawback: they do not see global correlations. In order to produce an extensive storage capacity at zero threshold, the couplings should vanish on average. Accordingly, there is a deep truth behind Willshaw's slogan: "What goes up must come down." Meanwhile we have a whole zoo of local learning rules. In their chapter, "Associative Data Storage and Retrieval in Neural Networks," Palm and Sommer transform this zoo into a well-organized structure using nothing more than a simple signal-to-noise-ratio analysis.
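The flavor of such an analysis fits in a few lines. The sketch below, a plain Hopfield-type Hebb rule rather than anything specific to the chapter, compares the signal (a stored pattern's own contribution to the local field) with the crosstalk noise from the other patterns:

    import numpy as np

    rng = np.random.default_rng(1)
    N, P = 500, 25                    # N neurons, P random patterns

    xi = rng.choice([-1.0, 1.0], size=(P, N))
    J = xi.T @ xi / N                 # Hebbian couplings
    np.fill_diagonal(J, 0.0)

    # Local field when pattern 0 is presented: signal plus crosstalk.
    h = J @ xi[0]
    signal = np.mean(h * xi[0])       # ~1, the retrieved pattern's contribution
    noise = np.std(h * xi[0])         # crosstalk from the other P-1 patterns
    print(f"signal = {signal:.2f}, noise = {noise:.2f}, "
          f"SNR = {signal / noise:.1f} (theory: sqrt(N/P) = {np.sqrt(N / P):.1f})")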
Hebb's epoch-making book The Organization of Behavior appeared in 1949. It proposed one of the most famous local learning rules, viz., the Hebbian one. It was preceded by the 1943 paper of McCulloch and Pitts, famous, not to say notorious, for its formal logic. In "Inferences Modeled with Neural Networks," Carmesin takes up this lead and integrates it with the Hebbian approach, viz., ideas on assemblies and coherence. In so doing he provides a natural transition from "association" to "generalization." Generalization means that, on the basis of certain known data, one extrapolates to a new set of data. There has been quite a bit of progress in formally understanding the process of generalization, and Opper and Kinzel's chapter "Statistical Mechanics of Generalization" summarizes it. It starts from scratch, assuming only some basic knowledge of statistical mechanics.
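A minimal teacher-student sketch, in the spirit of such analyses though not taken from the chapter, shows how generalization improves with the number of examples. For perceptrons, the generalization error is arccos(R)/pi, where R is the overlap between the student's and the teacher's weight vectors:

    import numpy as np

    rng = np.random.default_rng(2)
    N = 100                              # input dimension
    teacher = rng.normal(size=N)
    teacher /= np.linalg.norm(teacher)   # the unknown rule to be inferred

    for P in (10, 100, 1000):            # number of training examples
        X = rng.normal(size=(P, N))
        y = np.sign(X @ teacher)         # labels supplied by the teacher
        student = y @ X                  # Hebbian student: sum of labeled inputs
        R = student @ teacher / np.linalg.norm(student)
        eps = np.arccos(R) / np.pi       # generalization error of a perceptron
        print(f"P = {P:4d}: generalization error = {eps:.3f}")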
Bayes stands for conditional probabilities. For example, what is the probability of sunshine on the American East Coast tomorrow, given that today's sky is cloudless? The clause starting with "given that ..." is a condition, and the question entails an extrapolation. Adding one further condition, viz., that it is summer, the probability in question is close to one. MacKay presents a careful and detailed exposition of the beneficial influence of "Bayesian Methods for Backpropagation Networks"; Bayes' rule itself is written out below.

The last two chapters return to representation. Optical character recognition is a well-known playground for neural-network ideas. The chapter "Penacee: A Neural Net System for Recognizing On-Line Handwriting," by Guyon et al., aims to make the underlying concepts widely known as well. To this end, the setup is explained with great care. Their real-world examples show that an intelligently built yet relatively simple structure can give rise to excellent performance.
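For reference, and as a standard statement rather than a quotation from the chapter, Bayes' rule for a hypothesis H given data D, together with its application to network weights w given training data D, which is the starting point of Bayesian methods for backpropagation:

    P(H \mid D) = \frac{P(D \mid H)\, P(H)}{P(D)},
    \qquad
    P(\mathbf{w} \mid D) = \frac{P(D \mid \mathbf{w})\, P(\mathbf{w})}{P(D)} .

In the weather example, H is "sunshine tomorrow" and D is "cloudless today"; in MacKay's setting, the prior P(w) regularizes the network and the posterior weighs how well each weight setting explains the data.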
Robotics has long been in the realm of neural networks, and that is understandable: after all, we perform grasping movements ourselves with great ease, or rather, our motor cortex allows us to do so. Cortical ideas have also permeated robotics. In their chapter "Topology Representing Networks in Robotics," Sarkar and Schulten present a detailed algorithm for the visually guided control of grasping movements as they are performed by a highly hysteretic five-joint pneumatic robot arm. In so doing, they unfold a modified version of the manifold-representing network algorithm, a Kohonen-type approach. Here, too, governing asymptotic behavior is the algorithm's goal.
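A stripped-down sketch of the competitive Hebbian idea behind such topology-representing networks, omitting the edge aging and rank-based cooperation of the full algorithm and using purely illustrative parameters: codebook units adapt toward the samples, and the two units closest to each sample are connected.

    import numpy as np

    rng = np.random.default_rng(3)
    M, T = 20, 5000                        # codebook units, training steps
    units = rng.uniform(size=(M, 2))       # unit positions in a 2-D "workspace"
    edges = set()                          # learned topology (pairs of units)

    for t in range(T):
        x = rng.uniform(size=2)            # sample a workspace point
        d = np.linalg.norm(units - x, axis=1)
        i0, i1 = np.argsort(d)[:2]         # the two closest units
        edges.add((min(i0, i1), max(i0, i1)))  # competitive Hebb: connect winners
        lr = 0.5 * (1 - t / T)             # decaying learning rate
        units[i0] += lr * (x - units[i0])        # winner moves toward the sample
        units[i1] += 0.1 * lr * (x - units[i1])  # runner-up moves less

    print(f"{M} units, {len(edges)} topology edges learned")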
All of the chapters have one element in common: answering the question of how one can understand an algorithm or procedure theoretically. And that is what each volume of Models of Neural Networks is after.
Global Analysis of Recurrent Neural Networks
Receptive Fields and Maps in the Visual Cortex: Models of Ocular Dominance and Orientation Columns
Associative Data Storage and Retrieval in Neural Networks
Inferences Modeled with Neural Networks
Statistical Mechanics of Generalization
Bayesian Methods for Backpropagation Networks
Penacee: A Neural Net System for Recognizing On-Line Handwriting
Topology Representing Networks in Robotics