Galushkin A.I. Neural Networks Theory

Springer, 2007. 402 p.
Professor A. I. Galushkin’s monograph Neural Networks Theory appears at a time when the theory has achieved maturity and is the fulcrum of a vast literature. Nevertheless, Professor Galushkin’s work is of great importance because it serves a special purpose, which is explained in what follows.
The roots of neural networks theory go back to the pioneering work of McCulloch and Pitts in the early 1940s. As a graduate student at MIT and later as an instructor at Columbia University, I was a witness to the birth of the digital world that took place in the years following the end of World War II and the beginning of the Cold War. To me, the definitive event was Shannon’s first lecture on information theory, which took place in New York in 1946. I attended his lecture and was utterly fascinated by what I heard. Wiener’s cybernetics and the invention of the transistor were the other defining events which marked the debut of the information revolution and the era of machine intelligence. The pioneering work of McCulloch and Pitts was too embryonic to attract much attention.
The work of McCulloch and Pitts was followed in the early 1950s by the development of threshold logic for pattern classification and automata theory for systems analysis. These developments were the backdrop for Frank Rosenblatt’s invention of the perceptron – the forerunner of multilayer neural networks. Frank Rosenblatt was a visionary who believed that the perceptron could perform miracles. Unfortunately, his lectures were hard to follow and not persuasive. His revolutionary ideas were not accorded recognition during his lifetime, which ended prematurely.
In the early 1960s at Moscow’s famed Institute of Automatics and Telemechanics, Ya. Z. Tsypkin, M. A. Aizerman and others began to develop a theory of adaptive systems which led to the initiation of research on neural networks in the Soviet Union and to the later work of Vapnik and Chervonenkis on support vector machines and kernel methods. Professor Galushkin was a student of Ya. Z. Tsypkin and has played a pivotal role in the development of neural networks theory and its applications in the Soviet Union ever since.
Development of neural networks theory in the Soviet Union paralleled its development in the West and, in some areas, especially in the realm of back propagation, was ahead of it. A detailed comparison and an overview are presented in Chap. 17 of Professor Galushkin’s work.
The extensive development of neural networks theory in the Soviet Union was largely unknown in the Western world. One of the objectives of Professor Galushkin’s work is to bring this fact to light. In this perspective, Professor Galushkin’s monograph serves an important purpose. But perhaps more importantly, his work stands out as an authoritative, comprehensive and up-to-date account of neural networks theory and its wide-ranging applications. Particularly worthy of note are Professor Galushkin’s expositions of optimal models of neural networks, structure optimization by topological characteristics, continual neural networks, optimal neural networks for multidimensional signals, multivariable function extremum search algorithms, random search algorithms for local and global extrema, implementation of minimum average risk function criteria, closed-loop neural networks operating on nonstationary patterns, selection of initial conditions for neural network adjustment, design of neural networks for matrix inversion problems, informative feature selection, multilayer neural network functional reliability, and neural network diagnostics. In these expositions, there is much that is new and not readily found in the Western literature.
There is a significant issue which is not addressed in Professor Galushkin’s work, namely, the role of neural networks theory in soft computing. In science, as in most other realms of human activity, there is a tendency to adopt a particular methodology and march under its banner, in the belief that this methodology is superior to all others.
The principal thesis of soft computing is that there is much to be gained by forming an alliance of various computing methodologies and using them in combination rather than in a stand-alone mode. In soft computing, the principal members of the alliance are fuzzy logic, neurocomputing, evolutionary computing, probabilistic computing and machine learning, and the principal objective is to provide a foundation for the conception, design and utilization of intelligent systems. In recent years, a combination which has received a great deal of attention is that of neurofuzzy systems. In such systems, the capability of neural networks to deal with classification, identification and adaptation is combined with the capability of fuzzy systems to deal with imprecision, uncertainty and partiality of information. As a result of synergism of fuzzy logic and neural networks theory, many applications exist in which better performance can be achieved through the use of a neurofuzzy system than through the use of a system which is strictly neuro or strictly fuzzy.
Professor Galushkin’s monograph has many unique features that in totality make his work an important contribution to the literature of neural networks theory. He and his publisher deserve profuse thanks and congratulations from all who are seriously interested in the foundations of neural networks theory, its evolution and its current status.
Lotfi A. Zadeh
Professor in the Graduate School
Director, Berkeley Initiative in Soft Computing (BISC)
University of California, Berkeley
May 20
Contents:
  • The Structure of Neural Networks
  • Optimal Models of Neural Networks
  • Adaptive Neural Networks
  • Neural Network Reliability and Diagnostics