Hu X., Balasubramaniam P. (eds.) Recurrent Neural Networks

  • pdf file
  • size 7.71 MB
InTech, 2008. 410 p.
Research on neural networks experienced several ups and downs in the 20th century. The most recent resurgence is believed to have been initiated by several seminal works of Hopfield and Tank in the 1980s, and this upsurge has persisted for three decades. The Hopfield neural networks, whether of the discrete or continuous type, are in fact recurrent neural networks (RNNs). The hallmark of an RNN, in contrast to feedforward neural networks, is the existence of connections from posterior layer(s) to anterior layer(s), or connections among neurons within the same layer. Because of these connections, the networks become dynamic systems, which brings many promising capabilities that their feedforward counterparts do not possess. One of the most evident capabilities of RNNs is that they can handle temporal information directly and naturally, whereas feedforward networks must first convert patterns from the temporal domain into the spatial domain for further processing. Two other distinguishing capabilities of RNNs are associative memory and optimization, which were initially revealed by Hopfield and Tank.
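The associative-memory behavior mentioned above can be illustrated in a few lines of code. The sketch below is not taken from the book; it implements the classical discrete-time Hopfield recurrence s(t+1) = sign(W s(t)) with the standard outer-product (Hebbian) weight rule, and the specific pattern, weights, and synchronous update schedule are illustrative assumptions only. It shows how the feedback connections turn the network into a dynamic system whose fixed point recalls a stored pattern from a corrupted input.

import numpy as np

# Minimal discrete-time Hopfield-style recurrence (illustrative sketch only).
# The state s feeds back into itself through the symmetric weight matrix W,
# so the network evolves as a dynamic system rather than a one-shot mapping.

# Store one pattern with the classical outer-product (Hebbian) rule.
pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0.0)            # no self-connections

# Start from a corrupted version of the stored pattern (two bits flipped).
s = pattern.copy()
s[[1, 4]] *= -1

for step in range(10):
    s_new = np.sign(W @ s)          # synchronous update: s(t+1) = sign(W s(t))
    s_new[s_new == 0] = 1
    if np.array_equal(s_new, s):    # reached a fixed point (associative recall)
        break
    s = s_new

print("recovered stored pattern:", np.array_equal(s, pattern))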
The field of RNNs has evolved rapidly in recent years. It has become a fusion of a number of research areas in engineering, computer science, mathematics, artificial intelligence, operations research, systems theory, biology, and neuroscience. RNNs have been widely applied to control, optimization, pattern recognition, image processing, signal processing, and related problems. The aim of the book is to bring together reputable researchers from different countries in order to provide comprehensive coverage of advanced and modern topics in RNNs not yet reflected in other books. This collective volume comprises 18 contributions submitted by 51 authors from 16 different countries and regions. It covers most of the current mainstreams of RNN research, ranging from human cognitive behavior modeling and dynamic system identification and control to temporal pattern recognition and classification, optimization, and stability analysis. According to these themes, the 18 contributions are grouped into five categories, corresponding to the five parts of the book:
Aperiodic (Chaotic) Behavior in RNN with Homeostasis as a Source of Behavior Novelty: Theory and Applications
Biological Signals Identification by a Dynamic Recurrent Neural Network: from Oculomotor Neural Integrator to Complex Human Movements and Locomotion
Linguistic Productivity and Recurrent Neural Networks
Recurrent Neural Network Identification and Adaptive Neural Control of Hydrocarbon Biodegradation Processes
Design of Self-Constructing Recurrent-Neural-Network-Based Adaptive Control
Recurrent Fuzzy Neural Networks and Their Performance Analysis
Recurrent Interval Type-2 Fuzzy Neural Network Using Asymmetric Membership Functions
Rollover Control in Heavy Vehicles via Recurrent High Order Neural Networks
A New Supervised Learning Algorithm of Recurrent Neural Networks and L2 Stability Analysis in Discrete-Time Domain
Application of Recurrent Neural Networks to Rainfall-runoff Processes
Recurrent Neural Approach for Solving Several Types of Optimization Problems
Applications of Recurrent Neural Networks to Optimization Problems
Neurodynamic Optimization: Towards Nonconvexity
An Improved Extremum Seeking Algorithm Based on the Chaotic Annealing Recurrent Neural Network and Its Application
Stability Results for Uncertain Stochastic High-Order Hopfield Neural Networks with Time Varying Delays
Dynamics of Two-Dimensional Discrete-Time Delayed Hopfield Neural Networks
Case Studies for Applications of Elman Recurrent Neural Networks
Partially Connected Locally Recurrent Probabilistic Neural Networks