InTech, 2011. 112 p.
Recurrent Neural Networks (RNNs) are a generalization of artificial neural networks in which connections are not exclusively feed-forward: connections between units may form directed cycles, providing an implicit internal memory. This makes RNNs well suited to problems involving signals that evolve through time, since their internal memory lets them take time into account naturally. Valuable approximation results have been obtained for dynamical systems.
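The directed cycles described above can be made concrete with a minimal Elman-style recurrent cell (this sketch is illustrative and not taken from the book; the sizes and random weights are arbitrary assumptions):

```python
import numpy as np

# Minimal Elman-style recurrent cell: the hidden state h is the implicit
# internal memory, updated from the current input AND the previous state.
rng = np.random.default_rng(0)
n_in, n_hid = 3, 5
W_in = rng.normal(scale=0.1, size=(n_hid, n_in))    # input -> hidden
W_rec = rng.normal(scale=0.1, size=(n_hid, n_hid))  # hidden -> hidden (the directed cycle)
b = np.zeros(n_hid)

def step(h, x):
    # The dependence on the previous state h is what distinguishes an RNN
    # from a purely feed-forward network.
    return np.tanh(W_in @ x + W_rec @ h + b)

h = np.zeros(n_hid)
for x in rng.normal(size=(4, n_in)):  # a short input sequence
    h = step(h, x)
print(h.shape)  # (5,)
```

After processing the sequence, `h` summarizes the whole input history, which is the "implicit internal memory" the text refers to.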
During the last few years, several interesting neural-network developments have emerged, such as spiking networks and deep networks. This book shows that substantial improvements and results are also being produced in the active field of RNNs.
The first chapter surveys the many algorithms that have been applied to time series prediction. ARIMA, one of the models studied, combines autoregressive (AR), integration (differencing), and moving-average (MA) components; it is compared to an Elman RNN with four different architectures. The second chapter gives an overview of RNNs for time series prediction. The BPTT algorithm is detailed, then delayed connections are added, resulting in two new algorithms: EBPTT and CBPTT. BPTT is also enhanced through boosting, yielding much better results, especially for multi-step-ahead prediction.
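The core idea of BPTT is that the gradient of the loss with respect to a recurrent weight accumulates contributions from every time step of the unrolled recurrence. A minimal sketch for a one-unit linear RNN (an illustrative toy, not the book's algorithm) makes this explicit and checks it against a finite-difference approximation:

```python
import numpy as np

# Backpropagation through time (BPTT) for h_t = w*h_{t-1} + x_t, h_0 = 0,
# with loss L = 0.5*(h_T - y)^2. Unrolling the recurrence means dL/dw
# sums a contribution from every time step.
def bptt_grad(w, xs, y):
    hs = [0.0]                       # forward pass, storing all states
    for x in xs:
        hs.append(w * hs[-1] + x)
    dL_dh = hs[-1] - y               # dL/dh_T
    grad = 0.0
    for t in range(len(xs), 0, -1):  # backward pass through time
        grad += dL_dh * hs[t - 1]    # local term: dh_t/dw = h_{t-1}
        dL_dh *= w                   # propagate through h_t = w*h_{t-1} + x_t
    return grad

# sanity check against central finite differences
xs, y, w, eps = [1.0, -0.5, 2.0], 1.0, 0.7, 1e-6
def loss(w):
    h = 0.0
    for x in xs:
        h = w * h + x
    return 0.5 * (h - y) ** 2
numeric = (loss(w + eps) - loss(w - eps)) / (2 * eps)
print(abs(bptt_grad(w, xs, y) - numeric) < 1e-6)  # True
```

The same unrolling applies to vector-valued states; the scalar case simply keeps the bookkeeping readable.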
The third chapter presents the application of RNNs to the diagnosis of Carpal Tunnel Syndrome. The network used in this study is an Elman RNN, and the Levenberg-Marquardt learning algorithm is detailed.
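Levenberg-Marquardt blends Gauss-Newton steps with gradient descent through a damping term. A minimal sketch on a toy curve-fitting problem (the model y = a*exp(b*x) and all parameter values are illustrative assumptions, not from the book's study) shows the update rule:

```python
import numpy as np

# One Levenberg-Marquardt loop for fitting y = a*exp(b*x):
# (J^T J + lam*I) step = -J^T r blends Gauss-Newton (small lam)
# with gradient descent (large lam).
def lm_fit(x, y, p, lam=1e-2, iters=50):
    for _ in range(iters):
        a, b = p
        r = y - a * np.exp(b * x)                       # residuals
        J = np.column_stack([-np.exp(b * x),            # dr/da
                             -a * x * np.exp(b * x)])   # dr/db
        step = np.linalg.solve(J.T @ J + lam * np.eye(2), -J.T @ r)
        r_new = y - (a + step[0]) * np.exp((b + step[1]) * x)
        if np.sum(r_new ** 2) < np.sum(r ** 2):
            p = p + step
            lam *= 0.5    # good step: move toward Gauss-Newton
        else:
            lam *= 2.0    # bad step: behave more like gradient descent
    return p

x = np.linspace(0, 1, 30)
y = 2.0 * np.exp(1.5 * x)            # noiseless synthetic data
p = lm_fit(x, y, np.array([1.0, 1.0]))
print(p)
```

In network training the same scheme is applied with the Jacobian of the network outputs with respect to all weights, which is why it is popular for small and medium-sized networks like the Elman RNN used here.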
The fourth chapter describes the use of neural networks to model the hysteresis phenomena encountered in human meridian systems. Models based on the extreme learning machine (ELM), one using a non-recurrent neural network and one using an RNN, are compared.
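The distinguishing feature of an extreme learning machine is that hidden-layer weights are drawn at random and frozen, so only the output weights need fitting, by ordinary least squares. A minimal sketch (the target function and layer sizes are arbitrary illustrative choices):

```python
import numpy as np

# Extreme learning machine (ELM): random, fixed hidden weights; only the
# output weights beta are fitted, in closed form via least squares.
rng = np.random.default_rng(1)
X = np.linspace(-1, 1, 200).reshape(-1, 1)
y = np.sin(3 * X).ravel()            # arbitrary target to approximate

n_hid = 50
W = rng.normal(size=(1, n_hid))      # random input weights (never trained)
b = rng.normal(size=n_hid)
H = np.tanh(X @ W + b)               # random nonlinear hidden features
beta, *_ = np.linalg.lstsq(H, y, rcond=None)

mse = np.mean((H @ beta - y) ** 2)
print(mse)
```

Because no iterative training is involved, ELM fitting is very fast, which is the main appeal the chapter builds on when comparing the recurrent and non-recurrent variants.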
The fifth chapter shows the use of a dynamic RNN to model the dynamic control of human movement. From multiple signals (EMG and EEG), the goal is to find a mapping to the movements of the different parts of the body; some relations found by the RNN contribute to a better understanding of motor organization in the human brain. The sixth chapter proposes a paradigm for how the brain handles active interaction with its environment, based on Compact Internal Representation (CIR). RNNs are used here to learn and retrieve these CIRs and to predict the trajectories of moving obstacles.