Skorohod B.A. Diffuse Algorithms for Neural and Neuro-Fuzzy Networks. With Applications in Control Engineering and Signal Processing

Butterworth-Heinemann, 2017. — 218 p.
Diffuse Algorithms for Neural and Neuro-Fuzzy Networks: With Applications in Control Engineering and Signal Processing presents new approaches to training neural and neuro-fuzzy networks. The book is divided into six chapters. Chapter 1 reviews plant models, states the problems, and summarizes known results relevant to the subject matter of the book. Chapter 2 considers the behavior of the RLS algorithm on a finite interval. The theoretical results are illustrated by examples of solving identification, control, and signal processing problems.
Properties of the bias, the matrix of second-order moments, and the normalized average squared error of the RLS algorithm on a finite time interval are studied in Chapter 3. Chapter 4 deals with the problem of training multilayer neural and neuro-fuzzy networks with simultaneous estimation of the hidden- and output-layer parameters. The theoretical results are illustrated with examples of pattern recognition and identification of nonlinear static and dynamic plants.
Chapter 5 considers the problem of estimating the state and the parameters of discrete dynamic plants when a priori statistical information about the initial conditions is absent or incomplete. Diffuse analogues of the Kalman filter and the extended Kalman filter are obtained. Finally, Chapter 6 provides examples of the use of diffuse algorithms for solving problems in various engineering applications. This book is ideal for researchers and graduate students in control, signal processing, and machine learning.
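For orientation only, the sketch below shows the standard way a diffuse-style initialization is commonly approximated in an ordinary Kalman filter: the initial covariance is set proportional to a very large parameter, so the first updates are driven almost entirely by the data rather than by the prior. The scalar plant model and all numerical values are illustrative assumptions, not the book's diffuse analogue itself.

```python
import numpy as np

# Generic scalar Kalman filter for x_{k+1} = a*x_k + w_k, y_k = x_k + v_k,
# with a "diffuse-like" initialization P0 = mu for a very large mu, so the
# prior on the unknown initial state carries almost no information.
rng = np.random.default_rng(1)
a, q, r, mu = 0.95, 0.01, 0.1, 1e8       # illustrative model/noise parameters
x = 2.0                                  # true (unknown to the filter) initial state
x_hat, P = 0.0, mu                       # arbitrary prior mean, huge covariance
for _ in range(100):
    x = a * x + np.sqrt(q) * rng.standard_normal()       # simulate the plant
    y = x + np.sqrt(r) * rng.standard_normal()           # noisy measurement
    x_hat, P = a * x_hat, a * a * P + q                  # time update
    K = P / (P + r)                                      # Kalman gain
    x_hat, P = x_hat + K * (y - x_hat), (1.0 - K) * P    # measurement update
print(round(x, 3), round(x_hat, 3))      # final true state vs. filter estimate
```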
The purpose of this book is to present new approaches to training neural and neuro-fuzzy networks that have a separable structure. It is assumed that, in addition to the training set, a priori information is given only about the nonlinearly entering parameters. This information may be obtained from the distribution of a generating sample, a training set, or some linguistic information. For static separable models, the problem of minimizing a quadratic criterion that includes only this information is considered. Such a problem statement, together with the Gauss-Newton method (GNM) linearized around the latest estimate, leads to new online and offline training algorithms that are robust with respect to unknown a priori information about the linearly entering parameters. More precisely, the linearly entering parameters are interpreted as random variables with zero expectation and a covariance matrix proportional to an arbitrarily large parameter μ (soft-constrained initialization). Asymptotic representations of the GNM as μ→∞, which we call diffuse training algorithms (DTAs), are found. The properties of the DTAs are explored; in particular, their convergence is studied for both finite and unboundedly growing sample sizes. The specificity of the problem stems from the separable character of the observation model and from the fact that the nonlinearly entering parameters belong to a compact set, while the linearly entering parameters must be treated as arbitrary numbers.
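As a rough numerical illustration of the soft-constrained initialization idea described above (a sketch under assumed data, not the book's algorithm): for a separable model y = Φ(β)α + e with the linear parameters α treated as zero-mean with covariance μI, the regularized least-squares estimate tends, as μ grows, toward the minimum-norm least-squares solution, which is the diffuse limit that the DTAs compute directly. The Gaussian basis, centers, and noise level below are illustrative assumptions.

```python
import numpy as np

# Illustrative separable model: y = Phi(x, beta) @ alpha + noise, where the
# basis centers/width (beta) enter nonlinearly and alpha enters linearly.
rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 50)
centers = np.array([-0.5, 0.0, 0.5])                  # assumed nonlinear parameters
Phi = np.exp(-((x[:, None] - centers[None, :]) ** 2) / 0.1)
alpha_true = np.array([1.0, -2.0, 0.5])
y = Phi @ alpha_true + 0.01 * rng.standard_normal(x.size)

def soft_constrained_ls(Phi, y, mu):
    """Estimate alpha under a zero-mean prior with covariance mu * I."""
    n = Phi.shape[1]
    return np.linalg.solve(Phi.T @ Phi + np.eye(n) / mu, Phi.T @ y)

# As mu -> infinity the estimate approaches the minimum-norm least-squares
# solution, i.e. the "diffuse" limit that the training algorithms formalize.
for mu in (1e0, 1e3, 1e6):
    print(mu, np.round(soft_constrained_ls(Phi, y, mu), 4))
print("pinv:", np.round(np.linalg.pinv(Phi) @ y, 4))
```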
Diffuse Algorithms for Estimating Parameters of Linear Regression
Statistical Analysis of Fluctuations of Least Squares Algorithm on Finite Time Interval
Diffuse Neural and Neuro-Fuzzy Networks Training Algorithms
Diffuse Kalman Filter
Applications of Diffuse Algorithms