Er M.J., Zhou Y. (eds.) Theory and Novel Applications of Machine Learning
InTech, 2009. 386 p.
Ever since computers were invented many decades ago, researchers have been trying to understand how human beings learn, and many interesting paradigms and approaches towards emulating human learning abilities have been proposed. The ability to learn is one of the central features of human intelligence, which makes it an important ingredient in both traditional Artificial Intelligence (AI) and the emerging field of Cognitive Science. Machine Learning (ML) draws upon ideas from a diverse set of disciplines, including AI, Probability and Statistics, Computational Complexity, Information Theory, Psychology and Neurobiology, Control Theory and Philosophy. ML covers broad topics including Fuzzy Logic, Neural Networks (NNs), Evolutionary Algorithms (EAs), Probability and Statistics, Decision Trees, etc. Real-world applications of ML are widespread, spanning Pattern Recognition, Data Mining, Gaming, Bio-science, Telecommunications, Control and Robotics.
Designing an intelligent machine involves a number of design choices, including the type of training experience, the target performance function to be learned, a representation of this target function, and an algorithm for learning the target function from training data. Depending on the nature of the training resources, ML is usually categorized into Supervised Learning (SL), Unsupervised Learning (UL) and Reinforcement Learning (RL). It is interesting to note that human beings adopt, more or less, these three learning paradigms in their own learning.
This book reports the latest developments and futuristic trends in ML. New theory and novel applications of ML contributed by many excellent researchers are organized into 23 chapters.
SL is an ML technique for learning a function from training data consisting of pairs of input objects and desired outputs. The task of SL is to predict the value of the function for any valid input object after having seen a number of training examples (i.e. pairs of inputs and desired outputs). Towards this end, the essence of SL is to generalize from the presented data to unseen situations in a "reasonable" way. The key characteristics of SL are the existence of a "teacher" and the availability of training input-output data. The primary objective of SL is to minimize the error between the output predicted by the system and the actual output. New developments in SL paradigms are presented in Chapters 1-3.
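As a minimal illustration of this SL setting (not taken from the book; the data and model are assumed purely for demonstration), the following Python sketch fits a linear function to input-output pairs by least squares and then predicts the output for an unseen input:

# Minimal supervised-learning sketch: fit a linear model to labelled
# (input, output) pairs, then generalize to an unseen input.
# Data and parameters are hypothetical, chosen only for illustration.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=50)                  # input objects
y = 2.0 * x + 1.0 + rng.normal(0, 0.5, size=50)  # desired (noisy) outputs

# Least-squares fit minimizes the error between predicted and actual outputs.
X = np.column_stack([x, np.ones_like(x)])
w, *_ = np.linalg.lstsq(X, y, rcond=None)

x_new = 4.2                                      # an input not seen in training
print("predicted output:", w[0] * x_new + w[1])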
UL is an ML methodology whereby a model is fitted to observations, typically by treating the input objects as a set of random variables and building a joint density model. It is distinguished from SL by the fact that no a priori output is required. Novel clustering and classification approaches are reported in Chapters 4 and 5.
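As a toy illustration of the UL setting (again not from the book; the data are synthetic and assumed), the sketch below clusters unlabelled points with k-means, where no desired outputs are given to the learner:

# Toy unsupervised-learning sketch: k-means groups unlabelled points
# by similarity; no desired outputs are provided.
import numpy as np

rng = np.random.default_rng(1)
points = np.vstack([rng.normal(0.0, 0.5, (30, 2)),   # synthetic, unlabelled data
                    rng.normal(3.0, 0.5, (30, 2))])

k = 2
centers = points[rng.choice(len(points), k, replace=False)]
for _ in range(20):
    # Assign each point to its nearest center, then recompute the centers.
    dists = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    labels = dists.argmin(axis=1)
    centers = np.array([points[labels == j].mean(axis=0) for j in range(k)])

print("cluster centers:", centers)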
Distinguished from SL, Reinforcement Learning (RL) is a learning process without an explicit teacher providing correct instructions. The RL methodology also differs from UL approaches in that it learns from evaluative feedback. RL has been accepted as a fundamental paradigm for ML, with particular emphasis on the computational aspects of learning.
The RL paradigm is a good ML framework for emulating the human way of learning from interactions to achieve a goal. The learner, termed an agent, interacts with the environment: the agent selects actions, the environment responds to these actions and presents new states to the agent, and the interaction continues in this fashion. This book includes novel algorithms and the latest developments in RL. More specifically, methodologies for enhancing Q-learning are reported in Chapters 6-11.
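To make the agent-environment loop concrete, the following compact tabular Q-learning sketch uses an assumed toy setting (a five-state chain world) rather than a method from any chapter: the agent improves its action values using only the evaluative reward feedback it receives.

# Tabular Q-learning on a hypothetical five-state chain world.
# Actions: 0 = move left, 1 = move right; reaching the rightmost state
# yields reward 1 and ends the episode. Parameter values are assumed.
import numpy as np

n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.9, 0.1
rng = np.random.default_rng(2)

for _ in range(500):
    s = 0
    while s != n_states - 1:
        # Epsilon-greedy action selection.
        a = rng.integers(n_actions) if rng.random() < epsilon else int(Q[s].argmax())
        s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s_next == n_states - 1 else 0.0
        # Q-learning update: move Q(s, a) toward the received reward plus
        # the discounted value of the best action in the next state.
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

print("greedy policy (0=left, 1=right):", Q.argmax(axis=1))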
Evolutionary approaches in ML are presented in Chapters 12-14, and real-world applications of ML are reported in the remaining chapters.
A Drawing-Aid System using Supervised Learning
Supervised Learning with Hybrid Global Optimisation Methods. Case Study: Automated Recognition and Classification of Cork Tiles
Supervised Rule Learning and Reinforcement Learning in A Multi-Agent System for the Fish Banks Game
Clustering, Classification and Explanatory Rules from Harmonic Monitoring Data
Discriminative Cluster Analysis
Influence Value Q-Learning: A Reinforcement Learning Algorithm for Multi Agent Systems
Reinforcement Learning in Generating Fuzzy Systems
Incremental-Topological-Preserving-Map-Based Fuzzy Q-Learning (ITPM-FQL)
A Q-learning with Selective Generalization Capability and its Application to Layout Planning of Chemical Plants
A FAST-Based Q-Learning Algorithm
Constrained Reinforcement Learning from Intrinsic and Extrinsic Rewards
TempUnit: A Bio-Inspired Spiking Neural Network
Proposal and Evaluation of the Improved Penalty Avoiding Rational Policy Making Algorithm
A Generic Framework for Soft Subspace Pattern Recognition
Data Mining Applications in Higher Education and Academic Intelligence Management
Solving POMDPs with Automatic Discovery of Subgoals
Anomaly-based Fault Detection with Interaction Analysis Using State Interface
Machine Learning Approaches for Music Information Retrieval
LS-Draughts: Using Databases to Treat Endgame Loops in a Hybrid Evolutionary Learning System
Blur Identification for Content Aware Processing in Images
An Adaptive Markov Game Model for Cyber Threat Intent Inference
Life-long Learning Through Task Rehearsal and Selective Knowledge Transfer
Machine Learning for Video Repeat Mining