Hinton G.E. (ed.) Connectionist Symbol Processing. MIT Press, 1991. — 264 p.
Connectionist networks are composed of relatively simple, neuron-like processing elements that store all their long-term knowledge in the strengths of the connections between processors. In the last decade there has been considerable progress in developing learning procedures for these networks that allow them to automatically construct their own internal representations. The learning procedures are typically applied in networks that map input vectors to output vectors via a few layers of "hidden" units. The network learns to dedicate particular hidden units to particular pieces or aspects of the input vector that are relevant in determining the output. The network generally learns to use distributed representations in which each input vector is represented by activity in many different hidden units, and each hidden unit is involved in representing many different input vectors.
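The architecture sketched in this paragraph is easy to make concrete. Below is a minimal illustration (not from the book; the XOR task, layer size, learning rate, and all variable names are assumptions chosen for the sketch) of a one-hidden-layer network trained by backpropagation, in which each input vector is encoded as activity spread across many hidden units:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task (illustrative): XOR, the classic problem whose solution
# requires a layer of hidden units between input and output.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

n_hidden = 4
W1 = rng.normal(0.0, 1.0, (2, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0.0, 1.0, (n_hidden, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(10000):
    # Forward pass: every input vector is represented as a pattern
    # of activity distributed over all the hidden units at once.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: the output error is propagated back through the
    # hidden layer, and every connection strength is adjusted.
    d_out = (out - y) * out * (1.0 - out)
    d_h = (d_out @ W2.T) * h * (1.0 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(out.round(2))  # should be close to the XOR targets 0, 1, 1, 0
```

After training, no single hidden unit codes for XOR by itself; the mapping is carried by the joint pattern of hidden activities, which is the distributed representation described above.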
Current connectionist learning procedures such as backpropagation are comparable in power to the learning procedure for hidden Markov models (HMMs). Indeed, one kind of backpropagation network is equivalent to one kind of hidden Markov recognizer. As further theoretical progress is made, we can expect the optimization techniques used for connectionist learning to become much more efficient, and, if these techniques can be applied in networks with greater representational abilities, we may see artificial neural networks that can do much more than just classify patterns. But for now, the problem is to devise effective ways of representing complex structures in connectionist networks without sacrificing the ability to learn the representations. My own view is that connectionists are still a very long way from solving this problem, but the papers in this issue suggest some interesting directions to pursue.
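The connection to HMMs can be made tangible: the HMM forward recursion is itself a recurrent computation that reuses the same parameters at every time step, which is exactly the shape of a simple recurrent network. A minimal sketch follows (not from the book; the two-state HMM and its probabilities are toy values assumed purely for illustration):

```python
import numpy as np

# A toy HMM: 2 hidden states, 3 observation symbols (all values
# assumed for illustration; each row is a probability distribution).
A = np.array([[0.7, 0.3],       # state-transition probabilities
              [0.4, 0.6]])
B = np.array([[0.5, 0.4, 0.1],  # per-state observation probabilities
              [0.1, 0.3, 0.6]])
pi = np.array([0.6, 0.4])       # initial state distribution

def forward(obs):
    """HMM forward algorithm written as a recurrent update: alpha
    plays the role of a hidden activity vector that is linearly
    transformed by A and gated by the evidence B[:, o], with the
    same 'weights' A and B reused at every time step."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return alpha.sum()  # likelihood of the observation sequence

print(forward([0, 1, 2]))  # P(observation sequence | model)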
Contents:
Preface to the Special Issue on Connectionist Symbol Processing
BoltzCONS: Dynamic symbol structures in a connectionist network
Mapping part-whole hierarchies into connectionist networks
Recursive distributed representations
Mundane reasoning by settling on a plausible model
Tensor product variable binding and the representation of symbolic structures in connectionist systems
Learning and applying contextual constraints in sentence comprehension