Connectionist networks are composed of relatively simple, neuron-like processing elements that store all their long-term knowledge in the strengths of the connections between processors.
The network generally learns to use distributed representations in which each input vector is represented by activity in many different hidden units, and each hidden unit is involved in representing many different input vectors.
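To make the idea concrete, here is a minimal sketch (with hypothetical layer sizes and random weights, not anyone's published model) of the two points above: all long-term knowledge sits in the weight matrices, and the hidden layer forms a distributed representation in which each input activates many units and each unit helps represent many inputs.

```python
import numpy as np

# Minimal sketch of a connectionist network (hypothetical sizes and data):
# all long-term "knowledge" lives in the weight matrices W1 and W2,
# not in any single processing element.

rng = np.random.default_rng(0)

n_input, n_hidden, n_output = 4, 8, 3
W1 = rng.normal(0, 0.5, (n_input, n_hidden))   # input -> hidden connection strengths
W2 = rng.normal(0, 0.5, (n_hidden, n_output))  # hidden -> output connection strengths

def forward(x):
    """One forward pass; the hidden vector is a distributed representation."""
    hidden = np.tanh(x @ W1)   # many hidden units are active for each input
    output = hidden @ W2
    return hidden, output

# Two different input vectors...
x_a = np.array([1.0, 0.0, 0.0, 1.0])
x_b = np.array([0.0, 1.0, 1.0, 0.0])

h_a, _ = forward(x_a)
h_b, _ = forward(x_b)

# ...each yield activity spread across many hidden units, and each hidden
# unit participates in representing both inputs.
print("hidden for x_a:", np.round(h_a, 2))
print("hidden for x_b:", np.round(h_b, 2))
```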
The ability to represent complex hierarchical structures efficiently, and to apply structure-sensitive operations to these representations, seems to be essential.
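One way such a structure can be packed into a fixed-width distributed representation is to learn a single combination function and apply it recursively, in the spirit of Pollack's recursive auto-associative memories. The sketch below is only illustrative: the dimensions, the random (untrained) weights, and the leaf vectors are all assumptions, and a real system would train the combination weights so the structure can also be decoded.

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 6
# One shared weight matrix combines two child vectors into a parent vector
# of the same width, so the scheme applies at every level of the tree.
W = rng.normal(0, 0.5, (2 * dim, dim))

def encode(tree):
    """Recursively encode a binary tree into a single fixed-width vector."""
    if isinstance(tree, np.ndarray):      # leaf: already a vector
        return tree
    left, right = tree
    children = np.concatenate([encode(left), encode(right)])
    return np.tanh(children @ W)          # parent has the same width as a leaf

# Leaves are random vectors standing in for symbols; nesting gives the hierarchy.
a, b, c = (rng.normal(0, 1, dim) for _ in range(3))
vec = encode(((a, b), c))   # one vector encoding the structure ((a b) c)
print(np.round(vec, 2))
```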
The outcomes of these two battles suggest that, as the learning procedures become more sophisticated, the advantage of automatic parameter tuning may more than outweigh the representational inadequacies of the restricted systems that admit such optimization techniques.
Clearly, the ultimate goal is efficient learning procedures for representationally powerful systems. The disagreement is about which of these two objectives should be sacrificed in the short term.