Thursday, October 28, 2010
Wednesday, October 27, 2010
The theoretical formalism described in this paper seems well suited to describing many different variations of back-propagation. From a historical point of view, back-propagation had been used in the field of optimal control long before its application to connectionist systems was proposed. The central problem that back-propagation solves is evaluating the influence of a parameter on a function whose computation involves several elementary steps. This paper presents a mathematical framework for studying back-propagation based on the Lagrangian formalism. In this framework, inspired by optimal control theory, back-propagation is formulated as an optimization problem with nonlinear constraints. The Lagrange function is the sum of an output objective function, usually the squared sum of differences between the actual and desired outputs, and a constraint term that describes the network dynamics. This approach suggests many natural extensions to the basic algorithm. Other easily described variations involve additional terms in the error function, additional constraints on the set of solutions, or transformations of the parameter space.
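As a sketch of the construction the abstract describes (the notation here is mine, not the paper's): write the layer states as x_k, the weights as w_k, the forward dynamics as the constraints x_k = f_k(x_{k-1}, w_k) for k = 1..N, and the output cost as C(x_N). Attaching a multiplier vector b_k to each constraint gives the Lagrange function:

```latex
L(x, w, b) = C(x_N) + \sum_{k=1}^{N} b_k^\top \bigl( x_k - f_k(x_{k-1}, w_k) \bigr)
```

Setting the derivatives of L with respect to the states to zero recovers the backward pass as a stationarity condition:

```latex
\frac{\partial L}{\partial x_N} = 0 \;\Rightarrow\; b_N = -\frac{\partial C}{\partial x_N},
\qquad
\frac{\partial L}{\partial x_k} = 0 \;\Rightarrow\;
b_k = \left( \frac{\partial f_{k+1}}{\partial x_k} \right)^{\!\top} b_{k+1},
\quad k < N,
```

and the gradient with respect to the weights falls out of the remaining derivative, \(\partial L / \partial w_k = -\,(\partial f_k / \partial w_k)^\top b_k\). Extensions such as extra error terms or solution constraints then amount to adding terms to L.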
[DBN] A tutorial on stochastic approximation algorithms for training restricted Boltzmann machines and deep belief nets
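The standard stochastic approximation algorithm for RBM training is contrastive divergence; as a minimal hedged sketch (my code, not the tutorial's), one CD-1 update for a small Bernoulli RBM using only NumPy might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, b, c, lr=0.1):
    """One CD-1 update for a Bernoulli RBM.
    v0: batch of binary visible vectors (n, n_vis); W: weights (n_vis, n_hid);
    b: visible bias (n_vis,); c: hidden bias (n_hid,)."""
    # Positive phase: hidden probabilities and a sample, given the data
    ph0 = sigmoid(v0 @ W + c)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    # Negative phase: one Gibbs step back to the visibles, then the hiddens
    pv1 = sigmoid(h0 @ W.T + b)
    v1 = (rng.random(pv1.shape) < pv1).astype(float)
    ph1 = sigmoid(v1 @ W + c)
    n = v0.shape[0]
    # Stochastic-approximation step on the (approximate) log-likelihood gradient
    W += lr * (v0.T @ ph0 - v1.T @ ph1) / n
    b += lr * (v0 - v1).mean(axis=0)
    c += lr * (ph0 - ph1).mean(axis=0)
    return W, b, c

# Toy data: two repeated 4-bit patterns; train a 4-visible, 3-hidden RBM briefly
data = np.array([[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 1, 1], [0, 0, 1, 1]], dtype=float)
W = 0.01 * rng.standard_normal((4, 3))
b = np.zeros(4)
c = np.zeros(3)
for _ in range(200):
    W, b, c = cd1_step(data, W, b, c)
```

The sampling in the negative phase is what makes this a stochastic approximation: each update uses a noisy one-step Monte Carlo estimate of the model's expectation rather than the intractable exact gradient.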
Tuesday, October 26, 2010
Feedforward neural networks trained by error backpropagation are examples of nonparametric regression estimators.
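To illustrate that claim with a sketch of my own (not the paper's code): a one-hidden-layer network trained by plain error backpropagation fits a 1-D target from samples alone, assuming no parametric form for the target function.

```python
import numpy as np

rng = np.random.default_rng(1)

# Samples from an "unknown" target (here sin, but the estimator
# assumes nothing about its functional form)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X)

# One hidden layer of tanh units
H = 16
W1 = 0.5 * rng.standard_normal((1, H))
b1 = np.zeros(H)
W2 = 0.5 * rng.standard_normal((H, 1))
b2 = np.zeros(1)

lr = 0.1
for _ in range(5000):
    # Forward pass
    a = np.tanh(X @ W1 + b1)
    out = a @ W2 + b2
    err = out - y                        # gradient of squared error w.r.t. out
    # Backward pass: chain rule through both layers
    gW2 = a.T @ err / len(X)
    gb2 = err.mean(axis=0)
    da = (err @ W2.T) * (1 - a ** 2)     # tanh'(z) = 1 - tanh(z)^2
    gW1 = X.T @ da / len(X)
    gb1 = da.mean(axis=0)
    # Gradient-descent updates
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

mse = float(((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2).mean())
```

After training, the mean squared error is well below the variance of the targets, i.e., the network has estimated the regression function from data without any parametric model of it.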
Sunday, October 24, 2010
Connectionist networks are composed of relatively simple, neuron-like processing elements that store all their long-term knowledge in the strengths of the connections between processors.
Monday, October 18, 2010
Wednesday, October 13, 2010
Friday, October 8, 2010
[Deep Learning] An Empirical Evaluation of Deep Architectures on Problems with Many Factors of Variation