Thursday, May 27, 2010

A thesis on Deep learning

The author's site: http://web.mit.edu/~rsalakhu/www/index.html

Chapter 2 is about RBMs (Restricted Boltzmann Machines) and DBNs (Deep Belief Networks).

Restricted Boltzmann Machine:


An RBM has a two-layer architecture with binary visible units v and binary hidden units h.
The dimension of v is D and the dimension of h is F.
The energy of the state {v, h} is:

E(v, h; \theta) = - \sum_{i=1}^{D} \sum_{j=1}^{F} W_{ij} v_i h_j - \sum_{i=1}^{D} b_i v_i - \sum_{j=1}^{F} a_j h_j

Here \theta = \{W, b, a\}: W is the matrix of symmetric interaction weights between visible and hidden units, b is the visible bias, and a is the hidden bias.
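
As a quick sanity check of the energy function, here is a minimal numpy sketch; the sizes (D = 4, F = 3), the random parameters, and the energy() helper are illustrative assumptions of mine, not anything taken from the thesis.

import numpy as np

# Toy RBM parameters, drawn at random purely for illustration.
rng = np.random.RandomState(0)
D, F = 4, 3                     # 4 visible units, 3 hidden units (assumed sizes)
W = 0.1 * rng.randn(D, F)       # symmetric interaction weights W_ij
b = 0.1 * rng.randn(D)          # visible biases b_i
a = 0.1 * rng.randn(F)          # hidden biases a_j

def energy(v, h):
    # E(v, h; theta) = -v'W h - b'v - a'h
    return -(v @ W @ h) - b @ v - a @ h

v = rng.randint(0, 2, size=D)   # one binary visible configuration
h = rng.randint(0, 2, size=F)   # one binary hidden configuration
print(energy(v, h))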

The joint distribution over the visible and hidden units is defined by:

P(v, h; \theta) = \frac{1}{Z(\theta)} \exp(-E(v, h; \theta))

Z(\theta) is known as the partition function, which normalizes the distribution:

Z(\theta) = \sum_{v} \sum_{h} \exp(-E(v, h; \theta))
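
For a model this small, Z(\theta) can be computed exactly by enumerating every joint configuration, which makes the joint probability concrete. This sketch repeats the toy setup from above (assumed sizes and random parameters); joint_prob() is a hypothetical helper, not the thesis's code.

import numpy as np
from itertools import product

rng = np.random.RandomState(0)
D, F = 4, 3
W, b, a = 0.1 * rng.randn(D, F), 0.1 * rng.randn(D), 0.1 * rng.randn(F)

def energy(v, h):
    return -(v @ W @ h) - b @ v - a @ h

# Z(theta): sum exp(-E(v, h)) over all 2^D * 2^F joint configurations.
Z = sum(np.exp(-energy(np.array(v), np.array(h)))
        for v in product([0, 1], repeat=D)
        for h in product([0, 1], repeat=F))

def joint_prob(v, h):
    # P(v, h; theta) = exp(-E(v, h; theta)) / Z(theta)
    return np.exp(-energy(v, h)) / Z

print(joint_prob(rng.randint(0, 2, size=D), rng.randint(0, 2, size=F)))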

The probability that the model assigns to a visible vector v is:

P(v; \theta) = \frac{1}{Z(\theta)} \sum_{h} \exp(-E(v, h; \theta))

and the hidden units can be marginalized out explicitly:

P(v; \theta) = \frac{1}{Z(\theta)} \exp\Big(\sum_{i=1}^{D} b_i v_i\Big) \prod_{j=1}^{F} \Big(1 + \exp\Big(a_j + \sum_{i=1}^{D} W_{ij} v_i\Big)\Big)
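
The closed-form marginal is easy to verify numerically against a brute-force sum over all 2^F hidden configurations. Again, this is only a sketch using the same assumed toy parameters as above.

import numpy as np
from itertools import product

rng = np.random.RandomState(0)
D, F = 4, 3
W, b, a = 0.1 * rng.randn(D, F), 0.1 * rng.randn(D), 0.1 * rng.randn(F)

def energy(v, h):
    return -(v @ W @ h) - b @ v - a @ h

v = rng.randint(0, 2, size=D)

# Brute force: sum_h exp(-E(v, h)) over all 2^F hidden configurations.
brute = sum(np.exp(-energy(v, np.array(h))) for h in product([0, 1], repeat=F))

# Closed form: exp(b'v) * prod_j (1 + exp(a_j + sum_i W_ij v_i)).
closed = np.exp(b @ v) * np.prod(1.0 + np.exp(a + v @ W))

print(brute, closed)   # the two (unnormalized) values agree up to floating point
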
Because an RBM has no visible-visible or hidden-hidden connections, the conditional distributions factorize, and the conditional probabilities are:

p(h_j = 1 \mid v) = \sigma\Big(a_j + \sum_{i=1}^{D} W_{ij} v_i\Big)

p(v_i = 1 \mid h) = \sigma\Big(b_i + \sum_{j=1}^{F} W_{ij} h_j\Big)

where \sigma(x) = 1 / (1 + \exp(-x)) is the logistic function.
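
These factorized conditionals are what make alternating (block) Gibbs sampling cheap in an RBM: sample all hidden units given v in one shot, then all visible units given h. Below is a minimal sketch of one such step, again with assumed toy parameters; sample_h_given_v() and sample_v_given_h() are hypothetical helper names, not code from the thesis.

import numpy as np

rng = np.random.RandomState(0)
D, F = 4, 3
W, b, a = 0.1 * rng.randn(D, F), 0.1 * rng.randn(D), 0.1 * rng.randn(F)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample_h_given_v(v):
    # p(h_j = 1 | v) = sigmoid(a_j + sum_i W_ij v_i), independently for each j
    p = sigmoid(a + v @ W)
    return (rng.rand(F) < p).astype(int)

def sample_v_given_h(h):
    # p(v_i = 1 | h) = sigmoid(b_i + sum_j W_ij h_j), independently for each i
    p = sigmoid(b + W @ h)
    return (rng.rand(D) < p).astype(int)

v = rng.randint(0, 2, size=D)
h = sample_h_given_v(v)        # up-pass
v_new = sample_v_given_h(h)    # down-pass: one full step of block Gibbs sampling
print(v, h, v_new)
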
From energy-based model theory (see http://deeplearning.net/tutorial/rbm.html):

The free energy is defined as:

\mathcal{F}(x) = - \log \sum_{h} \exp(-E(x, h))

then:

P(x) = \frac{\exp(-\mathcal{F}(x))}{Z}, \qquad Z = \sum_{x} \exp(-\mathcal{F}(x))

Here P(x) is the same quantity as P(v; \theta) above, with x playing the role of the visible vector v.
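
To see why these coincide, substitute the definition of the free energy into P(x); the log and the exponential cancel:

\exp(-\mathcal{F}(x)) = \exp\Big(\log \sum_{h} \exp(-E(x, h))\Big) = \sum_{h} \exp(-E(x, h)), \qquad \text{so} \qquad \frac{\exp(-\mathcal{F}(x))}{Z} = \frac{1}{Z} \sum_{h} \exp(-E(x, h)) = P(x)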

For an RBM, the free energy is:

\mathcal{F}(v) = - \sum_{i=1}^{D} b_i v_i - \sum_{j=1}^{F} \log \sum_{h_j} \exp\Big(h_j \Big(a_j + \sum_{i=1}^{D} W_{ij} v_i\Big)\Big)

For RBMs with binary visible and hidden units, each inner sum runs over h_j \in \{0, 1\} and equals 1 + \exp(a_j + \sum_{i} W_{ij} v_i), so we obtain:

\mathcal{F}(v) = - \sum_{i=1}^{D} b_i v_i - \sum_{j=1}^{F} \log\Big(1 + \exp\Big(a_j + \sum_{i=1}^{D} W_{ij} v_i\Big)\Big)
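
As a final numerical check (same assumed toy parameters as before): compute the free energy of every visible configuration, form Z = \sum_v \exp(-\mathcal{F}(v)), and compare \exp(-\mathcal{F}(v)) / Z against a brute-force P(v) obtained by summing the joint over all (v, h) pairs.

import numpy as np
from itertools import product

rng = np.random.RandomState(0)
D, F = 4, 3
W, b, a = 0.1 * rng.randn(D, F), 0.1 * rng.randn(D), 0.1 * rng.randn(F)

def energy(v, h):
    return -(v @ W @ h) - b @ v - a @ h

def free_energy(v):
    # F(v) = -b'v - sum_j log(1 + exp(a_j + sum_i W_ij v_i))
    return -(b @ v) - np.sum(np.log1p(np.exp(a + v @ W)))

all_v = [np.array(v) for v in product([0, 1], repeat=D)]
all_h = [np.array(h) for h in product([0, 1], repeat=F)]

Z = sum(np.exp(-free_energy(v)) for v in all_v)                     # Z = sum_v exp(-F(v))
Z_joint = sum(np.exp(-energy(v, h)) for v in all_v for h in all_h)  # same partition function
print(Z, Z_joint)

v = all_v[5]                                                        # an arbitrary visible vector
print(np.exp(-free_energy(v)) / Z,                                  # P(v) via the free energy
      sum(np.exp(-energy(v, h)) for h in all_h) / Z_joint)          # P(v) by brute force
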
Attachment: Russ_thesis.pdf (6329 KB)

