Thursday, May 27, 2010

A thesis on deep learning

The author's site: http://web.mit.edu/~rsalakhu/www/index.html

Chapter 2 is about RBMs (Restricted Boltzmann Machines) and DBNs (Deep Belief Networks).

Restricted Boltzmann Machine:


A two-layer architecture with binary visible units v and binary hidden units h.
The dimension of v is D and the dimension of h is F.
The energy of state {v, h} is:
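E(v, h; \theta) = - \sum_{i=1}^{D} \sum_{j=1}^{F} W_{ij} v_i h_j - \sum_{i=1}^{D} b_i v_i - \sum_{j=1}^{F} a_j h_j, with \theta = {W, b, a}.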


W is the matrix of symmetric weights, b is the vector of visible biases, and a is the vector of hidden biases.

The joint distribution over the visible and hidden units is defined by:
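P(v, h; \theta) = \frac{1}{Z(\theta)} \exp(-E(v, h; \theta)), \quad Z(\theta) = \sum_{v} \sum_{h} \exp(-E(v, h; \theta))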


Z(\theta) is known as the partition function for normalization.

The probability that the model assigns to the visible vector v is:
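P(v; \theta) = \frac{1}{Z(\theta)} \sum_{h} \exp(-E(v, h; \theta))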


and the hidden units can be explicitly marginalized out:
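P(v; \theta) = \frac{1}{Z(\theta)} \exp\Big(\sum_{i=1}^{D} b_i v_i\Big) \prod_{j=1}^{F} \Big(1 + \exp\Big(a_j + \sum_{i=1}^{D} W_{ij} v_i\Big)\Big)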


The conditional probabilities:
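p(h_j = 1 | v) = \sigma\Big(a_j + \sum_{i=1}^{D} W_{ij} v_i\Big), \quad p(v_i = 1 | h) = \sigma\Big(b_i + \sum_{j=1}^{F} W_{ij} h_j\Big),

where \sigma(x) = 1 / (1 + \exp(-x)) is the logistic sigmoid.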


From energy-based model theory (see the RBM tutorial at http://deeplearning.net/tutorial/rbm.html):

Free energy is defined as:
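\mathcal{F}(x) = - \log \sum_{h} \exp(-E(x, h))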


then:
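P(x) = \frac{\exp(-\mathcal{F}(x))}{Z}, \quad Z = \sum_{x} \exp(-\mathcal{F}(x))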


P(x) is actually P(v; \theta) above.

For RBM, the free energy is:
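\mathcal{F}(v) = - \sum_{i=1}^{D} b_i v_i - \sum_{j=1}^{F} \log \sum_{h_j \in \{0, 1\}} \exp\Big(h_j \Big(a_j + \sum_{i=1}^{D} W_{ij} v_i\Big)\Big)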


For RBMs with binary visible units and binary hidden units, we obtain:
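\mathcal{F}(v) = - \sum_{i=1}^{D} b_i v_i - \sum_{j=1}^{F} \log\Big(1 + \exp\Big(a_j + \sum_{i=1}^{D} W_{ij} v_i\Big)\Big)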


Russ_thesis.pdf (6329 KB)
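
As a quick sanity check of the formulas above, here is a minimal NumPy sketch of a binary-binary RBM; the class name, the 0.01 weight-initialization scale, and the 784/500 sizes in the example are illustrative choices, not values taken from the thesis:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class BinaryRBM:
    # W is D x F (symmetric weights), b the visible biases (D,), a the hidden biases (F,)
    def __init__(self, D, F, seed=0):
        rng = np.random.default_rng(seed)
        self.W = 0.01 * rng.standard_normal((D, F))
        self.b = np.zeros(D)
        self.a = np.zeros(F)

    def energy(self, v, h):
        # E(v, h) = -v^T W h - b^T v - a^T h
        return -v @ self.W @ h - self.b @ v - self.a @ h

    def p_h_given_v(self, v):
        # p(h_j = 1 | v) = sigmoid(a_j + sum_i W_ij v_i)
        return sigmoid(self.a + v @ self.W)

    def p_v_given_h(self, h):
        # p(v_i = 1 | h) = sigmoid(b_i + sum_j W_ij h_j)
        return sigmoid(self.b + self.W @ h)

    def free_energy(self, v):
        # F(v) = -b^T v - sum_j log(1 + exp(a_j + sum_i W_ij v_i))
        return -self.b @ v - np.sum(np.log1p(np.exp(self.a + v @ self.W)))

# One reconstruction pass on a random binary vector
rbm = BinaryRBM(D=784, F=500)
v = (np.random.rand(784) > 0.5).astype(float)
h = (rbm.p_h_given_v(v) > np.random.rand(500)).astype(float)  # sample h ~ p(h | v)
v_recon = rbm.p_v_given_h(h)                                  # reconstruction probabilities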


Deep Learning

A tutorial on energy-based models.

Course notes on machine learning, including deep learning. Sadly, they are not in English...

A book chapter about deep learning.

lecun-06.pdf (2412 KB)

ift6266H10.pdf (358 KB)

ftml_book.pdf (1103 KB)


Monday, May 24, 2010

Zeros


Zero digits and their reconstructions using an RBM. (In each figure, the left image is the original and the right image is the reconstruction.)

It seems that red indicates higher confidence, while light blue and yellow indicate some uncertainty about those pixels.


extract rar files under linux - unrar

Under Linux, to extract files from a rar archive, use unrar:

Install:
sudo apt-get install unrar

Usage:
unrar x file.rar


zypper - SUSE Linux command-line software management tool

Wednesday, May 19, 2010

Loss functions for RBMs

assignment1.tar.gz (126 KB)

Label: DBN, RBM, loss functions

One way is to use the mean squared error criterion, i.e. to minimize the squared error between the original input values and the reconstructed visible values.

Another way is to minimize the negative log likelihood of the reconstruction given the hidden vector. From the hidden vector, we can compute the probability of each visible unit, so the loss function is:

-log P(x|h) = - sum_i ( x_i * log p_i(h) + (1-x_i) * log ( 1 - p_i(h)))
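
A minimal NumPy sketch of the two loss functions, assuming x is the original binary input vector and p holds the reconstruction probabilities p_i(h); the function names are illustrative:

import numpy as np

def mse_loss(x, p):
    # mean squared error between the input x and the reconstruction p
    return np.mean((x - p) ** 2)

def cross_entropy_loss(x, p, eps=1e-12):
    # -log P(x|h) = -sum_i ( x_i * log p_i(h) + (1 - x_i) * log(1 - p_i(h)) )
    p = np.clip(p, eps, 1.0 - eps)  # avoid log(0)
    return -np.sum(x * np.log(p) + (1.0 - x) * np.log(1.0 - p))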


Deep Autoencoders

A deep autoencoder is a deep architecture that consists of many stacked RBMs.

An example structure is illustrated in the following figure:


science.pdf (360 KB)
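
For intuition, here is a minimal NumPy sketch of how a stack of trained RBM weights can be unrolled into an encoder/decoder pair; the layer sizes and the random weights are placeholders, not the values used in the paper:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Placeholder layer sizes for the RBM stack: 784 -> 1000 -> 500 -> 250 -> 30
sizes = [784, 1000, 500, 250, 30]
# In practice these would come from layer-wise RBM pretraining
weights = [0.01 * np.random.randn(d, f) for d, f in zip(sizes[:-1], sizes[1:])]
hid_biases = [np.zeros(f) for f in sizes[1:]]
vis_biases = [np.zeros(d) for d in sizes[:-1]]

def encode(v):
    # pass the input up through the stack to get the low-dimensional code
    for W, a in zip(weights, hid_biases):
        v = sigmoid(v @ W + a)
    return v

def decode(code):
    # unroll: go back down through the stack using the transposed weights
    for W, b in zip(reversed(weights), reversed(vis_biases)):
        code = sigmoid(code @ W.T + b)
    return code

x = np.random.rand(784)
x_hat = decode(encode(x))  # reconstruction of x through the 30-dimensional code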


Robust Speech Recognition using Articulatory Information

Wednesday, May 5, 2010

View PDFs in Chrome on Ubuntu

Under the /usr/lib directory, find all the "nppdf.so" files and delete them:

find /usr/lib | grep "nppdf\.so"
sudo find /usr/lib -name "nppdf.so" -delete

Then Chrome will download PDF files when they are clicked, which is at least better than showing a grey tab.


On Being a Person, Doing Things, and Doing Scholarship

Just before graduation, the teacher stood at the lectern: "Let's discuss three questions."

1. "What is the highest mountain in the world?" The class burst out laughing: "Mount Everest!" The teacher pressed on: "And the second highest?" The students looked at one another; no one answered. The teacher wrote on the blackboard: "Coming in second is no different from being unknown."

2. "Someone wants to boil a kettle of water, but after lighting the fire he finds there is not enough firewood. What should he do?" Some said go and fetch more at once; others said borrow some, or buy some. The teacher said: "Why not pour some of the water out of the kettle?" The students were impressed.

3. "In ancient times there was a man who wanted to learn a skill with which to establish himself. After weighing his options again and again, he resolved to learn the art of dragon-slaying. He studied under famous masters and practiced hard day and night, and in the end he mastered it. What became of him?" The students eagerly replied that he would become a hero, a star, admired by the whole world. The teacher shook his head: "That man was bound to spend his life in poverty, because there are no dragons in this world."

With this lesson, the teacher wanted the students to understand how to be a person, how to do things, and how to do scholarship: in being a person, strive to be outstanding and dare to be first, so that others will notice and remember you; in doing things, dare to innovate and be flexible in your methods, and never cling to convention; in doing scholarship, learn in order to apply it, and know how to turn knowledge into real benefit; working behind closed doors leads nowhere.

