Actually, fine-tuning an RBM-based DBN is not limited to using the labels to retrain the model with error back-propagation.
For fine-tuning, we re-organize the model into an encoder and a decoder and join them into a single network.
For unsupervised training, we use the data as both the input and the output to fine-tune the model, as sketched below.
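A minimal sketch of this idea in PyTorch: a pretrained two-layer DBN is unrolled into an encoder and a mirrored decoder, and the joined network is fine-tuned to reconstruct its own input. The layer sizes, random initialization, and dummy mini-batch are illustrative assumptions, not details from the paper (in the real procedure the decoder would be initialized from the transposed RBM weights).

```python
# Sketch: unroll a pretrained RBM stack into encoder + decoder and fine-tune
# it as an autoencoder. Sizes (784 -> 500 -> 100) are assumed for illustration.
import torch
import torch.nn as nn

sizes = [784, 500, 100]  # visible -> hidden1 -> hidden2 (assumed)

# Encoder: the pretrained RBM layers stacked bottom-up.
encoder = nn.Sequential(
    nn.Linear(sizes[0], sizes[1]), nn.Sigmoid(),
    nn.Linear(sizes[1], sizes[2]), nn.Sigmoid(),
)

# Decoder: the mirror image of the encoder (random weights here for brevity;
# in practice it would start from the transposed RBM weights).
decoder = nn.Sequential(
    nn.Linear(sizes[2], sizes[1]), nn.Sigmoid(),
    nn.Linear(sizes[1], sizes[0]), nn.Sigmoid(),
)

autoencoder = nn.Sequential(encoder, decoder)

# Unsupervised fine-tuning: the data is both the input and the target.
opt = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
x = torch.rand(64, sizes[0])          # dummy mini-batch of observations
for _ in range(10):                   # a few fine-tuning steps
    opt.zero_grad()
    recon = autoencoder(x)
    loss_fn(recon, x).backward()      # reconstruct x from x
    opt.step()
```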
Similarly, for supervised training, perhaps we should also use both the labels and the observations as input and output to retrain the model; a sketch of this variant follows.
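One possible reading of this supervised variant, again as a hedged sketch rather than the paper's exact procedure: concatenate the one-hot label to the observation and fine-tune the unrolled network to reconstruct the joint vector. The dimensions, architecture, and loss below are all assumptions for illustration.

```python
# Sketch of the supervised variant: both the observation and its one-hot
# label form the input and the target. Sizes are illustrative assumptions.
import torch
import torch.nn as nn

n_features, n_classes = 784, 10            # assumed sizes
joint_dim = n_features + n_classes

model = nn.Sequential(                     # unrolled encoder + decoder over [x; y]
    nn.Linear(joint_dim, 500), nn.Sigmoid(),
    nn.Linear(500, 100), nn.Sigmoid(),
    nn.Linear(100, 500), nn.Sigmoid(),
    nn.Linear(500, joint_dim), nn.Sigmoid(),
)

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.rand(64, n_features)                         # dummy observations
y = nn.functional.one_hot(torch.randint(0, n_classes, (64,)),
                          n_classes).float()           # dummy one-hot labels
xy = torch.cat([x, y], dim=1)                          # joint input = target

for _ in range(10):
    opt.zero_grad()
    loss_fn(model(xy), xy).backward()   # reconstruct both data and label
    opt.step()
```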
Details can be found in the attached paper.