Monday, May 25, 2009

QuickNet - 1

Platform
--------

This testbed has only been tested on an x86 Red Hat Linux platform.

NOTE: QuickNet includes versions of the qnstrn MLP training tool that
are optimized for specific architectures such as the Pentium 4 or the
AMD Opteron.  Because these versions use different matrix routines,
they can produce different patterns of rounding error.  So if you want
maximum scientific comparability between results, don't mix different
versions of the qnstrn tool within your experiments.  (This is not
expected to be a significant issue for the forward-pass tool, qnsfwd,
because only training involves a feedback process that can magnify
the effect of differing rounding errors.)  For maximum comparability
with the results quoted in this README file, the tools/train script
specifically invokes the Pentium 4 qnstrn binary from QuickNet release
3.11, which is named qnstrn-v3_11-P4SSE2 at ICSI.  (This binary cannot
be used on older processors that predate the Pentium 4, but it will
run on an AMD Opteron.)  To change this, edit the variable
$qnstrnBinary in tools/train.
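
For illustration, the selection might look like the following inside
tools/train.  This is only a sketch: it assumes tools/train is a
Bourne-shell script (adjust the syntax if it is csh/tcsh), and the
commented-out alternative binary name is hypothetical.

  # Sketch of the binary selection in tools/train (assumed sh syntax).
  # qnstrn-v3_11-P4SSE2 is the Pentium 4 / SSE2 binary used for the
  # results quoted in this README; the commented-out line is a
  # hypothetical generic build for processors without SSE2.
  qnstrnBinary=qnstrn-v3_11-P4SSE2
  #qnstrnBinary=qnstrn

The rest of tools/train can then invoke $qnstrnBinary wherever it
currently runs the training tool.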

tools/train invokes single-threaded MLP training.  On a multi-core
machine, training can be sped up by making it multi-threaded using the
mlp3_threads option of qnstrn.  The most convenient way to measure
training speed is the MCUPS figure reported in the qnstrn log file.
If you use more than one thread, you will probably get more MCUPS by
increasing the value of the mlp3_bunch_size option.  However,
increasing the bunch size too far can reduce the quality of the
trained MLP; the maximum bunch size before this becomes a problem
depends on the corpus.
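
As a sketch of what a multi-threaded run might look like, the two
options below can be appended to the existing qnstrn command line in
tools/train.  The option names mlp3_threads and mlp3_bunch_size are
the ones discussed above; the values 4 and 256 are purely illustrative
and should be tuned for your machine and corpus.

  # Sketch only: keep the existing qnstrn options from tools/train and
  # append the threading options.  The bracketed line stands in for
  # whatever options the script already passes; the values shown here
  # are illustrative, not recommendations.
  $qnstrnBinary \
      [existing qnstrn options from tools/train] \
      mlp3_threads=4 \
      mlp3_bunch_size=256

After such a run, compare the MCUPS figure in the qnstrn log file
against a single-threaded run to see whether the extra threads and the
larger bunch size are actually paying off.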

Feature calculation
-------------------

The neural net software uses the pfile feature file format, which
stores the features for many utterances together in a single file.
The SPRACHcore feacalc tool can calculate a pfile of PLP features
directly.  Pfiles can be created from other feature file formats using
the SPRACHcore feacat tool.
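
As a rough sketch of both routes into the pfile format: the flag names
below are assumptions from memory and should be verified against
feacalc -help and feacat -help in your SPRACHcore installation, and
all file names are placeholders.

  # Compute PLP features directly into a pfile from a list of
  # waveforms (flag names are assumptions; check feacalc -help):
  feacalc -plp 12 -deltaorder 2 -opformat pfile -o train.pfile -lists train.wavlist

  # Convert existing features from another format (here assumed to be
  # HTK) into a pfile (again, check feacat -help for the exact flags):
  feacat -ipformat htk -opformat pfile -o train.pfile existing_features.htk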



