1. The problem of training recurrent networks is analyzed from a numerical point of view, in particular how numerical ill-conditioning of the Hessian matrix may arise.
2. Training is significantly improved by application of the damped Gauss-Newton method, which involves the Hessian. This method is found to outperform gradient descent in both the quality of the solution obtained and the computation time required (a sketch of the damped update is given after this list).
3. A theoretical definition of the generalization error for recurrent networks is provided. This definition justifies a commonly adopted approach for estimating generalization ability.
4. The viability of pruning recurrent networks by the Optimal Brain Damage (OBD) and Optimal Brain Surgeon (OBS) pruning schemes is investigated. OBD is found to be very effective, whereas OBS is severely affected by numerical problems, which lead to the pruning of important weights (an OBD saliency sketch also follows this list).
5. A novel operational tool for examination of the internal memory of recurrent networks is proposed. The tool allows for assessment of the length of the effective memory of previous inputs built up in the recurrent network during application.
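To make point 2 concrete, the following sketch shows one damped Gauss-Newton (Levenberg-Marquardt-style) weight update for a network with error vector e(w) and Jacobian J. The function names and the network interface are illustrative placeholders, not the implementation used in the thesis.

    import numpy as np

    def damped_gauss_newton_step(w, residuals_fn, jacobian_fn, damping):
        # One damped Gauss-Newton update: dw = -(J'J + damping*I)^(-1) J'e,
        # where e = residuals_fn(w) is the network error vector and
        # J = jacobian_fn(w) is its Jacobian with respect to the weights.
        # residuals_fn and jacobian_fn are hypothetical stand-ins for the network.
        e = residuals_fn(w)
        J = jacobian_fn(w)
        H = J.T @ J + damping * np.eye(w.size)   # damped Hessian approximation
        g = J.T @ e                              # gradient of 0.5*||e||^2
        return w - np.linalg.solve(H, g)         # solve rather than invert, for stability

In practice the damping factor is typically adapted during training (increased when a step fails to reduce the error, decreased otherwise), which is what makes the method robust to the ill-conditioning mentioned in point 1.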
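For point 4, OBD ranks each weight w_i by the saliency s_i = 0.5 * H_ii * w_i^2, computed from the diagonal of the Hessian, and removes the least salient weights. A minimal sketch under that reading, with the diagonal-Hessian estimate taken as given:

    import numpy as np

    def obd_prune(weights, hessian_diag, n_prune):
        # Saliency of each weight: s_i = 0.5 * H_ii * w_i^2 (OBD approximation).
        saliency = 0.5 * hessian_diag * weights**2
        prune_idx = np.argsort(saliency)[:n_prune]   # least salient weights first
        pruned = weights.copy()
        pruned[prune_idx] = 0.0                      # remove them by clamping to zero
        return pruned, prune_idx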
Time series modeling is also treated from a more general point of view, namely modeling of the joint probability distribution function of the observed series. Two recurrent models rooted in statistical physics are considered in this respect, the ``Boltzmann chain'' and the ``Boltzmann zipper'', and a comprehensive tutorial on these models is provided. Boltzmann chains and zippers are found to also benefit from second-order training and architecture optimization by pruning, as illustrated on artificial problems and a small speech recognition problem.
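To make the ``Boltzmann chain'' concrete, the sketch below scores an observation sequence under one common parameterization of such chains (pairwise weights A between consecutive hidden states and B between hidden states and observed symbols); the sum over hidden-state paths is computed with a transfer-matrix (forward) recursion. The variable names and the exact parameterization are illustrative assumptions rather than the thesis's notation.

    import numpy as np

    def boltzmann_chain_score(A, B, obs):
        # Unnormalized score of an observation sequence under a Boltzmann chain:
        # the sum over all hidden-state paths of
        # exp(sum_t A[s_t, s_{t+1}] + sum_t B[s_t, x_t]),
        # computed with a forward recursion analogous to the HMM forward algorithm.
        # A[i, j]: weight between consecutive hidden states;
        # B[i, k]: weight between a hidden state and an observed symbol.
        alpha = np.exp(B[:, obs[0]])                     # initial hidden-state weights
        for x in obs[1:]:
            alpha = (alpha @ np.exp(A)) * np.exp(B[:, x])
        return alpha.sum()                               # sum over the final hidden state

The normalizing constant (partition function) is obtained by the same kind of recursion with an additional sum over the observed symbols, turning the score into a proper probability.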
For further information, please contact:
Finn Kuno Christensen, IMM, Bldg. 321, DTU
Phone: (+45) 4588 1433, Fax: (+45) 4588 2673
E-mail: fkc@imm.dtu.dk