Neural Network Toolbox

Training Algorithms

Training and learning functions are mathematical procedures that automatically adjust a network's weights and biases. A training function dictates a global algorithm affecting all the weights and biases of a given network, whereas a learning function can be applied to individual weights and biases within it.
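
For example, a minimal sketch of where each kind of function is set, assuming a feedforward network with one hidden layer of 10 neurons (trainFcn and the per-weight and per-bias learnFcn properties are fields of the toolbox's network object):

    % A minimal sketch: one global training function for the network,
    % and learning functions assigned to individual weights and biases.
    net = feedforwardnet(10);

    % Training function: one global algorithm for the whole network.
    net.trainFcn = 'trainlm';                     % Levenberg-Marquardt

    % Learning functions: set per weight or bias (used during adaption).
    net.inputWeights{1,1}.learnFcn = 'learngdm';  % gradient descent with momentum
    net.layerWeights{2,1}.learnFcn = 'learngdm';
    net.biases{1}.learnFcn         = 'learngdm';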

Neural Network Toolbox supports a variety of training algorithms, including several gradient descent methods, conjugate gradient methods, the Levenberg-Marquardt (LM) algorithm, and the resilient backpropagation (Rprop) algorithm. The toolbox's modular framework lets you quickly develop custom training algorithms and integrate them with the built-in ones.

While training your neural network, you can use error weights to define the relative importance of desired outputs, prioritized by sample, by time step (for time-series problems), by output element, or by any combination of these. You can access training algorithms from the command line or via apps that show diagrams of the network being trained and provide performance plots and status information to help you monitor the training process.
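
As an illustrative command-line sketch (the data below is made up; feedforwardnet and train are toolbox functions, and error weights are passed as the last argument of train):

    % Illustrative sketch with synthetic data. Error weights (ew) are
    % passed as the sixth argument of train; the empty cell arrays stand
    % in for the delay states, which this static network does not use.
    x = rand(2, 100);                    % 100 two-element input samples
    t = sin(x(1,:)) + x(2,:).^2;         % synthetic targets

    net = feedforwardnet(10, 'trainrp'); % resilient backpropagation (Rprop)
    ew  = linspace(0.1, 1, 100);         % weight later samples more heavily
    [net, tr] = train(net, x, t, {}, {}, ew);

    y = net(x);                          % simulate the trained network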

A suite of learning functions is also provided, including gradient descent, Hebbian, LVQ (learning vector quantization), Widrow-Hoff, and Kohonen rules.
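
For instance, a sketch applying the Kohonen rule on a competitive layer (competlayer, learnk, train, and vec2ind are toolbox functions; the data is made up):

    % Sketch: cluster unlabeled data with a competitive layer whose
    % input weights use the Kohonen learning rule (learnk).
    x = rand(2, 200);                          % unlabeled 2-D samples
    net = competlayer(4);                      % competitive layer, 4 classes
    net.inputWeights{1,1}.learnFcn = 'learnk'; % Kohonen weight learning
    net = train(net, x);                       % unsupervised training
    classes = vec2ind(net(x));                 % cluster index per sample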

Neural network apps automate training a neural network to fit input and target data (left), monitor training progress (right), and calculate statistical results and plots to assess training quality.