Conjugate Gradient Algorithm Based on Meyer Method for Training Artificial Neural Networks
Abstract
The difference between the desired output and the actual output of a multi-layer feed-forward neural network produces an error value that can be expressed as a function of the network weights; training the network therefore becomes an optimization problem of minimizing this error function. This paper suggests a new formula for computing the learning rate, based on Meyer's formula, to modify the conjugate gradient algorithm (MCG) for training the FFNN. The proposed method typically accelerates the Fletcher-Reeves (FRCG) and Polak-Ribière (PRCG) methods when applied to three well-known benchmark problems in artificial neural networks (namely, the XOR problem, function approximation, and the Monk-1 problem), with 100 simulations each.
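As an illustration of the baseline the paper modifies, the following is a minimal sketch of the Fletcher-Reeves conjugate gradient method (not the proposed MCG variant, whose Meyer-based learning-rate formula is not reproduced here). It minimizes a small quadratic "error function" E(w) = ½ wᵀAw − bᵀw as a stand-in for a network's error surface; the matrix, vector, and function names are illustrative assumptions.

```python
# Illustrative quadratic error surface: E(w) = 0.5 * w^T A w - b^T w,
# with gradient grad E(w) = A w - b. A and b are arbitrary assumptions.
A = [[3.0, 1.0], [1.0, 2.0]]
b = [1.0, 1.0]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def fletcher_reeves_cg(w, iters=10):
    """Minimize the quadratic E via Fletcher-Reeves conjugate gradient."""
    g = [gi - bi for gi, bi in zip(matvec(A, w), b)]  # gradient at w
    d = [-gi for gi in g]                             # initial direction: steepest descent
    for _ in range(iters):
        Ad = matvec(A, d)
        # Exact line search plays the role of the learning rate here;
        # the paper's contribution is a different (Meyer-based) choice.
        alpha = -dot(g, d) / dot(d, Ad)
        w = [wi + alpha * di for wi, di in zip(w, d)]
        g_new = [gi + alpha * adi for gi, adi in zip(g, Ad)]
        beta = dot(g_new, g_new) / dot(g, g)          # Fletcher-Reeves beta
        d = [-gn + beta * di for gn, di in zip(g_new, d)]
        g = g_new
        if dot(g, g) < 1e-18:                         # converged
            break
    return w

w_star = fletcher_reeves_cg([0.0, 0.0])
```

For this 2-dimensional quadratic, conjugate gradient reaches the minimizer (0.2, 0.4) in two iterations; the Polak-Ribière variant differs only in the formula for beta, using dot(g_new, [gn − gi for gn, gi in zip(g_new, g)]) in the numerator.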