The effect of adaptive parameters on the performance of back propagation

Bibliographic Details
Main Author: Abdul Hamid, Norhamreeza
Format: Thesis
Language: English
Published: 2012
Online Access:http://eprints.uthm.edu.my/2344/1/24p%20NORHAMREEZA%20ABDUL%20HAMID.pdf
http://eprints.uthm.edu.my/2344/2/NORHAMREEZA%20ABDUL%20HAMID%20COPYRIGHT%20DECLARATION.pdf
http://eprints.uthm.edu.my/2344/3/NORHAMREEZA%20ABDUL%20HAMID%20WATERMARK.pdf
Summary: The Back Propagation algorithm and its variants for Multilayered Feedforward Networks are widely used in many applications. However, the algorithm is well known to suffer from the local minima problem, particularly when caused by neuron saturation in the hidden layer. Most existing approaches modify the learning model by adding a random factor, which counteracts the tendency to sink into local minima. However, random perturbations of the search direction and various stochastic adjustments to the current set of weights are often ineffective at enabling a network to escape local minima, so the network fails to converge to a global minimum within a reasonable number of iterations. This research therefore proposes a new method, Back Propagation Gradient Descent with Adaptive Gain, Adaptive Momentum and Adaptive Learning Rate (BPGD-AGAMAL), which modifies the existing Back Propagation Gradient Descent algorithm by adaptively changing the gain, momentum coefficient, and learning rate. In this method, each training pattern has its own activation function for the neurons in the hidden layer. The activation functions are adjusted by adapting the gain parameters, together with the momentum coefficient and learning rate, during the learning process. The efficiency of the proposed algorithm is compared with conventional Back Propagation Gradient Descent and Back Propagation Gradient Descent with Adaptive Gain through simulations on six benchmark problems, namely breast cancer, card, glass, iris, soybean, and thyroid. The results show that the proposed algorithm substantially improves the learning process of the conventional Back Propagation algorithm.
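As a rough sketch of the mechanism the summary describes, the Python code below trains a small network whose hidden sigmoids carry a per-neuron gain (slope) parameter, updates weights by gradient descent with momentum, and adapts the gains by gradient descent as well, while a "bold driver" heuristic grows or shrinks the learning rate and momentum with the epoch error. The gain-gradient expressions, the bold-driver adaptation rule, and the toy XOR data are illustrative assumptions only; the actual BPGD-AGAMAL update rules and benchmark setups are defined in the thesis itself.

```python
import numpy as np

def sigmoid(net, gain=1.0):
    # Logistic activation with an explicit gain (slope) parameter:
    # sigmoid(net) = 1 / (1 + exp(-gain * net)).
    return 1.0 / (1.0 + np.exp(-gain * net))

def train_step(W1, W2, gain, x, t, lr, momentum, v1, v2):
    """One backprop step with per-neuron hidden gains and momentum."""
    # Forward pass: each hidden neuron j uses its own gain[j].
    net_h = W1 @ x
    h = sigmoid(net_h, gain)
    y = sigmoid(W2 @ h)

    # Backward pass: the gain scales the sigmoid derivative, since
    # d/dnet sigmoid(gain * net) = gain * h * (1 - h).
    err = y - t
    delta_out = err * y * (1.0 - y)
    back = W2.T @ delta_out                      # error at each hidden neuron
    delta_hid = back * gain * h * (1.0 - h)

    # Momentum-smoothed gradient-descent weight updates.
    v2 = momentum * v2 - lr * np.outer(delta_out, h)
    v1 = momentum * v1 - lr * np.outer(delta_hid, x)
    W2 += v2
    W1 += v1

    # Gains are adapted by gradient descent too, using
    # d/dgain sigmoid(gain * net) = net * h * (1 - h).
    gain -= lr * back * h * (1.0 - h) * net_h

    return W1, W2, gain, v1, v2, 0.5 * float(err @ err)

# Toy XOR data stands in for the thesis's benchmark problems.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.5, size=(4, 2))          # 2 inputs -> 4 hidden
W2 = rng.normal(scale=0.5, size=(1, 4))          # 4 hidden -> 1 output
gain = np.ones(4)                                # one gain per hidden neuron
v1, v2 = np.zeros_like(W1), np.zeros_like(W2)
lr, momentum, prev_loss = 0.5, 0.8, np.inf

for epoch in range(2000):
    loss = 0.0
    for x, t in zip(X, T):
        W1, W2, gain, v1, v2, l = train_step(W1, W2, gain, x, t,
                                             lr, momentum, v1, v2)
        loss += l
    # Illustrative "bold driver" adaptation: grow the learning rate and
    # momentum while the epoch error falls, shrink them when it rises.
    if loss < prev_loss:
        lr, momentum = min(2.0, lr * 1.03), min(0.95, momentum * 1.01)
    else:
        lr, momentum = lr * 0.7, momentum * 0.9
    prev_loss = loss

print("final loss:", prev_loss)
```

Raising a neuron's gain steepens its sigmoid, which restores a usable gradient in the flat regions that saturated hidden neurons produce; this is the intuition behind adapting the gain alongside the momentum and learning rate.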