Conjugate Gradient Algorithm Based on Aitken's Process for Training Neural Networks

Section: Research Paper
Published: Jun 25, 2025
Pages: 39–51

Abstract

Conjugate gradient methods are excellent neural network training methods because of their simplicity, numerical efficiency, and very low memory requirements. It is well known that training a neural network can be formulated as an unconstrained optimization problem, and many attempts have been made to speed up this process. In particular, various algorithms motivated by numerical optimization theory have been applied to accelerate neural network training. In this paper, we propose a conjugate gradient training algorithm based on Aitken's process that guarantees sufficient descent under the Wolfe line search. Moreover, we establish that the proposed method is globally convergent for general functions under the strong Wolfe conditions. In the experimental results, we compare the behavior of the proposed method (NACG) with well-known methods in this field.
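For readers who want a concrete picture of the ingredients named above: Aitken's Δ² process accelerates a convergent sequence x_k by replacing it with x̂ = x_{k+2} - (x_{k+2} - x_{k+1})² / (x_{k+2} - 2x_{k+1} + x_k). The Python sketch below is illustrative only and is not the paper's NACG method: it pairs a standard Fletcher–Reeves conjugate gradient step (a stand-in for the paper's β formula, which is not reproduced here) with a componentwise Aitken Δ² extrapolation of the iterates, accepted only when it lowers the objective, and uses SciPy's strong Wolfe line search on the Rosenbrock test function. The helper names aitken_delta2 and cg_aitken are our own.

import numpy as np
from scipy.optimize import line_search, rosen, rosen_der

def aitken_delta2(x0, x1, x2, eps=1e-12):
    # Componentwise Aitken Delta^2 extrapolation of three successive iterates.
    d1, d2 = x1 - x0, x2 - x1
    denom = d2 - d1                          # second difference, Delta^2 x
    out = x2.copy()
    safe = np.abs(denom) > eps               # skip components with a near-zero denominator
    out[safe] = x2[safe] - d2[safe] ** 2 / denom[safe]
    return out

def cg_aitken(f, grad, x, max_iter=500, tol=1e-6):
    g = grad(x)
    d = -g
    history = [x.copy()]
    for k in range(max_iter):
        # SciPy's line_search enforces the strong Wolfe conditions (defaults c1=1e-4, c2=0.9).
        alpha = line_search(f, grad, x, d, gfk=g)[0]
        if alpha is None:                    # line search failed: restart along -g
            d, alpha = -g, 1e-4
        x_new = x + alpha * d
        g_new = grad(x_new)
        if np.linalg.norm(g_new) < tol:
            return x_new, k
        beta = (g_new @ g_new) / (g @ g)     # Fletcher-Reeves beta, a stand-in for NACG's formula
        d = -g_new + beta * d
        x, g = x_new, g_new
        history.append(x.copy())
        if len(history) >= 3:                # try an Aitken-accelerated point
            x_acc = aitken_delta2(*history[-3:])
            if f(x_acc) < f(x):              # accept only if the objective improves
                g_acc = grad(x_acc)
                x, g, d = x_acc, g_acc, -g_acc
    return x, max_iter

x_min, iters = cg_aitken(rosen, rosen_der, np.array([-1.2, 1.0]))
print("minimizer ~", x_min, "after", iters, "iterations")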

How to Cite

Abbo, K. K., & Mohammed, H. H. (2025). Conjugate Gradient Algorithm Based on Aitken’s Process for Training Neural Networks. AL-Rafidain Journal of Computer Sciences and Mathematics, 11(1), 39–51. Retrieved from https://rjps.uomosul.edu.iq/index.php/csmj/article/view/19387