Application of Gradient Optimization Methods in Defining Neural Dynamics

By: Predrag S. Stanimirović, Nataša Tešić, Dimitrios Gerontitis, Gradimir V. Milovanović, Milena J. Petrović, Vladimir L. Kazakovtsev, Vladislav Stasiuk

Format: Article
Published: MDPI AG, 2024-01-01

Description

Applications of the gradient method for nonlinear optimization in the development of the Gradient Neural Network (GNN) and the Zhang Neural Network (ZNN) are investigated. In particular, the solution of the time-varying matrix equation <inline-formula><math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><semantics><mrow><mi>A</mi><mi>X</mi><mi>B</mi><mo>=</mo><mi>D</mi></mrow></semantics></math></inline-formula> is studied using a novel GNN model, termed GGNN<inline-formula><math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><semantics><mrow><mo>(</mo><mi>A</mi><mo>,</mo><mi>B</mi><mo>,</mo><mi>D</mi><mo>)</mo></mrow></semantics></math></inline-formula>. The GGNN model is developed by applying GNN dynamics to the gradient of the error matrix used in the development of the GNN model. The convergence analysis shows that the neural state matrix of the GGNN<inline-formula><math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><semantics><mrow><mo>(</mo><mi>A</mi><mo>,</mo><mi>B</mi><mo>,</mo><mi>D</mi><mo>)</mo></mrow></semantics></math></inline-formula> design converges asymptotically to a solution of the matrix equation <inline-formula><math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><semantics><mrow><mi>A</mi><mi>X</mi><mi>B</mi><mo>=</mo><mi>D</mi></mrow></semantics></math></inline-formula> for any initial state matrix. It is also shown that the limit of the convergence is the least-squares solution determined by the selected initial matrix. A hybridization of GGNN with GZNN, the analogous modification of the ZNN dynamics, is considered. A Simulink implementation of the presented GGNN models is carried out on a set of real matrices.
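To make the construction concrete, the classical GNN for AXB = D minimizes the scalar error ε(X) = ||AXB − D||²_F/2, whose gradient is Aᵀ(AXB − D)Bᵀ, giving the flow dX/dt = −γAᵀ(AXB − D)Bᵀ. The GGNN idea described in the abstract applies the same dynamics to that gradient matrix treated as the error. The following is a minimal Euler-integration sketch of that construction, not the authors' implementation; the function name `ggnn_solve`, the step size `dt`, the gain `gamma`, and the zero initial state are illustrative assumptions.

```python
import numpy as np

def ggnn_solve(A, B, D, gamma=10.0, dt=1e-4, steps=20000):
    """Euler-integrated sketch of GGNN-type dynamics for A X B = D.

    Classical GNN flow:  dX/dt = -gamma * A.T @ (A X B - D) @ B.T.
    GGNN (as sketched here) takes G = A.T @ (A X B - D) @ B.T as the
    error matrix and descends the gradient of ||G||_F^2 / 2, which is
    A.T @ A @ G @ B @ B.T.
    """
    X = np.zeros((A.shape[1], B.shape[0]))  # arbitrary initial state
    for _ in range(steps):
        E = A @ X @ B - D          # residual of the matrix equation
        G = A.T @ E @ B.T          # gradient of the GNN error function
        # GGNN step: treat G as the error and follow its own gradient flow
        X -= dt * gamma * (A.T @ A @ G @ B @ B.T)
    return X
```

With invertible A and B the state converges to the unique solution; for singular data the limit depends on the initial matrix, consistent with the least-squares behavior stated in the abstract. The step size must be small enough relative to gamma and the spectra of AᵀA and BBᵀ for the explicit Euler scheme to remain stable.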