A Continual Learning Algorithm Based on Orthogonal Gradient Descent Beyond Neural Tangent Kernel Regime
By: Da Eun Lee, Kensuke Nakamura, Jae-Ho Tak, Byung-Woo Hong
Format: Article
Published: IEEE, 2023-01-01
Description
Continual learning aims to enable neural networks to learn new tasks without catastrophic forgetting of previously acquired knowledge. Orthogonal Gradient Descent algorithms have been proposed as an effective way to mitigate catastrophic forgetting. However, these algorithms often rely on the Neural Tangent Kernel regime, which imposes limitations on network architecture. In this study, we propose a novel method to construct an orthonormal basis set for orthogonal projection by leveraging a Catastrophic Forgetting Loss. In contrast to the conventional gradient-based basis, which reflects an update of the model only within an infinitesimal neighborhood, our loss-based basis can account for the variation between two distinct points in the model parameter space, thus overcoming the limitations of the Neural Tangent Kernel regime. We provide both quantitative and qualitative analyses of the proposed method, discussing its advantages over conventional gradient-based baselines. Our approach is extensively evaluated on various model architectures and datasets, demonstrating a significant performance advantage, especially for deep or narrow networks where the Neural Tangent Kernel regime is violated. Furthermore, we offer a mathematical analysis based on higher-order Taylor series to provide theoretical justification. This study introduces a novel theoretical framework and a practical algorithm, potentially inspiring further research in areas such as continual learning, network debugging, and one-pass learning.
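The description above refers to the orthogonal-projection mechanism shared by Orthogonal Gradient Descent methods: directions that matter for previously learned tasks are collected into an orthonormal basis, and gradients for a new task are projected onto the orthogonal complement of that basis before the parameter update. The PyTorch sketch below illustrates only this generic mechanism under simple assumptions; it is not the authors' implementation, and the paper's loss-based basis construction is not reproduced. The helper names (`gram_schmidt`, `project_out`, `flat_grad`, `apply_flat_update`) and the loaders (`old_task_memory`, `new_task_loader`) are illustrative placeholders.

```python
# Minimal sketch of projection-based continual learning in the spirit of
# Orthogonal Gradient Descent. Illustrative only; names are assumptions.
import torch
import torch.nn.functional as F

def gram_schmidt(vectors, eps=1e-10):
    """Orthonormalize a list of flat 1-D tensors (classical Gram-Schmidt)."""
    basis = []
    for v in vectors:
        w = v.clone()
        for b in basis:
            w = w - (w @ b) * b          # remove component along existing basis vector
        norm = w.norm()
        if norm > eps:                    # skip (near-)linearly dependent directions
            basis.append(w / norm)
    return basis

def project_out(grad, basis):
    """Return the component of `grad` orthogonal to every stored basis vector."""
    for b in basis:
        grad = grad - (grad @ b) * b
    return grad

def flat_grad(model):
    """Concatenate all parameter gradients into a single flat vector."""
    return torch.cat([p.grad.detach().reshape(-1) for p in model.parameters()])

def apply_flat_update(model, flat_update, lr):
    """Write a flat update vector back into the model parameters."""
    offset = 0
    with torch.no_grad():
        for p in model.parameters():
            n = p.numel()
            p -= lr * flat_update[offset:offset + n].view_as(p)
            offset += n

# After finishing an old task: collect directions to protect (here, plain
# task-loss gradients on retained samples) and orthonormalize them.
old_dirs = []
for x, y in old_task_memory:
    model.zero_grad()
    F.cross_entropy(model(x), y).backward()
    old_dirs.append(flat_grad(model))
basis = gram_schmidt(old_dirs)

# Training on a new task: project each gradient onto the orthogonal
# complement of the stored basis before updating the parameters.
for x, y in new_task_loader:
    model.zero_grad()
    F.cross_entropy(model(x), y).backward()
    apply_flat_update(model, project_out(flat_grad(model), basis), lr=0.01)
```

In this sketch the protected directions are first-order gradients, which is exactly the approximation tied to the Neural Tangent Kernel regime that the paper argues against; the proposed method instead derives the basis from a Catastrophic Forgetting Loss so that it captures finite, rather than infinitesimal, parameter changes.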