Inverse-Free Incremental Learning Algorithms With Reduced Complexity for Regularized Extreme Learning Machine

By: Hufei Zhu, Yanpeng Wu

Format: Article
Published: IEEE, 2020-01-01

Description

The existing inverse-free incremental learning algorithm for the regularized extreme learning machine (ELM) is based on an inverse-free algorithm for updating the regularized pseudo-inverse, which was deduced from an inverse-free recursive algorithm for updating the inverse of a Hermitian matrix. An improved version of that recursive algorithm had already been utilized in the previous literature before the original was applied in the existing inverse-free ELM. From the improved recursive algorithm for updating the inverse, we deduce a more efficient inverse-free algorithm for updating the regularized pseudo-inverse, from which we propose an inverse-free incremental ELM algorithm based on the regularized pseudo-inverse. Usually the above-mentioned inverse is smaller than the pseudo-inverse, but on processor units with limited precision the recursive algorithm for updating the inverse may introduce numerical instabilities. To further reduce the computational complexity, we therefore also propose an inverse-free incremental ELM algorithm based on the $\mathrm{LDL}^T$ factors of the inverse, where the $\mathrm{LDL}^T$ factors are updated iteratively by the inverse $\mathrm{LDL}^T$ factorization. With respect to the existing inverse-free ELM, the proposed ELM based on the regularized pseudo-inverse and the one based on the $\mathrm{LDL}^T$ factors are expected to require only $\frac{3}{8+M}$ and $\frac{1}{8+M}$ of its complexity, respectively, where $M$ is the number of output nodes. The numerical experiments show that both proposed ELM algorithms significantly accelerate the existing inverse-free ELM, with a speedup in training time of at least 1.41. On the Modified National Institute of Standards and Technology (MNIST) dataset, the proposed algorithm based on the $\mathrm{LDL}^T$ factors is usually much faster than the one based on the regularized pseudo-inverse. Moreover, in the numerical experiments the original ELM, the existing inverse-free ELM, and the two proposed ELM algorithms achieve the same regression and classification performance and produce identical solutions, including the output weights and the output sequence for the same input sequence.
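The kind of incremental update described in the abstract can be illustrated in a few lines of NumPy. The following is a minimal sketch of the general idea only, not the authors' exact recursions: when a new hidden node appends a column h to the hidden-layer output matrix H, the Hermitian matrix R = H^T H + lam*I grows by one row and column, and its inverse can be updated by a Schur-complement block recursion rather than recomputed from scratch. All names here (add_hidden_node, R_inv, lam, ...) are illustrative, not from the paper.

    import numpy as np

    def add_hidden_node(R_inv, H, T, h, lam):
        """Grow a regularized ELM by one hidden node, updating the inverse
        of R = H.T @ H + lam*I by a Schur-complement block recursion
        (one outer product, no explicit matrix inversion)."""
        v = H.T @ h                        # cross terms with existing nodes
        c = h @ h + lam                    # new (regularized) diagonal entry
        u = R_inv @ v
        s = c - v @ u                      # Schur complement (scalar)
        top_left = R_inv + np.outer(u, u) / s
        R_inv_new = np.block([[top_left,        -u[:, None] / s],
                              [-u[None, :] / s, np.array([[1.0 / s]])]])
        H_new = np.column_stack([H, h])
        W_new = R_inv_new @ (H_new.T @ T)  # updated output weights
        return R_inv_new, H_new, W_new

An $\mathrm{LDL}^T$-based variant tracks triangular factors instead of the inverse itself, so appending a node costs only a triangular solve. The sketch below is again a hypothetical illustration of the general technique: it maintains the $\mathrm{LDL}^T$ factors of R = H^T H + lam*I (the paper instead factors the inverse) and obtains the output weights from two triangular solves.

    import numpy as np
    from scipy.linalg import solve_triangular

    def add_node_ldl(L, d, H, T, h, lam):
        """Same incremental growth, but maintaining R = L @ diag(d) @ L.T
        with L unit lower triangular, so no inverse is ever formed."""
        v = H.T @ h
        c = h @ h + lam
        y = solve_triangular(L, v, lower=True, unit_diagonal=True)
        l = y / d                          # new row of L: solves L @ diag(d) @ l = v
        k = L.shape[0]
        L_new = np.block([[L,          np.zeros((k, 1))],
                          [l[None, :], np.array([[1.0]])]])
        d_new = np.append(d, c - l @ (d * l))   # new diagonal entry of D
        H_new = np.column_stack([H, h])
        # Output weights from forward solve, diagonal scaling, back solve
        b = H_new.T @ T
        z = solve_triangular(L_new, b, lower=True, unit_diagonal=True)
        W_new = solve_triangular(L_new.T, z / d_new[:, None],
                                 lower=False, unit_diagonal=True)
        return L_new, d_new, H_new, W_new

Both sketches assume h has shape (N,) and the targets T have shape (N, M), where M is the number of output nodes in the paper's notation.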