Smooth Group L1/2 Regularization for Pruning Convolutional Neural Networks
By: Yuan Bao, Zhaobin Liu, Zhongxuan Luo, Sibo Yang
Format: Article
Published: MDPI AG 2022-01-01
Description
In this paper, a novel smooth group L1/2 (SGL1/2) regularization method is proposed for pruning hidden nodes of the fully connected layer in convolutional neural networks. Usually, the selection of nodes and weights is based on experience, and the convolution filter is symmetric in the convolutional neural network. The main contribution of SGL1/2 is that it drives the weights toward 0 at the group level, so a hidden node can be pruned whenever all of its corresponding weights are close to 0. Furthermore, because the regularizer is smooth, a feasibility analysis of the new method is carried out under some reasonable assumptions. The numerical results demonstrate the superiority of the SGL1/2 method with respect to sparsity, without damaging the classification performance.
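To make the idea concrete, below is a minimal PyTorch sketch of a smooth group L1/2-style penalty applied to a fully connected layer, where each hidden node's incoming weights form one group. The abstract does not specify the paper's exact smoothing function, so the surrogate (||w_g||^2 + eps)^(1/4), the layer sizes, and the regularization strength lam are all illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

def smooth_group_l12_penalty(weight: torch.Tensor, eps: float = 1e-4) -> torch.Tensor:
    """Smooth surrogate for a group L1/2 penalty on a fully connected layer.

    Each row of `weight` (the incoming weights of one hidden node) is one group.
    (||w_g||^2 + eps)^(1/4) is a simple differentiable stand-in for ||w_g||^(1/2);
    the paper's actual smoothing function may differ.
    """
    group_sq_norms = (weight ** 2).sum(dim=1)      # ||w_g||^2 for each hidden node
    return ((group_sq_norms + eps) ** 0.25).sum()  # sum over groups

# Usage sketch: add the penalty to the task loss during training.
fc = nn.Linear(512, 128)        # hypothetical fully connected layer to be pruned
head = nn.Linear(128, 10)       # hypothetical classification head
criterion = nn.CrossEntropyLoss()

x = torch.randn(32, 512)                    # dummy batch of features
target = torch.randint(0, 10, (32,))        # dummy labels
lam = 1e-3                                  # assumed regularization strength

loss = criterion(head(torch.relu(fc(x))), target) + lam * smooth_group_l12_penalty(fc.weight)
loss.backward()

# After training, hidden nodes whose group norm is near zero are candidates for pruning.
prunable_nodes = (fc.weight.norm(dim=1) < 1e-2).sum()
```

Grouping by rows means the penalty shrinks whole nodes rather than individual weights, which is what allows entire hidden units to be removed once their group norm collapses toward zero.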