A Lagrange Programming Neural Network Approach with an <i>ℓ</i><sub>0</sub>-Norm Sparsity Measurement for Sparse Recovery and Its Circuit Realization
by: Hao Wang, Ruibin Feng, Chi-Sing Leung, Hau Ping Chan, Anthony G. Constantinides
| Format: | Article |
|---|---|
| Published: | MDPI AG 2022-12-01 |
Description
Many analog neural network approaches for sparse recovery are based on using the <inline-formula><math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><semantics><msub><mo>ℓ</mo><mn>1</mn></msub></semantics></math></inline-formula>-norm as a surrogate for the <inline-formula><math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><semantics><msub><mo>ℓ</mo><mn>0</mn></msub></semantics></math></inline-formula>-norm. This paper proposes an analog neural network model, namely the Lagrange programming neural network with <inline-formula><math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><semantics><msub><mo>ℓ</mo><mi>p</mi></msub></semantics></math></inline-formula> objective and quadratic constraint (LPNN-LPQC), with an <inline-formula><math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><semantics><msub><mo>ℓ</mo><mn>0</mn></msub></semantics></math></inline-formula>-norm sparsity measurement for solving the constrained basis pursuit denoise (CBPDN) problem. As the <inline-formula><math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><semantics><msub><mo>ℓ</mo><mn>0</mn></msub></semantics></math></inline-formula>-norm is non-differentiable, we first use a differentiable <inline-formula><math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><semantics><msub><mo>ℓ</mo><mi>p</mi></msub></semantics></math></inline-formula>-norm-like function to approximate the <inline-formula><math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><semantics><msub><mo>ℓ</mo><mn>0</mn></msub></semantics></math></inline-formula>-norm. However, this <inline-formula><math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><semantics><msub><mo>ℓ</mo><mi>p</mi></msub></semantics></math></inline-formula>-norm-like function does not have an explicit expression; thus, we use the locally competitive algorithm (LCA) concept to handle the nonexistence of the explicit expression. 
With the LCA approach, the dynamics are defined by the internal state vector. In the proposed model, the thresholding elements are not the conventional analog elements used in analog optimization, so this paper also proposes a circuit realization for the thresholding elements. On the theoretical side, we prove that the equilibrium points of our proposed method satisfy the Karush-Kuhn-Tucker (KKT) conditions of the approximated CBPDN problem, and that these equilibrium points are asymptotically stable. We perform a large-scale simulation on various algorithms and analog models. Simulation results show that the proposed algorithm is better than or comparable to several state-of-the-art numerical algorithms, and that it is better than state-of-the-art analog neural models.
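To make the LCA idea concrete, the following is a minimal numerical sketch of internal-state dynamics driven through a thresholding element. It uses the classical soft-threshold (the ℓ1 surrogate) as the thresholding element, since the paper's ℓp-norm-like threshold has no explicit expression here; the dynamics form, step size, and demo problem are illustrative assumptions, not the authors' LPNN-LPQC model.

```python
import numpy as np

def soft_threshold(u, lam):
    # Thresholding element for the l1 surrogate (soft threshold).
    # The paper's model would replace this with an lp-norm-like
    # threshold realized in analog circuitry (not reproduced here).
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

def lca_sparse_recovery(A, b, lam=0.1, dt=0.05, steps=2000):
    # Forward-Euler integration of the standard LCA internal-state
    # dynamics:  du/dt = -u + x - A^T (A x - b),  where x = T_lam(u).
    # The dynamics are defined on the internal state u; the output x
    # is read through the thresholding element.
    _, n = A.shape
    u = np.zeros(n)
    for _ in range(steps):
        x = soft_threshold(u, lam)
        u = u + dt * (-u + x - A.T @ (A @ x - b))
    return soft_threshold(u, lam)

# Small demo: recover a 2-sparse vector from noiseless measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 60)) / np.sqrt(30)
x_true = np.zeros(60)
x_true[[5, 17]] = [1.5, -2.0]
b = A @ x_true
x_hat = lca_sparse_recovery(A, b, lam=0.05)
```

In an analog realization, the Euler loop above corresponds to the continuous-time evolution of the circuit's state voltages, and the thresholding function to a dedicated nonlinear element.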