Oversmoothing regularization with $\ell^1$-penalty term
By: Daniel Gerth, Bernd Hofmann
| Format: | Article |
|---|---|
| Published: | AIMS Press 2019-08-01 |
Description
In Tikhonov-type regularization for ill-posed problems with noisy data, the penalty functional is typically interpreted as carrying a-priori information about the unknown true solution. We consider in this paper the case that the corresponding a-priori information is too strong, so that the penalty functional is oversmoothing, which means that its value is infinite at the true solution. In the case of oversmoothing penalties, convergence and convergence rate assertions for the regularized solutions are difficult to derive; convincing results have been published only for the Hilbert scale setting. We attempt to extend this setting to $\ell^1$-regularization when the solutions are only in $\ell^2$. Unfortunately, we have to restrict our studies to the case of bounded linear operators with diagonal structure, mapping between $\ell^2$ and a separable Hilbert space. For this subcase, however, we are able to formulate and prove a convergence theorem, which we support with numerical examples.
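The diagonal setting mentioned in the abstract can be illustrated concretely: for a diagonal operator $A = \mathrm{diag}(a_1, a_2, \dots)$, the $\ell^1$-penalized Tikhonov problem $\min_x \|Ax - y\|^2 + \alpha \|x\|_1$ decouples into scalar problems, each solved in closed form by soft-thresholding. The sketch below is a minimal illustration of this general fact, not the authors' numerical method; the function names, the choice of decaying diagonal entries, and the toy data are all hypothetical.

```python
import numpy as np

def soft_threshold(z, t):
    # componentwise soft-thresholding S_t(z) = sign(z) * max(|z| - t, 0)
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def tikhonov_l1_diagonal(a, y, alpha):
    # For A = diag(a), minimizing ||A x - y||^2 + alpha * ||x||_1
    # decouples coordinatewise; each scalar problem
    #   (a_i x_i - y_i)^2 + alpha |x_i|
    # has the closed-form solution x_i = S_{alpha / (2 a_i^2)}(y_i / a_i).
    return soft_threshold(y / a, alpha / (2.0 * a**2))

# Toy example (hypothetical data): decaying diagonal entries mimic an
# ill-posed operator; the true solution has a few nonzero coefficients.
rng = np.random.default_rng(0)
n = 200
a = 1.0 / np.arange(1, n + 1)                     # diagonal of A
x_true = np.zeros(n)
x_true[:5] = [1.0, -0.5, 0.3, 0.2, 0.1]
y = a * x_true + 1e-3 * rng.standard_normal(n)    # noisy data
x_reg = tikhonov_l1_diagonal(a, y, alpha=1e-4)    # regularized solution
```

Because the coordinates decouple, no iterative solver is needed in this subcase; the regularization parameter `alpha` controls how aggressively small (mostly noise-dominated) coefficients are set to zero.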