Multimodal Representation Learning and Set Attention for LWIR In-Scene Atmospheric Compensation
by: Nicholas Westing, Kevin C. Gross, Brett J. Borghetti, Christine M. Schubert Kabban, Jacob Martin, Joseph Meola
| Format: | Article |
|---|---|
| Published: | IEEE 2021-01-01 |
Description
A multimodal generative modeling approach combined with permutation-invariant set attention is investigated in this article to support long-wave infrared (LWIR) in-scene atmospheric compensation. The generative model can produce realistic atmospheric state vectors (T, H<sub>2</sub>O, O<sub>3</sub>) and their corresponding transmittance, upwelling radiance, and downwelling radiance (TUD) vectors by sampling a low-dimensional space. Variational loss, LWIR radiative transfer loss, and atmospheric state loss constrain the low-dimensional space, resulting in lower reconstruction error compared to standard mean-squared-error approaches. A permutation-invariant network predicts the generative model's low-dimensional components from in-scene data, allowing for simultaneous estimates of the atmospheric state and TUD vector. Forward modeling the predicted atmospheric state vector yields a second atmospheric compensation estimate. Results are reported for collected LWIR data and compared against fast line-of-sight atmospheric analysis of hypercubes-infrared (FLAASH-IR), demonstrating commensurate performance when applied to a target detection scenario. Additionally, an approximately eight-fold reduction in detection time is realized using this neural-network-based algorithm compared to FLAASH-IR. Accelerating the target detection pipeline while providing multiple atmospheric estimates is necessary for many real-world, time-sensitive tasks.
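The key property the abstract relies on is permutation invariance: the predicted latent components must not depend on the order in which in-scene pixels are fed to the network. As a minimal illustration of that property (not the paper's set-attention architecture), the sketch below uses a simpler Deep-Sets-style encoder with mean pooling; all shapes, weights, and function names are illustrative assumptions, not taken from the article.

```python
import numpy as np

def set_encoder(pixel_spectra, w_phi, w_rho):
    """Permutation-invariant set encoder (illustrative sketch):
    embed each pixel spectrum, pool with a symmetric mean,
    then map the pooled summary to predicted latent components."""
    h = np.tanh(pixel_spectra @ w_phi)   # per-pixel embedding
    pooled = h.mean(axis=0)              # order-independent pooling over the set
    return pooled @ w_rho                # predicted low-dimensional components

rng = np.random.default_rng(0)
spectra = rng.normal(size=(50, 128))     # 50 in-scene pixels, 128 LWIR bands (assumed)
w_phi = rng.normal(size=(128, 32))       # embedding weights (random, for illustration)
w_rho = rng.normal(size=(32, 8))         # 8 latent components (assumed dimension)

z = set_encoder(spectra, w_phi, w_rho)
z_shuffled = set_encoder(spectra[rng.permutation(50)], w_phi, w_rho)
assert np.allclose(z, z_shuffled)        # same output for any pixel ordering
```

The mean pool is what makes the output independent of pixel ordering; the article's set-attention mechanism replaces this fixed pooling with a learned, attention-weighted aggregation while preserving the same invariance.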