Abstract
The Active Optics System of the Vera C. Rubin Observatory (Rubin) uses information provided by four wave front sensors to determine deviations between the reconstructed wave front and the ideal wave front. The observed deviations are used to adjust the control parameters of the optical system to maintain image quality across the 3.°5 field of view. The baseline approach from the project is to obtain amplitudes of the Zernike polynomials describing the distorted wave front from out-of-focus images collected by the wave front sensors. These Zernike amplitudes are related via an “influence matrix” to the control parameters necessary to correct the wave front. In this paper, we use deep-learning methods to extract the control parameters directly from the images captured by the wave front sensors. Our neural net model uses anti-aliasing pooling to boost performance, and a domain-specific loss function to aid learning and generalization. The accuracy of the control parameters derived from our model exceeds Rubin requirements even in the presence of full-moon background levels and mis-centering of reference stars. Although the training process is time consuming, model evaluation requires only a few milliseconds. This low latency should allow for the correction of the optical configuration during the readout and slew interval between successive exposures.
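The abstract's baseline pipeline relates measured Zernike amplitudes to optical control parameters through a linear "influence matrix." As a minimal sketch of that step (the matrix, its dimensions, and the variable names here are illustrative assumptions, not Rubin's actual sensitivity matrix), the control corrections can be recovered by a least-squares solve:

```python
import numpy as np

# Illustrative sketch of the influence-matrix step: Zernike amplitudes z
# are modeled as a linear response to control parameters u, i.e. z ≈ A @ u.
# All sizes and values below are hypothetical.
rng = np.random.default_rng(0)

n_zernike, n_controls = 19, 10                  # illustrative dimensions
A = rng.normal(size=(n_zernike, n_controls))    # assumed-known influence matrix
u_true = rng.normal(size=n_controls)            # "true" control offsets
z = A @ u_true                                  # amplitudes from the wavefront fit

# Recover the control corrections via the pseudoinverse (least squares).
u_est, *_ = np.linalg.lstsq(A, z, rcond=None)
```

The deep-learning approach described in the paper replaces this two-stage estimate (images to Zernike amplitudes to controls) with a direct mapping from wavefront-sensor images to control parameters.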
Funder
U.S. Department of Energy
MoSTR ∣ National Science Foundation of Sri Lanka
Publisher
American Astronomical Society
Subject
Space and Planetary Science, Astronomy and Astrophysics
Cited by
7 articles.