A focal-plane wavefront sensor offers major advantages for adaptive optics, including the removal of non-common-path error and sensitivity to blind modes (such as petalling). But simply using the observed point spread function (PSF) is not sufficient for wavefront correction, as only the intensity, not the phase, is measured. Here we demonstrate the use of a multimode-fiber mode converter (a photonic lantern) to directly measure the phase and amplitude of the wavefront at the focal plane. Starlight is injected into a multimode fiber at the image plane, such that the combination of modes excited within the fiber is a function of the phase and amplitude of the incident wavefront. The fiber undergoes an adiabatic transition into a set of multiple single-mode outputs, so that the distribution of intensities among them encodes the incident wavefront. The mapping (which may be strongly non-linear) between spatial modes in the PSF and the output intensities is stable but must be learned. This is done with a deep neural network, trained by applying random combinations of spatial modes to the deformable mirror. Once trained, the neural network predicts the incident wavefront in real time from any set of measured output intensities. We demonstrate the successful reconstruction of laboratory-produced wavefronts exhibiting the low-wind effect, as well as an on-sky reconstruction of low-order modes consistent with those measured by the existing pyramid wavefront sensor, using the SCExAO instrument at the Subaru Telescope.
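
To make the learned intensity-to-wavefront mapping concrete, the following minimal sketch trains a small fully connected network that maps a vector of lantern output intensities to a vector of low-order modal (e.g., Zernike) coefficients. This is an illustration, not the authors' actual implementation: the output count `N_OUTPUTS`, mode count `N_MODES`, layer sizes, and the random placeholder arrays are all assumptions. In practice the training pairs would come from applying known random mode combinations to the deformable mirror and recording the corresponding lantern output intensities.

```python
# Hypothetical sketch of the learned mapping: lantern output
# intensities -> modal wavefront coefficients. All sizes and the
# placeholder data are assumptions for illustration only.
import torch
import torch.nn as nn

N_OUTPUTS = 19  # assumed number of single-mode lantern outputs
N_MODES = 9     # assumed number of reconstructed modal coefficients

# A simple multilayer perceptron; the mapping may be strongly
# non-linear, hence the hidden layers with ReLU activations.
model = nn.Sequential(
    nn.Linear(N_OUTPUTS, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, N_MODES),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Placeholder training set standing in for (intensity, coefficient)
# pairs gathered by probing the deformable mirror with random modes.
intensities = torch.rand(10_000, N_OUTPUTS)
coeffs = torch.randn(10_000, N_MODES)

for epoch in range(20):
    pred = model(intensities)          # predicted modal coefficients
    loss = loss_fn(pred, coeffs)       # mean-squared reconstruction error
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Once trained, a single forward pass converts a measured set of
# output intensities into an estimate of the incident wavefront.
wavefront_estimate = model(torch.rand(1, N_OUTPUTS))
```

After training, wavefront prediction reduces to one forward pass per frame, which is what makes the approach compatible with real-time correction.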