Stabilization of the 81-channel coherent beam combination using machine learning.
Published Web Location: https://doi.org/10.1364/oe.414985
We develop a rapidly converging algorithm for stabilizing a large-channel-count diffractive optical coherent beam combination. An 81-beam combiner is controlled by a novel machine-learning-based iterative method that corrects the optical phases, operating on an experimentally calibrated numerical model. A neural network is trained to detect phase errors by recognizing the interference patterns of the uncombined beams adjacent to the combined one. Because solutions are non-unique over the full space of possible phases, the network is trained within a limited phase perturbation/error range; this also reduces the number of samples needed for training. Simulations show that the network converges in a single step for small phase perturbations. When the trained network is applied to the realistic case of a full 360-degree range, an iterative scheme begins with a random walk that exploits the accuracy of the predicted phase-feedback direction, driving the phases into the training range where the network then converges quickly. This neural-network-based iterative phase-detection method is tens of times faster than the commonly used stochastic parallel gradient descent (SPGD) approach with a single detector and random dither when both are tested with random phase perturbations.