Improving Wavefunction Efficiency by Tessellating Correlation Factors and Coupled State-Specific Optimization
Rearranging chemical bonds is chemistry. Simulating chemical reactions is expensive and complex, yet it is necessary for understanding the photochemical reactions that drive processes like light-harvesting. The many-body physics of the electrons in the bonds that participate in these processes becomes complicated and expensive to describe even for modestly sized molecules, and computationally affordable approximations can fail qualitatively. One approach to this problem is to devise compact, expressive wavefunction forms that are simple enough to be computed efficiently yet flexible enough to capture the subtleties of many-electron physics.
Due to their relaxed integrability conditions, position-space Quantum Monte Carlo methods permit the use of flexible wavefunction components like Jastrow correlation factors, which can exactly express wavefunction cusps that are otherwise difficult to describe. Because many existing factors used in these calculations focus primarily on short-range or general correlation effects, we aim to augment the library of real-space correlation factors by developing one designed to handle the strong electronic correlation of bond-breaking. The new factor accounts for correlations between the electron populations of different pockets of space using a set of indicator-like functions fashioned into a tessellation of Voronoi cells. These Voronoi cells can be constructed automatically around atomic coordinates or further tailored to the chemical system, as the underlying mathematics is flexible enough to allow finer intra-atomic subdivisions and curved cell interfaces. In simple test systems, this factor correctly cleaves chemical bonds when applied to a single-determinant reference, and the resulting wavefunction is competitive with highly accurate but expensive methods.
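The counting step at the heart of such a factor can be illustrated in a few lines. The sketch below (an illustration only, not the actual implementation) assigns each electron to the Voronoi cell of its nearest generating point and tallies the per-cell populations; a number-counting factor could then weight configurations through functions of these populations.

```python
import numpy as np

def voronoi_populations(electrons, centers):
    """Count electrons in the Voronoi cell of each generating center.

    electrons: (n_elec, 3) array of electron positions
    centers:   (n_cell, 3) array of cell generators (e.g. atomic coordinates)
    """
    # Distance from each electron to each center; the nearest center's
    # Voronoi cell contains the electron.
    d = np.linalg.norm(electrons[:, None, :] - centers[None, :, :], axis=-1)
    cell_index = np.argmin(d, axis=1)
    return np.bincount(cell_index, minlength=len(centers))

# A population-based Jastrow-style factor could then take a form like
# exp(sum_i f_i(n_i)) for per-cell functions f_i of the populations n_i.
```

In practice, smooth indicator-like functions would replace the hard nearest-center assignment so that the factor remains differentiable, but the hard-assignment version above shows the basic bookkeeping.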
Just as wavefunctions can be limited by their functional form, so too can they be limited by the definition of their reference point. Many excited state methods are based on a linear response formalism, in which excited states are generated in the language of perturbations applied to a fixed ground state. When the wavefunction changes qualitatively upon electronic excitation, as it does in charge-transfer states and core excitations, these methods can misjudge excitation energies with errors on the scale of several electron-volts. Optimizing molecular orbitals for individual excited states is a particularly efficient way to make the zeroth-order changes needed to capture these states, but such state-specific methods can suffer from optimization instabilities. One particular benefit of the state-specific approach is that the tailored orbitals naturally compress the wavefunction, but in certain cases the ground state can interfere when a minimal representation of the excited state is sought, ultimately causing variational collapse. We show that this collapse occurs in two different state-specific approaches, trace its origin to an inadequately modeled avoided crossing, and argue that orbital-CI coupling plays a key role in preventing it.
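The role of coupling near an avoided crossing can be seen in a minimal two-level toy model, not specific to any of the systems studied here: two diabatic states cross in energy, and it is the off-diagonal coupling that holds the adiabatic surfaces apart. A method that effectively drops this coupling sees the surfaces touch, and an excited-state optimization can then slide onto the lower branch.

```python
import numpy as np

def adiabatic_energies(x, coupling=0.1):
    """Eigenvalues of a 2x2 model Hamiltonian with diabatic crossing at x = 0."""
    h = np.array([[x, coupling],
                  [coupling, -x]])
    return np.linalg.eigvalsh(h)  # sorted (lower, upper)

for x in np.linspace(-1.0, 1.0, 5):
    lo, hi = adiabatic_energies(x)
    # The gap is 2*sqrt(x^2 + coupling^2), so the coupling keeps the
    # adiabatic surfaces separated by at least twice its magnitude.
    assert hi - lo >= 2 * 0.1 - 1e-12
```

With the coupling set to zero, the two eigenvalues become degenerate at x = 0, which is the toy-model analogue of the situation where missing orbital-CI coupling lets the excited-state solution collapse onto the ground state.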
Newer excited state methods have also begun to employ target functions, where functions of the energy such as the square-gradient magnitude are minimized in place of the energy itself, stabilizing state-specific optimization by transforming saddle points on the energy surface into minima on the target-function surface. While pilot implementations of square-gradient-based optimizers for the excited-state mean-field (ESMF) wavefunction obtain state-specific orbitals at low cost, they are still new and have yet to benefit from numerical accelerations, which limits their use. For instance, the quasi-Newton optimizer in the ESMF-GVP implementation uses no Hessian preconditioner; although its inclusion of orbital-CI coupling makes it stable against variational collapse, it is much slower than the occasionally unstable ESMF-SCF implementation. Using the exact Hessian and full Newton-Raphson optimization as a benchmark, we explore a variety of Hessian approximations and find that an approximate diagonal Hessian accelerates the ESMF-GVP square-gradient minimization to match the speed of the gradient-only ESMF-SCF at mean-field cost scaling while resisting variational collapse.
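The saddle-to-minimum transformation underlying square-gradient minimization can be demonstrated on a toy energy surface; the sketch below is a generic illustration under that idea, not the ESMF-GVP optimizer itself. The surface E(x, y) = x^2 - y^2 has a saddle point at the origin, so plain energy minimization runs away along y, while minimizing the squared gradient magnitude |dE|^2 converges cleanly to the saddle.

```python
import numpy as np

def energy_grad(p):
    # Analytic gradient of the toy surface E(x, y) = x^2 - y^2,
    # which has a saddle point at the origin.
    x, y = p
    return np.array([2 * x, -2 * y])

def square_gradient(p):
    g = energy_grad(p)
    return g @ g  # |dE|^2: saddle points of E become minima of this target

def sg_grad(p, h=1e-6):
    # Central finite-difference gradient of the square-gradient target.
    g = np.zeros_like(p)
    for i in range(len(p)):
        dp = np.zeros_like(p)
        dp[i] = h
        g[i] = (square_gradient(p + dp) - square_gradient(p - dp)) / (2 * h)
    return g

p = np.array([0.8, -0.5])
for _ in range(200):
    p = p - 0.05 * sg_grad(p)
# p converges toward the saddle point of E, a stationary state that plain
# energy minimization cannot reach.
```

A diagonal Hessian approximation in this picture amounts to rescaling each component of the descent step by an estimate of the target function's curvature along that coordinate, which is what restores quasi-Newton-like convergence rates at low cost.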