Purpose: Deformable registration problems are conventionally posed in a regularized optimization framework, in which the balance between fidelity and the prescribed regularization usually needs to be tuned for each case. Even then, a single weight controlling regularization strength may be insufficient to reflect spatially variant tissue properties and may limit registration performance. In this study, we proposed to incorporate a spatially variant deformation prior into the image registration framework using a statistical generative model.

Approach: A generator network is trained in an unsupervised setting to maximize the likelihood of observing the moving and fixed image pairs, using an alternating back-propagation approach. The trained model imposes constraints on the deformation and serves as an effective low-dimensional deformation parametrization. During registration, optimization is performed over this learned parametrization, eliminating the need for explicit regularization and weight tuning. The proposed method was tested against SimpleElastix, DIRNet, and VoxelMorph.

Results: Experiments with synthetic images and simulated CTs showed that our method yielded registration errors significantly lower than those of SimpleElastix and DIRNet. Experiments with cardiac magnetic resonance images showed that the method encouraged physically and physiologically feasible deformations. Evaluation with left-ventricle contours showed that our method achieved a Dice coefficient of 0.93 ± 0.03, a significant improvement over all SimpleElastix options, DIRNet, and VoxelMorph. The mean average surface distance was on the millimeter level, comparable to the best SimpleElastix setting. The average 3D registration time was 12.78 s, faster than the 24.70 s of SimpleElastix.

Conclusions: The learned implicit parametrization can be an efficacious alternative to the regularized B-spline model and is more flexible in admitting spatial heterogeneity.
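The registration scheme described in the Approach, optimizing a low-dimensional code through a fixed generator rather than regularizing a dense deformation directly, can be sketched in a toy 1-D setting. Everything below is an illustrative stand-in: the "generator" is a fixed sinusoidal basis, not the paper's trained network, and gradients over the code are taken by finite differences since the code is low-dimensional.

```python
import numpy as np

# Toy 1-D "images": the fixed image is a shifted copy of the moving image.
x = np.linspace(0.0, 1.0, 200)
moving = np.exp(-((x - 0.45) ** 2) / 0.005)
fixed = np.exp(-((x - 0.55) ** 2) / 0.005)

# Stand-in for a trained generator: a fixed linear map from a 4-D latent
# code z to a smooth dense displacement field (hypothetical; the paper
# trains a network for this with alternating back-propagation).
basis = np.stack([np.sin((k + 1) * np.pi * x) for k in range(4)])  # (4, 200)

def generate_displacement(z):
    return z @ basis  # dense displacement field over x

def loss(z):
    # Warp the moving image by the generated field, compare to fixed (SSD).
    warped = np.interp(x + generate_displacement(z), x, moving)
    return float(np.mean((warped - fixed) ** 2))

# Registration = descent over the low-dimensional code z only; smoothness
# comes from the parametrization itself, with no explicit regularizer.
z = np.zeros(4)
initial_loss = loss(z)
for _ in range(500):
    grad = np.zeros_like(z)
    eps = 1e-4
    for i in range(len(z)):
        dz = np.zeros_like(z)
        dz[i] = eps
        grad[i] = (loss(z + dz) - loss(z - dz)) / (2 * eps)
    z -= 0.02 * grad

print(initial_loss, loss(z))  # residual drops substantially after optimization
```

Because the search space is the span of a few smooth modes, every candidate deformation is smooth by construction, which is the sense in which the learned parametrization replaces an explicit regularization term and its weight tuning.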