Learning-based methods are promising for tackling the inherent nonlinearity and model uncertainty of path-tracking control, but the lack of safety guarantees for neural controllers restricts their practical use in self-driving vehicles. We propose methods for training and certifying barrier functions, themselves represented as neural networks, to ensure safety properties of learning-based neural controllers for self-driving under realistic nonlinear dynamics. We describe how to identify safe and unsafe regions of the state space and train neural barrier functions by minimizing their violation of the barrier conditions. We then show how to leverage recent advances in neural network robustness analysis to bound the Lie derivatives of the barrier functions and certify the barrier conditions. Although understanding the vehicle dynamics is important for designing the training and certification procedures, the proposed algorithms are sampling-based and do not assume access to analytic forms of the dynamics; they therefore apply generally to nonlinear vehicle dynamics with model uncertainty and partial observability. We demonstrate the effectiveness of the methods by quantitatively certifying the safety of neural controllers in simulation environments ranging from simple kinematic models to TORCS, a high-fidelity vehicle dynamics environment used as a black-box simulator.
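To make the training objective concrete, the sketch below shows the shape of a sampling-based barrier-violation loss, assuming the standard barrier conditions (the barrier is nonnegative on sampled safe states, nonpositive on sampled unsafe states, and its Lie derivative along the closed-loop dynamics satisfies a decrease condition). Everything here is illustrative and not the paper's actual implementation: `step` is a hypothetical stand-in for the black-box simulator, `controller` for the learned controller, and `B` is a hand-written quadratic candidate rather than a trained neural network; the Lie derivative is estimated by a finite difference through the simulator, consistent with not assuming an analytic model.

```python
import numpy as np

def step(x, u, dt=0.05):
    """Hypothetical black-box simulator stand-in: unicycle path-tracking.

    x = [lateral error e, heading error th]; u = steering rate.
    """
    e, th = x
    v = 1.0  # assumed constant forward speed
    return np.array([e + dt * v * np.sin(th), th + dt * u])

def controller(x):
    """Hypothetical stand-in for the learned neural controller."""
    e, th = x
    return -2.0 * e - 3.0 * th  # simple proportional steering

def B(x, c=1.0):
    """Hand-written candidate barrier (in the paper, a neural network).

    Intended sign convention: B >= 0 on safe states, B <= 0 on unsafe ones.
    """
    e, th = x
    return c - e**2 - 0.5 * th**2

def barrier_violation_loss(safe_xs, unsafe_xs, boundary_xs, alpha=1.0, dt=0.05):
    """Empirical violation of the three barrier conditions on sampled states."""
    relu = lambda z: np.maximum(z, 0.0)
    # Condition 1: B(x) >= 0 on sampled safe states.
    l_safe = np.mean([relu(-B(x)) for x in safe_xs])
    # Condition 2: B(x) <= 0 on sampled unsafe states.
    l_unsafe = np.mean([relu(B(x)) for x in unsafe_xs])
    # Condition 3: Bdot(x) + alpha * B(x) >= 0, with Bdot estimated by a
    # finite difference through the black-box simulator under the controller.
    l_lie = np.mean([
        relu(-((B(step(x, controller(x), dt)) - B(x)) / dt + alpha * B(x)))
        for x in boundary_xs
    ])
    return l_safe + l_unsafe + l_lie
```

In the paper's setting this scalar loss would be minimized over the parameters of a neural `B` (e.g., by stochastic gradient descent over resampled state batches); a loss of zero on the samples is only a necessary step, and the certification stage then bounds the conditions, including the Lie derivative, over whole regions rather than sample points.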