With increasing ethical and legal concerns about privacy in the era of big data, differential privacy (DP) has emerged as the de facto gold standard for disguising the membership of individuals with a quantifiable privacy guarantee. In DP, the theoretical privacy guarantee directly corresponds to the amount of noise and randomness that must be introduced into a DP mechanism. Therefore, to employ DP algorithms in the real world, it is crucial to develop a tight characterization of the privacy analysis. This thesis aims to bridge the gap between theory and DP deployment by refining the constants in privacy guarantees.

In the first part of the thesis, we focus on modern privacy accounting, which characterizes privacy degradation through fine-grained, mechanism-specific analysis and drives much of the recent success in DP deployments. We enhance modern privacy accounting by generalizing the privacy loss distribution (PLD) formalism to handle adaptive composition and amplification by sampling, two fundamental components in the design of DP algorithms. Additionally, we derive nearly optimal bounds characterizing privacy amplification by sampling in the Rényi DP framework, which translate directly into practical improvements in private deep learning.

In the second part of the thesis, we address the mathematical slack in privacy analysis by incorporating data-adaptive analysis, enabling less noise injection when the input dataset is deemed “nice”.
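For concreteness, the display below recalls the standard textbook definitions underlying the quantities discussed above: $(\varepsilon,\delta)$-DP, Rényi DP, and the classical amplification-by-sampling bound under Poisson subsampling with rate $q$. These are included only as background under standard assumptions (a randomized mechanism $\mathcal{M}$ and a fixed neighboring relation on datasets); they are not the refined bounds developed in this thesis.
\begin{align*}
&(\varepsilon,\delta)\text{-DP:} && \Pr[\mathcal{M}(D)\in S] \;\le\; e^{\varepsilon}\,\Pr[\mathcal{M}(D')\in S] + \delta \quad \text{for all neighboring } D, D' \text{ and all measurable } S,\\
&(\alpha,\varepsilon)\text{-R\'enyi DP:} && D_{\alpha}\!\left(\mathcal{M}(D)\,\middle\|\,\mathcal{M}(D')\right) \;\le\; \varepsilon \quad \text{for all neighboring } D, D',\\
&\text{Amplification by sampling:} && \text{if } \mathcal{M} \text{ is } (\varepsilon,\delta)\text{-DP, then } \mathcal{M} \text{ applied to a Poisson subsample with rate } q\\
& && \text{satisfies } \bigl(\log\!\left(1 + q\,(e^{\varepsilon}-1)\right),\; q\delta\bigr)\text{-DP}.
\end{align*}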