Liquefaction triggering is typically predicted using fully empirical and/or semi-empirical models. Hence, such models rely heavily on available case history data documenting liquefaction (or the lack thereof). These predictive models are based on a variety of factors describing the demand (i.e., the cyclic stress ratio, CSR, in existing legacy models) and the capacity (i.e., the cyclic resistance ratio, CRR). However, the degree to which these factors truly affect model performance is unknown. To explore this aspect and quantitatively rank the importance of liquefaction model input parameters, we leverage a Random Forest Machine Learning (ML) approach using two methods: (1) a feature importance metric based on the Gini impurity index, and (2) a SHapley Additive exPlanations (SHAP)-based approach. Both approaches were applied to the input factors typically used in legacy liquefaction triggering models based on cone penetration test (CPT) data. The analyses were performed using all reviewed (i.e., fully vetted) data in the Next Generation Liquefaction (NGL) database. Our analysis then separately explores the influence of seven input parameters on the resulting models. We show that the most important input parameters are: (1) the peak ground acceleration, (2) the soil behavior type index, and (3) the earthquake magnitude (which serves as a proxy for duration in such models). The input parameters with the lowest importance are the total and effective vertical stresses. A limitation of this analysis is that the ML model used does not allow for extrapolation beyond the range of the data. As a result, for input parameters with narrow data distributions (i.e., a somewhat limited parameter space), a low ranking may reflect the limited range of available values rather than genuinely low importance. This limitation likely accounts for the low importance attributed to the stress-related input parameters, since legacy case histories are generally associated with shallow (<10 m) depths.
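To illustrate the two ranking approaches described above, the sketch below fits a Random Forest classifier and extracts (1) Gini-impurity-based feature importances and (2) SHAP-based importances. It is not the authors' implementation: the file name "ngl_cases.csv", the binary label "liquefied", and the seven feature column names are illustrative assumptions standing in for the NGL-derived inputs.

```python
# Minimal sketch of the two feature-importance approaches (Gini and SHAP)
# applied to CPT-based liquefaction inputs. Column names and file are hypothetical.
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Seven illustrative input parameters (names assumed, not from the source)
features = ["pga", "magnitude", "ic", "qc1ncs", "depth",
            "total_stress", "effective_stress"]

df = pd.read_csv("ngl_cases.csv")          # hypothetical flat file of case histories
X, y = df[features], df["liquefied"]       # binary liquefaction label assumed

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_train, y_train)

# (1) Gini-impurity-based importance, built into scikit-learn
gini_importance = pd.Series(rf.feature_importances_, index=features)
print(gini_importance.sort_values(ascending=False))

# (2) SHAP-based importance via a tree explainer
explainer = shap.TreeExplainer(rf)
shap_values = explainer.shap_values(X_test)
# Some shap versions return one array per class for classifiers;
# keep the positive ("liquefied") class in that case.
sv = shap_values[1] if isinstance(shap_values, list) else shap_values
shap.summary_plot(sv, X_test, feature_names=features)
```

Under these assumptions, the printed Gini ranking and the mean absolute SHAP values shown in the summary plot can be compared directly to check whether both methods place the same parameters (e.g., peak ground acceleration) at the top.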