Machine learning (ML) is now commonplace, powering data-driven applications
in various organizations. In contrast to how ML is traditionally perceived in
research, production ML pipelines are complex, with many interlocking analytical
components beyond training, whose sub-parts are often run multiple times on
overlapping subsets of data. However, there is a lack of quantitative evidence
regarding the lifespan, architecture, frequency, and complexity of these
pipelines to understand how data management research can be used to make them
more efficient, effective, robust, and reproducible. To that end, we analyze
the provenance graphs of 3000 production ML pipelines at Google, comprising
over 450,000 models trained, spanning a period of over four months, in an
effort to understand the complexity and challenges underlying production ML.
Our analysis reveals the characteristics, components, and topologies of typical
industry-strength ML pipelines at various granularities. Along the way, we
introduce a specialized data model for representing and reasoning about
repeatedly run components in these ML pipelines, which we call model graphlets.
We identify several rich opportunities for optimization that leverage
traditional data management ideas. We show how targeting even one of these
opportunities, i.e., identifying and pruning computation that does not
translate to model deployment, can reduce wasted computation cost by 50%
without compromising the model deployment cadence.