Accounts of human and machine concept learning face a fundamental challenge. Some approaches, notably deep learning, frequently achieve human-level performance on specific tasks but lack consistently human-like solutions, i.e., solutions that are generalizable, composable, and explainable. Others, including classic symbolic accounts, produce human-like hypotheses but scale poorly. We present a model of learning that is both human-level and human-like. It represents concepts as program-like expressions formed by applying a series of higher-order inferences that iteratively revise preexisting concepts into novel target concepts. Learning seeks the best combination of revisions under a Bayesian score. This model predicts learning behavior in 392 humans across 100 computationally sophisticated concepts more accurately than alternative models (Enumerate, Metagol, RobustFill, Fleet) while using three orders of magnitude less computation. This work shows how humans plausibly construct sophisticated algorithmic representations, a necessity for compelling human-like artificial intelligence.
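To make the learning scheme concrete, the following is a minimal toy sketch, not the paper's actual implementation: concepts are functions, a "revision" is a higher-order operation that transforms an existing concept, and learning exhaustively scores revision sequences under a Bayesian objective combining a prior that favors shorter sequences with a likelihood measuring fit to observed examples. All names (`REVISIONS`, `best_revision_sequence`, the specific prior and noise model) are illustrative assumptions.

```python
import itertools
import math

def compose_with(g):
    """Revision: post-compose the current concept with g (a higher-order inference)."""
    return lambda f: (lambda x: g(f(x)))

# Illustrative revision library (assumed, not from the paper).
REVISIONS = {
    "add1": compose_with(lambda y: y + 1),
    "double": compose_with(lambda y: y * 2),
    "negate": compose_with(lambda y: -y),
}

def log_prior(seq):
    # Penalize longer revision sequences: each revision costs log(|REVISIONS| + 1).
    return -len(seq) * math.log(len(REVISIONS) + 1)

def log_likelihood(concept, examples, noise=1e-3):
    # Near-deterministic likelihood: small probability of a mismatched output.
    total = 0.0
    for x, y in examples:
        total += math.log(1 - noise) if concept(x) == y else math.log(noise)
    return total

def best_revision_sequence(base, examples, max_len=3):
    """Score every revision sequence up to max_len; return the best one."""
    best, best_score = (), float("-inf")
    for n in range(max_len + 1):
        for seq in itertools.product(REVISIONS, repeat=n):
            concept = base
            for name in seq:
                concept = REVISIONS[name](concept)
            score = log_prior(seq) + log_likelihood(concept, examples)
            if score > best_score:
                best, best_score = seq, score
    return best, best_score

# Target concept: x -> 2x + 1, learned by revising the identity concept.
examples = [(0, 1), (1, 3), (2, 5)]
seq, score = best_revision_sequence(lambda x: x, examples)
# seq == ("double", "add1"): double the input, then add one.
```

In this toy version the search is brute force; the point is only the shape of the objective, prior plus likelihood over compositions of revisions, not the efficient search strategy the model itself uses.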