Artificial agents now perform on par with or better than experts on several challenging decision-making tasks. People, however, remain reluctant to allow algorithms to make decisions on their behalf, and legal constraints may prevent it altogether. How can we harness artificial intelligence while maintaining trustworthiness and accountability? We propose confirmation trees, a decision-tree strategy for hybrid intelligence that can
improve accuracy while maintaining human control. First, decisions are elicited from a human expert and an artificial agent. If they agree, that decision is adopted. If they disagree, a second human expert is consulted to break the tie. Hence, a human expert always approves the final decision. Our approach outperforms either human experts or algorithms alone at diagnosing malignant skin lesions. Crucially, it performs better than a strong human baseline while using substantially fewer human ratings. Our results show the potential of this approach for medical diagnostics and beyond.
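To make the decision rule concrete, the sketch below illustrates the confirmation-tree procedure described above for a binary task such as benign-versus-malignant lesion classification. All names (`confirmation_tree`, the rater callables) are hypothetical illustrations, not the paper's implementation; it simply encodes the agree-then-adopt, disagree-then-tie-break logic.

```python
from typing import Callable, TypeVar

Decision = TypeVar("Decision")


def confirmation_tree(
    first_expert: Callable[[], Decision],
    artificial_agent: Callable[[], Decision],
    second_expert: Callable[[], Decision],
) -> Decision:
    """Illustrative confirmation-tree rule (hypothetical sketch).

    1. Elicit a decision from one human expert and the artificial agent.
    2. If they agree, adopt that decision.
    3. If they disagree, a second human expert breaks the tie; in the binary
       case this coincides with a majority vote over the three raters, and a
       human expert always approves the final decision.
    """
    human_decision = first_expert()
    agent_decision = artificial_agent()
    if human_decision == agent_decision:
        # Consensus: adopt the shared decision, no second rating needed.
        return human_decision
    # Disagreement: the tie-breaking human expert's decision is final.
    return second_expert()


if __name__ == "__main__":
    # Toy example: the agent and the first expert disagree on a lesion.
    final = confirmation_tree(
        first_expert=lambda: "benign",
        artificial_agent=lambda: "malignant",
        second_expert=lambda: "malignant",
    )
    print(final)  # -> "malignant" (second expert broke the tie)
```

Note that a second human rating is only requested when the first expert and the agent disagree, which is why the strategy can use substantially fewer human ratings than an all-human panel.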