Text-to-image artificial intelligence (AI) programs are popular public-facing tools that generate novel images based on user prompts. Because they are trained on Internet data, they may reflect societal biases, as has been shown for text-to-text large language models. We sought to investigate whether 3 common text-to-image AI systems recapitulated stereotypes held about surgeons and other health care professionals. All platforms queried were able to reproduce common aspects of the profession, including attire, equipment, and background settings, but there were differences between programs, most notably in the visible race and gender diversity of the generated figures. Thus, historical stereotypes of surgeons may be reinforced by the public's use of text-to-image AI systems, particularly those without procedures to regulate generated output. As AI systems become more ubiquitous, understanding the implications of their use in health care and for health care-adjacent purposes is critical to advocate for and preserve the core values and goals of our profession.