Recent advances in learning-based perception have led to dramatic improvements in the performance of robotic systems such as autonomous vehicles and surgical robots. These perception systems, however, are hard to analyze, and errors in them can propagate and cause catastrophic failures. In this paper, we consider the problem of synthesizing safe and robust controllers for robotic systems that rely on complex perception modules for feedback. We propose a counterexample-guided synthesis framework that iteratively builds simple surrogate models of the complex perception module and uses them to find safe control policies. The framework employs a falsifier to find counterexamples, i.e., traces of the system that violate a safety property, and extracts from them information that enables efficient modeling of the perception module and its errors. These surrogate models are then used to synthesize controllers that are robust to perception errors. If the resulting policy is still unsafe, we gather new counterexamples and repeat; the process eventually yields a controller that keeps the system safe even in the presence of perception failures. We demonstrate our framework on two simulated scenarios, lane keeping and automatic braking, and show that it produces safe controllers, along with a simple surrogate model of the deep neural network-based perception system that provides meaningful insight into its operation.
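To make the iterative loop concrete, the following is a minimal, self-contained sketch of the counterexample-guided structure on a toy one-dimensional automatic-braking scenario. The dynamics, noise model, and every name here (`perception`, `synthesize_controller`, `falsify`, `fit_surrogate`) are illustrative assumptions introduced for this sketch, not the paper's actual models or implementation.

```python
import random

# Toy illustration of the counterexample-guided loop: falsify the
# closed-loop system, refine a simple surrogate of the perception
# error from counterexample traces, and re-synthesize a controller
# robust to that modeled error. All quantities are hypothetical.

BRAKE_DISTANCE = 10.0  # headway the car needs to stop safely
STEP = 5.0             # distance traveled between perception readings

def perception(true_dist):
    """Stand-in for a complex perception module: a noisy distance estimate."""
    return true_dist + random.gauss(0.0, 2.0)

def synthesize_controller(error_bound):
    """Controller robust to the modeled perception error: brake as soon as
    the estimate could hide a true distance at the braking limit."""
    threshold = BRAKE_DISTANCE + error_bound
    return lambda est: est <= threshold  # True => brake now

def simulate(controller, start_dist=50.0):
    """Roll out the closed loop; a trace is a list of (true, estimated)
    distance pairs. The run is safe iff braking happens with enough headway."""
    d, trace = start_dist, []
    while d > 0:
        est = perception(d)
        trace.append((d, est))
        if controller(est):
            return trace, d >= BRAKE_DISTANCE
        d -= STEP
    return trace, False  # never braked: collision

def falsify(controller, trials=500):
    """Falsifier stand-in: random search for a safety-violating trace."""
    for _ in range(trials):
        trace, safe = simulate(controller)
        if not safe:
            return trace
    return None

def fit_surrogate(counterexamples):
    """Simple surrogate of the perception error: the worst overestimation
    observed across all counterexample traces gathered so far."""
    errors = (est - d for trace in counterexamples for d, est in trace)
    return max(errors, default=0.0)

counterexamples, error_bound = [], 0.0
for iteration in range(10):
    controller = synthesize_controller(error_bound)
    cex = falsify(controller)
    if cex is None:  # no violation found: candidate controller looks safe
        print(f"iter {iteration}: safe with error bound {error_bound:.2f}")
        break
    counterexamples.append(cex)
    error_bound = fit_surrogate(counterexamples)
```

In this sketch the surrogate is just a worst-case error bound, which mirrors the abstract's idea that a simple model of the perception module and its errors, refined only where counterexamples reveal failures, can suffice to synthesize a robust controller.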