Research on cognition in people with impairments remains insufficient to account for cognitive diversity. This study addresses the issue by making the first attempt to examine synthetic speech perception in congenitally blind versus sighted participants. With the rising prevalence of modern speech technologies, studying synthetic speech perception across diverse groups has become indispensable. Although recent studies have revealed certain advantages of blind listeners over sighted ones, they addressed only natural speech perception. Using a speeded AX discrimination task in a controlled experiment, we tested how the two groups perceive signal distortions arising from differing qualities of synthetic speech generated with neural networks. Results show that blind participants discriminated significantly better, especially in the difficult condition involving small distortions. This suggests that blind participants were sensitive to deviations that remained imperceptible to sighted participants. Theoretical implications for speech perception and practical insights for text-to-speech technologies are discussed.