Recent degradation studies have highlighted the importance of considering cloud cover when calculating degradation rates, finding more reliable values when the data are restricted to clear sky periods. Several automated methods of determining clear sky periods have been developed previously, but parameterizing and testing the models has been difficult. In this paper, we use clear sky classifications determined from satellite data to develop an algorithm that identifies clear sky periods using only measured irradiance values and modeled clear sky irradiance as inputs. This method is tested on global horizontal irradiance (GHI) data from ground collectors at six sites across the United States and compared against independent satellite-based classifications. First, 30 separate models were optimized, one for each individual site at each GHI data interval of 1, 5, 10, 15, and 30 min (sampled on the first minute of the interval). These models had an average F0.5 score of 0.949 ± 0.035 on a holdout test set. Next, optimizations were performed by aggregating data from different locations at the same interval, yielding one model per data interval. These optimizations yielded an average F0.5 score of 0.946 ± 0.037. A final, 'universal' optimization trained on data from all sites at all intervals achieved an F0.5 score of 0.943 ± 0.040. All of the optimizations improve on a prior, unoptimized clear sky detection algorithm, whose F0.5 scores average 0.903 ± 0.067. Our results indicate that a single algorithm can accurately classify clear sky periods across locations and data sampling intervals.
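
The sketch below is not the authors' implementation; it is a minimal illustration, under stated assumptions, of the evaluation workflow described above: a clear sky classification is derived from measured GHI and modeled clear sky GHI, then scored against independent reference labels with the F0.5 metric. The use of pvlib.clearsky.detect_clearsky as the (unoptimized) detection step, and the placeholder measured GHI and satellite-derived labels, are assumptions introduced only for illustration.

    # Minimal sketch: score a clear-sky classification against reference labels with F0.5.
    import pandas as pd
    import pvlib
    from sklearn.metrics import fbeta_score

    # Hypothetical 1-min time series for a single site (location chosen arbitrarily).
    times = pd.date_range("2019-06-01", periods=1440, freq="1min", tz="US/Arizona")
    site = pvlib.location.Location(32.2, -110.9, altitude=700)
    clear_sky_ghi = site.get_clearsky(times)["ghi"]   # modeled clear sky GHI

    # Placeholders: in practice measured_ghi comes from a ground collector and
    # reference_labels from the independent satellite-based classification.
    measured_ghi = clear_sky_ghi * 0.95
    reference_labels = pd.Series(True, index=times)

    # Classify clear periods from measured vs. modeled irradiance
    # (pvlib's detect_clearsky, used here as a stand-in detection algorithm).
    predicted = pvlib.clearsky.detect_clearsky(measured_ghi, clear_sky_ghi,
                                               times=times, window_length=10)

    # F0.5 weights precision more heavily than recall.
    score = fbeta_score(reference_labels, predicted, beta=0.5)
    print(f"F0.5 = {score:.3f}")

In practice, the free parameters of the detection step would be tuned to maximize this F0.5 score on training data, with the reported values computed on a holdout set.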