Automated preprocessing methods are critically needed to process large, publicly available EEG databases, but the optimal approach remains unknown because we lack data quality metrics with which to compare them. Here, we designed a simple yet robust EEG data quality metric: the percentage of channels showing a significant difference between two experimental conditions within a 100 ms post-stimulus window. Because of volume conduction in EEG, in the absence of noise, most brain event-related potentials (ERPs) should be visible on every channel. Using three publicly available collections of EEG data, we showed that, with the exceptions of high-pass filtering and bad-channel interpolation, automated data corrections either had no effect on or significantly decreased the percentage of significant channels. Referencing and advanced baseline-removal methods were significantly detrimental to performance. Rejecting bad data segments or trials could not compensate for the resulting loss in statistical power. Automated Independent Component Analysis (ICA) rejection of eye and muscle components failed to reliably increase performance. We compared optimized preprocessing pipelines that maximize ERP significance across the leading open-source EEG software packages: EEGLAB, FieldTrip, MNE, and Brainstorm. Only one pipeline performed significantly better than simply high-pass filtering the data.
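The metric described above can be illustrated with a minimal sketch. This is not the authors' implementation; the function name, the independent-samples t-test, and the alpha = 0.05 threshold are assumptions made for illustration. The idea is to average each channel's amplitude over the post-stimulus window, test the two conditions per channel, and report the fraction of channels passing the significance threshold.

```python
# Sketch (not the authors' exact statistics) of a "percentage of
# significant channels" quality metric: for each channel, compare the
# mean amplitude in a 100 ms post-stimulus window between two
# conditions and count channels with p < alpha.
import numpy as np
from scipy.stats import ttest_ind

def percent_significant_channels(cond_a, cond_b, times,
                                 window=(0.0, 0.1), alpha=0.05):
    """cond_a, cond_b: arrays of shape (trials, channels, samples).
    times: 1-D array of sample times in seconds.
    Returns the percentage of channels whose windowed mean amplitude
    differs significantly between conditions (independent t-test).
    """
    mask = (times >= window[0]) & (times < window[1])
    # Average amplitude over the post-stimulus window -> (trials, channels)
    a = cond_a[:, :, mask].mean(axis=2)
    b = cond_b[:, :, mask].mean(axis=2)
    _, p = ttest_ind(a, b, axis=0)
    return 100.0 * np.mean(p < alpha)

# Synthetic example: 64 channels, 40 trials per condition, and a broad
# evoked difference in condition A that (as with volume conduction)
# appears on every channel.
rng = np.random.default_rng(0)
times = np.linspace(-0.2, 0.8, 500)
evoked = np.where((times >= 0.0) & (times < 0.1), 2.0, 0.0)
cond_a = rng.normal(0, 1, (40, 64, 500)) + evoked  # broadcasts over trials/channels
cond_b = rng.normal(0, 1, (40, 64, 500))
print(percent_significant_channels(cond_a, cond_b, times))
```

With a genuine, spatially broad effect nearly every channel reaches significance, whereas comparing a condition against itself yields almost none; preprocessing steps that reduce the metric on real data are therefore reducing statistical power.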