This paper examines the role and efficiency of non-convex loss functions for binary classification problems. In particular, we investigate how to design a simple and effective boosting algorithm that is robust to outliers in the data.
data. The analysis of the role of a particular non-convex loss for prediction
accuracy varies depending on the diminishing tail properties of the gradient of
the loss -- the ability of the loss to efficiently adapt to the outlying data,
the local convex properties of the loss and the proportion of the contaminated
data. In order to use these properties efficiently, we propose a new family of
non-convex losses named $\gamma$-robust losses. Moreover, we present a new
boosting framework, {\it Arch Boost}, designed for augmenting the existing work
such that its corresponding classification algorithm is significantly more
adaptable to the unknown data contamination. Along with the Arch Boosting
framework, the non-convex losses lead to the new class of boosting algorithms,
named adaptive, robust, boosting (ARB). Furthermore, we present theoretical
examples that demonstrate the robustness properties of the proposed algorithms.
In particular, we develop a new breakdown point analysis and a new influence
function analysis that demonstrate gains in robustness. Moreover, we present
new theoretical results, based only on local curvatures, which may be used to
establish statistical and optimization properties of the proposed Arch Boost algorithms with highly non-convex loss functions. Extensive numerical experiments illustrate these theoretical properties and reveal advantages over existing boosting methods when the data contain a number of outliers.