In applied sciences there is a tendency to rely on terminology that is either ill-defined or applied inconsistently across areas of research and application domains. Examples in information assurance include the terms resilience, robustness and survivability, where there exist subtle shades of meaning between researchers. These nuances can result in confusion and misinterpretation of goals and results, hampering communication and complicating collaboration. In this paper, we propose security-related definitions for these terms. Using this terminology, we argue that research in these areas must consider the functionality of the system holistically, beginning with a careful examination of what we actually want the system to do. We note that much of the published research focuses on a single aspect of a system, availability, as opposed to the system's ability to complete its function without disclosing confidential information or, to a lesser extent, with the correct output. Finally, we discuss ways in which researchers can explore resilience with respect to integrity, availability and confidentiality.