Given the global challenges of security in both the physical and
cyber worlds, security agencies must optimize the use of their
limited resources. To that end, many security agencies have
begun to use "security game" algorithms, which optimally plan
defender allocations, using models of adversary behavior that
have originated in behavioral game theory. To advance our
understanding of adversary behavior, this paper presents
results from a study involving an opportunistic crime security
game (OSG), where human participants play as opportunistic
adversaries against an algorithm that optimizes defender
allocations. In contrast with previous work, which often
assumes homogeneous adversarial behavior, our work
demonstrates that participants naturally cluster into
multiple distinct categories that share similar behaviors. We
capture the observed adversarial behaviors in a diverse set of
models drawn from different research traditions, namely
behavioral game theory and cognitive science, illustrating the
need for heterogeneity in adversarial models.