Deep reinforcement learning (Deep RL) methods can train artificial agents (AAs) to reach or exceed human-level performance. However, in multiagent contexts that require competitive behavior, or where the aim is to use AAs for human training, the qualitative behaviors AAs adopt may be just as important as their performance in ensuring representative training. This paper compares human behaviors and performance when competing against either a human expert or an AA opponent trained using Deep RL on a 2-dimensional version of Pong. Results show that participants were not sensitive to the movement differences between the human expert and the AA. Further, participants did not alter their behaviors, except to compensate for differences in the environmental states caused by the opponents. The paper concludes with a discussion of the potential impacts of AA training on human behavior with regard to representative design in the areas of skill development and team training.