An accurate mental model of the partner's behavior is fundamental for efficient cooperation. Theory of mind research demonstrates that humans are able to build such a model from repeated interactions with their human partners. However, it remains an open question whether humans are also willing and able to take the perspective of artificial agents and to build similar mental models of their behavior. We developed a repeated cooperative task in which participants are repeatedly asked to predict an artificial agent's actions, allowing us to investigate the process that guides the formation of a specific partner model. We found that humans learn to anticipate the artificial partner's behavior if it is goal-directed. Participants' inability to explicitly explain the partner's behavior suggests that this learning is implicit. The role of the acquisition of task knowledge in modeling the other agent's behavior is discussed.