Coordinated hunting is widely observed across animal species, and reward sharing is often considered a major incentive for its success. However, the causal role that reward sharing plays remains unclear. To systematically examine the effects of reward sharing on animal coordinated hunting, we conduct a suite of modeling experiments using a state-of-the-art multi-agent reinforcement learning algorithm, training and evaluating the models on a task that simulates real-world collective hunting. We manipulate four evolutionarily important variables: reward distribution, hunting party size, the free-rider problem, and hunting difficulty. Our results indicate that individually rewarded predators outperform predators that share rewards, especially when hunting is difficult, the group size is large, and the action cost is high. Moreover, predators that share rewards suffer from the free-rider problem. We conclude that reward sharing is neither necessary nor sufficient for modeling animal coordinated hunting with reinforcement learning.
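The reward-distribution manipulation described above can be illustrated with a minimal sketch. This is not the paper's implementation; all names (`catchers`, `capture_reward`, `action_cost`) and the specific numbers are illustrative assumptions. The contrast is the key point: under shared rewards, every predator is paid on a successful capture, so free riders collect a payout while still incurring only the action cost; under individual rewards, only the predators that made the catch are paid.

```python
# Illustrative sketch (NOT the paper's code): per-step reward assignment
# under the two schemes contrasted in the experiments. Parameter names
# and values are assumptions for the sake of the example.

def assign_rewards(catchers, n_predators, capture_reward=1.0,
                   action_cost=0.1, shared=False):
    """Return per-predator rewards for one time step.

    catchers: set of predator indices that captured the prey this step.
    shared=True  -> the capture reward is split equally among ALL predators,
                    so non-contributors (free riders) are paid too.
    shared=False -> only the predators that made the catch are paid.
    Every predator pays the action cost in either scheme.
    """
    rewards = [-action_cost] * n_predators
    if catchers:
        if shared:
            split = capture_reward / n_predators
            for i in range(n_predators):
                rewards[i] += split
        else:
            for i in catchers:
                rewards[i] += capture_reward
    return rewards
```

For example, with three predators where only predators 0 and 1 catch the prey, the individual scheme pays the two catchers and charges the idle predator only its action cost, while the shared scheme pays all three equally, which is exactly the free-rider opportunity the abstract refers to.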