In many real-world settings, it is vital for agents to learn to communicate and cooperate. Various cooperation models have been proposed to represent cooperative relations among agents, but the intensity of such relations has received little attention; in particular, how it varies with spatio-temporal information has not been studied in depth. In this paper, we propose TWG-Q, a multi-agent reinforcement learning framework based on temporal dynamic weighted graph convolution. We design a weighted graph convolutional network to capture cooperative information among agents, and on top of it we introduce a temporal weight learning mechanism to characterize the intensity of cooperation. We further design a novel temporal convolutional network along the temporal dimension to extract effective features for multi-agent reinforcement learning. Extensive experiments show that our method significantly improves the performance of multi-agent reinforcement learning on the public benchmark of micromanagement tasks in StarCraft II.