To develop computational agents that better communicate using their own emergent language, we endow the agents with an ability to focus their attention on particular concepts in the environment.
Humans often understand an object or scene as a composite of concepts, and those concepts are in turn mapped onto words.
We implement this intuition as cross-modal attention mechanisms in Speaker and Listener agents in a referential game and show that attention leads to more compositional and interpretable emergent language.
We also demonstrate how attention aids in understanding the learned communication protocol by investigating the attention weights associated with each message symbol and the alignment of attention weights between Speaker and Listener agents.
Overall, our results suggest that attention is a promising mechanism for developing more human-like emergent language.
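As a rough illustration of the cross-modal attention idea described above, the sketch below lets a query derived from a message symbol attend over a set of concept features. All names, dimensions, and the scaled dot-product scoring rule are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def cross_modal_attention(symbol_emb, concept_feats):
    """Attend over concept features conditioned on one message symbol.

    symbol_emb:    (d,)   embedding of a message symbol (the query).
    concept_feats: (n, d) features of n concepts in the scene (keys/values).
    Returns the attended context vector and the attention weights.
    """
    # Scaled dot-product scores between the symbol and each concept.
    scores = concept_feats @ symbol_emb / np.sqrt(symbol_emb.shape[0])
    # Numerically stable softmax over concepts.
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    # Context vector: attention-weighted sum of concept features.
    context = weights @ concept_feats
    return context, weights

rng = np.random.default_rng(0)
sym = rng.normal(size=8)        # hypothetical symbol embedding
feats = rng.normal(size=(4, 8)) # hypothetical features for 4 concepts
ctx, w = cross_modal_attention(sym, feats)
```

Inspecting `w` for each symbol is one way to probe which concepts a symbol refers to, in the spirit of the attention-weight analysis mentioned above.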