Social robotics has grown rapidly in areas related to companionship and assistance for older adults. Critically, everyday interactions with artificial agents often involve spoken language in the context of a shared visual environment. Language interfaces for these applications must therefore account for the distinctive nature of visually situated communication revealed by psycholinguistic studies. In traditional frameworks, "rational" speakers were thought to avoid redundancy, yet research on human-human communication shows that both younger and older speakers include redundant information (e.g., color adjectives) in descriptions to facilitate listeners' visual search. However, this "cooperative" use of redundant expressions hinges on beliefs about listeners' perception (e.g., the "pop-out" nature of human color processing). We explored the incidence and nature of younger and older speakers' redundant descriptions for a robot partner in different visual environments. Although both age groups produced redundant descriptions, there were important age differences in when these descriptions occurred and in the properties they encoded.