Collaboration and multi-user interaction are key aspects of many software tasks. In traditional desktop interfaces, such activities are well supported through built-in collaboration features or general-purpose techniques such as screen and video sharing. Collaboration and guidance may also be required in Mixed Reality environments, where users carry out spatial actions in three-dimensional space. However, not all users may have access to the same Mixed Reality interface, and they may not share the same information, the same visual representation, or the same interaction affordances. Such asymmetries hinder communication and collaboration between users. To address these issues, we introduce Interactive Cross-Dimensional Media, in which the visual representation of information streams can be switched between 2D and 3D. Representations can be chosen automatically based on context or through associated interaction techniques that give users control over exploring spatial, temporal, and dimensional levels of detail. This helps users understand and interpret information and interactions across different dimensions, interfaces, and spaces. We have deployed these techniques in four contexts: (1) Mixed Reality telepresence for remote instruction of physical tasks, (2) asynchronous video-based instruction of virtual tasks, (3) live asymmetric guidance of virtual tasks, and (4) live interactive spectating of virtual tasks. Through user studies of these systems, we show that Mixed Reality environments that provide interactive cross-dimensional media interfaces improve performance and user experience in multi-user and collaborative settings.