Augmented reality has incredible potential to change the way we interact with information. Smart mobile devices keep us constantly connected to an ever-expanding source of information: the internet. Augmented reality devices offer a way to organize this information and ensure that relevant information is presented in the location where it is most useful. Similarly, the spatial organization of information alongside physical objects enables new forms of creativity and artistry. These devices use computer vision algorithms to understand the structure of objects in the world as well as the device's location relative to those objects. This understanding is used to display virtual objects so that they appear present in the physical world. While these algorithms work well in many situations, they still have many failure cases that prevent the ubiquitous adoption of these devices.
This thesis discusses methods for improving the localization and mapping capabilities of augmented reality devices by exploiting the structure present in many man-made environments. Some open problems in augmented reality are first discussed in the context of the open-source package OpenARK. The remainder of the thesis presents methods for improving localization, mapping, and 3D reconstruction. A method is presented for incorporating planar structures from a Time-of-Flight depth sensor into a state-of-the-art visual-inertial odometry algorithm. This algorithm is demonstrated to improve localization accuracy in low-light and low-texture environments while maintaining real-time performance on a mobile device with limited Time-of-Flight sensing range and limited compute resources. A method is also presented for real-time matching of image wireframes, which represent the junctions, lines, and intersection relationships that form the structure of a scene. This algorithm exploits these relationships to match wireframes more reliably than standard feature matching, and is further demonstrated to exploit the additional constraints introduced by multiple cameras.