This dissertation examines the current state of automated indoor mapping and modeling using point cloud data produced by close-range remote sensing systems. The first part surveys reality capture techniques that convert the physical form of indoor spaces into point clouds of millions of measured points, each with an (x, y, z) coordinate. The second part examines methods for extracting geometries from these point clouds, which are often complicated by noise and voids, and converting them into 3D geometric models. The final part examines techniques for merging the coordinate reference systems of these indoor maps and models with those of the outdoor world, resulting in a seamless representation of space. The lessons learned in this study reveal that theories, techniques, and practices in indoor mapping remain relatively elementary compared to those for the outdoors, yet they also present significant opportunities for future research, propelled by emerging developments in remote sensing and a growing demand for indoor maps.