Object-Oriented Visual Localization in Urban Mobile-Edge Environments
- Tsai, Bo
- Advisor(s): Lin, Kwei-Jay
Abstract
The advent of the Internet of Things (IoT) and advances in robotic technology have ushered in a new era in which intelligent services significantly benefit society. As robots evolve from executing simple tasks to providing complex, interactive services, the need for precise and reliable localization becomes paramount. In this context, the ubiquity of low-cost cameras and the maturity of object detection technologies have paved the way for advanced semantic visual localization solutions.
"OOPose," introduced in this thesis, exemplifies this advancement by providing an object-oriented pose estimation framework tailored for robots with small embedded devices. It leverages dense features from conventional object detection neural networks, striking a balance between pixel-matching accuracy and processing speed, thus enhancing the effectiveness and efficiency of robot localization.
Amid escalating demand for robots that support intelligent, interactive activities, equipping mobile IoT devices with high-quality cameras, combined with wireless connectivity, edge computing, and advanced computer vision algorithms, enables a novel approach to offloading mobile video processing. This thesis details the deployment of OOPose in a mobile-edge computing paradigm, emphasizing real-time performance and the feasibility of interactive applications.
The research encompasses the development and application of the OOPose framework, demonstrating its significant role in the mobile-edge computing ecosystem. This approach not only satisfies real-time performance requirements but also opens avenues for innovative robot localization applications, enabling interactive, intelligent services across diverse environments.