Robots have played a crucial role in industry for decades, streamlining manufacturing processes and performing tasks with precision. However, with the availability of more affordable components and the rapid advancement of AI techniques, there is growing interest in integrating robots into everyday settings. The challenge is that the decades-old algorithms used in industrial robots often assume highly structured and controlled environments. These algorithms struggle to adapt to the unstructured environments common in everyday life, where obstacles and uncertainties abound. As a result, newer, more adaptable algorithms are needed to make robots a practical and safe part of our daily lives.
Motion planning, a long-standing challenge in robotics, is the task of determining a safe and efficient path for a robot to navigate an environment while avoiding obstacles. This problem is central to a wide range of robotics applications, from autonomous vehicles and healthcare robots to household robotic assistants. While existing methods have proven effective at generating trajectories and paths in pre-defined, structured environments, they face significant challenges with robots that have higher degrees of freedom and with changing environments. In these cases, the generated trajectories often require further refinement before a robot can execute them successfully. In response to these limitations, there has been a recent surge of interest in learning-based methods, which can address the shortcomings of traditional planners and improve the generalizability and efficiency of robot motion planning algorithms.
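Concretely, the problem admits the standard formulation from the motion planning literature; the notation below is the conventional one and not necessarily the notation adopted later in this thesis. Given a configuration space $Q$, an obstacle region $Q_{\mathrm{obs}} \subset Q$, and the free space $Q_{\mathrm{free}} = Q \setminus Q_{\mathrm{obs}}$, the task is to
\[
\text{find a continuous path } \tau : [0,1] \to Q_{\mathrm{free}} \quad \text{such that} \quad \tau(0) = q_{\mathrm{start}}, \; \tau(1) = q_{\mathrm{goal}},
\]
for given start and goal configurations $q_{\mathrm{start}}, q_{\mathrm{goal}} \in Q_{\mathrm{free}}$, typically while minimizing a cost such as path length.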
This thesis introduces efficient planning algorithms that adapt to a wide variety of environments. Leveraging techniques from large language models, specifically the Transformer architecture, we demonstrate how our planning algorithms can rapidly generate efficient trajectories while generalizing across diverse environmental contexts. Notably, we show that our learned models can tackle complex motion planning challenges, such as constrained planning, without the need for additional training data. By introducing a novel constraint function that encodes the variability inherent in planning environments, we also lay the foundation for capturing and addressing different sources of uncertainty in the planning process. Looking ahead, we anticipate that our approaches will be both readily accessible and broadly beneficial, facilitating the transfer of learned motion planners into a wide range of robot-environment interaction scenarios and opening new possibilities for practical applications in robotics.
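To make the connection to language models concrete, the sketch below shows one common way a Transformer can serve as a motion planner: treating a trajectory as a sequence of waypoints and predicting the next waypoint autoregressively, exactly as a language model predicts the next token. This is a generic illustration under stated assumptions; the class `WaypointTransformer`, its hyperparameters, and the rollout loop are hypothetical and do not reproduce the architecture developed in this thesis.

```python
# Minimal sketch: a Transformer that autoregressively predicts the next
# waypoint of a robot trajectory. Purely illustrative; the class name and
# hyperparameters are hypothetical, not the architecture used in this thesis.
import torch
import torch.nn as nn

class WaypointTransformer(nn.Module):
    def __init__(self, dof: int = 7, d_model: int = 128, nhead: int = 4, nlayers: int = 3):
        super().__init__()
        self.embed = nn.Linear(dof, d_model)  # project joint configurations
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=nlayers)
        self.head = nn.Linear(d_model, dof)   # predict the next waypoint

    def forward(self, waypoints: torch.Tensor) -> torch.Tensor:
        # waypoints: (batch, seq_len, dof); the causal mask enforces
        # autoregressive prediction, as in language modeling.
        mask = nn.Transformer.generate_square_subsequent_mask(waypoints.size(1))
        return self.head(self.encoder(self.embed(waypoints), mask=mask))

# Roll out a trajectory one waypoint at a time from the start configuration.
model = WaypointTransformer()
traj = torch.zeros(1, 1, 7)                # (batch, seq, dof): start configuration
for _ in range(31):
    nxt = model(traj)[:, -1:, :]           # last output = predicted next waypoint
    traj = torch.cat([traj, nxt], dim=1)
print(traj.shape)                          # torch.Size([1, 32, 7])
```

A practical planner would additionally condition the model on an encoding of the environment and the goal configuration, include positional encodings, and verify the rolled-out waypoints for collisions before execution; those pieces are omitted here for brevity.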