Learning of Task-specific Control Policies for Industrial Robots
- Author(s): Lin, Chung-Yen
- Advisor(s): Tomizuka, Masayoshi; et al.
Today's industrial robots are designed to execute versatile tasks, much like a human. When deployed on production lines, however, they only need to perform well on a narrowly defined task. A natural question is whether robots can tailor themselves to different working conditions. This dissertation focuses on developing learning and optimization algorithms that allow robots to achieve higher overall performance in a particular application. The difficulties arise from three facts: 1) most robots have no end-effector sensors, 2) pushing for higher precision and productivity may shorten the robot's service life, and 3) robot trajectories within a single application may vary. To address these issues, this dissertation proposes a probabilistic approach to optimizing robot models; the approach solves various parameter-learning problems for sensor-limited robots by Bayesian inference. Additionally, a trajectory optimization algorithm is introduced to minimize the life cost incurred along a robot path. The dissertation also presents policy learning methods that mimic the standard iterative learning controller over a group of robot motions. Experimental results on FANUC industrial manipulators show that the proposed methods effectively adjust the control policies for different tasks and enable the robots to outperform traditionally controlled ones. They also perform comparably to commercial solutions, with the advantage of not requiring an additional learning pass every time the trajectory changes. A number of subspace-learning-based Q-filters are also introduced to remove undesired effects during the learning process. All algorithms are designed to meet industrial requirements such as light computation, so they can be integrated into commercially available robots without special hardware or software.
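As context for the "standard iterative learning controller" that the policy learning methods mimic, the textbook ILC update adds a scaled copy of the previous iteration's tracking error to the feedforward input and then passes the result through a Q-filter that suppresses high-frequency content which would otherwise destabilize the iteration. The sketch below is a generic illustration of that update, not code from the dissertation; the learning gain and the first-order lowpass Q-filter coefficient are illustrative choices.

```python
def ilc_update(u, e, learning_gain=0.5, q_alpha=0.8):
    """One pass of a textbook iterative-learning-control update.

    u : feedforward input from the previous iteration (list of floats)
    e : tracking error measured on that iteration (list of floats)
    Both the learning gain and the Q-filter coefficient are
    illustrative values, not ones used in the dissertation.
    """
    # Learning step: add a scaled copy of last iteration's error.
    u_raw = [ui + learning_gain * ei for ui, ei in zip(u, e)]
    # Q-filter: a simple first-order lowpass along the time axis,
    # standing in for the (subspace-learning-based) filters that
    # remove undesired high-frequency learning effects.
    u_next = [u_raw[0]]
    for k in range(1, len(u_raw)):
        u_next.append(q_alpha * u_next[-1] + (1 - q_alpha) * u_raw[k])
    return u_next


# Toy usage: with an identity "plant" (output equals input), repeating
# the update drives the tracking error toward zero over iterations.
reference = [1.0] * 50
u = [0.0] * 50
for _ in range(10):
    error = [r - y for r, y in zip(reference, u)]  # e = r - y, y = u
    u = ilc_update(u, error)
```

The key limitation this dissertation targets is visible here: the update is tied to one trajectory, so a commercial ILC must relearn whenever the reference changes, whereas the proposed policy learning methods avoid that extra learning action.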