We formulate a domain-independent language for representing stochastic tasks and show how this representation can be synthesized from a few training observations. In one experiment, we study how a physical robot may learn to fold clothes from a small number of visual observations, in contrast to big-data techniques. Under the same framework, we also show how a virtual chatbot may learn dialogue policies from a few example transcripts, resulting in an interpretable dialogue model that outperforms current statistical techniques. Central to both examples is the concept of utility: why it is essential for generalizability, and how it can be learned from small data.
To allow widespread adoption of consumer robotics, robots must be able to adapt to their environment by learning new skills and communicating with humans. Each chapter presents a contribution toward this goal. Chapter One covers a stochastic And-Or knowledge representation framework for robotic manipulation. Chapter Two expands this framework to learn robustly from perception. Chapter Three unifies perception with natural language for joint real-time processing of information. We have tested the generalizability and faithfulness of our robotic knowledge acquisition and inference pipeline, and we present a proof of concept in each of the three chapters.