Deep Representation Learning with Induced Structural Priors
- Xie, Saining
- Advisor(s): Tu, Zhuowen
Abstract
With the support of big data and big compute, deep learning has reshaped the landscape of research and applications in artificial intelligence. While traditional hand-crafted feature engineering is in many cases simplified, deep network architectures have become increasingly complex. A central question is whether we can distill a minimal set of structural priors that provides maximal flexibility and leads us to richer sets of structural primitives. Such structural priors make the learning process more effective, and may lay the foundation for the ultimate goal of building general intelligent systems.
This dissertation focuses on how carefully designed neural network architectures, guided by simple yet effective structural priors, can tackle real-world problems in computer vision and machine learning. In particular, it examines two structural priors that have proven useful and generalizable across many scenarios: the multi-scale prior, applied to edge detection, and the sparse-connectivity prior, implemented for generic visual recognition. The last part presents examples of learning meaningful structures directly from data rather than hard-wiring them, for instance by learning a convolutional pseudo-prior in the label space, or by adopting a dynamic self-attention mechanism.
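To make the sparse-connectivity prior concrete, the sketch below contrasts a dense 1x1 convolution with a grouped variant in plain NumPy. This is an illustrative toy, not the dissertation's implementation: the function names, shapes, and group count are assumptions chosen for clarity. Splitting the channels into independent groups enforces a block-sparse connectivity pattern between input and output channels, cutting the parameter count by the number of groups.

```python
import numpy as np

def conv1x1(x, w):
    # Dense 1x1 convolution: every output channel sees every input channel.
    # x: (C_in, H, W), w: (C_out, C_in) -> output: (C_out, H, W)
    return np.einsum("oc,chw->ohw", w, x)

def grouped_conv1x1(x, ws, groups):
    # Sparse-connectivity variant: channels are split into `groups` blocks,
    # and each block is transformed independently with its own small weight
    # matrix, then the results are concatenated along the channel axis.
    xs = np.split(x, groups, axis=0)
    return np.concatenate(
        [conv1x1(xg, wg) for xg, wg in zip(xs, ws)], axis=0
    )

rng = np.random.default_rng(0)
c_in, c_out, groups = 8, 8, 4
x = rng.standard_normal((c_in, 5, 5))

# Dense connectivity: c_out * c_in weights.
dense_params = c_out * c_in
# Grouped connectivity: `groups` blocks of (c_out/groups) x (c_in/groups).
ws = [rng.standard_normal((c_out // groups, c_in // groups))
      for _ in range(groups)]
grouped_params = sum(w.size for w in ws)

y = grouped_conv1x1(x, ws, groups)
```

The output shape matches the dense case, but the grouped version uses only `dense_params / groups` weights; stacking many such sparsely connected branches is the design idea behind grouped-convolution architectures.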