Learning with Shared Information for Image and Video Analysis

PhD Candidate Gaowen Liu

April 10, 2017

Time: April 10, 2017, h. 09:00 am
Location: Room Ofek, Polo scientifico e tecnologico “Fabio Ferrari”, Building Povo 1 - Povo (Trento)


Abstract of Dissertation

Image and video recognition is a fundamental and challenging problem in computer vision, and it has progressed tremendously in recent years. In realistic settings, however, a few classes come with abundant training data while many classes contain only a small number of examples. How to use the frequent classes to help learn the rare classes is therefore an open question. Learning with shared information is an emerging approach to this problem. Different components can be shared during concept modelling and the machine learning procedure, such as generic object parts, attributes, transformations, regularization parameters, and training examples. For example, attribute-based representations define a finite vocabulary that is common to all categories, with each category described by a subset of the attributes; sharing common attributes across multiple classes therefore benefits the final recognition system.
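As a rough illustration of the attribute-sharing idea (not taken from the thesis; the attribute names, categories, and signature values below are all hypothetical), each category can be encoded as a binary signature over a shared attribute vocabulary, and a new image can be classified by matching its predicted attributes against every signature:

```python
# Minimal sketch of attribute-based recognition.
# A shared attribute vocabulary is common to all categories; each category
# uses a subset of it, encoded here as a binary signature.

ATTRIBUTES = ["furry", "striped", "has_wheels", "metallic"]

# Hypothetical binary attribute signatures per category.
CATEGORY_SIGNATURES = {
    "zebra": {"furry": 1, "striped": 1, "has_wheels": 0, "metallic": 0},
    "car":   {"furry": 0, "striped": 0, "has_wheels": 1, "metallic": 1},
}

def classify(predicted_attrs):
    """Return the category whose signature best matches the predicted attributes."""
    def score(category):
        sig = CATEGORY_SIGNATURES[category]
        # Count attributes on which the prediction agrees with the signature.
        return sum(1 for a in ATTRIBUTES if sig[a] == predicted_attrs.get(a, 0))
    return max(CATEGORY_SIGNATURES, key=score)

print(classify({"furry": 1, "striped": 1}))  # matches zebra's signature
```

Because rare categories reuse attribute detectors trained mostly on frequent categories, they can be recognized even from few direct examples.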
In this thesis, we investigate several challenging image and video recognition problems under the framework of learning with shared information. The research comprises two parts. The first part focuses on two-domain (source and target) problems, where the emphasis is on boosting recognition performance on the target domain by exploiting useful knowledge from the source domain. The second part focuses on multi-domain problems, where all domains are considered equally important and the goal is to improve performance on every domain by exploring the useful information shared across them. In particular, we investigate three topics toward this goal: active domain adaptation, multi-task learning, and dictionary learning.
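One common way to share information across tasks, sketched below purely for illustration (this is a generic toy formulation, not the thesis's actual models), is to decompose each task's weight vector as a shared component plus a small task-specific deviation, w_t = w_shared + v_t, and fit the shared part on the pooled data:

```python
import numpy as np

# Toy multi-task regression with a shared weight component.
# Each task's true weights are w_shared_true + v_t, with v_t small,
# so pooling the tasks recovers the shared part well even if any
# single task has little data.

rng = np.random.default_rng(0)
d, n_per_task = 5, 40
w_shared_true = rng.normal(size=d)

tasks = []
for _ in range(3):
    v_t = 0.1 * rng.normal(size=d)          # small task-specific deviation
    X = rng.normal(size=(n_per_task, d))
    y = X @ (w_shared_true + v_t)
    tasks.append((X, y))

lam = 1.0  # ridge regularization strength (illustrative value)

# Step 1: estimate the shared component from all tasks pooled together.
X_all = np.vstack([X for X, _ in tasks])
y_all = np.concatenate([y for _, y in tasks])
w_shared = np.linalg.solve(X_all.T @ X_all + lam * np.eye(d), X_all.T @ y_all)

# Step 2: each task fits only its residual deviation v_t, which the
# ridge penalty keeps small -- this is where the sharing pays off.
task_weights = []
for X, y in tasks:
    v = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ (y - X @ w_shared))
    task_weights.append(w_shared + v)
```

The same shared-plus-specific decomposition idea appears, in different forms, in multi-task learning and in dictionary learning with shared and domain-specific atoms.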