Invited Keynote Talks

The invited keynote speakers for ACML 2012 are James Rehg (Georgia Tech), Dale Schuurmans (University of Alberta), and Bob Williamson (Australian National University and NICTA).


Behavior Imaging and the Study of Autism

James Rehg
Georgia Tech

Abstract

In this talk I will describe current research efforts in Behavior Imaging, a new research field that encompasses the measurement, modeling, analysis, and visualization of social and communicative behaviors from multi-modal sensor data. Beginning in infancy, individuals acquire the social and communicative skills that are vital for a healthy and productive life through face-to-face interactions with caregivers and peers. However, children with developmental delays face great challenges in acquiring these skills, resulting in substantial lifetime risks. Autism, for example, affects 1 in 88 children in the U.S. and can lead to substantial impairments, resulting in a lifetime cost of care of $3.2M per person. The goal of our research in Behavior Imaging is to develop computational methods that can support the fine-grained and large-scale measurement and analysis of social behaviors, with the potential to positively impact diagnosis and treatment. I will present an overview of our research efforts in Behavior Imaging, with a particular emphasis on the use of machine learning methods to extract behavior measurements from weakly-annotated video data. Specifically, I will describe a new approach to video analysis based on the concept of temporal causality, which leverages a novel representation of video events as multiple point processes. Our method provides a new bottom-up approach to video segmentation based on the temporal structure of video events. I will present results for retrieving and categorizing social interactions in collections of real-world video footage. I will also highlight our recent efforts in recognizing activities in video acquired by a wearable camera (also known as egocentric vision). This is joint work with Alireza Fathi, Yin Li, Karthir Prabhakar, and Sangmin Oh.
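
The abstract does not spell out the causality machinery, but the underlying idea, scoring whether one stream of time-stamped events tends to precede and predict another, can be illustrated with a toy Granger-style test on binned event counts. The Python sketch below is a generic simplification, not the authors' method; the function names and the synthetic "A fires, B responds" data are invented for illustration only.

    import numpy as np

    def to_counts(event_times, t_max, bin_width=1.0):
        """Turn a point process (array of event times) into binned counts."""
        edges = np.arange(0.0, t_max + bin_width, bin_width)
        counts, _ = np.histogram(event_times, bins=edges)
        return counts.astype(float)

    def _rss(X, Y):
        """Residual sum of squares of a least-squares regression of Y on X."""
        X = np.column_stack([np.ones(len(Y)), X])
        beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
        r = Y - X @ beta
        return float(r @ r)

    def granger_score(x, y, lags=3):
        """F-style score: does the past of x help predict y beyond y's own past?"""
        n = len(y)
        Y = y[lags:]
        own = np.column_stack([y[lags - k: n - k] for k in range(1, lags + 1)])
        both = np.column_stack([own] +
                               [x[lags - k: n - k] for k in range(1, lags + 1)])
        rss_r, rss_f = _rss(own, Y), _rss(both, Y)
        dof = len(Y) - 2 * lags - 1
        return ((rss_r - rss_f) / lags) / (rss_f / dof)

    # Synthetic interaction: stream B fires roughly 2 seconds after each A event.
    rng = np.random.default_rng(0)
    t_max = 200.0
    a_times = np.sort(rng.uniform(0.0, t_max, 80))
    b_times = a_times + rng.normal(2.0, 0.3, a_times.size)
    b_times = b_times[(b_times > 0) & (b_times < t_max)]

    a, b = to_counts(a_times, t_max), to_counts(b_times, t_max)
    print("score A -> B:", granger_score(a, b))   # expected large: A predicts B
    print("score B -> A:", granger_score(b, a))   # expected near 1: no influence

In the talk's setting the point processes come from detected visual events and the inferred causal structure then drives segmentation and retrieval; the statistics involved are correspondingly more careful than this toy score.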

Bio

James M. Rehg (pronounced "ray") is a Professor in the School of Interactive Computing at the Georgia Institute of Technology, where he is the Director of the Center for Behavior Imaging, co-Director of the Computational Perception Lab, and Associate Director of Research in the Center for Robotics and Intelligent Machines. He received his Ph.D. from CMU in 1995 and worked at the Cambridge Research Lab of DEC (and then Compaq) from 1995 to 2001, where he managed the computer vision research group. He received the National Science Foundation (NSF) CAREER award in 2001, the Raytheon Faculty Fellowship from Georgia Tech in 2005, and a Senior Faculty Research Award from Georgia Tech in 2011. He and his students have received a number of best paper awards, including best student paper awards at ICML 2005 and BMVC 2010. Dr. Rehg is active in the organizing committees of the major conferences in computer vision, most recently serving as the Program co-Chair for ACCV 2012 and the General co-Chair for IEEE CVPR 2009. He has served on the Editorial Board of the International Journal of Computer Vision since 2004. He has authored more than 100 peer-reviewed scientific papers and holds 23 issued US patents. Dr. Rehg is currently leading a multi-institution effort to develop the science and technology of Behavior Imaging, funded by an NSF Expedition award (see www.cbs.gatech.edu for details).


Convex Methods for Representation Learning

Dale Schuurmans
University of Alberta

Abstract

Automated feature discovery is a fundamental problem in data analysis. Although classical feature learning methods fail to guarantee optimal solutions in general, convex reformulations have been developed for a number of such problems. Most of these reformulations are based on one of two key strategies: approximating pairwise representations or exploiting induced matrix norms. Despite their use of relaxation, convex reformulations can deliver significant improvements in solution quality by eliminating local minima. I will discuss several convex reformulations for representation learning problems, including clustering, subspace learning, multi-view learning, and hidden-layer network training, demonstrating how feature discovery can co-occur with parameter optimization while admitting globally optimal solutions.
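
The talk's own reformulations are not reproduced in the abstract, but the "pairwise representation" strategy can be illustrated with a standard example: the semidefinite relaxation of k-means clustering (in the style of the well-known Peng-Wei relaxation), in which the combinatorial cluster-indicator matrix Y Y^T is relaxed to a constrained PSD matrix M. The Python sketch below assumes the cvxpy modeling library is available and is textbook material, not the talk's formulations.

    import cvxpy as cp
    import numpy as np

    # Toy data: two well-separated 2-D clusters of 10 points each.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0.0, 0.2, (10, 2)),
                   rng.normal(3.0, 0.2, (10, 2))])
    n, k = X.shape[0], 2
    K = X @ X.T                         # Gram (linear kernel) matrix

    # Pairwise representation: M stands in for Y Y^T, where Y is a
    # normalized cluster-indicator matrix; the relaxation is a convex SDP.
    M = cp.Variable((n, n), PSD=True)
    constraints = [M >= 0,                  # entrywise nonnegative
                   cp.sum(M, axis=1) == 1,  # M 1 = 1
                   cp.trace(M) == k]        # rank-k normalization
    prob = cp.Problem(cp.Maximize(cp.trace(K @ M)), constraints)
    prob.solve()

    # On well-separated data M recovers the block structure, so the row of M
    # for point 0 reads off which points share point 0's cluster.
    print((M.value[0] > 1.0 / (2 * n)).astype(int))

Eliminating local minima is exactly what such reformulations buy: Lloyd-style alternating minimization can stall at poor partitions, while the relaxed problem has a single global optimum.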

Bio

Dale Schuurmans is a Professor of Computing Science and Canada Research Chair in Machine Learning at the University of Alberta. He received his PhD in Computer Science from the University of Toronto, and has worked at the National Research Council Canada, the University of Pennsylvania, the NEC Research Institute, and the University of Waterloo. He is an Associate Editor of JAIR and AIJ, and currently serves on the IMLS and NIPS Foundation boards. He has served as a Program Co-chair for NIPS-2008 and ICML-2004, and as an Associate Editor for IEEE TPAMI, JMLR, and MLJ. His research interests include machine learning, optimization, probability models, and search. He is the author of more than 130 refereed publications in these areas and has received paper awards at IJCAI, AAAI, ICML, IEEE ICAL, and IEEE ADPRL.


Multiclass Losses and Multidistribution Divergences

Robert Williamson
Australian National University and NICTA

Abstract

Binary prediction problems (and their associated loss functions) are perhaps the simplest machine learning problems and have been extensively studied. Similarly, divergence measures between two probability distributions are well understood, for example, the classical Csiszár f-divergences. There is a natural bridge between binary proper losses and f-divergences via the Bayes risk of a binary learning problem induced by the loss.
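
To make the bridge concrete: write P and Q for the class-conditional distributions (with densities p, q), pi for the prior probability of class 1, eta(x) for the posterior, and \underline{L} for the conditional Bayes risk of a proper loss. One standard form of the correspondence, following Reid and Williamson's analysis of binary experiments (the exact normalization conventions here are my assumption), is

    % Csiszar f-divergence: f convex, f(1) = 0.
    \[
      I_f(P, Q) \;=\; \int q(x)\, f\!\left(\frac{p(x)}{q(x)}\right) d\mu(x)
    \]
    % Bridge: the statistical information (drop in Bayes risk from
    % observing X ~ M, where M = \pi P + (1-\pi) Q) is an f-divergence.
    \[
      \underline{L}(\pi) \;-\; \mathbb{E}_{X \sim M}\, \underline{L}(\eta(X))
      \;=\; I_{f_\pi}(P, Q),
      \qquad
      f_\pi(t) \;=\; \underline{L}(\pi)
      \;-\; (\pi t + 1 - \pi)\,
            \underline{L}\!\left(\frac{\pi t}{\pi t + 1 - \pi}\right)
    \]

Every proper loss thus induces a divergence through its Bayes risk, and vice versa.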

Multiclass prediction problems and multiclass loss functions are less well understood. It is not even clear, at first, what a "divergence" between k distributions should mean when k > 2.
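
One answer, in the spirit of the joint work mentioned below, takes f convex on the positive orthant of R^{k-1} with f(1, ..., 1) = 0 and generalizes the integral above; take this as a sketch of the definition's shape rather than the talk's exact normalization:

    \[
      I_f(P_1, \ldots, P_k) \;=\;
      \int p_k(x)\,
      f\!\left(\frac{p_1(x)}{p_k(x)}, \ldots,
               \frac{p_{k-1}(x)}{p_k(x)}\right) d\mu(x)
    \]

Setting k = 2 recovers the classical two-distribution case.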

In this talk I will show how the binary "bridge" extends to the multiclass case and allows simple proofs of properties of multidistribution f-divergences that are analogous to those satisfied by the classical f-divergences. I will also outline the theory of composite multiclass losses, which are the composition of a proper loss with a link function, including a characterisation of when they are convex.
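
A familiar special case makes the "proper loss composed with a link" structure concrete: multiclass softmax cross-entropy is the log loss (a proper loss on probability vectors) composed with the softmax inverse link, which maps real-valued scores to the probability simplex. The Python sketch below is illustrative, with invented names, rather than the parameterization used in the talk.

    import numpy as np

    def softmax(v):
        """Inverse link psi^{-1}: real score vector -> probability simplex."""
        e = np.exp(v - v.max())           # shift for numerical stability
        return e / e.sum()

    def log_loss(y, p):
        """Proper loss: cost of predicting distribution p when class y occurs."""
        return -np.log(p[y])

    def composite_loss(y, v):
        """Composite loss ell(y, v) = lambda(y, psi^{-1}(v))."""
        return log_loss(y, softmax(v))

    v = np.array([2.0, 0.5, -1.0])        # raw scores for 3 classes
    print(composite_loss(0, v))           # standard softmax cross-entropy

This particular composition happens to be convex in the scores v; the characterisation mentioned in the abstract identifies which proper-loss/link pairs enjoy that property in general.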

(Joint work with Dario Garcia-Garcia, Mark Reid, and Elodie Vernet)

Bio

Professor Bob Williamson is the leader of the Machine Learning group at NICTA and a professor in the Research School of Computer Science at the ANU. He obtained a PhD in Electrical Engineering from the University of Queensland in 1990. From 2003 to 2006 Professor Williamson was the Director of NICTA's Canberra Research Laboratory. In 2006 he was appointed NICTA's Scientific Director. Since 2011 he has been leading the Machine Learning group. He is a member of the advisory board of the National Institute of Informatics (Japan) and the Scientific Advisory Board of the Max Planck Institute for Biological Cybernetics. In 2012 he was elected a Fellow of the Australian Academy of Science.