3:00 pm - 4:00 pm
1305 Newell Simon Hall
Abstract: I will talk about two recent pieces of work that attempt to move towards learning with less reliance on labeled data. In the first part, I will show how the surrogate task of predicting the motion of objects can induce complex representations in neural networks without any labeled data. In the second part, I will discuss two recent papers that use a combination of detectors and trackers to automatically extract hard examples relative to a pre-trained detector. These hard examples can be used either to improve pre-existing detectors or to adapt them to new domains.
Bio: Erik Learned-Miller is a Professor in the College of Information and Computer Sciences at the University of Massachusetts, Amherst, where he joined the faculty in 2004. His research interests include face recognition, unsupervised learning and learning from small training sets, vision for robotics, and motion understanding. He spent two years as a post-doctoral researcher in the Computer Science Division at the University of California, Berkeley. Learned-Miller received a B.A. in Psychology from Yale University in 1988. In 1989, he co-founded CORITechs, Inc., where he co-developed the second FDA-cleared system for image-guided neurosurgery. He then worked for two years at Nomos Corporation, Pittsburgh, PA, as the manager of neurosurgical product engineering. He obtained Master of Science (1997) and Ph.D. (2002) degrees from the Massachusetts Institute of Technology, both in Electrical Engineering and Computer Science. In 2006, he received an NSF CAREER award for his work in computer vision and machine learning. He was a co-Program Chair for CVPR 2015 in Boston.