VASC Seminar: Varun Ramakrishna
PoseMachines: Articulated Human Pose Estimation via Inference Machines

Varun Ramakrishna
PhD Student, Robotics Institute, Carnegie Mellon University

April 07, 2014, 3:00 - 4:00, NSH 1507
Abstract

Current state-of-the-art approaches for articulated human pose estimation use the standard parts-based graphical model paradigm. These models are often restricted to tree-structured representations and simple parametric potentials in order to enable tractable inference. However, these simple dependencies fail to capture complex interactions among the human body parts. While models with more complex interactions can be defined, learning the parameters of these models remains challenging when inference is intractable. In this paper, instead of performing inference on a learned graphical model, we build upon the recent inference machine framework and present a method for articulated human pose estimation. Our approach incorporates rich spatial interactions among multiple parts and information across parts at different scales. Additionally, the modular framework of our approach enables both efficient inference and ease of implementation, without requiring specialized optimization solvers. We analyze our approach on two challenging datasets with large pose variation and outperform the current state-of-the-art on these benchmarks.
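The sequential-prediction idea behind inference machines can be sketched as follows: a cascade of stages, each of which predicts per-part belief maps from image features concatenated with context features derived from the previous stage's beliefs. This is a minimal illustrative sketch, not the paper's implementation; all function and variable names here are hypothetical.

```python
import numpy as np

def context_features(beliefs):
    """Flatten all parts' belief maps into a per-location context vector.

    beliefs: (num_parts, H, W) -> (H*W, num_parts)
    """
    p, h, w = beliefs.shape
    return beliefs.reshape(p, h * w).T

def run_stage(predict, image_feats, prev_beliefs):
    """One stage: score each location per part from image + context features."""
    ctx = context_features(prev_beliefs)              # (H*W, P)
    feats = np.concatenate([image_feats, ctx], axis=1)
    scores = predict(feats)                           # (H*W, P)
    # Softmax over locations yields a normalized belief map per part.
    e = np.exp(scores - scores.max(axis=0))
    beliefs = (e / e.sum(axis=0)).T.reshape(prev_beliefs.shape)
    return beliefs

def run_pose_machine(stages, image_feats, num_parts, h, w):
    """Run a cascade of stages, starting from uniform beliefs."""
    beliefs = np.full((num_parts, h, w), 1.0 / (h * w))
    for predict in stages:
        beliefs = run_stage(predict, image_feats, beliefs)
    return beliefs
```

In this sketch each `predict` could be any off-the-shelf multi-class classifier trained stage-by-stage, which is what makes the framework modular and free of specialized solvers.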


Additional Information

Host: Kris Kitani

Speaker Biography

Varun Ramakrishna is a PhD student in the Robotics Institute, advised by Prof. Yaser Sheikh and Prof. Takeo Kanade. His research interests include structured prediction problems in computer vision, with a focus on understanding human posture and motion from monocular images and image sequences. Varun was previously a master's student in the ECE department at CMU and earned his undergraduate degree from IIT Madras. The work being presented is in collaboration with Daniel Munoz (now at Google X).