Takeo Kanade
U.A. and Helen Whitaker University Professor, RI/CS
Email:
Office: NSH
Phone: (412) 268-3016
Fax: (412) 268-5570
  Mailing address:
Carnegie Mellon University
Robotics Institute
5000 Forbes Avenue
Pittsburgh, PA 15213
Administrative Assistant: Yukiko Kano
Affiliated Center(s):
 Vision and Autonomous Systems Center (VASC)
 Quality of Life Technology Center (QoLT)
 Medical Robotics Technology Center (MRTC)
Personal Homepage

Current Projects
 
3D Head Motion Recovery in Real Time
A cylindrical model-based algorithm recovers the full motion (3D rotations and 3D translations) of the head in real time.
3D Image Overlay
X-ray vision has always been the dream of surgeons; Image Overlay is the next best thing.
3D Optical Reconstruction of Cell Shape
Reconstruction of 3D cell shapes using an optical microscope.
A Multi-Layered Display with Water Drops
With a single projector-camera system and a set of linear drop generator manifolds, we have created a multi-layered water drop display that can be used for text, videos, and interactive games.
A Projector-Camera System for Creating a Display with Water Drops
In this work, we show a computer vision based approach to easily calibrate and learn the properties of a single-layer water drop display, using a few pieces of off-the-shelf hardware.
A Statistical Quantification of Human Brain Asymmetry
Constructing image index features to retrieve medically similar cases from a multimedia medical database.
Autonomous Helicopter (HELI)
Develop a vision-guided robot helicopter
Bioprinting
We have designed and built inkjet-based bioprinters to controllably deposit spatial patterns of various growth factors and other signaling molecules on and in biodegradable scaffold materials to guide tissue regeneration.
Cell Tracking
We are developing fully automated, computer vision-based cell tracking algorithms and a system that automatically determines the spatiotemporal history of dense populations of cells over extended periods of time.
Cohn-Kanade AU-Coded Facial Expression Database
An AU-coded database of over 2000 video sequences of over 200 subjects displaying various facial expressions.
Computer Assisted Medical Instrument Navigation
We are developing a system to help clinicians precisely navigate various catheters inside the human heart.
Coplanar Shadowgrams for Acquiring Visual Hulls of Intricate Objects
We present a practical approach to shape-from-silhouettes using a novel technique called coplanar shadowgram imaging that allows us to use dozens to even hundreds of views for visual hull reconstruction.
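As a rough illustration of the underlying shape-from-silhouette idea, the sketch below carves a voxel grid against a set of binary silhouettes and their camera projection matrices. The coplanar-shadowgram imaging geometry itself is not modeled, and all array names and the grid bounds are assumptions.
```python
# A generic voxel-carving visual hull (hedged sketch): the coplanar-shadowgram
# imaging geometry of this project is NOT modeled here. Inputs are assumed to
# be binary silhouette images and 3x4 camera projection matrices.
import numpy as np

def visual_hull(silhouettes, proj_mats, grid_min, grid_max, res=64):
    """Keep voxels whose projections land inside every silhouette."""
    axes = [np.linspace(grid_min[i], grid_max[i], res) for i in range(3)]
    X, Y, Z = np.meshgrid(*axes, indexing="ij")
    pts = np.stack([X, Y, Z, np.ones_like(X)], axis=-1).reshape(-1, 4)  # N x 4 homogeneous
    inside = np.ones(len(pts), dtype=bool)
    for sil, P in zip(silhouettes, proj_mats):
        uvw = pts @ P.T                                  # project voxel centers
        u = np.round(uvw[:, 0] / uvw[:, 2]).astype(int)
        v = np.round(uvw[:, 1] / uvw[:, 2]).astype(int)
        h, w = sil.shape
        valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        hit = np.zeros(len(pts), dtype=bool)
        hit[valid] = sil[v[valid], u[valid]] > 0         # inside this view's silhouette?
        inside &= hit
    return inside.reshape(res, res, res)                 # boolean occupancy grid
```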
Deception Detection
Learning facial indicators of deception
DigitEyes
DigitEyes is a noninvasive, real-time tracking system for complex articulated figures like the human hand and body.
Dynamic Conformal Radiotherapy
Dynamic Seethroughs: Synthesizing Hidden Views of Moving Objects
This project involves creating an illusion of seeing moving objects through occluding surfaces in a video. We use a 2D projective invariant to capture information about occluded objects, allowing for a visually compelling rendering of hidden areas without the need for explicit correspondences.
EyeVision
Face Detection
We are developing computer methods to automatically locate human faces in photos and video.
Face Detection Databases
A collection of databases for training and testing face detectors.
Face Recognition Across Illumination
Recognizing people from faces: video and still images.
Face Video Hallucination
A learning-based approach to super-resolve human face videos.
Facial Expression Analysis
Automatic facial expression encoding, extraction and recognition, and expression intensity estimation for MPEG-4 applications such as teleconferencing and human-computer interaction.
Feature-based 3D Head Tracking
A feature-based head tracking algorithm can handle occlusions and fast motion of the face.
Frontal Face Alignment
This face alignment method detects generic frontal faces with large appearance variations and 2D pose changes and identifies detailed facial structures in images.
GPU-accelerated Computer Vision
We are exploiting programmable graphics hardware to improve existing vision algorithms and enable novel approaches to robot perception.
Hand Tracking and 3-D Pose Estimation
A 2-D and 3-D model-based tracking method can track a human hand that is rapidly moving and deforming against complicated backgrounds, and recover its 3-D pose parameters.
Human Kinematic Modeling and Motion Capture
We are developing a system for building 3D kinematic models of humans and then using the models to track the person in new video sequences.
Human Motion Transfer
We are developing a system for capturing the motion of one person and rendering a different person performing the same motion.
IMU-Assisted KLT Feature Tracker
The KLT (Kanade-Lucas-Tomasi) method tracks a set of feature points through an image sequence. Our goal is to enhance the KLT method with IMU measurements to increase the number of feature points and their tracking length under real-time constraints.
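For reference, here is a minimal vision-only pyramidal KLT loop sketched with OpenCV; the IMU-assisted prediction step this project adds is not shown, and the input video path is a placeholder.
```python
# A minimal vision-only pyramidal KLT loop with OpenCV; the IMU-assisted
# prediction that this project adds is not shown. "sequence.mp4" is a
# placeholder input.
import cv2

cap = cv2.VideoCapture("sequence.mp4")
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
# Pick an initial set of good features to track
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500, qualityLevel=0.01, minDistance=7)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Pyramidal Lucas-Kanade tracking of the feature points into the new frame
    nxt, status, err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None,
                                                winSize=(21, 21), maxLevel=3)
    pts = nxt[status.flatten() == 1].reshape(-1, 1, 2)   # keep successfully tracked points
    prev_gray = gray
```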
Informedia Digital Video Library
Informedia is pioneering new approaches for automated video and audio indexing, navigation, visualization, summarization, search, and retrieval, and is embedding them in systems for use in education, health care, defense intelligence, and understanding of human activity.
Integrated Vision and Sensing for Human Sensory Augmentation (CMU MURI)
The project integrates vision algorithms with sensing technologies to create low-power, low-latency, compact adaptive vision systems that augment human sensory systems and enable sensory-driven information delivery.
Knee Surgery Simulation
Haptic interface for simulated knee surgery and interaction with volumetric data.
LIDAR and Vision Sensor Fusion for Autonomous Vehicle Navigation
The goal of this project is to investigate methods for combining laser range sensors (i.e., LIDARs) with visual sensors (i.e., video cameras) to improve the capabilities of autonomous vehicles.
Magic Eye
Computer vision based augmented reality systems
Metaphor
Design for Software Evolution and Reuse
Modeling by Videotape (MBV)
A factorization method for solving the structure-from-motion problem.
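As a minimal sketch of the rank-3 factorization idea (Tomasi-Kanade style, orthographic camera assumed), the snippet below splits a measurement matrix of tracked feature coordinates into motion and shape factors; the metric upgrade step is omitted and the matrix layout is an assumption.
```python
# A minimal sketch of the rank-3 factorization idea (Tomasi-Kanade style) under
# an orthographic camera model. W is an assumed 2F x P measurement matrix of
# feature coordinates tracked over F frames; the metric upgrade is omitted.
import numpy as np

def factorize(W):
    W = W - W.mean(axis=1, keepdims=True)        # register measurements to the centroid
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    # Rank-3 approximation splits into motion (camera) and shape (structure)
    # factors, recovered only up to an affine ambiguity.
    M = U[:, :3] * np.sqrt(s[:3])                # 2F x 3 motion matrix
    S = np.sqrt(s[:3])[:, None] * Vt[:3]         # 3 x P shape matrix
    return M, S
```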
Moving Object Detection, Modeling, and Tracking
The goal of this research is to better understand how vision and 3D LIDAR data can be combined for detecting and tracking moving objects.
Multi-People Tracking
Our multi-person tracking method can automatically initialize and terminate people's paths and follow a varying number of people in cluttered scenes over long time intervals.
Multi-view Car Detection and Registration
This method can detect cars under occlusion and from varying viewpoints in a single still image by using a multi-class boosting algorithm.
Object Recognition Using Statistical Modeling
Automobile and human face detection via statistical modeling.
Perception for Humanoid Robots
Real-time perception algorithms for autonomous humanoid navigation, manipulation and interaction.
Precision Freehand Sculpting (PFS)
We are developing a handheld tool to accurately cut bone for joint replacement surgery.
Quality of Life Technology (QoLT)
QoLT is a unique partnership between Carnegie Mellon and the University of Pittsburgh that brings together a cross-disciplinary team of technologists, clinicians, industry partners, end users, and other stakeholders to create revolutionary technologies that will improve and sustain the quality of life for all people.

Note: The QoLT Project has been superseded by the QoLT Center.
Rain and Snow Removal via Spatio-Temporal Frequency Analysis
Particulate weather, such as rain and snow, creates complex flickering effects that are irritating to people and confusing to vision algorithms. We formulate a physical and statistical model of dynamic weather in frequency space. At a small scale, many things appear the same as rain and snow, but by treating them as global phenomena, we can easily remove them.
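As a toy illustration of working on per-pixel temporal frequencies, the sketch below low-passes each pixel's intensity along time; it is not the physical and statistical model described above, and the frame-stack layout is an assumption.
```python
# A toy illustration of working on per-pixel temporal frequencies (a crude
# temporal low-pass), not the physical/statistical model described above.
# `frames` is an assumed T x H x W grayscale frame stack.
import numpy as np

def suppress_flicker(frames, keep=2):
    spectrum = np.fft.rfft(frames, axis=0)       # temporal spectrum at every pixel
    spectrum[keep:] = 0                          # drop fast temporal variations (flicker)
    return np.fft.irfft(spectrum, n=frames.shape[0], axis=0)
```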
Real-time Face Detection
A face detection system achieves an accurate detection rate and real-time performance by using an ensemble of weak classifiers.
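For a hands-on starting point, the sketch below runs OpenCV's stock Haar-cascade detector, which is likewise built from boosted weak classifiers; it is an off-the-shelf stand-in, not this project's detector, and the image path is a placeholder.
```python
# An off-the-shelf stand-in, not this project's detector: OpenCV's stock Haar
# cascade is likewise an ensemble of boosted weak classifiers. "photo.jpg" is
# a placeholder input.
import cv2

cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
img = cv2.imread("photo.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)  # draw each detection
```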
Real-time Lane Tracking in Urban Environments
The purpose of this project is to develop methods for the real-time detection and tracking of lanes and intersections in urban scenarios in order to support road following by an autonomous vehicle in GPS-denied situations.
Reconfigurable Vision Machine (RAVEN)
Developing new hardware and software for high performance computer vision.
Soft Tissue Simulation for Plastic Surgery
Software Package for Precise Camera Calibration
A novel camera calibration method increases the accuracy of both intrinsic camera parameters and stereo camera calibration by using a single framework for square, circle, and ring planar calibration patterns.
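As a baseline for comparison, the sketch below runs OpenCV's standard checkerboard calibration pipeline; the unified square/circle/ring framework of this package is not reproduced, and the pattern size and image paths are assumptions.
```python
# A standard OpenCV checkerboard calibration pipeline as a baseline; the
# package's unified square/circle/ring framework is not reproduced. Pattern
# size and image paths are assumptions.
import glob
import cv2
import numpy as np

pattern = (9, 6)                                      # inner-corner grid of the checkerboard
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_pts, img_pts = [], []
for path in glob.glob("calib/*.png"):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)

# Estimate intrinsics and distortion from all detected boards
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_pts, img_pts, gray.shape[::-1], None, None)
print("RMS reprojection error:", rms)
```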
Spatio-Temporal Facial Expression Segmentation
A two-step approach temporally segments facial gestures from video sequences and registers both the rigid and non-rigid motion of the face.
Temporal Shape-From-Silhouette
We are developing algorithms for the computation of 3D shape from multiple silhouette images captured across time.
Tightly Integrated Stereo and LIDAR
The goal of this project is to use sparse, but accurate 3D data from LIDAR to improve the estimation of dense stereo algorithms in terms of accuracy and speed.
Vehicle Localization in Naturally Varying Environments
The purpose of this project is to develop methods for place matching that are invariant to short- and long-term environmental variations in support of autonomous vehicle localization in GPS-denied situations.
Video Surveillance and Monitoring (VSAM)
A cooperative multi-sensor military surveillance system.
Video-rate Stereo Machine
Multiple images obtained by multiple cameras produce baselines of different lengths and directions.
Virtualized Reality™
Construct views of real events from nearly any viewpoint
Visual-Haptic Interface to Virtual Environment
What You can See is What You Feel (WYSIWYF) virtual environment
Z-Keying
A new image keying method that merges synthetic and real images in real time.
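A minimal sketch of the depth-keying idea is shown below: the real and synthetic images are composited per pixel by comparing their depth maps, with the nearer pixel winning. All input names are assumptions for illustration.
```python
# A minimal sketch of the depth-keying idea: composite real and synthetic
# images per pixel by comparing their depth maps (the nearer pixel wins).
# All input names are assumptions.
import numpy as np

def z_key(real_rgb, real_depth, synth_rgb, synth_depth):
    """real_rgb/synth_rgb: H x W x 3 images; real_depth/synth_depth: H x W depth maps."""
    real_in_front = (real_depth < synth_depth)[..., None]   # H x W x 1 boolean mask
    return np.where(real_in_front, real_rgb, synth_rgb)
```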