Vision and Autonomous Systems Center (VASC)
The Robotics Institute
Carnegie Mellon University
5000 Forbes Avenue
Pittsburgh PA 15213-3890
Current Projects [Past Projects]
 
3D Head Motion Recovery in Real Time
A cylindrical model-based algorithm recovers the full motion (3D rotations and 3D translations) of the head in real time.
Advanced Sensor Based Defect Management at Construction Sites
This research project builds on, combines, and extends advances in generating 3D environments using laser scanners.
Agent Based Design
Novel agent-based approach for the design of modular robot manipulators
Autonomous Helicopter
Develop a vision-guided robot helicopter
BORG
Framework for the development of distributed, secure, reliable, robust and scalable systems
BeatBots
We are developing robots that can participate in coordinated rhythmic social interactions with people.
Bow Leg Hopper
A novel, single-leg, dynamically stabilized planar robot that efficiently traverses rugged terrain under computer control.
BowGo
We have developed BOWGO (patent pending) - a new kind of pogo stick that bounces higher, farther and more efficiently than conventional devices.
CTA Robotics
This project addresses the problems of scene interpretation and path planning for mobile robot navigation in natural environments.
ChargeCar
To develop electric vehicles (EVs) that are as efficient and cost-effective as possible, we have taken a systems-level approach to design, prototyping, and analysis to produce formally modeled active vehicle energy management.
Cohn-Kanade AU-Coded Facial Expression Database
An AU-coded database of over 2000 video sequences of over 200 subjects displaying various facial expressions.
Coplanar Shadowgrams for Acquiring Visual Hulls of Intricate Objects
We present a practical approach to shape-from-silhouettes using a novel technique called coplanar shadowgram imaging that allows us to use dozens or even hundreds of views for visual hull reconstruction.
Deception Detection
Learning facial indicators of deception
Depression Assessment
This project aims to compute quantitative behavioral measures related to depression severity from facial expression, body gestures, and vocal prosody in clinical interviews.
Educational Robotics
We are developing both physical robots and curriculum that will make educational robotics viable at the middle school and high school levels.
Event Detection in Videos
Our event detection method can detect a wide range of actions in video by correlating spatio-temporal shapes to over-segmented videos without background subtraction.
EyeVision
Face Detection
We are developing computer methods to automatically locate human faces in photos and video.
Face Detection Databases
A collection of databases for training and testing face detectors.
Face Recognition
Recognizing people from images and videos.
Face Recognition Across Illumination
Recognizing people from faces in video and still images.
Face Recognition Across Pose
Recognizing people from different poses.
Face Video Hallucination
A learning-based approach to super-resolve human face videos.
Facial Expression Analysis
Automatic facial expression encoding, extraction and recognition, and expression intensity estimation for MPEG-4 applications such as teleconferencing and human-computer interaction.
Facial Feature Detection
Detecting facial features in images.
Feature Selection
Feature selection in component analysis.
Forecasting the Anterior Cruciate Ligament Rupture Patterns
Use of machine learning techniques to predict the injury pattern of the Anterior Cruciate Ligament (ACL) using non-invasive methods.
Frontal Face Alignment
This face alignment method detects generic frontal faces with large appearance variations and 2D pose changes and identifies detailed facial structures in images.
GPU-accelerated Computer Vision
We are exploiting programmable graphics hardware to improve existing vision algorithms and enable novel approaches to robot perception.
Gamebot
Generic Active Appearance Models
We are pursuing techniques for non-rigid face alignment based on Constrained Local Models (CLMs) that exhibit superior generic performance over conventional AAMs.
Global Connection Project
The Global Connection Project develops software tools and technologies to increase the power of images to connect, inform, and inspire people to become engaged and responsible global citizens.
Grace
The Grace project is a collaboration among several schools and research labs to design a robot capable of fully performing the AAAI Grand Challenge.
Gyrover
A single-wheel, gyroscopically stabilized robot.
Hand Tracking and 3-D Pose Estimation
A 2-D and 3-D model-based tracking method can track a rapidly moving and deforming human hand against complicated backgrounds and recover its 3-D pose parameters.
Hot Flash Detection
Machine learning algorithms to detect hot flashes in women using physiological measures.
Human Kinematic Modeling and Motion Capture
We are developing a system for building 3D kinematic models of humans and then using the models to track the person in new video sequences.
Human Motion Transfer
We are developing a system for capturing the motion of one person and rendering a different person performing the same motion.
IMU-Assisted KLT Feature Tracker
The KLT (Kanade-Lucas-Tomasi) method provides robust feature tracking by following a set of feature points through an image sequence. Our goal is to enhance KLT with IMU measurements so that more feature points can be tracked, and tracked for longer, under real-time constraints.
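For illustration only, a minimal sketch of plain (non-IMU-assisted) KLT tracking using OpenCV; the video path and parameters are placeholders, and this project's IMU-assisted prediction step is not reproduced here.

    # Minimal sketch of plain KLT tracking with OpenCV (no IMU assistance).
    # "sequence.mp4" and the parameter values are hypothetical.
    import cv2

    cap = cv2.VideoCapture("sequence.mp4")
    ok, prev = cap.read()
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                  qualityLevel=0.01, minDistance=7)

    while True:
        ok, frame = cap.read()
        if not ok or pts is None or len(pts) == 0:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Pyramidal Lucas-Kanade tracks each feature from the previous frame.
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None,
                                                  winSize=(21, 21), maxLevel=3)
        # Keep only the features that were successfully tracked.
        pts = nxt[status.flatten() == 1].reshape(-1, 1, 2)
        prev_gray = gray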
Image Alignment
Image alignment with parameterized appearance models.
Image De-fencing
We introduce a novel image segmentation algorithm that uses translation symmetry as the primary foreground/background separation cue.
Image Enhancement for Faces
Video enhancement techniques, specifically tailored for human faces.
Indoor People Localization
Tracking multiple people in indoor environments with the connectivity of Bluetooth devices.
Informedia Digital Video Library
Informedia is pioneering new approaches for automated video and audio indexing, navigation, visualization, summarization, search, and retrieval, and is embedding them in systems for use in education, health care, defense intelligence, and understanding of human activity.
Intelligent Diabetes Assistant
We are working to create an intelligent assistant to help patients and clinicians work together to manage diabetes at a personal and social level. This project uses machine learning to predict the effect that patient-specific behaviors have on blood glucose.
Learning Optimal Representations
Learning optimal representations for classification, image alignment, visualization and clustering.
Low Dimensional Embeddings
Finding low dimensional embeddings of signals optimal for modeling, classification, visualization and clustering.
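As one generic illustration only (not the specific methods studied in this project), principal component analysis is a common baseline for computing a low-dimensional embedding:

    # Toy PCA embedding via SVD; a generic baseline, not this project's method.
    import numpy as np

    def pca_embed(X, k):
        """Project N x D data onto its top-k principal directions."""
        Xc = X - X.mean(axis=0)                 # center the data
        U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
        return Xc @ Vt[:k].T                    # N x k embedding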
Millibots
Heterogeneous group of small autonomous robots with modular payloads and sensing platforms
Multi-People Tracking
Our multi-people tracking method can automatically initialize and terminate the paths of people and follow a varying number of people in cluttered scenes over long time intervals.
Multi-view Car Detection and Registration
This method can detect cars under occlusion and from varying viewpoints in a single still image by using a multi-class boosting algorithm.
Multimodal Diaries
Summarization of daily activity from multimodal data (audio, video, body sensors and computer monitoring)
Near Regular Texture -- Analysis, Synthesis and Manipulation
We are developing near regular texture synthesis algorithms for improved natural appearances.
Object Recognition Using Statistical Modeling
Automobile and human face detection via statistical modeling.
Perception for Humanoid Robots
Real-time perception algorithms for autonomous humanoid navigation, manipulation and interaction.
RERC on Accessible Public Transportation
We are researching and developing methods to empower consumers and service providers in the design and evaluation of accessible transportation equipment, information services, and physical environments.
Real-time Face Detection
A face detection system achieves an accurate detection rate and real-time performance by using an ensemble of weak classifiers.
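For illustration, a minimal sketch of a generic boosted-cascade face detector using OpenCV's bundled Haar cascade; this is a stand-in example, not this project's detector, and the image path is a placeholder.

    # Generic boosted cascade of weak classifiers (OpenCV Haar cascade).
    # Illustrative only; "group_photo.jpg" is a hypothetical input.
    import cv2

    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    img = cv2.imread("group_photo.jpg")
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Scan the image at multiple scales and keep overlapping detections.
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)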
Reflectance Perception
We have developed Reflectance Perception - image-processing software that intelligently compensates for illumination problems in digital pictures.
Roboceptionist
In collaboration with the Drama Department, we are developing technology for long-term social interaction.
Robot Boat Project
We are developing a small solar-powered robot for long-term offshore science experiments. Applications include meteorology, oceanography, marine biology and other marine sciences.
Snackbot
The Snackbot is a mobile robot designed to deliver food to the offices at CMU while engaging in meaningful social interaction.
Social Robots
We are developing robots with personality.
Software Package for Precise Camera Calibration
A novel camera calibration method increases the accuracy of both intrinsic camera parameters and stereo camera calibration by using a single framework for square, circle, and ring planar calibration patterns.
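As a hedged illustration, a standard single-pattern (checkerboard) calibration sketch with OpenCV; the unified square/circle/ring framework described above is not reproduced, and the pattern size, square size, and image paths are assumptions.

    # Standard planar (checkerboard) calibration with OpenCV; not this project's
    # unified framework. Pattern dimensions and file names are assumed.
    import cv2
    import glob
    import numpy as np

    cols, rows, square = 9, 6, 0.025            # assumed 9x6 corners, 25 mm squares
    objp = np.zeros((rows * cols, 3), np.float32)
    objp[:, :2] = np.mgrid[0:cols, 0:rows].T.reshape(-1, 2) * square

    obj_pts, img_pts, img_size = [], [], None
    for path in glob.glob("calib_*.png"):       # hypothetical calibration images
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        img_size = gray.shape[::-1]
        found, corners = cv2.findChessboardCorners(gray, (cols, rows))
        if found:
            obj_pts.append(objp)
            img_pts.append(corners)

    # Estimate intrinsics and distortion from all detected patterns.
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_pts, img_pts, img_size, None, None)
    print("reprojection RMS:", rms)
    print("intrinsics K:\n", K)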
Sonic Flashlight™
We are developing a method of medical visualization that merges real time ultrasound images with direct human vision.
Spatio-Temporal Facial Expression Segmentation
A two-step approach temporally segments facial gestures from video sequences and registers the rigid and non-rigid motion of the face.
Telepresence Robot Kit
We design, create, and disseminate robotics curricula and technologies that motivate young women and men to actively explore science and technology.
Temporal Segmentation of Human Motion
Temporal segmentation of human motion
Temporal Shape-From-Silhouette
We are developing algorithms for the computation of 3D shape from multiple silhouette images captured across time.
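As a rough illustration of the single-time-instant case only, a toy voxel-carving visual hull sketch; the silhouette masks and 3x4 projection matrices are assumed given, and the grid bounds are arbitrary.

    # Toy visual hull by voxel carving: keep voxels that project inside every
    # silhouette. Not this project's temporal algorithm; inputs are assumed.
    import numpy as np

    def visual_hull(silhouettes, projections, n=64, bound=1.0):
        """silhouettes: list of HxW boolean masks; projections: list of 3x4 matrices."""
        ticks = np.linspace(-bound, bound, n)
        X, Y, Z = np.meshgrid(ticks, ticks, ticks, indexing="ij")
        pts = np.stack([X, Y, Z, np.ones_like(X)], axis=-1).reshape(-1, 4)  # homogeneous
        occupied = np.ones(len(pts), dtype=bool)
        for mask, P in zip(silhouettes, projections):
            uvw = pts @ P.T                               # project voxels into the image
            u = (uvw[:, 0] / uvw[:, 2]).round().astype(int)
            v = (uvw[:, 1] / uvw[:, 2]).round().astype(int)
            inside = (u >= 0) & (u < mask.shape[1]) & (v >= 0) & (v < mask.shape[0])
            hit = np.zeros(len(pts), dtype=bool)
            hit[inside] = mask[v[inside], u[inside]]
            occupied &= hit                               # carve away voxels outside any silhouette
        return pts[occupied, :3]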
Texture Replacement in Real Images
We are developing methods to replace some specified texture patterns in an image while preserving lighting effects, shadows and occlusions.
The CMUcam Vision Sensor
We have developed CMUcam - a new low-cost, low-power sensor for mobile robots.
The Personal Rover Project
We have developed the Personal Rover - an 18"x12"x24" highly autonomous, programmable robot.
Toy Robots Initiative
The Toy Robots Initiative aims to commercialize robotics technologies in the educational, toy and entertainment fields.
Understanding and Modeling Trust in Human-Robot Interactions
This collaboration with the UMass Lowell Robotics Lab seeks to develop quantitative metrics to measure a user's trust in a robot as well as a model to estimate the user's level of trust in real time. Using this information, the robot will be able to adjust its interaction accordingly.
Unification of Component Analysis
This project aims to find the fundamental set of equations that unifies all component analysis methods.