Carnegie Mellon University
Current Projects, Grouped by Subject
Vision, Perception & Sensors
Non-contact 3-D surgical instrument tracking for device testing and surgeon assessment.
Autonomous Navigation System (ANS)
The NREC is leading the development of perception and path planning within the Autonomous Navigation System program for the Future Combat System.
Cohn-Kanade AU-Coded Facial Expression Database
An AU-coded database of over 2000 video sequences of over 200 subjects displaying various facial expressions.
Computer Assisted Medical Instrument Navigation
We are developing a system that helps clinicians precisely navigate catheters inside the human heart.
Crusher
NREC designed and developed the Crusher vehicle to support the UPI program's rigorous field experimentation schedule.
The NREC is developing an untethered, long-range (2,500+ ft) gas line visual inspection robot that provides real-time video from inside the line, can be deployed in live lines, and can pass through all angles and bends of both 6" and 8" lines.
Gladiator
The NREC-led team designed, developed, field tested, and successfully demonstrated a Gladiator robotic system with high mobility and remote combat capabilities.
IMU-Assisted KLT Feature Tracker
The KLT (Kanade-Lucas-Tomasi) method tracks a set of feature points through an image sequence. Our goal is to enhance KLT with IMU measurements, increasing the number of tracked feature points and their tracking length under real-time constraints.
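The least-squares update at the core of KLT can be sketched for a single point under a translation-only motion model. This is an illustrative simplification (real KLT trackers use subpixel interpolation, image pyramids, and often affine window models); the function and parameter names below are assumptions, not part of the project's code.

```python
import numpy as np

def track_translation(img0, img1, pt, win=15, iters=25, tol=1e-3):
    """Minimal Lucas-Kanade tracker for one feature point (translation only).

    Iteratively solves the KLT normal equations A^T A d = A^T (T - W), where
    T is the template window around `pt` in img0 and W is the window in img1
    at the current displacement estimate. Illustrative sketch only: windows
    are re-extracted at integer-rounded positions rather than interpolated.
    """
    half = win // 2
    x, y = int(pt[0]), int(pt[1])
    # Template window and its spatial gradients from the first image.
    T = img0[y - half:y + half + 1, x - half:x + half + 1].astype(float)
    gy, gx = np.gradient(img0.astype(float))
    Ix = gx[y - half:y + half + 1, x - half:x + half + 1].ravel()
    Iy = gy[y - half:y + half + 1, x - half:x + half + 1].ravel()
    A = np.stack([Ix, Iy], axis=1)   # Jacobian for a pure translation
    ATA = A.T @ A                    # 2x2 structure matrix (constant here)
    d = np.zeros(2)                  # displacement estimate (dx, dy)
    for _ in range(iters):
        xi, yi = int(round(x + d[0])), int(round(y + d[1]))
        W = img1[yi - half:yi + half + 1,
                 xi - half:xi + half + 1].astype(float)
        step = np.linalg.solve(ATA, A.T @ (T - W).ravel())
        d += step
        if np.abs(step).max() < tol:
            break
    return d
```

Because the structure matrix A^T A is constant for a fixed template, it can be factored once per feature, which is one reason KLT lends itself to real-time use.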
Indoor People Localization
Tracking multiple people in indoor environments with the connectivity of Bluetooth devices.
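Connectivity-based localization of this kind can be reduced to a simple idea: fixed Bluetooth beacons log which devices they can see, and each device is assigned to the room of its most recent sighting. A minimal sketch, assuming a hypothetical event format of (timestamp, room, device_id) tuples (not the project's actual data model):

```python
def localize(sightings):
    """Room-level localization from Bluetooth connectivity events.

    `sightings` is an iterable of (timestamp, room, device_id) tuples,
    one per inquiry response received by a fixed beacon in `room`.
    Each device is assigned to the room of its most recent sighting.
    """
    latest = {}  # device_id -> (timestamp, room)
    for t, room, dev in sightings:
        if dev not in latest or t > latest[dev][0]:
            latest[dev] = (t, room)
    return {dev: room for dev, (t, room) in latest.items()}
```

Connectivity gives only room-level (proximity) resolution; finer-grained positioning would need signal-strength or time-of-flight measurements.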
Multimodal Diaries
Summarization of daily activity from multimodal data (audio, video, body sensors, and computer monitoring).
Robot Sensor Boat (RSB)
We present a fleet of autonomous Robot Sensor Boats (RSBs) developed for lake and river freshwater quality assessment and controlled by our Multilevel Autonomy Robot Telesupervision Architecture (MARTA).
Robotic Soccer (RoboSoccer)
The RoboSoccer project develops collaboration among multiple autonomous agents.
Sonic Flashlight™
We are developing a method of medical visualization that merges real-time ultrasound images with direct human vision.
Spatio-Temporal Facial Expression Segmentation
A two-step approach that temporally segments facial gestures from video sequences and registers both the rigid and non-rigid motion of the face.
Treasure Hunt: Pickup Teams
We are developing a single heterogeneous human-robot team capable of effectively locating objects of interest (treasure) spread over a complex, previously unknown environment.
UGCV PerceptOR Integrated (UPI)
The UPI (UGCV PerceptOR Integrated) program integrates and enhances the results from UGCV and PerceptOR to increase the speed and autonomy of unmanned ground vehicles operating in complex terrain. By combining the inherent mobility of Spinner with advanced perception techniques including the use of learning and prior terrain data, the UPI program stresses system design across vehicle, sensors and software so that the strengths of one component compensate for the weaknesses of another.