Current Projects, Grouped by Subject
3D Head Motion Recovery in Real Time
A cylindrical model-based algorithm recovers the full motion (3D rotations and 3D translations) of the head in real time.
3D Visualization for EOD Robots
NREC developed a plug-and-play camera and range finder module that gives range information and assists operators of EOD (explosive ordnance disposal) robots during manipulation.
A Multi-Layered Display with Water Drops
With a single projector-camera system and a set of linear drop generator manifolds, we have created a multi-layered water drop display that can be used for text, videos, and interactive games.
A Projector-Camera System for Creating a Display with Water Drops
In this work, we show a computer vision based approach to easily calibrate and learn the properties of a single-layer water drop display, using a few pieces of off-the-shelf hardware.
Adaptive Traffic Light Signalization
As part of the Traffic21 initiative at CMU, we are investigating the design and application of adaptive traffic signal control strategies for urban road networks.
Advanced Sensor Based Defect Management at Construction Sites (ASDMCon)
This research project builds on, combines and extends the advances in generating 3D environments using laser scanners.
AFOSR PRET: Information Fusion for Command and Control: The Translation of Raw Data To Actionable Knowledge and Decision
We are conducting a multidisciplinary research effort to develop the next generation of information fusion systems.
Agent Storm
Agent Storm is a scenario where agents autonomously coordinate their team-oriented roles and actions while executing a mission.
Agent-based Composition of Behavioral Models (ABC)
This project focuses on accurately modeling human physical behaviors; these models are used 1) to create computer generated forces (CGFs) that exhibit human-like behavior and 2) to recognize physical behaviors performed by trainees in a MOUT (Military Operations in Urban Terrain) team training simulation.
Application Specific Integrated MEMS Process Service (ASIMPS)
Creating the design, fabrication and characterization support for enabling integration of MEMS and CMOS, spanning from low-cost prototyping to high-volume production.
Assisted Mining
NREC applied robotic sensors to the development of semi-automated continuous mining machines and other underground mining equipment.
Assistive Educational Technology
This project seeks to design, create, implement, test, and deploy interactive computer games and automated tutoring systems to motivate and enhance the education of children who are visually impaired or deaf.
Automated Floor Plan Modeling
This project is working to estimate 2D floor plans from sensed 3D data, and to establish criteria for evaluating the accuracy of automated floor plan modeling algorithms.
Automated Reverse Engineering of Buildings
The goal of this project is to use data from 3D sensors to automatically reconstruct compact, accurate, and semantically rich models of building interiors.
Autonomous Coordinated Motion and Mobility (ACMM)
The ACMM (Autonomous Coordinated Motion and Mobility) project integrates and develops controls for more efficient supervised teleoperation of mobile manipulators.
Autonomous Driving Motion Planning
The goal of this project is to develop efficient, high-performance motion planning methodologies for highway and urban autonomous driving.
Autonomous Haulage (AHS)
NREC and Caterpillar are jointly developing the Autonomous Haulage System (AHS), a commercial system for automating large off-highway trucks.
Autonomous Loading (ALS)
The Autonomous Loading System (ALS) completely automates the task of loading excavated material onto dump trucks.
Autonomous Mobile Assembly (ACE)
The ACE project is concerned with autonomous mobile assembly.
Autonomous Platform Demonstrator (APD)
The Autonomous Platform Demonstrator (APD) will develop, integrate and test next-generation unmanned ground vehicle (UGV) technologies.
Autonomous Robotic Manipulation
Carnegie Mellon’s Autonomous Robotic Manipulation (ARM-S) team develops software that autonomously performs complex manipulation tasks.
Autonomous Vehicle Health Monitoring
As DoD autonomous vehicles begin to take on more complex and longer-duration missions, they will need to incorporate knowledge about the current state of their sensing, actuation, and computing capabilities into their mission and task planning.
Autonomous Vehicle Safety Verification
This project investigates safety verification of autonomous driving behaviors.
Autonomous Vineyard Canopy and Yield Estimation
The research project aims to design and demonstrate new sensor technologies for autonomously gathering crop and canopy size estimates from a vineyard -- expediently, precisely, accurately and at high resolution -- with the goal of improving vineyard efficiency by enabling producers to measure and manage the principal components of grapevine production on an individual vine basis.
Autonomy & Control (RVCA)
NREC is implementing an end-to-end control architecture for unmanned ground vehicles (UGVs) to reduce integration risk in the US Army’s Future Combat Systems (FCS) program.
Backbone Fitting
Black Knight
NREC developed sensing, teleoperation and autonomy packages for BAE Systems' Black Knight, a prototype unmanned ground combat vehicle (UGCV).
BowGo
We have developed BOWGO (patent pending) - a new kind of pogo stick that bounces higher, farther and more efficiently than conventional devices.
Cargo UGV
NREC is teaming with Oshkosh Defense to develop autonomous unmanned ground vehicle technologies for logistics tactical wheeled vehicles used by the US Marine Corps.
Cargo UGV OCU
The Cargo UGV operator control unit (OCU) seamlessly controls one or more Cargo UGVs traveling in convoy formation.
ChargeCar
To develop electric vehicles (EVs) that are as efficient and cost-effective as possible, we have taken a systems-level approach to design, prototyping, and analysis to produce formally-modeled active vehicle energy management.
Chiara
The Chiara is a new, open source educational robot, developed at Carnegie Mellon University's Tekkotsu lab, that will be manufactured and sold by RoPro Design, Inc.
Circuit Extraction from MEMS Layout
We are developing a MEMS extraction module which reads in the geometric description of the layout structure and reconstructs the corresponding schematic.
Cluster: Coordinated Robotics for Material Handling
Planetary robots which perform assembly tasks to prepare for human exploration must be able to operate in unmodeled environments and in unanticipated situations. We are working on a system of mobile robots that perform precise coordinated maneuvers for transporting assembly materials. We are also developing an interface that allows an operator to step in at various levels of autonomy, providing the system with both the efficiency of an autonomous system and the reliability of a human operator.
Comprehensive Automation for Specialty Crops (CASC)
CASC is a multi-institutional initiative led by Carnegie Mellon Robotics Institute to comprehensively address the needs of specialty agriculture focusing on apples and horticultural stock.
Computer Vision Clinical Monitoring
NREC and Columbia University researchers investigated whether computer vision could be used to monitor patients in clinical trials for spinal muscular atrophy (SMA) therapies.
Constrained Controlled Coverage
Coverage of two-dimensional surfaces embedded in three dimensions, with emphasis on uniform coverage.
Container Handling
NREC developed autonomous and semi-autonomous robotic systems for moving containerized plants to and from the field.
Context-based Recognition of Building Components
In this project, we are investigating ways to leverage spatial context for the recognition of core building components, such as walls, floors, ceilings, doors, and doorways for the purpose of modeling interiors using 3D sensor data.
Context-sensitive bicycle and pedestrian detection and tracking
This project supports the development of algorithms to fuse sensor data from cameras, lidar, and radar for the purposes of detecting and tracking pedestrians and bicycles in proximity to autonomous cars.
Cooperative Attack Munition Real Time Assessment (CAMRA)
We are developing the algorithms required to achieve cohesive, flexible and robust coordination of large teams of autonomous Wide Area Search Munitions (WASMs) that can be controlled by a small number of human operators.
Cooperative Robotic Watercraft
This project's vision is to have large numbers of very inexpensive airboats provide situational awareness and deliver critical emergency supplies to flood victims.
Coordinators
The DARPA COORDINATORS program defines a challenging multi-agent application, with agents operating in a highly dynamic environment, where no agent has a complete view of the problem.
Coplanar Shadowgrams for Acquiring Visual Hulls of Intricate Objects
We present a practical approach to shape-from-silhouettes using a novel technique called coplanar shadowgram imaging that allows us to use dozens to even hundreds of views for visual hull reconstruction.
CSBots
We are developing a curriculum for the Introduction to Computer Science (CS1) course taught at two and four year colleges and for high school Computer Science courses.
CTA Robotics
This project addresses the problems of scene interpretation and path planning for mobile robot navigation in natural environments.
Daml-S (Semantic) Matchmaker
We have developed our Semantic Matchmaker, an entity that will allow web services to locate other services, using DAML-S, a DAML-based language for describing service capabilities.
Deception Detection
Learning facial indicators of deception
Depression Assessment
This project aims to compute quantitative behavioral measures related to depression severity from facial expression, body gestures, and vocal prosody in clinical interviews.
DEPTHX: Deep Phreatic Thermal Explorer (DEPTHX)
The DEPTHX: Deep Phreatic Thermal Explorer project is developing autonomy for underwater explorers to enable them to map their environment and plan and execute science investigations. The DEPTHX vehicle will explore the flooded caverns of the Zacaton Cenote in Mexico in 2006.
Detailed Wall Modeling in Cluttered Environments
The goal of this project is to develop methods to accurately model wall surfaces even when they are partially occluded and contain numerous openings, such as windows and doorways.
Discovery
We are developing an autoconfiguration mechanism for agents and infrastructures.
Distributed Robot Architectures (DIRA)
The primary objective of this project is to develop fundamental capabilities that enable multiple, distributed, heterogeneous robots to coordinate tasks that cannot be accomplished by the robots individually.
Distributed SensorWebs
The Sensor Web initiative develops and implements wireless technology for distributed sensing and actuation in horticultural enterprises.
Dragon Runner
NREC collaborated with Automatika, Inc. (AI) to develop Dragon Runner, an ultra-rugged, portable, lightweight reconnaissance robot for use by the U.S. Marine Corps in Operation Iraqi Freedom (OIF) for urban reconnaissance and sentry missions.
DRC Tartan Rescue Team
During the Fukushima-Daiichi nuclear accident, robots weren’t able to inspect the facility, assess damage, and fix problems. DARPA wants to change this.
Driver Awareness (DACD)
The Driver Awareness and Change Detection (DACD) system lets soldiers view their vehicle’s surroundings from any perspective.
Drug Discovery System
An advanced computer vision system identifies and classifies the behavioral effects of new drug compounds, speeding the work of drug discovery.
Dynamic Seethroughs: Synthesizing Hidden Views of Moving Objects
This project involves creating an illusion of seeing moving objects through occluding surfaces in a video. We use a 2D projective invariant to capture information about occluded objects, allowing for a visually compelling rendering of hidden areas without the need for explicit correspondences.
E57 Standard for 3D Imaging System Data Exchange
The goal of this project is to develop a vendor-neutral data exchange format for data produced by 3D imaging systems, such as laser scanners.
Electronic Commerce
EMBER
The Ember project uses multi-agent teams, comprised of autonomous and human agents, to achieve effective results under emergency situations.
Enhanced Real Time Motion Planning
This project aims to generate new representations and algorithms to improve on-road motion planning.
Enhanced Road Network Data from Overhead Imagery
This project aims to enhance existing digital maps by extracting structure from aerial images.
Enhanced Teleoperation (SACR)
NREC developed a real-time 3D video system to improve situation awareness in teleoperation and indirect driving.
Enhanced Teleoperation (Mini SACR)
NREC’s miniaturized SACR (Situational Awareness Through Colorized Ranging) system fuses video and range data from a small panoramic camera ring and scanning LADAR sensor to provide photo-realistic 3D video and panoramic video images of an EOD (explosive ordnance disposal) robot’s surroundings.
EOD Robot Operator Assist
NREC demonstrated an add-on situational awareness and autonomy package for EOD (explosive ordnance disposal) robots.
Event Detection in Videos
Our event detection method can detect a wide range of actions in video by correlating spatio-temporal shapes to over-segmented videos without background subtraction.
Expeditionary Target Identification and Exchange System (ETIES)
We are developing ETIES, a modular software system that provides target identification, handoff, and engagement.
Exploration of Planetary Skylights and Caves
EyeVision
Face Recognition
Recognizing people from images and videos.
Face Recognition Across Pose
Recognizing people from different poses.
Face Video Hallucination
A learning-based approach to super-resolve human face videos.
Facial Feature Detection
Detecting facial features in images.
Feature Selection
Feature selection in component analysis.
Feature-based 3D Head Tracking
A feature-based head tracking algorithm handles occlusions and fast motion of the face.
Ferret
We have developed a mine mapping robot.
Fine Outreach for Science
The Fine Outreach for Science, sponsored by the Fine Foundation, provides GigaPan units to scientists and documents the evolution of GigaPan as a research tool.
Fingersight
We are developing videotactile fingertip sensors which will enable people to interact with the visible world via their fingertips.
Free-Roaming Planar Motors
We are developing autonomous planar motors for precision positioning.
Frontal Face Alignment
This face alignment method detects generic frontal faces with large appearance variations and 2D pose changes and identifies detailed facial structures in images.
Generic Active Appearance Models
We are pursuing techniques for non-rigid face alignment based on Constrained Local Models (CLMs) that exhibit superior generic performance over conventional AAMs.
Geometric Mechanics of Locomotion
GigaPan
GigaPan is the newest development of the Global Connection Project, which aims to help us meet our neighbors across the globe, and learn about our planet itself.
Global Connection Project
The Global Connection Project develops software tools and technologies to increase the power of images to connect, inform, and inspire people to become engaged and responsible global citizens.
Golf Course Mowing
NREC collaborated with the Toro Company to develop a prototype autonomous mower that can be used in the maintenance of a golf course, sports field or commercial landscape.
GPS-Free Positioning (MINT)
NREC is developing MINT (Micro-Inertial Navigation Technology), a wearable navigation and localization aid.
Grace
The Grace project is a collaboration among several schools and research labs to design a robot capable of fully performing the AAAI Grand Challenge.
Hand Tracking and 3-D Pose Estimation
A 2-D and 3-D model-based tracking method can track a human hand that is moving rapidly and deforming against complicated backgrounds, and recover its 3-D pose parameters.
High-Aspect-Ratio CMOS Micromachining Process
We have developed an integrated CMOS- MEMS process in which electrostatically actuated microstructures with high-aspect-ratio composite-beam suspensions are fabricated using conventional CMOS processing.
Hopping Robots (RATS)
NREC is researching and developing all-terrain hopping robots for space, search and rescue, and defense applications.
Human Kinematic Modeling and Motion Capture
We are developing a system for building 3D kinematic models of humans and then using the models to track the person in new video sequences.
Human Motion Transfer
We are developing a system for capturing the motion of one person and rendering a different person performing the same motion.
Hybrid Safety System (HSS)
The Hybrid Safety System (HSS) enables humans and industrial robots to work together safely.
Hydroponic Automation
We are developing inexpensive robotic approaches towards hydroponic growing, which can increase overall crop yield.
Image Alignment
Image alignment with parameterized appearance models.
In-Situ Image Guidance for Microsurgery
We have developed a new image-based guidance system for microsurgery using optical coherence tomography (OCT), which presents a continuously updated virtual image in its correct location inside the scanned tissue. OCT provides real-time, 6-micron resolution images at video rates within a 2-6 mm axial range in soft or transparent tissue, and is therefore suitable for guidance to various targets in the eye. Ophthalmologic applications in general are diverse within the realm of anterior-segment surgery, whether for medical treatment or for scientific experimentation. Surgical manipulations, especially of the cornea, limbus, and lens may eventually be aided or enabled, and as an example we are presently working to guide access to Schlemm’s canal for treating Glaucoma.
Infantry Support (TUGV)
NREC designed, developed, field tested and successfully demonstrated a high-mobility tactical unmanned ground vehicle (TUGV) for the United States Marine Corps.
Inferring Adversarial Intent with Automated Exploratory Behaviors
This project examines how an autonomous vehicle can monitor and infer the intent of other vehicles (either human-driven or autonomous) around it. We are studying how the autonomous vehicle can select actions to perform that will explicitly test the inferred intent of the external vehicles.
Intelligent Diabetes Assistant
We are working to create an intelligent assistant to help patients and clinicians work together to manage diabetes at a personal and social level. This project uses machine learning to predict the effect that patient specific behaviors have on blood glucose.
Intelligent Electrocardiogram
An intelligent portable electrocardiogram (ECG) will automatically diagnose arrhythmias that could lead to sudden cardiac death (SCD).
Intelligent Infrastructure for Automated People Movers
We are evaluating sensors and autonomous control mechanisms that will allow for future public transportation systems to perceive the environment and operate more safely and efficiently.
Intelligent Monitoring of Assembly Operations (IMAO)
Our goal is to allow people and intelligent and dexterous machines to work together safely as partners in assembly operations performed within industrial workcells. To ensure the safety of people working amidst active robotic devices, we use vision and 3D sensing technologies, such as stereo cameras and flash LIDAR, to detect and track people and other moving objects within the workcell.
iSTEP
iSTEP (innovative Student Technology ExPerience) is a TechBridgeWorld program that provides Carnegie Mellon students with real-world experience in applying their knowledge and skills for creative problem solving in unfamiliar settings. The multidisciplinary iSTEP team comprises a mix of undergraduate and graduate students and recent alumni from various departments at Carnegie Mellon. The students work in a globally-distributed team, with some members working from campus and others living and working at the overseas partner location. Together with TechBridgeWorld, the iSTEP team collaborates with local partners on technology research projects for underserved communities. The iSTEP internship locations include Tanzania in 2009, Bangladesh in 2010 and Uruguay in 2011, with projects in assistive technology, literacy tools and information exchange.
Land Mine Detection and Neutralization
Laser Coating Removal
NREC and Concurrent Technologies Corporation (CTC) are designing an environmentally friendly system to remove coatings from aircraft.
Learning Optimal Representations
Learning optimal representations for classification, image alignment, visualization and clustering.
LIDAR and Vision Sensor Fusion for Autonomous Vehicle Navigation
The goal of this project is to investigate methods for combining laser range sensors (i.e., LIDARs) with visual sensors (i.e., video cameras) to improve the capabilities of autonomous vehicles.
LNG Pipe Vision (LPV)
A pipe-crawling robot visually inspects pipes in liquid natural gas (LNG) plants for corrosion.
Low Dimensional Embeddings
Finding low dimensional embeddings of signals optimal for modeling, classification, visualization and clustering.
Low-Flying Air Vehicles
We leverage perception technology originally developed for ground-based robot vehicles during 20 years of research at the Field Robotics Center. We combine this proven perception and control technology with aircraft-centric engineering and optimization.
Lunar Regolith Excavation and Transport
This research develops lightweight robotic excavators for digging and transporting regolith (loose soil) on the Moon.
Lunar Rover for Polar Crater Exploration (Scarab)
The Scarab lunar rover has been designed to carry a 1-meter coring drill and a payload of science instruments that can analyze the abundance of hydrogen, oxygen and other materials.
Manipulator Coordination (ACMM)
NREC implemented an autonomous manipulator for explosive ordnance disposal (EOD) robots that greatly simplifies manipulator use and coordinates the movements of the manipulator and its platform.
Message from Me
A collaboration between the CREATE Lab and the Pittsburgh Association for the Education of Young Children, Message from Me enables young children to better communicate with parents about their daytime activities at child care centers through the use of digital cameras, microphones, email, phone messaging and other technologies.
MIGSOCK
We are developing MIGSOCK, a Linux kernel module that re-implements TCP to make socket migration possible.
Minicrusher
Mini Crusher is a small, versatile robot inspired by the world-famous Crusher UGV.
Mobile Communications amongst Heterogeneous Agents (MoCHA)
We are developing a multi-agent system for "Anyware" communications and display.
Modeling Cultural Factors in Collaboration and Negotiation (MURI 14)
This multi-university cooperation project concentrates on modeling cultural factors in collaboration and negotiation. The goal of this project is to conduct basic research to provide validated theories and techniques for descriptive and predictive models of dynamic collaboration and negotiation that consider cultural and social factors.
Modelling Synergies in Large Human-Machine Networked Systems (MURI 7)
This multi-university cooperation project concentrates on modeling synergies in large human-machine networked systems. The goals of this project are to achieve the following: develop validated theories and techniques to predict behavior of large-scale, networked human-machine systems involving unmanned vehicles; model human decision-making efficiency in such networked systems; and investigate the efficacy of adaptive automation to enhance human-system performance.
Modular Snake Robots
Monitoring of Coastal Ocean Processes
This project is attempting to elucidate the basic principles governing environmental field model synthesis based on the integration of adaptive robot sampling with human decision-making.
MORSE
The MORSE project is a simulated range operation, designed to evaluate effectiveness of the cognitive models and agents, in order to improve individual and team performance.
Motion Planning for Snake Robots
Creating algorithms for computer control of hyper-redundant manipulators in high-dimensional configuration spaces.
Moving Object Detection, Modeling, and Tracking
The goal of this research is to better understand how vision and 3D LIDAR data can be combined for detecting and tracking moving objects.
Multi-cultural Human-Robot Interaction
We are exploring human-robot interaction in a mixed cultural setting. To do so, we have developed a robot, Hala, that runs in the Carnegie Mellon Qatar central reception and speaks both English and Arabic. Sponsored by QNRF.
Multi-People Tracking
Our multi-people tracking method can automatically initialize and terminate the paths of people and follow a variable number of people in cluttered scenes over long time intervals.
Multi-view Car Detection and Registration
This method can detect cars under occlusion and from varying viewpoints in a single still image by using a multi-class boosting algorithm.
Multimodal Data Collection
A multimodal database of subjects performing the tasks involved in cooking, captured with several sensors (audio, video, motion capture, accelerometer/gyroscope).
NavPal
Safe and independent navigation of urban environments is a key feature of accessible cities. People who have physical challenges need practical, customizable, low-cost and easily-deployable mobility aids to help them safely navigate urban environments. Technology tools provide opportunities to empower people with disabilities to overcome some day-to-day challenges. Our work focuses on designing, implementing, testing and deploying a smart mobile phone-based personalized navigation aid (NavPal) to enhance the navigation capability and thereby the independence and safety of visually impaired and deafblind people. The goal of the project is to use the smart phone navigation application to assist visually impaired and deafblind people to safely evacuate specific buildings in emergency situations. NavPal is a joint effort of TechBridgeWorld and the rCommerce lab, two research groups in the Robotics Institute at Carnegie Mellon University, in collaboration with Google Inc.
Near Regular Texture -- Analysis, Synthesis and Manipulation
We are developing near regular texture synthesis algorithms for improved natural appearances.
Negative Obstacle Detection
NREC is developing a perception system to accurately detect negative obstacles in the path of an unmanned ground vehicle (UGV).
Off-Road Autonomy (UPI)
The UPI program improved the speed, reliability, and autonomy of unmanned ground vehicles (UGVs) operating in extreme off-road terrain.
Optimal LIDAR Sensor Configuration
This project is developing a framework that allows objective comparison between alternative LIDAR configurations.
Orchard Spraying
NREC converted a John Deere tractor into an autonomous vehicle for spraying water in orchards.
Partial Order Scheduling Procedures
We are investigating the development, analysis and application of optimizing search procedures for generating plans and schedules that retain temporal flexibility.
Peat Moss Harvesting
NREC developed an add-on perception system for automating peat moss harvesting.
PeepPredict
We are applying machine learning techniques to model and compute long-term and short-term trajectories of people in a variety of settings.
Perception for LS3
NREC’s sensor system for DARPA’s Legged Squad Support System (LS3) enables LS3 to perceive its surroundings and autonomously track and follow a human leader.
Pipeline Explorer
NREC designed, built and deployed Pipeline Explorer, the first untethered, remotely-controlled robot for inspecting live underground natural gas distribution pipelines.
ProbeSight
We are using video cameras to give vision to the ultrasound transducer. This could eventually lead to automated analysis of the ultrasound data within its anatomical context, as derived from an ultrasound probe with its own visual input about the patient’s exterior. We are exploring both probe-mounted cameras, as well as optically-tracked stand-alone cameras which could view a larger portion of the patient's exterior.
Psychophysics of Haptic Interaction
Quantitative comparisons of human subjects performing peg-in-hole experiments with real, virtual, and real-remote haptic environments
Quality Assessment of As-built Building Information Models using Deviation Analysis
The goal of this project is to develop a method for conducting quality assessment (QA) of as-built building information models (BIMs) that utilizes patterns in the differences between the data within and between steps in the as-built BIM creation process to identify potential errors.
Rain and Snow Removal via Spatio-Temporal Frequency Analysis
Particulate weather, such as rain and snow, creates complex flickering effects that are irritating to people and confusing to vision algorithms. We formulate a physical and statistical model of dynamic weather in frequency space. At a small scale, many things appear the same as rain and snow, but by treating them as global phenomena, we can easily remove them.
Real-time Face Detection
A face detection system achieves an accurate detection rate and real-time performance by using an ensemble of weak classifiers.
Real-time Lane Tracking in Urban Environments
The purpose of this project is to develop methods for the real-time detection and tracking of lanes and intersections in urban scenarios in order to support road following by an autonomous vehicle in GPS-denied situations.
Real-Time Scheduling of ACCESS Paratransit Transportation
The goal of this project is to increase the effectiveness of paratransit service providers in managing daily operations through the development and deployment of dynamic, real-time scheduling technology.
Representation of As-built BIMs
This project is investigating how the imperfections of sensed 3D data can be represented within the context of the BIM framework, which was originally designed to handle only perfect data from CAD systems.
Resonator Synthesis
We have developed a resonator synthesis module; the first in a series of synthesis modules to overcome the lack of MEMS Cell Libraries.
Retract-like structures for SE(2) and SE(3)
A motion planning algorithm for rod-shaped robots, based on distance measurements.
RETSINA Semantic Web Calendar Agent
We are developing the RETSINA Semantic Web Calendar Agent to assist in organizing and scheduling meetings between several individuals, and to coordinate these meetings based on existing schedules maintained by MS Outlook.
Riverine Mapping
This project is developing technology to map riverine environments from a low-flying rotorcraft. Challenges include dealing with varying appearance of the river and surrounding canopy, intermittent GPS and a highly constrained payload. We are developing self-supervised algorithms that can segment images from onboard cameras to determine the course of the river ahead, and we are developing devices and methods capable of mapping the shoreline.
Roboceptionist
In collaboration with the Drama Department, we are developing technology for long-term social interaction.
Robot Diaries
A Robot Diary is a customizable robot designed to serve as a means of expression for its creator. Ultimately, the robot diary provides a unique means of exploring, expressing, and sharing emotions, ideas and thoughts while promoting technological literacy and informal learning.
Robotic Simulation (RSS)
NREC collaborated with RAND Corporation to incorporate NREC’s field-proven robotic mobility and planning software into RAND’s suite of high-resolution, force-on-force simulators.
Robots in Scansorial Environments (RiSE)
We are developing a bioinspired climbing robot with the unique ability to walk on land and climb on vertical surfaces.
Robust Autonomous Freeway Driving Behaviors
The goal of this project is to develop robust autonomous freeway driving behaviors that include: distance keeping; handling entrance ramps; high-density traffic lane selection and merging; reasoning about sensor confidence, degradation, and failure; and accommodation of human-in-the-loop interaction.
Robust Detection of Highway Work Zones
This project is developing computer vision algorithms to detect and classify highway work zones.
Row Crop Harvesting
The NREC row crop harvesting project targeted three levels of automation: “cruise control”, “teach/playback” and full autonomy.
Safety for UGVs
A flexible, behavior-based approach to safety lowers the risk of operating a large, fast-moving UGV.
Schematic Design for MEMS
We have developed nodal simulation software to enable a structured representation for MEMS design using a hierarchical set of MEM components.
Science Autonomy
The Science Autonomy project seeks to improve the accuracy and effectiveness of robotic planetary investigations by enabling automatic detection of relevant science features, classification of feature properties, and exploration planning that responds on the fly.
Search and Rescue
Giving Urban Search and Rescue workers more technological tools to help find and save victims of natural disasters.
Secure Agent Name Server
We are developing a secure agent name server which requires preregistration for deployment.
Sensabot Inspection Robot
NREC is developing an inspection robot for use in oil and gas production plants.
Sense and Avoid
We are developing Unmanned Aerial Vehicles (UAVs) that sense and avoid autonomously.
Shape Stable Body Frames
Sidewinding
Simple Hands
Designing simple grippers for autonomous general purpose manipulation.
Snackbot
The Snackbot is a mobile robot designed to deliver food to the offices at CMU while engaging in meaningful social interaction.
Snake Robot Design
Analyzing the factors that are of importance in designing a snake robot, and implementing new designs.
Software Package for Precise Camera Calibration
A novel camera calibration method increases the accuracy of both intrinsic camera parameters and stereo camera calibration by utilizing a single framework for square, circle, and ring planar calibration patterns.
Specialty Crop Automation
The Integrated Automation for Sustainable Specialty Crops Farming project teams the National Robotics Engineering Center (NREC), the University of Florida, Cornell University and John Deere to bring precision agriculture and autonomous equipment to citrus growers.
Spinner (UGCV)
With development of the Spinner unmanned ground vehicle, an NREC-led team delivered technical breakthroughs in mobility, mission endurance and payload fraction.
Strawberry Plant Sorter
NREC is developing an automated, machine vision-based strawberry plant sorter.
Stress Testing Autonomous Systems
Stress Tests for Autonomy Architectures (STAA) finds autonomy system safety problems that are unlikely to be discovered by other types of tests.
Sweep Monitoring (SMS)
NREC developed the Sweep Monitoring System (SMS) for training soldiers and demining personnel to use hand-held land mine detectors.
Swimming
TechCaFE
TechCaFE (Technology for Customizable and Fun Education) is a TechBridgeWorld program that provides educators with simple, customizable tools to make learning fun for students. TechCaFE currently offers tools for teaching and practicing English literacy, including CaFE Teach, a web-accessible content authoring tool that teachers use to create and modify English grammar exercises. Students learn content added through CaFE Teach via CaFE Web, a web-based practice tool, or CaFE Phone, a mobile phone game. Future work involves developing CaFE Play for customizing educational games. TechBridgeWorld has worked with or is currently working with primary school and university students, deaf and hard-of-hearing students, and migrant workers in Bangladesh, Qatar, Tanzania and the United States.
Teleoperation Booth
NREC has developed an immersive teleoperation system that allows operators to remotely drive an unmanned ground vehicle (UGV) more effectively over complex terrain.
Temporal Segmentation of Human Motion
Temporal segmentation of human motion
Temporal Shape-From-Silhouette
We are developing algorithms for the computation of 3D shape from multiple silhouette images captured across time.
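The core idea behind shape-from-silhouette, visual-hull carving, can be illustrated with a minimal sketch. This toy uses two hypothetical orthographic views and hand-made binary silhouettes, not the project's actual calibrated multi-view, time-integrated algorithm: a voxel survives only if it projects inside every silhouette.

```python
# Toy visual-hull carving: a voxel is kept only if it falls inside the
# silhouette seen by every camera. Two orthographic "cameras" look along
# the z- and x-axes; their silhouettes are a disc and a square, so the
# carved shape is the intersection of a cylinder and a slab.

def silhouette_front(x, y):
    # Camera looking along z: silhouette is a disc of radius 4.
    return x * x + y * y <= 16

def silhouette_side(z, y):
    # Camera looking along x: silhouette is an 8x8 square.
    return abs(z) <= 4 and abs(y) <= 4

def carve(n=9):
    # Voxel centers on an n^3 grid spanning [-4, 4] in each axis.
    coords = [-4 + 8 * i / (n - 1) for i in range(n)]
    hull = []
    for x in coords:
        for y in coords:
            for z in coords:
                if silhouette_front(x, y) and silhouette_side(z, y):
                    hull.append((x, y, z))
    return hull

hull = carve()
print(len(hull), "voxels survive carving")
```

The temporal version extends this by intersecting hulls carved from silhouettes captured at different times, which requires aligning the object's motion across frames.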
Terrain Estimation using Space Carving Kernels
This project uses information about the ray extending from the sensor to the sensed surface to improve terrain estimation in unstructured environments.
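The underlying intuition, that each range measurement constrains not just the hit point but every cell the ray passes through, can be sketched on a 2D grid. This is a toy vote-counting update with made-up coordinates, not the project's kernel-based estimator, which weights this evidence continuously:

```python
# Toy ray carving on a 2D grid: each range ray marks the cells it
# traverses as free-space evidence and its endpoint as occupied
# evidence. Real terrain estimators weight this evidence with kernels;
# here we just count votes per cell.

def ray_cells(x0, y0, x1, y1, steps=100):
    """Grid cells visited while stepping from (x0, y0) to (x1, y1)."""
    cells = []
    for i in range(steps + 1):
        t = i / steps
        c = (round(x0 + t * (x1 - x0)), round(y0 + t * (y1 - y0)))
        if c not in cells:
            cells.append(c)
    return cells

def carve_rays(sensor, hits):
    free = {}
    occupied = {}
    for hx, hy in hits:
        cells = ray_cells(sensor[0], sensor[1], hx, hy)
        for c in cells[:-1]:               # cells before the hit: free space
            free[c] = free.get(c, 0) + 1
        occupied[cells[-1]] = occupied.get(cells[-1], 0) + 1
    return free, occupied

free, occ = carve_rays(sensor=(0, 0), hits=[(9, 3), (9, 4), (9, 5)])
print(len(free), "free cells,", len(occ), "occupied cells")
```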
Text Miner
We are developing Text Miner, a system that automatically classifies news reports on a company's financial outlook.
Texture Replacement in Real Images
We are developing methods to replace some specified texture patterns in an image while preserving lighting effects, shadows and occlusions.
The CMUcam Vision Sensor (CMUcam)
We have developed CMUcam - a new low-cost, low-power sensor for mobile robots.
The Personal Rover Project
We have developed the Personal Rover - an 18"x12"x24" highly autonomous, programmable robot.
Tightly Integrated Stereo and LIDAR
The goal of this project is to use sparse, but accurate 3D data from LIDAR to improve the estimation of dense stereo algorithms in terms of accuracy and speed.
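One common way to exploit sparse-but-accurate depth in dense stereo (the project's actual formulation may differ) is to use LIDAR returns as priors that shrink each pixel's disparity search range. The numbers and helper functions below are hypothetical, illustrating the idea on a single scanline:

```python
# Toy 1D illustration: sparse LIDAR disparities at a few pixels are
# linearly interpolated into a dense prior, and stereo matching then
# searches only a narrow band around that prior instead of the full
# disparity range, improving both speed and robustness.

def interpolate_prior(sparse, width):
    """Linearly interpolate {pixel: disparity} samples across a scanline."""
    xs = sorted(sparse)
    prior = []
    for x in range(width):
        if x <= xs[0]:
            prior.append(sparse[xs[0]])
        elif x >= xs[-1]:
            prior.append(sparse[xs[-1]])
        else:
            lo = max(p for p in xs if p <= x)
            hi = min(p for p in xs if p >= x)
            t = (x - lo) / (hi - lo)
            prior.append(sparse[lo] + t * (sparse[hi] - sparse[lo]))
    return prior

def search_bands(prior, band=2, d_max=64):
    """Per-pixel disparity ranges: a band around the prior, clamped."""
    return [(max(0, int(p) - band), min(d_max, int(p) + band)) for p in prior]

prior = interpolate_prior({0: 10, 9: 20}, width=10)
bands = search_bands(prior)
full_cost = 10 * 64                      # evaluations for a full search
banded_cost = sum(hi - lo + 1 for lo, hi in bands)
print("search reduced from", full_cost, "to", banded_cost, "evaluations")
```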
Transforming Surface Representations to Volumetric Representations
This project’s goal is to transform the surface-based representations that are naturally derived from sensed data into volumetric representations needed by CAD and BIM.
Tree Inventory
A tree inventory system uses vehicle-mounted sensors to automatically count and map the locations of trees in an orchard.
Tunnel Mapping
NREC is pioneering research and development of a low power, small, lightweight system for producing accurate 3D maps of tunnels through its Precision Tunnel Mapping program.
TURBO-PLAN: An Interactive Mission Planning Advisor
UAV/UGV Air-Ground Collaboration
This project is concerned with the development of a distributed estimation system in which collaborating UAVs (Unmanned Aerial Vehicles) and UGVs (Unmanned Ground Vehicles) detect, track and estimate the location of a person, vehicle or object of interest on the ground.
Understanding and Modeling Trust in Human-Robot Interactions
This collaboration with the UMass Lowell Robotics Lab seeks to develop quantitative metrics to measure a user's trust in a robot as well as a model to estimate the user's level of trust in real time. Using this information, the robot will be able to adjust its interaction accordingly.
Unification of Component Analysis
This project aims to find the fundamental set of equations that unifies all component analysis methods.
Unmanned Ground Vehicle for Security (Terrascout)
We are developing autonomous ATVs to secure borders and facility perimeters.
Urban Challenge
Carnegie Mellon University and General Motors built an autonomous SUV that won first place in the 2007 DARPA Urban Challenge.
Urban Search and Rescue
We are developing Hybrid Teams of Autonomous Agents: Cyber Agents, Robots and People (CARPs) to address the challenges of urban search and rescue.
VANE
We are exploring a mix of physics-based and data-driven high-fidelity sensor modeling techniques. The goal is to develop a system that can provide much more realistic UGV simulation than current techniques. Such simulation will play a crucial role in speeding up the development cycle and in validating platforms. Sponsored by the US ACE ERDC.
Vehicle Classifier
A vision-based vehicle classifier uses machine learning techniques to identify cars and trucks in video images.
Vehicle Localization in Naturally Varying Environments
The purpose of this project is to develop methods for place matching that are invariant to short- and long-term environmental variations in support of autonomous vehicle localization in GPS-denied situations.
Visual SLAM for Industrial Robots
We are exploring algorithms to support visual mapping and localization for a robot vehicle operating in an industrial setting such as an LNG production plant. This work is sponsored by QNRF.
Visual Yield Mapping with Optimal and Generative Sampling Strategies
This research project aims to develop methods to automatically collect visual image data to infer, estimate and forecast crop yields, producing accurate, high-resolution yield maps across large scales. To achieve efficiency and accuracy, statistical sampling strategies are designed for human-robot teams that are optimal in the number, location and cost of samples and in the accuracy of the resulting crop estimates.