
New Projects at the Robotics Institute
Project LISTEN's Reading Tutor
Project LISTEN's Reading Tutor listens to children read aloud.
Visual Yield Mapping with Optimal and Generative Sampling Strategies
This research project aims to develop methods for automatically collecting visual image data to infer, estimate, and forecast crop yields, producing high-resolution, accurate yield maps across large scales. To achieve efficiency and accuracy, statistical sampling strategies are designed for human-robot teams that are optimal in the number of samples, the location of samples, the cost of sampling, and the accuracy of crop estimates.
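The sample-placement idea above can be sketched with a simple greedy heuristic. The snippet below uses farthest-point sampling, a common stand-in for coverage-optimal sample placement; it is a hypothetical illustration, not the project's actual algorithm.

```python
import math

def farthest_point_sampling(candidates, budget):
    """Greedily pick each new sample location as the candidate farthest
    from all locations chosen so far, spreading a fixed sampling budget
    across the field. (Illustrative heuristic only.)"""
    chosen = [candidates[0]]  # seed with an arbitrary first sample
    while len(chosen) < budget:
        best = max(
            (c for c in candidates if c not in chosen),
            key=lambda c: min(math.dist(c, s) for s in chosen),
        )
        chosen.append(best)
    return chosen

# 5x5 grid of candidate field locations, budget of 4 samples
grid = [(x, y) for x in range(5) for y in range(5)]
samples = farthest_point_sampling(grid, 4)
```

In practice the selection criterion would also weigh sampling cost and the expected reduction in yield-estimate uncertainty, not just spatial spread.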
Cell Tracking
We are developing fully automated, computer-vision-based cell tracking algorithms and a system that automatically determines the spatiotemporal history of dense populations of cells over extended periods of time.
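The core of any such tracker is linking detections between consecutive frames. A minimal sketch of frame-to-frame linking by greedy nearest-neighbor assignment is shown below; it assumes cells move less than `max_jump` pixels per frame, and the project's actual algorithms for dense populations are far more sophisticated.

```python
import math

def link_cells(prev, curr, max_jump=5.0):
    """Link cell detections between consecutive frames by greedy
    nearest-neighbor assignment. Each previous-frame cell claims the
    closest unclaimed current-frame detection within max_jump pixels."""
    links = {}
    taken = set()
    for i, p in enumerate(prev):
        # candidate current-frame detections, nearest first
        order = sorted(range(len(curr)), key=lambda j: math.dist(p, curr[j]))
        for j in order:
            if j not in taken and math.dist(p, curr[j]) <= max_jump:
                links[i] = j
                taken.add(j)
                break
    return links  # maps previous-frame index -> current-frame index

frame0 = [(10.0, 10.0), (50.0, 50.0)]  # cell centroids at time t
frame1 = [(12.0, 11.0), (49.0, 52.0)]  # cell centroids at time t+1
tracks = link_cells(frame0, frame1)
```

Greedy assignment breaks down when cells divide, merge, or pack densely, which is exactly where globally optimal assignment and appearance models become necessary.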
In-Situ Image Guidance for Microsurgery
We have developed a new image-based guidance system for microsurgery using optical coherence tomography (OCT), which presents a continuously updated virtual image in its correct location inside the scanned tissue. OCT provides real-time, 6-micron-resolution images at video rates within a 2-6 mm axial range in soft or transparent tissue, and is therefore suitable for guidance to various targets in the eye. Ophthalmologic applications are diverse within the realm of anterior-segment surgery, whether for medical treatment or for scientific experimentation. Surgical manipulations, especially of the cornea, limbus, and lens, may eventually be aided or enabled; as an example, we are presently working to guide access to Schlemm's canal for treating glaucoma.
Comprehensive Automation for Specialty Crops
CASC is a multi-institutional initiative led by the Carnegie Mellon Robotics Institute to comprehensively address the needs of specialty agriculture, focusing on apples and horticultural stock.
Face Recognition
Recognizing people from images and videos.
Riverine Mapping
This project is developing technology to map riverine environments from a low-flying rotorcraft. Challenges include the varying appearance of the river and surrounding canopy, intermittent GPS coverage, and a highly constrained payload. We are developing self-supervised algorithms that segment images from onboard cameras to determine the course of the river ahead, and we are developing devices and methods capable of mapping the shoreline.
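The segmentation step can be illustrated with a toy color-model classifier: fit a mean color from pixels known to be water and label every pixel by its distance to that mean. This is an assumed, simplified stand-in for the self-supervised segmentation described above.

```python
import math

def segment_river(image, seed_pixels, tol=30.0):
    """Label pixels as river (True) if their RGB color lies within
    `tol` of the mean color of seed pixels known to be water.
    image: list of rows of (r, g, b) tuples. (Toy illustration.)"""
    n = len(seed_pixels)
    mean = tuple(sum(p[c] for p in seed_pixels) / n for c in range(3))
    return [[math.dist(px, mean) <= tol for px in row] for row in image]

# one image row: a water-colored pixel and a canopy-colored pixel
image = [[(62, 88, 121), (200, 180, 150)]]
seeds = [(60, 90, 120)]  # pixels self-labeled as water
mask = segment_river(image, seeds, tol=30.0)
```

A real system would bootstrap the seed labels from cues such as reflections or motion rather than manual annotation, and would update the color model continuously as appearance changes along the river.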
Autonomous Vehicle Health Monitoring
As DoD autonomous vehicles begin to take on more complex and longer-duration missions, they will need to incorporate knowledge about the current state of their sensing, actuation, and computing capabilities into their mission and task planning.
Tunnel Mapping
NREC is pioneering research and development of a low power, small, lightweight system for producing accurate 3D maps of tunnels through its Precision Tunnel Mapping program.
Hydroponic Automation
We are developing inexpensive robotic approaches to hydroponic growing that can increase overall crop yield.
Lunar Ice Discovery Initiative
Icebreaker is a proposed mission to explore the south pole of the Moon.
Monitoring of Coastal Ocean Processes
This project is attempting to elucidate the basic principles governing environmental field model synthesis by integrating adaptive robot sampling with human decision-making.
We are using video cameras to give vision to the ultrasound transducer. This could eventually lead to automated analysis of the ultrasound data within its anatomical context, as derived from an ultrasound probe with its own visual input about the patient’s exterior. We are exploring both probe-mounted cameras, as well as optically-tracked stand-alone cameras which could view a larger portion of the patient's exterior.

Robust Autonomy

Weld Sequence Planning
Many manufacturing processes require considerable setup time and offer a large potential for schedule compression. For example, when Pratt & Miller Inc. constructed a military-spec HMMWV welded spaceframe with best-practice methods, the build took 89 billable hours: cutting square tubes, preparing them for welding, and then performing the final welding tasks to build the structure. On analysis, we discovered that the time actually spent on constructive processes was only 3% (slightly over two hours) of that total. Thus 97% of the overall time can potentially be eliminated. We built a system to exploit this opportunity that includes a welding robot, an augmented-reality projection system, and a laser displacement sensor. To test the system, we fabricated a custom variant of an HMMWV welded spaceframe in which the pre-process tasks were automated: BOM acquisition, component preparation, sequence planning, weld parameter optimization, fixture planning, and workpiece localization; finally, the work assignments were automatically delegated to a robot and a person. In the end, we were able to make the custom welded product nearly 9x faster than was previously possible. This achievement also translates economically to the bottom line, because the cost of raw materials was only 6% of the total cost. This talk will highlight the technical achievements that made this possible.
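The schedule-compression figures above imply a theoretical ceiling that is worth making explicit. The arithmetic below uses only the numbers quoted in the text; the ~33x figure is the implied upper bound on speedup, not a claimed result.

```python
# Sanity-check the schedule-compression figures quoted above.
total_hours = 89           # billable hours for the baseline build
constructive_frac = 0.03   # share of time spent on constructive work

constructive_hours = total_hours * constructive_frac  # ~2.7 hours
eliminable_frac = 1 - constructive_frac               # 97% overhead
speedup_ceiling = 1 / constructive_frac               # ~33x upper bound
```

The achieved "nearly 9x" speedup is therefore well inside the theoretical limit, leaving the remaining gap to fixturing, localization, and other tasks that cannot be fully eliminated.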

Fine Motion Planning for Assembly

Fine Motion Planning for Mobile Robots in Large Structure Assembly

We have designed and built inkjet-based bioprinters to controllably deposit spatial patterns of various growth factors and other signaling molecules on and in biodegradable scaffold materials to guide tissue regeneration.

We are developing implantable, wireless MEMs-based sensors for various applications, such as monitoring bone regeneration and left ventricular pressure, to provide timely feedback to clinicians to help make better decisions on timing of therapeutic interventions.