October 30, 2018   

by Tanya M. Anandan, Contributing Editor
Robotic Industries Association

Tomorrow’s robots are taking shape in today’s labs. From package delivery robots and self-driving cars, to surgical snakes and search and rescue robots, the innovations have profound implications. A year, 3 years, or maybe 5 to 10 years down the road, they could be at our door, in our home, or at our side when we need them most.

We will take a peek into our future through the lens of a few of the nation’s top academic institutions for robotics research. Each of these universities continues to attract and recruit renowned faculty to their robotics rosters. They have interdisciplinary master’s and doctoral programs in robotics. They spawn successful spinoffs and notable alumni who shake up the industry. They embrace a comprehensive approach to robotics research and education.

As we’ve said before, robotics is a multidisciplinary sport. The traditional areas of study (mechanical engineering, electrical engineering, and computer science) have broadened into biological systems and cognitive science. Many of the top university robotics programs are attacking robotics challenges from all angles and making fascinating discoveries along the way.

Human-Robot Interaction
Established in 1979, the Robotics Institute at Carnegie Mellon University (CMU) is one of the oldest in the country and the first to offer graduate programs in robotics. The institute encompasses the main facility on CMU’s campus in Pittsburgh, Pennsylvania, the National Robotics Engineering Center (NREC) in nearby Lawrenceville, and Robot City in Hazelwood with its 40 acres of robot testing fields at the site of a former steel mill. The university and the Greater Pittsburgh robotics scene have transformed the Steel City into Roboburgh, one of the well-established hubs we visited in a previous article on Robotics Clusters the Epicenter for Startups.

The Robotics Institute is under the CMU School of Computer Science. Researchers take a comprehensive approach to robotics, studying robot design and control, perception, robot learning, autonomy, and human-robot interaction (HRI).

In fact, a central theme according to Martial Hebert, the institute’s director, is HRI. “Much of the work in robotics has less to do with robots. It has to do with people,” he says. “Understanding people, predicting people, and understanding their intentions. Everything from understanding pedestrians for self-driving cars, to understanding coworkers in collaborative robot manufacturing, any application that involves interaction with people at any level.”

One of the ways CMU is trying to better understand people is by studying our body language. Researchers built a life-sized geodesic dome equipped with VGA cameras, HD cameras, and depth sensors that capture people’s movement as tens of thousands of 3D trajectories. The result is dynamic 3D reconstruction of people, their body poses, and their motions.

As humans, we speak volumes through our body movements, posture, and facial expressions without needing to utter a word. The CMU Panoptic Studio was built to capture these subtle nonverbal cues and create a database of our body language to help robots better relate to humans. The research is ongoing with datasets now available for full-body motions, hand gestures, and 3D facial expressions.
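
The geometry behind this kind of multi-camera reconstruction is classical triangulation: each camera that sees a body landmark contributes two linear constraints on that landmark’s 3D position. The sketch below shows the standard direct linear transform (DLT) with invented camera matrices; it illustrates the underlying principle, not CMU’s actual pipeline.

```python
import numpy as np

def triangulate(projections, pixels):
    """Recover a 3D point from its 2D observations in several calibrated
    cameras via the direct linear transform (DLT).

    projections: list of 3x4 camera projection matrices
    pixels:      list of (u, v) observations, one per camera
    """
    rows = []
    for P, (u, v) in zip(projections, pixels):
        # Each observation contributes two linear constraints on the point.
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.stack(rows)
    # Least-squares solution: right singular vector for the smallest
    # singular value, then dehomogenize.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

# Illustrative example: two cameras observing the point (0, 0, 5).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])               # camera at origin
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])   # shifted 1 m
X_true = np.array([0.0, 0.0, 5.0, 1.0])
uv = lambda P: (P @ X_true)[:2] / (P @ X_true)[2]
print(triangulate([P1, P2], [uv(P1), uv(P2)]))              # ~[0. 0. 5.]
```

With hundreds of synchronized views instead of two, the same least-squares machinery yields dense, robust reconstructions of moving bodies.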

Outside of academia, the work has not gone unnoticed. Facebook was inspired to open a lab in Pittsburgh and hired Yaser Sheikh, the CMU professor who developed the Panoptic Studio. It turns out nonverbal social interaction is just as important in the virtual world. Think Oculus Rift, the virtual reality technology now owned by Facebook.

Machine Learning and Robot Intelligence
Hebert says machine learning is another big area for CMU. The idea is to have a robot learn from its own actions and data, and learn to get better over time. Examples include manipulators that learn how to grasp, or drones that learn how to fly better. The recent collaboration between CMU and Honeywell Intelligrated to develop advanced supply chain robotics and AI will harness the power of machine learning to control and operate multiple robotic technologies in connected distribution centers.
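
Hebert doesn’t spell out the algorithms, but “learning from its own actions” has a simple canonical form: try an action, record the outcome, and shift future choices toward what worked. Here is a toy sketch in that spirit; the grasp types and success probabilities are all invented for illustration.

```python
import random

# Toy model of a robot improving its grasping from its own outcomes.
# The true success rate of each candidate grasp is hidden from the robot;
# it only observes whether each attempt succeeded.
true_success = {"top": 0.3, "side": 0.7, "pinch": 0.5}

attempts = {g: 0 for g in true_success}
successes = {g: 0 for g in true_success}

def choose_grasp(eps=0.1):
    """Epsilon-greedy: usually exploit the best estimate so far,
    occasionally explore; untried grasps get an optimistic estimate."""
    if random.random() < eps:
        return random.choice(list(true_success))
    return max(attempts,
               key=lambda g: successes[g] / attempts[g] if attempts[g] else 1.0)

for _ in range(2000):
    g = choose_grasp()
    attempts[g] += 1
    successes[g] += random.random() < true_success[g]   # simulated outcome

for g in true_success:
    rate = successes[g] / max(attempts[g], 1)
    print(f"{g:5s}  tried {attempts[g]:4d}  est. success {rate:.2f}")
# Most attempts concentrate on the "side" grasp as its estimate firms up.
```

In a distribution-center setting the states, actions, and learning machinery are far richer, but the loop (act, observe, improve) is the same.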

“It’s a material handling application that includes sorting packages and moving packages around distribution centers at very high rates,” says Hebert, without divulging much about the robots they are using. A more recent Honeywell partnership with Fetch Robotics may provide a clue.

“We’re past the stage where robots only do repetitive operations,” he says. “They have to be able to make decisions, they have to be able to adapt to the environment. Things are not always in the same place or where they should be. That’s where machine learning and autonomy come into play. All of it comes together in this type of application.”

The project is underway at the university’s NREC facility, where for over 20 years CMU researchers have helped conceptualize and commercialize robotic technologies for industrial and government clients. From laser paint removal robots for F-16 fighter jets, to unmanned bomb-clearing vehicle convoys, to autonomous crop harvesters, construction loaders and mining robots, the impact has been felt across numerous industries. Watch this video to see NREC technology in action, including a Caterpillar self-driving mining truck.

For all of CMU’s audacious work on autonomous vehicles, Hebert says the institute focuses less on the physical side of robotics research than on robot intelligence. This is a recurring theme we hear inside and outside academia: the attention is on algorithms, the software side of robotics. He offers an example.

Kaarta makes a mobile 3D scanning and mapping system that puts advanced simultaneous localization and mapping (SLAM) technology in the palm of your hand – in real time. The 3D digital model is generated right in front of you on a handheld touchscreen interface, without the need for post-processing. At its heart are patent-pending advanced 3D mapping and localization algorithms, a product of CMU’s robotics lab.

“Our contribution was to take massive amounts of data from the sensors and optimize it very quickly and efficiently,” says Hebert, crediting advanced mathematics and algorithms for the feat.
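
Kaarta’s algorithms themselves are proprietary, but the core operation in real-time mapping of this kind is registering each incoming scan against the map. Below is a minimal sketch of the closed-form rigid alignment (the Kabsch/Procrustes solution) that sits inside ICP-style registration; it assumes point correspondences are already known, which real systems must estimate iteratively.

```python
import numpy as np

def align_scan(source, target):
    """Best-fit rotation R and translation t mapping source points onto
    target points (known correspondences), via the closed-form
    Kabsch/Procrustes solution used inside ICP-style registration."""
    src_c = source - source.mean(axis=0)
    tgt_c = target - target.mean(axis=0)
    H = src_c.T @ tgt_c                      # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflections
    R = Vt.T @ D @ U.T
    t = target.mean(axis=0) - R @ source.mean(axis=0)
    return R, t

# Illustrative check: rotate and shift a random cloud, recover the motion.
rng = np.random.default_rng(0)
cloud = rng.normal(size=(100, 3))
theta = 0.2
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
moved = cloud @ R_true.T + np.array([0.5, -0.1, 2.0])
R, t = align_scan(cloud, moved)
print(np.allclose(R, R_true), np.round(t, 3))   # True [ 0.5 -0.1  2. ]
```

Running this alignment fast enough, on enough points, for every incoming scan is where the “massive amounts of data, optimized very quickly” challenge lives.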

Watch as Kaarta CEO Kevin Dowling demonstrates the system. He’s also a product of the doctoral program at CMU.

The system’s compact size and customizable imaging hardware allow it to be mounted to ground or aerial vehicles, such as drones, for interior and exterior use. Right now, the company’s products are directed toward infrastructure inspectors, surveyors, engineers, architects and facilities planners. But imagine the possibilities for first responders, hazmat teams, law enforcement, and down the road, for self-driving cars.

Search and Rescue Robots
Speaking of first responders, that brings us to a peculiar-looking robot developed in the labs at CMU. It goes where humans can’t.

The snake robot undulates its way into tight spaces and sticky situations, where the environment may be inhospitable and unpredictable for people, and even canines. Snakebot was on the ground for search and rescue efforts after a disastrous earthquake hit Mexico City last fall. Then this past spring, it was named Ground Rescue Robot of the Year.

Howie Choset, a professor of computer science and Director of the CMU Biorobotics Lab where Snakebot was developed, says they are proud of the robot and its accomplishments to date. Still, challenges remain.

“The challenges are how to move (locomotion), where to move (navigation), creating a map of the environment, and providing the inspector with good remote situational awareness,” says Choset.

A camera on the front of the robot helps an operator see the immediate area around the robot, but this has limitations in low-light conditions and highly cramped environments. In disaster scenarios, sensors for perceiving sound and smell may be more useful in detecting signs of life.

Watch CMU’s snake robot use its multi-joint body to climb poles, slither under fences, maneuver through pipes, roll into storm drains, and even swim.
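
Behaviors like these are typically generated as parameterized waves. A common form in the snake-robot literature is the serpenoid wave, in which each joint tracks a phase-shifted sinusoid; the sketch below is illustrative, with invented parameters, and is not CMU’s controller.

```python
import math

def serpenoid_angles(t, n_joints=16, amplitude=0.6, spatial_freq=0.8,
                     temporal_freq=2.0, steering_offset=0.0):
    """Joint angles (radians) for lateral undulation: every joint tracks
    the same sinusoid, phase-shifted along the body, so a wave travels
    from head to tail and pushes the snake forward against the ground."""
    return [steering_offset +
            amplitude * math.sin(temporal_freq * t + spatial_freq * i)
            for i in range(n_joints)]

# Sweeping t advances the wave; changing parameters changes the behavior.
# A nonzero steering_offset biases the body into an arc (turning), and
# running a second wave in the vertical plane yields rolling and climbing.
for t in (0.0, 0.25, 0.5):
    print([f"{a:+.2f}" for a in serpenoid_angles(t)[:6]])
```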

Choset envisions snake robots destined for manufacturing applications such as inspecting tight spots inside aircraft wings, installing fasteners inside wings or boat hulls, and painting inside car doors. He also hopes to see these snake robots at work in the nuclear industry.

Medical Robotics
Another snake-like robot developed in the Biorobotics Lab has made significant headway in medical robotics, another noteworthy area of study for CMU. Unlike the snake robots used in search and rescue or industrial applications, the surgical snake is a cable-driven robot. Choset explains the difference.

“Imagine a marionette that has little wires that pull on different parts of the doll. A cable-driven robot is one where internal cables pull on the links to cause the joints to bend. The motors don’t have to be on board, so you can get away with a lighter mechanism, or in my case, use bigger motors.”

This is in contrast to the locomoting robot that crawls through pipes, where all of the motors are on board.
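
The marionette analogy maps directly to simple kinematics: each cable’s length change is the sum, over the joints it crosses, of pulley radius times joint rotation, which is why the motors can stay at the base. A hedged sketch, assuming a uniform pulley radius for simplicity:

```python
# Simplified tendon kinematics for a cable-driven mechanism: a cable
# routed over a joint's pulley changes length by (radius x rotation),
# summed over every joint it crosses. Because only thin cables run
# through the arm, the motors that reel them can sit at the base.
PULLEY_RADIUS = 0.004   # meters; assumed uniform for this illustration

def cable_displacement(joint_deltas, routing):
    """joint_deltas: change in each joint angle, in radians.
    routing: +1, -1, or 0 per joint, i.e. which side of the joint the
    cable passes (sign decides reel-in vs. pay-out) or not routed at all."""
    return sum(side * PULLEY_RADIUS * dq
               for side, dq in zip(routing, joint_deltas))

# An antagonistic pair across a 3-joint section: as one cable is reeled
# in, its partner must pay out the same length.
deltas = [0.2, -0.1, 0.3]                        # radians
print(cable_displacement(deltas, [+1, +1, +1]))  #  0.0016 m reeled in
print(cable_displacement(deltas, [-1, -1, -1]))  # -0.0016 m (paid out)
```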

“I think minimally invasive surgery is a great area for robotics,” says Choset. “The challenges are access, how to get to the right spots, and once you’re there, developing tools, end effectors and other mechanisms to deliver therapies and perform diagnostics. Situational awareness, or being able to really understand your surrounding environment, is the next step after that.”

The biorobotics team at CMU envisions minimally invasive no-scar surgery in the snake robot’s future. But in the meantime, the technology has already found success in transoral robotic surgery and has been licensed to Medrobotics Corporation. Professor Choset is a cofounder of the Massachusetts-based company. RIA will dive deeper into this technology next month when we focus on surgical robotics.

When a robot is attached to the body or inside a human body, medical robotics takes human-robot interaction to new levels. But what about when humans are the robot’s passengers? That’s basically the scenario with self-driving cars. Let’s take a ride to the Motor City.

Self-Driving Cars
The University of Michigan may be world renowned for its football program, but it is landmark self-driving vehicle research that put Michigan Robotics on the map – literally. Just 40 miles outside of Detroit, the Mcity Test Facility is a one-of-a-kind proving ground for testing connected, autonomous vehicle technologies in simulated urban environments.

The 32-acre site on U-M’s campus in Ann Arbor has miles of roads with intersections, traffic signs and signals, sidewalks, simulated buildings, obstacles such as construction barriers, and even the occasional “dummy” to test pedestrian avoidance technology. It’s the quintessential outdoor lab for researchers envisioning a local network of connected autonomous vehicles by 2021.

Watch as researchers demonstrate the advantages of connected self-driving vehicles and how augmented reality is helping them test these technologies more efficiently and safely.

“Self-driving is probably what we’re known for the most,” says Dmitry Berenson, a professor of engineering at U-M. “That’s a real strength here. We have the U-M Transportation Research Institute (UMTRI) that has been conducting self-driving work for many years, even before it was popular. We’re very close to the auto manufacturers, so we can very quickly set up meetings and integrate with them, and get feedback. Well-established relationships with Toyota and Ford are pushing self-driving technology forward.”

We first met Berenson when he was at Worcester Polytechnic Institute leading research in machine learning and manipulation planning. Back then, we were discussing his work with motion planning algorithms for a humanoid robot stacking boxes. Check out Our Autonomous Future with Service Robots. Now, Berenson is Director of the Autonomous Robotic Manipulation (ARM) Lab, which he founded two years ago when he joined U-M. Algorithms are still his passion.

“Michigan is doing something really important, which is pushing the boundaries on algorithms to get robots into unstructured environments in the real world,” says Berenson. “We have people working on this in terms of aerospace applications, all the way to legged locomotion, to manipulation like my group, to self-driving. There’s a huge push in self-driving technology. Some of our faculty have startups in this area.”

U-M Professor Edwin Olson cofounded May Mobility in 2017. The startup’s autonomous shuttle service is currently operating in downtown Detroit and charting new territory in other Midwestern cities. As Director of the APRIL Robotics Lab, Olson is known for his work in perception algorithms, mapping, and planning. The licensed intellectual property behind these self-driving shuttles was developed in his lab.

Replacing diesel buses in some cases, the six-passenger electric shuttles navigate city streets on specific routes in business districts, or on corporate and college campuses. This follows last year’s pilot in which May Mobility shuttled employees of Bedrock Detroit and parent company Quicken Loans between their offices and the city’s parking garages.

A surge in funding from major investors such as BMW i Ventures, Toyota AI Ventures, and Y Combinator, plus a new partnership with tier-one auto supplier Magna International, could accelerate a nationwide rollout for Olson’s autonomous shuttle startup.

Another U-M faculty member, Ryan Eustice, is Senior Vice President of Automated Driving at Toyota Research Institute. He’s known for his work in SLAM technology.

“SLAM is crucial technology for self-driving cars,” says Berenson. “They don’t know where they are without it.”

Eustice is Director of the Perceptual Robotics Laboratory (PeRL), a mobile and marine robotics lab at U-M focused on algorithm development for robotic perception, navigation, and mapping. He worked on the Next Generation Vehicle (NGV) project with Ford Motor Company, the first automaker to test an autonomous vehicle at Mcity. SLAM meets snow, check it out.

Robotics Raises the Roof
Ford has a legacy stake in Michigan Robotics. A $75 million facility currently under construction on the U-M Ann Arbor campus will be named the Ford Motor Company Robotics Building in recognition of the automaker’s $15 million gift to the engineering college. The 140,000-square-foot building will house a three-story fly zone for autonomous aerial vehicles, an outdoor obstacle course for legged robots, and a high-bay garage space for self-driving cars. Ford will also establish an on-campus research laboratory occupying the fourth floor, where the automaker’s researchers will be able to easily collaborate with the university’s faculty and provide hands-on experiences for students.

The new facility will also include classrooms, offices and lab spaces. Bringing students, faculty and researchers together under one roof in a space dedicated to robotics will encourage fluid interaction and the exchange of ideas. The effect is a culture designed to study the problems and solutions of robotics from all angles, including mechanics, electronics, perception, control and navigation, an approach university leadership refers to as “full spectrum autonomy.” The building is slated for completion in early 2020.

Toyota Research Institute has also dedicated funding to U-M research efforts. “They value our robotics and self-driving technology not because they think it will advance their interests tomorrow, but 5 or 10 years down the road,” says Berenson. “My ARM Lab has one of these grants.”

Robot Manipulation and Grasping
In his lab, Berenson is developing algorithms for robotic motion planning and manipulation. The research includes grasping in cluttered environments and manipulating deformable objects, such as rope or cloth, which are malleable and change shape when handled.

“We have deformable objects, we have piles of clutter, some of which we may have seen before, some we haven’t. We have to manipulate them anyway,” says Berenson. “We can’t wait for someone to perfectly model the environment, and give us all the parameters and tell us where everything is, and provide a CAD model of every object. That’s great in a factory, but it won’t work in somebody’s home.

“You will never have a perfect model of how this rope or cloth will behave. We have to be able to manipulate despite that uncertainty,” he continues. “For example, we’re able to put a placemat on a table in a particular position and avoid obstacles. We can do those types of tasks without knowing most of the parameters of the deformable object, like its stiffness or the friction values.”

Earlier this year, Berenson received a National Science Foundation CAREER award to improve the ability of autonomous robots to handle soft, deformable objects. Berenson believes the challenges involved in picking up deformable objects such as cables, clothing, or even muscle tissue can be overcome by representing the object and task in terms of distance constraints, then formulating control and planning methods on top of that representation. Robots enabled in this way could perform tedious tasks in surgery or make hospital beds, and, in home service, handle clothing and prepare food.
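
The distance-constraint idea can be made concrete with a toy rope model: represent the rope as a chain of points whose neighbors may never separate beyond a fixed length, and repeatedly project violated pairs back onto the constraint, in the spirit of position-based dynamics. This sketch illustrates the representation only; it is not the ARM Lab’s code, and all parameters are invented.

```python
import numpy as np

def enforce_distance_constraints(points, max_dist, iters=20):
    """Project a chain of 3D points so no two neighbors are farther
    apart than max_dist: a crude model of an inextensible rope that
    needs no stiffness or friction parameters."""
    pts = points.copy()
    for _ in range(iters):
        for i in range(len(pts) - 1):
            d = pts[i + 1] - pts[i]
            dist = np.linalg.norm(d)
            if dist > max_dist:
                # Pull both endpoints toward each other equally.
                corr = 0.5 * (dist - max_dist) * d / dist
                pts[i] += corr
                pts[i + 1] -= corr
    return pts

# Drag one end of a slack rope; constraint projection makes the rest of
# the rope follow, with no model of the material at all.
rope = np.stack([np.linspace(0, 1, 11), np.zeros(11), np.zeros(11)], axis=1)
rope[-1] = [2.5, 0.0, 0.0]               # a gripper moves the last point
rope = enforce_distance_constraints(rope, max_dist=0.1)
print(np.round(rope[:, 0], 2))           # points re-spread along x
```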

“We’re really excited about this work because we believe it will push the frontier on what robots can do with very limited information, which is essential for getting robots to work in people’s homes or in natural environments.”

The ARM Lab is also working on algorithms for shape completion. This is particularly advantageous when you have a cluttered environment like a pile of clothing or other objects that need to be sorted.

“If you have a laser scanner and you scan something, you only see the front part of it. You have no idea what’s behind that or how far the object extends,” says Berenson. “We’ve been working on algorithms that allow us to basically fill in the part of the object that we don’t see.”

His team is building on work already done by other researchers in deep neural networks for 3D reconstruction. Through machine learning, the algorithm has learned to look at a partial scan of an object and infer the parts of the shape it cannot see, having studied thousands of previously scanned objects. It turns out many household objects are very similar, so Berenson says the predictions are quite good for that class of items.
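
The lab’s specific networks aren’t reproduced here, but the shape-completion setup is easy to sketch: train an encoder-decoder to map a partial occupancy grid (what the scanner saw) to the full grid (the complete object). A minimal PyTorch-style sketch, with invented layer sizes and random stand-in data:

```python
import torch
import torch.nn as nn

class ShapeCompletion(nn.Module):
    """Toy encoder-decoder: partial 32^3 occupancy grid in, completed
    occupancy (per-voxel logits) out. All sizes are illustrative."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 16, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv3d(16, 32, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(16, 1, 4, stride=2, padding=1),    # 8 -> 32
        )
    def forward(self, partial):
        return self.decoder(self.encoder(partial))   # logits per voxel

model = ShapeCompletion()
loss_fn = nn.BCEWithLogitsLoss()        # score prediction against full shape
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# One illustrative training step on random stand-in data: `partial` is
# what the laser scanner saw, `full` is the ground-truth complete object.
partial = (torch.rand(8, 1, 32, 32, 32) > 0.9).float()
full = (torch.rand(8, 1, 32, 32, 32) > 0.8).float()
loss = loss_fn(model(partial), full)
opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```

Trained on real scan pairs rather than noise, the decoder learns the regularities of everyday shapes, which is exactly why household objects complete so well.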

The research team is using some sophisticated robotic technology to test and verify their motion planning and manipulation algorithms. You will see a pair of KUKA LBR iiwa robot arms equipped with Robotiq 3-finger adaptive grippers manipulating everyday items of different shape, weight, and fragility. Watch the ARM Lab robots in action.

As robots begin to permeate our daily lives, disruption will come in many forms, not just technological. Social, ethical, legal and economic issues will raise concerns about privacy, liability, potential job loss, continued learning, and social conventions. One university is taking a closer look at the societal impact of robotics innovation.

Robot Ethics and Policy
At the heart of Corvallis, a city in central western Oregon about 50 miles from the Pacific Coast, we find a hidden gem. The city sits in the Willamette River Valley, where the soil is famously fertile. Fertile ground, too, for a rising star in the robotics field.

Oregon State University (OSU) is the city’s largest employer and home to the Collaborative Robotics and Intelligent Systems (CoRIS) Institute. Established in 2017 by OSU’s College of Engineering, CoRIS was created to advance the design, development, and deployment of robots and intelligent systems able to interact seamlessly with people.

“We’re moving away from the idea that robots are over there behind the fence and people are on this side,” says Kagan Tumer, Director of CoRIS and a professor in the School of Mechanical, Industrial and Manufacturing Engineering at Oregon State. “We’re interacting with robots everywhere, from factories, to work, to even in homes now we’re starting to see AI and robots bought by consumers. Understanding how people interact with a robot, whether it’s a simple vacuum cleaning robot or a home-care-level talking robot, there are a lot of questions about what it means to interact with a robot.”

OSU researchers strive to address these questions through a strong collaborative research culture that is the hallmark of CoRIS. Multiple disciplines come together under one roof. There is also a unique focus on ethics and policy.

“That’s something we take very seriously,” says Tumer. “Usually institutions like this have a research director and an academic director. We specifically have a policy and ethics director for the deployment side because we think it’s critical. We are one of the only places I know that have graduate-level robot ethics courses. We want our graduates to not only be technologically savvy, but also understand the implications of the robotics technology they put out into the world.”

Oregon State’s CoRIS emphasizes the human element of robotics and AI. Researchers explore the ethical, political and legal implications of robotics to understand the scope and scale of the social and technological disruption, and its impact on the future of science, technology and society.

Robotic Legged Locomotion
Ethics and policy become more important as robots begin to share the same spaces as humans. Soon they will walk among us.

Cassie, a bipedal robot developed in the labs at Oregon State, garners a lot of attention as it strolls around campus. The robot may resemble a pair of ostrich legs, but biomimicry was not the mission. Cassie’s developers simply wanted to create the most stable legged platform for varied terrain and unpredictable environments.

Cassie’s arrival at OSU was no accident. In an effort to recruit top robotics talent, Tumer sought out Jonathan Hurst, who holds a doctorate in robotics from Carnegie Mellon. Hurst became the first Oregon State faculty member devoted to robotics.

Hurst’s passion is legged locomotion, specifically the passive dynamics of mechanical systems. He established the Dynamic Robotics Laboratory, where his group designed and built ATRIAS, an early prototype of Cassie. ATRIAS gets its passive dynamics from series-elastic fiberglass springs, which act as both a suspension system and a means of mechanical energy storage. The technology is based on the spring-mass model, a theory associated with the energy-efficient bouncing gait of animals. Imagine jumping on a pogo stick. Energy is stored in the spring when it’s compressed; when it expands, the energy is released and you are thrust upward.
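
The spring-mass model is simple enough to simulate in a few lines: during stance the leg spring decelerates the body and stores energy, then returns it at liftoff, exactly the pogo-stick exchange described above. A toy vertical-hopping sketch with illustrative parameters:

```python
# Minimal vertical spring-mass ("pogo stick") hopper: gravity always
# acts; the leg spring pushes back only while compressed (stance).
m, k, g = 30.0, 8000.0, 9.81      # kg, N/m, m/s^2 (illustrative values)
l0 = 1.0                          # rest leg length, m
y, v = 1.3, 0.0                   # start in flight, above touchdown height
dt = 0.0005                       # s, integration step

for step in range(20000):         # simulate 10 seconds
    spring = k * (l0 - y) if y < l0 else 0.0   # spring force during stance
    a = spring / m - g
    v += a * dt                   # semi-implicit Euler: conserves energy well
    y += v * dt
    if step % 2000 == 0:
        phase = "stance" if y < l0 else "flight"
        print(f"t={step * dt:5.2f} s  height={y:.3f} m  ({phase})")
# The hop height stays nearly constant cycle after cycle: energy stored in
# the compressed spring is returned at liftoff, which is what makes the
# spring-mass gait so efficient.
```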

“ATRIAS was a science experiment,” says Tumer. “It was never meant to be a robot in the real world. It was testing the idea of the models and the way that the passive dynamics of the robot works, and whether you can actually design a robot with very simple principles that would duplicate animal gait. Cassie is the outcome of that experiment.”

With control over two more joints in each of its legs compared to ATRIAS, Cassie is able to maintain its balance even when standing still or crouching. Full range of motion in the hips enables Cassie to steer. It’s also half the weight of its predecessor but twice as powerful and more energy efficient. A sealed system allows it to operate in rain and snow. Many of Cassie’s components were custom-developed in OSU’s lab when the team was unable to find off-the-shelf components that were small enough or had the required performance.

Oregon State spinoff Agility Robotics is marketing Cassie as a robust bipedal research platform for academic groups working on legged locomotion. The California Institute of Technology and University of Michigan are testing algorithms on Cassie to develop next-gen prosthetics and exoskeletons for persons with paraplegia. Beyond personal/assistive robotics, Tumer says the creators envision a career path for Cassie in package delivery and search and rescue applications.

“We’re not that far now from having driverless vehicles,” he says. “If you can imagine a delivery truck that drives itself to your neighborhood, how do you handle that last 100 to 300 feet? That’s when legged robots pop out of the truck, deliver the package to your door, go back to the truck and drive to the next stop.”

Cassie’s creators are working on arm-like appendages to carry those packages and to right itself in case of a fall. Because Cassie will eventually need “eyes” to see your front door, vision and other sensors are on the agenda.

“If you look at the area around any house, from the curb to the sidewalk, to the slight slope of the driveway, to one or two steps in front of the house, it’s a hazard course for any type of wheeled robot,” says Tumer. “When you can pair a legged robot with a self-driving truck, you’re done. Being able to walk in environments designed for humans is going to be a big thing.”

Investors like Andy Rubin’s Playground Global, Sony Innovation Fund, and Robotics Hub are banking on it. Albany, Oregon-based Agility Robotics raised $8 million in a Series A round in early 2018. In June, they opened a second location in Pittsburgh, where the startup plans to take advantage of the area’s strong talent pool in robotics. Meanwhile, research on future iterations of Cassie continues at Oregon State.

Multi-Robot Coordination
Another significant research area for OSU is multi-robot coordination, Tumer’s main focus. He says many interesting real-world scenarios require multiple robots, or humans and robots, to work together. Search and rescue operations are one example.

“You might have unmanned aerial vehicles (UAV) looking for debris. You might have unmanned ground vehicles (UGV) moving around. You may have legged robots. You will have a lot of components doing a lot of different operations,” explains Tumer. “The critical aspect is how we determine what each one of those robots should be doing so the team does what you want it to do. Determining the objectives that you need to provide to all of these different robots is a key part of our research.”

Tumer says the different robots in a multi-robot team would need to have some level of awareness of the task they are trying to achieve, so they can determine how to best contribute to the team. His group is trying to impart that high level of coordination capability to robots.
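
One way to see why this is hard: a robot choosing the task it is individually best at can be the wrong choice for the team. The toy allocation below (all robots, tasks, and scores invented) evaluates assignments against a single team objective in which each task counts only its best-assigned robot:

```python
import itertools

robots = ["uav_1", "uav_2", "ugv_1"]
tasks = ["aerial_survey", "debris_search", "victim_transport"]

# value[robot][task]: how well each robot performs each task (illustrative).
value = {
    "uav_1": {"aerial_survey": 9, "debris_search": 4, "victim_transport": 0},
    "uav_2": {"aerial_survey": 8, "debris_search": 6, "victim_transport": 1},
    "ugv_1": {"aerial_survey": 2, "debris_search": 5, "victim_transport": 7},
}

def team_objective(assignment):
    # A task scores what its best-assigned robot can do; an uncovered task
    # scores zero, so piling robots onto one task wastes capability.
    return sum(
        max((value[r][t] for r, t in assignment.items() if t == task), default=0)
        for task in tasks)

# Selfish baseline: every robot picks its individually best task.
selfish = {r: max(tasks, key=lambda t: value[r][t]) for r in robots}

# Team optimum: search all joint assignments for the best team score.
best = max(
    (dict(zip(robots, combo))
     for combo in itertools.product(tasks, repeat=len(robots))),
    key=team_objective)

print("selfish:", selfish, "->", team_objective(selfish))  # both UAVs survey: 16
print("team:   ", best, "->", team_objective(best))        # all tasks covered: 22
```

Exhaustive search is only viable for toy teams; Tumer’s group studies how to give each robot an objective whose local pursuit produces the coordinated outcome without a central search.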

Underwater Robotics
Tumer’s research in multi-robot coordination may also apply to underwater robots. Oregon State has a strong oceanography department that collaborates with CoRIS, particularly through OSU professor Geoff Hollinger, who focuses on underwater autonomy.

“There’s a lot of underwater science that we do with robots,” says Tumer. “This is all about the health of the ocean, looking at how rivers bring the water and sediment, and how they propagate. There are a lot of research questions about how our environment is affected by everything we do, from runoff from rivers, to algae, to everything else. We have teams of intelligent gliders out there trying to collect information for our scientists.”

These “intelligent gliders,” or autonomous underwater vehicles (AUVs), look like small torpedoes but have no engine. Instead of a propeller, they glide, using changes in buoyancy to move through the water column. On-board sensors collect data on water salinity, temperature, nutrients and oxygen concentrations at various depths. The gliders can autonomously change their buoyancy to submerge to depths of up to 1,000 meters, then surface hours later to broadcast their data and location via satellite. They repeat this process every six hours or so, collecting data 24 hours a day for weeks at a time. Check out the video.
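
The sawtooth mission profile described above is easy to model as a simple cycle: pump ballast to become denser than seawater and sink, become lighter and rise, then report at the surface. All numbers in this sketch are invented, though at a typical glider-scale vertical speed the cycle time lands near the six-hour cadence mentioned above.

```python
# Toy model of an underwater glider's sawtooth mission profile: the
# vehicle pumps ballast to change its density relative to seawater,
# sinking to depth and rising back to broadcast. Numbers illustrative.
DIVE_DEPTH = 1000.0      # meters
VERTICAL_SPEED = 0.1     # m/s, a plausible glider-scale vertical rate
SURFACE_TIME = 600.0     # seconds to fix GPS and transmit via satellite

def mission_cycle():
    t = 0.0
    log = []
    # Descend: ballast pumped in, vehicle slightly denser than seawater.
    t += DIVE_DEPTH / VERTICAL_SPEED
    log.append(("reached depth, sampling on the way down", t / 3600))
    # Ascend: ballast pumped out, vehicle slightly lighter than seawater.
    t += DIVE_DEPTH / VERTICAL_SPEED
    log.append(("surfaced, broadcasting data and position", t / 3600))
    t += SURFACE_TIME
    return t, log

cycle_seconds, events = mission_cycle()
for msg, hours in events:
    print(f"{hours:4.1f} h  {msg}")
print(f"cycle = {cycle_seconds / 3600:.1f} h")   # near the six-hour cadence
```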

Oregon State researchers working collaboratively from different disciplines in marine sciences and robotics are also equipping undersea gliders with bioacoustic sensors to identify different kinds of marine animals using their unique acoustical signatures. This helps scientists study the distribution of predators and prey, and their relationship to oceanic conditions.

Advanced control algorithms developed by Hollinger and the Robotic Decision Making Laboratory allow the gliders and other AUVs to more efficiently navigate strong currents and environmental disturbances, and respond to environmental cues. Enabling intelligent AUVs to gather information in environments outside the reach of human divers has long-term benefits for sustaining the fishing industry, protecting marine life, and understanding climate change.

As more robotic systems enter our waterways, streets and homes, researchers say we will need formal means of validation and testing to support deployment on a larger scale.

Robotics Validation and Testing
Carnegie Mellon’s Hebert thinks the not-so-exciting but perhaps most critical research area for robotics over the next 5 to 10 years will be integration, validation and testing. This is especially critical as human-robot interaction becomes a part of our daily lives. To illustrate his point, Hebert draws an analogy to the aircraft industry.

“The flying public feels safe in a plane because we have 150 years of experience with a technology that has been validated and tested,” he says. “We don’t yet have those tools for AI and robotics. How do we do this for systems that learn over time, that adapt, whose behavior depends on the data they use to learn? How do we do this for a system that has complex interactions with people?

“For this relatively new field of robotics, we don’t yet have those engineering tools that allow us to guarantee performance, guarantee behavior of those systems,” says Hebert. “It’s what we need to be able to really use them in everyday applications. It’s this collection of best practices and formal tools that we need to get to a system you can actually trust.”

Trust will play a critical role in the acceptance of intelligent autonomous systems: systems we can entrust to care for our loved ones and our most vulnerable populations, our children and our elderly; robots that will perhaps share our most intimate spaces; systems that will have access to our private data and the details of our everyday activities, and be privy to our conversations; robotic systems to which we will relinquish control. For that, they will need to earn our trust. Our bright future with robots depends on it, and researchers are helping us realize that future.

RIA Members featured in this article:
Carnegie Mellon University
University of Michigan


Originally published by RIA via www.robotics.org on 10/30/2018