Self-Mobile Space Manipulator

This project is no longer active.

Astronaut extra-vehicular activity (EVA) at a space station is costly and potentially dangerous, and requires extensive preparation. Some EVA tasks, such as unplanned repairs, may require the versatility, skill, and on-site judgment of astronauts. Many other tasks, particularly routine inspection, maintenance, and light assembly, can be done more safely and cost-effectively by robots.

We are developing a relatively simple, modular, low-mass, low-cost robot for space station EVA that is large enough to be independently mobile on the station exterior, yet versatile enough to accomplish many vital tasks. Because our design is for a robot that is independently mobile, yet capable of conventional manipulation tasks, we call it the Self-Mobile Space Manipulator, or (SM)2. The robot can perform useful tasks such as visual inspection, material transport, and light assembly. It will be able to work independently or in cooperation with astronauts and other robots.

Robot Design: The robot is designed for mobility in a zero-gravity environment, with simplicity and low mass as primary design goals. The robot is assembled from seven identical, compact, self-contained modular joints. The connecting links are lightweight aluminum tubes that give (SM)2 a reach of about 80 inches. Each truss gripper has two fixed fingers and a sliding finger that closes to grasp the beam flanges, which vary from 4 to 6 inches wide. Each gripper incorporates a position sensor; contact switches on the fingers to verify grasp; and three proximity sensors, mounted at the bases of the fingers, to indicate proximity and proper alignment with the beams. (SM)2 carries three video cameras, one at the elbow and one at each gripper, each with controllable focus, zoom, and aperture.
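
The project page does not include code, but the grasp logic implied by this sensor suite is straightforward to sketch. The Python fragment below is illustrative only: the class, thresholds, and sensor names are assumptions, not the actual (SM)2 software.

```python
from dataclasses import dataclass

@dataclass
class GripperState:
    finger_position: float   # sliding-finger position sensor (inches)
    contact: tuple           # contact switches, one per finger
    proximity: tuple         # readings at the three finger bases (inches)

ALIGN_TOL = 0.2   # max spread across proximity readings for "aligned" (assumed)
NEAR_DIST = 0.5   # every sensor must read closer than this to the beam (assumed)

def aligned_with_beam(s: GripperState) -> bool:
    """All three proximity sensors close and in agreement implies the
    gripper face is square to the beam flange."""
    near = all(p < NEAR_DIST for p in s.proximity)
    flat = max(s.proximity) - min(s.proximity) < ALIGN_TOL
    return near and flat

def grasp_verified(s: GripperState, flange_width: float) -> bool:
    """Fingers report contact and the sliding finger has closed to the
    expected flange width (4 to 6 inches on this truss)."""
    return all(s.contact) and abs(s.finger_position - flange_width) < 0.1

# Example: square to the beam, fingers closed on a 4.5-inch flange.
s = GripperState(finger_position=4.45,
                 contact=(True, True, True),
                 proximity=(0.30, 0.35, 0.32))
print(aligned_with_beam(s), grasp_verified(s, flange_width=4.5))
```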

Gravity Compensation Systems: To simulate the zero-gravity environment of an orbiting space station, we have developed two gravity-compensation systems. Passive counterweights provide vertical balance forces for the robot through a system of cables and pulleys, and employ 10:1 weight ratios to minimize the effective inertia of the weights. In each system, an overhead mechanism actively controls horizontal motion to keep the support cable directly above the moving robot, based on a sensor that measures the cable's deviation from vertical. The cable routing decouples the horizontal and vertical motions. The first system is based on a gantry design and provides X-Y motion over a 100-inch by 180-inch range for global locomotion experiments. The second system is based on a swinging boom and provides R-theta motion of two support points over an 80-inch by 180-degree area, allowing support of payloads as well as the robot.
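
As a rough illustration of the active horizontal-tracking loop described above, here is a minimal proportional controller that steers the overhead support point over the robot. The function names, gain, and cable length are hypothetical stand-ins, not the actual system's interfaces.

```python
import math

KP = 2.0          # proportional gain (assumed)
CABLE_LEN = 2.5   # suspended cable length in meters (assumed)

def read_cable_angles():
    """Stub for the cable-angle sensor: deviation from vertical (rad)."""
    return 0.02, -0.01

def command_carriage_velocity(vx, vy):
    """Stub for the gantry (or boom) drive command."""
    print(f"carriage velocity: vx={vx:.3f} m/s, vy={vy:.3f} m/s")

def tracking_step():
    theta_x, theta_y = read_cable_angles()
    # Horizontal offset of the robot relative to the overhead support point.
    ex = CABLE_LEN * math.tan(theta_x)
    ey = CABLE_LEN * math.tan(theta_y)
    # Drive the support point back over the robot; the counterweights then
    # carry the vertical load alone, keeping the two motions decoupled.
    command_carriage_velocity(KP * ex, KP * ey)

tracking_step()
```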

Robot Control: The robot's long reach, flexible structure, and joint compliance make accurate positioning difficult. We developed a multi-phase control scheme that employs different controllers for different operational conditions, and a neural-network-based adaptive control scheme that identifies the dynamic model in real time. Fuzzy control schemes model the friction and damping effects in the system and handle kinematic redundancy. We modeled teleoperation skill and human performance using Hidden Markov Models. At the high level, a modular, hierarchical shared-control architecture coordinates teleoperation and autonomous motion in a systematic manner. The robot is able to walk on the truss and perform transport and inspection tasks, either under automatic control based on the truss model or under teleoperation.
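
The multi-phase idea, one controller per operational condition, can be sketched as a simple supervisor that switches gain sets. The phases, thresholds, and gains below are invented for illustration; they are not the controllers actually used on (SM)2.

```python
from enum import Enum, auto

class Phase(Enum):
    TRANSFER = auto()   # large motion between handholds on the truss
    APPROACH = auto()   # slow, vibration-aware motion near the target beam
    GRASPED = auto()    # stiff control while a gripper holds the truss

def select_phase(dist_to_target: float, grasp_confirmed: bool) -> Phase:
    """Pick the controller for the current operational condition."""
    if grasp_confirmed:
        return Phase.GRASPED
    return Phase.APPROACH if dist_to_target < 0.1 else Phase.TRANSFER

# One (kp, kd) gain set per phase: compliant in transfer (flexible links),
# heavily damped on approach to suppress vibration, stiff once grasped.
GAINS = {
    Phase.TRANSFER: (20.0, 2.0),
    Phase.APPROACH: (10.0, 8.0),
    Phase.GRASPED:  (80.0, 12.0),
}

def joint_torques(phase: Phase, q_err, qd):
    """PD law with phase-dependent gains."""
    kp, kd = GAINS[phase]
    return [kp * e - kd * v for e, v in zip(q_err, qd)]

# Example: 0.05 m from the target beam, not yet grasped -> APPROACH gains.
phase = select_phase(dist_to_target=0.05, grasp_confirmed=False)
print(phase, joint_torques(phase, q_err=[0.1, -0.2], qd=[0.0, 0.1]))
```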

Sensing and Teleoperation: A neural-network learning scheme, based on video images from the tip and elbow cameras, allows the robot to approach the truss beams accurately. Proximity sensors on each gripper correct misalignment between the gripper and the truss for reliable grasping. We developed a real-time graphical interface for display and control of robot motion at the control station, which includes a 6-DOF free-floating hand controller. An operator provides control commands through the graphical interface and/or the hand controller, based on views from the tip and elbow cameras; the camera views may also be used for automatic control of robot motions. We have also been working on a voice-control interface and an auditory display of force sensing to enhance the telerobotic capability of the system.
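
One simple way to picture shared control is as a blend of operator and autonomous rate commands under a single weighting factor. The actual (SM)2 architecture coordinates the two modes hierarchically; the names and values in this sketch are hypothetical.

```python
def shared_command(teleop_rate, auto_rate, alpha):
    """Blend operator and autonomous rate commands.
    alpha = 1.0 -> pure teleoperation; alpha = 0.0 -> fully autonomous."""
    return [alpha * t + (1.0 - alpha) * a
            for t, a in zip(teleop_rate, auto_rate)]

# Example: the operator keeps coarse authority (alpha = 0.6) while the
# camera-based controller refines the final approach to a beam.
cmd = shared_command(teleop_rate=[0.10, -0.05, 0.00],
                     auto_rate=[0.08, -0.02, 0.01],
                     alpha=0.6)
print(cmd)   # blended rate command for the tip
```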

past staff

  • Randy Casciola
  • John M Dolan
  • Yangsheng Xu
  • Jie Yang