To plan and measure the motion of a robot (or any system), we need a set of coordinates in which to define that motion. We need to know what up/down, left/right, and forward/back are. With most robots we pick some static point on the robot to serve as the 'body frame', the perspective from which we measure motion in the world. With the snake robots we have the problem that no single static point makes a good coordinate frame. The whole body undulates when executing a gait, and usually every point on the snake is moving through the world in some complicated way. However, an operator remotely controlling the snakes clearly has an intuition of the robot's position and orientation. The difference is that this position and orientation are defined in a more holistic way that considers all modules taken together.
The contribution of my research has been connecting these intuitive notions of position and orientation to the more formal concepts of the center of mass and principal moments of inertia. Using this, I am able to define a coordinate frame for the whole body of the snake in which world motion can be represented in much simpler terms. While calculating this dynamic coordinate frame is more complex than using a static frame, it allows us to incorporate motion feedback from the snake in a manner that is as intuitive as it would be on simpler robots. To demonstrate this work, I have begun fusing data from the joint angles, accelerometers, and gyros in each module to estimate the snake's motion as it traverses pipes.
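The whole-body frame described above can be sketched numerically. The following is a minimal illustration, not the author's actual implementation: it treats each module as a point mass, takes the frame's origin as the center of mass, and takes its axes from the eigenvectors of the inertia tensor (the principal axes of inertia). The function name and the point-mass simplification are assumptions for illustration.

```python
import numpy as np

def body_frame(positions, masses):
    """Estimate a whole-body coordinate frame for a modular robot.

    positions: (N, 3) array of module positions in the world frame.
    masses:    (N,) array of module masses (point-mass approximation).
    Returns (com, axes): the center of mass and a 3x3 rotation matrix
    whose columns are the principal axes of inertia.
    """
    positions = np.asarray(positions, dtype=float)
    masses = np.asarray(masses, dtype=float)

    # Origin of the body frame: the center of mass.
    com = masses @ positions / masses.sum()
    r = positions - com  # module positions relative to the center of mass

    # Inertia tensor of point masses: I = sum_i m_i (|r_i|^2 E - r_i r_i^T)
    inertia = sum(m * (ri @ ri * np.eye(3) - np.outer(ri, ri))
                  for m, ri in zip(masses, r))

    # Principal axes: eigenvectors of the symmetric inertia tensor.
    _, axes = np.linalg.eigh(inertia)
    if np.linalg.det(axes) < 0:
        axes[:, 2] *= -1  # flip one axis to keep a right-handed frame
    return com, axes
```

For an elongated body like a snake robot, the axis with the smallest moment of inertia roughly tracks the body's long axis, which matches the operator's intuition of "forward". Note that eigenvectors are only defined up to sign, so a practical implementation would also need to resolve sign flips between successive time steps to keep the frame continuous.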