Rosen Diankov
PhD Student, RI
Alumnus of RI.
Research Interests

In order to solve a specific domain of tasks in path planning or manipulating objects in a dynamic environment, most complex robotic systems require prior knowledge of the environment and the robot. Such knowledge usually consists of the robot's dynamic capabilities, its (inverse) kinematics, the type of environment, the consequences of traversing configuration paths in the environment, the definition of a goal condition, and what composes (optimal) solutions. While much of this domain knowledge is necessary, it raises the question of how much the robot really needs to perform the task, and how much it can autonomously learn from its domain.

An example of knowledge that can be inferred automatically is the reward functions and path heuristics that are commonly used in Markov Decision Processes and A*-type search algorithms to encode domain-specific knowledge. Creating these functions is time consuming, and it isn't trivial to go from a desired robotic behavior to a reward function that encodes it. But given training instances of desired path trajectories from various initial and goal conditions, a robot should be able to learn its own reward and heuristic functions so that it mimics the desired behavior. Another example is computing the desired robot goal configuration through inverse kinematics. The first drawback is that the configuration might be unreachable, which is discovered only when the path planning algorithm executes and fails. The second drawback is that the inverse kinematics have to be solved for each specific robot. While this is feasible for robots with few degrees of freedom, it is much harder for high-degree-of-freedom robots, where many solutions exist for a given goal condition. Unless domain-specific knowledge is encoded, the optimality of a given goal configuration cannot even be determined.
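
To make the idea concrete, here is a minimal sketch (not taken from this work) of how a learned heuristic plugs into an A*-type search: the planner depends only on a heuristic(state) callable, so a hand-coded estimate and a function fit to cost-to-go values observed along demonstration trajectories are interchangeable. The grid domain, the names, and the stand-in "learned" heuristic are all hypothetical.

    # Generic A* search; `heuristic` may be hand-coded or learned from data.
    import heapq

    def astar(start, goal, neighbors, cost, heuristic):
        frontier = [(heuristic(start), 0.0, start, [start])]
        best_g = {}
        while frontier:
            f, g, state, path = heapq.heappop(frontier)
            if state == goal:
                return path
            if state in best_g and best_g[state] <= g:
                continue  # already expanded with a cheaper path
            best_g[state] = g
            for nxt in neighbors(state):
                g2 = g + cost(state, nxt)
                heapq.heappush(frontier, (g2 + heuristic(nxt), g2, nxt, path + [nxt]))
        return None

    # Hypothetical 5x5 grid domain with unit move costs.
    GOAL = (4, 4)
    def neighbors(s):
        x, y = s
        return [(x + dx, y + dy) for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]
                if 0 <= x + dx < 5 and 0 <= y + dy < 5]

    hand_coded = lambda s: abs(s[0] - GOAL[0]) + abs(s[1] - GOAL[1])  # Manhattan distance
    # A real learned heuristic would be a regressor trained on demonstration
    # trajectories; this trivial stand-in just shows it drops in unchanged.
    learned = lambda s: 0.9 * hand_coded(s)

    print(astar((0, 0), GOAL, neighbors, lambda a, b: 1.0, learned))

Because the search code never changes, improving the heuristic from training examples improves planning without re-encoding any domain knowledge by hand.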

One of my research goals is to develop a general framework for learning algorithms in path planning domains, so that much of the domain knowledge can be inferred automatically from training examples, reinforcement learning, and offline simulations. Currently my research focuses on creating robots that can autonomously build simple toy structures without encoding their inverse kinematics, reward functions, or how to go about achieving the final goal.

I am also working on a robot animation and virtual environment simulator (RAVE) that serves as a test bed for planning algorithms and online control of robotic systems.

Research Interest Keywords
humanoid robotics, machine learning, manipulation, motion planning