Thursday, March 8, 2007
Wean Hall 7000 (WEH 7500 hallway)
3:30 - 4:00 p.m.
Distinguished Keynote Lecture
Wean Hall 7500
4:00 - 5:30 p.m.
Interacting Physically with Robots and Virtually on the Global Digital Campus
The lecture presents two topics: the research on human-robot interaction conducted in the Anzai-Imai laboratory, and the activities of the Research Institute for Digital Media and Content, both at Keio University.
Our research on human-robot interaction, begun in 1991, is concerned with designing technologies that facilitate smooth interaction between humans and robots. We initially designed software and hardware systems that support human-robot interaction, and then moved forward, with Michita Imai and others, to designing robots that can interact smoothly with humans. In some cases we conducted behavioral experiments to find out how a human behaves when interacting with a robot, and fed the results back into the engineering. The first part of the lecture summarizes our lab's efforts over these fifteen years.
The second part of the lecture will focus on the activities of the Research Institute for Digital Media and Content, established in 2004. One of its goals is to use various technologies to extend the reach of our physical campus so that students and faculty members can distribute their academic knowledge to a global audience, interact with people around the world, and have convenient access to globally shared knowledge. We have already set up what we call Global Digital Studios in Tokyo, Seoul, Beijing, Cambridge (UK) and San Francisco, with others scheduled to open in New York and other locations. Twenty-four institutions of higher learning in twelve South-East Asian countries are also tied into this network via satellite Internet. The Studios and sites can be connected online at any time and are used for many different purposes; the network can be regarded as an early version of our Global Digital Campus. The second part of the lecture gives a glimpse of this effort at Keio University.
Friday, March 9, 2007
Giant Eagle Auditorium (Baker Hall A51)
Ford Professor of Engineering
Massachusetts Institute of Technology
From Direct Drive to Muscle Actuators
My talk will begin with a collaborative project with Dr. Kanade in 1980. Together, we developed the world's first direct-drive robot arm, using samarium-cobalt rare-earth magnets. The robot had no gearing; hence it was free from backlash, friction and the other problems of geared transmissions. The machine was an ideal test bed for torque control and nonlinear dynamic control thanks to its low friction and high stiffness. After the Kanade-Asada project, the quest for advanced robot actuators continued, and we have recently developed artificial muscle actuators with a cellular architecture. Inspired by skeletal muscle, the new muscle actuator consists of vast numbers of tiny cells made of PZT, SMA and conducting polymers. They are compact, fast and of high energy density and, more importantly, behave like biological muscles: they not only generate force and displacement but also store and dissipate energy. To activate vast numbers of actuator cells, we have developed a novel control and communication methodology called "stochastic recruitment and broadcast feedback." This stochastic control is remarkably robust and sustainable: even when 40 percent of the actuator cells are dead, the system can still track a trajectory. Using a stochastic Lyapunov function, the stability and sustainability of the cellular actuator system can be guaranteed. The talk will conclude with a discussion of the future of bio-robotics and neuro-muscle control.
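To give a flavor of the "stochastic recruitment and broadcast feedback" idea described above, here is a minimal toy simulation: the controller cannot address any cell individually, so at each step it broadcasts a single switching probability derived from the tracking error, and every live cell flips its state independently at random. All parameter values (100 cells, a 0.02 gain, a fixed force target) are illustrative assumptions, not the published controller:

```python
import random

def broadcast_feedback(n_cells=100, dead_frac=0.4, target=30.0,
                       steps=200, gain=0.02):
    """Toy sketch of stochastic recruitment with broadcast feedback.

    Each cell is ON (1) or OFF (0); a dead cell never responds. The
    controller broadcasts one probability to all cells per step, derived
    from the error between target and aggregate output. Returns the final
    aggregate output (number of live cells that are ON).
    """
    alive = [random.random() >= dead_frac for _ in range(n_cells)]
    state = [0] * n_cells                        # 0 = relaxed, 1 = contracted
    for _ in range(steps):
        output = sum(s for s, a in zip(state, alive) if a)
        error = target - output
        p_on = min(1.0, max(0.0, gain * error))    # broadcast to OFF cells
        p_off = min(1.0, max(0.0, -gain * error))  # broadcast to ON cells
        for i in range(n_cells):
            if not alive[i]:
                continue                         # dead cells never respond
            if state[i] == 0 and random.random() < p_on:
                state[i] = 1
            elif state[i] == 1 and random.random() < p_off:
                state[i] = 0
    return sum(s for s, a in zip(state, alive) if a)
```

Because no cell is addressed individually, killing 40 percent of the cells only changes how many remain available to recruit; the broadcast loop still settles near the target output, which is the robustness property the abstract highlights.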
Professor, INRIA Sophia-Antipolis
A Few Problems Related to the Modeling of Cortical Activity
New methods for observing the brain, such as magnetic resonance imaging (MRI), electro- and magneto-encephalography (MEEG) and optical imaging, pose challenging problems to modelers, both in terms of analyzing the signals they produce and in terms of how to relate them to the way the brain operates. One of the most striking facts about the way the brain seems to function is that it involves electrical, physical and chemical phenomena at a large variety of spatio-temporal scales, which are only partially captured by these measurement modalities. The resulting challenge is to design models and methods of combining several sources of information in such a way that the models can be tested on the data in a statistically significant manner. We illustrate these principles with the combination of functional MRI (fMRI) and MEEG data, through the use of a production model of the BOLD (blood-oxygen-level dependent) signal, and with the design of models of assemblies of neurons, such as the cortical column at several scales, that can form the basis of an understanding of the computational properties of parts of the neocortex.
Bernard Gordon Professor of Medical Engineering
Massachusetts Institute of Technology
Image Guided Surgery and Computational Anatomy
The current trend towards minimally invasive procedures raises an interesting challenge for surgeons: how to execute precise surgeries through small openings with a limited view of nearby structures. Recent advances in computer vision are helping to solve this challenge. Knowledge-driven segmentation methods provide detailed, patient-specific reconstructions of the relevant anatomy. These models allow a surgeon to visualize the surgical site, localizing tumors while highlighting critical structures, and they provide planning tools for optimal approaches to the tumor. Automated registration techniques accurately align the graphical patient reconstruction with the patient's actual position in the operating room, so that surgeons can see the positions of their instruments relative to critical nearby structures in real time, allowing them to execute minimally invasive surgeries as if the anatomy were completely visible. These tools are used regularly in a range of surgical procedures. Moreover, they are applicable to other clinical problems, such as measuring differences in the shape of anatomical structures with disease or treatment.
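The registration step mentioned above — aligning a patient-specific model with the patient's actual position — can be illustrated by the classic closed-form rigid alignment of corresponding point sets, the least-squares step at the core of ICP-style methods. This is a generic NumPy sketch of that textbook technique, not the specific system discussed in the talk:

```python
import numpy as np

def rigid_align(model_pts, patient_pts):
    """Least-squares rigid transform (R, t) mapping model_pts onto patient_pts.

    Both arrays are N x 3 with known point-to-point correspondences (e.g.
    fiducial markers). Uses the closed-form SVD solution of the orthogonal
    Procrustes problem.
    """
    mu_m = model_pts.mean(axis=0)
    mu_p = patient_pts.mean(axis=0)
    H = (model_pts - mu_m).T @ (patient_pts - mu_p)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))            # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_p - R @ mu_m
    return R, t
```

A model point p is then mapped into the operating-room frame as R @ p + t. In a real system the correspondences themselves must also be estimated; that is what the iterative part of ICP-style registration does.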
Institute of Industrial Science, University of Tokyo
Learning From Observation: From Assembly Plan Through Dancing Humanoid
We have been developing a paradigm referred to as "programming by demonstration." The method involves simply observing what a human is doing and generating robot programs that mimic the same operations. The first half of this talk presents the history of what we have done so far under this paradigm. We will emphasize a top-down approach that uses pre-defined, mathematically derived task-and-skill models for observing and mimicking human operations, and we will show several examples of task-and-skill models applicable in different domains. The second half of the talk will focus on our newest effort: making a humanoid robot perform Japanese folk dances using the same paradigm. Human dance motions are recorded using optical or magnetic motion-capture systems. These captured motions are segmented into tasks using motion analysis, music information and task-and-skill models. We can characterize personal differences in dance using task-and-skill models, and we can then map these motion models onto robot motions by considering the dynamic and structural differences between human and robot bodies. As a demonstration of our system, I will show a video in which a humanoid robot performs two Japanese folk dances, Jongara-bushi and Aizu-bandaisan-odori.
T.C. Chang Professor of Computer Science
Computational Cameras: Redefining the Image
In this talk, we will first present the concept of a computational camera: a device that embodies the convergence of the camera and the computer. It uses new optics to select rays from the scene in unusual ways and an appropriate algorithm to process the selected rays. This ability to manipulate images before they are recorded, and to process the recorded images before they are presented, is a powerful one that enables us to experience our visual world in rich and compelling ways. We will show computational cameras that can capture wide-angle, high-dynamic-range, multispectral and depth images. Finally, we will explore the use of a programmable light source as a more sophisticated camera flash. We will show how such a flash enables a camera to produce images that reveal the complex interactions of light within objects as well as between them.
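One concrete instance of processing captured rays into a richer image is merging a bracketed exposure stack into a high-dynamic-range radiance map. The sketch below is a generic illustration of that idea, not the specific cameras presented in the talk; it assumes a linear sensor response and uses a simple triangle weighting:

```python
import numpy as np

def merge_exposures(images, exposure_times):
    """Merge a bracketed exposure stack into one HDR radiance map.

    Assumes a linear sensor response: each image holds values in [0, 1],
    and pixel / exposure_time estimates scene radiance. A triangle weight
    discounts under- and over-exposed pixels, so each exposure contributes
    only where it is well measured.
    """
    num = np.zeros_like(images[0], dtype=float)
    den = np.zeros_like(images[0], dtype=float)
    for img, t in zip(images, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)   # peaks at mid-gray, 0 at extremes
        num += w * img / t
        den += w
    return num / np.maximum(den, 1e-12)
```

A pixel that saturates in the long exposure still gets a valid radiance estimate from the short exposure, which is how the merged result exceeds the dynamic range of any single shot.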
Eugene McDermott Professor in the Brain Sciences and Human Behavior
Massachusetts Institute of Technology
Learning: Theory, Engineering Applications and Neuroscience
The problem of learning is one of the main gateways to making intelligent machines and to understanding how the brain works. In this talk I will briefly show a few examples of our efforts in developing machines that learn. I will focus on a new theory of the ventral stream of the visual cortex in primates, describing how the brain may learn to recognize objects, and show that the resulting model is capable of performing recognition on datasets of complex images at the level of human performance in rapid categorization tasks. The model performs surprisingly well when compared with state-of-the-art computer vision systems in the categorization of complex images.
Managing Director, Microsoft Research Asia
Prior, Context and Interactive Computer Vision
For many years, computer vision researchers have worked hard chasing elusive goals such as "can the robot find a boy in the scene?" or "can your vision system automatically segment the cat from the background?" These tasks require a great deal of prior knowledge and contextual information. How to incorporate prior knowledge and contextual information into vision systems, however, is very challenging. In this talk, we propose that many difficult vision tasks can only be solved with interactive vision systems, which combine powerful, real-time vision techniques with intuitive and clever user interfaces. I will show two interactive vision systems we developed recently, Lazy Snapping (SIGGRAPH 2004) and Image Completion (SIGGRAPH 2005): Lazy Snapping cuts out an object with a solid boundary using graph cut, while Image Completion recovers unknown regions with belief propagation. A key element in designing such interactive systems is how we model the user's intention using conditional probability (context) and the likelihood associated with user interactions. Given how ill-posed most image understanding problems are, I am convinced that interactive computer vision is the paradigm on which today's vision research should focus.
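To make the graph-cut formulation behind seed-based cutout concrete, here is a toy version for a 1-D "image": user seeds fix some pixels as foreground or background, data terms score each pixel against seed statistics, smoothness terms make it expensive to cut between similar neighbours, and an s-t minimum cut (computed with a small Edmonds-Karp max-flow) yields the segmentation. This is a didactic sketch of the general energy-minimization scheme, not the published Lazy Snapping algorithm:

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp max flow on cap = {u: {v: capacity}}; mutates cap into
    the residual graph and returns (flow value, source side of the min cut)."""
    flow = 0.0
    while True:
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:           # BFS for an augmenting path
            u = q.popleft()
            for v, c in cap.get(u, {}).items():
                if c > 1e-12 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow, set(parent)           # residual-reachable = source side
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        b = min(cap[u][v] for u, v in path)    # bottleneck capacity
        for u, v in path:                      # push flow, update residuals
            cap[u][v] -= b
            cap.setdefault(v, {}).setdefault(u, 0.0)
            cap[v][u] += b
        flow += b

def lazy_snap_1d(pixels, fg_seeds, bg_seeds, lam=2.0):
    """Binary segmentation of a 1-D 'image' as an s-t min cut."""
    INF = 1e9
    fg_mean = sum(pixels[i] for i in fg_seeds) / len(fg_seeds)
    bg_mean = sum(pixels[i] for i in bg_seeds) / len(bg_seeds)
    cap = {'s': {}, 't': {}}
    for i, p in enumerate(pixels):
        cap.setdefault(i, {})
        # Data terms: each capacity is the cost of giving i the *other* label,
        # so seeds get an infinite link to their own terminal.
        cap['s'][i] = INF if i in fg_seeds else abs(p - bg_mean)
        cap[i]['t'] = INF if i in bg_seeds else abs(p - fg_mean)
        # Smoothness term: cutting between similar neighbours is expensive.
        if i + 1 < len(pixels):
            w = lam / (1.0 + abs(p - pixels[i + 1]))
            cap[i][i + 1] = w
            cap.setdefault(i + 1, {})[i] = w
    _, source_side = max_flow(cap, 's', 't')
    return [1 if i in source_side else 0 for i in range(len(pixels))]
```

Marking one bright pixel as foreground and one dark pixel as background is enough to label the whole array; the real system works the same way on a 2-D pixel (or superpixel) graph with richer colour models for the data terms.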
Professor of Computer Science
Johns Hopkins University
Medical Robotics and Computer-Integrated Surgery
The impact of computer-integrated surgery (CIS) on medicine in the next 20 years will be as great as that of computer-integrated manufacturing on industrial production over the past 20 years. A novel partnership between human surgeons and machines, made possible by advances in computing and engineering technology, will overcome many of the limitations of traditional surgery. By extending human surgeons' ability to plan and carry out surgical interventions more accurately and less invasively, CIS systems will address a vital national need to greatly reduce costs, improve clinical outcomes and improve the efficiency of health care delivery. As CIS systems evolve, we expect to see the emergence of two dominant and complementary paradigms: surgical CAD/CAM systems will integrate accurate patient-specific models, surgical plan optimization and a variety of execution environments, permitting the plans to be carried out accurately, safely and with minimal invasiveness. Surgical assistant systems will work cooperatively with human surgeons in carrying out precise and minimally invasive surgical procedures. Over time, these paradigms will merge. CIS research inherently involves three synergistic areas: modeling and analysis of patients and surgical procedures; interface technology, including robots and sensors; and systems science for developing improved techniques to ensure the safety and reliability of systems. This talk will explore these themes with examples drawn from our own research and elsewhere.