The complete and accurate schedule for current and upcoming robotics courses is maintained at the University level. This page serves as an overview of courses taught within the Robotics department.
All courses with a “16-” prefix are offered by the Robotics department. Other departments offering courses taught by Robotics faculty are Computer Science (CS), Electrical and Computer Engineering (ECE), Mechanical Engineering (MechE), Statistics (Stat), Psychology (Psych), the Tepper School of Business (GSIA), and the Institute for Complex Engineered Systems (ICES).
Finding Robotics 16-xxx Courses on the Registrar Site:
- Go to: Schedule of Classes Course Search.
- Select the “Semester/Year”.
- Select “SCS: Robotics (16-xxx)” from the list of departments.
- Click “RETRIEVE SCHEDULE”.
Computer vision is a discipline that attempts to extract information from images and videos. Nearly every smart device on the planet has a camera, and people are increasingly interested in how to develop apps that use computer vision to perform an ever-expanding list of tasks, including 3D mapping, photo/image search, people/object tracking, augmented reality, etc. This course is intended for graduate students who are familiar with computer vision and are keen to learn more about applying state-of-the-art vision methods on smart devices and embedded systems. A strong programming background is a must (at a minimum, good knowledge of C/C++). Topics will include using conventional computer vision software tools (OpenCV, MATLAB toolboxes, VLFeat, CAFFE, Torch 7), and development on iOS devices using mobile vision libraries such as GPUImage and Metal, and fast math libraries like Armadillo and Eigen. For consistency, all app development will be in iOS, and it is expected that all students participating in the class have access to an Intel-based Mac running OS X Mavericks or later. Although the coursework will be focused on a single operating system, the knowledge gained from this class will easily generalize to other mobile platforms such as Android.
Practically everything around us is a system, from the cell phone in your pocket to the International Space Station up in the sky. The higher the complexity of the system, the more its creators benefit from applying formal processes to its development, processes that are collectively known under the umbrella of “systems engineering.” Systems Engineering is a formal discipline that guides a product from conception and design all the way to production, marketing, servicing, and disposal. In this course we will study the fundamental elements of systems engineering as they apply to the development of robotic systems. We will cover topics such as needs analysis, requirements elicitation and formalization, system architecture development, trade studies, verification and validation, etc. In addition, this course will cover core topics of Project Management that must be performed in tandem with Systems Engineering to achieve a successful project and product. For Project Management, we will cover work breakdown structures, scheduling, estimation, and risk management. We will study both classical and agile methods in project management. The students will apply most of the elements of this course in the MRSD Project Course I and II, thus giving them the opportunity to put the theory into practice in a real product design activity. Please note that this course is for MRSD students only. (Past project examples: http://mrsd.ri.cmu.edu/project-examples/)
Kinematics, Dynamic Systems, and Control is a graduate-level introduction to robotics. The course covers fundamental concepts and methods to analyze, model, and control robotic mechanisms that move in the physical world and manipulate it. Main topics include the fundamentals of kinematics, dynamics, and control as applied to rigid-body chains. Additional topics include state estimation and dynamic parameter identification.
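As a minimal taste of the kind of material covered (an illustrative sketch, not course code; the link lengths and function names are invented for the example), forward kinematics of a planar two-link arm maps joint angles to the end-effector position:

```python
import math

def fk_2link(theta1, theta2, l1=1.0, l2=1.0):
    """Forward kinematics of a planar 2-link arm: joint angles -> end-effector (x, y)."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

# Fully extended along the x-axis: (2, 0); elbow bent 90 degrees: roughly (1, 1).
print(fk_2link(0.0, 0.0))           # -> (2.0, 0.0)
print(fk_2link(0.0, math.pi / 2))
```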
This course introduces the fundamental techniques used in computer vision, that is, the analysis of patterns in visual images to reconstruct and understand the objects and scenes that generated them. Topics covered include image formation and representation, camera geometry and calibration, computational imaging, multi-view geometry, stereo, 3D reconstruction from images, motion analysis, physics-based vision, image segmentation, and object recognition. The material is based on graduate-level texts augmented with research papers, as appropriate. Evaluation is based on homework and a final project. The homework involves considerable MATLAB programming.
- Richard Szeliski, "Computer Vision: Algorithms and Applications", Texts in Computer Science, Springer ISBN: 978-1-84882-934-3, Book Requirement: Recommended
- David Forsyth and Jean Ponce, "Computer Vision: A Modern Approach", Prentice Hall ISBN: 0-13-085198-1, Book Requirement: Recommended
The principles and practices of quantitative perception (sensing) illustrated by the devices and algorithms (sensors) that implement them. Learn to critically examine the sensing requirements of robotics applications, to specify the required sensor characteristics, to analyze whether these specifications can be realized even in principle, to compare what can be realized in principle to what can actually be purchased or built, to understand the engineering factors that account for the discrepancies, and to design transducing, digitizing, and computing systems that come tolerably close to realizing the actual capabilities of available sensors. Grading will be based on homework assignments, class participation, and a final exam. Three or four of the homework assignments will be hands-on “take-home labs” done with an Arduino kit that students will purchase in lieu of purchasing a textbook. Top-level course modules will cover (1) sensors, signals, and measurement science, (2) origins, nature, and amelioration of noise, (3) end-to-end sensing systems, (4) cameras and other imaging sensors and systems, (5) range sensing and imaging, (6) navigation sensors and systems, (7) other topics of interest to the class (as time allows).
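One idea from the noise module can be shown in a few lines: averaging N independent readings shrinks the noise standard deviation by a factor of sqrt(N). The sketch below is illustrative only (the sensor model, noise level, and function names are invented, not taken from the course):

```python
import random
import statistics

random.seed(0)

def measure(true_value=1.0, noise_std=0.1):
    """One noisy sensor reading: the true value plus zero-mean Gaussian noise."""
    return true_value + random.gauss(0.0, noise_std)

def averaged_measure(n, **kw):
    """Average n readings; the noise std of the mean shrinks by sqrt(n)."""
    return sum(measure(**kw) for _ in range(n)) / n

single = [measure() for _ in range(2000)]
avg16 = [averaged_measure(16) for _ in range(2000)]
print(statistics.stdev(single))  # ~0.10
print(statistics.stdev(avg16))   # ~0.025, i.e. 0.10 / sqrt(16)
```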
Kinematics, statics, and dynamics of a robotic manipulator's interaction with a task, focusing on intelligent use of kinematic constraint, gravity, and frictional forces. Automatic planning based on mechanics. Application examples drawn from manufacturing and other domains.
This course surveys the use of optimization (especially optimal control) to design behavior. We will explore ways to represent policies including hand-designed parametric functions, basis functions, tables, and trajectory libraries. We will also explore algorithms to create policies including parameter optimization and trajectory optimization (first and second order gradient methods, sequential quadratic programming, random search methods, evolutionary algorithms, etc.). We will discuss how to handle the discrepancy between models used to create policies and the actual system being controlled (evaluation and robustness issues). The course will combine lectures, student-presented material, and projects. The goal of this course will be to help participants find the most effective methods for their problems.
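The random-search family mentioned above can be sketched in a few lines. This is an illustrative toy, not course code: the quadratic cost stands in for an expensive policy-evaluation rollout, and the names and step sizes are invented:

```python
import random

random.seed(1)

def cost(params):
    """Stand-in for a policy rollout: a quadratic bowl with optimum at (2, -1)."""
    x, y = params
    return (x - 2.0) ** 2 + (y + 1.0) ** 2

def random_search(params, iters=2000, step=0.1):
    """Simple random search: keep a Gaussian perturbation only if it lowers the cost."""
    best = cost(params)
    for _ in range(iters):
        cand = [p + random.gauss(0.0, step) for p in params]
        c = cost(cand)
        if c < best:
            params, best = cand, c
    return params, best

p, c = random_search([0.0, 0.0])
print(p, c)  # parameters close to [2, -1], cost near 0
```

Gradient methods converge faster when gradients are available; random search needs only cost evaluations, which is why it remains popular for policies evaluated through simulation.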
This course covers all aspects of mobile robot systems design and programming from both a theoretical and a practical perspective. The basic subsystems of control, localization, mapping, perception, and planning are presented. For each, the discussion will include relevant methods from applied mathematics, aspects of physics necessary in the construction of models of system and environmental behavior, and core algorithms that have proven to be valuable in a wide range of circumstances.
Planning and Decision-making are critical components of autonomy in robotic systems. These components are responsible for making decisions that range from path planning and motion planning to coverage and task planning to taking actions that help robots understand the world around them better. This course studies underlying algorithmic techniques used for planning and decision-making in robotics and examines case studies in ground and aerial robots, humanoids, mobile manipulation platforms and multi-robot systems. The students will learn the algorithms and implement them in a series of programming-based projects.
This course covers selected topics in applied mathematics useful in robotics, taken from the following list: 1. Solution of Linear Equations. 2. Polynomial Interpolation and Approximation. 3. Solution of Nonlinear Equations. 4. Roots of Polynomials, Resultants. 5. Approximation by Orthogonal Functions (includes Fourier series). 6. Integration of Ordinary Differential Equations. 7. Optimization. 8. Calculus of Variations (with applications to Mechanics). 9. Probability and Stochastic Processes (Markov chains). 10. Computational Geometry. 11. Differential Geometry.
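As one example from the list, topic 2 (polynomial interpolation) can be illustrated with Lagrange's formula. This is an illustrative sketch, not course code; the sample points are invented:

```python
def lagrange_eval(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through (xs[i], ys[i]) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)  # basis polynomial L_i(x)
        total += term
    return total

# Three points on y = x^2; the unique degree-2 interpolant reproduces it exactly.
xs, ys = [0.0, 1.0, 2.0], [0.0, 1.0, 4.0]
print(lagrange_eval(xs, ys, 1.5))  # -> 2.25
```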
The course focuses on the geometric aspects of computer vision: the geometry of image formation and its use for 3D reconstruction and calibration. The objective of the course is to introduce the formal tools and results that are necessary for developing multi-view reconstruction algorithms. The fundamental tools introduced come from affine and projective geometry, which are essential to the development of image formation models. Additional algebraic tools, such as exterior algebras, are also introduced at the beginning of the course. These tools are then used to develop formal models of geometric image formation for a single view (camera model), two views (fundamental matrix), and three views (trifocal tensor); 3D reconstruction from multiple images; and auto-calibration.
- Computer Vision (16-721 or equivalent)
- The Geometry of Multiple Images. Faugeras and Luong, MIT Press., Book Requirement: Not Specified
- Multiple View Geometry in Computer Vision, Richard Hartley and Andrew Zisserman, Cambridge University Press, June 2000., Book Requirement: Not Specified
- Fundamentals of projective, affine, and Euclidean geometries
- Invariance and duality
- Algebraic tools
- Single view geometry: The pinhole model
- Calibration techniques
- 2-view geometry: The Fundamental matrix
- 2-view reconstruction
- 3-view geometry: The trifocal tensor
- Parameter estimation and uncertainty
- n-view reconstruction
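The single-view pinhole model at the top of the outline can be sketched numerically. This is an illustrative example assuming zero skew; the intrinsics, pose, and point are invented:

```python
# Pinhole camera model: a 3D point X projects to pixel x ~ K [R | t] X.
def project(K, R, t, X):
    """Project a 3D point with intrinsics K (zero skew) and pose (R, t) to pixel (u, v)."""
    # Camera-frame coordinates: Xc = R X + t
    Xc = [sum(R[i][j] * X[j] for j in range(3)) + t[i] for i in range(3)]
    # Apply intrinsics and dehomogenize by the depth Xc[2]
    u = (K[0][0] * Xc[0] + K[0][2] * Xc[2]) / Xc[2]
    v = (K[1][1] * Xc[1] + K[1][2] * Xc[2]) / Xc[2]
    return u, v

K = [[800, 0, 320], [0, 800, 240], [0, 0, 1]]  # focal 800 px, principal point (320, 240)
I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]          # identity rotation, camera at origin
print(project(K, I, [0, 0, 0], [0.1, -0.05, 2.0]))  # -> (360.0, 220.0)
```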
Every day we observe an extraordinary array of light and color phenomena around us, ranging from the dazzling effects of the atmosphere to the complex appearance of surfaces, materials, and underwater scenes. For a long time, artists, scientists, and photographers have been fascinated by these effects and have focused their attention on capturing and understanding these phenomena. In this course, we take a computational approach to modeling and analyzing these phenomena, which we collectively call “visual appearance.” The first half of the course focuses on the physical fundamentals of visual appearance, while the second half of the course focuses on algorithms and applications in a variety of fields such as computer vision, graphics, and remote sensing, and technologies such as underwater and aerial imaging. This course unifies concepts usually learned in the physical sciences and their application in imaging sciences, and will include the latest research advances in this area. The course will also include a photography competition.
- An undergraduate or graduate class in Computer Vision or in Computer Graphics
A graduate seminar course in Computer Vision with emphasis on representation and reasoning for large amounts of data (images, videos and associated tags, text, GPS locations, etc.) toward the ultimate goal of Image Understanding. We will be reading an eclectic mix of classic and recent papers on topics including: Theories of Perception, Mid-level Vision (Grouping, Segmentation, Poselets), Object and Scene Recognition, 3D Scene Understanding, Action Recognition, Contextual Reasoning, Image Parsing, Joint Language and Vision Models, etc. We will be covering a wide range of supervised, semi-supervised, and unsupervised approaches for each of the topics above.
- Graduate Computer Vision or Machine Learning course
Data-driven learning techniques are now an essential part of building robotic systems designed to operate in the real world. These systems must learn to adapt to changes in the environment, learn from experience, and learn from demonstration. In particular we will cover three important sub-fields of Machine Learning applied to robotic systems: (1) We will cover Online Learning, which can be used to give robotic systems the ability to adapt to changing environmental conditions. (2) We will cover Reinforcement Learning, which takes into account the tradeoffs between exploration and exploitation to learn how to interact with the environment. We will also cover Deep Reinforcement Learning techniques in the context of real-world robotic systems. (3) We will cover Apprenticeship Learning (Imitation Learning and Inverse Reinforcement Learning) which is critical for teaching robotic systems to learn from expert behavior. Prerequisites: Linear Algebra, Multivariate Calculus, Probability theory.
Robot localization and mapping are fundamental capabilities for mobile robots operating in the real world. Even more challenging than these individual problems is their combination: simultaneous localization and mapping (SLAM). Robust and scalable solutions are needed that can handle the uncertainty inherent in sensor measurements, while providing localization and map estimates in real-time. We will explore suitable efficient probabilistic inference algorithms at the intersection of linear algebra and probabilistic graphical models. We will also explore state-of-the-art systems.
- simultaneous localization and mapping (SLAM)
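A toy version of the SLAM estimation problem (illustrative only; the one-dimensional robot, the measurements, and the unit weighting are invented for the example) shows how least squares reconciles inconsistent odometry and loop-closure measurements:

```python
# Toy 1D pose-graph SLAM: poses x0 = 0 (fixed), x1, x2.
# Odometry says each step moved +1.0; a loop-closure measurement says x2 - x0 = 2.2.
# Least squares spreads the 0.2 m of disagreement across both poses.

def solve_2x2(a, b):
    """Solve a 2x2 linear system a x = b by Cramer's rule."""
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    return [(b[0] * a[1][1] - a[0][1] * b[1]) / det,
            (a[0][0] * b[1] - b[0] * a[1][0]) / det]

u1, u2, z = 1.0, 1.0, 2.2
# Normal equations from minimizing
#   (x1 - u1)^2 + (x2 - x1 - u2)^2 + (x2 - z)^2
A = [[2.0, -1.0],
     [-1.0, 2.0]]
b = [u1 - u2, u2 + z]
x1, x2 = solve_2x2(A, b)
print(x1, x2)  # both poses shifted slightly past the raw odometry estimate
```

Real SLAM systems solve the same kind of sparse least-squares problem, just with thousands of poses, nonlinear measurement models, and measurement covariances as weights.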
This is an advanced graduate-level class on the theory and algorithms that enable robots to physically manipulate their world, on their own or in collaboration with people. The class will first focus on functional aspects of manipulation, such as synthesizing robust and stable grasps for dexterous hands and motion planning in these spaces, as well as learning for manipulation, such as how to predict stable grasps from demonstration and experience. Moving forward, we will discuss additional requirements that arise from performing manipulation tasks collaboratively with people: moving from purely functional aspects of motion to incorporating the human into the loop, and coordinating human and robot actions via understanding and expressing intent.
This course will develop the robot that CMU will drive on the moon to win the Lunar XPrize while mentoring the tributary technologies and creative process. The course will also claim the first cash from Google’s Lunar Milestone Prize by demonstrating flight readiness of the rover. The tributary technologies addressed in this course include mechanisms, actuation, thermal regulation, power, sensing, computing, communication, and operations. Process includes robot development and verification of functionality, reliability, and flight readiness. Relevant skills include robotics, mechanics, electronics, software, fabrication, testing, documentation, and systems engineering. The course is appropriate for a broad range of interests and experience.
The course provides an introduction to the mechanics and control of legged locomotion with a focus on the human system. The main topics covered include fundamental concepts, muscle-skeleton mechanics, and neural control. Examples of bio-inspiration in robots and rehabilitation devices are highlighted. By the end of the course, you will have the basic knowledge to build your own dynamics and control models of animal and human motion. The course develops the material in parallel with an introduction to MATLAB's Simulink and SimMechanics environments for modeling nonlinear dynamic systems. Assignments and team projects will let you apply your knowledge to problems of animal and human motion in theory and computer simulation.
This course presents an overview of robotics in practice and research with topics including vision, motion planning, mobile mechanisms, kinematics, inverse kinematics, and sensors. In course projects, students construct robots driven by a microcontroller, with each project reinforcing the basic principles developed in lectures. Students nominally work in teams of three: an electrical engineer, a mechanical engineer, and a computer scientist. This course will also expose students to some of the contemporary happenings in robotics, including current robot lab research, applications, robot contests, and robots in the news.
Planning is one of the core components that enable robots to be autonomous. Robot planning is responsible for deciding, in real time, what the robot should do next, how to do it, where to move next, and how to move there. This class is an in-depth study of popular planning techniques in robotics and examines their use in ground and aerial robots, humanoids, mobile manipulation platforms, and multi-robot systems. Students learn the theory of these methods and also implement them in a series of programming-based projects. To take the class, students should have taken an Intro to Robotics class and have a good knowledge of programming and data structures.
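As a flavor of the material, one of the most widely used planning algorithms, A* search on a grid, can be sketched as follows (an illustrative toy, not course code; the grid and unit-cost model are invented):

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected grid; grid[r][c] == 1 marks an obstacle.
    Returns the number of steps in the shortest path, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    frontier = [(h(start), 0, start)]       # (f = g + h, g, node)
    best = {start: 0}
    while frontier:
        _, g, node = heapq.heappop(frontier)
        if node == goal:
            return g
        if g > best.get(node, float("inf")):
            continue  # stale queue entry
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = ng
                    heapq.heappush(frontier, (ng + h((nr, nc)), ng, (nr, nc)))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))  # -> 6 (detour around the wall)
```

The admissible Manhattan heuristic is what distinguishes A* from plain Dijkstra search: it focuses expansion toward the goal without sacrificing optimality.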
- Introduction to Robotics
This course is a comprehensive hands-on introduction to the concepts and basic algorithms needed to make a mobile robot function reliably and effectively. We will work in small groups with small robots that are controlled wirelessly from your laptop computers. The robots are custom-designed mini forktrucks that can move pallets from place to place, just like commercial automated guided vehicles do today. The robots are programmed in the modern MATLAB programming environment. It is a pretty easy language to learn and a very powerful one for prototyping robotics algorithms. You will get a lot of experience in this course in addition to some theory. Lectures are focused on the content of the next lab. There is a lab every week, and the labs build on each other so that a complete robot software system results. The course will culminate in a class-wide robot competition that tests the performance of all of the code implemented during the semester. In order to succeed in the course, students must 1) have a second-year science/engineering-level background in mathematics (matrices, vectors, coordinate systems), 2) have already mastered at least one procedural programming language like C or Java, and 3) have enough experience to be reasonably prepared to write a 5000-line software system in 13 weeks with the help of one or two others. When the course is over, you will have written a single software system that has been incrementally extended in functionality and regularly debugged throughout the semester.
Foundations and principles of robotic kinematics. Topics include transformations, forward kinematics, inverse kinematics, differential kinematics (Jacobians), manipulability, and basic equations of motion. The course also includes programming on robot arms.
- forward kinematics
- inverse kinematics
- differential kinematics (Jacobians)
- basic equations of motion
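As an illustration of differential kinematics (not course code; the arm parameters, gain, and iteration count are invented), the Jacobian-transpose method solves inverse kinematics for a planar two-link arm by iterating small joint updates:

```python
import math

def fk(t1, t2, l1=1.0, l2=1.0):
    """End-effector position of a planar 2-link arm."""
    return (l1 * math.cos(t1) + l2 * math.cos(t1 + t2),
            l1 * math.sin(t1) + l2 * math.sin(t1 + t2))

def jacobian(t1, t2, l1=1.0, l2=1.0):
    """Analytic 2x2 Jacobian d(x, y)/d(t1, t2)."""
    s1, c1 = math.sin(t1), math.cos(t1)
    s12, c12 = math.sin(t1 + t2), math.cos(t1 + t2)
    return [[-l1 * s1 - l2 * s12, -l2 * s12],
            [ l1 * c1 + l2 * c12,  l2 * c12]]

def ik_jacobian_transpose(target, t1=0.5, t2=0.5, alpha=0.1, iters=500):
    """Gradient descent on the position error: dtheta = alpha * J^T * error."""
    for _ in range(iters):
        x, y = fk(t1, t2)
        ex, ey = target[0] - x, target[1] - y
        J = jacobian(t1, t2)
        t1 += alpha * (J[0][0] * ex + J[1][0] * ey)
        t2 += alpha * (J[0][1] * ex + J[1][1] * ey)
    return t1, t2

t1, t2 = ik_jacobian_transpose((1.0, 1.0))
print(fk(t1, t2))  # close to the target (1.0, 1.0)
```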
This course provides a comprehensive introduction to computer vision. Major topics include image processing, detection and recognition, geometry-based and physics-based vision, sensing and perception, and video analysis. Students will learn basic concepts of computer vision and gain hands-on experience solving real-life vision problems.
Professor: Simon Lucey
Semester: Fall and Spring
Course Description: Computer vision is a discipline that attempts to extract information from images and videos. Nearly every smart device on the planet has a camera, and people are increasingly interested in how to develop apps that use computer vision to perform an ever-expanding list of tasks, including 3D mapping, photo/image search, people/object tracking, augmented reality, etc. This course is intended for students who are not familiar with computer vision, but want to come up to speed rapidly with the latest in environments, software tools, and best practices for developing computer vision apps. No prior knowledge of computer vision or machine learning is required, although a strong programming background is a must (at a minimum, good knowledge of C/C++). Topics will include using conventional computer vision software tools (OpenCV, MATLAB toolboxes, VLFeat, CAFFE), and development on iOS devices using mobile vision libraries such as GPUImage and fast math libraries like Armadillo and Eigen. For consistency, all app development will be in iOS, and it is expected that all students participating in the class have access to an Intel-based Mac running OS X Mavericks or later. Although the coursework will be focused on a single operating system, the knowledge gained from this class is intended to generalize to other mobile platforms such as Android.
Systems engineering examines methods of specifying, designing, analyzing, and testing complex systems. In this course, principles and processes of systems engineering are introduced and applied to the development of robotic devices. The focus is on robotic systems engineered to perform complex behavior. Such systems embed computing elements, integrate sensors and actuators, operate in a reliable and robust fashion, and demand rigorous engineering from conception through production. The course is organized as a progression through the systems engineering process of conceptualization, specification, design, and prototyping with consideration of verification and validation. Students completing this course will engineer a robotic system through its complete design and initial prototype. The project concept and teams can continue into the spring semester (16-474 Robotics Capstone) for system refinement, testing, and demonstration.
In this course, students refine the design of, then build, integrate, test, and demonstrate the performance of the robot they designed in the prerequisite Systems Engineering course (16-450). The students are expected to continue to apply the process and methods of Systems Engineering to track requirements, evaluate alternatives, refine the cyberphysical architecture, plan and devise tests, verify the design, and validate system performance. In addition, the students learn and apply Project Management techniques to manage the technical scope, schedule, budget, and risks of their project. The course consists of lectures, class meetings, reviews, and a final demonstration. Lectures cover core topics in Project Management and special topics in Systems Engineering. During class meetings the students and instructor review progress on the project and discuss technical and project-execution challenges. There are three major reviews, approximately at the end of each of the first three months of the semester. For each review, the students give a presentation and submit an updated version of the System Design and Development Document. The course culminates in a System Performance Validation Demonstration at the end of the semester. In addition, the students hold a special demonstration of their robotic system for the broader Robotics community.