First, a note about the course title: "non-imaging" precedes "sensors" because image sensors are intended to be taught in the computer vision course. Nevertheless, it is more functional to discuss several specific imaging systems in this, the sensors course - for example, several camera-based approaches to range sensing, in which the specific details of the image sensor per se are nearly irrelevant. The underlying principle is that image sensing devices fit most naturally into the computer vision course, whereas sensing systems that happen to employ those devices fit most naturally into the sensors course. At first sight this sounds backwards, but it really is the most sensible arrangement; careful examination of the curriculum as a complete package will clarify why.
Sensors fill several more-or-less distinct roles in robotics; the distinctions among these roles are actually sharper and clearer in defense robotics than in most other application areas. First, sensors are needed to monitor the internal state and condition of the robot; these - formally called proprioceptive sensors - are analogous to the human body's sensors for joint position, balance, internal temperature, and hunger. Second, sensors are needed so the robot can move and maneuver safely and correctly in its environment; these are analogous to our use of our eyes, skin, and sometimes ears and noses to move our bodies from one place to another, and to transform our body parts from one environment-determined configuration to another, e.g., adjusting your body so you fit comfortably in a particular chair. Third, sensors are needed to do the task the robot was sent to do, e.g., pick-and-place parts or use hand tools to assemble and disassemble multi-part objects. And finally, sometimes deploying another sensor constitutes the job; these are the sensors the robot was sent to deploy, e.g., a chemical vapor sniffer that will tell whether an explosive device is concealed in a trash heap, a gamma-ray spectrometer to detect and identify nuclear weapons materials, or sensors to remotely frisk a potential enemy for concealed weapons.
This is a sensors course vs. a sensing course: it is not intended to teach how to invent or build sensing devices or even sensing systems, but rather how to use them in the four contexts just outlined. Nevertheless, a sensor user who understands nothing about how a sensor works, how it can be fooled, and how it can fool him is a danger to himself and his whole team. Thus the sensors course in fact includes considerable material - approximately half - that teaches this fundamental understanding. The course begins with universal measurement principles, with the goal of ensuring that the student understands the limitations set by noise that mimics the desired signal, and that only part of this noise can be removed by smarter engineering, as nature ultimately sets fundamental limits that cannot be overcome. It then reviews topics and techniques in analog and digital electronics, communication, computing, and networking that determine how close a practical sensor will come to nature's limit. Optics is then reviewed from a measurement perspective - emphasizing the physical aspects of what determines the signal obtained over the geometrical aspects of how an image is formed - both because so many optical sensors are employed in defense robotics, and because a complete source-to-detector optical system provides a down-to-earth working example of an end-to-end sensor problem and its solution. The specific sensors and sensor systems discussed in approximately the last half of the course focus on the most critical defense robotics requirements: range sensors, navigation sensors, and threat sensors. In each case five to ten different practical solutions are taught, as it takes this many to span the range of practical requirements.
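The fundamental limit that the measurement-principles material refers to can be made concrete with a standard back-of-the-envelope example (a textbook illustration, not material drawn from the course itself): photon shot noise guarantees that a count of N detected photons fluctuates by about sqrt(N), so the signal-to-noise ratio of even a perfect, noiseless detector grows only as sqrt(N), and no amount of engineering can beat that.

```python
import math

def shot_noise_limited_snr(photons: float) -> float:
    """SNR of an ideal detector limited only by photon shot noise.

    With N detected photons the signal is N and the shot-noise
    standard deviation is sqrt(N), so SNR = N / sqrt(N) = sqrt(N).
    """
    return math.sqrt(photons)

# Quadrupling the light (or the exposure time) only doubles the SNR:
print(shot_noise_limited_snr(10_000))   # 100.0
print(shot_noise_limited_snr(40_000))   # 200.0
```

This is the sense in which "nature ultimately sets fundamental limits": better electronics can remove excess noise added by the instrument, but never the sqrt(N) floor inherent in the light itself.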
The packaging of computer vision (or machine vision) together with remote vision - i.e., television - is not the usual way courses on these topics are organized. It is an example of the principle that graduate-level courses are usually organized according to the research interests of the professors rather than according to the learning requirements of the students. In the study of the science and technology of artificial vision - perhaps more than in the study of any other topic in the robotics curriculum - it is to the students' advantage to learn vision for computers together with remote and enhanced vision for people. Both topics are crucial in practical robotics applications, and the lines between them are fuzzy and fluid. There is not time in the curriculum to also cover in detail the closely related fields of computer graphics - the modeling and simulation of cameras - and virtualized reality - the hybridization of vision and computer graphics - but these topics might well be added by the instructor of a fast-moving class.
This is a practical course intended to convey a working knowledge of how to make vision hardware work. This hardware - the remote and enhanced video system that turns teleoperation into telepresence - is arguably the most important sensor subsystem on a defense robotic system. "Practical" means that in this course we omit the mathematical elegance and theorem proving at the core of many advanced computer vision texts. Instead we aim to deliver the material correctly and usefully, qualitatively and quantitatively, but from a position of intuitive sensibility rather than rigorous proof.
The course begins with a review of the fundamentals of image capture and recording: cameras, lenses, sensors, and getting the image into the computer. Light and color are then introduced, covering the exposure and appearance aspects of imaging in addition to its more widely known geometrical aspects. The student is then ready for the meat of computer vision: feature extraction, motion, stereo, and image understanding. Feature extraction is largely a matter of mathematical operators - sums, differences, products, and ratios of pixels in the neighborhood of each pixel - that select image features like edges, corners, and faces. Consideration of motion extends this idea from the spatial neighborhood of each pixel in one image to its temporal neighborhood in a sequence of images, i.e., video. Consideration of stereo similarly extends the idea to perspective neighborhoods: what changes in the image, and how, when the camera is moved, or equivalently, when images are available simultaneously from two or more cameras. Especially important here is the almost miraculously simple realization that just two viewpoints, with nothing particularly special about them, provide enough additional information to recover the depth that disappears when the three-dimensional world is imaged onto a flat sensor. The collection of these three toolkits - plus some additional tricks, e.g., dynamically varying the lighting - enables limited but valuable automation of image understanding, distinguished from feature extraction by the semantic vs. simply arithmetic character of the conclusions that are drawn. With only a few exceptions, from the perspective of practical robotics applications, everything else exists only to enable image understanding and its application, both automatically and as a human-machine interaction aid.
The exceptions relate primarily to weaker versions of the latter capability, i.e., providing overall better-quality imagery to remote viewers, but stopping short of automatically calling out "important" parts.
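The neighborhood-operator idea behind feature extraction, and the two-viewpoint depth recovery behind stereo, can both be sketched in a few lines. The sketch below uses the standard 3x3 Sobel-style horizontal-gradient kernel (a signed sum of products over each pixel's neighborhood) and the usual pinhole-stereo relation Z = f * B / d for focal length f, baseline B, and disparity d; both are generic textbook examples, not specifics taken from the course.

```python
def sobel_x(image):
    """Horizontal-gradient response at each interior pixel: a signed
    sum of products over the 3x3 neighborhood - a typical feature
    extraction operator, responding strongly to vertical edges."""
    kernel = [(-1, 0, 1),
              (-2, 0, 2),
              (-1, 0, 1)]
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = sum(kernel[j][i] * image[y - 1 + j][x - 1 + i]
                            for j in range(3) for i in range(3))
    return out

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Pinhole-stereo depth Z = f * B / d: two ordinary viewpoints
    suffice to recover the depth lost on a flat sensor."""
    return focal_px * baseline_m / disparity_px

# A vertical step edge lights up the operator along the boundary:
step = [[0, 0, 10, 10]] * 4
print(sobel_x(step)[1])                        # [0, 40, 40, 0]
print(depth_from_disparity(800.0, 0.1, 4.0))   # 20.0 (metres)
```

The "motion" toolkit is the same arithmetic again, with the neighborhood extended along the time axis of a video sequence instead of across the pixel grid.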
Robotics and artificial intelligence are inextricably linked by the sense-think-act paradigm: a robot that does not in some useful sense think is either a puppet - under the direct low-level control of a human operator - or a cuckoo clock - an automaton that gives the appearance of intelligence until something goes wrong, but when that happens is recognized to actually be oblivious to its surroundings, its own actions, and its own health. And conversely, a thinking machine that cannot autonomously explore its environment cannot verify the veracity of the knowledge that has been programmed into it, so it can't really be called intelligent.
Artificial intelligence - at least from the perspective of practical robotics - is a large bag of mathematical methods and computer programming styles that are employed to empower a suitably designed machine - the act component of the robot - to respond in seemingly intelligent ways to conditions and situations in its environment detected by its sense components. Whether or not this adaptive behavior is "really" intelligence, or rather just a programmed appearance of intelligence, is more in the nature of a religious than a scientific question, and it will remain so until we understand our own brains and the brains of our close animal relatives well enough to know whether they differ qualitatively or only quantitatively from the computers in our robots. We will have achieved the goals of this curriculum if at its end our graduates are skilled enough to construct algorithms that elicit the usual complaint of anti-AI skeptics: "but it's not really intelligent; sure, it acts smart, but inside it's just an algorithm". In the robotics context cognition (thinking) means something less than in the human context: generally no more than "in possession of the working machinery to accept sensory information, process it logically, and, based on that logic, issue sensible orders to actuation machinery"; whether or not characteristics of human intelligence like motivation and creativity will ever emerge from man-built machines is a question this field is too new to answer. For now it is enough that they serve a few specific human needs.
To serve these human needs effectively, robots need in some sense to "understand" human intentions, and in some sense to "understand" the physics of our environment at a practical enough level that they can function in it no less sensibly than we would. As noted, AI is not yet far enough developed to do this in general, but it does tolerably well in sufficiently narrowly prescribed environments. The artificial intelligence course described subsequently will expose students of defense robotics to the range of mathematical methods and computer programming styles that span the AI discipline; it will make certain they understand the boundaries between what works now, what it is sensible to expect will probably work in their professional lifetimes, and what is pie-in-the-sky; and it will prepare them to implement real systems in the domain of what works now: statistics-based decision making, programs based on heuristic if-then rules, pattern recognition (essentially vector-space decomposition), neural networks (essentially continuous piecewise-linear approximation of empirically observed input-output behavior), and fuzzy logic (essentially a calculus for propagating if-then rules when the input conditions are gray levels vs. black-or-white). Although recently there has been a tendency to separate machine learning from AI and teach it as a separate course, this curriculum incorporates its fundamentals, including coverage of computationally effective techniques for automatically acquiring the statistics, discerning the rules, finding the eigenvectors, and iteratively identifying the parameter values of the knowledge representations employed in AI.
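Of the techniques listed, fuzzy logic is perhaps the easiest to make concrete: each if-then rule fires to the degree its gray-level input condition is true, and the rule outputs are blended in proportion. The sketch below is a minimal Mamdani-style one-input controller with invented membership functions and rule outputs (the distances, speeds, and rule set are illustrative assumptions, not anything specified in the course).

```python
def tri(x, a, b, c):
    """Triangular membership function: the degree (0..1) to which
    x is 'about b', rising from a and falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_speed(obstacle_m):
    """Two gray-level rules blended by weighted average:
       IF distance is NEAR THEN speed = 0.2 m/s
       IF distance is FAR  THEN speed = 1.0 m/s"""
    near = tri(obstacle_m, -2.0, 0.0, 4.0)   # fully 'near' at 0 m
    far  = tri(obstacle_m,  0.0, 4.0, 10.0)  # fully 'far' at 4 m
    if near + far == 0.0:
        return 1.0  # beyond the reach of all rules: treat as far
    return (near * 0.2 + far * 1.0) / (near + far)

# Halfway between 'near' and 'far', the command is halfway too:
print(fuzzy_speed(2.0))   # 0.6
print(fuzzy_speed(0.0))   # 0.2  (fully 'near')
```

The same blending idea scales to many inputs and rules; what changes in practice is how the membership functions and rule outputs are tuned, which is exactly where the machine-learning material mentioned above connects back to fuzzy logic.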
During this mini-course in the final semester the student will learn - by doing - to report project activities and conclusions in a professional manner. Each student's report will be prepared and presented in three different forms corresponding to the different contexts in which it is typically necessary for a technical professional to report his or her work:
A written internal company report that thoroughly journals and documents every step of the work "for the record", accompanied by an executive summary that summarizes the effort, outcome, and recommendations for the company's management. The essential feature of the company report is that it must be comprehensive, e.g., so as to allow the writer's successor to replicate the work and verify its outcome. The executive summary must be comprehensible to management who do not have the technical background to understand every detail, but who need to understand the relevance and future potential of the work to the company's business.
A short written paper suitable for publication in a technical journal or a professional conference that is known for having high standards for the papers it accepts. Technical papers of this sort are generally written "for the expert", so details that should be known to experts are omitted except, as appropriate, by citing existing literature. The student should target the paper to a specifically identified journal or conference appropriate for the content. The paper should be prepared and presented following the journal's or conference's published guidelines regarding length, format, etc.
An oral presentation such as might be given at the targeted conference in a tutorial session - i.e., not as a research paper targeting an audience of experts - or that might be presented by a former student, now graduated, returning as a guest lecturer in a future edition of the defense robotics master's program. The presentation will be made orally to an audience of faculty and students who will grade the presentation. Before beginning the oral presentation the presenter will distribute a hard-copy handout of his/her annotated PowerPoint or equivalent slides, and will deliver an electronic copy of the presentation to the program administrator.