This page is provided for historical and archival purposes only. While the seminar dates are correct, we offer no guarantee of informational accuracy or link validity. Contact information for the speakers, hosts, and seminar committee is certainly out of date.
The performance of existing machine vision systems still lags significantly behind that of biological vision. The two most critical features presently missing from machine vision are low-latency processing and top-down sensory adaptation. Low-latency vision processing is essential in many robotic applications dealing with fast events, while top-down sensory adaptation is desirable for guiding sensing and processing toward more robust performance. I will talk about ways to reduce latency and provide top-down sensory adaptation in vision systems by implementing global operations in computational sensors.
Computational VLSI sensors incorporate computation at the level of sensing and can both reduce latency and facilitate top-down sensory adaptation. In this context, global operations are important because: (1) in perception, each decision is a kind of global, or overall, conclusion necessary for the coherent interaction of a machine with its environment, and (2) global operations produce a few quantities describing the environment which can be quickly transferred and/or processed to produce an appropriate action for the machine.
The main difficulty with implementing global operations comes from the necessity of bringing together all or most of the data in the input data set. My work proposes two mechanisms for implementing global operations in computational sensors: (1) sensory attention, and (2) the intensity-to-time processing paradigm.
Sensory attention is based on the premise that salient features within the retinal image represent important global features of the entire data set. By selecting a small region of interest around the salient feature for subsequent processing, sensory attention eliminates extraneous information and allows the processor to handle a small amount of data at a time and make fast decisions about the environment. Sensory attention is used for a VLSI implementation of a tracking computational sensor -- a computational sensor that attends to and tracks a visual stimulus in the retinal image.
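The region-of-interest idea above can be sketched in software. The following is a minimal illustrative sketch, not the sensor's actual circuit: it treats the brightest pixel as the salient feature and crops a small window around it, so that subsequent processing touches only a handful of values instead of the full image. The function name and window size are assumptions for illustration.

```python
# Hypothetical sketch of attention-driven region-of-interest selection:
# pick the most salient pixel (here, simply the brightest), then keep only
# a small window around it for subsequent processing.

def select_roi(image, half=1):
    """Return the (2*half+1)-wide window centered on the brightest pixel."""
    rows, cols = len(image), len(image[0])
    # Locate the salient feature: the maximum-intensity pixel.
    r, c = max(((i, j) for i in range(rows) for j in range(cols)),
               key=lambda rc: image[rc[0]][rc[1]])
    # Clamp the window to the image bounds.
    r0, r1 = max(0, r - half), min(rows, r + half + 1)
    c0, c1 = max(0, c - half), min(cols, c + half + 1)
    return [row[c0:c1] for row in image[r0:r1]]

img = [[1, 2, 1, 0],
       [2, 9, 3, 1],
       [1, 3, 2, 0],
       [0, 1, 0, 0]]
roi = select_roi(img)  # 3x3 window around the peak at row 1, column 1
```

A tracker would repeat this selection frame after frame, re-centering the window on the stimulus so the processor never handles more than the small window of data at a time.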
The intensity-to-time processing paradigm is based on the notion that stronger signals elicit responses before weaker ones, allowing a global processor to make decisions based on only a few inputs at a time. The more time allowed, the more responses are received; thus the global processor incrementally builds a global decision based on several, and eventually on all, of the inputs. The key is that some preliminary decisions about the retinal image can be made as soon as the first responses are received. The intensity-to-time processing paradigm is used for the VLSI implementation of a sorting computational sensor -- a computational sensor that sorts input stimuli by their intensity as they are being sensed.
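The sorting behavior falls out of the timing itself, which a small simulation can make concrete. This is a hedged software sketch of the principle, not the VLSI circuit: each pixel is assigned a response delay inversely proportional to its intensity, so simply reading out responses in order of arrival yields the pixels sorted from brightest to dimmest. The function name and the 1/intensity delay model are assumptions for illustration.

```python
# Hypothetical simulation of intensity-to-time processing: stronger stimuli
# respond sooner, so the arrival order of responses IS the sorted order.

def firing_order(intensities):
    """Return pixel indices in the order their responses would arrive."""
    # Model the delay as inversely proportional to intensity.
    delays = [(1.0 / i, idx) for idx, i in enumerate(intensities)]
    # Earlier-arriving (smaller-delay) responses come first.
    return [idx for _, idx in sorted(delays)]

pixels = [30, 120, 75, 200]
order = firing_order(pixels)         # brightest pixel's index arrives first
ranked = [pixels[i] for i in order]  # intensities in descending order
```

Note that a global processor consuming this stream need not wait for all responses: after the first k arrivals it already knows the k brightest pixels, which is what permits the preliminary decisions described above.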
I will present the VLSI implementation and experimental results for both the tracking computational sensor and sorting computational sensor.
Vladimir Brajovic received an M.S. in Electrical and Computer Engineering from Rutgers University and, recently, a Ph.D. in Robotics from Carnegie Mellon University. His research interests are in VLSI for computer vision and robotics.