PhD Speaking Qualifier
Carnegie Mellon University
10:00 am to 11:00 am
Deep generative models have many content-creation applications, such as graphic design, e-commerce, and virtual try-on. However, current work focuses mainly on synthesizing realistic visual outputs, often ignoring other sensory modalities such as touch, which limits physical interaction with users. The main challenges for multi-modal synthesis lie in the significant scale discrepancy between vision and touch sensing, and in the lack of an explicit mapping from touch-sensing data to a haptic rendering device.
The rendering device will be available for a demo after the presentation. Everyone is welcome to try it out!