
Voices of XR: Sanjeel Parekh (In-Person / Online)

Audio-Visual Scene Understanding and AR/VR

Sanjeel Parekh is a research scientist at Meta Reality Labs Research. His research primarily focuses on building machine learning tools for problems involving audio-visual data such as source separation, event detection, and speech enhancement. 

He earned his PhD in computer science in 2019 from Technicolor and Télécom Paris, Université Paris-Saclay. His thesis was on learning representations for robust audio-visual scene analysis. His broader interests include multimedia and ML research, music, philosophy, math, and machines.

His talk will focus on audio-visual scene understanding and how the field appears through the lens of augmented and virtual reality. Processing multi-sensory information to robustly detect and respond to objects and events in our surroundings lies at the heart of human perception. What does it take to impart such ability to machines? In this talk, he will explore this question in two parts: first, through some of his work on multimodal and interpretable ML methods for audio-visual scene analysis; he will then outline research challenges and opportunities posed in the context of AR/VR, delving into a few in greater detail. A secondary goal of this presentation is to provide an overview of the lab's open research initiatives for collaboration with the broader research community.


The Voices of XR speaker series is made possible by Kathy McMorran Murray and the National Science Foundation (NSF) Research Traineeship (NRT) program as part of the Interdisciplinary Graduate Training in the Science, Technology, and Applications of Augmented and Virtual Reality at the University of Rochester (#1922591).

Date: Monday, April 29, 2024
Time: 2:00pm - 3:15pm
Time Zone: Eastern Time - US & Canada
Location: Studio X - Carlson 1st Floor
Library: Carlson
Categories: Studio X, Voices of XR
Registration has closed.

Event Organizer

Studio X