An analysis of aural and visual cross-modal recognition of paths of moving sources for authoring and editing trajectories in 3D audio composition and production.

Current advanced audio playback systems can give listeners a sense of 3D immersion. While the technology for listening to 3D audio can provide a good experience, the tools used to create 3D audio content have not kept pace with these advances. One major reason is that human aural localization is less accurate than visual localization, making it difficult for users to author 3D experiences, since common authoring tools rely on visual interaction. The proposed research project will study how 3D audio systems affect listeners' ability to track moving sounds by examining how visual representations are associated with different paths of moving sounds. We expect our findings to reveal measurable discrepancies between the visual representation and the path of the moving sound, which will be used to design and evaluate new input methods for creating 3D audio content.

Faculty Supervisor: Catherine Guastavino

Justin Dan Mathew

Computer Science

McGill University


