Action-driven 3D Indoor Scene Modeling

3D indoor scenes are ubiquitously needed in virtual worlds, e.g., 3D games, movies, and virtual reality. These scenes provide the essential virtual environments for 3D characters to perform daily activities and tasks. Current scenes in publicly available datasets, e.g., Trimble 3D Warehouse, are usually clean and well organized, and may not be sufficient to serve as the realistic environments needed in applications that involve human interaction. In this project, we aim to produce scenes that appear to have been used or interacted with by humans. We propose to learn human actions from various online data sources, e.g., publicly available RGB-D datasets, online video streams, or still images, and apply the learned actions to generate "messed-up" indoor scenes. By analyzing how humans interact with indoor objects, we also aim to build a generic action model that can be applied to different scene categories and to use this model for action-driven scene understanding and modeling in the future.

Faculty Supervisor: Richard Zhang

Rui Ma

Computer Science

Simon Fraser University


