Hand Pose Reconstruction with Advanced Sensors and Deep Learning

This project explores 3D hand pose reconstruction using machine learning models driven by sensor data from innovative input devices. The goal is to propose an approach that provides real-time, high-fidelity hand reconstruction and to understand how users perceive its quality, in order to improve user experience and social interaction in AR/VR. The proposed methodology is to train a deep learning network that learns the mapping from sensor signals to hand pose joint locations. User studies will be performed to assess the quality of the predicted hand motions, and new evaluation metrics will be explored with respect to user perception ratings in different application scenarios. Training data will be collected using both the Tactual wearable device and motion capture systems. This project will allow Tactual Labs to tailor and develop its Prism sensing technology for new AR/VR products and to prepare new sensing applications in valuable markets such as Automotive and Assisted Living.
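The core mapping described above can be sketched as a small regression network from a sensor-signal vector to 3D joint coordinates. The dimensions below (64 sensor channels, a 21-joint hand skeleton) and the untrained, randomly initialised weights are illustrative assumptions, not details from the project; they show only the shape of the problem, not Tactual's actual model.

```python
import numpy as np

# Hypothetical dimensions: the listing does not specify the sensor
# channel count or joint set, so these are illustrative assumptions.
N_SENSOR_CHANNELS = 64   # raw signals from the wearable device
N_JOINTS = 21            # a common hand-skeleton joint count
HIDDEN = 128

rng = np.random.default_rng(0)

# Randomly initialised weights stand in for a trained network.
W1 = rng.standard_normal((N_SENSOR_CHANNELS, HIDDEN)) * 0.1
b1 = np.zeros(HIDDEN)
W2 = rng.standard_normal((HIDDEN, N_JOINTS * 3)) * 0.1
b2 = np.zeros(N_JOINTS * 3)

def predict_hand_pose(signals: np.ndarray) -> np.ndarray:
    """Map one sensor-signal vector to 3D joint locations (N_JOINTS x 3)."""
    h = np.maximum(0.0, signals @ W1 + b1)  # ReLU hidden layer
    return (h @ W2 + b2).reshape(N_JOINTS, 3)

pose = predict_hand_pose(rng.standard_normal(N_SENSOR_CHANNELS))
print(pose.shape)  # (21, 3)
```

In a real pipeline the weights would be fit by minimising the distance between predicted and motion-capture joint positions over the collected training set.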

Faculty Supervisor:

Fanny Chevalier; Andrei Badescu; Arvind Gupta


Jianda Chen


Tactual Labs Co.


Computer Science


Professional, scientific and technical services


University of Toronto


