Increasing Robustness of 3D Multimedia Interaction and Massive Data Analysis for Augmented Reality with AI Techniques - QC-136

Preferred Disciplines: Software, Software Engineering (Masters, PhD or Post-Doc)
Company: Anonymous
Project Length: Project 1: 3 units (1 for Master (4-6 months), 2 for PhD/Postdoc (8-12 months)); Project 2: 3 units for PhD/Postdoc (12-18 months)
Desired start date: Project 1: as soon as possible, Project 2: September 2018
Location: Montreal, QC
No. of Positions: 3
Preferences: None

About Company:

The company is located in the Montreal area and has specialized in interactive multimedia installations and devices since 2000. It designs, builds and delivers scenographic, architectural and museum experiences meant to touch, amaze or surprise. For several years, the company has actively pursued R&D in interactive multimedia entertainment systems (SMID) in architectural spaces and in managing the flow of people through these spaces. These activities have led it to deepen its central SMID approach by increasing the level of interaction and intelligence of its multimedia installations. Multiple projects for architectural-scale or immersive projections (hotels, museums, corporate headquarters, etc.), as well as numerous interactive spatial installations, projection surfaces and 3D tables, have been deployed all over the world.

Project Description:

As part of an industrial research project and of the company's activities, the candidates will contribute to ongoing research and development on a 3D capture solution providing real-time interaction with user gestures and person-object identification in an immersive physical environment using spatial augmented reality. Recent research on a version of the interactive platform has highlighted the need to make the detection and prediction of interactions, as well as the calibration of the capture devices, more robust. The platform fuses multiple cameras and projection systems scattered throughout the space, which must be precisely aligned. To guarantee the best accuracy at all times, the 3D calibration algorithms for capture cameras and projectors must be improved with respect to the projection and interaction surfaces of the space. In addition, the techniques must be self-calibrating, requiring neither human intervention nor a calibration target (test pattern).

Project 1

The research project consists in improving the current calibration techniques, both 2D-based for projection and in 3D, by introducing machine learning techniques into certain aspects of the process. In particular, an automatic method for determining intrinsic and extrinsic parameters is sought, one that improves the choice of descriptors according to the surface type, 3D appearance and texture. Color calibration and image blending are also candidates for improvement through machine learning techniques.
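
As background for the extrinsic-calibration work, one classical building block (an illustrative sketch only, not the company's pipeline) is recovering the rigid transform between two capture devices from matched 3D points, such as those produced by descriptor matching between depth cameras, using the closed-form Kabsch algorithm:

```python
import numpy as np

def extrinsics_from_correspondences(P, Q):
    """Recover rotation R and translation t such that Q ≈ R @ P + t,
    given matched 3D points P, Q of shape (N, 3) (Kabsch algorithm).

    Illustrative sketch: in a real multi-camera rig the matches would
    come from a descriptor-based feature matcher, with outliers
    rejected (e.g. by RANSAC) before this closed-form step.
    """
    cp, cq = P.mean(axis=0), Q.mean(axis=0)          # centroids
    H = (P - cp).T @ (Q - cq)                        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Correct for a possible reflection so R is a proper rotation.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t
```

Given noise-free correspondences the transform is recovered exactly; with noisy real-world matches it is the least-squares optimum, which is why it remains a useful baseline against which learned descriptor improvements can be measured.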

Project 2

This project targets the use of machine learning algorithms to increase the platform's interaction level: improving the precision of person capture (skeletons, heads, limbs, faces, gaze direction) while also providing predictions of positions, gestures and interactions with elements of the environment. It is also desirable to better detect and manage false positives, occlusions, and the merging of people and objects in interaction spaces. Finally, when massive data streams drive interactive multimedia rendering and interfaces, large-scale data analysis with machine learning techniques and deep neural networks is needed to surface trends and produce sophisticated predictions.
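
To illustrate the prediction side, a minimal classical baseline (purely illustrative, not the platform's method) for predicting a tracked person's next position is a constant-velocity Kalman filter over noisy 2D detections; the learning-based approaches targeted by the project would replace or augment such filters:

```python
import numpy as np

class ConstantVelocityKF:
    """Minimal 2D constant-velocity Kalman filter over state
    [x, y, vx, vy]: a baseline for predicting where a tracked
    person will be next, given noisy per-frame detections."""

    def __init__(self, dt=1.0, q=1e-3, r=1e-1):
        self.F = np.eye(4)                 # state transition
        self.F[0, 2] = self.F[1, 3] = dt
        self.H = np.eye(2, 4)              # we observe position only
        self.Q = q * np.eye(4)             # process noise
        self.R = r * np.eye(2)             # measurement noise
        self.x = np.zeros(4)
        self.P = np.eye(4)

    def predict(self):
        """Propagate the state one time step; returns predicted (x, y)."""
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, z):
        """Fuse a new position measurement z = (x, y)."""
        y = z - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
```

After a short burn-in on a steadily moving target, `predict()` extrapolates close to the true next position; the project's goal of predicting gestures and interactions goes well beyond what such a linear model can express.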

Research Objectives/Sub-Objectives:

Project 1

  • Improve the accuracy and automation of camera and projector array calibration methods in 3D and for complex architectural screens or surface configurations
  • Use machine learning techniques to improve the accuracy of determining extrinsic camera/projector parameters with descriptor-based techniques
  • Improve color calibration and blending according to the 3D structure of the screen and textures
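
As context for the color-calibration and blending objective, a common classical baseline (assumed here for illustration, not the company's method) is a gamma-aware cross-fade ramp in the overlap region of two projectors, so that their combined luminance stays constant on screen:

```python
import numpy as np

def blend_ramps(width, overlap, gamma=2.2):
    """Edge-blending weight ramps for two side-by-side projectors that
    share `overlap` pixels. A smoothstep cross-fade is computed in
    linear light, then encoded through the projector gamma so the two
    contributions sum to full brightness on the screen.

    Illustrative sketch: real installations also need per-channel
    color matching and black-level compensation.
    """
    t = np.linspace(0.0, 1.0, overlap)
    s = 3 * t**2 - 2 * t**3                    # smoothstep, C1-continuous
    left = np.ones(width)
    right = np.ones(width)
    left[-overlap:] = (1.0 - s) ** (1.0 / gamma)   # gamma-encode weights
    right[:overlap] = s ** (1.0 / gamma)
    return left, right
```

The gamma encoding matters: blending with linear weights in the gamma-encoded signal produces a visible bright or dark band in the overlap, which is exactly the kind of residual error that learned, texture-aware correction could address.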

Project 2

  • Use artificial intelligence (AI) techniques to increase the robustness of identification and tracking of objects and people (decrease false positives) in 3D interaction spaces and augmented reality (AR)
  • Increase the robustness of detection of faces, gaze, gestures and interactions with objects, furniture, surfaces and interactive spatial zones
  • Produce probability estimates of user interactions and trajectories as a function of person-object relationships and elements of the environment
  • Improve the accuracy and scope of trend and predictive analyses on massive data (media, Internet of Things and various sensor data streams)
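
In connection with reducing false positives in detection, one standard post-processing step (shown only as a generic example, independent of any particular detector) is greedy non-maximum suppression over detection boxes:

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression: keep the highest-scoring
    detection, drop any remaining box overlapping it above
    `iou_thresh`, and repeat. `boxes` is (N, 4) as [x1, y1, x2, y2];
    returns the indices of the boxes kept."""
    order = np.argsort(scores)[::-1]           # best score first
    keep = []
    while order.size:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        # Intersection-over-union of box i with every remaining box.
        x1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        y1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        x2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        y2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        order = rest[iou <= iou_thresh]
    return keep
```

Suppression of this kind removes duplicate detections of the same person but cannot, by itself, distinguish a merged person-object blob from two genuine targets, which is where the learned approaches sought by the project come in.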

Methodology:

Project 1

  • For the calibration objectives, evaluate current calibration performance and identify process phases that could benefit from machine learning techniques to increase accuracy or robustness; specify and evaluate the selected techniques
  • For the calibration objectives, evaluate machine learning techniques for determining descriptors better suited to the projection surfaces
  • For color calibration and image blending, consider AI techniques for dealing with different types of 3D screens and textures in real-life situations

Project 2

  • Explore and determine the AI techniques that would give the best results for camera and projector calibration, for the detection and identification of objects and people, for face and gaze detection, and for trend analysis and prediction on massive data
  • Develop experimental scenarios for each element: calibration; detection of objects and people; face identification and gaze detection

For both projects

  • Develop and experiment with prototypes; validate the precision, robustness and predictions achieved, as well as false positives and other unresolved problems; validate interactions with a group of users

Expertise and Skills Needed:

  • AI and Machine Learning Techniques (Neural Networks)
  • Calibration of 2D and 3D camera networks
  • 2D / 3D vision algorithms
  • Filtering techniques
  • GPU programming
  • C++ / Python

For more info or to apply to this applied research position, please

  1. Check your eligibility and find more information about open projects.
  2. Interested students need to get approval from their supervisor and send their CV along with a link to their supervisor’s university webpage by applying through the webform or directly to Jean-Philippe Valois at jpvalois(a)mitacs.ca.

French version available here

Program: