Spatio-Temporal Human Activity Recognition on Manufacturing Floors - ON-299

Desired discipline(s): Engineering - computer / electrical, Engineering, Engineering - other
Company: IFIVEO CANADA INC.
Project Length: 6 months to 1 year
Preferred start date: 01/01/2020
Language requirement: English
Location(s): Windsor, ON, Canada; Vaughan, ON, Canada; Canada
No. of positions: 1
Search across Mitacs’ international networks - check this box if you’d also like to receive profiles of researchers based outside of Canada: 
Yes

About the company: 

IFIVEO CANADA INC. has built a computer-vision-powered platform to track manual production processes in manufacturing. Over 70% of manufacturing tasks are manual, and human errors cost the industry over $6 trillion per year in losses. Workers are a blind spot for manufacturers, as managers lack visibility into their manual operations. At IFIVEO, we have built a platform that translates manual movement into activities using deep-learning-powered computer vision, providing management with real-time insights into their manual production processes.

The company is a subsidiary of the Silicon Valley-based start-up IFIVEO INC. IFIVEO CANADA INC. is headquartered in Windsor, ON, with a second site in Vaughan, ON.

Please describe the project.: 

In industry, inspection of manufacturing floors is handled by humans. The drawbacks of this repetitive manual task are that the collected data are highly prone to bias due to small sample sizes, and industrial engineers can miss relevant details during cycle time analysis; the resulting calculations often misrepresent the real factory situation. This research aims to develop an automated, computer-vision-based system to replace manual inspection. Some industries are trying to use wearable sensors to monitor worker activities; however, such an approach is not readily scalable and can be burdensome to the worker. In contrast, computer-vision-based activity recognition systems can function without interrupting workflow. This research attempts to find a scalable vision-based AI solution that can perform activity recognition on gateway devices. The goal is to develop a neural network architecture with low computational complexity, taking advantage of state-of-the-art computer vision and deep learning tools.

The main objective of this research project is to develop a novel artificial intelligence system that runs effectively on gateway devices in real time. The system will predict the duration of each action in a real-time video stream in order to automate cycle time analysis on manufacturing floors. This project will be carried out with Dr. Jonathan Wu at the University of Windsor.
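The cycle-time automation described above can be sketched in plain Python. Assuming the eventual system emits a label per video frame (the classifier itself is the research deliverable and is not shown here; the label names and frame counts below are made up for illustration), the duration of each contiguous action segment follows directly from the frame rate:

```python
from itertools import groupby

def action_durations(frame_labels, fps):
    """Compute the duration (in seconds) of each contiguous action segment.

    frame_labels: per-frame action labels, e.g. the argmax output of a
    frame-level activity classifier run on the video stream.
    fps: frame rate of the stream.
    Returns a list of (label, duration_seconds) tuples in temporal order.
    """
    # groupby collapses runs of identical consecutive labels into segments
    return [(label, sum(1 for _ in group) / fps)
            for label, group in groupby(frame_labels)]

# Hypothetical 30 fps stream: a worker picks a part, fastens it, then idles.
labels = ["pick"] * 45 + ["fasten"] * 90 + ["idle"] * 15
print(action_durations(labels, fps=30))
# → [('pick', 1.5), ('fasten', 3.0), ('idle', 0.5)]
```

In a deployed pipeline this aggregation would run incrementally on the gateway device rather than over a complete label list, but the per-segment timing logic is the same.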

Sub-objectives:

  • Literature review – Survey existing work on computer-vision-based activity recognition techniques that can be replicated on manufacturing floors.
  • Data collection and annotation – Collect real video samples from the client’s site, where workers perform their daily production line tasks. The data needs to be annotated so that neural networks can be trained on the collected data.
  • Framework development and optimization – Build a lightweight deep learning model for activity recognition that takes the data collected from the manufacturing floor as input and achieves an acceptable level of accuracy.
  • Validation – Validate the accuracy and inference speed of the prototype by testing it on real-time video data collected from the client’s site.
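As one concrete illustration of the annotation step above, video labeling tools typically produce segment-level annotations (start frame, end frame, action), which are then expanded into per-frame labels before a frame-level network can be trained. A minimal sketch, with hypothetical action names and a made-up background class:

```python
def segments_to_frame_labels(segments, num_frames, background="idle"):
    """Expand segment-level annotations into per-frame training labels.

    segments: list of (start_frame, end_frame, label), end exclusive.
    num_frames: total frame count of the annotated clip.
    Frames not covered by any segment receive the background label.
    """
    labels = [background] * num_frames
    for start, end, label in segments:
        # Clamp segments to the clip boundaries to tolerate sloppy annotations
        for i in range(max(0, start), min(num_frames, end)):
            labels[i] = label
    return labels

# Hypothetical 10-frame clip with two annotated action segments
segments = [(0, 3, "pick"), (5, 8, "fasten")]
print(segments_to_frame_labels(segments, num_frames=10))
# → ['pick', 'pick', 'pick', 'idle', 'idle',
#    'fasten', 'fasten', 'fasten', 'idle', 'idle']
```

The resulting per-frame labels pair one-to-one with extracted frames, which is the form most frame-level or clip-level activity recognition losses expect.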

Required expertise/skills: 

  • Team Player
  • Deep Learning and Computer Vision (knowledge of YOLOv3 is a big plus)
  • Good understanding of data annotation techniques, specifically for object detection and action classification
  • Knowledge of deep learning libraries such as TensorFlow, PyTorch, or MXNet (PyTorch preferred)
  • Python programming skills (required)
  • Knowledge of deploying deep learning models to the cloud (e.g., AWS SageMaker) is a plus
  • Outstanding communication skills
  • Ability to work in a fast-paced and rapidly changing environment