Develop data model for conversational analysis - ON-174

Preferred Disciplines: Machine Learning, Natural Language Processing (Masters or PhD)
Company: Summatti
Project Length: 4-6 months (1 unit)
Desired start date: As soon as possible
Location: Waterloo, ON
No. of Positions: 1
Preferences: None

About the Company: 

Summatti (legal corporate name: Microfluence Inc.) is a technology start-up based in Waterloo, Ontario, and part of the WLU Launchpad incubator. They aim to be the Google Analytics of support centers, giving organizations an unprecedented level of insight into their customers’ experience.

The founders, Rashmi & Sid, come from marketing and technology backgrounds, respectively, and have been part of the Waterloo community, and more recently its tech scene, for 17 years.

Project Description:

Summatti is a platform that analyzes data from various channels in a support center – email, chat, voice, web, etc. – to provide real-time insights into the customers’ experience.

We are currently piloting our platform with customers and looking to build and improve our machine learning models to analyze the context of the conversations being processed. The data sets obtained from customers contain conversations between a representative and one or more customers.

The main goal of this project is to research methodologies and models that improve the platform’s NLP capabilities and provide more accurate results.

Research Objectives:

  • Develop models that tag important parts of a conversation/sentence based on context of the conversation
  • Research/develop/augment models for relationship extraction and topic segmentation
  • Highlight patterns of issues based on an intrinsic understanding of the context of conversations analyzed 
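To make the topic-segmentation objective concrete, here is a minimal, library-free sketch in the spirit of TextTiling: a topic boundary is placed wherever lexical overlap between adjacent conversation turns drops below a threshold. The turns and threshold are invented for illustration; Summatti's actual models would use learned contextual representations rather than raw word overlap.

```python
# Toy topic segmentation over conversation turns (hypothetical data).
turns = [
    "hi my order has not arrived yet",
    "sorry your order has not arrived let me check",
    "your order shipped yesterday it should arrive tomorrow",
    "great also i was charged twice on my card",
    "i see a duplicate charge on your card i will refund it",
]

def overlap(a: str, b: str) -> float:
    """Jaccard overlap between the word sets of two turns."""
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb)

THRESHOLD = 0.10  # illustrative cutoff, not a tuned value
boundaries = [
    i + 1
    for i in range(len(turns) - 1)
    if overlap(turns[i], turns[i + 1]) < THRESHOLD
]
# A boundary is detected before turn 3, where the topic
# shifts from shipping to billing.
```

A real system would also need to handle paraphrase (no shared words, same topic), which is exactly where the embedding-based methods in the Methodology section come in.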

Methodology:

  • Explore the use of word2vec models to process textual data. (Voice recordings are currently transcribed with speech-to-text algorithms.)
  • The current stack allows for integrations with IBM Watson and Google TensorFlow to build models and utilize off-the-shelf AI capabilities in the project.
  • Open-source data from various sources – conversational corpora, social media, etc. – is used to train the models to tune out ‘noise’ in conversations. 
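As a rough illustration of why word2vec-style representations help here, the stdlib-only sketch below builds co-occurrence vectors over a tiny invented corpus: words that appear in similar contexts end up with similar vectors. A real pipeline would train actual word2vec embeddings with a library such as gensim; this toy merely demonstrates the distributional idea.

```python
from collections import defaultdict
from math import sqrt

# Tiny invented support-center corpus (illustrative only).
corpus = [
    "my order arrived late and the package was damaged",
    "the package arrived damaged and my order was late",
    "thank you for resolving my billing issue quickly",
    "the billing issue was resolved quickly thank you",
]

WINDOW = 2  # context window on each side of the target word
cooc = defaultdict(lambda: defaultdict(int))
for line in corpus:
    tokens = line.split()
    for i, w in enumerate(tokens):
        for j in range(max(0, i - WINDOW), min(len(tokens), i + WINDOW + 1)):
            if j != i:
                cooc[w][tokens[j]] += 1

def cosine(a: dict, b: dict) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b.get(k, 0) for k in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Words sharing contexts (shipping complaints) score higher
# than words from unrelated topics.
sim_related = cosine(cooc["late"], cooc["damaged"])
sim_unrelated = cosine(cooc["late"], cooc["billing"])
```

The same intuition, at scale and with dense learned vectors, is what lets a model separate signal from the conversational ‘noise’ mentioned above.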

Expertise and Skills Needed:

    • Familiarity with supervised/unsupervised machine learning concepts.
    • Experience with Python/TensorFlow and/or NLP toolkits such as spaCy/OpenNLP/CoreNLP.
    • Good to have: knowledge of Google Cloud Platform, data pipelines, and big data storage/management.

    For more info or to apply to this applied research position, please:

    1. Check your eligibility and find more information about open projects.
    2. Obtain approval from your supervisor, then apply through the webform with your CV and a link to your supervisor’s university webpage.