Automated Domain-Specific Essay Scoring - ON-447

Project type: Research
Desired discipline(s): Engineering - computer / electrical, Engineering, Computer science, Mathematical Sciences
Company: Anonymous
Project Length: 6 months to 1 year
Preferred start date: As soon as possible.
Language requirement: English
Location(s): Toronto, ON, Canada
No. of positions: 1
Desired education level: College, Undergraduate/Bachelor, Master's, PhD, Postdoctoral fellow
Preferred institutions: McGill University, McMaster University, Queen's University, University of Alberta, University of Guelph, University of Ottawa, University of Toronto, University of Waterloo, University of Windsor, Western University

Search across Mitacs’ international networks (check this box if you’d also like to receive profiles of researchers based outside of Canada): No

About the company: 

We are educational innovators who believe in transforming and improving the teaching experience and, in turn, positively impacting students' learning and maximizing their potential. Our experienced team of cognitive architects leverages deep learning AI models and advanced natural language processing capabilities. The AI model was trained collaboratively with education experts to analyze unstructured content and extract domain-specific concepts from student responses.
Our company automates the process of grading students' essays and instantly provides consistent, personalized feedback at scale. Our client, a large certification body representing a Canadian profession, experienced 84% scoring accuracy, 13x faster scoring results, and a 60% cost-reduction opportunity by applying our technology.

Describe the project.: 

We are looking to work with an expert in natural language processing to analyze the performance of our machine learning (AI) models and create a roadmap to enhance them. We are also looking to build automated, personalized feedback modules for students using our data (dataset: 2,000+ student responses to domain-specific essay questions from an accounting exam). The AI models do not evaluate sentence structure, grammar, or formatting; instead, they evaluate the content of the student responses and look for the key concepts that educators expect.
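As a rough illustration of this concept-matching idea (not the company's actual model), a minimal sketch might score a response by the fraction of expected key concepts it covers; the concept list, the `score_response` function, and the substring-matching approach below are all assumptions for illustration.

```python
# Minimal sketch of concept-based scoring (illustrative only; the rubric and
# matching approach are assumptions, not the company's trained model).
from typing import List

# Hypothetical rubric: key concepts an educator expects in an accounting answer.
KEY_CONCEPTS = [
    "accrual basis",
    "revenue recognition",
    "matching principle",
]

def score_response(response: str, concepts: List[str] = KEY_CONCEPTS) -> float:
    """Return the fraction of expected concepts mentioned in the response.

    A real system would use embeddings or a trained classifier rather than
    literal substring matching, which misses paraphrases.
    """
    text = response.lower()
    covered = sum(1 for concept in concepts if concept in text)
    return covered / len(concepts)

if __name__ == "__main__":
    answer = ("Under the accrual basis, revenue recognition occurs when the "
              "revenue is earned, not when cash is received.")
    print(f"Concept coverage: {score_response(answer):.2f}")  # 0.67
```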

The goal of this project, and of the company, is to provide educators with a complete version of the software platform to assist with essay grading.
The main tasks to be performed by the candidate:
• Analyze the results of the simulations (data science).
• Enhance the AI models (computer science/engineering).
• Program the grading and feedback loop (computer programming); a minimal sketch follows this list.
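A minimal sketch of what a grading-and-feedback loop could look like is below; the function names, feedback wording, and the `predict_concepts` stub are assumptions for illustration, not the company's implementation.

```python
# Illustrative grading and feedback loop (hypothetical structure; the stub
# model, feedback templates, and score formula are assumptions).
from typing import Dict, List

def predict_concepts(response: str) -> Dict[str, bool]:
    """Stand-in for the trained model: maps each rubric concept to found/missing."""
    # A real implementation would call the trained marking-assistant model here.
    return {"accrual basis": True, "matching principle": False}

def grade_and_feedback(response: str) -> Dict[str, object]:
    """Grade one response and assemble personalized feedback for the student."""
    found = predict_concepts(response)
    score = sum(found.values()) / len(found)
    missing: List[str] = [concept for concept, present in found.items() if not present]
    feedback = (
        "Well done, all key concepts were addressed."
        if not missing
        else "Consider revisiting: " + ", ".join(missing) + "."
    )
    return {"score": round(score, 2), "feedback": feedback}

if __name__ == "__main__":
    print(grade_and_feedback("Revenue is recorded on an accrual basis..."))
```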
Methodology/techniques to be used:
• Split data into training, test, and validation datasets (see the sketch after this list).
• Ingest the human markers’ grades associated with the selected dataset into the AI system.
• Develop an AI marking assistant model with machine-learning-based classifiers.
• Use training phrases developed by a senior marker who led the marking centre.
• Create page-level and sentence-level AI models.
• Create extra layers of reasoning and marking logic to enhance the AI model.
• Measure and record the accuracy and training effort for each test and validation run.
• Use an iterative continuous-improvement methodology, comparing human-marked results to AI-marked results to improve accuracy.
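As one possible illustration of the first and last steps above (splitting the data and comparing human-marked results against AI-marked results), a short sketch using pandas and scikit-learn follows; the column names, split ratios, and toy data are assumptions, not the project's actual dataset.

```python
# Sketch of the data-split and human-vs-AI comparison steps from the list above
# (illustrative only; column names, split ratios, and toy data are assumptions).
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Toy stand-in for the 2,000+ graded responses: one row per student response.
df = pd.DataFrame({
    "response_text": [f"student answer {i}" for i in range(20)],
    "human_grade": [i % 2 for i in range(20)],  # e.g. 1 = concept present, 0 = missing
})

# 70/15/15 split into training, validation, and test sets.
train_df, holdout_df = train_test_split(df, test_size=0.30, random_state=42)
val_df, test_df = train_test_split(holdout_df, test_size=0.50, random_state=42)

# After each training iteration, compare AI-assigned marks against the human
# markers' grades on held-out data and record the agreement.
def agreement(human_grades, ai_grades) -> float:
    """Fraction of responses where the AI mark matches the human mark."""
    return accuracy_score(human_grades, ai_grades)

# Placeholder "AI marks" so the example runs; a real run would use model output.
ai_grades = [1] * len(test_df)
print(f"Agreement with human markers: {agreement(test_df['human_grade'], ai_grades):.2f}")
```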

Required expertise/skills: 

• A master’s (or higher) degree in Computer Science, Software Engineering, or relevant experience.
• Software programming experience (Node.js, C, C++, Python, SQL).
• Experience in computer science and data science.
• Experience using Google AutoML, Google Cloud, and AWS.