Scrutability of the “black box”: machine learning & social justice in educational measurement

Canada relies heavily on language proficiency tests to inform high-stakes decisions such as admission to study programs, employment, residency, and citizenship. Increasingly, such tests are scored by automated systems employing machine learning models, which have been criticized for “black box” elements that render the resulting models of assessment inscrutable and unexplainable. In contrast, social justice arguments for educational measurement practices assert that test takers have a right to determine the measures most relevant to their needs and to benefit from assessment. What is needed is a machine learning application for language testing that operates in a transparent and scrutable manner and aligns with the level of individual autonomy characteristic of just educational measurement. This research project will conduct an analytic review of the technology underlying the partner organization’s TestPredikt© application to determine the extent to which evidence supports the application’s contribution to a framework of just educational measurement.
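
To make the scrutability concern concrete, the minimal sketch below (hypothetical, and not a description of the partner’s TestPredikt© system) shows one common way a “black box” scoring model can be probed: fitting a boosted-tree regressor to synthetic test-taker features and using permutation importance to report which features actually drive the predicted scores. The feature names, data, and choice of scikit-learn’s permutation importance are illustrative assumptions only.

```python
# A hypothetical illustration of probing an automated scoring model for scrutability.
# All feature names and data are synthetic; this is not the partner's system.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical test-taker features an automated scorer might use.
feature_names = ["essay_length", "lexical_diversity", "grammar_errors", "speech_rate"]
X = rng.normal(size=(500, len(feature_names)))
# Synthetic "human score" depending mostly on lexical diversity and grammar errors.
y = 2.0 * X[:, 1] - 1.5 * X[:, 2] + rng.normal(scale=0.5, size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The "black box": a boosted-tree scoring model.
model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)

# Permutation importance asks: how much does held-out performance drop when one
# feature is shuffled? Larger drops indicate the model leans on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)

for name, mean_imp, std_imp in zip(
    feature_names, result.importances_mean, result.importances_std
):
    print(f"{name:>18}: {mean_imp:.3f} +/- {std_imp:.3f}")
```

Reporting such feature attributions to test takers is one simple route toward the transparency that just educational measurement calls for, though it does not by itself settle whether the underlying construct is the one most relevant to their needs.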

Intern: Farid Talebloo
Faculty Supervisor: Gregory Tweedie
Province: Alberta