Stop. Evaluate and Listen: a discussion with Mitacs’ Evaluation team

03/23/2017

If you’re a Mitacs program participant, you’ve probably been exposed to an evaluation tool like a survey or a final report. But you may be surprised to learn that we have a department dedicated to understanding and evaluating your responses, and helping inform how we run our programs. To learn more about the Evaluation team and their work, we talked to Jackie Hallet, Evaluation Officer.

Let’s start with the basics: In a business context, what exactly is evaluation?
As far as textbook definitions go, it’s the systematic investigation of the merit of an intervention. In Mitacs’ case, this intervention is a program — more specifically, all our programs. Program evaluation is a form of applied social research that is commissioned to inform decision making.

At Mitacs, our Evaluation team focuses largely on program performance measurement, or in other words, how the programs are doing in relation to their intended outcomes.

Why does an organization like Mitacs need an Evaluation team and an evaluation strategy?
Our programs are designed to make positive changes to the Canadian innovation landscape, so we need to be able to measure and describe the changes that result from participation in our programs.

An evaluation team and an evaluation strategy monitor the performance of our programs, ensure that program delivery remains efficient and effective, and capture the immediate, intermediate, and long-term outcomes of our programs.

What are some common tools used by the Evaluation Team?
Currently, our most common tool for capturing program participant satisfaction and outcomes is the humble survey.

For immediate and intermediate outcomes, we survey program participants about three months after the completion of their individual projects. For long-term outcomes, we have a cycle of longitudinal surveys, which are distributed to groups of past participants every three years.

Past interns are my favourite group to hear from; it’s so inspiring to see where they’ve ended up since participating in a Mitacs program. Some start their own companies, others are hired by their partner organization… regardless, nearly all of them say their career prospects improved following their internship. That’s what I like to hear — especially in a tough economy!

Why are surveys like the ones above the best tools to use? What are some other tools typically used in evaluation?
Surveys are good for getting data from a wide variety (and a large number) of participants. They’re inexpensive, efficient, timely, and standardized to reduce bias. Other common data collection methods include focus groups, case studies, and key informant interviews. The last of these, key informant interviews, are qualitative, in-depth interviews with anywhere between 15 and 35 people selected for their first-hand knowledge of a particular topic.

In fact, the Evaluation team will be conducting some key informant interviews with past participants in the very near future. Although surveys are great for capturing a lot of data, sometimes the information gained from surveys demands further exploration.

In our upcoming study, we will be conducting one-on-one interviews with a select group of former Accelerate interns to determine what impact the program had, if any, on their decision to pursue entrepreneurship. Based on longitudinal survey data, we know that 14% of former Accelerate interns do engage in entrepreneurship, and it’s a growing area of interest for Mitacs and Canadian universities. So we want to know how we can best support entrepreneurs in Canada.

Thanks for taking the time to help us learn more. To wrap up, any words of advice for people or organizations looking to implement their own performance measurement strategies?
Logic models and performance measurement strategies should be developed at the birth of a program. A logic model is an excellent tool for providing a visual overview of the main components of a program, its expected effects, and the theoretical causal links between the program and its goals.

All this to say, performance measurement and program evaluation shouldn’t happen halfway through or at the end of a program — if you set up a framework for performance measurement at the very beginning, evaluating the outcomes will be a breeze.


Read some of the Mitacs Evaluation team’s work:

Media Contact

Heather Young
Director, Communications
Mitacs
hyoung@mitacs.ca
604-818-0020