ELF Background

Evaluation and Learning Background

First, a caution: the examples in this background document are meant to clarify concepts, not to prescribe what you do in your strategy. Use them only if they actually apply to your situation. Also, the examples refer to single events or services, since that is where you’ll need to start. Describing a detailed evaluation and learning strategy for your whole Big Picture Strategy is something you may want to do once you’ve advanced beyond thinking about single projects or activities.

Does evaluation only mean quantifying what we did and what we achieved?

Over the years, evaluation practice has evolved. Early evaluators were preoccupied with the scientific method, especially quantitative information to prove that a given program would produce a given result. Large-scale evaluations, where large amounts of money and effort are expended (e.g. national and provincial government programs), still tend to emphasize that method as a way to prove that public funds for a particular program achieve the intended results for citizens. Even for those purposes, however, evaluation practice is evolving.

As we increasingly recognize that decision makers face more and more complex problems, we also recognize that they can’t rely solely on quantitative methods assessed by outside experts. So different types of evaluation methods are being used. The growing expectation of citizen and other stakeholder engagement also means that evaluations are increasingly planned and conducted using participative methods.

Why evaluate?

You may be required to evaluate as a condition of receiving funding (for accountability). Your funder may want to know about outcomes, or may be satisfied with understanding the effort you’ve put in.

But you might also want to evaluate for your own purposes. Maybe you want to learn what works, so you can adapt your strategy over time. Or you want to get better and better at doing whatever you do, improving efficiency so you can make your resources go further. We’ve titled this ‘evaluation and learning’ because ACMHI is an innovation: learning how to get better, and learning which strategies work better than others, is important.

Some people want the information they get from evaluations to make their case for resources, either by writing proposals to get funding for their activities or by advocating for others to take action.

If evaluation is to be your friend, you also need to think about what your needs are. What decisions will you want to make? For example, you might want to know whether students are interested in engaging with a particular initiative, or with a particular way of delivering it. If not, you can change the initiative, or the time, place, or way you implement it. Or you may want to make sure your SA’s mental health strategy builds momentum and impact from year to year, so your evaluation requirements include informing your successors.

If you are a student leader who takes a big-picture governance approach and has staff who do the detailed planning and implementing, you will want to think about what evaluative information you want them to produce. You might want to be able to understand how the Big Picture Strategy needs to change, or to understand more about what resources are really required to pull off different initiatives. You can’t evaluate everything, so you will need to consider what resources different levels of evaluation require. You probably don’t want to spend more on evaluation than you do on providing services!

There’s more than one answer to the question ‘How to evaluate?’

Sometimes you don’t need formal evaluation studies for your purposes: you just watch how many people show up to your events and, on that basis alone, decide to keep them the same or change them. Or you pay attention to how frustrating the planning and delivery of an initiative is, and decide that you could reduce the frustration and wasted resources with a different planning process, or by writing down the usual steps so you don’t have to remember them every time. The informality of those processes doesn’t change the fact that you are evaluating, or learning what works and what doesn’t. However, the informality might make it harder to make your case to a funder for continuing the service. If that’s what you want to be able to do, you might want to keep some records of your observations and have a more formal analysis and reporting process.

The process of recording need not be a lot of effort. For example, you could gather your team for a 10-minute debrief at the end of an event and ask, “What would we Change, Drop, Add, and Keep?” Everyone puts their thoughts on sticky notes (one thought per note) and posts them on a flip chart with those four columns. Take a photo and put it in the file. Next time you’re planning that type of initiative, you can check the file to remind yourself. Or every six months you can hold a “Learning Circle” where you and your team look at all the field reviews and identify common patterns you could improve through training, documented processes, or other strategies.

You may want to use a participative approach for at least some of the steps. People are more likely to buy into the conclusions and recommendations if they’ve been part of the process, but it does take more effort and planning. One important time to use participative approaches is when you’re trying to build collaboration across many campus departments and community agencies. Having your partners involved in the planning, and then in the analysis or sense-making stages, helps everyone understand and own the changes you might make as a result.

Sense-making is the process of collectively bringing everyone’s perspectives to the table to ask, “What does this information mean, and what does it indicate we should do?” Having a group with different perspectives makes that assessment much more robust, but it does require more careful facilitation and openness to differing opinions.

ACMHI Legacy Tools don’t assume that there’s one right way to get the information that’s helpful to your purposes. Small campuses won’t use the same ways of evaluating and learning as large campuses, and may have creative options that work with small groups.

ACMHI Evaluation and Learning covers both individual campuses and the collective parts of the overall strategy.

The ACMHI project had responsibilities to the funder for evaluative information, just as providing such information will likely be a condition of any funding you receive.

Collective information means information that covers a number of campuses. It helps you participate in collective ventures, and it can create an overall culture that influences all the members.

For collective evaluation, information can come from many sources.

Some of it will come from reports of individual campus initiatives. At the collective level, though, you will also want to learn how to improve collective actions, as well as report on their effectiveness. This supports many purposes:

  • Sharing what works, so that other member campuses with similar characteristics can build on it (this is sometimes called scaling out: a service with positive impact is provided to more students in more locations).
  • Understanding how to modify collective actions. For example, the ACMHI Model Application process and logistics were modified from time to time as ACMHI learned from the experience gained in its first three years.
  • Understanding the range and portfolio of actions that improve post-secondary students’ mental health and reduce stigma. This helps form a common advocacy agenda (provincial ministers are interested in the collective impact, not in many individual campuses).
  • Being able to demonstrate how and why initiatives led by student leaders are an important contribution to the overall strategy for PSE mental health and stigma reduction.

What information do you want?

What information do you need to make the decisions you want to make? This may seem like a strange place to start when you already know what initiatives you’re putting in place. Isn’t it just about figuring out what to measure? That’s certainly where people often start, but it can set you off in a direction that isn’t really helpful in the end.

You may not be able to answer this question right away. If that’s the case, start with where you are: what actions are you intending to take? Then unpack your thinking. From there you can go in several directions:

  1. Ask yourself a question like, “Why do I think that will be useful to do?” And when you have an answer, ask, “Why do I think that will be valuable or helpful?” Ask “Why?” of each answer you get, and repeat that several more times. At each step, you’ll end up with one piece of the chain of outcomes that you expect to result from your actions. Write out the chain of expected outcomes on a flip chart with arrows between each stage. You’ll find that, even if you didn’t start by thinking about the outcomes you expect to create, at some level you’re making assumptions you haven’t questioned. It helps to see those outside yourself; then you can ask whether your expected chain of events seems reasonable. If not, you may want to adjust your planned action so the expected outcome is more believable.
  2. Ask yourself, “Who will likely be interested and willing to participate in my event, service or activity?” Be realistic about the types and numbers of students you will actually connect with. Why might you want to know that? If your goal is to influence all the students in the student body, then the information you need relates to understanding which students access which events and services under what circumstances. Perhaps your events or resources are actually most likely to appeal to women, to men, to students under 30, to Canadian-born students, and so on, and you’ll want to know that. So you might want to gather information about the characteristics of the students who participate. You may want to go further and make sure you know which ones say they found the event or service helpful and which ones say it wasn’t.
  3. Ask yourself, “How does the service or activity run?” If the decisions you want to make are about improving the operation of a service or activity, you’ll gather information that helps you answer these kinds of questions:
    • What is required to deliver the service or activity? (e.g. staff time, funds for swag or food, permission to use the space)
    • Do those delivering the service need basic training? As they gain experience, do they want more training, or see other opportunities to improve the service if their role were expanded?
    • What do students actually need to do to access the service or activity? (e.g. if they have to overcome a fear of being stigmatized, you may benefit from adding something that helps them overcome that fear until they are satisfied that they’ll be safe.)
    • What do students (or faculty or staff) consider to be strengths of the service or activity?
    • What are typical complaints?
    • On what basis will you decide that the services or events are no longer needed, or are no longer effective in their current form and need to be replaced or changed?
  4. Sometimes the decisions you need to make, and the information you need, depend on the stage of development you’re at. For example, in the earliest stages of an innovation you’ll mostly want to understand whether your delivery system works (because you’ve made assumptions that the intervention will have the desired outcome, and it’s better to treat your first attempts as tests or prototypes). In the early stages there are a lot of surprises that you need to just respond to. Evaluating and learning as you go helps you make decisions on how to do it better and more easily the next time around. Sometimes you can also look at whether long-term implementation of the new initiative is possible given underlying administrative or policy realities. If your new initiative conflicts with policy, it won’t survive over the long term. In that case you’ll need to consider advocating for a policy change, which can take time. Once you’ve got a more stable delivery process, you can then do evaluations to make sure you’re getting the outcomes you want and expect, and to identify the unanticipated outcomes you’re creating (positive or negative).

There’s more than one kind of evaluation – look for creative, fun ways to get the information you want

Remember that there are many types of evaluation (some have estimated more than 40!), including needs assessments, accreditation, cost/benefit analysis, effectiveness, efficiency, formative, summative, goal-based, process, outcomes, etc. The type of evaluation and learning you do depends on what you want to learn about your strategy. Don’t worry about what type of evaluation you need or are doing; worry about what you need to know to make your decisions in an informed way, and about how you can do a good job of collecting and understanding that information.

Ask yourself: can I get double duty from my way of gathering information? Can I get the information in a way that also engages other students in conversations about mental health and mental illness, or that reduces the secrecy around talking openly about mental illness? You might consider creative options. Could you hold a painting party and ask people to work in groups to draw images that describe their experience, then post the images so others can see that there’s more than one way to respond to a situation? One campus set out five jars, each labeled with a different type of stressor, and gave participants three marbles to put in the appropriate jar(s). People could put a marble in each of three jars, or put all three marbles in one jar to indicate that that type of stressor was really important. Students walking by could see that others have the same kinds of stressors as they do. Perhaps you could use the jars and marbles as a way for students to indicate which types of stress-reduction actions they find most helpful?

For your reporting, you could take a photo of the images or the jars. If you do this at the beginning of the year and again at the end, it will help you show whether or not anything changed.

What to measure and how?

So, to the details. Perhaps this is where you wanted to start!

One way to look at your action or service is to understand its different components. Any action or service can be described in the following way:

First, briefly describe the action or service. This should include the type of student it’s aimed at (which may be ‘all’), how frequently it’s provided, and where it’s provided. Then ask yourself:

  • What are Inputs (e.g. time, funding, materials, physical space, expertise)?
  • What are Activities (e.g. providing brochures, holding workshops, providing counseling sessions)?
  • What are Outputs (e.g. numbers of people participating, numbers of events)? These are measures of effort, and they used to be all that anyone gathered. Think numbers of hospital beds, numbers of procedures, and numbers of students attending a particular class. Increasingly, funders (and citizens) aren’t satisfied with measures of effort alone when deciding whether public funds are used well.
  • What are the expected Outcomes? These are the changes in a person or other entity that come about as a result of the activities. Figuring out what you want to measure, and what you can reasonably measure, requires you to think through the chain of outcomes described earlier.

A basic way to describe these is as a ‘logic chain’:

Inputs → Activities → Outputs → Outcomes

Here’s an example for a single initiative or service:

  • Inputs: Time, funding, design and print costs, costs for frames and mounts for bathroom stall doors.
  • Activities: Posting information posters on bathroom stall doors that tell students what resources are available to them for counseling or SA events, along with some introductory information about mental illness and mental well-being and how to identify warning signs.
  • Outputs: Time spent on design and producing print-ready copy, materials used, the number of maintenance people needed to install the frames, and the time of the people who refresh the posters weekly. Sometimes the number of students who would be exposed to the information counts as an output.
  • Outcomes: Think about the chain of outcomes you want to create:
    • First, a student needs to read and understand the information, and find it relevant.
    • Second, if they’ve understood it and found it relevant, the student will know where to find the resources they need, e.g. counseling, SA information, or workshops.
    • Third, if they know where counseling is located and how to make an appointment, the student will access services when they need them.
    • Fourth, if the student participates in counseling, they will gain coping strategies, perhaps be referred to a physician for medication, and so on.
    • If all those outcomes are achieved, you could claim an overall outcome: the student has the supports required to successfully manage their mental illness. Perhaps further outcomes include academic performance: the student is able to maintain the GPA necessary to stay enrolled in classes, or doesn’t drop out.

Once you’re satisfied that this is your assumed causal chain, you can look at each link and figure out what you can measure, and how, to assess whether the outcome has been achieved. With outcomes, you’ll usually have to focus on the first steps of the chain, since those are what will change in the short run. The other components (inputs, activities, and outputs) you should be able to count or describe directly.

Outcomes for categories of actions

In the Maturity Model tool, you’ll see different categories of actions. One categorization is into quadrants:

  • Activities focused on interventions to influence individuals
  • Activities focused on interventions to influence groups
  • Activities focused on your SA’s organizational capacity
  • Activities focused on advancing the partnerships and ‘ecosystem’ of organizations that share the common aim of a mentally healthier campus.

You might think about the overall outcome of each quadrant, and use that for all the different activities that fit within it. Some quadrants also have subsections, and these might have particular outcomes that you want to stress.

Remember that you can gather data in a variety of ways; quantitative data is only one of them. If one of your outcomes is that the campus community values initiatives led by student leaders, then think about interviewing people who are in a position to observe what’s happening on your campus. Think about asking people to draw sketches, write poems, or take photos or videos that capture their experiences.

Or you may want to interview students and ask whether they feel comfortable talking about their stress or about the actions they’re taking to improve their mental well-being. Or you could interview faculty or staff and ask whether they see students being open about mental health issues. This may differ from program to program, so you’ll also want to track which programs your interviewees are referring to.

Common use of information

Don’t think you have to do it all yourself. Sometimes you might have to work with others on campus to access their information (e.g. the number of students who drop out for reasons of stress). Sometimes those departments will want your information on the services you’ve provided, to see whether a change in academic outcomes is related to your services or activities.

Contributing to collective evaluation activities

The ACMHI initiative, for example, said from the outset that it was aiming to improve students’ mental well-being, improve access to mental illness services, and reduce stigma. If this is one of your stated impacts, one of the outcomes you’ll want to track is whether the degree of stigma on campus is decreasing. There may be a number of types of data you’d collect to decide that. One would be to make sure you have enough students participating in a common survey to be able to draw conclusions about your campus.
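As a rough illustration of what ‘enough students’ can mean (the numbers here are standard survey arithmetic, not an ACMHI requirement): on a campus of about 5,000 students, a random sample of roughly 360 respondents gives a margin of error of about ±5% at a 95% confidence level. That comes from the common sample-size formula n = z²p(1−p)/e² (with z = 1.96, p = 0.5, and e = 0.05, giving about 385), adjusted downward for the finite population of the campus. Whoever administers the common survey can usually tell you the target number for a campus of your size.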

When to gather data and when to analyze and make sense of it?

You’ll want to gather information on what you’ve done and on the outputs as you go. If you can build your way of assessing outcomes into the events or activities themselves, that makes your job of reaching respondents easier. Others have created small cards with four quick questions and check-off choices that students could complete while participating in an event. Perhaps you have student peer supporters who can ask students more detailed questions in the course of their conversations.

If you want to know whether a measure is increasing or decreasing, you’ll need to ask the same questions at the beginning and end of the term.  If you want to see changes over a longer time, you’ll need to have access to previous years’ information (and leave yours in a form that future student leaders can use).

Whatever your method, you’ll need ways of gathering and keeping all the slips of paper, taking photos of the height of the marbles in the jars, and so on. You’ll also need an organized way of compiling all the information so you have it ready to look at during your learning circle or analysis session.

You need to report at the end of the term, so you’ll have to do the analysis and sense-making at least for those reports. If you can make a regular habit of calling a learning circle, even for the last 10 minutes of a regular meeting, that will help create the culture of inquiry and adaptation that you want.

In summary

Do the best you can and be creative – it helps if contributing feedback is enjoyable for students, and analyzing it and using it is fun for you. Smaller campuses will want to do the evaluation and learning in ways that don’t take much effort. Enjoy learning about what works on your campus and sharing your story!

And don’t forget your partners. Using common evaluation tools, surveys, and learning processes makes things easier for everyone, and you’ll find you work together better if you have some information in common, as well as the unique information that helps you weave together different perspectives on your campus’s mental health.

References

There are many, many references on evaluation. Probably the most comprehensive and practical resource is the Better Evaluation website.