Some time ago, I was invited to give a training on Monitoring and Evaluation (M&E) in Burundi. During the exercise I was reminded of why people tend to hate M&E so much, and usually leave it as the last thing to think about when planning and implementing a project or programme.
M&E helps you gather information on what your intervention or project is doing, how well it is performing and whether it is achieving its goals. M&E can also provide guidance on how to improve or change future intervention activities. Proper M&E is fast becoming an important accountability requirement from funding agencies and donors.
If your intervention aims to keep adolescent girls in school, for instance, Monitoring would be used to check progress throughout the intervention, and Evaluation to show how much impact your project had at the end.
But more often than not, M&E is seen as a burden, an administrative requirement or an afterthought. Even when people and organizations know M&E is valuable in theory, they have trouble giving it the time, effort and funding it deserves.
It’s past time we got better at M&E, so here’s my take on the 5 most common mistakes we make as project coordinators when it comes to M&E, and what to do instead. Let me know in the comments what other common ones I should add to this list! And for those of you not used to M&E terminology, I’ve added a glossary at the end.
The logframe lays out the logic of your project: how your activities translate into results and, ultimately, into impact.
Getting clear on the logic of your intervention, and how you will change the current state of affairs (the “before”, e.g. girls dropping out of school) into a better one (the “after”, e.g. girls staying in school and thriving), requires more than one brain. If you can, gather your team, stakeholders and others and discuss the logical steps to get from the before to the after, through your intervention. Then distill this logic into the logframe.
If you don’t have a team, try to explain your logframe to at least one other person, to test the coherence and clarity of your approach. If you can’t explain it properly, it’s not clear!
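To make this concrete, here is a rough sketch of how the girls-in-school example might be laid out along logframe lines, written as a simple Python structure just to show the shape of the thing. The indicators, values and assumptions are invented for illustration, not taken from any real project.

```python
# A minimal, hypothetical logframe for the girls-in-school example.
# Everything here (indicators, baseline, target) is illustrative only.
logframe = {
    "impact": {
        "statement": "Adolescent girls complete secondary school and thrive",
        "indicator": "% of enrolled girls completing the school year",
    },
    "outcome": {
        "statement": "Fewer girls drop out of school in the target district",
        "indicator": "dropout rate among enrolled girls",
        "baseline": 0.25,  # hypothetical 'before' value
        "target": 0.10,    # hypothetical 'after' value
    },
    "outputs": [
        {"statement": "Scholarships awarded on time", "indicator": "# of scholarships paid per term"},
        {"statement": "Mentoring sessions delivered", "indicator": "# of sessions per girl per term"},
    ],
    "activities": [
        "Identify at-risk girls together with schools and community leaders",
        "Disburse scholarships at the start of each term",
        "Recruit, train and match mentors",
    ],
    "assumptions": [
        "Schools remain open and accessible throughout the year",
    ],
}
```

Even written out this simply, the structure forces you to state how each activity is supposed to lead to the outputs, outcomes and impact above it, which is exactly the discussion you want to have with your team.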
If your intervention or programme is very complicated (for example, you are using educational, health and livelihoods interventions to improve the outcomes for teenage girls), then you’ll have a hard time fitting it into a logframe. You’ll probably feel like you’re trying to fit a square peg into a round hole.
Drop the logframe for a while and try using a theory of change approach instead (or another, similar methodology). This will allow you to get the “big picture” of how all the parts of your programme fit together and drive change. From there, you can “zoom in” and create separate logframes for specific components (e.g. educational, health) of your programme.
The logframe is a tool to help you think through the logic of your intervention. This logic will change and improve as you start your intervention, talk to more people, find mistakes in your approach, and so on. Thus, it’s valid to start with a first logframe and keep improving it as you move forward and learn.
Unfortunately, many donor organizations will require you to submit a logframe when applying for funding, and then to stick to that logframe if the funding is approved.
This is a classic. I get called in to carry out evaluations of projects at the end of a 3 to 5 year period. The organization wants to measure change, that is, how much they managed to improve things from the start of the project. Yet sometimes there is no “baseline”: there was never a measurement, before the intervention started, of how things stood (the “before”). So even if we measure how things are now (the “after”), the evaluation cannot quantify how much things improved.
For example, if your intervention aims to keep girls in school, but there was no baseline measurement of how many girls (number and percentage) were dropping out before the intervention started, there is no solid comparison point for what we measure at the end of the project.
There are some ways around it, like carrying out qualitative assessments (e.g. interviews and surveys) to ask people if and how the intervention helped. You could also find secondary data for the area in the past and extrapolate to get a baseline number, but these techniques introduce a lot of bias.
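To see why the baseline matters so much, here is a small sketch of the before-and-after comparison a baseline makes possible. All the figures are made up for the example.

```python
# Illustrative only: with a baseline you can report change in both absolute and
# relative terms; without it, the endline figure stands alone.
baseline_dropout_rate = 0.25  # hypothetical: 25% of girls dropped out before the project
endline_dropout_rate = 0.10   # hypothetical: 10% drop out at the end of the project

absolute_change = (baseline_dropout_rate - endline_dropout_rate) * 100
relative_change = (baseline_dropout_rate - endline_dropout_rate) / baseline_dropout_rate * 100

print(f"Dropout fell by {absolute_change:.0f} percentage points "
      f"(a {relative_change:.0f}% relative reduction).")
# Without the baseline, the 10% endline figure on its own tells you nothing
# about how much the intervention changed things.
```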
Collecting data can be a dreary and tedious task. Yet someone always has to do it.
I’ve been impressed more than once when speaking to frontline data collectors in different programmes about all the ideas they have on how to improve the data collection process. I’ve received brilliant suggestions on using prompts in software apps to ensure the right data always gets collected, and on organizing the work so that a larger area of a village is covered in a survey.
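To give a rough idea of what those in-app prompts can look like, here is a hypothetical sketch. The field names, ranges and messages are invented for the girls-in-school example and are not taken from any real data collection tool.

```python
# Hypothetical example of prompt-driven validation in a data collection app:
# incomplete or implausible records trigger prompts instead of being saved silently.
def validate_record(record: dict) -> list[str]:
    """Return prompts for the data collector; an empty list means the record is complete."""
    prompts = []
    if not record.get("village"):
        prompts.append("Please record the village name before saving.")
    age = record.get("girl_age")
    if age is None or not 10 <= age <= 19:
        prompts.append("Age should be between 10 and 19; please re-check with the respondent.")
    if record.get("enrolled") is None:
        prompts.append("Enrolment status (yes/no) is missing.")
    return prompts

# An incomplete record (implausible age, no enrolment status) triggers two prompts.
print(validate_record({"village": "Example Village", "girl_age": 23}))
```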
However, most of the time, data collectors don’t get asked about the data collection process. Often, no one even explains to them why the task is so important, and they are never shown the results and analyses of the data they spend so much time and effort collecting.
Data collectors are your allies. The quality of your data depends on them doing a great job, consistently, throughout the intervention. This is tough. Make sure you bring them into the discussion from the start, make them feel ownership of the data collection process, and help them understand why their job is so critical. Share results back with them; you’ll be surprised at the level of insight they can provide.
Baseline - the current status of services and outcome-related measures before an intervention, against which progress can be assessed and compared.
Evaluation - the episodic, reflective assessment of a project, programme or organisation, conducted either at set stages or at the end. It uses both externally and internally collected data.
Intervention - a specific activity or set of activities intended to bring about change in a targeted population.
Logframe - short for Logical Framework, a management tool used to improve the design of interventions. It identifies strategic elements such as inputs, activities, outputs, outcomes and impact, along with the causal relationships between them, their indicators, and the assumptions and risks that may influence success or failure. It therefore facilitates the planning, execution, monitoring and evaluation of an intervention.
Monitoring - the ongoing collection of data to track the progress of a project, programme or organisation.
Primary Data - data observed or collected directly from first-hand experience.
Programme - multiple projects managed and delivered as a single package.
Project - a specific set of operations with a defined start and end, designed to accomplish a specific goal.
Qualitative Data - data measured in the form of words rather than numbers; it can be analysed by identifying common and diverging themes, patterns and ideas.
Quantitative Data - data measured on a numerical scale; it can be analysed using statistical methods and displayed in tables, charts and graphs.
Secondary Data - research data that has been previously gathered and can be accessed and reused by other researchers.
Theory of Change - explains how a set of early and intermediate outcomes sets the stage for producing long-range results. It articulates the assumptions about the process through which change will occur, and specifies how the early and intermediate outcomes required for the desired long-term change will be brought about and documented as they occur.