A caveat before this blog post begins: I am not a Monitoring and Evaluation (M+E) expert. In my opinion, there is a distinct art to high-quality M+E. This post serves a dual purpose for me: (1) to articulate the strengths of ‘decent’ M+E, and (2) to spread the word on how to recognise and promote the art of M+E.
This post should be helpful for me, for scholars teaching M+E, for organisations doing or commissioning M+E, and for students contemplating it as a career or as an independent course of study. The definition of M+E according to Guijt et al. in 1998 (yes, this is a decades-old term) is:
Monitoring progress and evaluating impacts have long been considered important to ensure that money is well spent and that objectives are met. Beside the conventional focus on being accountable to funding agencies, organisations are increasingly using monitoring and evaluation for internal learning and to improve their work.

Guijt, Irene, Mae Arevalo, and Kiko Saladores. “Participatory monitoring and evaluation.” PLA Notes 31 (1998): 28.
In a recent project, I’ve come across a tonne of fantastic, free-to-access resources on how to design M+E, e.g.:
Common mistakes I’ve made and seen recently centre on poor initial scoping of a project and of what is being monitored or evaluated, i.e. the formation of aims or the construction of the units of analysis. I am prone to creating broad (and not SMART) aims or units, which overcomplicates a project and reduces its effectiveness.
To counter this mistake and strengthen my own thinking, I’ve employed the academic/practitioner technique of ‘member checking’, which essentially involves more than one person in the decision-making around a project’s design. In practice, I’ve applied ‘member checking’ either by attaching an International Advisory Group to a project or by building a series of ‘check and challenge’ moments into the life course of the M+E project. Although this sounds simple and obvious, it does help to avoid common mistakes.
Finally, on to great examples: as ever, I am always collecting documents. Beyond the four I included above, here are several M+E examples I’ve been using as benchmarks over the past 12 months or so:
- sportscotland e.g. Women and Girls Fund Evaluation – https://sportscotland.org.uk/about-us/our-publications/archive/women-and-girls-fund-evaluation/
- sportanddev.org shared files, e.g. Sport-in-Development: A Monitoring and Evaluation Manual – https://www.sportanddev.org/en/article/publication/sport-development-monitoring-and-evaluation-manual
- Higher Education Institutions, e.g. the University of Salford, Manchester, and Evaluating sport and physical activity intervention: a guide for practitioners – https://usir.salford.ac.uk/id/eprint/3148/1/Dugdill_and_Stratton_2007.pdf
Thank you to the organisations for making these documents and tools publicly accessible, and for commissioning excellent experts to do great work. As I said earlier, M+E done to a ‘decent’ standard is an art…