
Evaluating Our Impact: InFocus 2014–2017

After creating a clear Theory of Change that set out how we thought events and volunteering would lead to a core set of social outcomes, we set ourselves the challenge of measuring those outcomes across our grant funding. InFocus were our independent learning partner across an ambitious programme of process, formative and summative evaluations over three years.
The evaluation asked:
1. To what extent is Spirit working with its intended target population of isolated or disengaged people via its current grantee portfolio?
2. How successful are the Spirit-funded projects at delivering outcomes in the areas of wellbeing, disability and social cohesion for the target populations with which they work?
3. What kind of funder is Spirit perceived to be, internally by Spirit staff and externally by grantees and other key stakeholders?

What did we discover about our impact?
- InFocus found that our projects were targeting people with lower-than-average levels of wellbeing. There was an average 7.9 percentage point increase from baseline to endline in the number of participants or volunteers who reported high levels of wellbeing. The evaluation also found that projects focused on sustained volunteer or participant engagement were more likely to demonstrate greater improvements in wellbeing than projects that engaged volunteers or participants in a one-off activity.
- The evaluation found that all but one of the Spirit-funded projects that measured participants’ pride in their communities reported an increase between baseline and endline. Spirit-funded projects also reported increases in the extent to which young people felt they had a voice in their communities. We were successfully meeting our mission to continue and recreate the spirit of pride, positivity and social connectedness that people experienced during London 2012.
- In-depth case studies began to give us a real picture of the mechanisms leading to positive change for participants and volunteers, and a better idea of what quality practice looked like.
- All Spirit grantees interviewed in the Spirit case study saw Spirit as a partner rather than a funder. Grantees unanimously agreed that Spirit is approachable, and said that they had relationships not only with their Grant and Learning Manager but also with other Spirit staff. InFocus advised that these values should not be lost as Spirit developed.
How did this affect our approach?
- There was real richness in collecting subjective wellbeing data using the ONS4 questions, and (with some caveats) these were the easiest to use and aggregate across multiple grants. Collecting only the proportion of participants who were ‘happy’ (those scoring 7/10 or above) was not telling us enough, however, so we started asking for average baseline and endline wellbeing scores as well (see the sketch after this list).
- 18 of the 36 projects we funded during this period submitted both baseline and endline data across a range of thematic areas, including wellbeing, inspiring events, engaging volunteers and connecting generations. Not all the indicators had worked, and there was a lot of variation in quality. Whilst interviews with grantees suggested they found this approach rewarding, it was also challenging, and many needed more support. We decided to bring in specialist evaluation capacity-building training through the Behavioural Insights Team, working with grantees through a mix of learning events and bespoke support.
- We needed better ways of understanding our impact on changing perceptions of disability. The standardised questions based on Understanding Society were not a good fit for the changes we wanted to measure and did not sufficiently foreground the opinions of disabled people themselves. We would continue to explore better ways of doing this throughout our remaining timeframe.
- We needed to think more about what sorts of changes we should expect in people’s lives through our interventions, and how long we predicted they would last. This led to a big focus for the next three years on capturing change from regular, high-quality participation (perhaps at the expense, in this period, of thinking through what one-off event data collection could and should look like).
- While we were committed to being open about what wasn’t working as well as what was, this was sometimes harder to do in practice: it often depended on grant holders feeling able to talk publicly about their programmes, which carried more risk for them than it did for us.
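To illustrate the point about proportions versus averages, here is a minimal sketch in Python of the two aggregations described above. It assumes ONS4-style responses scored 0–10; the response data and function names are invented for illustration and are not Spirit data.

```python
# Minimal sketch: two ways of aggregating ONS4-style wellbeing scores (0-10).
# The response data below is invented for illustration; it is NOT Spirit data.

def high_wellbeing_share(scores, threshold=7):
    """Proportion of respondents scoring at or above the threshold."""
    return sum(s >= threshold for s in scores) / len(scores)

def mean_score(scores):
    """Average score across all respondents."""
    return sum(scores) / len(scores)

# Hypothetical baseline and endline responses from the same eight participants.
baseline = [2, 4, 5, 6, 7, 8, 8, 9]
endline = [5, 6, 6, 6, 7, 8, 9, 9]

# The threshold measure does not move at all here: participants who started
# well below 7/10 can improve substantially without ever crossing the line.
print(f"High wellbeing: {high_wellbeing_share(baseline):.0%} -> "
      f"{high_wellbeing_share(endline):.0%}")   # 50% -> 50%

# The average captures that movement, which is why asking for mean baseline
# and endline scores alongside the proportions tells a fuller story.
print(f"Mean score: {mean_score(baseline):.1f} -> {mean_score(endline):.1f}")
# 6.1 -> 7.0
```

In this hypothetical, the share scoring 7/10 or above stays flat while the mean rises, which is exactly the kind of improvement among the least-well participants that the proportion alone was missing.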