Reflections for program evaluation

Dr Bridget Foley
Jul 12, 2021

I am currently in the final year of my PhD. My studies have been part of a large evaluation of a state-wide government program known as the Active Kids voucher program. In 2018, the NSW Government allocated $207 million across four years to reduce the cost of kids' registration in sport and active recreation programs. It is the most extensive sport and active recreation voucher program globally, and the first to have an evaluation embedded from the outset.

I have learned a lot about the voucher program's reach, children's physical activity levels, and the short-term effects of the voucher on kids' health and wellbeing. I have also enhanced my technical skills in working with large datasets and conducting high-quality research.

But not all governments will invest in a voucher program to reduce the cost of kids' sport and active recreation.

So I wanted to share some transferable knowledge I’ve gained during my candidature regarding program evaluation. I have four key reflections that I think can be applied to other large-scale evaluation programs.

Thanks to the International Society for Physical Activity and Health’s support, I have developed some fun illustrations to share these reflections.

My four key reflections are:

1. Logic gets you where you need to go

Program evaluation is similar to a sourdough recipe (and we have all been making lots of sourdough during COVID lockdowns): for best results, you should collect your ingredients, understand the steps required and follow each step to finish with the right outcome. A good recipe is developed from previous experiments; if you follow each step, you should succeed. A recipe can also help you understand whether you forgot the salt, how the environment influenced the crust and whether the bread rose to a good height. I consider an evaluation as important to program delivery as a recipe is to making bread.

If a program hasn’t been tried and tested before, we don’t have a recipe. Without a recipe, logic models are your best friend. A logic model is a simple document or diagram that lays out the steps required to achieve the desired outcomes. It is theoretical, yet just as useful as a recipe when it comes to evaluation. It helps unite a diverse, multidisciplinary team around the core of the program. I have revisited the program logic (simplified below) so often during my studies.

Returning to the logic model regularly has helped us validate our research protocol and methods, collaborate with partners, and focus on the activities and outputs that achieve positive outcomes for participants.
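For readers who like to see the structure made explicit, a logic model can be sketched as a simple chain from inputs through to outcomes. The sketch below is a generic illustration in Python with hypothetical component names, not the actual Active Kids program logic:

```python
# A minimal sketch of a program logic model as a plain data structure.
# The components below are illustrative examples only.
logic_model = {
    "inputs": ["government funding", "registration platform", "evaluation team"],
    "activities": ["distribute vouchers", "register children in programs"],
    "outputs": ["vouchers redeemed", "children registered"],
    "short_term_outcomes": ["reduced cost of participation"],
    "long_term_outcomes": ["increased physical activity", "improved wellbeing"],
}

def trace(model):
    """Print the causal chain from inputs to long-term outcomes."""
    for step in ["inputs", "activities", "outputs",
                 "short_term_outcomes", "long_term_outcomes"]:
        print(f"{step}: " + "; ".join(model[step]))

trace(logic_model)
```

The value of writing it down this plainly is that everyone on a multidisciplinary team can point to the same step when discussing whether an activity is actually producing its intended output.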

2. Be ready to adapt

Importantly, the evaluation of a large-scale, government-led voucher program is not simple or straightforward. The wider evaluation team has needed to be flexible, responsive and adaptable to change.

This is what it has looked like to me:

Yes, there is a lot going on in that road map, and new roads are being paved all the time. With a dynamic, large-scale program, the evaluation must be flexible and adaptive to ensure the best possible evidence is collected.

As a PhD student, my safe place is the data forest. Data has primarily come from a registration platform and a longitudinal online survey. We now have a few years of data provided by parents and carers of children in the program, which can inform program improvements… which takes me to point 3.

3. Communicate in multiple ways

The purpose of including an evaluation in a program is to allow evidence-based decisions to be made during and after program delivery. For example, our data show that more than half the population is involved in the program. We have found that reducing the cost of sports registration has the most positive impact on disadvantaged and marginalised populations; however, these populations are less likely to use a voucher. Communicating these messages to policy makers has been critical to guiding targeted approaches that improve the program's impact on population health.

We have used written reports, presentations, social media, coffee chats, meetings and media appearances to communicate the key messages emerging from the evaluation, all to inform evidence-based decisions within the sport and physical activity sector and beyond.

4. Involve the target group

It is absolutely critical to include the target group throughout all stages of a program to ensure it is tailored to their needs. I am particularly interested in improving the health and wellbeing of children and young people. This population can be considered tricky to engage for a variety of reasons, including informed consent; however, every effort should be made to include them. Children and young people have so much to add at each step of a program. Meaningfully engaging the target group throughout a program can enhance the acceptability, reach, effectiveness and overall quality of the work and its outcomes.

In conclusion, for the best program evaluation results: logic gets you where you need to go (the recipe), be ready to adapt the evaluation during program delivery, communicate in multiple ways, and involve the target group.


Dr Bridget Foley

PhD in Public Health. Physical activity advocate. Closing the gap between science and practice. Enjoys running, dancing, active travel, & board games.