
Evaluating public engagement activities

Evaluation (the systematic assessment of the worth or merit of an activity, project, programme or policy) can be:

- formative (for ongoing improvement); or

- summative (to document success/failure).  

There are many methodologies and approaches.

The prompts on this page will help you think about WHY and HOW you might want to evaluate your public engagement project or activity, and where to go for more detail.

(acknowledgement: this guidance is informed by the NCCPE Beginners Guide to Evaluation workshop)

If you would like to take a step-by-step approach to designing your evaluation, you can use our Evaluation Toolkit.


Benefits

A well designed and conducted evaluation will:

Purpose

What are you evaluating and why?  The type of evaluation you choose will depend on this, for example:

Planning your evaluation

You will normally develop your evaluation plan alongside your project/activity plan, as the two should be linked.

Your evaluation plan should summarise what, why and how you are going to do things. It doesn't need to be a long document and should include:

When thinking about how you are going to conduct your evaluation, consider:

Triangulation - combining different approaches to develop a deeper picture of the activity and to help reduce bias. This involves capturing different perspectives on your activity (e.g. from the participants, from the deliverer (you) and from a neutral observer such as a helper, colleague or evaluator) and using a variety of collection techniques.

Creating a baseline - this is important so that you can measure and evidence any change, e.g. to know whether people's knowledge or attitudes have changed you need to know where you started from. Where possible, build this into the engagement activity itself.
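As a minimal sketch of the idea, assuming you ask the same self-rated knowledge question (on a 1-5 scale, both the question and the scores below are invented) before and after your activity, the baseline is what lets you express the change:

```python
# Illustrative only: hypothetical pre/post scores on a 1-5 self-rated
# knowledge scale, collected from the same participants before and
# after the activity. The "pre" list is the baseline.
pre_scores = [2, 3, 1, 2, 3, 2, 4, 1]
post_scores = [4, 4, 3, 3, 4, 3, 4, 2]

# Mean change per participant; without the baseline (pre_scores)
# you could not evidence that any change took place.
changes = [post - pre for pre, post in zip(pre_scores, post_scores)]
mean_change = sum(changes) / len(changes)
print(f"Mean change in self-rated knowledge: {mean_change:+.1f} points")
```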

Quantitative and qualitative data - ideally you should be collecting a mixture of these (e.g. responses to factual questions plus responses to open questions), so that you can explore and understand what is happening in more depth.

Sampling - you don't need to evaluate everyone and everything, just a representative sample. A large sample takes longer to analyse and may not give you any more information. Quantitative data usually involves larger sample sizes (e.g. 40-60) and you should ask at least 100 people before expressing results as percentages. Qualitative data usually involves smaller sample sizes (e.g. 10-20) but provides more depth.
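To see why the 100-person rule of thumb for percentages matters, here is a small sketch (the sample sizes are arbitrary) of how much a single response shifts a percentage at different sample sizes:

```python
# Illustrative only: the percentage-point weight of one respondent
# at different sample sizes (figures arbitrary).
for n in (20, 40, 100, 200):
    one_person = 100 / n  # percentage points represented by one response
    print(f"n={n:>3}: one respondent = {one_person:.1f} percentage points")

# With n=20 a single answer swings the result by 5 percentage points,
# which is why percentages are best reserved for samples of 100+.
```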

Data Collection

When deciding how to collect your data, consider:

It is possible to be creative with your data collection: there are many techniques, each with different strengths and weaknesses, e.g. response cards, questionnaires, interviews, focus groups, graffiti walls, drawings, observation, video and images/photos.

Data analysis

This is often an iterative process, moving back and forth across three steps (see the sketch after the list):

1. Noticing and collecting (downloading/typing up/labelling/debriefing)

2. Sorting and thinking (listening/reading/processing quantitative data)

3. Critical analysis and interpretation (comparing, contrasting, exploring themes and patterns; describing and illustrating findings, e.g. tables, charts, text, quotes)
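As a minimal sketch of steps 2 and 3, assuming you have typed-up answers to an open question (the responses and theme keywords below are invented), a crude first pass at sorting responses into themes might look like this:

```python
# Illustrative only: a rough first pass at sorting open-question
# responses into themes by keyword (responses and themes invented).
responses = [
    "I learned a lot about the research",
    "Great fun, my kids loved it",
    "I'd like more time to ask questions",
    "Fun activity, learned something new",
]

themes = {
    "learning":  ["learned", "learning", "understand"],
    "enjoyment": ["fun", "loved", "enjoyed"],
    "format":    ["time", "questions", "venue"],
}

# Tally how many responses touch each theme; the same response can
# fall under more than one theme, which is worth noting when you
# describe and illustrate your findings.
counts = {theme: 0 for theme in themes}
for text in responses:
    lowered = text.lower()
    for theme, keywords in themes.items():
        if any(word in lowered for word in keywords):
            counts[theme] += 1

for theme, count in counts.items():
    print(f"{theme}: {count} of {len(responses)} responses")
```

A keyword tally like this is only a starting point for noticing patterns; the interpretive work of comparing and contrasting responses still has to be done by reading them.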

Remember to:

Making use of your evaluation

Spending time and energy on collecting data is pointless unless you use the information, learn from it and share it with others.

Consider who will be reading your report and tailor it accordingly. Remember to feed back findings to those who were involved (wherever possible), value their contribution and thank them. Ensure that the findings are acted on.

A formal report structure might include: summary, context of evaluation, aims, objectives and evaluation questions, description of activity/event, methodology, summary of evidence, overview of activity/event, conclusions and recommendations.
