On monitoring and evaluation in BEESPOKE

Last changed: 13 December 2022

We see monitoring and evaluation in BEESPOKE as the systematic collection, analysis and reporting of data on activities, processes, outputs and outcomes. The reporting involves statements, judgements and conclusions on what has been done, how it has been experienced, and on potential development paths and improvements. Ideally, monitoring and evaluation positively affect current and planned activities within BEESPOKE, but also future decision-making on implementation strategies.

WP6 aims to support the BEESPOKE project and its partners in implementing tools for monitoring and evaluation. It is a collaborative effort. Together we can track what is actually done and how well we achieve it, demonstrate how our approaches have contributed to targeted gains, and better understand barriers and needs in order to improve future activities.

A multi-methodological approach

Collecting data in order to monitor and evaluate social processes often requires a combination of different tools and methods; no single method can fully capture the complexity. Furthermore, we must apply a flexible framework, adapted to the specific needs and preconditions of each unique context.

In our toolbox we have many different methods, from surveys and document studies to participant observation, interviews and focus groups. We will be pragmatic in the sense of suggesting methods that are both desirable and feasible. Thus, we will not suggest a “one size fits all” solution. What is important is that we monitor and evaluate continuously, collect data from many activities and actors, and maintain a dialogue within BEESPOKE on how the data should be interpreted. This will be an ongoing process, although at a rather low intensity.

Focus on specific practices with universal questions

Within BEESPOKE we have many different activities to monitor and evaluate: monitoring of pollinators, the use of checklists, farmer workshops and training, implementation of new farm management on demo farms, external communication, policy workshops, and more. All of them involve social interaction and/or learning. The evaluation methods need to suit the data needs (the desirable), but they also need to be practical and fit within the available resources (the feasible). Where possible, the methods should be included as a normal part of project activities, not just an add-on.

At the same time we need to be able to draw some general conclusions on success factors for implementing measures that support pollination in the agricultural landscape. In our WP this challenge will be met by asking similar questions across different activities in order to find common answers.

Some general questions we will raise are, for example:

  • What is being done to implement new measures or influence change?
  • With whom and where do these activities take place?
  • What practices are changing and in which field of work?
  • What impact do the new measures have on performance?
  • What benefits are being achieved by the activities?
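
To illustrate the idea of asking the same questions across different activities, the sketch below shows one way such answers could be recorded in a uniform structure so that they can be compared and aggregated later. The field names and the example entry are hypothetical illustrations, not part of the BEESPOKE monitoring framework.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical record structure (illustrative only): one entry per monitored
# activity, with fields mirroring the shared questions above so that answers
# from different activities can be compared and aggregated.
@dataclass
class ActivityRecord:
    activity: str                 # what is being done
    participants: str             # with whom the activity takes place
    location: str                 # where the activity takes place
    changed_practices: List[str]  # which practices are changing
    observed_impact: str          # impact of the new measures on performance
    benefits: List[str]           # benefits achieved by the activity

# Invented example entry, for demonstration only.
example = ActivityRecord(
    activity="Farmer workshop on pollinator-friendly measures",
    participants="Farmers and local advisors",
    location="Demo farm",
    changed_practices=["field margin management"],
    observed_impact="Increased interest in sowing flower strips",
    benefits=["knowledge exchange", "commitment to trial new measures"],
)

print(example)
```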

Evaluation methods

After an activity has been carried out, how do you know whether or not it was successful? How do you know if the activity succeeded in promoting the pollinator-friendly measures that were your message? Have people’s attitudes, knowledge or behaviours changed as a result of your communication efforts? Have there been on-the-ground impacts? Once your activities have been planned and set in motion, you need to determine whether they are successful and make changes if they are not. You will need to ask the following questions:

  • Is the activity effective?
  • What are its impacts?
  • Can the activity be improved?
  • Is the activity cost effective?
  • Should the activity be continued or modified?

Evaluation is the key to answering these questions and to providing feedback for improving activities. Ideally, evaluation should be conducted from the beginning to the end of an activity. Time spent planning the evaluation at the same time as planning the activities is time well spent, and evaluation is a critical component of any successful activity. To start with, information collected during planning can often serve as baseline data for comparison with the evaluative results later on.
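
As a simple illustration of using planning-stage information as a baseline, the sketch below compares hypothetical survey scores collected before and after an activity. The scores and the scale are invented for demonstration and do not represent BEESPOKE data.

```python
# Hypothetical baseline vs. follow-up scores (e.g. self-rated knowledge of
# pollinator-friendly measures on a 1-5 scale) for the same respondents.
baseline = [2, 3, 2, 4, 3, 2]
follow_up = [3, 4, 3, 4, 4, 3]

mean_baseline = sum(baseline) / len(baseline)
mean_follow_up = sum(follow_up) / len(follow_up)
change = mean_follow_up - mean_baseline

print(f"Mean baseline score:  {mean_baseline:.2f}")
print(f"Mean follow-up score: {mean_follow_up:.2f}")
print(f"Average change:       {change:+.2f}")
```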

Key reasons for conducting an evaluation:

- Measure achievement of activity objectives
- Assess secondary outcomes and unanticipated impacts
- Identify strengths and weaknesses in the activity
- Analyse the activity from a cost-benefit perspective
- Improve activity effectiveness
- Collect evidence to promote future activities
- Share experience and lessons learned with similar activities


Contact