Table 3 Summary of influences on evaluation practice

From: Exploring influences on evaluation practice: a case study of a national physical activity programme

Influence

Examples of how these can act as barriers or facilitators

Programme and project design

 Timescales

Lead in time, delivery and funding cycles influence opportunities for relationship building, recruitment, piloting methods and formative evaluation.

Scheduling and duration of delivery sessions influence resource availability and capacity for data collection.

 Participant demographics

Participant demographics influence recruitment and data collection, capacity for self-reporting, response rates, outcomes of interest, requirements for different outcome measures and the need for adaptations to data collection methods (which affects standardisation and generalisability).

 Settings

Location, facilities and resource availability influence recruitment, response rates and data collection.

 Implementation

Tailoring and adaptability in project and evaluation implementation can facilitate recruitment, participant engagement and response rates, but limit standardisation.

Evaluation design

 Standardised data collection

Facilitates consistency of reporting and comparability; however, use across diverse project contexts and participant groups limits generalisability.

Increases research-practice tensions and data collection burden, and affects response rates.

Choice of tools, appropriateness to participants, and ease or difficulty of implementation influence data collection and outcomes.

 Standard Evaluation Frameworks

Evaluation frameworks and guidance facilitate more consistent evaluation and reporting of required evaluation criteria and outcomes of interest.

Variability in how criteria are applied and reported can act as a barrier to generalisability and quality of data.

Limitations in the guidance included in the frameworks used can lead to variability in the quality of evaluation and the reporting of specific evaluation components.

 Use of non-required evaluation methods

Use of non-required evaluation components depends on the knowledge, experience and priorities of project stakeholders, e.g. the value placed on qualitative methods influenced whether they were included.

Limitations in the specified requirements for addressing objectives drive the inclusion of additional methods.

Limitations in guidance, understanding of methods and capacity to conduct qualitative research influence the quality of analysis and reporting.

Pilot and formative evaluation facilitates the development, testing and embedding of evaluation approaches and data collection systems; intermediate evaluation facilitates learning, adaptation and improvement. These are dependent on timescales and on regular reporting and feedback processes.

Adaptability and flexibility facilitate the ability to be responsive to needs, to improve participant and stakeholder engagement with evaluation processes, and to improve response rates and the quality of data collection.

Resources

 Staffing

Staff expertise, experience, capacity, buy-in for evaluation, and how roles and responsibilities are defined influence evaluation processes, project sustainability, knowledge management and dissemination.

 Funding level

Funding for evaluation, including staffing and partnership working, is a major influence on evaluation practice.

Differing levels of funding and the proportion allocated to evaluation, whether decisions about this are made at local or national level, and the timescales of funding cycles influence evaluation practices.

 Time

Time impacts the choice of evaluation methods, and the capacity for data collection and evaluation processes.

 Equipment/facilities

Influences project activities, recruitment, implementation, and data collection methods, including opportunities for use of innovative methods.

Partnerships

 Essential partners/roles and responsibilities

Defining the roles and responsibilities of delivery, funding and evaluation partners for evaluation processes is a key factor.

Capacity for evaluation and success of partnership working is dependent on costs, funding, resources, and the nature of the partnership.

 Stakeholder priorities, objectives and expectations

Differing partner priorities and expectations can lead to research-practice tensions.

Approaches are required to balance research objectives, policy priorities and the practicalities of what will work in the real world and within budget.

Strategies to manage expectations are needed.

 Expertise, experience, capacity

Prior experience, knowledge and training of stakeholders influence evaluation design, choice of methods, innovation and implementation.

Research-practice partnerships can improve evaluation through access to expertise, skills and experience, and access to additional resource for implementing evaluation and data collection.

 Relationships and Communication

Close relationships between partners are key.

Local partnerships increase opportunities to observe and understand local project needs and facilitate relationship building.

Available, approachable and adaptable partners enable open and trusting relationships, regular communication, and opportunities for stakeholders to challenge each other, learn from each other, find solutions and make decisions collaboratively.

Appropriate language facilitates relationship building (jargon busting).

 History of partnership, embeddedness

Continuity of relationships facilitates understanding of local project evaluation priorities and helps to embed processes, which can mitigate the effects of limited lead-in times, piloting and insight phases.

Arms-length or transactional relationships act as barriers.

Organisational structures, systems and processes

 Funding systems and requirements

Clearly defined, agreed and communicated funding requirements act as facilitators to evaluation and use of evidence.

Funding cycles and time scales for reporting and review can limit learning from evaluation, dissemination and project sustainability.

Understanding future commissioning needs facilitates evaluation planning and implementation to ensure practice-relevant evidence is collected.

 Staffing structures

Clearly defining roles and responsibilities of staff, volunteers and partners is vital to successful partnership working, project implementation and evaluation processes.

Key staff who have the capacity and/or responsibility for co-ordinating processes, relationships and practices can be essential to the success of a project and its evaluation. These may be embedded in the staff structure as an evaluation officer, or be an external partner who champions evaluation.

A highly mobile workforce and employment contracts linked to short funding cycles act as barriers to continuity of partnerships, relationships and organisational learning, but as a facilitator to inter-organisational learning.

 Systems for oversight, monitoring and communication

Information and support from funders are essential to guide project planning and to make use of feedback from intermediate monitoring and evaluation.

Service level agreements help to define and agree roles, responsibilities, objectives and outputs, but can limit adaptability and flexibility.

Steering groups (project boards or operational groups) enable sharing of good practice, open dialogue and support.

Regular meetings that include evaluation feedback facilitate the evaluation process. Challenges remain in ensuring that decisions are transferred between strategic and operational stakeholders, and that agreed actions are followed up.

 Processes for capacity building and knowledge exchange

Training to build capacity, knowledge and gain buy-in is essential, especially where data collection is dependent on delivery staff.

Workshops and networking opportunities facilitate knowledge exchange across projects, partners and wider audiences.

 Data management systems

Effective data management systems facilitate data collection and management, participant engagement and project implementation.

Developing, agreeing and embedding systems that meet the needs of practitioners and researchers is essential, but has implications for resources such as time, staffing and budgets.

System development and use need to consider implications for data security policies and practices, reliability, flexibility, integration with existing service delivery systems and needs, and standardisation to allow reporting and comparison between partners, projects and the programme.

 Wider external influences

Embedding the project and evaluation into existing service delivery offers opportunities for efficiencies, e.g. shared resources, staffing economies and use of existing infrastructure such as data management systems. Embedding in existing service delivery can also facilitate project sustainability.

Evolving policies, strategies, commissioning priorities and knowledge development interact to influence priorities for funding, project and evaluation objectives, reporting and dissemination, and the use made of evidence.

Multi-sectoral, multi-component projects or localised delivery and evaluation can lead to fragmentation of projects across organisations and locations, which can act as a barrier to standardised approaches to evaluation, knowledge exchange and use of evidence.

 Organisational culture and embeddedness of evaluation

Organisational culture and a history of evaluation and partnership working within organisations can increase opportunities for integrating evaluation and project design, improve the skills base, capacity and buy-in to evaluation processes and practices, and facilitate the embedding of evaluation.