Monitoring and Evaluating

Ongoing evaluation and monitoring of your evidence-based program are essential because they provide information to the staff implementing the program as well as to stakeholders in the community. Using both qualitative and quantitative data on a continual basis to evaluate and monitor your program can allow you to

  • see how the program addresses your outcome(s) of interest;
  • understand whether your program has been implemented with a reasonable level of fidelity;
  • identify areas for improvement, training, or adaptation;
  • justify to stakeholders and funders that the program is effective; and
  • determine whether you may want to expand, cut, or abandon the program.

Types of Evaluations

In evaluating and monitoring your evidence-based program, it is important to consider both the process you followed, including whether you did what you intended to do (a process evaluation), and the outcomes you achieved (an outcome evaluation).

Process Evaluations
Knowing the process you took when implementing an evidence-based program is important for understanding how your actions are connected to the outcomes you achieve. Process evaluations, which assess your fidelity to the program's implementation, allow you to understand whether the results you found were produced by the evidence-based program you implemented or by other variables. Process evaluations can also help you identify your level of implementation and areas where you may need additional training, resources, or support.

Outcome Evaluations
Outcome evaluations allow you to understand what happened as a result of implementing the evidence-based program. If an experimental design was used, they can tell you whether there was an improvement in your outcome(s) of interest for your participants or whether the program had no effect or negative effects.

Choosing an Evaluator

It is important for evaluators to be knowledgeable about research and statistics. An evaluator may be internal to your organization or external to and independent of it. There are advantages and disadvantages to both. Factors that may influence whom you choose to evaluate your program include

  • cost,
  • availability/flexibility,
  • knowledge of the program, its operations, and its context,
  • potential for utilization of results,
  • specialist skills and expertise,
  • objectivity or perceived objectivity,
  • autonomy, and
  • ethical issues.1

It is important to consider the benefits and drawbacks of an external versus an internal evaluator when making a decision. Alternatively, have someone who is neutral make the choice.

Data Collection

Evidence-based programs often come with prescribed data collection and analysis procedures. Different types of data will allow you to answer different types of questions and will differ depending on whether you are conducting an outcome or a process evaluation. A number of different indicators may be available for understanding the effects of the evidence-based program on your participants, so it is important to clearly plan what questions you want to answer before you begin, because data collection efforts can become time-consuming and expensive.

Process Evaluation Data Collection
A number of ways to measure implementation exist, including self-assessment rubrics or surveys, observation checklists, interviews, and product reviews. Self-report information, such as interviews and surveys, may capture staff knowledge and context, and although these measures are often efficient, they are vulnerable to subject bias. For example, respondents may exaggerate or underreport in an attempt to make themselves look better.

Direct observations are often less efficient, but they may measure fidelity more reliably. Observations provide an outside, real-time perspective on what is happening rather than relying on the memory of the person implementing the program. The reliability of observations can be increased by training observers, using multiple observers with the goal of achieving high inter-rater reliability (a high level of agreement among observers; one simple way to quantify this is sketched below), and using detailed checklists and observation tools that anchor ratings in specific behaviors or practices. While observation eliminates the potential bias that comes from individuals reporting on themselves, people may act differently when they know they are being observed.

Document or product reviews allow you to evaluate what was done and are moderately efficient and reliable methods of measuring process implementation. However, a document review does not allow you to understand the context of what occurred and provides less information about delivery, dosage, and adherence.
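
As a rough illustration of inter-rater reliability, the sketch below compares the ratings of two hypothetical observers on the same ten checklist items and computes percent agreement and Cohen's kappa, a common chance-corrected agreement statistic. The observers, items, and ratings are invented for this example; an evaluator might instead use statistical software or a different agreement statistic.

  # Illustrative sketch only: two hypothetical observers rate the same ten
  # checklist items as implemented (1) or not implemented (0).
  from collections import Counter

  observer_a = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0]
  observer_b = [1, 1, 0, 1, 0, 0, 1, 1, 1, 1]
  n = len(observer_a)

  # Percent agreement: share of items rated the same way by both observers.
  agreement = sum(a == b for a, b in zip(observer_a, observer_b)) / n

  # Agreement expected by chance, based on how often each observer used each rating.
  counts_a, counts_b = Counter(observer_a), Counter(observer_b)
  ratings = set(observer_a) | set(observer_b)
  expected = sum((counts_a[r] / n) * (counts_b[r] / n) for r in ratings)

  # Cohen's kappa corrects the observed agreement for chance agreement.
  kappa = (agreement - expected) / (1 - expected)

  print(f"Percent agreement: {agreement:.2f}")   # 0.80 for these ratings
  print(f"Cohen's kappa: {kappa:.2f}")           # about 0.52 for these ratings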

Outcome Evaluation Data Collection
For an outcome evaluation, you may want to measure the outcome you are interested in over time to see whether it changes. For example, if you are interested in improved academic outcomes, you may look at a participant's score on an assessment before and after the program was implemented to see whether the score changed. You may want to compare an organization implementing the program with one that is not (this would require collecting data on the outcome of interest from both organizations), or you may want to look at variations in impact across subgroups (e.g., gender, age, socio-economic status, disabilities, cultural/linguistic differences).
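
As an illustration of this kind of pre/post and subgroup comparison, the sketch below summarizes assessment scores for a handful of hypothetical participants. The participant records, scores, and subgroup labels are invented, and a real evaluation would typically involve larger samples and appropriate statistical tests.

  # Illustrative sketch only: average pre/post change overall and by subgroup
  # for invented participant records.
  from statistics import mean

  participants = [
      {"id": 1, "group": "younger", "pre": 62, "post": 71},
      {"id": 2, "group": "younger", "pre": 55, "post": 60},
      {"id": 3, "group": "older", "pre": 70, "post": 72},
      {"id": 4, "group": "older", "pre": 48, "post": 59},
  ]

  # Average change from the pre-assessment to the post-assessment, overall.
  changes = [p["post"] - p["pre"] for p in participants]
  print(f"Average change overall: {mean(changes):.1f} points")

  # Average change within each subgroup, to look for variation in impact.
  for group in sorted({p["group"] for p in participants}):
      group_changes = [p["post"] - p["pre"] for p in participants if p["group"] == group]
      print(f"Average change, {group} participants: {mean(group_changes):.1f} points")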

Publishing Evaluation Results

Core components are the parts, features, attributes, or characteristics of a program that a range of research techniques show influence its success when it is implemented effectively. These core components can serve as the unit of analysis that researchers use to determine "what works," and they become the areas that practitioners and policymakers seek to replicate within and across a range of related programs and systems in order to improve outcomes. Research techniques such as meta-analysis can shed light on which components make programs successful across a range of programs and contexts and help researchers identify with greater precision what works, in which contexts, and for which populations.

The Office of Human Services Policy at the Office of the Assistant Secretary for Planning and Evaluation, U.S. Department of Health and Human Services, published a brief (PDF, 12 pages) that explains why it is important to collect and report a wide range of program characteristics and which characteristics should routinely be collected and reported in order to facilitate future meta-analyses that can help the field identify core components of effective programs. The brief was followed by a webinar, Advancing the Use of Core Components Approaches: Suggestions for Researchers Publishing Evaluation Results, that identifies categories of data that are important to collect and report in order to advance core components approaches. It also engages audience members in a discussion about how best to incentivize researchers, journal editors, and others to make more complete and detailed information available to identify, test, and scale up core components of effective programs.

Webinar on Evaluation

In March 2018, the Interagency Working Group on Youth Programs hosted a webinar to discuss the ways that commonly held beliefs derail the practice of program evaluation as part of a successful intervention. Experts from the American Institutes for Research shared evidence-based communications framing strategies to help organizations effectively gain buy-in from stakeholders for implementing program evaluation.

 

1 Weiss, C. H. (1998). Evaluation (2nd ed.). Upper Saddle River, NJ: Prentice Hall; Bronte-Tinkew, J., Joyner, K., & Allen, T. (2007). Five steps for selecting an evaluator: A guide for out-of-school time practitioners (Research-to-Results Brief No. 2007-32). Washington, DC: Child Trends.