Identifying Effective Interventions: A Report from GAO

In November 2009, the Government Accountability Office (GAO) released the report, "Program Evaluation: A Variety of Rigorous Methods Can Help Identify Effective Interventions" (PDF - 49 pages).

As state and local youth-serving organizations know well, many federal funding streams are tied to the use of evidence-based programs: programs that have been proven effective and are listed in federally hosted evidence-based program directories. However, the debate continues over what constitutes an appropriate level of evidence of effectiveness and an appropriately rigorous design. Some groups believe that these directories should include only programs evaluated in randomized experiments showing sizable and sustained benefits to participants across multiple sites; others believe that programs grounded in a theory of change should be listed alongside those with more rigorous evaluation designs.

In the midst of this debate, GAO examined the literature on evaluation methods, consulted experts on the use of randomized experiments in federal social programs, and reported its findings. GAO found that "randomized experiments are considered best suited for assessing intervention effectiveness where multiple causal influences lead to uncertainty about program effects and it is possible, ethical, and practical to conduct and maintain random assignment to minimize the effect of those influences."

However, GAO also cautioned that requiring evidence from randomized studies as the only acceptable proof of effectiveness would exclude many potentially effective and worthwhile practices. Randomized experiments are well suited to interventions for which control and treatment groups can be formed and kept distinct and intact throughout the study. For some programs, however, a randomized experiment may not be practical or ethical: entitlement programs, policies that apply to everyone, or interventions that involve exposure to negative events. The evaluation literature GAO reviewed also indicates that randomized experiments are less informative for the complex, broad-based, diverse packages of interventions and policies that communities may implement in response to local needs. Alternative methods, such as quasi-experimental comparison group studies, statistical analyses of observational data, and in-depth case studies, are often useful as well.

GAO explained that communities choosing an intervention should weigh many factors beyond evaluation rigor, including cost and suitability to the local community's needs, demographics, and available resources. For example, an intervention shown to be effective with rural youth in Alaska may not be appropriate for urban youth in Los Angeles, and for some communities, the cost of implementing a rigorously evaluated program may be prohibitive.

GAO recommended improving evaluation quality and strengthening the technical expertise behind evaluations, including detailed protocols and training for those who evaluate programs.

Read the full report (PDF - 49 pages).

View the evidence-based programs for youth listed on youth.gov.