Office of Juvenile Justice and Delinquency Prevention. Evaluating Juvenile Justice Programs: A
Design Monograph for State Planners. Washington, DC: Prepared for the U.S. Department of
Justice, Office of Juvenile Justice and Delinquency Prevention by Community Research Associates, Inc.; 1989. pp. 21-22.

Scope of the Evaluation

The critical question when thinking about the scope of an evaluation effort is: "What do I want to learn?" Answering this question will make your decisions easier and will help you make choices about data and methods. It is not a simple question. As with other questions we have reviewed, the answer depends on practical and political considerations, and the trick is to find a reasonable course of action so you can get on with the business of evaluating.

Your total program plan probably includes many programs of different kinds, funded at different levels, with varying goals and methods. Presumably you will not attempt to evaluate them all unless you have vast resources. The scope of your evaluation, then, will be smaller: maybe a few isolated evaluation efforts, maybe a coordinated evaluation of similar programs, or perhaps you will decide to evaluate only one program.

You may also choose to evaluate aspects of one or more large programs. A treatment program might include multiple facets or components (therapy, training, community activities), and you could decide to focus on only the training or therapy components, or you may choose to evaluate the training components of a number of programs.

Again, you must ask, "What do I, or what does my evaluation audience, really want to know about?" The legislature or state educational association may want to know about all of your training efforts. Your criminal justice constituency may only ask about a particular program's recidivism or failure rate, while you may feel there is more to be considered.

Evaluation need not always consider a single, total program. It also may cover more than one program, a component or components of a single program, a single component of many programs, and so on.

There is an important distinction in the evaluation literature that bears reviewing here, though it will be covered further on: program monitoring versus process and outcome evaluations. The following definitions for these concepts are offered:

Program Monitoring:

Developing and analyzing data for the purpose of counting specific program activities and operations.

Process Evaluation:

Developing and analyzing data to assess program processes and procedures; to assess the connections between various program activities.

Outcome Evaluation:

Developing and analyzing data to assess program impact and effectiveness.

These definitions lend a false simplicity to these concepts, but provide the correct impression that evaluation activities can be distinguished by levels of complexity, difficulty, and cost. In reality, most evaluations comprise some of each of these activities.

When thinking about what you want to learn through evaluation, think in the context of monitoring, process, and outcome evaluation. Evaluations simply cannot proceed without monitoring information, that is, answers to basic questions about what the program is doing and how much.

The volume of work done must be counted, sometimes in very detailed fashion. This is the general nature of program monitoring, and it must be done if other evaluation activities are to take place.

Monitoring is, by itself, an evaluation activity. The process will yield information to answer a question such as, "How, or what, is the program doing?" Activity levels can be compared to goals and objectives, and tracking them over time can provide important feedback to program staff, clients, administrators, and funders.
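
To make this concrete, here is a minimal sketch, in Python, of what a monthly monitoring comparison might look like. The program, its activity categories, and the targets are all hypothetical; the point is only that monitoring reduces to counting activities and comparing the counts to stated objectives.

    # A minimal monitoring sketch, assuming a hypothetical counseling
    # program. Activity names, counts, and monthly targets are invented
    # for illustration only.
    monthly_counts = {"intakes": 42, "counseling_sessions": 310, "referrals_out": 18}
    monthly_targets = {"intakes": 50, "counseling_sessions": 300, "referrals_out": 25}

    for activity, count in monthly_counts.items():
        target = monthly_targets[activity]
        status = "met" if count >= target else "below target"
        print(f"{activity}: {count} of {target} ({100 * count / target:.0f}%, {status})")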

As soon as you begin asking questions about the relationship between different activity levels, or the sequence of activities and systemic issues (how the activities are related in program procedures), you enter the realm of process evaluation. Sometimes referred to as "formative evaluation," process evaluation is concerned with providing feedback to staff and management to help avoid problems and adapt to changes in the program's internal or external environment.

In monitoring, you may keep track of the number of incoming clients, the staff workloads, and the provision of services to clients. Process evaluation takes these data a step further by analyzing the effect of trends in new clients on existing caseloads (and perhaps the external processes that are affecting program referrals), and on the time required to provide services. You are really building a model of program operations: identifying the relevant variables, measuring them, and then analyzing their interrelationships. This must be accomplished in some fashion to make the move to outcome evaluation.
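
The sketch below illustrates this kind of process analysis under invented assumptions: six months of hypothetical intake counts and average service delays, and a check on whether the two move together. A real process evaluation would draw on many more variables, but the logic is the same.

    # A process-evaluation sketch, assuming hypothetical monthly figures:
    # new-client intakes and average days from referral to first service.
    # All numbers are invented for illustration.
    intakes = [38, 44, 51, 47, 60, 66]              # new clients per month
    wait_days = [9.0, 9.5, 11.0, 10.5, 13.0, 14.5]  # average days to first service

    def pearson(xs, ys):
        # Plain Pearson correlation; no external libraries required.
        n = len(xs)
        mean_x, mean_y = sum(xs) / n, sum(ys) / n
        cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
        sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
        sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
        return cov / (sd_x * sd_y)

    print(f"intake volume vs. service delay: r = {pearson(intakes, wait_days):.2f}")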

An outcome evaluation assesses the success or effectiveness of a program or program component. Once you have achieved an analytical understanding of how a program operates, through monitoring and process evaluation, the next step is to assess program products, or outcomes. Consider a simple example involving a training program.

The outcome evaluation issues concern whether the intended training was provided, the extent to which program activities deviated from the original design, and whether the desired effects were achieved (better grades, higher self-esteem, better employment, less involvement in crime, etc.). Outcome evaluations may address efficiency or cost-effectiveness issues. They may also uncover unanticipated outcomes.
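
As a brief illustration of the cost-effectiveness point, the following sketch compares two hypothetical program components on cost per successful outcome. The component names, budgets, and success counts are invented.

    # A cost-effectiveness sketch; all figures are invented for illustration.
    components = {
        "job training": {"cost": 120000, "successes": 48},
        "counseling":   {"cost": 90000,  "successes": 30},
    }
    for name, c in components.items():
        print(f"{name}: ${c['cost'] / c['successes']:,.0f} per successful outcome")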

Conducting outcome evaluation requires adequate monitoring and process evaluation, for you cannot be sure an outcome was achieved by a program unless you can demonstrate a link between program activities (process) and results (outcome).
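
A final sketch shows this process-outcome link in miniature, using invented client records for a hypothetical training program: a process measure (training hours received) is paired with an outcome measure (rearrest within a year), and outcome rates are compared for clients who did and did not complete the training. A defensible outcome evaluation would, of course, require a proper comparison group and controls for differences among clients.

    # A process-outcome sketch, assuming invented client records for a
    # hypothetical training program. Each record pairs a process measure
    # (hours of training received) with an outcome measure (rearrest
    # within one year of program exit).
    clients = [
        {"hours": 40, "rearrested": False},
        {"hours": 35, "rearrested": False},
        {"hours": 8,  "rearrested": True},
        {"hours": 30, "rearrested": False},
        {"hours": 5,  "rearrested": True},
        {"hours": 12, "rearrested": False},
    ]

    def rearrest_rate(group):
        return sum(c["rearrested"] for c in group) / len(group)

    completed = [c for c in clients if c["hours"] >= 30]
    partial = [c for c in clients if c["hours"] < 30]
    print(f"completed training: {rearrest_rate(completed):.0%} rearrested")
    print(f"partial training:   {rearrest_rate(partial):.0%} rearrested")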