Office of Juvenile Justice and Delinquency Prevention. Evaluating Juvenile Justice
Design Monograph for State Planners. Washington, DC: Prepared for the U.S. Department of
Justice, Office of Juvenile Justice and Delinquency Prevention by Community Research Associates, Inc.; 1989. pp. 43-44.
Use of Random Assignment
A major concern in any program evaluation is that something other than the program itself might be the cause of the results. Proper evaluation design can help eliminate many of these alternative explanations. As noted in Chapter Three, an experimental or comparative design provides the most definitive response to this question. Through experimental design the researcher obtains a comparative base that should be equivalent to the treatment group except that its members have not participated in the program under study. However, the use of experimental design in program evaluation is often hampered by the political, ethical, and pragmatic aspects of the public environment in which these projects are conducted. In many situations, program administrators or potential participants may object to the concept of random assignment, favoring instead selection based on need or order of application, i.e., first come, first served. At other times administrators may feel public and organizational pressure not to withhold treatment from a group of needy and worthy individuals. Furthermore, the costs of conducting an experimental design are often viewed as prohibitive.
While in many situations these factors may be valid and experimental procedure would be impractical, in others they can be overcome through the persistence of the evaluator and the support of the administrator. The strength of these barriers is often presumed to be greater than it actually is. It is common to limit the initial size of a new program or the number of jurisdictions in which it will be implemented. In other situations, program space may simply be limited, so that demand exceeds the capacity to accommodate all who would like to participate. These situations create a natural environment for implementing an experimental design.
When program resources are scarce, the fairest method of allocation is a random one. Under random assignment, administrators are not subject to criticism of political bias or other forms of favoritism in selecting those to receive treatment. The cost of an evaluation using an experimental design should not be appreciably greater than that of constructing comparison groups in a less rigorous manner. A major cost of experimental studies comes from the length of follow-up; a similar cost would be incurred by studies using a post hoc comparison. Additional costs are not a function of the experimental nature of the design but of the data collected. Moreover, any additional cost of this approach pales in comparison to the cost of widespread implementation of an ineffective, and even potentially damaging, program.
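The lottery-style allocation described above can be sketched in a few lines of code. This is a minimal illustration, not drawn from the monograph itself: the function name, the applicant identifiers, and the number of program slots are all hypothetical. It assumes that when applicants outnumber openings, a seeded random draw fills the program (the treatment group) and the remaining applicants form the comparison group.

```python
import random

def lottery_assignment(applicants, slots, seed=None):
    """Randomly assign applicants to treatment (program) or control.

    applicants: iterable of applicant identifiers
    slots: number of available program openings
    Returns (treatment, control) as two lists.
    """
    rng = random.Random(seed)  # seeding makes the draw reproducible and auditable
    pool = list(applicants)
    rng.shuffle(pool)          # every applicant has an equal chance at a slot
    return pool[:slots], pool[slots:]

# Hypothetical example: 10 applicants, 4 program slots
treatment, control = lottery_assignment(range(10), 4, seed=42)
```

Because selection is determined entirely by the random draw, no applicant or referring agency can claim favoritism, and the two groups are statistically equivalent at intake.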
With careful planning and explanation, experimental designs can be used in juvenile justice evaluation. The increased quality, rigor, and potential impact on public policy are well worth the effort.