Office of Juvenile Justice and Delinquency Prevention. Evaluating Juvenile Justice Programs: A
Design Monograph for State Planners. Washington, DC: Prepared for the U.S. Department of
Justice, Office of Juvenile Justice and Delinquency Prevention by Community Research Associates, Inc.; 1989.  pp. 41-43.



Basic Guidelines for the Development of Survey Items

If there is an interest in understanding what victims feel about their participation in a restitution program, then a survey of such victims would be most appropriate. If, on the other hand, there is an interest in obtaining viewpoints from victims and nonvictims concerning the acceptability of restitution as a sanction, then surveys of the general population should be used.

One of the principal advantages of surveys is the use of sampling. Sampling allows the surveying of a small group that is representative of the larger population yet large enough to permit generalization to the whole group. The key word is representative. There are many forms of sampling that can produce adequate samples in different situations, but professional advice is recommended.
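The simplest of these forms, a simple random sample, can be drawn with standard tools. The sketch below is a minimal illustration, assuming a hypothetical sampling frame of 500 respondent identifiers; it uses only Python's standard library.

```python
import random

# Hypothetical sampling frame: identifiers for everyone in the
# population of interest (e.g., victims who took part in a
# restitution program). In practice this list would come from
# program records.
population = [f"respondent_{i}" for i in range(1, 501)]

# Draw a simple random sample without replacement; every member
# of the frame has an equal chance of selection, which is what
# makes the sample representative in expectation.
random.seed(42)  # fixed seed so the draw can be reproduced
sample = random.sample(population, k=50)

print(len(sample))   # 50 respondents selected, no one twice
```

Stratified, cluster, and other designs require more care in both selection and analysis, which is why the professional advice noted above is worthwhile.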

To elicit meaningful information from a survey, all respondents must answer the same question. This cannot happen if questions require respondents to interpret what is being asked. Questions should be stated in as simple and direct a manner as possible. The use of double negatives can result in substantial confusion over what is being asked and what the appropriate response would be. Consider the following question, for example.

"Do you approve or disapprove of the juvenile court not allowing status offenders to be placed in secure detention?"

As phrased, the question is confusing. The respondent may not understand that to disapprove of the statement is to favor secure detention of status offenders. Instead, the question should be stated in the affirmative, such as:

"Do you favor the use of secure detention for status offenders?"

Avoid stating questions as double negatives, and avoid confusing phrases and implicit negative words that require positive responses for a negative opinion. Instead of asking "Do you oppose gun control?", the more direct and positive question "Do you favor gun control?" is less confusing and thus preferable.

Also avoid double-barreled questions. These are single questions that ask for responses about two or more different things. For example:

"Do you favor community juvenile justice programs such as diversion and restitution?"

Responses to this question could be either opinions about diversion or about restitution. Similarly, the question:

"How satisfied are you with the police and juvenile court response to delinquency?"

requires the respondent to assess both the police and juvenile court with a single response. In such cases two separate questions should be used to clarify what is being asked, or only one concept should be included in the question.

"Do you favor diversion as a form of community juvenile justice program?"

Better information and more focused responses are obtained from specific rather than general questions. But there needs to be agreement between what the evaluator is asking and what the respondent thinks he or she is answering, because in some situations there can be different definitions of a concept. For example, if a question asks if the respondent was physically abused as a child, the individual may answer "no" because he or she doesn't consider the treatment received to be child abuse. Similarly, asking respondents if they have been crime victims (or offenders) may not elicit accurate responses if they do not consider the behavior under investigation to be criminal. In these situations you should ask specific questions about the actual conduct in question. Instead of asking respondents if they have been crime victims, a series of questions reflecting specific criminal behaviors should be posed. For example:

"Have you had anything taken from you by force or the threat of force?"

This would reveal whether the respondent had been robbed without requiring him or her to define robbery.

Moreover, specific questions can pinpoint the source of opinions. A general question, "Do you feel that the juvenile court is doing a poor job, a fair job, a good job, or an excellent job?", will indicate the respondent's overall rating of the juvenile court but not the source of, or reasons for, that rating. An alternative approach would present several questions regarding court operations to obtain an evaluation of a range of services.

One of the most common forms of survey questions is the agree/disagree statement. For example, "Do you agree or disagree with the statement that juvenile offenders should be provided with the same due process rights as adults?" Studies show that questions stated in this form tend to elicit a positive response ("agree") regardless of their content. Respondents will even agree with contradictory statements due to this tendency. A more appropriate form of this question might be, "Do you feel that juvenile offenders should have the same due process rights as adults, fewer due process rights than adults, or greater due process rights than adults?"

Learning that a high percentage of respondents agrees with a particular statement tells us little about how strongly they feel about it. Individuals can support a certain statement yet not feel strongly about the issue, or those who support a position can be more intense in their views than those who oppose it. Placing responses on the familiar "strongly agree-strongly disagree" continuum confuses the issues of support and intensity. Generally, follow-up questions such as "How strongly do you feel about that position?" should be asked to determine the intensity of the respondent's viewpoint.

Pretesting is one of the most important yet most frequently ignored stages in survey development. The purpose of the pretest is to ensure that the survey is measuring what you think it is measuring, and that if administered a second time it would obtain similar responses, demonstrating that the responses are not a function of the instrument itself. Pretesting is not an obscure science; rather, it is part of the ongoing process of instrument development. Like much of evaluation research, it involves common sense to determine whether the new instrument performs as expected during one or more dry runs. In conducting a pretest, it is beneficial to debrief respondents (who should be drawn from a population similar to the one targeted in the actual study) about specific questions to determine how they interpreted them, why they responded as they did, and how they might have responded if a question had been presented in a different manner.

While surveys have numerous potential pitfalls, the relatively low cost and ease of administration make them an attractive research tool. Careful design, administration, and analysis can overcome these difficulties and produce valuable evaluation data.