Evaluation Guidebook

Projects Funded by S.T.O.P.
Formula Grants under the
Violence Against Women Act

by

Martha R. Burt
Adele V. Harrell
Lisa C. Newmark
Laudan Y. Aron
Lisa K. Jacobs

December 1997

Table of Contents Through Chapter 4

The nonpartisan Urban Institute publishes studies, reports, and books on timely topics worthy of public consideration. The views expressed are those of the authors and should not be attributed to the Urban Institute, its trustees, or its funders.

This project was supported by Grant No. 95-WT-NX-0005 awarded by the National Institute of Justice, Office of Justice Programs, U.S. Department of Justice. Points of view in this document are those of the authors and do not necessarily represent the official position or policies of the U.S. Department of Justice or of other staff members, officers, trustees, advisory groups, or funders of the Urban Institute.


TABLE OF CONTENTS

WHERE TO GET RESOURCES DISCUSSED IN THIS GUIDEBOOK

PREFACE

CHAPTER 1: GETTING STARTED: THINKING ABOUT EVALUATION

CHAPTER 2: DEVELOPING AND USING A LOGIC MODEL

CHAPTER 3: MAKING EVALUATION WORK FOR YOU

CHAPTER 4: USING EVALUATION INFORMATION

CHAPTER 5: USING THE SAR AND THE SSS

CHAPTER 6: CHOOSING AN EVALUATION DESIGN

 

INTRODUCTION TO THE RESOURCE CHAPTERS

CHAPTER 7: VICTIM SAFETY AND WELL-BEING: MEASURES OF SHORT-TERM AND LONG-TERM CHANGE

CHAPTER 8: DESCRIBING VICTIM SERVICES AND SUPPORT SYSTEMS

CHAPTER 9: EVALUATING CRIMINAL AND CIVIL JUSTICE AGENCY CHANGES

CHAPTER 10: MEASURES OF COMMUNITY COLLABORATION

CHAPTER 11: MEASURING CHANGE IN COMMUNITY ATTITUDES, KNOWLEDGE OF SERVICES, AND LEVEL OF VIOLENCE

CHAPTER 12: MEASURING PERCEPTIONS OF JUSTICE

CHAPTER 13: MEASURING THE IMPACT OF TRAINING

CHAPTER 14: DATA SYSTEM DEVELOPMENT

CHAPTER 15: SPECIAL ISSUES FOR EVALUATING PROJECTS ON INDIAN TRIBAL LANDS

 


 

WHERE TO GET RESOURCES DISCUSSED IN THIS GUIDEBOOK

Copies of this Evaluation Guidebook:

1. Every state STOP coordinator has both a hard copy and a computer disk copy of the Guidebook. Call VAWGO for the contact information for the STOP coordinator in your state. VAWGO phone number: (202) 307-6026

2. Every statewide coalition on domestic violence and/or sexual assault has a hard copy of the Guidebook.

3. The STOP TA Project has both a hard copy and a computer disk copy of the Guidebook. TA Project phone numbers: (800) 256-5883 or (202) 265-0967.

4. The Guidebook is available in photoimage and in text formats on the Urban Institute's Internet webpage at http://www.urban.org/crime. The VAWGO and NIJ websites also link to this page.

NOTE: Hard copy versions of the Evaluation Guidebook are complete and may legally be copied as many times as needed. Internet and disk versions of the Evaluation Guidebook are missing a few pages of Chapter 7 because some publishers do not grant permission to reproduce their materials in any format other than hard copy.

5. You can buy the Guidebook for cost plus shipping from the Urban Institute's clearinghouse. Call (202) 857-8687 or email pubs@ui.urban.org.

Copies of evaluation instruments referenced in this Guidebook:

1. The STOP TA Project has hard copies of the measuring instruments described in Chapters 7 and 11 of this Guidebook (only hard copy is available due to copyright requirements). TA Project phone numbers: (800) 256-5883 or (202) 265-0967.

Further help with your evaluation:

1. The STOP TA Project will soon have the capacity to offer a limited amount of technical assistance related to evaluation. TA Project phone numbers: (800) 256-5883 or (202) 265-0967. PLEASE DO NOT CALL THE URBAN INSTITUTE; it is not funded to provide technical assistance.

 


 

PREFACE

This Evaluation Guidebook is intended as a resource for all people interested in learning more about the success of programs that try to aid women victims of violence. It has been written especially for projects funded through STOP formula grants, but has wider application to any program addressing the needs of women victimized by sexual assault, domestic violence, or stalking.

The Violence Against Women Act (VAWA), Title IV of the Violent Crime Control and Law Enforcement Act of 1994 (P.L. 103-322), provides for Law Enforcement and Prosecution Grants to states under Chapter 2 of the Safe Streets Act. The grants have been designated the STOP (Services, Training, Officers, Prosecutors) grants by their federal administrator, the Department of Justice's Violence Against Women Grants Office (VAWGO) in the Office of Justice Programs (OJP). Their purpose is "to assist States, Indian tribal governments, and units of local government to develop and strengthen effective law enforcement and prosecution strategies to combat violent crimes against women, and to develop and strengthen victim services in cases involving violent crimes against women."

VAWA also places major emphasis on collaboration to create system change and on reaching underserved populations. System change may result from developing or improving collaborative relationships among justice system and private nonprofit victim service agencies. This emphasis on collaboration is key to the legislation's long-term ability to bring about needed system change. Historically, the relationships among these agencies in many states and communities have been distant or contentious, with little perceived common ground. VAWA is structured to bring these parties together, in the hope that they can craft new approaches that ultimately will reduce violence against women and the trauma it produces. In addition, many groups of women have not participated in the services available to women victims of violence. To close these service gaps, VAWA encourages states to use STOP funds to address the needs of previously underserved victim populations, including racial, cultural, ethnic, language, and sexual orientation minorities, as well as rural communities. The Act requires assessment of the extent to which such communities of previously underserved women have been reached through the services supported by STOP grants.

To accomplish VAWA goals, state coordinators in all 50 states, the District of Columbia, and five territories distribute STOP formula grant funds to subgrantees, which carry out specific projects. Subgrantees can be victim service agencies, law enforcement or prosecution agencies, or a wide variety of other agencies. They run projects addressing one or more of the STOP program's seven purpose areas.

This Guidebook is designed to help subgrantees document their accomplishments, and to offer assistance to state STOP coordinators as they try to support the evaluation activities of their subgrantees. Everyone should read the first four chapters, which introduce the reader to issues in doing evaluations and in working with evaluators.

 

The only evaluation activity that VAWGO requires for projects receiving STOP funds is that they complete a Subgrant Award Report (SAR) at the beginning of their project and for each add-on of funding, and Subgrant Statistical Summary (SSS) forms covering each calendar year during which the project operates. Everything else in this Guidebook is optional with respect to federal requirements, although state STOP coordinators may impose additional evaluation requirements of their own. Therefore, even if they read nothing else, all state STOP coordinators and STOP subgrantees should read Chapter 5, which explains the requirements of the SAR and SSS.

The remaining chapters of this Guidebook form a reference or resource section covering specialized topics. There is no need for everyone to read all of these chapters. Rather, you can go directly to the chapter(s) that provide measures relevant to your particular project. In addition, the Introduction to the Resource Chapters uses logic models to demonstrate how programs of different types might need or want to draw upon the evaluation resources from one or more of the resource chapters.

 

 

 


CHAPTER 1
GETTING STARTED: THINKING ABOUT EVALUATION

This chapter lays the groundwork for thinking about evaluation. Many readers of this Guidebook will never have participated in an evaluation before. Even more will never have been in charge of running an evaluation, making the decisions about what to look at, how to do it, and how to describe your results. Some readers may have had negative experiences with evaluations or evaluators, and be nervous about getting involved again. We have tried to write this Guidebook in a way that makes evaluation clear and helps all readers gain the benefits of evaluations that are appropriate to their programs.

Rigorous evaluation of projects supported with STOP formula grant funds is vitally important. Evaluation can document what your project does, how well it does it, and how far it is moving toward its goals. You can use this information to promote your program, to improve it, and to encourage your community to make needed changes (Chapter 4 discusses these uses in detail).

The kind of information you will get, and what you can do with it, depends on the kind of evaluation you select. You need to start by asking what you hope to learn and how you plan to use the findings. Answer these questions: Who is the evaluation audience? What do they need to know? When do they need to know it?

You can choose from the following types of evaluation: performance monitoring, process evaluation, and impact evaluation (each is described in more detail in Chapter 6).

A comprehensive evaluation will include all of these activities. Sometimes, however, the questions raised, the target audience for findings, or the available resources limit the evaluation focus to one or two of these activities. Any of these evaluations can include estimation of how much the project or project components cost. Impact evaluations can include assessments of how the costs compare to the value of benefits (cost-benefit analysis) or the efficiency with which alternative projects achieve impacts (cost-effectiveness analysis).

What Should Be Evaluated?

State STOP coordinators may want to develop statewide evaluations that cover many or all of the projects they fund with STOP dollars. Alternatively, state STOP coordinators could decide that the Subgrant Award Reports and Subgrant Statistical Summaries will supply adequate basic information to describe their portfolio of subgrants, and will concentrate special evaluation dollars on special issues. To see longer term program effects on women, they might decide to fund a follow-up study of women served by particular types of agencies. Or, they might decide that they have funded some particularly innovative programs for reaching underserved populations, and they want to know more about how well known and well respected these programs have become within their target communities.

In addition, regardless of the evaluation activities of state STOP coordinators, STOP subgrantees might want to learn more about the effects of their own particular program efforts. They might devote some of their own resources to this effort, and might have a need to work with an evaluator in the process.

This Guidebook is written to help both state STOP coordinators and STOP subgrantees as they develop plans to evaluate STOP projects. It contains information that can be applied statewide, to a subgroup of similar projects throughout the state or one of its regions, or to a single project.

One issue that may arise for anyone trying to use this Guidebook is, Am I trying to isolate and evaluate the effects of only those activities supported by STOP funds? Or, am I trying to understand whether a particular type of activity (e.g., counseling, court advocacy, special police or prosecution units, providing accompaniment to the hospital for rape victims) produces desired outcomes regardless of who pays for how much of the activity?

We strongly advocate taking the latter approach—evaluate the effectiveness of the activity, regardless of who pays for it. Your evaluations of project results for outcomes such as victim well-being, continued fear, children's school outcomes, or the gradual disappearance of sexual assault victims' nightmares and flashbacks need to look at the program as a whole. Unless STOP funds are being used to pay for the whole of an entirely new and separate activity, it is virtually impossible to sort out what STOP funds accomplished versus what other funds accomplished. Instead, we suggest using your evaluation effort to document that the type of activity has good effects. Then you can justify spending any type of money for it, and can say that STOP funds have been used to good purpose when they support this type of activity. Only when your evaluation is focused on simple performance indicators is it reasonable to attribute a portion of the project achievements (e.g., number of victims served, officers trained) to STOP versus other funding sources.

A different issue you may be facing is that your program is already under way, so you cannot create the perfect evaluation design that assumes you had all your plans in place before the program started. We discuss options for handling this situation in Chapter 2.

A final issue for those thinking about evaluation is, Is the program ready for impact evaluation? Not all programs are ready. Does that mean you do nothing? Of course not. Process evaluation is a vital and extremely useful activity for any program to undertake. We discuss this issue in more detail in Chapter 2.

 

Working Together: Participatory Evaluation

In Chapter 3 we discuss many issues that arise when working with evaluators. However, because we think attitude is so important in evaluation, we include here in Chapter 1 a few comments on participatory evaluation.

Some evaluations are imposed from above. Funders usually require programs to report certain data to justify future funding. Funders may also hire an outside evaluator and require funded programs to cooperate with the activities of this evaluator. In many cases, the programs to be evaluated are not asked what they think is important about their programs, or what they think would be fair and appropriate measures of their own programs' performance.

We think it is important for the entire enterprise of evaluation that those being evaluated have a significant share in the decision-making about what should be evaluated, when, how it should be measured, and how the results should be interpreted. We urge state STOP coordinators and STOP subgrantees to work together to develop appropriate evaluation techniques, measures, and timetables for the types of activities that STOP funds are being used to support.

A participatory approach usually creates converts to evaluation. In contrast, an approach "imposed from above" is more likely than not to frighten those being evaluated, and make them wary of evaluation and less inclined to see how it can help them improve their own program and increase its financial support.

 

Including all of the interested parties in designing evaluations helps to avoid the selection of inappropriate measures and promote the selection of measures that both reflect program accomplishments and help guide program practice.

Respect for Women's Privacy and Safety

Just as program directors and staff want their needs taken into consideration during the design of an evaluation, so too must we think about the needs of the women from whom a good deal of the data for evaluations must come. Everyone who has worked with women victims of violence knows the problems they face with having to repeat their story many times, the concerns they may have about how private information will be used and whether things they say will be held in confidence, and the problems that efforts to contact women may raise for their safety and well-being. No one wants to conduct an evaluation that will make a woman's life harder or potentially jeopardize her safety. We discuss these issues at greater length in Chapter 6, which also describes some approaches you can use to obtain permission to conduct follow-up interviews and to keep track of women between follow-up interviews. We think it is important to note here that including some women who have experienced violence in your evaluation design and data collection planning will give you the opportunity to check your plans against what they consider feasible and safe.

 

How to Use this Guidebook

Most people will not want or need to read this whole Guidebook, and there certainly is no need to read it all at one sitting—no one can handle more than 200 pages of evaluation techniques and measures. Chapters 1 through 4 contain the "basics" that everyone will want to read, and that's only about 25 pages. After that, you can pick and choose your chapters depending on your evaluation needs. Some will be relevant to you early in your evaluation work, while others may be more useful somewhat later on.

Chapter 5. Since all state STOP coordinators and all STOP subgrantees must use the SAR and SSS to report activities to VAWGO, each of you should read Chapter 5 when your subgrant begins (or now, if you have already started). It will help you plan how you are going to record the data you will need for these reports.

Chapter 6. This chapter gets into the details of evaluation design, how to choose the right level of evaluation and specific research design for your circumstances, and how to protect women's privacy and safety while conducting evaluations. Everyone doing any type of evaluation should read the sections on choosing the right level of evaluation for your program, and on protecting privacy and safety. The section on choosing a specific research design contains more technical language than most of the rest of this Guidebook, and should be used by those who are going to be getting down to the nitty-gritty of design details, either on their own or with an evaluator. You may also want to use that section along with a standard evaluation text, several of which are listed in the chapter's addendum.

Chapters 7 through 15. Each of these chapters takes a different purpose or goal that could be part of a STOP subgrant, and focuses on the measures that would be appropriate to use in identifying its immediate, short-, and/or long-term impact. Chapters 7 through 12 focus on particular types of outcomes you might want to measure (victim, system, community, attitudes). Chapters 13 and 14 address measurement options for two complex types of projects—training and data system development. Chapter 15 addresses special issues involved in doing evaluations on Indian tribal lands.

If you are doing a training project, read Chapter 13 and, if you want ultimate justice system impacts, also read Chapter 9. If you are doing a victim services project or otherwise hoping to make a difference for victims' lives, read the victim outcomes chapter (Chapter 7) and the victim services chapter (Chapter 8). If your project involves developing and implementing the use of a new data system, read Chapter 14. If you are concerned with changing perceptions of justice within the system as other parts of the system change, read Chapter 12. Use these chapters as best suits your purpose. There is no need to commit them all to memory if they cover something that is not relevant to your project and its goals.

 


CHAPTER 2
DEVELOPING AND USING A LOGIC MODEL

You need to start your evaluation with a clear understanding of your project's goals and objectives. Next you need to think about the activities your project carries out, and your beliefs about how those activities will eventually result in reaching your project's goals. Start by drawing a diagram with four columns: A (background factors), B (program services and activities), C (external services/factors), and D (goals, or outcomes, both immediate and longer term). Then follow the steps described in the rest of this chapter.

 

 

Creating a Logic Model

A sample logic model is shown in Exhibit 2.1, using an evaluation of a shelter-based program for domestic violence victims as a hypothetical example. Exhibit 2.2 shows a blank diagram that you can use to create a logic model for your own project.

 

 

Exhibit 2.1
Logic Model for Evaluation of Shelter-Based Services
for Domestic Violence Victims

Column A: Background Factors
- Children
- Language
- History of Abuse
- Employment/Education/Income
- Pending Legal Actions

Column B: Program Services and Activities
- Shelter/Housing (temporary, transitional)
- Counseling (individual, group)
- Emergency Assistance (cash, food, clothing)
- Legal Advocacy (court accompaniment, help with protection orders, referrals)
- Help with Children's Needs (counseling, custody/visitation, health care, day care)

Column C: External Services/Factors
- Police Response
- Family/Friends/Social Support
- Availability of Needed Services
- Court Response

Column D: Goals (Outcomes)
D1: Immediate
- A Safety Plan
- Immediate Safety
- Linkages to Services as Needed (housing, health care, job/education, legal assistance)
- Increased Legal Protection
D2: Longer Term
- Reductions in: threats/stalking, emotional/psychological abuse, physical abuse, injury
- Increases in: perceived safety, empowerment, mental health
 

 

Exhibit 2.2
Logic Model for Your Program

Column A: Background Factors
Column B: Program Services and Activities
Column C: External Services/Factors
Column D: Goals (Outcomes), divided into D1 (Immediate) and D2 (Longer Term)

(This exhibit is a blank grid with these four columns, for you to fill in with your own program's background factors, services and activities, external factors, and goals.)

 

The diagrams can be used to plan your evaluation as follows:

The logic model can also be the basis for preparing a list of variables (indicators) needed to measure the concepts identified in each box. To do this, make a separate page for each box in your own logic model. Take the first page and list the variables needed for the first box. Using the example in Exhibit 2.1, the first box is "Children." Basic variables might be: yes/no; number of children; and, for each child, age, gender, and where the child lives (with the client, or elsewhere). Additional variables that might be relevant, depending on what types of services your program offers, include whether the child(ren) were also abused, how they were abused, what school they are attending, whether they have attendance problems, and so on.

Once you have listed the variables, list next to each variable the source(s) from which you will (or may) be able to get the information, such as court records, program records, or interviews with the victim or program staff. Whenever possible, we recommend getting information on the same variable from multiple sources, either to verify that the information is correct (e.g., number of children) or to have several "reads" on the same variable if it is a hard one to measure. For example, you could measure reductions in victimization with information from both the victim and police records; each would tell you something different about the outcome.

Repeat these two steps (listing variables and listing data sources) for each box in your own logic model (Exhibit 2.2).
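If you keep these lists electronically, a very simple structure can hold each box of your logic model, the variables in it, and the source(s) planned for each variable. The short sketch below is written in Python purely as an illustration; the boxes and variable names are hypothetical examples drawn from the "Children" discussion above and the Exhibit 2.1 outcomes, not required STOP data elements.

    # Illustrative sketch only: logic-model boxes, the variables in each box,
    # and the data source(s) planned for each variable. All names are
    # hypothetical examples, not required STOP reporting elements.
    logic_model_variables = {
        "Children (background factor)": {
            "has_children": ["victim interview", "program intake form"],
            "number_of_children": ["victim interview", "program intake form"],
            "child_age_gender_residence": ["victim interview"],
        },
        "Reductions in physical abuse (longer-term outcome)": {
            # Two sources give independent "reads" on a hard-to-measure outcome.
            "abuse_since_intake": ["victim follow-up interview", "police records"],
        },
    }

    def data_collection_checklist(model):
        """Print every variable and the source(s) it will be collected from."""
        for box, variables in model.items():
            print(box)
            for variable, sources in variables.items():
                print("  " + variable + ": " + ", ".join(sources))

    data_collection_checklist(logic_model_variables)

A listing like this doubles as the starting point for the data collection plans described next, since it shows at a glance which agencies and records you will need access to.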

Once you know what variables you want to measure and where the information will come from, you can begin to develop data collection plans. Developing these plans can take a while. You need to find out what records are maintained by agencies, decide what information is needed to construct variables, get an understanding of the limits and meaning of the records, arrange permission to get the data if you don't already have it all in your files, and develop a protocol specifying exactly what data elements will be collected, by whom, and how. You need to decide when and how program participants can be contacted to provide information, what questions should be asked, and who will collect the information. Similarly, you will want to make lists of persons on the program staff, and persons working in other agencies, who need to be interviewed, and what questions are appropriate for each. You may want to add to or change the forms your own agency uses to be sure that you are collecting the necessary information for each client.

 

Program Evaluability

Not every program is ready for an impact evaluation (even if it could still profit from a process evaluation and/or from performance monitoring). In addition, not every program needs the same level of evaluation. Since there are never enough evaluation resources to go around, it is important to decide whether impact evaluation is justified and feasible for your project, whether it is likely to provide useful information, and what level of evaluation is needed (monitoring, process evaluation, or impact evaluation). Evaluation planners should ask themselves:

 

Negative or unsatisfactory answers to enough of these questions about a particular project suggest that the project is not ready for impact evaluation, and that major impact evaluation resources would be better spent on projects for which these questions can be answered more positively.

 

If Not Impact Evaluation, What?

Even if your program is not ready for impact evaluation, it could still benefit from a process evaluation, and you could still collect useful information through performance monitoring. The mere exercise of answering the evaluability questions just posed often helps a program identify gaps in its practice, mismatches between its practice and its goals, and ways the program could be improved. This is one outcome of a process evaluation; other benefits of a process evaluation include achieving a greater understanding of work flow, successes and issues in service delivery, successes and issues in outreach/client finding and in follow-up, and so on. In addition, performance monitoring will provide you with basic descriptive information about who you are serving, what they are getting from your program, and how they feel about the services they have received. These options are described in more detail in Chapter 6.

 

Evaluation Options for Programs that Are Already Under Way

With the exception of a few carefully planned research demonstrations, most evaluations are done with programs whose operations were already under way before the evaluation started. These circumstances obviously preclude starting "at the beginning." However, you still have a number of options open to you, including identifying and collecting data from a meaningful comparison group, and monitoring the performance of your agency. Chapter 6 discusses these options at greater length.

 


CHAPTER 3
MAKING EVALUATION WORK FOR YOU

In Chapter 1 we talked about the many reasons why you would want to participate in an evaluation. Assuming you are sold on the potential usefulness of evaluation for your program, how do you make sure you get a good one?

Sometimes the biggest challenge to getting useful evaluation results is finding an evaluator who understands your program and with whom you feel comfortable working. The main goal is to get what you want. It's not always easy.

This chapter offers some "down home" advice about how to get what you want from an evaluator and an evaluation. It is based on many years of experience working with programs like those being funded with STOP grants, and learning to appreciate the issues these programs face when they try to participate in evaluation.

This chapter is intended to aid state STOP coordinators as they work with evaluators charged with conducting statewide evaluations; subgrantees working with evaluators they have hired to assess their own subgrant activities; and subgrantees whose state coordinators have hired a statewide evaluator and asked the subgrantees to cooperate with a statewide evaluation.

Each of these potential users of this Guidebook has different needs. We would like to help the process of making these needs compatible rather than competitive or combative. This chapter assumes that readers accept the goals articulated in Chapter 1 for creating evaluations based on the participation of those being evaluated in designing the evaluation. When the evaluator or purchaser of the evaluation is the state STOP coordinator, it is all the more important that the coordinator and subgrantees arrive at a mutual agreement about what should be evaluated, how, and who has responsibility for what, before embarking on a full-fledged evaluation.

 

Finding an Evaluator

It may not be so easy to find an evaluator in the first place, let alone find one you will enjoy working with. You have several options: ask around; look at published reports and other published information, including using the Internet; or start at a university.

The best way to find an evaluator you will be happy with is to ask others for recommendations. People you can ask include (1) program directors in other programs like yours; (2) staff of state or local government human services or justice system agencies, who sometimes contract out evaluations; (3) planning agency and legislative staff, who often have to use evaluation reports done by others; and (4) your state STOP coordinator. The advantage of seeking personal recommendations is that you will be able to ask your referral sources what they liked about the people they are recommending, what they did not like, and what advice they can give you about how to work well with the evaluator. In addition, you won't have to know the name of an evaluation organization in order to find out about it.

In the absence of direct access to people who have had good evaluation experiences, you still have a number of options, but you will have to do more sorting out on your own. You can search in a library, or on the Internet, using words like "evaluation," "policy research," "research," and the like. This may turn up names of organizations, but is more likely to turn up published reports of evaluations. You can examine these reports to discover who conducted the research and, if you like their style, take it from there.

Organizations that conduct evaluations include nonprofit organizations, for-profit companies, institutes or centers at universities, and some private individuals. Some but not all of the organizations have "research," "policy research," "policy analysis," "social research," and other such words in their titles.

Universities offer another possible source of evaluation expertise, through faculty who teach research or evaluation methods in the fields of public policy, public health, criminology, sociology, community psychology, and other fields. These faculty may themselves be interested in helping design and/or conduct an evaluation, or they may be able to refer you to others who have the necessary experience.

One potential source of inexpensive or free help with evaluations is university students. If you can design your own evaluation but need help with data collection, you may be able to hook up with students who need to conduct field work interviewing for a class project. You may even be able to get a student placed with your agency for a semester or two as an intern, with the expectation that the student will focus on evaluation activities such as follow-up of previous clients or maintaining a database of services rendered by your agency. One way to establish a connection to students who may be interested in your project is to cultivate a relationship with one or more interested professors of the types described in the previous paragraph.

If you are in any of the search modes other than direct recommendations from people you know and trust, you can try to improve your chances of finding someone good by asking for references. Get from the company, organization, or consultant the names and contact information for three or four clients with whom they have worked recently (preferably within the past year or two). Then check with these people to see how they felt about their experience with the evaluator.

Regardless of the route you take, once you identify an evaluation organization, be sure that you also identify the specific people in that organization with whom programs like yours have had successful experiences. Then make sure that you work with those people, and accept no substitutes. In addition, be sure that the people you want to work with have enough time available to work with you, and that their schedules can be adjusted to accommodate your getting the work done when you need it, not when it suits the schedule of the evaluator.

 

Getting What You Want from Evaluation

The more explicit you can be about what you want from an evaluation, the more likely you are to get it. One thing is absolutely certain—if you don't know what you want, you have only yourself to blame if you go ahead with an evaluation and are unhappy with its results.

A good evaluator can and should help you articulate what you want. If someone you are thinking about hiring as an evaluator walks into your office for the first time with an evaluation plan already on paper or in their head, murmur politely if you must, but get rid of them. They should be walking in with nothing but questions. They and you should spend hours coming to a mutual understanding of what the program is trying to accomplish (goals), exactly how it is trying to get there (its activities, services, or other behaviors), for whom, and all the steps along the way. It is important that you have in your own mind (or even better, on paper) a clear logic model or "theory" of how your program's activities accomplish your program's goals (see Chapter 2 for how to do this). If you already have one thought out, that is terrific. But a good evaluator will help you create one if you don't have one, or will help you refine the one you have. Do not go forward with an evaluation without one, and make sure it is one that both you and the evaluator understand.

You should be happy at the end of this meeting. You should feel you have been understood, that the evaluator "gets it" with respect to your goals, your problems, your concerns. Think of it as a marriage counseling session, in which A is not allowed to respond to what B has just said until he/she can repeat what B said to B's satisfaction (i.e., B feels understood). If your meeting has gone really well you might even feel inspired, because some of the things you have felt but not previously articulated about your program will now be explicit. Be sure to include all of your key staff in this meeting, especially those who have been around for a long time and who you feel have a good idea of how the program works and why. You should all feel at the end of this meeting that you have some new ideas or new ways of looking at your program.

Use the following do's and don'ts:

Don't be intimidated by evaluators. (Or, don't act that way, even if you are.) Do ask questions, and keep asking questions until you are satisfied. If you need to, write out the questions before the meeting. If you get to the end of a first meeting and have more questions than answers, insist on a second meeting. Feel satisfied with the answers to your questions, or ask some more questions. Think about how you feel when your mother comes back from the doctor's. You and she have 14 questions and she doesn't have answers to any of them, because she never asked. But the doctor probably feels pretty good about the encounter (after all, he/she understands what he/she said, and the money is in the bank).

Don't be overwhelmed or diverted by discussions of methodologies (the "hows"). Do make the conversation concentrate on the "whats" until you are sure that the evaluation will focus on the proper program goals and your objectives for the evaluation. Then worry about the best way to do the study.

Don't be intimidated by jargon, about methodologies or anything else. Do ask what they mean when they say something you don't understand. Remember, you are proposing to pay these people money to help you. If they cannot explain what they mean in words you can understand, kick them out. They won't get better as time goes on.

If you are not happy, get divorced, or don't get married in the first place. Find another evaluator.

 

For State Coordinators

Not everything requires the same level of evaluation. Think of the data in the Subgrant Statistical Summary (SSS; see Chapter 5) as the basic evaluation information for your whole state. The minimal information that Congress requires is included on the SSS forms. You DO have to ensure that all subgrantees can and do complete the SSS forms. However, evaluative information for some (many) projects can be left at this level. Use special evaluation funding (e.g., special subgrants devoted to evaluation) to focus on specific types of activities, or specific outcomes, that you feel deserve more attention and will not be documented well unless you put more money into the effort.

Remember that you get what you ask for. If you write a Request for Proposals (RFP) or sign a contract that is extremely general and unfocused, you will have to live with whatever anyone wants to send you in response. This is especially true if responses to your RFPs and acceptance of contract deliverables are handled through standard state contracting procedures.

With respect to RFPs:

 

With respect to grant/contract provisions:

 

 

For Subgrantees

You will not have to make decisions about how many projects to evaluate, but you may have to make decisions about which activities of your own program you want to evaluate. The issues to consider in making this decision are the same as those outlined above for state coordinators.

You may be in the position of seeking to hire an evaluator for your own program and, ultimately, of signing a contract or consulting agreement with one. Here too, the comments above for state coordinators are pertinent for you.

The unique position you are in, relative to state STOP coordinators, is that they may be designing an evaluation without your involvement, but nevertheless expecting you to cooperate with its demands. We hope this will not happen, and have written the discussion about participatory research in Chapter 1 to make clear the advantages of involving those being evaluated in designing any evaluation that affects them.

However, if the STOP coordinator in your state appears poised to require you to report data that you do not think will accurately represent your program, it is time to try as many of the following ideas or options as necessary:

 

 

 


CHAPTER 4
USING EVALUATION INFORMATION

Evaluation is empowering. By participating in evaluation, you have had a hand in shaping the information through which someone can come to understand your program's purposes and accomplishments. You have also provided yourselves with a powerful tool for improving and expanding your program and its activities in fruitful ways. And you are very likely also to be in a position to influence the further development of supports for women victims of violence in your community. The benefits provided by a good evaluation can make all the effort seem worthwhile.

Once you have gone to the trouble to participate in evaluation activities, how can you make sure that you receive the benefits? The first, and obvious, way to use evaluation information is to show others what you have accomplished—that is, you would use the information to promote your program, and to avoid having your program's purposes and achievements misunderstood by various audiences. The second way to use evaluation information is to improve your program. The third way to use it is as a lever for stimulating your community to make changes. We discuss each of these three uses of evaluation results in this chapter. In addition, other chapters examine in greater depth the issues of using data for program improvements (Chapter 6) and using data for community change (Chapter 10).

 

Uses for Program Promotion

A good evaluation should help you promote your program. It should make your goals clear, and should produce information to show how you are moving toward achieving them. You can include this information in news stories to make sure your program is properly understood, use the information in grant proposals to get more money for your program, or use it to tell a variety of audiences about what you do, that you do it well, and that they would benefit from using your services if they ever need them.

Avoiding Misunderstandings of Your Program

When you are responsible for designing your own evaluation, you will rarely be in a situation where you feel the data collected do not adequately reflect your program's goals or accomplishments. But what if you are in the situation of having an evaluation imposed on you by a funder or by a higher level unit in your own organization, and you feel that the data requested will not tell the story of your program? What can you do then?

We hope this will not happen with the STOP grants and subgrants. Throughout this Guidebook we have stressed the idea of participatory evaluation—of having those being evaluated participate in decisions about evaluation design, methods, and what gets measured. In Chapter 3 we presented many specific aspects of how this might work, and what you, as part of a program being evaluated, might do if you feel an evaluation is NOT going to do justice to your program. Here we want to say a few words about what you can do if, after all that, you are still living with an evaluation requirement imposed from above that you feel does not capture the essence of your program.

Suppose, for example, that you run a counseling service and the only thing the formal evaluation wants to know about you is who your clients were. They are asking for the usual demographic information (age, race, number of children); but they also are asking for some types of data that you think reflect an underlying suspicion of women who are victims of violence (e.g., chemical dependency of the victim, number of times victimized, number of prior relationships in which victimized [if a battering case], or prior sexual behavior [if a sexual assault case]). Further, they are NOT asking anything about what the women have done or are doing to help themselves. They are NOT asking for any documentation of the array of services you have given to the women. Nor are they asking for any evidence that the women appreciated your service, changed their behavior because of your service, felt better because of your service, understood their rights better because of your service, understood their feelings better because of your service, or any other impact of your service.

What should you do? There is no EASY solution to this problem, but that does not mean there is NO solution. What you would need to do is to collect, on your own, the information you feel DOES properly reflect your program's goals (see Chapter 7 for how to measure these facets of your program). Once you have this information, be sure to present it alongside the formal evaluation every time those formal data get reported. Develop materials that show what your program really does, and how women really feel about it. Presenting this information augments any impressions created by the minimal victim profile data with the broader picture of your clients and their involvement with your services.

There are also several further steps you could try. First, even before you collect this additional data, you could try working with other programs in your position to develop a data collection protocol that you commit to use in common. Then there will be strength in your numbers, both methodologically (that is, you will have more data) and politically (that is, you will have more programs working together). Second, once you have your additional data, you should send the results to whoever imposed the oversimplified evaluation. Ask that they be examined and incorporated in any reports that are issued describing evaluation results. Finally, you could try to reopen negotiations with the overall evaluation/evaluator/funder to see whether, in the future, the evaluation can be expanded to include more appropriate indicators of program activities and accomplishments. This might be particularly appealing since you will have demonstrated that you can provide the relevant data.

Using Evaluation Information to Get Funding

To use evaluation information to support fund-raising efforts, you would (1) identify the primary mission of any funding source you want to approach; (2) select the data from your evaluation that shows that you are addressing this mission and doing so successfully; and (3) using these data, gear your presentation to the funding source's interests and purpose.

Suppose, however, that you think there are some things that have to be accomplished before the funding source's goals can be accomplished. Your program does some of these things, and you want a funding source to support your activities, even though you can't promise that you will reach their goals every time, or quickly. This is where your logic model (see Chapter 2) can stand you in good stead, assuming that you have thought it out well and that your evaluation has documented all the intermediate steps between what you do and some ultimate goals.

Take training as an example of a preliminary step, and "more convictions" as the ultimate goal. You do training for police officers, and cannot in any way guarantee "more convictions." But if you have done your logic model well, you will be able to document a number of steps that should, ultimately, produce more convictions. For instance, you would be able to show (if you collect the right data) that, compared to those who did not receive training, police receiving training are more thorough and careful in their collection of evidence, are more likely to keep the evidence secure and proof against challenge, provide better and more complete case documentation, testify more convincingly in court, and do everything with greater speed. Further, victims who interact with them feel they have been treated with courtesy and respect. Suppose also that you can show that more cases prepared by trained police go forward to the prosecutor's office than do cases prepared by untrained police.
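As a purely hypothetical illustration (the numbers below are invented, not data from any STOP site), comparing trained and untrained officers on one of these intermediate steps, the share of prepared cases that go forward to the prosecutor, is a simple rate calculation:

    # Hypothetical counts, for illustration only.
    trained_cases, trained_forwarded = 120, 78       # cases prepared by trained officers
    untrained_cases, untrained_forwarded = 130, 52   # cases prepared by untrained officers

    trained_rate = trained_forwarded / trained_cases          # 0.65
    untrained_rate = untrained_forwarded / untrained_cases    # 0.40
    print("Trained officers:   {:.0%} of cases went forward".format(trained_rate))
    print("Untrained officers: {:.0%} of cases went forward".format(untrained_rate))

Even this simple comparison documents an intermediate accomplishment that your funder can see, well before any change in conviction rates could appear.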

Assume for the moment that, all other things equal, better prepared cases are more likely to result in prosecution. You now have a good argument for why your funding source should support your training activities.

Suppose, however, that after several years of police training the court records show that there have not, in fact, been increases in convictions, and your funder wants to end its support for your activities. You can use evaluation techniques to learn where the problem lies. Have prosecutors been bringing more cases using the better prepared evidence, but judges and juries resist conviction? The answer may lie with combinations of prosecutor training in how to make better presentations of evidence, how to use expert witnesses, how to protect witnesses from damaging cross-questioning, etc.; judicial training; and ultimately in community education to affect jurors' attitudes. Or, has there not been an increase in cases because prosecutors are not acting on the work of the newly trained police? Then the answer may lie in prosecutor training, in developing a special prosecution unit, etc. In any event, having a well-thought-out logic model and evaluation data to support it will let you work with your funding sources to help them understand that your "piece of the pie" is still vital, but that more needs to be done in other sectors for the community to benefit from the good work the police are now doing.

Using Evaluation Information to Increase
Public Appreciation of Your Program

Program evaluation data can also be used in all sorts of promotional ways. You can include them in brochures, flyers, and annual reports that you distribute all over town. You can hold an "Open House" to which you invite reporters, representatives of other agencies that have contact with women victims of violence, representatives of agencies and organizations concerned in general with women's well-being, and so on. Create attractive posters showing your program in action, and include on each poster one or two sentences or statistics that tell what you have accomplished. You can develop good relations with local reporters, if you don't already have them. When they write stories about your program, be sure the stories include one or two "boxes" or "sidebars" that use evaluation data to show your successes.

Uses for Program Development 1

Perhaps the most important use for evaluation results is program improvement and development. Evaluation data are particularly useful for helping you look at your program and see what is going wonderfully, what is okay but could be improved, and what cries out for fixing (most programs have some of each). Think about evaluation as an ongoing opportunity for reflection, and for comparing your performance against what you hope your program will achieve.

From the beginning, getting ready for evaluation helps you think about your program and how it is organized. You must be able to sit down and describe to an outsider what you are trying to accomplish (your goals), how the activities you perform every day increase the chances that you will reach your goals, and what internal and external factors might make it easier or more difficult for you to reach your goals. You cannot do this well on the spur of the moment. Many programs set up a retreat or off-site meeting to do this, with an evaluator and possibly also with a facilitator. This gives program staff the luxury of sitting back and thinking about how their program works; usually no one has done this for many years, if ever. In doing this exercise, many programs identify strengths of which they are proud. However, they usually also identify weaknesses, areas of misdirected energy, issues of whether current time allocations are the best use of resources for accomplishing their goals, etc. In short, the opportunity for reflection afforded by preparing for an evaluation can stimulate program improvements even before the evaluation formally begins.

A premise on which a good evaluator operates should be "no surprises." You don't want to get to the end of an evaluation, having heard nothing about findings for its duration, and be hit all of a sudden with a thick report. Even if the findings are overwhelmingly good, waiting until the end to learn them gives you very little opportunity to absorb them and figure out what they mean for potential changes in program operations. If there are negative findings, or findings about areas that need improvement, it is a lot more useful to learn about them as they emerge so you can discuss them and decide what to do about them. Getting a lot of negatives dumped on you at one time is discouraging and not likely to be productive; it also does not make people feel good about continuing to do evaluations.

You should set up regular feedback sessions with your evaluator to discuss evolving findings related to program processes and activities, as well as to get the perceptions and feelings of the evaluator as she or he spends time in your program and with your clients. This interaction can help both the program and the evaluator. The program staff can help the evaluator interpret the meaning of emerging findings and offer suggestions for how to gather additional information relevant to developing a full understanding of anything interesting. At the same time, program staff can use the feedback to think about whether the program needs to make some changes in the way it operates, either to enhance good performance or compensate for areas where performance may need to improve.

Another source of feedback on program operations is performance monitoring data. If you have set up data collection on program activities, services, and clients so your data system produces monthly or quarterly reports, you could present them as part of regular staff meetings and invite discussion about what they mean and how your program should respond to them. This type of open and shared discussion can help bring all staff back to an awareness of overall program goals and how daily behavior is or is not contributing to them. It is all too easy for everyone in a busy program to get so overwhelmed with daily coping that they never have this type of discussion.
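A minimal sketch of the kind of routine tabulation such a data system might produce appears below, written in Python and assuming a hypothetical service log kept as a CSV file with one row per service contact and columns named "date," "client_id," and "service_type." It is only one possible way to produce the quarterly figures described above, not a required or recommended system.

    import csv
    from collections import Counter

    def quarterly_service_summary(path):
        """Count clients served and service contacts per quarter from a CSV log.

        Assumes (hypothetically) one row per contact with columns
        'date' (YYYY-MM-DD), 'client_id', and 'service_type'.
        """
        contacts = Counter()
        clients = {}
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                year, month = row["date"].split("-")[:2]
                quarter = year + " Q" + str((int(month) - 1) // 3 + 1)
                contacts[(quarter, row["service_type"])] += 1
                clients.setdefault(quarter, set()).add(row["client_id"])
        for quarter in sorted(clients):
            print(quarter + ": " + str(len(clients[quarter])) + " clients served")
            for (q, service), n in sorted(contacts.items()):
                if q == quarter:
                    print("    " + service + ": " + str(n) + " contacts")

    # Example use (hypothetical file name):
    # quarterly_service_summary("service_log.csv")

A one-page printout of this kind, brought to a staff meeting each quarter, is often enough to start the discussion of goals and daily practice described above.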

Regular feedback sessions from the evaluator and from a data system throughout the evaluation are the ideal. However, even if you get the results of evaluation all at once through a report from an outside evaluator, you should still set aside time for the whole staff to review them, absorb their meaning, and make decisions about how the program might want to change in light of the results. A retreat is often a helpful way to accomplish this. The evaluator should be present for part of the time, to help interpret and understand the results, and perhaps might be absent for part of the time while staff discuss what they have heard and what they might want to do about it.

Several examples may make these potential benefits more concrete. Think of the following:

 

A program runs a court advocacy project, but does not see much change in women's ability to obtain protection orders. An evaluator is hired to observe activity in the courtrooms served by the advocacy project. The evaluator observes several things: many women coming into court enter through a door far away from where the advocate is stationed; no information is available to let the women know the advocate is there to help them; the judges will not let the advocate speak to the women in the courtroom itself. The program uses this information to change its publicity tactics (more posters, flyers, and a videotape in several languages showing how to fill out forms); change the sitting position of the advocate; and work with court personnel to refer women to the advocate.

Performance monitoring data indicate that a good deal of the time of highly trained counselors is taken up answering phone calls and making referrals for women who are not appropriate to the program (but who do have other needs). Program staff discuss this finding and rearrange the phone-answering tasks so that less highly-trained people answer all phone calls, freeing up counselor time for direct services to counseling clients. This results in more counseling hours becoming available to women in the community. To help the non-professional staff feel comfortable taking all the phone calls, counselors offer training in phone interviewing techniques and also develop a detailed and easily used guide to community resources to help the non-professional staff offer appropriate referrals.

A computerized data system puts information about a batterer's prior civil and criminal record (other DV, other violence, protection orders, plus other family-related issues such as divorce, property settlement and child custody) at a judge's fingertips. All that is necessary is to enter a name and push a button on the computer. But sentencing practices do not change. Process evaluation reveals that most judges never push the button. The evaluation also learns that some judges do not do it because they do not trust computers; others do not do it because they believe they should treat each case as a separate offense; still others do not do it because they never received training in how to use the system. Extensive education and computer training with both judges (by other judges) and with the court personnel who do pre-trial and pre-sentence investigations is able to change this picture substantially.

 

Uses for Community Development

The example described earlier of the evaluation of police training demonstrates one of the ways that evaluation data can be used for community development. If the results of training are not as expected and further investigation documents one or more "missing links" in your community's structure for helping women victims of violence, you have the evidence you need for trying to bring the remaining agencies on board.

Even in situations where all relevant agencies are "at the table," ongoing evaluation results can be used to improve community coordination and perhaps develop new and more appropriate services. Suppose you have a council, task force, or coordinating body in your community, and it is looking for information that will help to prioritize new projects. Feedback from victims, systematically collected through a common questionnaire regardless of which agency or service they use, could be one way to pinpoint what needs doing the most. Polls of staff in member agencies about where the system breaks down, what types of help they need from other agencies so they can do their job better, etc., are another source of useful information for establishing priorities. Cross-training sessions are also a way to begin community development. In these sessions, staff of each agency help staff from other agencies learn the agency's primary mission, purposes, ways of functioning, issues with other agencies, and needs for cooperation from other agencies. Everyone has a turn, and everyone stands to gain.

One could even think of developing regular forums for sharing agency news, new programs, findings from research that might interest people in other agencies, and so on. Some communities have monthly lunch meetings attended by as many as 50 or 60 people from every agency in town whose work involves women victims of violence. After eating (an important point of attraction for getting them all there), there is a short presentation (not more than 30 minutes, including discussion). Sometimes one agency is "on," to bring the others up to date on things happening in that agency of relevance to women victims of violence, sometimes to share data, sometimes to propose new activities. Other presentations may be about a particular problem or issue, such as having someone come to explain the implications of a new law, or deciding that everyone needs to be present at the beginning of a discussion of women whose cases fall through the cracks still left in the system. Once or twice a year, these meetings can be used to present evaluation data and discuss their implications.

Collecting Data over Time

Whatever route you take to using data for community development, having the same (or better) data year after year can make an important contribution. When your community begins its activities, data about current functioning can give you a baseline against which to track your progress. Getting feedback every year (or more often if possible) about how you are doing on major community-wide goals can be a source of renewed commitment, as well as a way to reassess where you are going and what might help you get there faster. In order for this exercise to feel good, it will have to include feedback on intermediate goals so there is some hope that there will be accomplishments to report. In Chapter 2, when we talked about logic models, we emphasized the importance of adopting a realistic time frame for achieving your goals, and of including many steps along the way so you could track progress and not get discouraged. The same applies to tracking community-wide progress, only more so, as it is harder to change a whole community than it is to change one agency.

 


Notes for this section

 

Chapter 4
1. See also Chapter 6, sections on process evaluation and performance monitoring, for more discussion of using evaluation data to help you improve your program.

