Improving Evaluation of Anticrime Programs


David Weisburd. This article is based on a workshop, held in September, in which participants presented and discussed examples of evaluation-related studies that represent the methods and challenges associated with research at three levels: interventions directed toward individuals; interventions in neighborhoods, schools, prisons, or communities; and interventions at a broad policy level.


The article, and the report on which it is based, is organized around five questions that require thoughtful analysis in the development of any evaluation plan: What questions should the evaluation address? When is it appropriate to conduct an impact evaluation? How should an impact evaluation be designed? How should the evaluation be implemented? What organizational infrastructure and procedures support high-quality evaluation?

The authors highlight major considerations in developing and implementing evaluation plans for criminal justice programs and make recommendations for improving government-funded evaluation studies.

Key words: anti-crime programs, evaluation, experiments, impact evaluation, observational methods, quasi-experiments, randomized trials, research methods

Introduction

Effective guidance of criminal justice policy and practice requires evidence about the effects of the policies and practices on the populations and conditions they are intended to affect. This summary article is based on the National Research Council report of the Committee on Improving Evaluation of Anti-Crime Programs.

The authors of that report are Mark Lipsey, Ph.D., and colleagues. The authors of this summary have remained as faithful as possible to the longer, original report. However, any differences that appear in this article are attributable to the authors alone and not to the National Research Council. The role of evaluation research is to provide that evidence and to do so in a manner that is accessible and informative to policy makers.

The Maryland Report (Sherman et al.) is one prominent example of efforts to synthesize this evidence. The Crime and Justice Group of the Campbell Collaboration has embarked on an ambitious effort to develop systematic reviews of research on the effectiveness of crime and justice programs. The Office of Juvenile Justice and Delinquency Prevention (OJJDP) Blueprints for Violence Prevention project identifies programs whose effectiveness is demonstrated by evaluation research, and other lists of programs alleged to be effective on the basis of research have proliferated. These efforts reflect recognition that knowledge of the ability of various programs to reduce crime or protect potential victims allows resources to be allocated in ways that support effective programs and efficiently promote these outcomes.

Fulfilling this function, in turn, requires that evaluation research be designed and implemented in a manner that provides valid and useful results of sufficient quality to be relied upon by policy makers.

Criticism of methods

Methodological issues are at the heart of what has arguably been the most influential stimulus for attention to the current state of evaluation research in criminal justice.

A series of reports by the U.S. General Accounting Office (GAO a, b, c) has been sharply critical of the evaluation studies conducted under the auspices of the Department of Justice. The GAO review identified a number of problems that highlight the major challenges that must be met in an impact evaluation.


In the context of these concerns about evaluation methods and quality, the National Institute of Justice asked the Committee on Law and Justice of the National Research Council to conduct a workshop on improving the evaluation of criminal justice programs and to follow up with a report that extracts guidance for effective evaluation practices from those proceedings.

The workshop participants presented and discussed examples of evaluation-related studies that represented the methods and challenges associated with research at three levels: interventions directed toward individuals; interventions in neighborhoods, schools, prisons, or communities; and interventions at a broad policy level. This report highlights major considerations in developing and implementing evaluation plans for criminal justice programs. It is organized around a series of questions that require thoughtful analysis in the development of any evaluation plan: What questions should the evaluation address? When is it appropriate to conduct an impact evaluation? How should an impact evaluation be designed? How should the evaluation be implemented?

What organizational infrastructure and procedures support high-quality evaluations?

What questions should the evaluation address?

The evaluation of criminal justice programs is often taken to mean impact evaluation: assessing the effects of the program on its intended outcomes. Producing beneficial effects and avoiding harmful ones is the central purpose of most programs and the reason for investing resources in them.


Most of this report is, therefore, focused on impact evaluation. It does not follow, however, that every evaluation should automatically focus on impact questions (Rossi et al.). Such questions may be premature for some programs, or they may be inappropriate in the context of issues with greater political salience or more relevance to the concerns of key audiences for the evaluation.

In particular, questions about aspects of program performance other than its impact may be important to answer in their own right or in conjunction with impact questions. These questions include the following:

1. Questions about the need for the program.


For a program to reduce gang-related crime, for instance, it is useful to know how much crime is gang-related, what crimes, in what neighborhoods, and by which gangs.

2. Questions about program conceptualization or design. One might ask, for instance, whether it is a sound assumption that prison visitation programs for juvenile offenders, such as Scared Straight, will have a deterrent effect for impressionable anti-social adolescents (Petrosino et al.).

3. Questions about program implementation and service delivery. These kinds of process evaluations determine how well the program is operating and whether it has a reasonable chance of producing the intended effects.

4. Questions about program cost and efficiency.


Cost-benefit and cost-effectiveness assessments are especially informative, however, when they build on the findings of impact evaluation to examine the cost required to attain whatever effects the program produces. Because of the complex and difficult-to-control context of criminal justice interventions, programs may be implemented in such weak form that significant effects cannot be expected.
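The arithmetic behind building cost-effectiveness on impact findings can be sketched in a few lines. The program names, costs, and effect estimates below are entirely hypothetical, invented only to illustrate how cost per unit of effect is computed once an impact evaluation has supplied an effect estimate.

```python
# Hypothetical cost-effectiveness comparison. Program names, costs, and
# effect estimates are invented for illustration, not drawn from any study.

programs = {
    # name: (cost per participant in dollars,
    #        recidivism reduction in percentage points,
    #        as estimated by an impact evaluation)
    "program_a": (4_000, 2.0),
    "program_b": (1_500, 6.0),
}

# Cost per recidivism event averted: per 100 participants, a reduction of
# r percentage points averts r events, at a total cost of 100 * unit cost.
cost_per_event_averted = {
    name: (cost * 100) / reduction_pts
    for name, (cost, reduction_pts) in programs.items()
}

for name, cost in cost_per_event_averted.items():
    print(f"{name}: ${cost:,.0f} per recidivism event averted")
```

Note that the cheaper-per-participant program is not automatically the more cost-effective one; the comparison depends on the impact estimate, which is why these assessments are most informative after a credible impact evaluation.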

In addition, information about the nature of the problem addressed and the program concept for bringing about change is important to provide an explanatory context in which to interpret evaluation results. Different responses for improvement in outcomes are required for different problems, for example, weak implementation versus a seriously flawed program concept or approach that could not be expected to produce the intended effects. We need to know not only whether the hoped-for effects occurred but also why they did or did not occur.

It is important, therefore, to develop the questions the evaluation is to answer and to ensure they are appropriate to the program circumstances and audience for evaluation. This form of evaluation will be tailored to the program being evaluated and will show little commonality across programs that are not replicates of each other.

Data collected on recidivism rates may describe the post-program status of offenders and may show higher or lower rates than expected for the population being treated, but they do not reveal what change in recidivism results from the program intervention that would not have occurred otherwise. Impact evaluations, in turn, are oriented toward determining whether a program produces the intended outcomes, for instance, reduced recidivism among treated offenders, decreased stress for police officers, less trauma for victims, lower crime rates, and the like.
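The counterfactual logic in the preceding paragraph can be made concrete with a small simulation (all numbers are hypothetical, not data from any study): even when the treated group's recidivism rate is measured exactly, the program's effect only becomes visible by comparison with a randomized control group that estimates what would have happened otherwise.

```python
import random

random.seed(42)

# Hypothetical parameters: 50% baseline recidivism; the program reduces
# an individual's recidivism probability by 10 percentage points.
BASELINE = 0.50
TRUE_EFFECT = -0.10

def recidivates(treated: bool) -> bool:
    """Simulate one offender's outcome (True = recidivated)."""
    p = BASELINE + (TRUE_EFFECT if treated else 0.0)
    return random.random() < p

# Randomly assign 6,000 offenders: half to the program, half to control.
treated = [recidivates(True) for _ in range(3000)]
control = [recidivates(False) for _ in range(3000)]

def rate(group):
    return sum(group) / len(group)

# The treated rate alone only describes post-program status; the
# treated-minus-control difference estimates the program's impact.
estimated_effect = rate(treated) - rate(control)
print(f"treated recidivism rate: {rate(treated):.3f}")
print(f"control recidivism rate: {rate(control):.3f}")
print(f"estimated program effect: {estimated_effect:+.3f}")
```

With random assignment, the difference in rates recovers the built-in 10-point reduction (up to sampling error), whereas the treated rate by itself could not distinguish a program effect from a low-risk population.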

The programs that are evaluated may be demonstration programs, such as the early forms of the Multidimensional Treatment Foster Care Program, that are not widely implemented and which may be mounted or supervised by researchers to find out if they work (often called efficacy studies). Or they may involve programs already rather widely used in practice, such as drug courts, that operate with representative personnel, training, client selection, and the like (often called effectiveness studies).

For present purposes, we will focus on broader considerations that apply across the range of criminal justice impact evaluations. Moreover, in some instances, it may be necessary to have the answers to some questions before asking others. For instance, with relatively new programs, it may be important to establish that the program has reached an adequate level of implementation before one embarks on an outcome evaluation.


A community policing program, for instance, could require changes in well-established practices that may occur slowly or not at all. In addition, producing informative, useful evaluation results may require a series of evaluation studies rather than a single study. Finally, the nature of a program and its circumstances may be such that evaluation is not feasible for a particular program or the questions that can be answered may not be useful to any identifiable audience. Unfortunately, evaluation is often commissioned and well underway before these conditions are discovered.

The technique of evaluability assessment (Wholey) was developed as a diagnostic procedure that evaluators could use to find out if a program was amenable to evaluation and, if so, what form of evaluation would provide the most useful information to the intended audience. The diversity of potential evaluation questions and approaches that may be applied to a program allows much room for variation from one evaluation team to another.

Agencies that commission and sponsor evaluations will experience this variation if the specifications for the evaluations they fund are not spelled out rather exactly. Such mechanisms as Requests for Proposals (RFPs) and scope of work statements in contracts are often the initial forms of communication between evaluation sponsors and evaluators about the questions the evaluation will answer and the form it will take.


Sponsors who clearly specify the questions of interest and the form in which they expect the answers are more likely to obtain the information they want from an evaluation. However, it is important for the evaluation plan to be well specified and also to include provisions for adaptation and renegotiation when needed. Development of a well-specified evaluation solicitation and plan shifts much of the burden for identifying the focal evaluation questions and the form of useful answers to the evaluation sponsor.

More often, in contrast, the sponsor provides only general guidelines and relies on the applicants to shape the specific questions and approach. This may involve use of outside expertise for advice, including researchers, practitioners, and policy makers, or the capability to conduct or commission preliminary studies to provide input to the process.

When is an impact evaluation appropriate?

Of the many evaluation questions that might be asked for any criminal justice program, the one that is generally of most interest to policy makers is, "Does it work?"

Impact evaluation is inherently difficult and depends upon specialized research designs, data collection, and statistical analyses. It simply cannot be carried out effectively unless certain minimum conditions and resources are available, no matter how skilled the researchers or insistent the policy makers, and, even under otherwise favorable circumstances, it is rarely possible to obtain credible answers about the effects of a criminal justice program within a short time period or at low cost.

This means that, to have a reasonable probability of success, impact evaluations should be launched only with careful planning and firm indications that the prerequisite conditions are in place.


There are no hard and fast criteria for determining which programs are most appropriate for impact evaluation. The committee believes that two important criteria are the practical or political significance of the program and how amenable it is to evaluation. Significant programs typically emerge through one of two mechanisms. One is the evolution of innovative programs that have great potential in the eyes of the policy community. Such programs may be developed by researchers or practitioners and fielded rather narrowly.

The practice of arresting perpetrators of domestic violence when police were called to the scene began in this fashion (Sherman). With the second mechanism, programs spring into broad acceptance as a result of grass-roots enthusiasm but may lack an empirical or theoretical underpinning.

Project DARE, with its use of police officers to provide drug prevention education in schools, followed that path. Programs stemming from both sources are potentially significant, though for different reasons, and it would be shortsighted to focus on one to the exclusion of the other. A useful conceptual framework from health intervention research for appraising the significance of an intervention, and whether it is a candidate for impact evaluation, is summarized in the acronym RE-AIM: Reach, Effectiveness, Adoption, Implementation, and Maintenance (Glasgow et al.).

These elements can be thought of as a chain, with the potential value of an evaluation constrained by the weakest link in that chain. We will consider these elements in order. Reach is the scope of the population that could potentially benefit from the intervention if it proves effective. An intervention may have practical significance to a large population or to a specialized smaller one. Drug courts, for example, have great reach because of the high prevalence of substance abuse among offenders. It is the job of impact evaluation to determine effectiveness, which makes this a difficult criterion to apply when selecting programs for impact evaluation.

For some programs, there may be preliminary evidence of efficacy or effectiveness that can inform judgment. Consistency with well-established theory and the clinical judgment of experienced practitioners may also be useful touchstones. Adoption is the potential market for a program.


Adoption is a complex constellation of ideology, politics, and bureaucratic preferences that is influenced by intellectual fashion and larger social forces as well as rational assessment of the utility of a program. The widespread adoption of boot camps during the 1990s, for instance, indicated that this type of paramilitary program had considerable political and social appeal and was compatible with the program concepts held by criminal justice practitioners. Some programs are more difficult to implement than others, and, for some, it may be more difficult to sustain the quality of the service delivery in ongoing practice.
