Program evaluation reporting procedures
The Joint Committee, a nonprofit coalition of major professional organizations concerned with the quality of program evaluations, identified four major categories of standards — propriety, utility, feasibility, and accuracy — to consider when conducting a program evaluation. Propriety standards focus on ensuring that an evaluation will be conducted legally, ethically, and with regard for promoting the welfare of those involved in or affected by the program evaluation.

In addition to the rights of human subjects that are the concern of institutional review boards, propriety standards promote a service orientation.

Utility standards are intended to ensure that the evaluation will meet the information needs of intended users.

Involving stakeholders, using credible evaluation methods, asking pertinent questions, including stakeholder perspectives, and providing clear and timely evaluation reports represent attention to utility standards. The scope of the information collected should ensure that the data provide stakeholders with sufficient information to make decisions regarding the program. Accuracy standards are intended to ensure that evaluation reports use valid methods for evaluation and are transparent in the description of those methods.

Meeting accuracy standards might, for example, include using mixed methods. Evaluation can be classified by intended use into several types, including design, process, outcome, impact, and cost-effectiveness evaluations.

Good evaluation practice includes engaging stakeholders, so that all partners invested in what will be learned from the evaluation become involved early in the evaluation process.

The purpose of the program evaluation determines which type of evaluation is needed. A design evaluation is conducted early in the planning stages or implementation of a program. It helps to define the scope of a program or project and to identify appropriate goals and objectives. Design evaluations can also be used to pre-test ideas and strategies. A process evaluation assesses whether a program or process is implemented as designed or operating as intended and identifies opportunities for improvement.

Process evaluations often begin with an analysis of how a program currently operates. Process evaluations may also assess whether program activities and outputs conform to statutory and regulatory requirements, EPA policies, program design or customer expectations.

Outcome evaluations examine the results of a program, intended or unintended, to determine why there are differences between the outcomes and the program's stated goals and objectives. Outcome evaluations sometimes examine program processes and activities to better understand how outcomes are achieved and how quality and productivity could be improved. An impact evaluation is a subset of an outcome evaluation. It assesses the causal links between program activities and outcomes.

This is achieved by comparing the observed outcomes with an estimate of what would have happened if the program had not existed. Cost-effectiveness evaluations identify program benefits, outputs, or outcomes and compare them with the internal and external costs of the program. Performance measurement is a way to continuously monitor and report a program's progress and accomplishments, using pre-selected performance measures.
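To make the arithmetic behind these two comparisons concrete, here is a minimal sketch. All figures and names are invented for illustration only: the impact estimate is simply the observed outcome minus the counterfactual estimate, and the cost-effectiveness figure divides program cost by that impact.

```python
# Illustrative sketch only: the names and figures below are hypothetical,
# not drawn from any real program evaluation.

def impact_estimate(observed_outcome: float, counterfactual_outcome: float) -> float:
    """Impact = observed outcome minus the estimate of what would have
    happened if the program had not existed (the counterfactual comparison
    described above)."""
    return observed_outcome - counterfactual_outcome


def cost_per_unit_of_outcome(total_cost: float, impact: float) -> float:
    """A simple cost-effectiveness figure: program cost per unit of
    outcome attributable to the program."""
    return total_cost / impact


# Hypothetical job-training program: 480 participants employed afterward,
# versus an estimated 400 had the program not existed, at a cost of $200,000.
impact = impact_estimate(480, 400)                 # 80 additional placements
cost = cost_per_unit_of_outcome(200_000, impact)   # $2,500 per placement
print(impact, cost)
```

In practice the counterfactual estimate comes from a comparison group or statistical model, not a single number; the sketch only shows how the pieces combine once those estimates exist.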

By establishing program measures, offices can gauge whether their program is meeting their goals and objectives. Performance measures help programs understand "what" level of performance is achieved. Measurement is essential to making cost-effective decisions.

A program sets performance measures as a series of goals to meet over time. Example reporting formats include data dashboards, PowerPoint presentations, one-page summaries, and more formal written reports.
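The comparison a dashboard or one-page summary performs for each measure can be sketched as follows; the measure names, targets, and values are hypothetical, and real measures would of course come from the program's own data systems.

```python
# Hypothetical performance measures with targets; names and values are
# illustrative only, not drawn from any actual program.
measures = [
    {"name": "clients served per quarter", "target": 500, "actual": 540},
    {"name": "average wait time (days)", "target": 14, "actual": 18,
     "lower_is_better": True},
    {"name": "follow-up completion rate", "target": 0.80, "actual": 0.76},
]


def met(measure: dict) -> bool:
    """A measure is met when the actual value reaches the target
    (or stays at or under it for lower-is-better measures)."""
    if measure.get("lower_is_better"):
        return measure["actual"] <= measure["target"]
    return measure["actual"] >= measure["target"]


# One line per measure, as might appear on a simple status summary.
for m in measures:
    print(f'{m["name"]}: {"met" if met(m) else "not met"}')
```

Tracking these checks over successive reporting periods is what turns a set of targets into the continuous monitoring described above.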

Within the evaluation field there has been a strong emphasis on the use of visualizations when sharing findings in evaluation reports. There are many ways to disseminate evaluation results beyond the project team. This might include annual reports, program brochures, social media, listservs, or policy briefs. Looking for more information about evaluation reporting and dissemination?

Check out the resources below.

Reporting and Dissemination: Building in Dissemination from the Start. The chapter focuses on reporting summative evaluations, connecting with key stakeholders, and strategies for presenting and communicating results.

This guide from the Centers for Disease Control and Prevention focuses on evaluation use through evaluation reporting. The guide addresses the following topics: key considerations for effectively reporting evaluation findings, essential elements of evaluation reporting, and the importance of dissemination.

Evaluation Report Checklist. The Evaluation Report Checklist is a useful tool to guide discussions between evaluators and their clients regarding the preferred contents of evaluation reports. It can also serve as a checklist for evaluators as they write evaluation reports.

Checklist for Program Evaluation Report Content. It is intended to serve as a flexible guide for determining an evaluation report's content.

Reporting with an Evaluator Audience in Mind.

Potent Presentations. The Potent Presentations Initiative, sponsored by the American Evaluation Association, has the explicit purpose of helping evaluators improve their presentation skills, both at evaluation conferences and in their individual evaluation practice.


