CDC 1999. Framework for Program Evaluation in Public Health
A Framework for Program Evaluation

This report was prepared by the CDC Evaluation Working Group, with contributions from staff throughout CDC's offices, centers, and programs, from state and local health officials, and from external consultants and contributors.

Effective program evaluation is a systematic way to improve and account for public health actions by involving procedures that are useful, feasible, ethical, and accurate.

The framework guides public health professionals in their use of program evaluation. It is a practical, nonprescriptive tool, designed to summarize and organize essential elements of program evaluation. The framework comprises steps in program evaluation practice and standards for effective program evaluation. Adhering to the steps and standards of this framework will allow an understanding of each program's context and will improve how program evaluations are conceived and conducted.

Furthermore, the framework encourages an approach to evaluation that is integrated with routine program operations. The emphasis is on practical, ongoing evaluation strategies that involve all program stakeholders, not just evaluation experts. Understanding and applying the elements of this framework can be a driving force for planning effective public health strategies, improving existing programs, and demonstrating the results of resource investments.

Program evaluation is an essential organizational practice in public health (1); however, it is not practiced consistently across program areas, nor is it sufficiently well-integrated into the day-to-day management of most programs. Program evaluation is also necessary for fulfilling CDC's operating principles for guiding public health activities, which include (a) using science as a basis for decision-making and public health action; (b) expanding the quest for social equity through public health action; (c) performing effectively as a service agency; (d) making efforts outcome-oriented; and (e) being accountable (2).

These operating principles imply several ways to improve how public health activities are planned and managed. They underscore the need for programs to develop clear plans, inclusive partnerships, and feedback systems that allow learning and ongoing improvement to occur.

One way to ensure that new and existing programs honor these principles is for each program to conduct routine, practical evaluations that provide information for management and improve program effectiveness.

This report presents a framework for understanding program evaluation and facilitating integration of evaluation throughout the public health system. The purposes of this report are to summarize the essential elements of program evaluation, describe a framework for conducting effective evaluations, and clarify the steps and standards that such evaluations should follow. Evaluation has been defined as the systematic investigation of the merit, worth, or significance of an object (3,4). During the past three decades, the practice of evaluation has evolved as a discipline, with new definitions, methods, approaches, and applications to diverse subjects and settings. Despite these refinements, a basic organizational framework for program evaluation in public health practice had not been developed.

In May, the CDC Director and executive staff recognized the need for such a framework and for combining evaluation with program management. They also emphasized the need for evaluation studies that demonstrate the relationship between program activities and prevention effectiveness.

CDC convened an Evaluation Working Group, charged with developing a framework that summarizes and organizes the basic elements of program evaluation.

The Evaluation Working Group, with representatives from throughout CDC and in collaboration with state and local health officials, sought input from eight reference groups during its year-long information-gathering phase; approximately 90 representatives from these groups participated. In addition, the working group conducted interviews with additional persons, reviewed published and unpublished evaluation reports, consulted with stakeholders of various programs to apply the framework, and maintained a website to disseminate documents and receive comments.

These information-sharing strategies reached a broad audience of public health professionals and provided the working group numerous opportunities for testing and refining the framework with public health practitioners. Throughout this report, the term program is used to describe the object of evaluation, which could be any organized public health action. This definition is deliberately broad because the framework can be applied to almost any organized public health activity, including direct service interventions, community mobilization efforts, research initiatives, surveillance systems, policy development activities, outbreak investigations, laboratory diagnostics, communication campaigns, infrastructure-building projects, training and educational services, and administrative systems.

The additional terms defined in this report were chosen to establish a common evaluation vocabulary for public health professionals.

Evaluation can be tied to routine program operations when the emphasis is on practical, ongoing evaluation that involves all program staff and stakeholders, not just evaluation experts. The practice of evaluation complements program management by gathering necessary information for improving and accounting for program effectiveness.

Public health professionals routinely have used evaluation processes when answering questions from concerned persons, consulting partners, making judgments based on feedback, and refining program operations (9). These evaluation processes, though informal, are adequate for ongoing program assessment to guide small changes in program functions and objectives.

However, when the stakes of potential decisions or program changes increase, more explicit and systematic evaluation processes become necessary. Questions regarding values, in contrast with those regarding facts, generally involve three interrelated issues: merit (i.e., quality), worth (i.e., cost-effectiveness), and significance (i.e., importance). If a program is judged to be of merit, other questions might arise regarding whether the program is worth its cost. Also, questions can arise regarding whether even valuable programs contribute important differences.

Assigning value and making judgments regarding a program on the basis of evidence requires answering questions about what will be evaluated, what criteria will be used to judge program performance, what evidence will be gathered, what conclusions that evidence justifies, and how the lessons learned will be used (3,4,11). These questions should be addressed at the beginning of a program and revisited throughout its implementation.

The framework described in this report provides a systematic approach for answering these questions. The recommended framework was developed to guide public health professionals in using program evaluation. It is a practical, nonprescriptive tool, designed to summarize and organize the essential elements of program evaluation.

The framework comprises steps in evaluation practice and standards for effective evaluation (Figure 1). The framework is composed of six steps that must be taken in any evaluation. They are starting points for tailoring an evaluation to a particular public health effort at a particular time. Because the steps are all interdependent, they might be encountered in a nonlinear sequence; however, an order exists for fulfilling each -- earlier steps provide the foundation for subsequent progress.

Thus, decisions regarding how to execute a step are iterative and should not be finalized until previous steps have been thoroughly addressed. The steps are as follows:

Step 1: Engage stakeholders.
Step 2: Describe the program.
Step 3: Focus the evaluation design.
Step 4: Gather credible evidence.
Step 5: Justify conclusions.
Step 6: Ensure use and share lessons learned.

Adhering to these six steps will facilitate an understanding of each program's context and will improve how evaluations are conceived and conducted. The second element of the framework is a set of 30 standards for assessing the quality of evaluation activities, organized into the following four groups: utility, feasibility, propriety, and accuracy. The remainder of this report discusses each step, its subpoints, and the standards that govern effective program evaluation (Box 1).
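For teams that track their evaluation planning in software, the six steps and four standard groups can be written down as plain data and used as a working checklist. The sketch below is only an illustration of that idea; the class and function names are invented for this example and are not part of the framework itself.

```python
from dataclasses import dataclass, field

# The six steps and the four groups of standards, as named in the framework.
FRAMEWORK_STEPS = [
    "Engage stakeholders",
    "Describe the program",
    "Focus the evaluation design",
    "Gather credible evidence",
    "Justify conclusions",
    "Ensure use and share lessons learned",
]
STANDARD_GROUPS = ("utility", "feasibility", "propriety", "accuracy")

@dataclass
class StepRecord:
    """Planning notes for one step, plus which standard groups were reviewed."""
    step: str
    notes: str = ""
    standards_reviewed: set = field(default_factory=set)

def new_plan() -> list:
    """One record per framework step, in order."""
    return [StepRecord(step) for step in FRAMEWORK_STEPS]

def outstanding_standards(record: StepRecord) -> list:
    """Standard groups not yet considered for this step."""
    return [g for g in STANDARD_GROUPS if g not in record.standards_reviewed]

if __name__ == "__main__":
    plan = new_plan()
    plan[0].standards_reviewed.update({"utility", "propriety"})
    for record in plan:
        print(f"{record.step}: still to review {outstanding_standards(record)}")
```

Because the steps are interdependent, a structure like this is best treated as a running log rather than a one-pass checklist.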

The evaluation cycle begins by engaging stakeholders (i.e., the persons or organizations having an investment in what will be learned from an evaluation and what will be done with the knowledge). Public health work involves partnerships; therefore, any assessment of a public health program requires considering the value systems of the partners.

Stakeholders must be engaged in the inquiry to ensure that their perspectives are understood. When stakeholders are not engaged, an evaluation might not address important elements of a program's objectives, operations, and outcomes.

Therefore, evaluation findings might be ignored, criticized, or resisted because the evaluation did not address the stakeholders' concerns or values. After becoming involved, stakeholders help to execute the other steps. Identifying and engaging the following three principal groups of stakeholders is critical: those involved in program operations, those served or affected by the program, and the primary users of the evaluation. Those Involved in Program Operations. Persons or organizations involved in program operations have a stake in how evaluation activities are conducted because the program might be altered as a result of what is learned.

Although staff, funding officials, and partners work together on a program, they are not necessarily a single interest group. Subgroups might hold different perspectives and follow alternative agendas; furthermore, because these stakeholders have a professional role in the program, they might perceive program evaluation as an effort to judge them personally.

Program evaluation is related to, but must be distinguished from, personnel evaluation, which operates under different standards. Those Served or Affected by the Program. Persons or organizations affected by the program, either directly or indirectly, also have a stake in how the evaluation is conducted and in what is learned. Although engaging supporters of a program is natural, individuals who are openly skeptical of or antagonistic toward the program also might be important stakeholders to engage.

Opposition to a program might stem from differing values regarding what change is needed or how to achieve it. Opening an evaluation to opposing perspectives and enlisting the help of program opponents in the inquiry might be prudent because these efforts can strengthen the evaluation's credibility.

Primary Users of the Evaluation. Primary users of the evaluation are the specific persons who are in a position to do or decide something regarding the program. In practice, primary users will be a subset of all stakeholders identified. A successful evaluation will designate primary users early in its development and maintain frequent interaction with them so that the evaluation addresses their values and satisfies their unique information needs 7. The scope and level of stakeholder involvement will vary for each program evaluation.

Various activities reflect the requirement to engage stakeholders (Box 2). For example, stakeholders can be directly involved in designing and conducting the evaluation. Also, they can be kept informed regarding the progress of the evaluation through periodic meetings, reports, and other means of communication.

Sharing power and resolving conflicts helps avoid overemphasis of values held by any specific stakeholder. Occasionally, stakeholders might be inclined to use their involvement in an evaluation to sabotage, distort, or discredit the program. Trust among stakeholders is essential; therefore, caution is required to prevent misuse of the evaluation process.

Program descriptions convey the mission and objectives of the program being evaluated.

Descriptions should be sufficiently detailed to ensure understanding of program goals and strategies. The description should discuss the program's capacity to effect change, its stage of development, and how it fits into the larger organization and community. Program descriptions set the frame of reference for all subsequent decisions in an evaluation.

The description enables comparisons with similar programs and facilitates attempts to connect program components to their effects. Moreover, stakeholders might have differing ideas regarding program goals and purposes. Evaluations done without agreement on the program definition are likely to be of limited use. Sometimes, negotiating with stakeholders to formulate a clear and logical description will bring benefits before data are available to evaluate program effectiveness (7).

Aspects to include in a program description are need, expected effects, activities, resources, stage of development, context, and logic model. A statement of need describes the problem or opportunity that the program addresses and implies how the program will respond.

Important features for describing a program's need include (a) the nature and magnitude of the problem or opportunity, (b) which populations are affected, (c) whether the need is changing, and (d) in what manner the need is changing. Expected Effects. Descriptions of expectations convey what the program must accomplish to be considered successful (i.e., the criteria for success). For most programs, the effects unfold over time; therefore, the descriptions of expectations should be organized by time, ranging from specific, immediate effects to broad, long-term consequences.

A program's mission, goals, and objectives all represent varying levels of specificity regarding a program's expectations. Also, forethought should be given to anticipating potential unintended consequences of the program. Activities. Describing program activities (i.e., what the program does to effect change) permits the specific steps of the program to be set out in logical order. This demonstrates how each program activity relates to another and clarifies the program's hypothesized mechanism, or theory of change (16). Also, program activity descriptions should distinguish the activities that are the direct responsibility of the program from those that are conducted by related programs or partners. External factors that might affect the program's success should also be noted.

Resources include the time, talent, technology, equipment, information, money, and other assets available to conduct program activities. Program resource descriptions should convey the amount and intensity of program services and highlight situations where a mismatch exists between desired activities and resources available to execute those activities.

In addition, economic evaluations require an understanding of all direct and indirect program inputs and costs. Stage of Development. Public health programs mature and change over time; therefore, a program's stage of development reflects its maturity.

Programs that have recently received initial authorization and funding will differ from those that have been operating continuously for a decade. The changing maturity of program practice should be considered during the evaluation process. A minimum of three stages of development must be recognized: planning, implementation, and effects.

During planning, program activities are untested, and the goal of evaluation is to refine plans. During implementation, program activities are being field-tested and modified; the goal of evaluation is to characterize real, as opposed to ideal, program activities and to improve operations, perhaps by revising plans.

During the last stage, enough time has passed for the program's effects to emerge; the goal of evaluation is to identify and account for both intended and unintended effects. Context. Descriptions of the program's context should include the setting and environmental influences within which the program operates.

Understanding these environmental influences is required to design a context-sensitive evaluation and aid users in interpreting findings accurately and assessing the generalizability of the findings.

Logic Model. A logic model describes the sequence of events for bringing about change by synthesizing the main program elements into a picture of how the program is supposed to work. Often, this model is displayed in a flow chart, map, or table to portray the sequence of steps leading to program results (Figure 2).

One of the virtues of a logic model is its ability to summarize the program's overall mechanism of change by linking program processes to eventual effects. The logic model can also display the infrastructure needed to support program operations. Elements that are connected within a logic model might vary but generally include inputs (e.g., resources), activities, outputs, and results ranging from immediate to long-term effects. Creating a logic model allows stakeholders to clarify the program's strategies; therefore, the logic model improves and focuses program direction.

It also reveals assumptions concerning conditions for program effectiveness and provides a frame of reference for one or more evaluations of the program. A detailed logic model can also strengthen claims of causality and be a basis for estimating the program's effect on endpoints that are not directly measured but are linked in a causal chain supported by prior research. Families of logic models can be created to display a program at different levels of detail, from different perspectives, or for different audiences.
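Because a logic model is essentially a small data structure (inputs feeding activities, outputs, and a chain of outcomes), it can also be recorded in code and reused across a family of evaluations. The sketch below is a minimal, hypothetical illustration; the stage names follow the generic inputs-activities-outputs-outcomes convention, and the example entries are invented.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class LogicModel:
    """A minimal logic model: inputs feed activities, activities produce
    outputs, and outputs are expected to lead to a chain of outcomes."""
    inputs: List[str] = field(default_factory=list)
    activities: List[str] = field(default_factory=list)
    outputs: List[str] = field(default_factory=list)
    outcomes: List[str] = field(default_factory=list)  # ordered: immediate to long-term

    def as_chain(self) -> str:
        """Render the hypothesized sequence of change on one line."""
        stages = (self.inputs, self.activities, self.outputs, self.outcomes)
        return " -> ".join("; ".join(stage) if stage else "(none)" for stage in stages)

# Invented example, loosely modeled on a provider-education effort.
model = LogicModel(
    inputs=["program staff", "physician educators", "funding"],
    activities=["provider trainings", "newsletter", "tool kit distribution"],
    outputs=["providers trained", "newsletters distributed"],
    outcomes=["improved provider knowledge", "higher immunization rates"],
)
print(model.as_chain())
```

A family of such models at different levels of detail can be produced simply by aggregating or splitting the entries in each stage.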

Program descriptions will vary for each evaluation, and various activities reflect the requirement to describe the program. For example, the accuracy of a program description can be confirmed by consulting with diverse stakeholders, and reported descriptions of program practice can be checked against direct observation of activities in the field.

A narrow program description can be improved by addressing such factors as staff turnover, inadequate resources, political pressures, or strong community participation that might affect program performance.

The evaluation must be focused to assess the issues of greatest concern to stakeholders while using time and resources as efficiently as possible (7,36). Not all design options are equally well-suited to meeting the information needs of stakeholders.

After data collection begins, changing procedures might be difficult or impossible, even if better methods become obvious. A thorough plan anticipates intended uses and creates an evaluation strategy with the greatest chance of being useful, feasible, ethical, and accurate. Among the items to consider when focusing an evaluation are purpose, users, uses, questions, methods, and agreements.

Articulating an evaluation's purpose (i.e., its intent) helps prevent premature decisions regarding how the evaluation should be conducted. Characteristics of the program, particularly its stage of development and context, will influence the evaluation's purpose. Public health evaluations have four general purposes (Box 4). The first is to gain insight, which happens, for example, when assessing the feasibility of an innovative approach to practice. Knowledge from such an evaluation provides information concerning the practicality of a new approach, which can be used to design a program that will be tested for its effectiveness.

For a developing program, information from prior evaluations can provide the necessary insight to clarify how its activities should be designed to bring about expected changes. A second purpose for program evaluation is to change practice, which is appropriate in the implementation stage when an established program seeks to describe what it has done and to what extent. Such information can be used to better describe program processes, to improve how the program operates, and to fine-tune the overall program strategy.

Evaluations done for this purpose include efforts to improve the quality, effectiveness, or efficiency of program activities. A third purpose for evaluation is to assess effects. Evaluations done for this purpose examine the relationship between program activities and observed consequences.

This type of evaluation is appropriate for mature programs that can define what interventions were delivered to what proportion of the target population. Knowing where to find potential effects can ensure that significant consequences are not overlooked. One set of effects might arise from a direct cause-and-effect relationship to the program. Where these exist, evidence can be found to attribute the effects exclusively to the program.

In addition, effects might arise from a causal process involving issues of contribution as well as attribution. For example, if a program's activities are aligned with those of other programs operating in the same setting, certain effects might occur only through the programs' combined influence. In such situations, the goal for evaluation is to gather credible evidence that describes each program's contribution to the combined change effort. Establishing accountability for program results is predicated on an ability to conduct evaluations that assess both of these kinds of effects.

A fourth purpose, which applies at any stage of program development, involves using the process of evaluation inquiry to affect those who participate in the inquiry. The logic and systematic reflection required of stakeholders who participate in an evaluation can be a catalyst for self-directed change.

An evaluation can be initiated with the intent of generating a positive influence on stakeholders. Such influences might include supplementing the program intervention itself or building stakeholders' capacity for systematic reflection and self-directed change. Users are the specific persons who will receive evaluation findings. Because intended users directly experience the consequences of inevitable design trade-offs, they should participate in choosing the evaluation focus (7).

User involvement is required for clarifying intended uses, prioritizing questions and methods, and preventing the evaluation from becoming misguided or irrelevant. Uses are the specific ways in which information generated from the evaluation will be applied.

Several uses exist for program evaluation (Box 4). Stating uses in vague terms that appeal to many stakeholders increases the chances the evaluation will not fully address anyone's needs. Uses should be planned and prioritized with input from stakeholders and with regard for the program's stage of development and current context. All uses must be linked to one or more specific users. Questions establish boundaries for the evaluation by stating what aspects of the program will be addressed. Creating evaluation questions encourages stakeholders to reveal what they believe the evaluation should answer.

Negotiating and prioritizing questions among stakeholders further refines a viable focus. The question-development phase also might expose differing stakeholder opinions regarding the best unit of analysis. Certain stakeholders might want to study how programs operate together as a system of interventions to effect change within a community.

Other stakeholders might have questions concerning the performance of a single program or a local project within a program.

Still others might want to concentrate on specific subcomponents or processes of a project. Clear decisions regarding the questions and corresponding units of analysis are needed in subsequent steps of the evaluation to guide method selection and evidence gathering. The methods for an evaluation are drawn from scientific research options, particularly those developed in the social, behavioral, and health sciences. A classification of design types includes experimental, quasi-experimental, and observational designs (43). No design is better than another under all circumstances.

Evaluation methods should be selected to provide the appropriate information to address stakeholders' questions (i.e., methods should be matched to the evaluation's primary users, uses, and questions). Experimental designs use random assignment to compare the effect of an intervention with otherwise equivalent groups. Quasi-experimental methods make comparisons between nonequivalent groups (e.g., program participants and a comparison group that was not randomly assigned).

Observational methods use comparisons within a group to explain unique features of its members. The choice of design has implications for what will count as evidence, how that evidence will be gathered, and what kinds of claims can be made, including the internal and external validity of conclusions. Also, methodologic decisions clarify how the evaluation will operate in practice.

Because each method option has its own bias and limitations, evaluations that mix methods are generally more effective (44). During the course of an evaluation, methods might need to be revised or modified. Also, circumstances that make a particular approach credible and useful can change. For example, the evaluation's intended use can shift from improving a program's current activities to determining whether to expand program services to a new population group.

Thus, changing conditions might require alteration or iterative redesign of methods to keep the evaluation on track. Agreements summarize the procedures and clarify roles and responsibilities among those who will execute the evaluation plan (6). Agreements describe how the evaluation plan will be implemented by using available resources (e.g., money, personnel, time, and information). Agreements also state what safeguards are in place to protect human subjects and, where appropriate, what ethical standards (e.g., confidentiality) will be followed.

Elements of an agreement include statements concerning the intended purpose, users, uses, questions, and methods, as well as a summary of the deliverables, time line, and budget. The agreement can include all engaged stakeholders but, at a minimum, it must involve the primary users, any providers of financial or in-kind resources, and those persons who will conduct the evaluation and facilitate its use and dissemination.

The formality of an agreement might vary depending on existing stakeholder relationships. An agreement might be a legal contract, a detailed protocol, or a memorandum of understanding.
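Where the evaluation plan is kept alongside other project records, the elements of an agreement listed above (purpose, users, uses, questions, methods, deliverables, time line, and budget) can be captured as a single structured record and checked for completeness before work begins. This is a hypothetical sketch; the field names and example values are invented and are not prescribed by the framework.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class EvaluationAgreement:
    """Summary of an evaluation agreement: purpose, users, uses, questions,
    methods, deliverables, time line, and budget."""
    purpose: str
    users: List[str]
    uses: List[str]
    questions: List[str]
    methods: List[str]
    deliverables: List[str]
    timeline_months: int
    budget_usd: float

    def missing_elements(self) -> List[str]:
        """Name any empty elements so they can be renegotiated up front."""
        elements = {
            "users": self.users, "uses": self.uses, "questions": self.questions,
            "methods": self.methods, "deliverables": self.deliverables,
        }
        return [name for name, value in elements.items() if not value]

# Invented example for a small provider-education evaluation.
agreement = EvaluationAgreement(
    purpose="Improve how provider trainings are delivered",
    users=["program manager", "evaluation advisory panel"],
    uses=["revise the training curriculum"],
    questions=["Which training components do providers actually apply?"],
    methods=["pre/post provider surveys", "site visits"],
    deliverables=["interim memo", "final report"],
    timeline_months=12,
    budget_usd=40_000.0,
)
print(agreement.missing_elements())  # [] when every element is specified
```

However formal the record, it should remain a basis for renegotiation rather than a fixed contract, as the following paragraph notes.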

Creating an explicit agreement verifies the mutual understanding needed for a successful evaluation. It also provides a basis for modifying or renegotiating procedures if necessary. Various activities reflect the requirement to focus the evaluation design (Box 5). Both supporters and skeptics of the program could be consulted to ensure that the proposed evaluation questions are politically viable (i.e., acceptable to the program's varied interest groups). A menu of potential evaluation uses appropriate for the program's stage of development and context could be circulated among stakeholders to determine which is most compelling.

Interviews could be held with specific intended users to better understand their information needs and time line for action. Resource requirements could be reduced when users are willing to employ more timely but less precise evaluation methods.

An evaluation should strive to collect information that will convey a well-rounded picture of the program so that the information is seen as credible by the evaluation's primary users. The information gathered (i.e., the evidence) should be perceived by stakeholders as believable and relevant for answering their questions; such decisions depend on the evaluation questions being posed and the motives for asking them.

For certain questions, a stakeholder's standard for credibility might require the results of a controlled experiment, whereas for another question a set of systematic observations, such as well-conducted interviews, would be sufficiently credible. Consulting specialists in evaluation methodology might be necessary in situations where concern for data quality is high or where serious consequences are associated with making errors of inference (i.e., reaching incorrect conclusions regarding the program).

Having credible evidence strengthens evaluation judgments and the recommendations that follow from them. Although all types of data have limitations, an evaluation's overall credibility can be improved by using multiple procedures for gathering, analyzing, and interpreting data. Encouraging participation by stakeholders can also enhance perceived credibility.

When stakeholders are involved in defining and gathering data that they find credible, they will be more likely to accept the evaluation's conclusions and to act on its recommendations (7). Aspects of evidence gathering that typically affect perceptions of credibility include indicators, sources, quality, quantity, and logistics.

Indicators define the program attributes that pertain to the evaluation's focus and questions. Because indicators translate general concepts regarding the program, its context, and its expected effects into specific measures that can be interpreted, they provide a basis for collecting evidence that is valid and reliable for the evaluation's intended uses. Indicators address criteria that will be used to judge the program; therefore, indicators reflect aspects of the program that are meaningful for monitoring. Examples of indicators that can be defined and tracked include measures of program activities (e.g., the program's capacity to deliver services and participation rates) and measures of program effects (e.g., changes in participants' behavior or health status).

Defining too many indicators can detract from the evaluation's goals; however, multiple indicators are needed for tracking the implementation and effects of a program. One approach to developing multiple indicators is based on the program logic model developed in the second step of the evaluation. The logic model can be used as a template to define a spectrum of indicators leading from program activities to expected effects (23). Relating indicators to the logic model allows small changes in performance to be detected faster than if a single outcome were the only measure used.

Lines of responsibility and accountability are also clarified through this approach because the measures are aligned with each step of the program strategy. Further, this approach results in a set of broad-based measures that reveal how health outcomes are the consequence of intermediate effects of the program. Intangible factors, such as service quality or community capacity, can also be expressed as indicators. During an evaluation, indicators might need to be modified or new ones adopted.
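One way to act on the idea of a spectrum of indicators is to tag each indicator with the logic model stage it measures, so that slippage in activities or outputs is visible before long-term outcomes can be observed. The sketch below is purely illustrative; the indicator names, stages, and targets are invented for the example.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Indicator:
    """One measurable program attribute, tied to a logic model stage."""
    name: str
    stage: str            # e.g., "activities", "outputs", "outcomes"
    target: float
    observed: float = 0.0

    def on_track(self) -> bool:
        return self.observed >= self.target

def lagging_by_stage(indicators: List[Indicator]) -> Dict[str, List[str]]:
    """Group below-target indicators by logic model stage, so early
    (activity/output) slippage is visible before outcomes are measurable."""
    lagging: Dict[str, List[str]] = {}
    for ind in indicators:
        if not ind.on_track():
            lagging.setdefault(ind.stage, []).append(ind.name)
    return lagging

# Hypothetical indicators spanning the chain from activities to outcomes.
indicators = [
    Indicator("trainings delivered", "activities", target=12, observed=9),
    Indicator("providers trained", "outputs", target=300, observed=310),
    Indicator("practices with reminder systems", "outcomes", target=0.5, observed=0.3),
]
print(lagging_by_stage(indicators))
```

As the next paragraph cautions, a tracking structure like this supports, but does not replace, the full evaluation process.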

Measuring program performance by tracking indicators is only part of an evaluation and must not be mistaken for a singular basis for decision-making. Well-documented problems result from using performance indicators as a substitute for completing the evaluation process and reaching fully justified conclusions (66,67). An indicator, viewed in isolation, can suggest conclusions regarding program performance that a fuller evaluation would not support. Sources of evidence in an evaluation are the persons, documents, or observations that provide information for the inquiry (Box 6).

More than one source might be used to gather evidence for each indicator to be measured. Selecting multiple sources provides an opportunity to include different perspectives regarding the program and thus enhances the evaluation's credibility.

An inside perspective might be understood from internal documents and comments from staff or program managers, whereas clients, neutral observers, or those who do not support the program might provide a different, but equally relevant perspective.

Mixing these and other perspectives provides a more comprehensive view of the program. The criteria used for selecting sources should be stated clearly so that users and other stakeholders can interpret the evidence accurately and assess if it might be biased 45, Propriety To be ethical, which stakeholders need to be consulted, those served by the program or the community in which it operates?

Accuracy: How broadly do you need to engage stakeholders to paint an accurate picture of this program? Similarly, there are unlimited ways to gather credible evidence (Step 4). Asking these same kinds of questions as you approach evidence gathering will help identify the ones that will be most useful, feasible, proper, and accurate for this evaluation at this time. Thus, the CDC Framework approach supports the fundamental insight that there is no such thing as "the right" program evaluation.

Rather, over the life of a program, any number of evaluations may be appropriate, depending on the situation. Good evaluation requires a combination of skills that are rarely found in one person. The preferred approach is to choose an evaluation team that includes internal program staff, external stakeholders, and possibly consultants or contractors with evaluation expertise.

An initial step in the formation of a team is to decide who will be responsible for planning and implementing evaluation activities. One program staff person should be selected as the lead evaluator to coordinate program efforts. This person should be responsible for evaluation activities, including planning and budgeting for evaluation, developing program objectives, addressing data collection needs, reporting findings, and working with consultants.

The lead evaluator is ultimately responsible for engaging stakeholders, consultants, and other collaborators who bring the skills and interests needed to plan and conduct the evaluation. Although this staff person should have the skills necessary to competently coordinate evaluation activities, he or she can choose to look elsewhere for technical expertise to design and implement specific tasks.

However, developing in-house evaluation expertise and capacity is a beneficial goal for most public health organizations. The lead evaluator should be willing and able to draw out and reconcile differences in values and standards among stakeholders and to work with knowledgeable stakeholder representatives in designing and conducting the evaluation.

Seek additional evaluation expertise in programs within the health department or through external partners, such as local universities or other organizations with evaluation experience. You can also use outside consultants as volunteers, advisory panel members, or contractors.

External consultants can provide high levels of evaluation expertise from an objective point of view. Important factors to consider when selecting consultants are their level of professional training, experience, and ability to meet your needs.

Be sure to check all references carefully before you enter into a contract with any consultant. To generate discussion around evaluation planning and implementation, several states have formed evaluation advisory panels. Advisory panels typically generate input from local, regional, or national experts otherwise difficult to access. Such an advisory panel will lend credibility to your efforts and prove useful in cultivating widespread support for evaluation activities.

Evaluation team members should clearly define their respective roles. For some teams, informal consensus may be enough; others prefer a written agreement that describes who will conduct the evaluation and assigns specific roles and responsibilities to individual team members. Either way, the team must clarify and reach consensus on the evaluation's purpose, the roles and responsibilities of each member, and the procedures that will be followed.

This manual is organized by the six steps of the CDC Framework.

Each chapter will introduce the key questions to be answered in that step, approaches to answering those questions, and how the four evaluation standards might influence your approach. The main points are illustrated with one or more public health examples that are composites inspired by actual work being done by CDC and states and localities.

In one composite example, an affordable-housing program brings together volunteers and a family in need of housing. Together, they build a house over a multi-week period. At the end of the construction period, the home is sold to the family using a no-interest loan.

Lead poisoning is the most widespread environmental hazard facing young children, especially in older inner-city areas. Even at low levels, elevated blood lead levels (EBLLs) have been associated with reduced intelligence, medical problems, and developmental problems.

The main sources of lead poisoning in children are paint and dust in older homes with lead-based paint. Public health programs address the problem through a combination of primary and secondary prevention efforts. A typical secondary prevention program at the local level conducts outreach and screening of high-risk children, identifies those with EBLLs, assesses their environments for sources of lead, and case-manages both their medical treatment and environmental corrections.

However, these programs must rely on others to accomplish the actual medical treatment and the reduction of lead in the home environment.

A common initiative of state immunization programs is a comprehensive provider education program that trains and motivates private providers to deliver more immunizations.

A typical program includes: a newsletter, distributed three times per year, to update private providers on new developments and policy changes and to provide brief education on various immunization topics; immunization trainings held around the state, conducted by teams of state program staff and physician educators, on general immunization topics and the immunization registry; a Provider Tool Kit on how to increase immunization rates in their practices; training of nursing staff in local health departments, who then conduct immunization presentations in individual private provider clinics; and presentations on immunization topics by physician peer educators at physician grand rounds and state conferences.

Scriven M. Minimalist theory of evaluation: the least theory that practice requires. American Journal of Evaluation.
Patton MQ. Utilization-focused evaluation: the new century text. Thousand Oaks, CA: Sage Publications.
Study of participatory research in health promotion: review and recommendations for the development of participatory research in health promotion in Canada. Ottawa, Canada: Royal Society of Canada.


