Program evaluation is the process of gathering information about the functions, features, and results of a program in order to judge it, improve its effectiveness, or inform decisions about future programs with similar functions (Rossi, Lipsey, & Freeman, 2004). The process draws on a wide range of methods that assess many aspects of a program within an organization. The key reason for evaluating is to determine a program's efficiency and effectiveness in achieving a set goal. Investors and funders commonly want to know how a program operates and whether it meets the needs of the public it serves. Several instruments can be employed to determine a program's suitability. This paper focuses on outcome-based evaluation (OBE) instruments and how they are prepared.
An OBE instrument is a systematic form of analysis used to determine the extent to which a program has achieved an intended result. Developing such a tool prompts questions about the organization and how it designs program activities capable of producing outcomes believed to benefit the client (Rossi, Lipsey, & Freeman, 2004). It can also mean that the facilitator has established, through validation, that the results match the needs of the population in question. OBE answers the question of whether the organization is producing real change or merely engaging in a myriad of activities that seem reasonable at the time.
In this case, outcomes refer to the benefits the consumer derives from the program (Schalock, 2001). They are usually measured in terms of enhanced conditions, such as greater self-reliance or increased literacy, or in terms of enhanced learning, such as new skills, knowledge, or attitudes. In other words, an outcome-based evaluation instrument tries to capture the change that the program has produced in the lives of communities, organizations, families, or individuals. Outcomes should not be confused with the program's outputs or service units, such as the number of individuals who went through the process.
Before the evaluation instrument is built, it is imperative to understand its components. For an outcome-based evaluation instrument, the major component is the logic model. It is the starting point of an evaluation plan and identifies the processes and outcomes of the program (Frechtling, 2007). A logic model also shows the connection between inputs and expected results, and it provides a common language for articulating the rationale for the evaluation. It helps in understanding the theory behind the program by giving a graphic overview of how its parts relate to the whole. A logic model contains four components: resources (what the program needs to function), activities (what must be done), outputs (the quantity of service the program delivers), and outcomes (the primary results). As a case example, take a program that aims to increase access to shelter for victims of domestic violence. The outcome would be the percentage increase in access that the program brings about after implementation.
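As a rough illustration, the four logic-model components can be sketched as a simple data structure. The field names and the shelter-program values below are invented for illustration, not part of any standard logic-model notation:

```python
from dataclasses import dataclass

@dataclass
class LogicModel:
    """A minimal sketch of the four logic-model components."""
    resources: list   # what the program needs to function
    activities: list  # what must be done
    outputs: list     # quantity of service the program delivers
    outcomes: list    # primary results (benefits to clients)

# Hypothetical shelter program for victims of domestic violence
shelter_program = LogicModel(
    resources=["facility", "trained staff", "funding"],
    activities=["intake assessment", "counseling", "housing placement"],
    outputs=["number of clients housed", "counseling sessions held"],
    outcomes=["increased access to safe shelter", "greater self-reliance"],
)

print(shelter_program.outcomes[0])  # the primary result, not an output count
```

Laying the components out this way makes the distinction between outputs (service units) and outcomes (client benefits) explicit, which the next sections rely on.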
The outcome based evaluation instrument can be accomplished through the following steps:
Identifying major outcomes
The program under evaluation should be assessed for its major outcomes by reflecting on the mission the organization seeks to carry out. The question the evaluation should ask is what impact the clients or customers will experience (Rossi, Lipsey, & Freeman, 2004). In our case, the overall mission is to provide shelter and resources to victims of domestic violence, so the overall question is what effect providing shelter and other resources will have on those victims. The answers to repeated "whys" will suggest the outcomes of the program.
Selection of outcomes
The next step is to select and prioritize the outcomes to be examined. Multiple outcomes may be selected, depending on the time and resources available.
Specification of indicators
For the selected outcomes, indicators, or observable measures, are specified. These are the key points that suggest an organization is achieving the intended outcome. This is the most important yet most challenging step in this instrument of evaluation, because the evaluator must move from an intangible concept to specific, observable behaviors, for instance staying off drugs or not returning to the abuser.
Target goal
The next step is to identify the clients' target goal, that is, the number or percentage of subjects expected to achieve specific outcomes (Rossi, Lipsey, & Freeman, 2004). These goals should be tied to the indicators the evaluation instrument specified earlier. For instance, in the case above: "increased self-reliance for 80 percent of adult, Hispanic women living in the shelters as identified by the following…" In this example, "self-reliance" is the outcome, while "the following" introduces the indicators.
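Checking a target goal like the one above comes down to simple arithmetic once follow-up data are in hand. The figures below are invented for illustration only:

```python
# Hypothetical follow-up data for the shelter program (invented figures)
clients_assessed = 50      # adult women assessed after the program
clients_self_reliant = 42  # judged self-reliant by the chosen indicators

# Percentage of clients achieving the outcome, compared against the target
achieved = clients_self_reliant / clients_assessed * 100
target = 80  # the 80 percent target goal from the example

print(f"{achieved:.0f}% achieved (target {target}%): "
      f"{'met' if achieved >= target else 'not met'}")
# → 84% achieved (target 80%): met
```

The point of stating the goal numerically in advance is precisely that it reduces the later judgment to this kind of unambiguous comparison.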
Required information
The information needed to assess the indicators is then identified. The evaluator needs to know, for example, how many clients in the target group passed through the program, or how many stayed off drugs. For new projects, the researcher also needs to evaluate the process to verify that the program is being rolled out as originally designed.
Collection of data
This step involves deciding which method will gather the information most efficiently and realistically. For example, the researcher can use case studies of the program's successes and failures, interviews about perceived benefits, program documentation, or observation of program clients and personnel.
The final step involves the analysis and reporting of findings.
This evaluation instrument is expected to be valid and reliable for the target group. Validity is the degree to which the instrument accurately measures what it was intended to measure. First, it is important to ensure that the data capture the intended measurement; for example, interviews provide feedback on how victims feel about the program. Validity can also be strengthened by using established survey tools. When designing surveys, the outcomes measured should be consistent with those defined by professional practice; in other words, the designer should understand the established findings in the field and check whether the measured items are consistent with them. It is also important to select a tool that works for the specific client group.
Reliability is the degree to which an evaluation tool provides the same or consistent results over time. For consistency, the instrument should have standards that make the data uniform. One means of achieving this is training the individuals tasked with carrying out the evaluation plan. It can also be done by having only one or two people enter the data, or by creating instructions for interpreting open-ended questionnaires. Further instructions should cover how to code qualitative information. The staff carrying out the plan should be supervised, and quality checks of data entry should be conducted to assess inter-coder reliability.
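One common way to quantify the inter-coder reliability mentioned above is simple percent agreement between two trained coders (Cohen's kappa, which corrects for chance agreement, is a stricter alternative). The codes below are invented for illustration:

```python
# Hypothetical codes assigned by two trained coders to the same ten responses
coder_a = ["self-reliant", "not", "self-reliant", "self-reliant", "not",
           "self-reliant", "not", "self-reliant", "self-reliant", "not"]
coder_b = ["self-reliant", "not", "self-reliant", "not", "not",
           "self-reliant", "not", "self-reliant", "self-reliant", "not"]

# Count responses both coders coded identically
agreements = sum(a == b for a, b in zip(coder_a, coder_b))
percent_agreement = agreements / len(coder_a) * 100

print(f"Inter-coder agreement: {percent_agreement:.0f}%")
# → Inter-coder agreement: 90%
```

Low agreement on a quality check like this would signal that the coding instructions need revision or that coders need further training.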
In conclusion, outcome-based evaluation is an essential tool for assessing how a program benefits the client. It shows the changes that implementing the program brings to the target group. It can be used in social systems centered on the client rather than the program itself, for instance in patient-centered health care and in addressing societal problems.
References
Frechtling, J. A. (2007). Logic modeling methods in program evaluation (Vol. 5). Jossey-Bass.
Rossi, P. H., Lipsey, M. W., & Freeman, H. E. (2004). Evaluation: A systematic approach (7th ed.). Thousand Oaks, CA: Sage.
Schalock, R. L. (2001). Outcome-based evaluation. Springer Science & Business Media.
Ward, H. (2017). Current initiatives in the development of outcome-based evaluation of children’s services. In Assessing Outcomes in Child and Family Services (pp. 6-18). Routledge.