The problem and the solution. Training program evaluation is an important, culminating phase in the analysis, design, development, implementation, evaluation (ADDIE) process. However, evaluation has often been overlooked or not implemented to its full capacity. To assess and ensure the quality, effectiveness, and impact of systematic training, this article emphasizes the importance of summative evaluation in the last phase of ADDIE and presents developments toward a summative evaluation framework of training program effectiveness. The focus is the connection of final summative evaluation to the direction provided by the analysis phase and to the concerns of the host organization.
Keywords: formative evaluation; summative evaluation; ISD; outcome evaluation; impact evaluation
As a systematic process for developing needed workplace knowledge and expertise, instructional systems design requires an evaluation component to determine whether the training program achieved its intended goal, that is, whether it did what it purported to do. However, evaluation, the last phase of the ADDIE (analysis, design, development, implementation, evaluation) model, is often overlooked when organizations create and implement training programs. Strictly speaking, the larger view of evaluation may not be treated as a separate phase during the process. It is indeed an ongoing effort throughout all phases of the ADDIE process (Hannum & Hansen, 1989), one that culminates in the final phase.
A number of reasons have been noted for organizations failing to conduct systematic evaluations. First, many training professionals either do not believe in evaluation or do not possess the mind-set necessary to conduct it (Swanson, 2005). Others do not wish to evaluate their training programs because they lack confidence that their programs add value to, or have impact on, their organizations (Spitzer, 1999). Lack of evaluation in training has also been attributed to a lack of resources and expertise, as well as the lack of an organizational culture that supports such efforts (Desimone, Werner, & Harris, 2002; Moller, Benscoter, & Rohrer-Murphy, 2000). Even the limited efforts in training evaluation are mostly retrospective in nature (Brown & Gerhardt, 2002; Wang & Wang, 2005). A study of a group of instructional design practitioners indicated that 89.5% of them conducted end-of-course evaluation and 71% evaluated learning; however, only 44% used acceptable techniques for measuring achievement. Yet merely 20% of those surveyed correctly identified methods for results evaluation (Moller...