
KIRKPATRICK MODEL APPLIED TO THE TRAINER DEVELOPMENT PROGRAM OF HRDT PROJECT IN HANOI TOURISM ENTERPRISES


DOCUMENT INFORMATION

Basic information

Title: Kirkpatrick Model Applied to the Trainer Development Program of HRDT Project in Hanoi Tourism Enterprises
Author: Bui Thi Thu Phuong
University: Hanoi University
Major: Business Administration
Document type: Thesis
Year of publication: 2008
City: Hanoi

Format

Pages: 142
File size: 58.36 MB


Structure

  • 1.1. Overview background
  • 1.2. Problem identification
    • 1.2.1. General problem identification
    • 1.2.2. Identification of Vietnamese tourism's situation and training problems
  • 1.3. Research purpose
  • 1.4. Research significance
  • 1.5. Research questions and hypotheses
    • 1.5.1. Research questions
    • 1.5.2. Research hypotheses
  • 1.6. Thesis structure
  • 1.7. Chapter summary
  • 2.1. Definition and purpose of evaluating training effectiveness
    • 2.1.1. Towards a definition
    • 2.1.2. Purpose of training evaluation
  • 2.2. Approaches to evaluation of training
    • 2.2.1. Goal-based vs. systems-based approaches
    • 2.2.2. Goal-based vs. goal-free approaches
    • 2.2.3. Responsive evaluation approach
    • 2.2.4. Professional review approach
    • 2.2.5. Quasi-legal approach
  • 2.3. Kirkpatrick model applied to training evaluation
    • 2.3.1. Kirkpatrick's four-level model
    • 2.3.2. An added 5th level to Kirkpatrick model
    • 2.3.3. Using Kirkpatrick Model
  • 2.4. Other evaluation models
    • 2.4.1. Tyler's Objectives Approach
    • 2.4.2. Scriven's Focus On Outcomes
    • 2.4.3. Stufflebeam's CIPP Model
    • 2.4.4. CIRO Model
    • 2.4.5. Guba's Naturalistic Approach
    • 2.4.6. V Model
  • 2.5. Challenges in evaluation of training
  • 2.6. Practices in evaluation of training
  • 2.7. Chapter summary
  • 3.1. Restatement of the research questions
  • 3.2. Research design
    • 3.2.1. Target population
    • 3.2.2. Sampling
    • 3.2.3. Instruments and measures
  • 3.3. Data collection
  • 3.4. Data analysis
  • 3.5. Chapter summary
  • 4.1. Demographics
  • 4.2. Level one - Reaction
    • 4.2.1. Measuring reactions of participants
    • 4.2.2. Reaction differences between groups
  • 4.3. Level two - Learning
    • 4.3.1. Measuring participants' learning
    • 4.3.2. Learning differences between groups
  • 4.4. Level three - Behavioral change
    • 4.4.1. Measuring behavioral changes
    • 4.4.2. Between-groups differences in behavior
  • 4.5. Level four - Organizational benefits
    • 4.5.1. Measuring organizational benefits
    • 4.5.2. Assessing participants' work commitment
    • 4.5.3. Between-groups differences
  • 4.6. Evaluation summary
  • CHAPTER V. RESEARCH IMPLICATIONS
    • 5.1. Implications for relevant stakeholders
      • 5.1.1. For HRDT project and Trainer Development Programme
      • 5.1.2. For trainers in the tourism industry
      • 5.1.3. For Hanoi Tourism Enterprises
      • 5.1.4. For policy makers
    • 5.2. Importance of the academic research of applying Kirkpatrick model in the
  • CHAPTER VI. RESEARCH LIMITATIONS AND FUTURE RESEARCH
    • 6.1. Research limitations
    • 6.2. Future research possibilities
  • CHAPTER VII. CONCLUSIONS
    • 7.1. Summary
    • 7.2. Key findings
    • 7.3. Main recommendations
    • 7.4. Concluding remarks
    • APPENDIX 1. Trainees' feedback and evaluation questionnaire - English version
    • APPENDIX 2. Trainees' feedback and evaluation questionnaire - Vietnamese version
    • APPENDIX 3. Training evaluation questionnaire - English version
    • APPENDIX 4. Training evaluation questionnaire - Vietnamese version
    • APPENDIX 5. Codes for data analysis using SPSS
    • APPENDIX 6. Frequency tables for level 1 evaluation
    • APPENDIX 7. One-way ANOVA table for level one evaluation
    • APPENDIX 8. Frequency tables for level three evaluation
    • APPENDIX 9. Frequency tables of quality changes
    • APPENDIX 10. Basic information of Vietnam HRDT project

Content


Overview background

Over the last three decades, the marketplace has undergone a dramatic shift, making business more competitive as firms race to survive. Globalization and an aging workforce have intensified these competitive pressures, prompting organizations to invest more in recruiting, developing, and retaining a skilled, knowledgeable workforce. Consequently, training, education, and ongoing learning experiences for employees have become essential levers for achieving strategic and financial business objectives.

Training has become a strategic priority for adult professional development, driving organizations to invest more in training than ever before. The American Society for Training and Development estimates total annual U.S. spending on training at around $20 billion, while other industry estimates range from $60 billion to $200 billion (Pulichino, 2007). In its 2007 Industry Report, Training Magazine found that training budgets had grown to $58.5 billion, a 7 percent increase over 2006. Organizations reported that an average of $1,202 (including staff salaries) was spent per learner on training (Bassi, 2007).

Figure 1 - Learning Expenditures (Training Magazine, 2007)

Training expenditures represent a substantial portion of an enterprise's investment in its people, its human capital, yet they often do not yield an immediately measurable financial return. The high cost of training has driven human resource development and training practitioners to prove that programs are effective, deliver value, and align with the organization's strategic objectives. Consequently, training professionals must know how to measure and evaluate these programs and how to justify their costs by demonstrating return on investment and broader business impact.

Training, once viewed mainly as a cost, is now recognized as a strategic investment in an organization's human capital and competitive advantage. Its value lies not just in direct revenues but in developing a smarter workforce that enhances market competitiveness. Therefore, evaluating training programs is a critical function for executives, enabling them to understand and measure improvements in employee performance and the resulting business outcomes driven by these programs.

The purpose of training evaluation is to guide future improvements. Since 1959, Kirkpatrick's four-level model has become a standard, and Bassi et al. (1996) found that 96% of surveyed companies used some form of the Kirkpatrick framework to evaluate training and development programs. Despite decades of experience and research, training evaluation remains a young and evolving field that requires ongoing study to develop better evaluation instruments, a more researchable model, and improvements in the discipline and its practice (Foreman, 2008).

Problem identification

General problem identification

An old management adage, "If you can't measure it, you can't manage it," highlights the need to measure training and its effects on both trainees and the organization (Pulichino, 2007). The challenge for training practitioners is determining the best way to measure and evaluate programs and to report results in a timely, cost-effective, and useful manner. Where can they turn for a solution?

Many training practitioners rely on the Kirkpatrick four-level model, widely recognized as the most comprehensive and widely used framework for evaluating training programs across corporate, government, academic, and other institutional settings. Over its roughly 48 years in the literature, it has become the standard approach to training evaluation. The model's four levels are sequential: Level 1 captures participants' reactions to the training; Level 2 measures what participants learned; Level 3 assesses changes in on-the-job behavior resulting from the training; and Level 4 evaluates outcomes against specific business and financial goals for the organization.
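The sequential logic of the four levels can be sketched in a few lines of code. This is an illustrative sketch only: the level names follow the model as described above, but the scoring function, the 1-5 rating scale, and the threshold are hypothetical and are not taken from the thesis.

```python
# The four sequential levels of the Kirkpatrick model, as described above.
KIRKPATRICK_LEVELS = {
    1: "Reaction",
    2: "Learning",
    3: "Behavior",
    4: "Results",
}

def evaluate_program(scores, threshold=3.5):
    """Summarize an evaluation level by level, in sequence.

    `scores` maps a level number to an average rating on a hypothetical
    1-5 scale; levels without data are reported as not evaluated, which
    mirrors the common practice of stopping after levels 1 and 2.
    """
    findings = []
    for level in sorted(KIRKPATRICK_LEVELS):
        if level not in scores:
            findings.append(f"Level {level}: not evaluated")
        else:
            verdict = "satisfactory" if scores[level] >= threshold else "needs attention"
            findings.append(f"Level {level}: {scores[level]:.1f} ({verdict})")
    return findings
```

For instance, `evaluate_program({1: 4.2, 2: 3.8})` reports Levels 3 and 4 as not evaluated, which is exactly the usage pattern the statistics cited below describe.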

However, numerous studies, including research conducted by Kirkpatrick himself, show that the full Kirkpatrick taxonomy is not widely used beyond the first two levels. Consequently, Levels 3 and 4, which are most closely linked with measuring changes and improvements in workplace performance and business results, often fail to deliver the value they might. For clarification, McMurrer et al. (2000) surveyed the American Society for Training and Development Benchmarking Forum to determine how frequently each of Kirkpatrick's four levels is used in organizations: Level 1 - 95%; Level 2 - 37%; Level 3 - 13%; Level 4 - 3%. In addition, Twitchell et al. (2000) performed a meta-analysis of studies from the preceding 40 years; their research indicated the following usage ranges for Kirkpatrick's four levels: level 1: 86-100%; level 2: 71-90%; level 3: 43-83%; level 4:

Measuring these two levels is inherently more complex, time-consuming, and costly. In addition, practitioners often lack the expertise to conduct higher-level evaluations. As a result, there is growing demand for more research and studies on evaluating training within organizations.

Identification of Vietnamese tourism's situation and training problems

Over 23 years of national renovation, the country moved from crisis and inflation to stable growth and notable achievements. The economy transformed from a low-production, aid-dependent model to a self-reliant system with initial internal accumulation. Almost all key economic sectors have delivered outstanding results in recent years. Living standards for both rural and urban populations have risen markedly, and the economy has sustained continuous, higher growth rates. According to Minister Vo Hong Phuc (2008), average annual GDP growth was 7.5%, and export value increased sixty-fold, from USD 789 million in 1986 to over USD 48.5 billion by 2007 (MOFA, 2008). Alongside this growth, impressive progress has been achieved in culture, education, and healthcare. This emerging economic status provides a solid foundation for the development of diverse sectors, including tourism, as Vietnam broadens its industrial base and attracts investment.

Reframing thinking about tourism development, documents from the 6th through the 10th Party Congresses and the resolutions of the Central Committee plenums and the Government of Vietnam clearly define tourism as a vital part of the socio-economic development strategy with a high degree of socialization. As a result, tourism development becomes a shared responsibility across all sectors, administrative levels, and mass and social organizations. It is a strategic orientation within the Party and State's guidelines for socio-economic development.

Over the past two decades of renovation, Vietnamese tourism has leveraged domestic and international resources to expand capacity, achieving an average annual growth rate of about 20% since 1990. A VNAT report shows that tourism contributed 3.7% to Vietnam's total GDP in 2007, with forecasts indicating tourism's growing share of GDP. The sector is among the few in Vietnam capable of generating more than USD 2 billion in annual revenue. More than ten years ago, Vietnamese tourism was at the bottom of the regional rankings, but it has since surpassed the Philippines and now ranks fifth in the region behind Malaysia, Singapore, Thailand, and Indonesia. According to the World Travel & Tourism Council, Vietnam is among the fastest-growing tourism destinations in both the region and the world; in 2004, the WTTC ranked Vietnamese tourism growth seventh out of 174 countries.

Figure 2 - Forecasted tourism value in GDP and % contributed to total GDP (VNAT, 2002)

Mrs. Vo Thi Thang, then chairwoman of VNAT, stated that tourism's effectiveness is evident across multiple dimensions: where tourism develops, the environment becomes more attractive and livelihoods improve. The spillover effects of tourist activity extend to other sectors and regions, expanding markets for local goods and services and fueling the growth of related industries. Each year, dozens of traditional festivals are revived and better organized, traditional customs are promoted, and handicraft villages are revitalized, often becoming tourist destinations where crafts are produced and sold to visitors. This revival creates jobs, boosts local incomes, and helps transform economic structures at national and local levels, contributing to poverty reduction. Tourism development also provides revenue for the embellishment and restoration of historical relics, while heightening the responsibility of state agencies, local authorities, and communities to preserve cultural heritage. Tourism promotion and advertising, both domestically and abroad, convey national cultural values to international visitors and local audiences alike.

Tourism has contributed to human-resource development in the course of renovation, creating more than 750,000 jobs that improve knowledge, material well-being, and spiritual life, and expanding exchanges between regions across the country and with other nations. It has fulfilled its role as the people's diplomat, fostering peace, an open economy, and socio-economic development, while also attracting international support and consensus for national construction and defense.

Vietnam has been promoting industrialization and modernization, expanding cultural exchange and international economic integration, with tourism taking a spearhead role; consequently, Vietnam Tourism is responsible for developing a competitive tourism industry that makes Vietnam one of the most attractive destinations in the world. The key objective is to build a robust tourism labor force in both quantity and quality, equipped with relevant occupational knowledge and skills, ethical business practices, and strong management expertise (Hoang, 2006). Nevertheless, several problems and challenges must still be overcome to realize this vision.

In Vietnam, about 1 million workers, roughly 2% of the total labor force, directly participate in tourism, spanning education levels from on-site training to college and postgraduate qualifications. According to Nguyen Phu Duc, president of the Vietnam Tourism Association, about 53% of tourism employees have not completed elementary school, 18% have only elementary education, 15% have a high-school level, 12% hold college or university degrees, and just 0.2% possess postgraduate certificates. In addition, around 750,000 people work indirectly in tourism without any hospitality training. About 250,000 people work directly in hotels and travel companies, with 42% of direct-service workers properly trained, 38% coming from other sectors, and the remaining 20% untrained.

According to a VNAT forecast, by 2020 nearly 2 million people will be working directly and indirectly in the tourism industry, up from 450,000 in the year 2000. The rapid increase in the industry's workforce demands serious training for newcomers and re-training for current employees.

Figure 3 - Forecasted tourism labor quantity till 2020 (VNAT, 2002)

Vietnam's tourism sector presents more than the broader labor picture; in the hotel and travel industry, three major education-related challenges hinder skill development: a lack of hospitable attitudes, insufficient foreign language proficiency, and limited professional competencies (Funnekotter, 2006). Specifically, there are gaps in professionalism, weak technical and language skills, limited general knowledge, poor understanding of customer psychology, low enthusiasm, and a lack of self-confidence. Thuy Hoa (2008) notes that more than half of tourism workers cannot use foreign languages, a serious disadvantage since foreign language proficiency is a key tool for engaging foreign tourists. About 45% of tourism workers can use at least one foreign language, with English speakers accounting for 40.87%, Chinese 4.59%, French 4.09%, and other languages 4.18%; among those who can use English, only 15% have a university education, while the remainder are at levels A, B, or C. Additionally, 28% of tourism workers know more than two foreign languages.

With limited training and education, the quality of tourism workers remains low, and skilled staff are unevenly distributed, concentrated in urban areas while remote regions lack qualified personnel. A recent survey in Ho Chi Minh City shows that only about 40% of enterprises are satisfied with graduates' occupational skills, with many expressing reservations about graduates' readiness. In Hanoi, the city reports that roughly 360,600 people are trained in occupational skills each year, yet only a small share secure employment after graduation, while local enterprises still struggle to fill vacancies.

The tourism industry currently faces a shortage of skilled workers who can meet employers' requirements, signaling a broader manpower gap across the sector. Overall, the tourism workforce is insufficient and underqualified, which hampers performance and growth. Therefore, it is necessary to implement training and retraining programs not only for frontline staff but also for trainers and managers, to raise overall competency and sustain industry competitiveness.

Vietnam has not yet established professional standards for tourism education (Nguyen, 2008); there is no university dedicated specifically to tourism in the country, though more than 20 universities have tourism departments. However, faculty in these departments are not required to hold specific qualifications, and they often lack proper textbooks or curricula. As a result, each university teaches in its own way, leading to uneven and inconsistent standards across the sector. Graduation standards are generally low and do not meet real-world demands.

The country has more than 40 vocational schools, of which 15 are in Ha Noi and seven in Ho Chi Minh City. These schools provide training for many different fields, but only some teach tourism. There are only four dedicated tourism schools, in Ha Noi, Hue, Vung Tau, and Ho Chi Minh City.

With the increasing demand for manpower in the sector, the volume of training offerings is too small. In addition, Vietnam does not yet have professional standards for tourism. As a result, there is a need to develop and apply the Vietnam Tourism Occupational Skill Standards (known as VTOS) widely in the workplace.

From an economic and human-resources perspective, the labor force is a core driver of product value. As the value of the labor force increases through training and the practical experience workers gain, production efficiency improves, helping to reduce production costs and influence product pricing. Ongoing investment in workforce development, including skills training, on-the-job learning, and sector-specific experience, boosts labor-force value and supports competitive pricing in the market.

Research purpose

Recognizing the crucial role of training evaluation in enterprises for demonstrating effectiveness, value, and contribution to organizational objectives, this study identifies the challenges involved in applying Kirkpatrick's four-level model in practice and flags the major issues facing Vietnam's tourism industry; it focuses on applying the Kirkpatrick model to the Trainer Development Program (TDP) of the HRDT project in Hanoi tourism enterprises. Through four levels of evaluation, the findings will guide improvements in the effectiveness and efficiency of tourism training in enterprises, thereby elevating the quality of Vietnam's tourism workforce.

The main objectives of the research are as follows:

• To evaluate the extent to which trainees of the TDP react to the program, gain knowledge, and change behavior back in the workplace, and the organizational benefits gained from the program.

• To measure differences in training outcomes across the four levels of evaluation between the group of trainees from travel agencies and those from hotels.

• To provide valuable information and suggestions to improve the effectiveness and efficiency of the TDP.

• To make recommendations for Hanoi tourism enterprises in their on-the-job training programs.

• To serve as a learning lesson for other training programs.

• To serve as initial research on training evaluation in Vietnam and a starting point for further research on this issue.

Research significance

Human resources are a distinctive form of capital, with the input of people acting as the decisive factor in any business process. Training and the effective utilization of human resources are investments that carry the same strategic importance as other assets. The key investment partners (training institutions, trainees, employers, and government) fulfill varied yet complementary roles, all seeking to meet labor-market demand while pursuing their own objectives.

Human resources in the tourism industry have distinctive characteristics that require high-level professional and practical skills, because their primary audience is visitors. To ensure quality service, theoretical knowledge, attitudes, and practical competencies must be tested and demonstrated through hands-on experience in serving tourists.

The practice of training and evaluation in the Vietnamese tourism industry calls for systematic research into measuring training effectiveness in tourism enterprises.

This study contributes to the literature by evaluating the effectiveness of Kirkpatrick's four levels in Hanoi's tourism enterprises, using the Trainer Development Program of the HRDT Project as a representative case. It offers a forward-looking view on training evaluation in Vietnam's tourism sector through the HRDT Project and the Trainer Development Program. By measuring how trainees gain knowledge, respond to the program, change on-the-job behavior, and generate organizational benefits from the TDP, the findings provide actionable insights and recommendations to improve the effectiveness and efficiency of training, thereby benefiting the tourism industry nationwide.

As economies transition from agriculture-based systems to service-focused models, this research supports the growth and resilience of the service sector. With tourism as a key component, the study highlights how the tourism industry drives value, employment, and competitiveness within the service economy.

Academically, the research serves as initial research on applying the Kirkpatrick model to Vietnamese SMEs, particularly tourism enterprises in Vietnam.

Because the evaluation of training programs in Vietnamese enterprises is often inconsistent or absent, this study provides valuable insights for training professionals, managers, tourism enterprises, and participants in the Trainer Development Program of the HRDT project, as well as for anyone interested in improving training outcomes in Vietnam.

Research questions and hypotheses

Research questions

This study aims to deepen understanding of Kirkpatrick's model by evaluating the effectiveness of training in Hanoi's tourism enterprises, using the Trainer Development Program of the HRDT project as a representative case. To achieve this aim, the research formulates focused questions designed to examine how the training affects participant reactions, learning, on-the-job behavior, and organizational results, and to identify the factors that enable or constrain the impact of the Trainer Development Program on Hanoi's tourism sector.

• To what extent do trainees of the TDP react to the program, gain knowledge, and change behavior back in the workplace?

• What organizational benefits do Hanoi tourism enterprises gain as a result of TDP training?

• Are there any differences in reaction, learning, behavior, and results between participants from hotels and those from travel agencies?

• Has the TDP training helped increase employees' commitment to the organization?

Research hypotheses

Human resources are foundational to the development of any industry, and when designed, adapted, and presented effectively, nearly any resource (human, natural, or otherwise) can become a tourism product. Strong leadership and a qualified team are decisive for achieving tourism goals and for transforming the sector into a significant contributor to the service economy and national income. EU-supported training courses are boosting capacity building and improving training methods for human-resources development in tourism. As Dr. Tran Trung Dung, principal of Hai Phong School of Tourism, notes, the EU project is playing a key role in creating positive changes in training methods and ideas across tourism institutions and enterprises in Vietnam.

An observation by the Director of Kim Lien Hotel and Tourism Company confirms that the EU project's training courses provide strong opportunities for all participating enterprises. The immediate, tangible results speak for themselves.

Training results are evident in the improved quality of knowledge and the professionalism of staff. Thanks to employees who completed these courses, the tourism workforce will comprise qualified professionals who help managers assess professional expertise and occupational skills more precisely (Phan, 2006). Moreover, numerous directors of tourism enterprises have sent letters of thanks to the HRDT project management unit for the training courses.

From all the positive feedback from enterprises' employees and managers who have attended the TDP's training courses, it is possible to hypothesize that:

(1) The TDP shows its effectiveness in tourism enterprises.

(2) There are differences in terms of trainees' reactions, learning, behavioral changes, and organizational benefits between the group of travel-agency trainees and the group of hotel trainees.

(3) Training increases the level of employees' commitment to the organization.

Thesis structure

This thesis comprises seven chapters. Chapter 1 (Introduction) presents the research background, problem identification, purpose, significance, research questions, and hypotheses. Chapter 2 reviews and synthesizes global evaluation approaches, models, and studies to establish a foundation for the Vietnam context. Chapter 3 outlines the methodology, restating the research questions and detailing the research design, data collection, and data analysis. Chapter 4 presents the research findings and discusses the major results. Chapter 5 discusses research implications for relevant stakeholders and highlights the academic value of applying the Kirkpatrick model in Vietnam. Chapter 6 examines research limitations and outlines future research possibilities, paving the way to the concluding chapter.

Chapter summary

Recent market shifts have intensified competition, pushing businesses to invest more in recruiting, developing, and retaining a skilled workforce. Companies now spend more money, resources, and time on training programs, and the high cost of training has led practitioners to emphasize measuring training effectiveness. In practice, many rely on Kirkpatrick's four-level model for evaluation, which divides assessment into reaction, learning, behavior, and results; however, numerous studies show that the full taxonomy is not widely used beyond the first two levels.

Vietnam has been promoting industrialization and modernization, boosting cultural exchange and international economic integration, with tourism playing a spearhead role. The development of human resources in tourism is an intensive, urgent, and strategic task supported by the National Administration of Tourism and other ministries, and strengthened by international partners, including the EU-funded Vietnam Human Resources Development in Tourism (HRDT) Project. Within this project, the Trainer Development Programme is the training initiative most closely aligned with tourism institutions and enterprises, addressing two major problems: the lack of professionalism and insufficient competencies, along with divergent standards among enterprises. Past PMU evaluations focused mainly on reaction and learning, prompting this study to apply the Kirkpatrick model to the TDP in Hanoi tourism enterprises. By assessing trainees' knowledge gains, their reactions to the program, changes in on-the-job behavior, and the organizational benefits of the TDP, the findings will offer valuable information and recommendations to improve training effectiveness and efficiency, thereby benefiting the Vietnamese tourism industry as a whole.

Evaluation is a core component of most instructional design models, with tools and methodologies that determine the effectiveness of instructional interventions. The training-program evaluation phase is a critical, culminating step in the Analyze-Design-Develop-Implement-Evaluate (ADDIE) process, guiding how assessments inform improvements. Despite its importance, measuring how a training program impacts an organization remains challenging for training professionals. Empirical research and case studies on organizational impact are relatively scarce, underscoring the need for robust evaluation methods and clear measurement strategies (Weinstein & Waite, 1998). Furthermore, there is an ongoing debate in the field of evaluation about which approach is best to facilitate the processes involved (Eseryel,

This chapter surveys and synthesizes global evaluation approaches, models, and studies to establish a foundation for the research conducted in Vietnam, and it comprises six main parts: definitions and purposes of evaluation; a concise review of current approaches to evaluating training; the application of the Kirkpatrick model to training evaluation; an overview of other models used as training-evaluation tools; the challenges inherent in the training-evaluation process; and a comparative discussion of training-evaluation practices in Vietnam and in other countries.

Definition and purpose of evaluating training effectiveness

Towards a definition

Providing a sound definition is more than a lexicographic exercise; it clarifies and refines concepts, generating a framework within which to develop a pragmatic approach to the subject (Foxon, 1989). Evaluation is no exception, and the apparent confusion in many minds as to the purposes and functions of evaluation corresponds to ignorance or misunderstanding of what is meant by this and related terms such as research, validation, and assessment. A variety of definitions can be found in the literature, many of them stipulative, and the inconsistent use of terminology has "muddied the waters" of training evaluation a great deal, affecting the success of evaluation efforts (Wittingslow, 1986).

Across the reviewed literature, evaluation is commonly defined as the systematic gathering of information used to make value judgments about a program. These judgments guide decisions on necessary changes or, in some cases, the possible cessation of the program.

Williams (1976) defines evaluation as the assessment of value or worth. Harper & Bell describe evaluation as the planned collection, collation, and analysis of information to enable judgments about value and worth. Scriven (1991) views evaluation as a trans-disciplinary enterprise, broader than any single area of applied social science, and defines it as the process of determining the merit, worth, and value of things. Rossi and Freeman (1993) define evaluation as “the systematic application of social research procedures for assessing the conceptualization, design, implementation, and utility of programs.”

Several definitions (Goldstein, 1978; Siedman, 1979; Snyder et al., 1980) focus on the determination of a program's effectiveness, while other definitions emphasize evaluation as a basis for determining how programs can be improved (Rackham, 1973; Smith, 1980; Brady, 1983; Morris, 1984; Foxon, 1986; Tyson & Bimbrauer, 1985). The distinction between formative and summative evaluation is not mentioned by most of these writers, but it is implicit in their definitions.

Many writers not only differ in their definition of evaluation; they also use evaluation terminology interchangeably and in some cases quite confusedly. Burgoyne & Cooper (1975), for example, use the term evaluation research as synonymous with evaluation.

Evaluation and research may seem similar at first glance, but they have distinct aims and methods. Research is focused on advancing scientific knowledge; it is not assumed to be immediately useful or practical, and it typically features control groups, experimental designs, and a commitment to total objectivity. Evaluation, in contrast, is defined by its context (the problem is shaped by the setting in which it occurs), and the evaluator's task is to test generalizations rather than hypotheses. While the evaluator may be forced to confront value judgments at multiple stages, the researcher is expected to avoid any subjectivity.

Evaluation consists of description and judgment, while measurement or assessment provides the data on which that evaluation rests; this creates confusion around the terms “evaluation” and “validation,” a distinction some writers treat differently across regions. While most American writers treat validation as part of evaluation, British scholars such as Hawes & Bailey (1985) and Rae (1985) distinguish the two, with Rae defining assessment as measuring the practical results of training in the work environment, which, together with validation of the training and training method, comprises evaluation. Consequently, in HRD literature the terms “validation” and “evaluation” do not always mean the same thing (Foxon, 1989).

The literature reveals a broad spectrum of definitions and widespread confusion around related terms, indicating that HRD practitioners have yet to settle on a clear, shared understanding of what 'evaluation' actually entails.

Purpose of training evaluation

As well as the lack of an agreed-on definition of evaluation, there is an equally broad range of opinions as to the purpose of evaluation.

Bramley and Newby (1984) identify five main purposes of evaluation: feedback, which links learning outcomes to objectives and provides a form of quality control; control, which uses evaluation to connect training with organizational activities and to assess cost-effectiveness; research, which determines relationships between learning, training, and transfer to the job; intervention, in which the results of the evaluation influence the context in which it occurs; and power games, where evaluative data can be manipulated for organizational politics.

Evaluation serves to provide data that demonstrate a program's effectiveness on targeted behavior (Hewitt, 1989, p.23). Wigley (1988, p.21) presents a broader aim, emphasizing that evaluation should be used to improve the program and facilitate informed decision making. Overall, Bushnell offers the most comprehensive view of the purpose of evaluation.

Bushnell (1990, p.41) outlines four evaluation purposes for training: confirming that programs achieve their intended objectives; identifying the changes needed to improve course design, content, and delivery; verifying that trainees actually acquire the necessary knowledge and skills; and balancing the costs against the results of training. These purposes all rely on end-stage evaluation methods, since the data required to satisfy them can be gathered only after the training program has concluded.

Evaluation should fulfill these purposes, with the central aim of influencing decisions about the need for future training programs, the need to modify future programs, the need to modify instructional tools at all stages of the instructional process, and the need to provide cost or benefit data regarding the programs offered by the training department or a consultant. If decisions are to be influenced, then the culmination of any set of evaluation activities is the compilation of a report containing recommendations. A good evaluation chain generates actionable guidance that supports program development, tool refinement, and informed budgeting for learning and development initiatives.

Approaches to evaluation of training

Goal-based vs. systems-based approaches

Evaluation of training programs typically relies on two dominant paradigms: goal-based and systems-based approaches. The goal-based camp is epitomized by Kirkpatrick's four-level framework (reaction, learning, behavior, and results), which maps participant responses to subsequent performance outcomes and has driven extensive later research. On the systems side, influential models include the CIPP (Context, Input, Process, Product) model, the TVS (Training Validation System) approach, and the IPO (Input, Process, Output, Outcome) model, all of which emphasize evaluation across stages from context and resources through implementation to final outcomes.

Goal-based evaluation models can help practitioners frame the purposes of evaluation, ranging from purely technical to covertly political aims, but they stop short of prescribing the steps needed to realize those purposes or explaining how results should be used to improve training. The key challenge for practitioners is selecting and implementing appropriate evaluation methods (quantitative, qualitative, or mixed) within these frameworks. Because of their apparent simplicity, trainers often dive in and start using such models without first assessing their needs and resources or outlining how they will apply the model and interpret the results (Bernthal, 1995, p. 41).

Systems-based models such as CIPP, IPO, and TVS help frame the overall context of training programs, but they often lack granularity and fail to represent the dynamic interactions between design and evaluation. Few of these models offer detailed descriptions of the steps and processes involved, and none provide practical tools for evaluation. Moreover, they overlook the collaborative nature of evaluation, including the various roles and responsibilities people assume during an evaluation process (Eseryel, 2002).

Goal-based vs. goal-free approaches

Comparing goal-based and goal-free evaluation approaches shows that goal-based evaluation tends to measure intent and may miss other achievements, because evaluators often seek what they expect and overlook unanticipated changes. The goal-free method overcomes this bias in program evaluation by having the evaluator, unaware of the programme's objectives, talk with participants about any benefits they received, enabling the detection of both unintended and intended effects. This approach yields two kinds of information: an assessment of actual effects and a set of needs that helps judge the importance of those effects. Although some critics overstated goal-free methods as a reaction to the ubiquity of goal-based evaluation, many evaluators now collect qualitative data on what participants value alongside quantitative data on the number of objectives achieved.

Responsive evaluation approach

There was a growing conviction that evaluation is a political process and that society's diverse values are not captured by an evaluative method that assumes consensus is possible. In response, the term “responsive evaluation” was introduced by Stake (1975) to describe a strategy in which the evaluator concentrates less on the program's stated objectives and more on its effects as they relate to the concerns of interested parties, the stakeholders.

In a responsive evaluation, the evaluator begins by identifying the principal clients to understand the range of postures toward the programme and the purposes each group has for the evaluation. The next step is to make direct, personal observations of the programme to grasp what it is actually about. Through this process, the evaluator uncovers the programme's stated and real purposes and the concerns held by various stakeholders. With these insights, the evaluator is positioned to conceptualize the key issues and problems the evaluation should address.

According to Legge (1984), responsive evaluation is gaining ground as the preferred method for evaluating educational and social programs in the USA. Its strengths for evaluating training and development activities within organizations come from its consideration of the interests of multiple stakeholders, not just the program sponsors, and it provides a clear rationale for collecting information: the needs of the various stakeholders.

Professional review approach

Most courses leading to professional recognition are approved by a committee which reviews evidence of what the course will contain and whether it reaches the desired standard. This is what we are calling the professional review strategy.

This sort of approach can be used within an organization to consider the relevance of a syllabus to organizational requirements and the breadth and quality of the programme. An informal variation of this is quite common; the training manager offers a particular programme and puts it into the brochure because he or she has carried out a ‘professional review’ of it.

Quasi-legal approach

Quasi-legal evaluation operates like a court of inquiry, with witnesses called to testify and evidence submitted. Great care is taken to hear a wide range of input (opinions, values, and beliefs) from the organizers of the programme, the users, and the accountants. This approach has been used to evaluate social programmes but, to our knowledge, not for formally evaluating training or development activities sponsored by organizations. It might be suitable for such purposes provided a sufficiently impartial ‘judge’ could be found and provided some agreement could be reached about who comprised the key witnesses.

Kirkpatrick model applied to training evaluation

Kirkpatrick's four level model

Donald Kirkpatrick first described in 1959 a straightforward four-level taxonomy for evaluating training and programs, with the levels sometimes referred to as steps or segments. The structure of this four-level taxonomy implies that each level builds on the outcomes of the preceding level. In his work, Kirkpatrick identified four levels of evaluation:

• Reaction: trainees' “liking of” and “feeling for” a training program.

• Learning: “principles, facts, and techniques understood and absorbed” by the trainees.

• Behavior: “using [learned principles and techniques] on the job”.

• Results: “results desired: reduction of costs, reduction of turnover and absenteeism, reduction of grievances, increase in quality and quantity of production, or improved morale”.

The Kirkpatrick model shows that evaluating training at each level answers whether a fundamental requirement of the program was met, and it does not imply that one level is more important than another; every level matters. Each level acts as a diagnostic checkpoint for problems that may surface at the next level, so if participants did not learn (Level 2), the reactions captured at Level 1 can reveal barriers to learning. Likewise, if participants do not use the skills in the workplace (Level 3), this may indicate that the necessary learning did not occur at Level 2 in the first place.

According to Alliger and Janak (1989), the power of Kirkpatrick's model lies in its simplicity and its ability to frame discussions about training evaluation criteria, making it easier for practitioners to think about what to measure. In essence, it provides a common vocabulary and a rough taxonomy for criteria, enabling a clearer, more systematic approach to evaluating training outcomes. While the level-based structure enhances accessibility and communicability, its simplicity can sometimes overlook nuanced results in more complex training programs.

Levels 1 and 2 have contributed to the popularity of Kirkpatrick's Four-Level Training Evaluation Model, yet many organizations do not fully implement it, often citing the perceived complexity of Levels 3 and 4. Business Performance Pty Ltd (2002) argues that the difficulty and cost of conducting an evaluation increase as you move up the levels, so decisions about which levels to evaluate must be made carefully for each program. Level 1 evaluation (Reaction) may be conducted for all programs; Level 2 (Learning) for hard-skills programs only; Level 3 (Behavior) for strategic programs only; and Level 4 evaluations (Results) for programs costing over $50,000.
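To make this rule of thumb concrete, the guidance above can be sketched as a small decision helper. This is an illustrative sketch only, not a tool published by Business Performance Pty Ltd; the function name and parameters are hypothetical.

```python
# Hypothetical helper encoding the rule of thumb attributed above to
# Business Performance Pty Ltd (2002) for choosing which Kirkpatrick
# levels to evaluate for a given program.
def levels_to_evaluate(cost_usd, hard_skills, strategic):
    levels = [1]              # Level 1 (Reaction): conducted for all programs
    if hard_skills:
        levels.append(2)      # Level 2 (Learning): hard-skills programs only
    if strategic:
        levels.append(3)      # Level 3 (Behavior): strategic programs only
    if cost_usd > 50_000:
        levels.append(4)      # Level 4 (Results): programs costing over $50,000
    return levels

# A strategic soft-skills program costing $60,000:
print(levels_to_evaluate(60_000, hard_skills=False, strategic=True))  # -> [1, 3, 4]
```

The point of the sketch is simply that evaluation depth should be a deliberate, per-program decision rather than a default.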

Various figures in the literature (e.g., Conway, 2002) suggest that about 90% of organizations measure reaction, while only around 20% measure learning, roughly 10% measure behavior, and fewer than 5% measure results. Additionally, Schmalenbach (2002) argued that Kirkpatrick's level 4 does not readily address whether the investment was worthwhile. Despite this limitation, many evaluation practices remain skewed toward reaction and, to a lesser extent, learning and behavior, with fewer organizations capturing the ultimate impact on results.

Kirkpatrick has made the most important contribution to the goal-based approach.

An added 5th level to Kirkpatrick model

In 1991, Jack Phillips added a 5th level to the Kirkpatrick model, called ROI or return on investment. This level compares the costs and benefits of training. The units of measurement do not have to be financial, though they often are, and in evaluation they can capture the value of training beyond money. However, some evaluation thinkers have pushed back against this added fifth level, because ROI is a term from the world of finance and implies that training must pay for itself financially or it should not happen at all.

Return on investment (ROI) is calculated using this formula (Mejia, Balkin & Cardy):

ROI (%) = (Training benefits − Training costs) / Training costs × 100
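As a worked illustration of the standard ROI calculation (the cost and benefit figures below are hypothetical, not data from this study):

```python
def roi_percent(training_benefits, training_costs):
    """Return on investment as a percentage:
    ROI = (benefits - costs) / costs * 100."""
    if training_costs <= 0:
        raise ValueError("training_costs must be positive")
    return (training_benefits - training_costs) / training_costs * 100

# Hypothetical program: $50,000 in costs yielding $80,000 in measured benefits.
print(roi_percent(80_000, 50_000))  # -> 60.0 (a 60% return)
```

A negative result indicates that measured benefits did not cover the cost of the program.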

Using Kirkpatrick Model

Training evaluation is not straightforward, and evaluators apply the Kirkpatrick model to measure effectiveness across its four levels. Business Performance Pty Ltd (2002) provides a quick guide detailing suitable information sources for each level, helping practitioners select the right data at every stage of the evaluation.

• Level 1 (Reaction): focus group sessions with participants.

• Level 2 (Learning): pre- and post-test scores.

• Level 3 (Behavior): completed self-assessment questionnaires; reports from customers, peers and the participant's manager.

Figure 5 - Using Kirkpatrick model - Business Performance Pty Ltd (2002)

Other evaluation models

Tyler’s Objectives Approach

Ralph W. Tyler’s seminal 1949 work, Basic Principles of Curriculum and Instruction, argues that education faces a central problem: programs lack clearly defined purposes. He says these purposes should be translated into concrete educational objectives, and an objective-based evaluation framework lies at the heart of his proposal. Objectives are crucial because they underpin planning, guide instruction, and inform the development of tests and assessments, while also providing the basis for a systematic evaluation of a program.

The process of evaluation proposed by Tyler had a number of phases, as follows:

• Collect, from as wide a consultation as possible, a pool of objectives which might be related to the curriculum.

• Screen the objectives carefully to select a subset which covers the desirable changes.

• Express these objectives in terms of the student behaviors which are expected.

• Develop instruments for testing each objective. These must meet acceptable standards of objectivity, reliability and validity.

• Apply the instruments before and after learning experiences.

• Examine the results to discover strengths and weaknesses in the curriculum.

• Develop hypotheses about reasons for weaknesses and attempt to rectify these.

• Modify the curriculum and recycle the process.

Discrepancies in performance would then lead to modification, and the cycle begins again (Worthen and Sanders, 1987).

Ralph Tyler’s educational philosophy aligns closely with contemporary views, notably echoing Kirkpatrick’s Level 3 outcomes, and he characterizes education as an active process driven by the learner’s own efforts. He was also clearly influenced by Bobbitt’s ideas on job analysis and behaviorism, arguing that education is a process of changing people’s behavioral patterns. In this Tylerian framework, evaluation becomes a means to determine the educational effectiveness of learning experiences, a standpoint later echoed by Bloom, Madaus, and Hastings (1981).

Scriven’s Focus on Outcomes

In 1957 the Russians launched their first Sputnik, and this had a profound effect in America: the education system was widely blamed for not producing sufficiently imaginative citizens, and in the search for new curricula, educators found that the traditional evaluation system based on Tyler’s work did not help. Soon after, Scriven’s model became the most influential framework, with Scriven distinguishing two types of evaluation: formative evaluation, which aims to improve the program, and summative evaluation, which aims to judge its worth. He also argued that evaluation should consider not only whether goals were achieved, but also whether the goals themselves were worth achieving.

Scriven’s Focus on Outcomes model (1967) defines an outcomes-based evaluation in which an external evaluator, unaware of the program’s stated goals and objectives, determines the program’s value by examining the actual outcomes and the quality of their effects. This independent assessment prioritizes measurable impact over intended aims, providing a rigorous judgment of program effectiveness (Schmalenbach, 2002).

From an organizational performance perspective, this approach is acceptable because it makes it easier to observe the program’s impact at a macro level than when focusing on individual performance or goals. Yet it raises concerns about individual bias and interpretation, and about how thoroughly the evaluator is briefed. By design, this model cannot readily forecast likely outcomes and therefore is not readily usable in ROI analyses, particularly since it offers little guidance on identifying root causes of poor performance or unwanted behaviors (Vail, 2006).

Stufflebeam’s CIPP Model

The CIPP model, which was developed by Daniel Stufflebeam and colleagues in 1966, is known as a systems evaluation model. Primary components include:

• Context - identify target audience and determine needs to be met.

• Input - determine available resources, possible alternative strategies, and how best to meet the needs identified above.

• Process - examine how well the plan was implemented.

• Product - examine results obtained, whether needs were met, and what planning for the future is required.

This model simultaneously examines both process and product, offering formative feedback to shape improvements while the project is underway and a summative assessment of outcomes at the end (Robinson, 2002). However, according to Stufflebeam (2002), the evaluation of likely outcomes is not included prior to actual training delivery, so the model does not readily fit ROI contexts without further modification. The contextual element suggests that training is part of the solution and often requires a prior step, which further distances this model from ROI-based evaluation as it currently stands. Unlike the Phillips and Kirkpatrick models, this approach requires evaluating the effectiveness of the process itself (often referred to as validation) to avoid conflating process assessment with outcome evaluation, i.e., to determine whether the training delivered its objectives.

CIRO Model

Another evaluation approach that lends itself to adaptation is that described in the work of P. Warr, M. Bird and N. Rackham in 1970. This model, known by the acronym “CIRO”, is also based on four measurement categories: context evaluation, input evaluation, reaction evaluation and outcome evaluation. Therefore, it encompasses several of Kirkpatrick’s levels, specifically level 1 and arguably level 4 if the outcomes are expressed in terms of business impact. It is also very similar to the CIPP model in most other respects, and “shares in a lack of detail and prescription in how to undertake any of four main elements” (Harris, 2004).

Schmalenbach (2002) argues that the CIPP and CIRO evaluation models follow Kirkpatrick and Phillips by employing control groups and expert assessments of improvements, creating a repeatable process that can begin to answer questions about program value and the efficient use of limited resources.

Guba's Naturalistic Approach

The Guba & Lincoln model (1982) places its emphasis on collaboration and negotiation among all the stakeholders as a change agent in order to “socially construct” a mutually agreed-upon definition of the situation.

In many organizations, all stakeholders involved, including evaluators, are assumed to be equally willing to agree to change. In practice, this most closely reflects reality when evaluation occurs after the fact. Without objective tools, stakeholders collectively assess the value of the program, which gives some structure to the notion that training is done on trust. As Laughlin and Broadbent (1996) argue, this approach does not meet the rigor and objectivity demanded by ROI analysis.

V Model

According to Schmalenbach (2002), the ‘V Model’ as adapted by Bruce Aaron is based on an approach used in the IT world for developing software.

As stated in this model, imagine a ‘V’ where the left-hand slope is labeled analysis and design. Moving down this slope from the top, you find ‘business need’, then ‘capability requirements’, then ‘human performance requirements’, and finally, at the bottom where the left and right slopes meet, the ‘performance solution’. The right-hand slope is labeled measurement and evaluation; moving down from its top you find ‘ROI or business results’, then ‘capability status’, then ‘human performance impact’, the metrics and criteria used to assess performance.

Each element connects deliberately to its counterpart on the opposite slope at the same level, creating a symbiosis between analysis and design on the one hand and measurement and evaluation on the other. This relationship is both formative and summative, assessing capability and process alongside the final solution or product.

This framework is designed to support an ROI-driven approach, enabling consideration of return on investment and evaluation before committing to a solution. Although it is not immediately clear how ROI and evaluation can be forecast ahead of implementation, the model does support the ROI concept, even if the details of forecasting are not fully specified.

Interestingly, none of the models (except perhaps the ‘V’ model) specifies who should be responsible for which tasks. Most of the thinking has been done by people connected to the training world, which creates an assumption, borne out in practice, that the trainers do the work and shoulder the blame for a poor result.

Challenges in evaluation of training

Instructional systems design is a systematic process for developing the necessary workplace knowledge and expertise, and it includes an evaluation component to determine whether the training program achieves its intended goals and delivers the expected outcomes. However, evaluation is often overlooked during the design and implementation of training programs, and a number of challenges have been noted for organizations that fail to conduct systematic evaluations.

According to Wang and Wilcox (2006), many training professionals either doubt the value of evaluation or lack the mindset needed to conduct it. Others refrain from evaluating their training programs due to limited confidence in whether these programs deliver value or impact for their organizations (Spitzer, 1999). Additional obstacles include a lack of resources and expertise, as well as an organizational culture that does not support evaluation efforts (Desimone, Werner, & Harris).

Progressing through the Kirkpatrick model increases evaluation complexity; while level 1 is easier and cheaper to assess, level 4 entails far greater challenges and cost, a point Kirkpatrick himself underscored, with estimates that roughly 95% of training is evaluated at level 1 but only 5-10% at level 4 (Schmalenbach, 2002). Hubbard (2001) also pinpointed several shortcomings of the Kirkpatrick model that create difficulties for evaluators in accurately determining training effectiveness. The first measure, what participants say about the training, suffers from poorly designed evaluation forms, a flaw that can be remedied by crafting course evaluation questions that mirror the learning objectives and by attaching a numerical scale to each item. The third measure, whether the skills acquired are used back in the workplace, highlights the need to assess transfer of training to on-the-job performance and is a particularly emotive one. Lastly, the fourth measurement, impact on the organization, is hard to measure in almost all cases.
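Hubbard's first remedy, reaction questions that mirror the learning objectives with a numerical scale attached to each item, can be sketched as a simple score summary. The question names, ratings, and the 3.5 redesign threshold below are all hypothetical, assumed purely for illustration:

```python
from statistics import mean

# One dict per trainee; each key is a reaction-form question tied to a
# learning objective, rated on a 1-5 scale (hypothetical data).
responses = [
    {"objectives_clear": 4, "skills_practiced": 5, "job_relevance": 3},
    {"objectives_clear": 5, "skills_practiced": 4, "job_relevance": 4},
    {"objectives_clear": 3, "skills_practiced": 4, "job_relevance": 2},
]

# Mean rating per objective-linked question.
per_question = {q: mean(r[q] for r in responses) for q in responses[0]}

# Flag questions whose mean falls below an assumed 3.5 redesign threshold.
weak_spots = [q for q, m in per_question.items() if m < 3.5]
print(per_question)
print(weak_spots)  # -> ['job_relevance']
```

Summaries like this turn Level 1 data from a vague "smile sheet" into per-objective signals that can feed course redesign.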

Anthony R. Montebello and Maureen Haga (1994) identified three key challenges in evaluating training outcomes: selecting appropriate measures, ensuring the reliability of those measures, and determining the timing of assessments. Outcome measures are often not possible until several months after training, and the longer the interval, the more likely other events will influence the results. Additional considerations in choosing measures of change include practical factors such as cost, availability, and accessibility.

Practices in evaluation of training

There is ample evidence that evaluation continues to be one of the most vexing problems facing the training fraternity (Foxon, 1989). Catanello and Kirkpatrick’s 1968 survey of 110 industrial organizations evaluating training revealed that very few were assessing anything other than trainee reactions.

Across related data and much of the literature, the evaluation of training is almost always limited to end-of-course trainee reactions, and the resulting data are seldom utilized. For example, Galagan (1983) and Del Gaizo (1984) cite a Training and Development Journal survey in which 30% of respondents identified evaluating training as the most difficult part of their job.

A 2006 ASTD survey of American organizations shows a strong emphasis on reaction-based training evaluation: 94% measure participants’ reactions, 31% assess learning (Level 2), 13% track behavior changes (Level 3), and only 3% measure business results (Level 4). This distribution reveals a bias toward simple, superficial evaluation metrics that overlook deeper learning impact and measurable business outcomes.

Two European Commission projects have recently collected data on evaluation practices across Europe. Analysis of survey responses indicated that formative and summative evaluations were not widely used, while immediate and context- or needs-analysis evaluations were more widely employed. In most cases, managers bore responsibility for evaluation, with informal feedback and questionnaires among the most frequently used methods. The majority of respondents claimed to assess the impact on employee performance (the learning level), whereas less than one-third assessed the impact on the organization (the results level). Information from evaluations was used mainly for feedback to individuals, with less emphasis on revising the training process and only rarely for return-on-investment decisions. The studies also found statistically significant effects of organizational size on evaluation practice, with small firms constrained by their internal resources, where managers are probably responsible for all aspects of training (Sadler-Smith et al., 1999).

In Australia, a survey of public service and private sector trainers found a strong belief in evaluation and widespread use of end-of-course forms to measure trainee reactions to instructors, course content, and facilities. However, 75% admitted that their evaluation stopped there, mainly because they did not know what else to do. As Easterby-Smith and Tanton (1985) observed, much current practice is ritualistic, and in many cases the post-course data merely confirm pre-course judgments that the training was satisfactory, implying that the real assessment often occurs before the course is delivered.

According to Foxon (1989), many practitioners view the evaluation of programs as a problem rather than a solution, and as an end rather than a means to improve training. When evaluations are undertaken, they are often conducted in a seat-of-the-pants manner with very limited scope. Trainers are overwhelmed by quantitative measurement techniques and lack the budget, time, and expertise needed for thorough evaluations. As a result, they frequently revert to the only measure they know, post-course reactions, to reassure themselves that the training was satisfactory.

If the literature reflects general practice, many practitioners do not understand what the term evaluation encompasses, what its essential features are, or what purpose it should serve. Consequently, the deployment of training courses often outpaces what is known about their actual usefulness. When such programs are evaluated, the data sources most commonly cited beyond trainee reactions are counts of participants, reductions in absenteeism, and high instructor ratings. As a result, trainers are often judging programs by activities, such as the number of training days, rather than by meaningful outcomes. Many practitioners regard the development and delivery of training as their primary concern, with evaluation treated as an afterthought.

Many practitioners cling to the premise that no news is good news and avoid formal program evaluation. They prefer to remain in the dark, fearing that evaluation will confirm their worst fears, especially since they have no viable alternative to offer management if the current program is educationally ineffective. As a result, they settle for a non-threatening survey of trainee reactions instead of a rigorous assessment of educational outcomes.

Many practitioners express a need for evaluation software to support their training practice. Consequently, in 2008 Mr. Leslie Allan of Business Performance Pty Ltd published a training evaluation toolkit that offers simple tools to help practitioners determine and measure the impact of training in their organizations. Since then, the software has grown increasingly popular within the HRD community.

However, the software has been available for too short a time for its effectiveness to have been proven in practice.

In Vietnam, a review of the literature shows that the evaluation of training programs in enterprises is often inconsistent or missing, a gap underscored by the relatively small number of studies on training evaluation (Tran, 2008). Most evaluations are conducted for formal education at colleges and universities; examples include the training evaluations at the HCMC University of Pedagogy, the Ho Chi Minh City University of Economics, Nong Lam University, and other institutions. These evaluations, however, often stop at surveying students' reactions and learning. Broader evaluations of training effectiveness are conducted primarily for ODA-funded projects addressing societal issues, such as UNFPA-supported programs (UNFPA, 2008). Moreover, training effectiveness is frequently evaluated only superficially in Vietnam because training is seen as merely the transfer of knowledge, whose gains are hard to measure. In addition, training needs are often identified vaguely, which further complicates the evaluation of training effectiveness (Vu, 2007).

Nevertheless, there is a growing recognition that a robust evaluation process is vital to the success of a training program (Tran, 2008). Training professionals are increasingly enrolling in courses that teach them how to evaluate training effectiveness within their organizations, focusing on defining learning outcomes, measuring impact, and demonstrating return on investment through proven evaluation methods.

Examples include the Senior Personnel Officer course provided by the HCMC University of Economics and the Evaluate Job Effectiveness course provided by Business Edge.

In Vietnam, no original training evaluation approaches or models seem to have been developed; what exists are translations of famous management and HRD models in books and articles Consequently, when evaluating a training program’s effectiveness, practitioners rely on a simple method—a questionnaire that captures participants’ reactions and learning.

Research design

Level one - Reaction

Level two - Learning

Level three - Behavioral change

Level four - Organizational benefits

RESEARCH IMPLICATIONS

RESEARCH LIMITATIONS AND FUTURE RESEARCH

CONCLUSIONS
