What, for example, does "prepare leadership for the truth" mean? Similarly, many generic items appear in only one section of the list. For example, the item "build trusting relationships" appears only in the marketing section but could easily appear
in other stages of the change process. The current list may therefore be constrained in its description of OD competencies. Third, Sullivan's efforts have been qualitative and consensus-building in purpose. He has been trying to get the field to develop and agree on the skills and knowledge necessary to practice OD. As a result, the list hasn't been subjected to any sort of quantitative testing, a requirement of any good competency modeling effort (Rothwell & Lindholm, 1999). We don't know, for example, whether contracting skills are any more or less important than implementation skills in successful OD practice.

The time seems right, therefore, to apply a more quantitative approach to determine whether any structure exists within the list and to explore the relative importance of various competencies.
STUDY METHODOLOGY
To explore the structure and utility of the competency list generated by Sullivan and his colleagues, the following methodology was applied.
First, the items were reviewed by the authors for use in a survey format. The items were screened for vocabulary (jargon), clarity, assumptions, and bias (Emory, 1980; Sudman & Bradburn, 1983). In particular, several items were "double-barreled" and had to be split into two or more questions. For example, one of the original items in the list was "identify the formal and informal power in the client organization in order to gain further commitment and mobilize people in a common direction." This item was broken up into several items, including "identify formal power" and "confirm commitment of resources." On the other hand, many items were considered redundant and eliminated. The final number of items used in this study was 141.
Second, the questionnaire was designed. The survey was organized into sections similar to the most recent version of the OD competencies and roughly paralleled the action research process. The survey contained nine sections.

The first two sections, marketing and start-up, roughly correspond to the entry and contracting phase of the general planned change model. The marketing section contained eighteen items and the start-up section contained thirteen items. The third section was titled diagnosis and feedback and contained twenty-three items. The action/intervention planning section contained sixteen items. The fifth section, intervention, contained seven items. The next two sections, evaluation and adoption, contained thirteen and nine items, respectively, and correspond to the evaluation and institutionalization processes of planned change. The eighth section was titled separation and contained five items. The final section presented thirty-seven items under the heading of "other competencies" and represented a list of new items suggested by experts and other contributors to Sullivan's list of competencies.
Respondents were asked whether the competency was essential to success in OD today and, if so, the importance of the item to successful OD practice. The response format for the first question was "yes/no," while the importance scale ranged from 1 (not at all critical) to 5 (absolutely critical). The intent of the importance scale was to encourage the respondent to make discriminations between competency items that were "nice to have" versus "had to have" for success. The data analyzed here are the importance ratings of the 141 items.

Third, the survey was placed on the web and invitations to complete the survey were sent out to OD professionals. This occurred in a variety of formal and informal ways. For example, personal invitations were made during national and international presentations by the authors, the national OD Network invited people through its website home page, and an announcement and invitation were included in several issues of the OD Institute's monthly newsletter. In addition, electronic invitations were sent to Internet listservs operated by the OD Network, the OD Institute, the Appreciative Inquiry Consortia, ASTD, Pepperdine University's MSOD program, and the Association for Quality and Participation. Email invitations were sent to key OD leaders.
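As a purely illustrative sketch, responses of this kind can be organized as a respondents-by-items matrix of importance ratings before analysis. The item labels, values, and missing-data convention below are invented for illustration, not taken from the actual instrument.

```python
# Illustrative sketch only: one plausible layout for the survey data described above.
# Item names and values are invented; the actual instrument had 141 items rated by
# 364 respondents on a 1 (not at all critical) to 5 (absolutely critical) scale.
import numpy as np
import pandas as pd

ratings = pd.DataFrame(
    {
        "identify_formal_power": [5, 4, 5],
        "confirm_commitment_of_resources": [4, 4, 3],
        # One plausible convention: leave importance missing when "essential?" was answered "no."
        "build_trusting_relationships": [5, 5, np.nan],
    },
    index=["respondent_1", "respondent_2", "respondent_3"],
)

# Per-item importance summaries like these feed the descriptive and factor analyses below.
print(ratings.mean().round(2))
```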
Three hundred sixty-four people responded to the survey. The demographics of the respondents are shown in Table 5.1. The modal respondent was American, came from the private sector, and had a master's degree. The sample represented a broad range of OD experience.
Table 5.1. Sample Demographics

Fourth, descriptive statistics were calculated for each item. Then the analysis proceeded along two parallel tracks. The first track was an analysis of the items within a section of the survey, and the second track was an analysis of all the items together. In the first analysis, items within a section, such as marketing or adoption, were factor analyzed to determine whether items possessed any underlying structure. Clusters of items measuring a similar concept were created and labeled. As part of this process, items that did not correlate with other items in the section were identified. That is, a factor analysis includes information about items that either do not correlate well with other items or that are too generic and therefore correlate with too many items to be useful. For example, the item "build trusting relationships" in the marketing section, while no doubt important, was consistently a part of almost every cluster and therefore provided little discriminating information.
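As a rough illustration of this first track (and not the authors' actual procedure), the sketch below factor-analyzes one section's items, assigns each item to the factor on which it loads most strongly, and flags items that load weakly everywhere or on several factors at once, the "too generic" case just described. The number of factors, the varimax rotation, and the 0.40 loading cutoff are all assumptions made for illustration.

```python
# Hypothetical sketch of the within-section analysis; not the authors' code.
# Assumes `section_ratings` is a respondents x items DataFrame of importance
# ratings for one survey section (e.g., marketing).
import pandas as pd
from sklearn.decomposition import FactorAnalysis

def cluster_section(section_ratings: pd.DataFrame, n_factors: int, cutoff: float = 0.40):
    fa = FactorAnalysis(n_components=n_factors, rotation="varimax", random_state=0)
    # Listwise deletion of missing ratings; other missing-data strategies are possible.
    fa.fit(section_ratings.dropna().to_numpy())

    # Loadings: one row per item, one column per factor.
    loadings = pd.DataFrame(
        fa.components_.T,
        index=section_ratings.columns,
        columns=[f"factor_{i + 1}" for i in range(n_factors)],
    )

    clusters: dict[str, list[str]] = {}
    weak, generic = [], []
    for item, row in loadings.abs().iterrows():
        strong = row[row >= cutoff]
        if strong.empty:          # does not correlate well with the other items
            weak.append(item)
        elif len(strong) > 1:     # loads on several factors: too generic to discriminate
            generic.append(item)
        else:                     # assign to the single factor it loads on
            clusters.setdefault(strong.idxmax(), []).append(item)
    return clusters, weak, generic
```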
In the second analysis, all 141 items were submitted to a single factor analysis so that the items from one section could correlate with items in another section. As with the first analysis path, clusters of items measuring similar concepts were created and labeled, and any items that did not correlate with other items or correlated with too many items were identified.
In the final step, the two sets of clusters were compared. Where there was similarity between the two sets, a cluster was retained using as many common items as possible. Where unique clusters showed up in either set, the authors conferred and chose the clusters that seemed to best represent a broad scope of OD competencies. Based on this final set of clusters, the research questions concerning the structure and utility of the list of competencies were addressed.
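The retention decisions were ultimately a judgment call by the authors, but a simple overlap measure conveys the mechanics of comparing the two sets of clusters. The sketch below is a hypothetical illustration, not the procedure actually used: it matches each sectional cluster to the pooled cluster with which it shares the largest proportion of items. The cluster names and item memberships in the example are made up.

```python
# Hypothetical illustration of matching the two sets of clusters by item overlap (Jaccard).
def best_matches(sectional: dict[str, set[str]], pooled: dict[str, set[str]]) -> dict[str, tuple[str, float]]:
    matches = {}
    for name_a, items_a in sectional.items():
        # Overlap between this sectional cluster and every pooled cluster.
        scored = [
            (len(items_a & items_b) / len(items_a | items_b), name_b)
            for name_b, items_b in pooled.items()
        ]
        overlap, name_b = max(scored)
        matches[name_a] = (name_b, round(overlap, 2))
    return matches

# Example with made-up clusters and item memberships:
sectional = {"Obtain commitment from leadership": {"confirm commitment of resources", "identify formal power"}}
pooled = {"Secure leadership support": {"confirm commitment of resources", "identify formal power", "identify informal power"}}
print(best_matches(sectional, pooled))  # {'Obtain commitment from leadership': ('Secure leadership support', 0.67)}
```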
The process of labeling a set of items to represent an underlying competency is neither scientific nor quantitative. It is, in fact, quite subjective, and the authors toyed with different labels in an attempt to convey the essence of the items within a cluster. This was easy in some cases and much more difficult in others. In the first sectional analysis, we tried to use labels that acknowledged the phase of the process they represented. Thus, clusters from the diagnosis section were given labels with diagnosis in mind. In the pooled-item analysis, we had more free rein to provide labels reflecting nuances in the mix of items representing the competency. In the end, we chose labels that we thought were fair, but we recognize that we should be cautious in believing that our labels are the "right" ones.
For the statistically minded reader, several assumptions were made about the data, including the ratio of sample size to the total number of items (here, roughly 2.6 respondents per item) and whether generically worded items referred to a particular section of the survey. Any definitive OD competency study will have to substantially increase the sample size relative to the number of items. However, the exploratory nature of this study supports a more relaxed set of assumptions. We hope the data presented here can improve the efficiency of any future study. A complete output of this analysis is available from the authors.
Descriptive Statistics

The descriptive statistics from the survey displayed a very consistent pattern. Almost all of the items were rated very high (4) or absolutely critical (5). The item means ranged from 3.5 to 4.9, with only 11 of the 141 items having a mean below 4.0 and 51 items having means of 4.5 or higher. Despite the intent to steer respondents away from very high scores, nearly all of the distributions are skewed. This is not very helpful, and future studies need to take this into account. In other words, respondents were unable or unwilling to effectively discriminate between the items in terms of their importance. At some level, all of the OD practitioners in this sample are saying "all of these are really important." It is Sullivan's belief that only highly rated items have survived because the list has been scrubbed and revised by so many practitioners over the years.
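A minimal sketch of this descriptive step follows; randomly generated scores stand in for the actual 364 x 141 matrix of importance ratings, and only the counting logic is meant to mirror what is reported above.

```python
# Minimal sketch of the descriptive step reported above. Randomly generated scores
# stand in for the real 364 x 141 matrix of 1-5 importance ratings.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
ratings = pd.DataFrame(
    rng.integers(3, 6, size=(364, 141)),            # placeholder scores in the 3-5 range
    columns=[f"item_{i + 1}" for i in range(141)],
)

summary = pd.DataFrame({
    "mean": ratings.mean(),   # per-item average importance
    "skew": ratings.skew(),   # negative skew indicates ratings piled up at the high end
}).sort_values("mean")

print("items with mean below 4.0:", (summary["mean"] < 4.0).sum())          # chapter reports 11
print("items with mean of 4.5 or higher:", (summary["mean"] >= 4.5).sum())  # chapter reports 51
print(summary.head())                                # the least highly rated items
```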
Section-by-Section Analysis
In the first analysis, all items within a section were submitted together, but separate from other items and sections in the survey. The analysis produced thirty-two competency clusters using 116 of the 141 items. The final results are shown in Table 5.2.
Pooled-Item Analysis

Table 5.3 describes the final competency clusters from the second analysis, where all items in the survey were submitted together. In this analysis, items from any section of the survey could correlate and form a cluster with items from any other section. The table identifies the name and the number of items in each cluster. In this analysis, thirty-three clusters were produced using 115 of the 141 items.
The first fifteen clusters all contained multiple items and ranged in size from two to thirteen items. Three of the clusters contained two items, while the remaining clusters contained between three and thirteen items. The last eighteen clusters contained between one and four items, with thirteen of these clusters containing only one or two items. In comparison, only eight clusters in the first analysis had one or two items. Thus, the distribution of cluster sizes is more skewed in the second analysis. Many competency clusters in Table 5.3 closely resemble those from the section-by-section analysis. As a result, and in the interest of space, we proceed to a comparison of the two analyses.
Comparison of the Two Analyses

Table 5.4 presents a comparison of the clusters generated by each of the analyses. The table presents the two or more cluster names, and the number of items for each cluster, where the cluster labels were similar. It then presents the
Table 5.2. Section-by-Section Results
Table 5.3. Pooled-Item Analysis Results
Table 5.4. Comparison of the Two Analyses