
Practicing Organization Development (A Guide for Consultants), Part 38




(Continuation of a team-meeting evaluation survey; each item is rated on a 5-to-1 scale.)

41. Every meeting is evaluated by the team. 5 4 3 2 1
42. Suggestions for improvement are made. 5 4 3 2 1
43. … feedback to each other. 5 4 3 2 1
44. … feedback for improvement to each other. 5 4 3 2 1

Acting
45. … participants within two days. 5 4 3 2 1
46. … assignments related to action steps. 5 4 3 2 1
47. Additional agenda items are provided to the team leader in time to include on the distributed agenda. 5 4 3 2 1

Overall
48. … leadership to the team. 5 4 3 2 1
49. I look forward to our team meetings. 5 4 3 2 1
50. I am generally satisfied with our team meetings. 5 4 3 2 1

Exhibit 12.4. Comparison of Advantages for Customized vs. Standardized Surveys

Customized Surveys
• Can be adapted to the vocabulary of the organization
• Can be focused on the specific variables of interest to a particular OD process
• Owned by the organization when developed
• Less costly over time
• Participants feel more ownership in the survey and evaluation processes
• Because results cannot be compared with other organizations, the focus is on organizational improvement, not comparison with how others are doing

Standardized Surveys
• Readily available without developmental time
• More likely to have appropriate psychometric measures (reliability, validity)
• Unlikely to change over time, so comparisons can be made across time
• Likely to have been used in multiple contexts, providing comparative data (though this may not be what is wanted)
• Will often look more professional (though this is not necessarily so)
• May already be available in web-based format

Practicing Organization Development, 2nd Ed. Copyright © 2005 by John Wiley & Sons, Inc. Reproduced by permission of Pfeiffer, an Imprint of Wiley, www.pfeiffer.com.


Measurable Outcomes

As specified earlier in this chapter, a number of measures that are generally available in most organizations could be used to determine the impact of the OD process on these outcomes. For example, it would be relatively easy to determine whether a predetermined sales level had been achieved, whether there had been an improvement in the number of complaints filed, whether attendance had improved, and so on. All of these are quantitative items that are relatively easy to measure. The major difficulty in their use, of course, is determining the cause of the improvements. So many things besides the OD process can cause changes in these measures: the weather, the external economy, an increase in the advertising budget, changes in personnel. It is almost impossible to determine that any specific change was caused by the OD process.

Comparison with Competitors

Preferably, when benchmarking is done, it is a comparison of the processes used. In practice, companies often look at outcome measures, for example, productivity measures, quality measures, market share, return on investment, and so on. These, too, are quantitative measures. Looking only at differences in numbers is not particularly useful, because the important information (how the outcomes were obtained) is not known. These comparative measures are also not always easy to obtain. When industry groups collect such information for benchmarking purposes, membership in that industry group is usually all that is needed to gain access to the information. When industry groups do not collect the desired measures, however, the information can be very difficult to acquire. Publicly owned businesses may be required to have such information on file because of Securities and Exchange Commission requirements. Privately owned businesses, however, may be very reluctant to share such information.

Cost/Benefit Analysis

Another quantitative approach to evaluation is to compare the costs of the OD process with its perceived or anticipated benefits. This approach requires a group of knowledgeable individuals within the client organization to estimate the financial benefits from the process and compare these against the costs, both explicit and implicit. If the resulting rate of return is greater than that of alternative investments, then the investment in the OD process is (or was) worthwhile.
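To make the comparison concrete, here is a minimal sketch of the arithmetic; all figures, and the return assumed for the alternative investment, are illustrative assumptions rather than values from the text.

# Hypothetical cost/benefit sketch for an OD process; every number below
# is an illustrative assumption, not data from the chapter.

explicit_costs = 120_000      # e.g., consultant fees, survey tools, travel
implicit_costs = 45_000       # e.g., participant time away from regular work
estimated_benefits = 240_000  # benefits estimated by knowledgeable insiders

total_costs = explicit_costs + implicit_costs
return_rate = (estimated_benefits - total_costs) / total_costs

alternative_return_rate = 0.12  # assumed return from an alternative investment

print(f"Estimated return on the OD process: {return_rate:.0%}")
if return_rate > alternative_return_rate:
    print("The OD investment compares favorably with the alternative.")
else:
    print("The alternative investment looks better on this estimate.")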

Psychometric Requirements

There are three basic principles of measurement: validity, reliability, and practicability. These provide credibility and meaning to information derived from measurement. Unfortunately, because the first two are technical in nature, client- or consultant-developed instruments often do not pay attention to these requirements; this occurs occasionally with commercial instruments as well. When using an instrument, questions about these principles should be asked.

Validity ensures that the instrument measures what it is intended to measure.

Although there are many types of validity, the two most important for OD evaluation are face validity and predictive validity.

Face validity refers to the perceived accuracy and appropriateness of the instrument with respect to the stated outcomes. Ultimately, such validity will be determined by whether the organization's indicated needs are met by the OD process. The results should make sense to those knowledgeable about the organization and OD and should match the information sought by stakeholders or decision makers.

Predictive validity is the ability of one measure to predict performance on some other measure. It requires the collection of data during the intervention to be correlated at a later date with results data gathered on the job. If there is predictive validity, there should be a high correlation between formative measures and workplace measures.

Reliability means that the instrument would obtain the same results if used several times. Reliability is associated with consistency and accuracy. It is easier to determine reliability than it is to determine validity. An instrument must be reliable if it is to be valid.

A nonnumeric approach that will improve reliability (but not show whether the instrument is, in fact, reliable) is to complete a pilot study with a small sample of employees prior to full-scale administration of the instrument. The OD consultant should check responses to identify where misunderstandings might exist, ask those completing the instrument where they encountered difficulties, and then revise the instrument based on the responses.

A test-retest procedure consists of administering the instrument to a pilot group and then administering it again to the same group after a brief interlude (perhaps a week). The correlation of the two sets provides a measure of stability. Another approach is to determine subscores for odd-numbered questions and even-numbered questions for each respondent and obtain a correlation between the two sets of scores. The closer the correlation is to 1, the higher the reliability. Finally, Cronbach's alpha and factor analysis are additional statistical measures that can be used for reliability determination. (For more information on calculating reliability, see Fitz-Gibbon & Morris, 1987.)
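The reliability checks named above can be computed with ordinary statistical tools. The sketch below, a minimal illustration assuming NumPy is available, shows a test-retest correlation, an odd-even (split-half) correlation, and Cronbach's alpha; the item scores are made up for demonstration.

import numpy as np

# Illustrative responses to a 6-item instrument (rows = respondents);
# the numbers are invented for demonstration only.
scores = np.array([
    [4, 5, 4, 3, 4, 5],
    [3, 3, 4, 2, 3, 3],
    [5, 5, 5, 4, 5, 4],
    [2, 3, 2, 2, 3, 2],
    [4, 4, 5, 4, 4, 4],
])

# Test-retest: correlate total scores from two administrations to the same group.
totals_time1 = scores.sum(axis=1)
totals_time2 = totals_time1 + np.array([1, -1, 0, 1, 0])  # pretend retest totals
test_retest_r = np.corrcoef(totals_time1, totals_time2)[0, 1]

# Split-half: correlate odd-numbered items with even-numbered items.
odd_half = scores[:, 0::2].sum(axis=1)
even_half = scores[:, 1::2].sum(axis=1)
split_half_r = np.corrcoef(odd_half, even_half)[0, 1]

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals).
k = scores.shape[1]
item_variances = scores.var(axis=0, ddof=1)
total_variance = scores.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

print(f"test-retest r    = {test_retest_r:.2f}")
print(f"split-half r     = {split_half_r:.2f}")
print(f"Cronbach's alpha = {alpha:.2f}")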

Practicability means that the instrument can be completed within a reasonable amount of time with tools that are readily available. Administering the instrument to a pilot group is an excellent way to determine the clarity of items, consistency of interpretation, time needed, and administrative detail needed.


Qualitative

There are also a number of approaches to evaluation that do not result in numeric data, yet the information can still be very useful. In fact, qualitative data may provide the evaluator with more depth of understanding about the processes and how they are being perceived. In addition, they can often be implemented "on the fly," providing immediate feedback.

Informal Group Processes

Many useful, informal evaluation processes can yield intuitive or qualitative data. Among others, these can include:

• Polling members of a large group by having them raise their hands to indicate their levels of satisfaction, showing five fingers for very satisfied and only one finger for very dissatisfied

• Doing a “whip” (having each member speak in turn) to obtain verbal feedback ranging from testimonies to constructive criticism

Interviews

An interview guide can be used to ensure that exactly the same questions are asked in the same order with all interviewees. For a structured interview, the guide would be quite detailed. An unstructured interview, although less precise, allows the interviewer to follow new trains of thought. The guide for an unstructured interview may contain just a few questions, such as the following:

• “What is going better for you in your job now, compared with the way it was before the intervention?”

• “What is not going as well for you in your job now, compared with the way it was before the intervention?”

• “If you could make just one change in what we are doing now, what would it be?”

• “What else would you like to share with me?”

Focus Groups

A variant of an interview is a focus group. In essence, a focus group is a group interview. The same questions as suggested for interviews can be used for focus groups. A focus group is particularly useful when you want people to build on others' comments. The downside of a focus group is that it can contribute to "groupthink," the tendency for a dominant voice to influence others to think in the same way, regardless of whether that really reflects how they think.

Observation

Observing changes that have taken place (or are taking place) in the workplace can provide indications of progress, resistance, and learning. There may be occasions when videotaping an event (for example, a team meeting) can provide a useful tool for assisting in the observation process. Observing in two areas of an organization, one of which has participated in the OD process and the other of which has not, can also be helpful.


Anecdotes

Ask the steering committee of the OD process, the client, or members of the client organization for subjective responses to key issues, either on a daily basis (at the micro level) or as a monthly or yearly review (at the macro level). Stories are a common way for people to share what is really important to them.

REPORTING THE OUTCOMES OF THE EVALUATION

Having data is not enough. Data and the corresponding results can be compelling to the person conducting the evaluation and to no one else. The aim in reporting results is accessibility. When the results from an evaluation are truly accessible, people understand the data in a way that empowers each person to take action. There are a variety of ways to report data:

• Charts and Graphs: pictorially presented data that give the viewer a comparative appreciation for the numbers. Charts and graphs show how the measures compare to each other and to aggregated measures such as the mean and standard deviation (an indication of the variation in the data). Means and standard deviations can be reported from the existing data or other relevant data from such stakeholder groups as another department, competitors, industry, and society.

• Scorecards: a summarized or key set of measures that is important for everyone to see and track over time. These measures can often be seen on one page and address such things as satisfaction, productivity, learning, and profit (for more detail, see Kaplan & Norton, 1996).

• Metaphors: a story or picture that captures the meaning of the data and relationships among key variables. Data can be put into a storyboard that captures the key relationships in the form of a story that integrates metaphor with charts and graphs. An example of this approach as applied to major corporations can be found at Root Learning (www.rootlearning.com).

• Predictive Models: a picture that shows the relationships among a variety of key measures that flow to important end-result outcomes. This concept was popularized by Sears, which developed the concept of the employee-customer-profit chain (Rucci, Kirn, & Quinn, 1998).

Each method above can support existing change initiatives and can lead to change being initiated. As will be discussed next, creating a systems view of the organization requires drawing from all the tools for reporting outcomes described above. Creating a systems approach is the best way to truly change behaviors.


USING EVALUATION TO CHANGE BEHAVIORS MOVING FORWARD

The most compelling way to change behaviors is to give an objective assessment of the behavior and other related factors. That is, a systems approach is the best approach to creating sustainable change in organizations. Developing a predictive model in an organization is important in ensuring that the organization takes the necessary action. One principle of Gestalt theory is that, when a system fully understands itself, it will know what to do next (Emery, 1978). The act of noticing the self is the most powerful tool for change. We advocate creating a systems model that puts the pieces together and paints a very descriptive picture of the organization. This type of modeling provides diagnostic feedback at the individual, department, site, and organizational levels. The system modeling process is grounded in theoretical development concepts and field research methodology. These are important components in creating a whole-system predictive model for an organization. The steps are summarized below.

Step 1: Evaluate Existing Models and Measures

The goal in this step is to map the explicit and implicit relationships. Mapping the perceived relationships is important for future steps. Most people have an innate need to express and hypothesize. For example, someone may believe that, as the organization's employee job satisfaction increases, the turnover rate will decrease. This is a hypothesis, because it is proposing a relationship between two variables; a short sketch of checking such a hypothesis against data follows the questions below. Clarifying existing hypotheses gets people involved in exploring additional relationships and other performance-related questions (for more information, see Hedrick, Bickman, & Rog, 1993; Keppel, 1982). There are five key questions to ask:

• What types of data are being collected?

• What are the location, retrieval mechanisms, and quantity of stored data?

• How are the data being collected?

• Why are the data being collected?

• What are the expected relationships among the variables?
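As a minimal illustration of testing a hypothesized relationship such as the satisfaction-turnover example above, the sketch below correlates two made-up series; the data and variable names are illustrative assumptions, and NumPy is assumed to be available.

import numpy as np

# Hypothetical department-level data: average job-satisfaction score and
# annual turnover rate. All values are invented for illustration.
satisfaction = np.array([3.1, 3.4, 3.8, 4.0, 4.2, 4.5])
turnover_rate = np.array([0.22, 0.19, 0.15, 0.14, 0.11, 0.09])

# A negative correlation is consistent with the hypothesis that higher
# satisfaction goes with lower turnover (it does not prove causation).
r = np.corrcoef(satisfaction, turnover_rate)[0, 1]
print(f"correlation between satisfaction and turnover: {r:.2f}")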

Detailing existing models and measures will, without a doubt, stir up cognitive dissonance and resistance with key stakeholders. A rigorous examination and use of information can be intimidating to organizational members, including leaders (Harrison, 1994). This step will indicate the pace and method with which you should proceed. In some cases it might be advisable not to proceed. The leaders of the organization or unit utilizing this process must embrace the reality that there will be positive and negative findings. This process is based on the value of truth and objectivity. While the truth can set you free, developing and maintaining performance management systems requires more effort and discipline than many existing managers practice. Going by intuition is quicker and not easily verified. It also lends itself to political maneuvering. That is, a disjointed performance management system with unreliable measures empowers leaders to make quick decisions and move their agendas forward without much justification. The decisions can be supported by anecdotal evidence that may or may not be reliable.

Step 2: Enhance Existing Models and Measures

Based on the analyses conducted in the previous step, a new set of questions should surface and a better idea of the whole performance system should begin to emerge. The three prongs of sound thinking are instrumental in developing a more thoughtful and testable model and can be easily framed as questions. Prong one: What do you currently know about "X"? Most people will go on a hunch by using easily accessible information. It may be effective, but it will not be helpful in more complex decisions and modeling like the approach being proposed here. Prong two: What do the experts, research findings, and theories say about "X"? Some people will review expert research findings. This will get at important research and relevant theoretical issues. However, it lacks the necessary practical implications associated with benchmarking. And, prong three: What are best and worst in class doing with regard to "X"? Benchmarking may be a valuable tool, but it must be assessed with the unique knowledge that exists in the organization (prong one) and expertise outside the organization (prong two). The three prongs of sound thinking should be conducted completely and in the order described. The questions presented above can be posed in a variety of forms and are presented as examples to get you started. The results for people using an exercise like this are confidence in the decisions made, better decisions, and an appreciation for sound thinking.

Step 3: Install and Initiate Data-Collection Process

The process by which data are gathered and stored can be overwhelming. The focus of this step is on getting the most reliable, relevant, efficient, and accurate data. It is important to consider where to house the organization's data. Many organizations keep their information decentralized. In some cases, information is kept from other parts of the organization. Creating a whole-system model requires that all information be centralized. There are exceptions to this prescription, but it should be explicitly addressed and agreed on. While data warehousing is centralized, the collection process can be decentralized. Because there are various ways to measure something, the previous steps should be used to choose the best measurement protocol. As for the timing of data collection, decisions have to be made about frequency, date, and time.


Step 4: Diagram the Predictive Model

Diagramming the predictive model requires conducting statistical analyses and visually mapping the significant and meaningful relationships. Examining relationships in the whole-system model requires at least two questions to be answered. First, what relationships are significant? As relationships are shown to be significant or not significant, the model's validity becomes apparent and the opportunity for refinement becomes clearer. For those relationships that are not significant, the variables in question may need to be revised or removed. Second, how important and meaningful is the significant relationship? This is determined by statistical tools that provide information as to the amount of variance explained by the predictors in the performance variables of interest. Meaningfulness also refers to the degree of impact a predictor variable has. That is, as the predictor variable moves (that is, up or down), what degree of change does it create in the outcome variables? For example, a 3 percent increase in employees' perceptions of environmental support may lead to a 2 percent increase in critical customer service behaviors, thus leading to a 3 percent increase in customer intention-to-return. This 3 percent increase in customer intention-to-return may then lead to a 4 percent increase in actual customer retention and a 2 percent increase in sales. This example indicates that a relatively small change in a predictor variable can translate into a large leap in performance.
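A minimal sketch of the two questions in this step, assuming SciPy is available: an ordinary least-squares fit gives a significance test for the relationship (the p-value), the variance explained (R-squared), and a slope showing how much the outcome moves per unit of the predictor. The variable names and figures are illustrative assumptions, not data from the chapter.

import numpy as np
from scipy.stats import linregress

# Hypothetical site-level data: employees' perception of environmental
# support (predictor) and customer intention-to-return (outcome).
support = np.array([3.2, 3.5, 3.6, 3.9, 4.1, 4.4, 4.6])
intention_to_return = np.array([0.61, 0.64, 0.63, 0.70, 0.72, 0.78, 0.80])

fit = linregress(support, intention_to_return)

# Question 1: is the relationship significant? (look for a small p-value)
# Question 2: how meaningful is it? R-squared is the variance explained,
# and the slope is the change in the outcome per point of the predictor.
print(f"p-value   = {fit.pvalue:.4f}")
print(f"R-squared = {fit.rvalue ** 2:.2f}")
print(f"slope     = {fit.slope:.3f} (change in intention-to-return per point of support)")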

Step 5: Use the Feedback Process to Initiate Action

Feeding back the results from performance modeling can be productive and challenging. The objective is to facilitate a healthy and constructive discussion that leads to action. Figure 12.5 presents a simple model that can be used to facilitate the discussion. It is based on clearly separating the facts of the situation from judgments and emotions.

To begin, identify the relevant facts. Facts are objective, measurable, and observable information. Second, draw judgments (also called conclusions) from agreed-on facts. Judgments are value-laden opinions (that is, good and bad) related to the relative importance of the information. The modeling process provides facts related to causal relationships among the soft and hard measures. These facts can be helpful in evaluating longstanding judgments that have existed in the organization. With empirically based facts, the judgments are more objective and the emotional ramifications can help to build consensus. Therefore, it is important to clearly connect facts to judgments. Next, it is important to share the emotions that are surfacing as the facts are shared and judgments are formed (happy, sad, mad, fear, and guilt). Using the performance model to facilitate the sharing of emotions can be helpful in eliminating the static that often interferes with interpersonal communication. Finally, after coming to consensus on judgments regarding the facts, ask the following question: "What do you choose to do?" This is the step that promotes action and accountability.


We recommend that the model in Figure 12.5 be presented at the beginning of the feedback session as part of the ground rules. Use the model explicitly as a tool for keeping discussion focused and constructive.

Figure 12.5. Components for Facilitating a Healthy Feedback Session: Facts, Judgments, Emotions

BARRIERS TO EVALUATION

It is difficult for any professional to seek input on the quality of his or her product or service and the level of his or her expertise. For this reason, some consultants and clients create barriers, real and imagined, to planned evaluation. The following are some factors that discourage evaluation.

Lack of Money

The decision makers of a client organization may believe that the benefits of an evaluation do not warrant the expenditure for the OD consultant to conduct the evaluation. They may believe that the funds would be better spent on additional interventions. In our experience, this seems to be the major barrier to OD evaluation.

Lack of Time

An OD consultant may find that commitments to other clients create time conflicts that will not allow him or her to conduct an effective evaluation. In the same way, the decision makers of an organization may be impatient to move on and not want to take the time to conduct an evaluation.

Organizational Politics

The contact person within the client organization may have taken a significant risk in contracting with an external consultant or in championing the hiring of an internal consultant. An evaluation may lead to the conclusion that the intervention was not successful, thereby confirming the poor judgment of the contact person in deciding to use an OD consultant or in selecting the particular consultant.

Consultant Reputation

An OD consultant may be reluctant to conduct an evaluation. If the evaluation is positive, it may be thought that the consultant influenced the assessment of his or her own work. If the assessment is negative, the client organization, as well as the consultant, may feel that a lack of expertise is indicated. This could jeopardize the consultant's role within the contract and the client organization and could affect his or her reputation in the OD field.

Lack of Measurable Variables

Some interventions produce outcomes that are not easily measured. If this is the case, the difficulty of the evaluation task may lead the client and the consultant to avoid it.

Lack of Competence

Many OD consultants have more training and experience in conducting interventions than in conducting evaluations. They therefore tend to feel more comfortable with interventions, so they emphasize the former and avoid the latter.

Fear of Being Blamed

If fear exists within the organization, lack of cooperation may stem from fear of being blamed. There is also an inclination within U.S. culture to use evaluation to blame someone. Our experiences in school exemplify the negative nature of evaluation. If the outcome of an OD effort is not everything that was desired, evaluation may create an opportunity to blame someone: if not the consultant, then the manager, the supervisor, the change-team members, or the employees.

Perceived Lack of Value of Evaluation

Previous experience or lack of understanding of the value of evaluation may create a perception that evaluation is not necessary and that it does not add value to the intervention.
