SOME IMPLICATIONS OF THE FINANCIAL TIMES’S METHODOLOGY

A savvy admissions director trying to boost her school’s position in the FT’s rankings could make a few simple moves that would produce large results. First, she would try to accept as many applicants as possible from the nonprofit and governmental sectors, in order to minimize the average salary of the incoming class (so that the resulting post-MBA salary “step-up” would thereby be maximized). The same goes for younger candidates: They tend to have lower salaries before business school than do older candidates.

Candidates serving in the armed forces who have technical backgrounds, and who would be suitable for consulting jobs after business school, would be ideal. (Their current salaries are below market, making the step-up effect all the greater.) She would also try to ensure that all candidates would head to the highest-paying professions, notably consulting and investment banking, and would shy away from those who would start up their own companies or go into the nonprofit or governmental sectors—again, to maximize the step-up effect. Similarly, she would favor applicants who would head to the United States or Switzerland (where salaries for managers are highest) rather than those who would harm the school’s ranking by going somewhere offering lower salaries.
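
To see how strongly pre-MBA pay drives this metric, here is a quick back-of-the-envelope sketch in Python; the salary figures are invented purely for illustration and do not come from the FT’s data.

```python
# Illustrative only: invented salaries showing how a "step-up" metric
# (percentage increase over pre-MBA pay) rewards low starting salaries.
def step_up(pre_mba_salary: float, post_mba_salary: float) -> float:
    """Percentage increase from pre-MBA to post-MBA salary."""
    return (post_mba_salary - pre_mba_salary) / pre_mba_salary * 100

# Two admits who land the same $135,000 consulting job after graduation:
nonprofit_hire = step_up(45_000, 135_000)   # 200.0 -- a 200% increase
industry_hire = step_up(90_000, 135_000)    # 50.0  -- only a 50% increase

print(f"Nonprofit hire: {nonprofit_hire:.0f}% step-up")
print(f"Industry hire:  {industry_hire:.0f}% step-up")
```

The identical job offer produces a fourfold difference in the reported increase, which is precisely the lever the hypothetical admissions director would be pulling.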

But a savvy admissions director could go much further than this. By getting the school’s dean to replace the current board of directors with foreign women, and by maximizing the number of foreign women in the program itself, she would maximize the school’s “internationalization” score.

And so on.

This somewhat tongue-in-cheek response to the FT’s rankings is not meant to be highly critical. In fact, the FT offers a valuable and sensible set of rankings. On the other hand, no ranking is perfect—as the above comments suggest.

BUSINESS WEEK

Methodology. Business Week magazine rates MBA programs every even-numbered year in a late October issue. It then incorporates these rankings in its companion book, Business Week’s Guide to the Best Business Schools. It rates the top 30 schools based on surveys of two “consumers” of business school services: the students who attend and the employers who hire graduates of these schools.

In addition, it considers the “intellectual capital” of the school itself by examining the extent to which the school’s professors are published in the major business journals or reach a wider audience. The rating of each school is based on its combined score: the students’ and employers’ opinions each represent 45 percent (90 percent together) of the total, and the intellectual capital score only 10 percent.
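
To make the arithmetic of this weighting concrete, here is a minimal sketch in Python; the sub-scores are hypothetical and assume each component has already been normalized to a 0–100 scale, which is not how Business Week publishes its data.

```python
# Hypothetical illustration of Business Week's stated weighting:
# student and employer surveys at 45 percent each, intellectual capital at 10.
WEIGHTS = {
    "student_survey": 0.45,
    "employer_survey": 0.45,
    "intellectual_capital": 0.10,
}

def composite_score(sub_scores):
    """Combine normalized (0-100) component scores using the stated weights."""
    return sum(WEIGHTS[name] * score for name, score in sub_scores.items())

school_x = {"student_survey": 92.0, "employer_survey": 85.0, "intellectual_capital": 60.0}
print(composite_score(school_x))  # 0.45*92 + 0.45*85 + 0.10*60 = 85.65
```

Because intellectual capital carries only a tenth of the weight, even a large swing in that component moves the composite by just a few points.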

Advantages of the approach. It is obviously appropriate to know what the two major consumers of business school services—the students who are trained by the schools, and the employers who hire their graduates—think about the programs. Student views of the total experience—of teaching quality, academic atmosphere, career services, and so on—are clearly relevant. Similarly, the views of likely employers concerning the quality of a given school, and its graduates, are important when trying to assess how useful a degree from that school will be.

Limitations of the methodology. Business Week has made substantial changes in its methodology in recent surveys. The biggest change is that it now includes a measure of the intellectual capital of a school. In other words, to what extent is the intellectual debate in a field such as finance driven by the faculty at school X rather than the faculty at school Y? There are two problems with this approach. First, the means chosen to measure intellectual capital are questionable. As just one example, in looking at articles that appear in the major professional journals, points were assigned on the basis of the articles’ length! (Why anyone would want to encourage business academics to write longer articles is entirely unclear to me.) Second, and more important, Business Week’s approach to ranking has been predicated upon the idea that all relevant information is to be found in the opinions of a school’s students and of the recruiters who hire its graduates.

Thus, rather than look at a school’s acceptance rate, average GMAT score, or other data about the students, Business Week chose to consider the views of recruiters. The rating recruiters gave a school, it was assumed, incorporated all of these data. Therefore, Business Week did not double count GMAT data by including recruiters’ opinions of graduates as well as a separate rating of the school’s average GMAT score. Presumably the same is true regarding intellectual capital: a student exposed to the finest finance minds is a more able graduate and is valued as such by recruiters, who rate the school highly in consequence. It is thus unclear why Business Week has decided to double count intellectual capital—and no other measure.

It is important to note at this point that both student and employer surveys are subject to significant sources of potential bias. The student survey, for example, is very open to being “gamed.” If you were a student at good old Acme Business School, and you knew that your employment options and starting salaries depended to some substantial degree on the rating of your school, wouldn’t you consider rating the school as better than perfect in every survey category so as to improve your employment prospects? Another potential source of bias concerns the nature of the students who choose to attend different schools. It is quite possible that those attending Harvard are sophisticates who demand the best, who went to Harvard expecting to have the best possible professors for each and every course, and who would be disappointed if they did not have the most famous and pedagogically able professor for each course. Students who go to Indiana University, in the unpretentious town of Bloomington in the middle of America’s corn belt, might have very different, very reasonable expectations. Thus, the ratings by these different groups of students might not be readily comparable.

The employer survey suffers from a similar potential problem. Employers hiring from Harvard and Indiana might be hiring for very different positions and expecting very different people from each school. They might pay Harvard graduates $150,000 and expect them soon to be running a region, whereas they might hire an Indiana graduate for $80,000 and expect him to be a solid contributor who will take twice as long to get a promotion as the Harvard graduates. Once again, are employers’ ratings of the schools truly comparable?

There are also the usual problems with the selection of the employers for the survey. For example, there is an apparent bias in favor of larger employers, even though more and more MBA graduates are being employed by smaller, especially high-tech, firms. Similarly, there is the problem of potential nonresponse bias—which puts entrepreneurially focused schools at a particular disadvantage. (Would-be entrepreneurs are disproportionately likely to be “unemployed” when they graduate, trying to get their company up and running.)

In addition to all of these issues, the Business Week survey arguably asks the wrong question of employers. It seeks to learn whether employers were satisfied with their recruitment effort at a given program. If the employer says “no,” the survey does not ask for an explanation. Instead the “no” response is counted against the program, even if the underlying reason for the dissatisfaction was that the employer was unable to hire students it dearly wished to employ. In other words, an employer’s inability to attract a student could undermine a program’s ranking, even though it was the employer that was found wanting.

One further note: Rather than pay attention to the self-serving comments of students about their own programs, why not emphasize where the best applicants choose to apply and enroll? For example, of those students admitted to Harvard, MIT, Stanford, Chicago, and Wharton, what percentage choose Stanford? “Voting with their feet” is a more reliable indicator of what students think about different programs.

U.S. NEWS & WORLD REPORT

Methodology. U.S. News & World Report magazine rates business schools each year in March. Its methodology is much more complex than that of Business Week. U.S. News considers three factors: a school’s reputation, placement success, and admissions selectivity, each of which is composed of multiple subfactors.

Advantages of the approach. The virtues of this approach are clear. First, by explicitly taking many more factors into account than does Business Week, it may well do a more thorough job of measuring what makes a business school great. (Business Week could retort that its own methodology implicitly takes these other factors into account insofar as its survey respondents—students and employers—evaluate whatever factors they consider relevant.) Second, the ratings that U.S. News’s methodology produces are quite stable over time, with essentially none of the extreme jumps and falls in individual ratings that have been so marked in the Business Week ratings. This is presumably a reflection of reality, insofar as it is highly unlikely that the actual quality of many business schools would change dramatically in a short period of time.

Limitations of the methodology. A school’s reputation counts for 40 percent of its score. This comprises results from two separate surveys.

The first is a poll of the deans and program directors of America’s accredited business schools; the second is a poll of the corporate recruiters who recruit from top-ranked programs. Both of these polls are subject to potential bias. It is highly unlikely that even a well-informed business school dean will know the operations of other schools so thoroughly that he or she can accurately rate many dozens of them. This is likely to be even more true of recruiters, who are not even in the education business. The likely impact of this is that both groups might tend to overrate schools that are already famous or those that make the biggest splash (perhaps due to their faculty’s publications).

Placement success counts for 35 percent of a school’s score. This comprises the percentage of students who are employed at the time of graduation and three months later, and the average starting salaries and bonuses. The possible biases here are several. Average starting salaries are highly industry- and location-dependent. Investment banking pays more than industry, so schools that turn out investment bankers will be favored relative to those that turn out people who go into industry. The same is true of schools that send graduates to New York rather than to Durham, North Carolina. The salaries in the former will tend to dwarf those in the latter for the same job, given the different costs of living and other factors. Thus, Columbia and NYU in New York City will have an advantage over the University of North Carolina. The other possible bias concerns the fact that the data are reported by the schools themselves, and it is quite possible that some “cook” the numbers. In reporting starting salaries, for example, will a school include those who take jobs out of the United States? If the dollar is high, and foreign salaries when translated into dollars appear low, the school might choose to forget to include them. When the dollar is low, perhaps the school will remember to include them. The other data can also be manipulated by excluding various categories of students or simply by lying.

Student selectivity counts for 25 percent of a school’s score. This comprises the undergraduate grade point averages, average GMAT scores, and percentage of applicants the school accepts. Since these are all reported by the schools themselves, they are all open to manipulation. A school can include or exclude various groups of students to manipulate its numbers. For example, should a school include only American students in calculating its undergraduate grade point averages if only the Americans have been graded on the traditional 4-point scale? On the other hand, perhaps it should include the non-American students, too; but in that case, how should it translate the French 20-point scale or someone else’s 100-point scale into an American equivalent?

Student selectivity measures can be gamed in another fashion. Columbia, for instance, goes to great lengths to determine whether an applicant will accept the offer of a place before deciding whether to admit him or her. It thereby greatly reduces the number of students it needs to admit in order to fill its class. Other schools make it easy to apply by waiving the application fees for various applicants, just as some schools encourage even no-hopers to apply, all in order to improve the numbers they report to the AACSB (which accredits American and other programs), U.S. News, and other rankings.

And, frankly, even legitimate numbers may have limited information value. Undergraduate GPAs are not comparable from one major to another at a given school, let alone from school to school (or country to country). In fact, grade inflation (and, thank goodness, grade deflation) can make GPAs hard to compare over time—even for someone taking the same courses at the same university.

ECONOMIST INTELLIGENCE UNIT

Methodology. The Economist Intelligence Unit (or “EIU”) sends detailed questionnaires to business schools and students around the world. Numerical data from the schools (such as the percentage of women in the program) is combined with student opinions to rank schools according to four criteria: the ability to (1) open up new career opportunities or further a current career, (2) increase salary, (3) network, and (4) develop personally and educationally.

Advantages of the approach. The EIU’s emphasis on student opinion is a welcome complement to the Wall Street Journal’s focus on what recruiters have to say. The four criteria chosen by this survey are eminently reasonable ones for ranking schools insofar as they do reflect major applicant concerns. Understanding what is most important to students, and then attempting to evaluate how business schools stack up on each of these four criteria, is highly relevant to choosing a program.

Limitations of the methodology. This survey is too riddled with issues to attempt to catalogue all of them. One problem is common to all student surveys: Most students experience only one program, so their ranking of it may reflect more about their own expectations than about the school’s actual performance. By the same token, all student surveys run the risk of being gamed by students desirous of promoting their schools (and, not coincidentally, their own career prospects). These issues are exacerbated by the nonresponse bias of such surveys: Are the opinions of those who choose not to participate the same as those of the students who do respond?

A brief look at the first criterion examined by the EIU—the extent to which a school “opens new career opportunities,” which is accorded 35 percent of a school’s total score—illustrates the nature and problems of this ranking. One-quarter of the “new career opportunities” score is determined by the diversity of recruiters at a school. This is defined as the number of industry sectors recruiting at it, which clearly favors large schools (as well as those not choosing to focus on particular industry sectors). Another quarter of the score is based on the extent to which students get their ultimate jobs via the schools’ career services. This is most peculiar. Shouldn’t the standard be the quality of the job obtained (and the extent to which it is what the student wanted), by whatever route? Similarly, this measure penalizes schools whose students start their own businesses or head to their own family businesses. Yet another quarter is based on the percentage of graduates in jobs three months after graduation. This statistic tends to reward those schools whose major regional markets are booming as much as it distinguishes top schools from lesser competitors. The remainder of the score is based on student opinion.

Put simply, the results of the EIU survey tend to be highly anomalous as well as volatile. For instance, Vlerick Leuven Gent (in Belgium, now that you ask) jumped from 47th in the world to 15th in a recent ranking. The idea that a school could improve to this extent in one year (short of buying Harvard’s faculty in the interim) is preposterous. The idea that schools which attract much less qualified students are better than rivals that get first choice of both students and professors seems to me highly questionable. Much of the substantive learning in a program comes from other students; so, too, does the value of networking amongst one’s classmates.

FORBES

Methodology. Forbes ranks business schools annually by surveying alumni five years after they graduated from business school. It compares their earnings (in the five years since graduation) to the cost of attending business school (including tuition and fees, plus the opportunity cost of attendance, meaning the two years or so of compensation forgone). After adjusting for the cost of living in different places, and discounting the earnings using a money market rate, it comes up with a return on investment (ROI) for each school.
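
A minimal sketch of this style of calculation, in Python, follows; the figures, the discount rate, the cost-of-living adjustment, and the two-year opportunity-cost assumption are all invented simplifications rather than Forbes’s exact procedure.

```python
# Simplified, illustrative Forbes-style "return on investment" calculation.
# All dollar figures and the 5 percent discount rate are invented.
def five_year_gain(post_mba_salaries, pre_mba_salary, tuition_and_fees,
                   discount_rate=0.05, cost_of_living_factor=1.0):
    """Discounted five-year earnings minus the cost of the degree.

    Cost = tuition and fees plus roughly two years of forgone pre-MBA pay.
    """
    cost_of_degree = tuition_and_fees + 2 * pre_mba_salary
    discounted_earnings = sum(
        (salary / cost_of_living_factor) / (1 + discount_rate) ** year
        for year, salary in enumerate(post_mba_salaries, start=1)
    )
    return discounted_earnings - cost_of_degree

# A graduate who earned $70,000 before school and reports five years of pay:
salaries = [120_000, 130_000, 145_000, 160_000, 180_000]
print(round(five_year_gain(salaries, pre_mba_salary=70_000, tuition_and_fees=120_000)))
```

Schools would then be compared on figures like this one, averaged across respondents, which is why earnings beyond the five-year window and the experiences of non-respondents never enter the ranking.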

Advantages of the approach. The Forbes survey has two major advantages. It provides a simple figure by which all schools can be compared—eliminating the apples-and-oranges problems inherent in more complex systems. The figure it chooses, ROI, is particularly relevant to those applying to business school, who are likely to be highly motivated by the potential payoff to getting an MBA.

Limitations of the methodology. The methodology is a simple one, so it inevitably leaves out a great deal. For instance, it ignores graduates’ earnings beyond the five-year mark. Similarly, its methodology is not designed to spot recent trends, given that it does not include the earnings of more recent graduates. In addition, it obviously suffers from substantial potential nonresponse bias, with only 24 percent of those surveyed responding to the most recent survey. And it takes as gospel the data those respondents provide. Perhaps most important, its results are likely to be relevant only to those applicants in situations similar to those who responded to the survey. If the bulk of the graduates responding are in finance, for instance, it is not clear that the data they provide will be relevant to someone who intends to go into marketing.

OTHER RANKINGS

Obviously, the published rankings do not necessarily cover all of the issues and concerns you might have about business schools. This leaves you with room to make your own rankings, tailored to whichever criteria you deem to be most important. Following are a few of the “rankings” you might consider helpful.
