Another major publication, U.S. News & World Report, has just unveiled its latest business-school rankings. Look closely and you’ll see they bear little resemblance to the rankings at other publications, most notably those from Business Week, the Wall Street Journal, and the Financial Times.
The best advice? Take a peek at rankings, but don’t obsess over them, whether you are applying, are in b-school, or are an alumnus. Nonetheless, you will encounter a world and a culture (ours) fascinated with rankings, ratings and lists.
- Consortium schools, on the whole, fare well. That Consortium schools aggressively seek students from all backgrounds and foster a diverse environment might be a contributing factor.
Rankings identify quality business schools that are “doing it right”: keeping up to date, accepting top-tier students and making themselves relevant. But the criteria can be a problem. They vary by publication. And even within the same publication, both criteria and rankings may differ significantly from one period to the next. It becomes hard to determine which ranking in which year is the fairest.
Rankings are controversial, but attention is paid to them, at least for a short period after they are publicized. Students and alumni in transition hope they add prestige and differentiation to a résumé. (And sometimes they do.) Other alumni care about them merely for pride’s sake.
- Corporate recruiters pay attention to them, too, if only to narrow the pool of schools they can efficiently recruit from, or to draw the line above which they will focus their recruiting efforts. In that respect, rankings are a planning tool, not a point of prestige or pride.
- Deans may be forced to obsess over them, even if they don’t want to. Rankings might be one of the benchmarks by which they are evaluated. Deans, therefore, will be attentive to every criterion they can influence (students’ test scores, placement efforts, recruiters’ perceptions, faculty-student ratios, alumni giving, etc.).
There are plusses and minuses when it comes to rankings. The minuses first:
- Deans and school officials will inevitably focus on them, sometimes too much, because rankings may be a factor in evaluations or in how outsiders perceive their performance. That creates the risk that b-school leaders will distort the basic missions of teaching students, increasing knowledge, contributing to global enterprise and promoting dialogue.
- Deans don’t want to deviate from the mission, but they have constituencies they must answer to. They may need to explain a decline in ranking from, say, no. 7 to no. 13 in one year. Rankings will force them to care more about GMAT scores, outside perceptions of the school, and the starting salaries of the graduating class. Caring too much about starting salaries might mean the school pushes students toward hedge funds rather than public service.
- Rankings can also be unreliable, just when some feel the need to rely on them. Even Stanford might be rated no. 1 in one publication, but no. 6 or 7 in another. Or within the same publication, it might be rated no. 1 this year, but fall to no. 4 or 5 the next.
- Rankings might sway a prospect toward one school over another, even when the lower-ranked school is a better fit.
Beyond the vaunted lists of 1-50, there are some subtle plusses in the effort:
- The exercise might highlight good, unheralded schools that, for some reason, exist (and thrive) outside the halo of a “Harvard-Stanford-Wharton.” For example, when rankings appear, we see Wharton on the lists, but we also see other outstanding schools such as Carnegie Mellon, Case Western, Vanderbilt and USC.
- We may also see how some schools improve from year to year or deserve attention for special initiatives or novel changes in curriculum. Note how Yale last year courageously altered its entire approach, possibly setting a trend. The process can reveal how, say, Cornell, NYU, and Dartmouth are competitive and have sneaked into the top tier.
- The exercise permits prospects to review and compare certain statistics when they need to. The criteria, however, still lean too heavily on narrow measures (e.g., average starting salaries).
- Publications tend to accompany the lists with a broad, reasonably thorough assessment of graduate business education, the curriculum, and its relevance to global issues. They step back to report whether schools are preparing students for the next generation of corporate challenges (in management, technology, environment, globalization, and even accounting). In the latest round, U.S. News analyzes schools’ efforts to enroll more women.
- No doubt for the next round, publications will ponder whether schools have “taught the crisis” sufficiently and have prepared students for a new landscape in finance (including regulation, risk management and recession-proof initiatives).
- Rankings sometimes highlight schools that excel in a particular field: marketing, finance, international business, operations, etc. Schools with distinguished specialties get a spotlight, to the benefit of recruiters and prospective students. If Michigan and UNC excel in operations, and if NYU and Virginia excel in finance, the publications will say so. In the latest round, Indiana is among the top schools in accounting.
In sum, the best compromise would be for publications like Business Week and U.S. News to identify a class of top schools (say, the top 25 or 50), all deserving of an AAA rating based on minimum criteria, but not ranked against one another.
The problem is that doing so risks selling fewer magazines. Rankings create mild controversy, which of course boosts newsstand sales.
Tracy Williams