College Rankings Exposed: The Art of Getting a Quality Education in the 21st Century. Paul Boyer. Lawrenceville, New Jersey: Thomson Peterson’s, 2003; 240 pp; ISBN 0-7689-1360-8, hardcover $24.95 US.
In the village hall of my prairie childhood, newsreels preceded the movies on Friday evenings. In late summer, when not much else was happening in the world, we were treated to footage of the Queen’s Plate, the premier horse race in Ontario.
I remember a youthful and perky Elizabeth II appearing one time to give the victorious owner his trophy. The winning horse and jockey stood to one side, demoted from centre screen to make room for a balding, middle-aged man resplendent in a Savile Row suit. It was E.P. Taylor himself, looking awfully satisfied.
The whole thing was something of a mystery to us country bumpkins, but it was fine to see those horses, running to glory (as the announcer liked to say). By the end, we were actually cheering for Taylor’s entry, or for whichever eastern Canadian mogul’s horse happened to be in the running (or the owning) that year.
The memory returns each fall when the Maclean’s university ratings come out. You can’t help but wonder. Who will be No. 1 this year? Will Toronto’s grip on the top slot finally loosen? Who’s up and who’s down? Has Maclean’s methodology changed? What difference will it all make?
Paul Boyer’s slim volume deals with the American equivalent of Maclean’s, the U.S. News and World Report rankings. As he shows, American publishers were busy at this task as early as 1983, well before Maclean’s. South of the border, at least three national ranking systems now compete, if you’ll forgive the word.
American and Canadian experiences of ranking have many similarities. It is noteworthy that the growing popularity of rankings coincided with a mini-revolution in government policy, the appearance of so-called performance indicators.
Performance indicators are simple statistics that measure such things as throughput (the speed of students’ passage through degree programs). Rankings and performance indicators on both sides of the border strongly influence public funding and governance decisions. This is big “business,” tied closely to the appearance of Thatcherite and Reaganite schemes of government micro-management, and combined over the past 30 years with sharp cuts in public funding for higher education.
Rankings matter, but not for the reasons Maclean’s editor Ann Dowsett Johnston would claim. She knows a step up or down in the rankings may account for an increase or decrease in entrance applications. But the rankings are important for even more unfortunate reasons. An example will begin to show why this is so.
Five years ago, the rankings were not going especially well for the University of British Columbia. They were descending, or threatening to descend: just enough to worry the administration and, presumably, to irritate the board of governors. UBC reviewed the way it reported student-teacher ratios and the way it counted how many students were taught by tenured professors rather than by part-time and sessional lecturers. The university also combed the records to make sure every possible UBC award, grant and patent really had been counted. And as on many previous occasions, Maclean’s was pressed to review its methodology. Did it make sense, for example, to put so much emphasis on things like “number of library books per student” in the age of the Internet?
Over 2003, 2004 and 2005, UBC moved from fifth place to fourth, where it stays (Maclean’s “University” issue, 6 Nov. 2005). UBC president Martha Piper plans to push UBC past Toronto, McGill and Western Ontario in the rankings. But that is no easy matter: UBC students will have to outdo the rest in entering averages. And UBC will have to find a way to get more of its students graduated “on time,” reduce class sizes, do something about that pesky library and persuade alumni and academic and business leaders that UBC’s reputation should rise to surpass its eastern counterparts.
But is this the best way to run a university? To be fair, UBC is not entirely run on the basis of rankings and performance indicators. After all, UBC’s happiness at moving from fifth to fourth place is restrained by the thought that it might well drop back again in a year or two.
At smaller universities like Trent and Mount Allison, anxious administrators ask the same questions as UBC’s, but with greater urgency. If rankings (provincial, national and Maclean’s) depend on success in getting and keeping Canada Research Chairs, or finding matching funds for CFI grants, or just attracting first-year students, it’s a far more serious matter for smaller places than it is for the fat cats. Thus the full significance and the fundamental nonsense of rankings are best understood by thinking about how the whole system works, not just a part of it.
In all this, university administrations have rarely asked, “Does our competitiveness make us better at educating? And does it get us more money?”
Not only do these questions go unasked; there is also no sign that our university officials have noticed a dreadful paradox. For it turns out that governments can look at improvements in rankings (whether performance indicators or Maclean’s) and see in them signs of “excellence.” This becomes an excuse to give less funding, not more: if a province’s universities are doing well on rankings or performance indicators, then they must need less money. It goes without saying that declines in rankings invite punitive expeditions from provincial ministries of advanced education.
In short, universities cannot win the rankings game.
The goalposts move, the definitions are malleable and the purposes of the exercise have nothing to do with education. A rank may move just because 0.3 per cent of alumni (those willing to answer the pollsters) have changed their minds about the reputation of their alma mater.
So it’s fair to call the rankings an essentially negative factor in Canadian post-secondary education. They detract from the work of our universities and colleges. Indeed, they undermine it. At bottom, they are a sop to the denizens of right-wing think-tanks. They are a bone that can be thrown to people professing mystical faith in markets. They are about cuts and control. They are beloved of pundits, technocrats and bureaucrats who follow the latest management fads, or who imagine that a “market” in Maclean’s rankings will make universities and colleges better. These same enthusiasts actually advise ministers of advanced education in every province and territory. Matters of judgement and value and civic life are of little concern in the world of rankings, and of even less concern to these enthusiasts.
Indicators and rankings are also the fond playthings of applied statisticians at Statistics Canada and Human Resources Development Canada — should anyone think the madness is limited to the private sector.
Yet all is not lost. For one thing, the amount of ink spilt on the Maclean’s ranking has been in decline for several years. A good many observers have begun to see how rankings trivialize universities. In the world of rankings, after all, only outcomes matter. Academic quality, intellectual integrity, fairness and equity may also matter, but only a little. In one humorous put-down, a Toronto Star commentator noted that a Maclean’s indicator of quality based on the proximity of good beer halls to campus could not be far off. Lately, it’s been hard to find a daily newspaper that gives much space to the rankings.
Still, enthusiasts insist we must accept the new world. They say we should redefine “quality” in terms of numbers and outcomes. As for fairness, why not just agree on objective standards for accessibility and compare numbers from one year to the next? Why waste time, they ask, on discussions of university governance, or pedagogy, or the history of fields of study, or the aims of Canadian society?
“Get to the point!” the enthusiasts say. “Get to the numbers!” Even if the numbers are merely superficial summaries of received opinion. The lived experience of teaching, the real work of research and the tough work that goes into learning to write, to argue and to reason don’t show up in Maclean’s. For the enthusiasts, these objections matter little.¹
It would be idle to pretend ordinary parents and their university-age children are impervious to this kind of argument. But there is more.
The last refuge of the enthusiasts is the matter of accountability. By accountability they mean control — that is, conformity with quantifiable goals and objectives. They do not always exclude accountability (defined as responsible decision-making, with due process, and in senates or in the wider community), but this latter kind of accountability comes last on their wish-list. Accountability as control comes first. The objectives and goals they have in mind are, naturally, their own, not those of the whole public, or the whole community.
There are indicators and descriptions of academic work that do make sense, and that could and should inform academic decisions and shape public policy.
Few will object to the gathering and use of numbers that tell us how big the system is, how representative it is of society, how much the system costs and why. But Maclean’s rankings are not “good-sense” descriptors of post-secondary education in Canada. It is odd in the extreme to say that quality is just a matter of quantifiable inputs and outputs. And it is politically naïve to pretend choices about degrees, curriculum and R&D are best made by studying input/output ratios. It is even stranger to say quality and choice are “informed” by competitive rankings.
Suppose you did as former Ontario premier Mike Harris and Alberta premier Ralph Klein both intended in the 1990s, with their rankings and performance indicators. You would decide whether university x or college y would be allowed to offer a BA on the basis of throughput indicators (how many years from degree start to degree finish) or employability (how many weeks to employment in the field in which you’re trained), among many other similar measures. In years when the economy was expansive, almost all programs of study would survive. In other years, almost none would. Market measures are, then, king and queen, and university autonomy is a shadow.
Maclean’s university issue is, each year, another link in a chain of events that helps government bureaucrats and private technocrats. They are looking for guidance. They would prefer not to ask university graduates, nor to consult university senates and departments. Instead, using rankings and indicators, they hope to answer — painlessly and quickly — the tough questions of higher education governance. I understand their plight, but disapprove of their solutions.
Paul Boyer’s volume on college rankings in the United States gets three things right and many things wrong.
First, the book offers a useful and accurate history of the development of rankings in the U.S. Boyer rightly says that U.S. News and World Report rankings (first published in 1983) were all about selling magazines. Like Maclean’s, USNWR never did any significant research on higher education. It relies on cheap and easy surveys of opinion, and on already-published statistics. Its costs are low and its profits are high. Its rankings make money, and who would deny the publishers of USNWR their right to realize profits?
Boyer reminds us that there has always been an informal “league table” of universities and colleges in the U.S., as there is in every country that can boast more than one university or college. But Ivy League universities don’t have to worry much about rankings — a point about which the author says little. These places have long and distinguished histories, well-established connections to federal funding for defence and other research, and significant endowments. Ranking systems will never come up with conclusions that embarrass Harvard or Yale or Cornell. In the UK, the same might be said of Oxford and Cambridge.
By contrast, rankings matter a great deal to a significant fragment of the American post-secondary system, possibly 20 to 25 per cent of it. These 500-odd institutions have a tenuous hold on public funds and/or private donations. Boyer’s contrast between the Ivy League and “the rest” is naïve, by the way, for it is patent that all American universities and colleges are caught in the net of performance indicators and ranking. Published research nearly always arrives at this conclusion. Boyer is simply too optimistic, a Pollyanna loose among the barracudas.
Boyer claims rankings are popular mainly because they play on the fears and anxieties of families. If you think your child’s choice of university will decide her entire life course, then you’ll read the rankings. He reminds us that high-achieving high school students are the ones who pay most attention to rankings. (p. 40) American families who can afford a Toyota Camry education think rankings and performance indicators are part of an informal contract between themselves and the colleges their children attend.
Boyer’s potted history of rankings leads to his second main point: rankings lead the most competitive American institutions to use smoke-and-mirrors tactics, emphasizing the promotion of research and image — neither of which has much to do with the quality of learning or with careful consideration of a 21st-century curriculum. (p. 171)
Promotion and advertising are enormous line items in the operating budgets of such places. Students and educators alike are encouraged to focus on the symbolic remnants of an elite system — selectivity, reputation and financial resources. These indicators are viewed as expressions of “quality.” However, they tend to reflect and reinforce a much older preoccupation with prestige. (p. 170)
Boyer sees the irony of a ranking system that promotes mindless elitism and competition, while “the nation embraces the values of inclusion and preaches that education is good for all.” (p. 165)
His points about elitism come in a chapter entitled “The End of Rankings.” He presses the claim that there has been a successful push since 1970 to make post-secondary education possible for at least 50 per cent of the college-age population. That level of access has been achieved. Boyer might have noted that all the OECD countries, Canada included, have exactly the same objective. Because there is such broad agreement on this objective, there is less carping about specific matching of education to employment than there used to be, Boyer says.
It used to be said that Latin and Greek majors were headed for the dustbin. But now, in the early 2000s, the point is that either you get a degree or you suffer major economic damage for the rest of your life. There is no rush to get into Latin and Greek courses, but there’s similarly no rush to get out of them.
Boyer rightly says (although he’s unclear on the point: is the system universal or isn’t it?) that even in a broadly accessible system like the American one, rankings still do enormous damage. Above all, rankings point directly away from what he calls quality education.
So what is quality education? On this huge theme, Boyer gets enough things right to warrant a third checkmark. He offers five criteria, complete with pertinent standards, that he thinks parents and students should use to decide where they’ll be educated (pp. 105–148):
- Students should have a general education, the kind that will encourage them in a lifelong sequence of work, private inquiry, civic activity and creative participation in the culture (this means credited seminars in all years, public service work and much else);
- They should learn to reason and to write well;
- Their college or university should show commitment to “active learning” (that means small classes, opportunity to know professors at non-teaching times and opportunities to participate in faculty research);
- Learning should take place in wider communities — city, nation and world; and
- The university ought to be a diverse, intellectually active and respectful community.
This list works reasonably well for the U.S., but would require translation in Canada. Our university history, the essentially public character of our system and even its relatively small size, all lead to different ways of seeing what should count as “quality” in Canada.
Although I liked the idea of a list of this sort, Boyer is weak in laying out its philosophical, historical and political basis. When he gets down to detail, he is in trouble. The matter of small classes, for instance, sounds good, but in practice it is surely no more and no less than item number 9 in Maclean’s ranking for 2004. Boyer has jumped out of the frying pan into the fire.
Perhaps the weakest feature of the book is its lack of interest in matters of governance and management. It makes little sense to propose undergraduate programs where participation and critical thinking are central without recognizing that a strong academic government must be in charge of it all. If the entire university is seen as a mechanism in need of “management,” then the institutions of academic self-governance and public accountability will be moribund. Surely this is the great problem of the early 21st century in Canadian higher education. We have large universities, underfunded and over-managed, fascinated by image and gripped by a mindless urge to be “world-class.”
For faculty members, the connection between sensible governance and sensible education is a close one. Boyer somehow missed that connection. Canada is well prepared to reassert that connection. The time is ripe. As we do, the dreadful vision of the horse race in Canadian higher education will fade away.
¹ For a discussion of the “rankings enthusiasts” and their arguments, see esp. William Bruneau and Donald C. Savage, Counting Out the Scholars (Toronto: James Lorimer, 2002), www.caut.ca/en/publications/bookseries/scholars.asp.
William Bruneau is a former president of CAUT and a member of CAUT’s academic freedom and tenure committee. He lives and writes in Vancouver.