
CAUT Bulletin

December 2016

Interview / Yves Gingras

In your book Bibliometrics and Research Evaluation: Uses and Abuses, you essentially tear performance indicators apart. What are your main arguments?
Whether we’re talking about impact factors, the h-index, or academic rankings, evaluation isn’t really the issue. We’ve been evaluating the academic world since the 17th century. Even [Sir Isaac] Newton’s work was evaluated whenever he submitted an article to a scientific journal. The real issue is who controls the evaluation, and what the evaluation controls. The problem with performance indicators is that they take control of the evaluation out of researchers’ hands. Peer review is still widespread, admittedly, but bibliometric indices are gaining ground, and that’s a dangerous trend. If you want to take a temperature, you use a thermometer, not a hydrometer. None of these performance indicators measures a well-defined property of its own, and most of them are poorly constructed aggregates. It would be absurd to use them to make decisions, since they can say one thing and then the complete opposite. It takes judgment to properly evaluate the performance of an individual or a department. If someone has submitted tons of articles to journals in the last three years and none of them has been published, that speaks volumes about the quality of the research. Using figures to replace judgment is just stupid.
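To make the point about poorly constructed aggregates concrete, here is a minimal sketch (not from the book or the interview) of how the h-index is usually defined: the largest h such that an author has h papers each cited at least h times. The two citation records below are hypothetical, but they illustrate how very different publication profiles can collapse into the same number, which is why a single figure cannot substitute for judgment.

```python
def h_index(citations):
    """Largest h such that the author has h papers each cited at least h times."""
    cited = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(cited, start=1):
        if count >= rank:
            h = rank      # this paper still clears the threshold
        else:
            break         # every later paper has fewer citations than its rank
    return h

# Hypothetical citation records for two very different researchers
steady = [10, 9, 8, 7, 6, 5]     # modest, evenly distributed impact
skewed = [100, 80, 60, 50, 5]    # a few heavily cited papers

print(h_index(steady), h_index(skewed))  # both print 5
```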

What do you think about firms like Data180 and Elsevier that are selling faculty evaluation tools?
A market for evaluation has developed over the past 10 years. Companies have been knocking at universities’ doors, offering up tools to measure productivity. Administrators think it’s wonderful, so they spend enormous amounts buying these products. They’re caught up in this new public management ideology that started in hospitals and has now trickled down to universities. They want to see numbers. They want a dashboard with flashing lights. If you appoint a real professor to manage a university, he or she will soon see how ridiculous these indicators are. And if you appoint a manager who doesn’t know anything about academia, he or she will feel powerless and turn to these kinds of measures because they can be reassuring.

As professor and Canada Research Chair in History and Sociology of Science at Université du Québec à Montréal, you’re well placed to explain how these indicators penalize some fields.
It’s plain to see. When we talk about university rankings, for instance, the ranking game favours achievements in the natural sciences. And you have to wonder who’s going to tackle a specialty like the history of Manitoba if they know their research will never be published in any of the top journals. Classifying things this way is eroding fields of research. In the natural and biomedical sciences, you make your research known by publishing in scholarly journals. But you can’t export that model to sociology or history. In the humanities and social sciences, the most important way to distribute your work is by publishing a book, and books don’t count as much as journal citations.

How do performance indicators influence the value we place on teaching and the services universities provide to the community?
All these indicators measure is research, so they encourage people to value only research. Professors are told they deliver good classes, but they get a poor evaluation because they haven’t published enough. If you want my opinion, we’re sawing off the branch we’re sitting on, since 80% of our students are at the bachelor’s level and aren’t destined to go on to do research. Research is the cherry on top. It only takes a bachelor’s degree to train a lawyer. The same goes for a chemist or an engineer. We’re devaluing education, so professors have no choice: they teach as few classes as possible so they can publish as much as possible, because that’s what advances their careers.

What do you think about university league tables?
They’re open to manipulation. There’s a university in Saudi Arabia that made dramatic gains in the league tables by appointing some of the most highly cited researchers in the world as adjunct professors. There were minimal requirements for them to be physically present, but they listed the university as a secondary academic affiliation. Listing them helped elevate the Saudi university in rankings that rely heavily on the number of highly cited researchers on staff, even though not a single new lab was built. The scheme was later exposed, and it raises real questions about how easily university rankings can be manipulated. The Shanghai ranking is another example. One of the indicators it factors in is the awarding of a Nobel Prize. That’s absurd. A Nobel Prize is often awarded for work a person did 20 years ago. How can something that was done in the past be a reliable indicator of the quality of education at that institution today? It’s all just hot air.