
October 2016

Formula evaluation

Faculty groups in the United States are warning that the increasing reliance upon metrics to assess their “productivity” is threatening the traditional system of peer review in hiring, tenure and promotion decisions.

Full-time faculty members at Rutgers University have protested the university’s decision to contract with Academic Analytics, a private company that has developed a patented algorithm, the Faculty Scholarly Productivity Index, to assess individual performance.

“Most faculty members have some sort of direct experience of metrics used to assess performance,” the American Association of University Professors said in a statement released earlier this year. “There is, however, good reason to doubt the utility of such metrics in tenure and promotion decisions and/or in judgments affecting hiring, compensation or working conditions.”

According to the Rutgers chapter of the AAUP, the FSPI “encroach[es] upon academic freedom, peer evaluation and shared governance by imposing its own criteria emphasizing research” while “utterly ignoring the teaching, service and civic engagement that faculty perform.”

In Canada, universities and colleges have recently signed similar contracts with companies such as SciVal and Faculty180. In each case, metrics on individual academic productivity are aggregated into a score for a program or department, which is then used to assess the scholarly quality of the whole unit.

“Significantly, individual academic staff are not permitted access to the data,” notes CAUT executive director David Robinson. “Only senior administrators can see the results, and faculty members aren’t always permitted to check the accuracy of how they’ve been assessed.”

Other critics question whether quantitative indicators can fully capture research excellence.

A recent report produced by the Higher Education Funding Council for England expresses “skepticism” about the use of metrics, insisting that peer review should remain the primary procedure for evaluating research quality. The same study also found that indicators can be misused or “gamed,” and that “it is not currently feasible to assess research outputs or impacts … using quantitative indicators alone.”

“We need to ensure that metrics alone do not determine tenure, promotion, or hiring decisions,” adds Robinson.