Librarians and libraries have concerned themselves with Performance Indicators for many decades. We have not always used this term, but we have long been interested in measuring and evaluating what we do. We collect statistics which we have long believed describe both what we do and how well we do it. Until recently, we have not really had to ask ourselves what all these numbers mean. Do they have any meaning at all?
What has changed is the political and economic context in which we provide our services. University administrators, politicians and the general public, through such mechanisms as Maclean's Annual Ranking of Universities, are requiring proof that we are "accountable" both to our institutions and to our funding bodies (the provincial and federal governments) and are demanding "quality," however vaguely that is defined. Universities must argue with provincial departments of finance that operating grants should not merely be maintained but increased. Libraries must prove that cuts to the collections budget, for example, will have detrimental effects on teaching and research. And we must make these arguments at a time when both institutions and governments are actively seeking reasons to justify reductions and ways to redirect the funds elsewhere.
Great emphasis is being placed on establishing goals and objectives and then proving to the "public" that the institution is meeting them. We are being asked to provide value for the money we are granted. It is perhaps inevitable, for now, that governments will demand some form of quantitative measurement of our institutions, on the assumption that such measurement will prove (or disprove) that value is being provided and will thus justify budget decisions.
It is, however, not only possible and desirable, but essential to show the value of educational institutions and practices in many other ways: with reference to the historical development of the institution -- in its community, province, and country; with respect to an educational theory that responds to changing social needs, and so on. There is a great danger in allowing purely quantitative data to become a proxy for "quality" and "value received." Although it is central to our concept of accountability that we are able to explain what we do and why we do it, and although we must not lose sight of this in response to increasing government and institutional demands for "statistics," we need to think of the value of our institutions from a number of perspectives, of which goal-achievement and quantitative measures are only two.
Library Objectives
We already measure our activity so that we can inform our publics about our work. We are accountable. Accountability means "the ability to say what we are doing and why." Internally, we want to improve our performance over time and to compare that performance with other academic libraries where that is relevant. If we are to be accountable, the assessment must be related to the objectives and activities of the individual library and to the needs and objectives of the institution. A fundamental component is participation in the decisions about these objectives and activities. We have a right to participate in decisions that affect us.
It is important, also, to distinguish between performance indicators adopted for the purpose of evaluating the institution or an individual library and those which lead to the evaluation of individuals, even though administrators may deny that indicators will be used for the latter purpose. It is essential that the definition, development, and implementation of PIs be accompanied at all times by publication and by public (or even peer) review.
What does it mean when an institution adopts as one of its goals that the library move up in the Association of Research Libraries (ARL) rankings of academic libraries, as the Board of Governors at the University of British Columbia did a number of years ago? If circulation figures increase, is this a measure of the effectiveness of classes delivered by librarians on good research strategies? Does an increase in the number of reference queries translate into claims that more and better-quality research is being conducted? We should be using statistics to help us manage and plan. We should count the number of books in our collections to aid in planning for new library buildings, not in order to move up in the rankings of academic libraries.
In a document entitled Developing Indicators for Academic Library Performance: Ratios from the ARL Statistics 1992-93 and 1993-94 (ARL Statistics and Measurement Program -- http://arl.cni.org/stats/Statistics/), Martha Kyrillidou has identified three issues affecting the reliability and validity of library data: "consistency across institutions and through time; ease vs. utility in gathering data; and values, meaning, and measurements."
Consistency
There is a great desire to make comparisons among institutions and somehow to rank or rate them according to supposedly objective criteria. In Canada we have the annual Maclean's ratings, which pit universities against one another in a sort of competition or popularity contest. The essence of the university is suddenly reduced to numbers. The ARL study points out that "lack of consistency in the way data are collected from institution to institution and in the way data are collected over time within the same institution creates problems" in making these kinds of institutional comparisons and in assessing changes over time. It is very difficult to develop enforceable standards for reporting data, and university policy and practice should take this into account.
Ease vs. Utility
It is especially tempting to develop performance indicators from data that can be easily collected. However, as the ARL study demonstrates, "what is easy to measure is not necessarily what is desirable to measure." We must develop a system of gathering data that is linked to assessing our progress towards meeting our stated goals, rather than setting goals based on the data we are already collecting.
Values & Meaning
We must be very careful not to mistake the indicators we develop for the values they are meant to reflect. Underlying academic and administrative values differ widely from library to library. One library may place a high priority on keeping unit costs for serial subscriptions low, while another may believe that providing more costly journals from the major publishers, which yields a higher unit cost, results in a higher-quality service. Unit costs for collections are also greatly affected by the range of academic programs the library must support, and these in turn may reflect institutional values. Some universities have large multi-branch library systems while others have large, concentrated central library facilities. When such institutions are compared by calculating expenditures per student, incorrect conclusions may be drawn about relative efficiency versus service delivered: a multi-branch system will usually show higher costs per student, but the difference may reflect a deliberate commitment to convenient, discipline-based service rather than any inefficiency.
In an article on performance indicators in the library at Trent University (CAUT Bulletin, Nov. '95, p. 3), Ken Field wrote: "Library stats tend to be out of context. When they are reported they do not show ... the level of success achieved by a user of the system. For instance, data on acquisitions, whether they be the amount of money spent on library acquisitions per student or the percentage of the library acquisitions budget out of the overall operating budget, will give one an idea of whether these amounts are rising or falling but they will not show the detrimental or positive effects that decreases or increases are having on the ability of faculty and students to carry out research or study."
The recently released CAUT Policy on Performance Indicators tells us that "the choice of indicators and PIs should be driven by policy -- not the other way round. The fact that something can be measured should not mean that we necessarily measure it." For too long, libraries have fallen into this trap. It has been relatively easy to gather "statistics" but very difficult to translate these numbers into evidence of quality of service and of the satisfaction of our user community. This in turn makes it very difficult for us to justify demands for increased funding.
Elizabeth Caskey is head of the David Lam Management Research Library at the University of British Columbia.