

Code Number: 004-120-G
Division Number: VI
Professional Group: Statistics
Joint meeting with: -
Meeting Number: 120
Simultaneous interpretation: No

Sensible data collection in information contexts

G. E. Gorman
School of Communications and Information Management
Victoria University of Wellington
Wellington 6001, New Zealand
E-mail: [email protected]

Abstract

Libraries and information agencies rely heavily on 'statistics' to describe their services, evaluate their activities and measure their performance. The data collection on which statistical analysis is based always involves assumptions and uncontrolled variables that adversely affect the purity and objectivity of the data, thereby contaminating their analysis and interpretation. This paper highlights some of these variables in an attempt to alert information managers to the dangers of data collection, and to encourage them to develop means of data control that make more effective use of the statistics collected.

Paper

Statistics in view

An important activity of library management at the end of the 20th century is the collection of data and the production of statistics. For most library managers, 'the more the better' applies - more data leads to more meaningful information, which in turn results in more informed decisions and thus more adequately managed services. The underlying assumption is that data on library activities can be converted into useful information, and that information becomes management knowledge.

It is thus understandable that data collection is viewed by many as the most fundamental activity in the management process. But it is less understandable that managers in general tend to see the data collection-interpretation-application process as something to be accepted without question, and that many apply this model over and over again without asking whether there is not a better way to collect and use the data. For a social scientist whose main interest is research methodology, and whose main occupation is teaching research methods to students of library and information science (LIS), this is a worrying state of affairs that has persisted for several decades.

The main purpose of this paper is to suggest that information specialists could usefully employ not only quantitative but also qualitative data collection and analysis methods to make their research more reliable and meaningful. A secondary purpose is to highlight some of the dangers that result from unconditional acceptance of the process of data collection and interpretation.

Quantitative variables say something about 'how much' or 'how many' of a certain property or attribute can be found in an object, event, person or phenomenon. An example is the number of computers available for student use on campus. Qualitative variables classify an object, event, person or phenomenon into categories according to the attributes that distinguish them. For example, the publication language of a particular work can be English, French, Hebrew or Spanish.

By looking beyond 'how much' or 'how many' and grasping the characteristics of the people, things, or activities being counted, librarians necessarily gain a more useful understanding of their own organizations and their work.

This is not a new concern, and the solutions offered in this presentation are not unique; but regardless of what has been said in the past, the problem remains, and it seems worthwhile to restate the realities. One of my colleagues at Victoria University, Rowena Cullen, has questioned the value of relying solely on quantitative data in the context of her research on performance measurement. Discussing the work of Pratt and Altman and of the Library and Information Statistics Unit (LISU), she questions the reliability of library statistics as the sole and trustworthy measure of library activity, and in particular whether such data permit any strong correlation between input and output: 'In particular, fundamental questions about the satisfaction of users with their library/information services are only touched on briefly by a small percentage of the studies considered here, although the authors state in several places that a more detailed analysis is possible and indeed desirable.'

Cullen goes on to show in her paper that a library is a social construct, and that performance measurement is therefore also a social construct. This means that we must look for a matrix encompassing values, focus and purpose - three axes necessary to understand the library as a social construct. In my view this social construct is a useful way of looking at libraries and information organizations, and once we are in the realm of social constructs, non-quantitative methods of data collection and analysis take on additional importance. This is particularly true in three areas: library users, collections and services (or inquiries); each of these will be discussed in turn.

Can users be counted meaningfully?

Data collection is based on the assumption that it is possible to achieve an appropriate representation of the objects or population examined. In a library, that assumption must be challenged when it is applied to the user population of a particular library service. Suppose, for example, that we are interested in the number of people who use a library. How do we count or measure this? A simple way is to count everyone entering or leaving the building, either mechanically or manually. And many libraries do just that - how many annual reports proudly boast that 'in 199X the library was visited by XXXX users'? But what does this tell us? Were they occasional users, serious scholars making extensive use of sophisticated research facilities, students looking for materials from a reading list, elderly people using the library as a social center, parents taking advantage of programs for their children? In other words, counting people tells us very little, since it does not identify the different user categories, and therefore also not the specific demands those categories may place on the services. This is an excellent example of the inadequacy of data as a basis for meaningful statistics or information of any value when they rest on so crude a perception of the user population: the assumption is incorrect, the data are flawed, and so their interpretation must be equally flawed.

What we really need is a profile of the actual library users - who they are, what they expect when they enter the library, how they use the facilities and services, what they think of the facilities and services, why they might choose to use the library electronically or on site, and so on. None of this useful data can be obtained by simply counting users. Furthermore, no kind of counting, not even the most sophisticated and detailed user survey, will tell us anything about potential users or non-users, even though this kind of information is what managers really want - they want, or at least need, to know how the potential market for their services is constituted, so that they can develop a management plan that gives them access to this reserve. Even in a country as aware of the importance of libraries as Australia, where more than 60% of the population use public libraries (whatever 'use' means), there is a large proportion of non-users whom we need to interest in our services. Any skilled researcher will tell you that collecting this more useful and therefore more sophisticated user and non-user data is difficult, time-consuming and costly; but there is no substitute for it.
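To make the contrast concrete, here is a minimal sketch in Python of the difference between a raw gate count and even a rudimentary user profile. The categories, purposes and field names are hypothetical, invented for illustration rather than drawn from any actual library system.

    from collections import Counter
    from dataclasses import dataclass

    @dataclass
    class Visit:
        """One library visit with a few profiling attributes (hypothetical schema)."""
        category: str   # e.g. "student", "scholar", "parent"
        purpose: str    # e.g. "reading-list", "research", "children's programme"
        on_site: bool   # a visit to the building, as opposed to electronic access

    visits = [
        Visit("student", "reading-list", True),
        Visit("scholar", "research", True),
        Visit("parent", "children's programme", True),
        Visit("scholar", "research", False),
    ]

    # The raw gate count: all an annual report usually offers.
    print(f"Total visits: {len(visits)}")

    # The profile: the same data broken down by who came and why.
    print(Counter(v.category for v in visits))
    print(Counter(v.purpose for v in visits))
    print(f"Electronic share: {sum(not v.on_site for v in visits) / len(visits):.0%}")

The gate count returns a single number; the profile begins to answer the questions about user categories and demands raised above, from the same recording effort.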

What is the value of counting holdings?

If my questioning of the mere counting of users, as opposed to counting specific groups of users and determining their assessment of particular services, is justified, can we then turn from people to objects - here, specifically, the holdings (however defined)?

In libraries it has been an almost biblical truth, especially since the 'good old days' when the Clapp-Jordan formula gained its supremacy, that counting the size of a book collection gives us data of both quantitative and qualitative value. Again, almost every annual report states that 'the size of the collection has now reached X volumes of books, Y current periodicals and Z electronic resources'. But what is the relationship between the size or quantity of a collection and its quality? This question repeatedly frustrates statisticians because it calls the value of the statistical enterprise into question. But - just as with users - as information specialists we must be interested above all in values and meanings, whether we are looking at users or at holdings.

Assuming that there is a relationship between quantity and quality - and I am by no means assuming it - it is still necessary to question the value of holdings data in terms of the relationship between size and level of service. To compensate for this, many librarians count the loans or uses of books, reference works, magazines, CD-ROMs and so on. However, any count of loans or uses can be distorted by unusual and unrepresentative use: by a single scholar working intensively on a project, by a borrower with a passing enthusiasm for a particular topic, and so on. In an age of economic priorities and fee-based supplementary services, one may also ask to what extent loans or uses of library materials are a valuable indicator of anything at all.
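A toy illustration, with invented figures, of how a single heavy borrower can distort a raw loan count: the total and the mean look healthy, while the median shows how modest typical use really is.

    from statistics import mean, median

    # Hypothetical loans per borrower for one subject collection: a single
    # scholar working intensively on a project dominates the circulation.
    loans_per_borrower = [1, 0, 2, 1, 0, 1, 48]   # invented figures

    print(f"Total loans: {sum(loans_per_borrower)}")              # looks busy
    print(f"Mean per borrower: {mean(loans_per_borrower):.1f}")   # inflated by one user
    print(f"Median per borrower: {median(loans_per_borrower)}")   # typical use is modest

    # Share of all loans attributable to the single heaviest borrower.
    print(f"Top borrower's share: {max(loans_per_borrower) / sum(loans_per_borrower):.0%}")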

It is of course possible to increase the value of data on loans and uses by introducing quality indicators of some kind. This usually means a 'ranking system', typically one that measures holdings in the local collection against an external standard. In New Zealand this may mean that Wellington City Library assigns a higher value to holdings that are also held by the New York Public Library; or a university library may rank publications from prestigious academic publishers and UN agencies higher than popular novels or its own country's government publications. But is it the job of a library or information service to respond to the needs of a specific local community, or should it be concerned with measuring itself against national or international criteria? If the former, then for Wellington City Library the materials that only it owns may be, and perhaps should be, more important than those also present in the New York Public Library.

In other words, counting holdings, whether of books or any other medium, is not in itself a measure of anything necessary; counting loans or uses is not a measure of the level or quality of use, but shows merely that media were taken off the shelves or accessed electronically, perhaps because nothing better was available. And has the question of demand been passed over? For too many librarians the level of demand for a service is not an important issue, while the size of the holdings or the number of uses is.

Are inquiries a substitute for counting users or holdings?

Counting users or holdings gives us some data, albeit of limited value; but it must be reiterated that too many library services hide behind these raw numbers rather than undertaking more meaningful analysis. An alternative used by some institutions is to count user inquiries (directed to staff, to electronic systems, or through other question-and-answer modes). Some excellent examples can be found in Libraries in the Workplace, one of the excellent reports produced by David Spiller and the LISU staff: 'How much research (by end-users themselves or through mediation) do you estimate was carried out through your library/information center?' 'How many inquiries do you estimate your library/information center answered?'

To answer questions about the number of inquiries, libraries tend either to record the number of inquiries within a certain period or to observe perceived user interactions with machine information resources. As always with data collection, however, it is easily possible for the data to be distorted or disguised by the person recording them - usually a library employee who may feel threatened by the procedure and who inflates the numbers to make the service appear busier than it actually is. For example, an employee may deliberately alter the figures and record more inquiries than were actually made; or, more typically, a simple directional question may be counted as an inquiry even though staff are really supposed to count only genuine requests for information.

As with the questions about collection turnover, here too we would like to know something about the level of the inquiries. Are all inquiries the same? No. Do some take more time and effort? Naturally. So why not ask, as the data are collected over time, about the time taken and the amount of detailed knowledge required to answer a question?

Consider how much richer the data would be if the following question were asked: 'What percentage of the total inquiries answered by the library/information center required, in your estimation, the following amounts of time?' - followed by time bands, from one minute to ten minutes and so on. Or how about asking for information about the type of inquiry: was it recreational, was it purely informational in nature, was it research-oriented? Would questions like these not give us better insights into the nature, depth and quality of the services offered?
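As a sketch of how such a question might be tabulated - the time bands follow the suggestion above, but the cut-offs, inquiry types and figures are all invented for illustration:

    from collections import Counter

    # Hypothetical inquiry log: (minutes taken, nature of the inquiry).
    inquiries = [
        (1, "directional"), (2, "recreational"), (4, "informational"),
        (7, "informational"), (12, "research"), (25, "research"),
    ]

    # Time bands echoing the question in the text (illustrative cut-offs).
    bands = {
        "1-10 minutes": lambda m: m <= 10,
        "11-30 minutes": lambda m: 10 < m <= 30,
    }

    # Percentage of all inquiries falling into each time band.
    for label, in_band in bands.items():
        share = sum(in_band(m) for m, _ in inquiries) / len(inquiries)
        print(f"{label}: {share:.0%}")

    # Breakdown by type of inquiry.
    print(Counter(kind for _, kind in inquiries))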

As inquiry services become increasingly automated, it is relatively easy to build recording mechanisms into electronic systems that allow the extraction of data on the length of queries, the amount of material consulted, and so on. Automated systems also raise the question of how to count interactions between users and machines. Here it is more difficult to falsify the data, or at least easier to exclude requests with no informational content from the count.
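A minimal sketch of the kind of recording mechanism envisaged here: an automated reference system appending one timestamped record per interaction, so that the length and type of queries can later be extracted without relying on staff tallies. The file name and record fields are assumptions, not any actual system's schema.

    import csv
    import time
    from datetime import datetime, timezone

    LOG_FILE = "query_log.csv"   # hypothetical log location

    def log_interaction(query: str, started: float, kind: str) -> None:
        """Append one record per user-machine interaction (assumed schema)."""
        with open(LOG_FILE, "a", newline="") as f:
            csv.writer(f).writerow([
                datetime.now(timezone.utc).isoformat(),   # when it happened
                round(time.monotonic() - started, 1),     # how long it took (seconds)
                kind,                                     # e.g. "informational"
                len(query),                               # crude size of the query
            ])

    start = time.monotonic()
    # ... the system answers the query here ...
    log_interaction("opening hours of the map room", start, "directional")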

At the other extreme is the collection of data about presumed user interactions, which is notoriously unreliable because it is necessarily based on distant, unobtrusive observation. This method of data collection is particularly exposed to bias, especially in a library environment in which cheap labor (e.g. students as observers) is used. The result can be selective recording of observational data: certain objects or relationships are more likely to be recorded by observers with particular interests, preferences or backgrounds. In other words, observation skills are essential, and if they are flawed the data will be flawed. Allan Kellehear's excellent work on observation includes a number of reservations about this data collection technique, which can be summarized as follows: the observer must be skilled in observing, and must never ascribe motives to the observed interaction or behavior. In an information context there is a natural tendency to assume that an interaction is task-related in some way (a user looking for information for a specific purpose), and this means ascribing a motive that may not even exist.

The problem with the decision-makers ...

One problem with academics advocating richer data-gathering techniques, of course, is that, as all practitioners know, we live in an ivory tower far removed from 'the real world'. Indeed. And it must be recognized that in 'the real world' the decision-makers, for whom so much data collection and analysis is done, simply do not want much detail, do not want to ponder data, but simply want a simple table showing that facility X is better than facility Y ('better' meaning a bigger budget, more information activity, larger holdings, etc.). This means we must recognize that data collection is determined to a significant extent by those to whom practitioners are accountable - and those to whom we are accountable often have 'bean-counter' mentalities.

Whether the decision-makers are government officials, managers, politicians or finance professionals, it is important to recognize that they have the power to dictate what data we collect, how the data are used, and how they are presented. Every library or information center is accountable to somebody else, because it depends on that somebody for its funding, that is, for its very existence. That 'somebody else' needs to understand the information needs of libraries. If outside decision-makers are allowed to dictate the requirements of data collection and presentation, then it is entirely realistic to expect them to structure these according to their own interests rather than those of the library - and why shouldn't they?

The increasing sophistication of automated library systems, and the greater ease with which numerical data - about users, about holdings, about usage - can be gathered, mean that we are more than ever attached to simple quantification as an evaluation measure. As this happens, decision-makers increasingly believe that data can be collected at the press of a key or the issue of a command. As a result, it becomes ever less likely that we will break out of the straitjacket of 'number crunching', as our overseers continue to view this as the most effective way of evaluating our services. Also, it must be admitted, the software available to help in the analysis of qualitative data (which are not easy to analyze) still lacks the user-friendliness and ease of interpretation necessary for such analysis. Despite positive assessments of qualitative data analysis software by evaluators such as Miles and Huberman, most computer software still applies mechanical processing methods to qualitative data that are intrinsically better handled by other, more time-consuming approaches.

A crucial distinction must be made between efficiency (producing something at the lowest cost per unit) and effectiveness (the successful completion of a task). Our decision-makers are almost invariably driven by considerations of efficiency, and the technology that enriches data collection and analysis certainly improves efficiency - but only efficiency. We information specialists, by contrast, are part of a service industry in which the successful fulfillment of our task - effectiveness - should come first.
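The distinction can be made arithmetical. In the following sketch, with invented figures for two hypothetical services, service A wins on the efficiency measure (cost per inquiry) while service B wins on the effectiveness measure (share of inquiries successfully resolved):

    # Invented figures for two hypothetical services.
    services = {
        "A": {"cost": 1000.0, "inquiries": 500, "resolved": 300},
        "B": {"cost": 1500.0, "inquiries": 500, "resolved": 450},
    }

    for name, s in services.items():
        efficiency = s["cost"] / s["inquiries"]         # cost per unit of work
        effectiveness = s["resolved"] / s["inquiries"]  # successful task completion
        print(f"Service {name}: {efficiency:.2f} per inquiry, "
              f"{effectiveness:.0%} resolved")

    # A is more efficient (2.00 < 3.00) yet B is more effective (90% > 60%):
    # optimizing one measure says nothing about the other.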

What can be done

From the preceding discussion a number of conclusions can be drawn about what we can do to move from number-oriented, efficiency-driven data collection and analysis towards more sensitive and more meaningful techniques. All are offered not as alternatives to, but as enrichments of, the standard statistical measures used universally in the information sector.

  • Seriously consider the genuine shortcomings of quantitative data collection and analysis methods and try to incorporate qualitative methods that allow a better understanding of library users, collections and services.
  • Focus less on the users as a whole and more on specific user categories and profiles of their wishes and needs.
  • Place less emphasis on the numerical aspects of the collection and more on appropriate indicators of collection quality.
  • Pay less attention to simple user inquiries and more to the nature and level of these inquiries.
  • Use qualitative data collection methods with full awareness of the problems associated with achieving value-free use of these methods.
  • Promote awareness among decision-makers that efficiency and effectiveness are not equivalent concepts, and that in the information sector effectiveness is a higher good than efficiency.
  • Work with software developers to develop software for qualitative data collection that is acceptable in terms of usability and analytical capabilities.

Conclusion

A recent paper by Dole and Hurych discusses 'new measurements' for evaluating libraries, particularly with regard to electronic resources. The authors provide an excellent overview of conventional measures as well as clear insight into current developments. It is encouraging that use-based measures are being considered, but depressing that they form only a very small component alongside the conventional cost-, time- and use-based measures. If this is the future of data collection in libraries, I am not convinced that we shall see any significant improvement in what is, from my point of view, an unsatisfactory situation.

More promising is the work emerging from the United States' Coalition for Networked Information (http://www.cni.org), and especially that of Charles McClure. In Assessing the Academic Networked Environment: Strategies and Options, he and Cynthia Lopata present a handbook for evaluating networks that is largely qualitative in its approach and makes a strong case for the use of qualitative methods in evaluating academic networks. This does not, however, appear to have met with general recognition, and it has certainly not had a major impact on the community of 'data collectors'.

In the final analysis, we argue for greater awareness among library professionals that meaningful data are contextual, that meaning rests on interpretation, that data derive from variables that are complex and difficult to measure, and that understanding is an inductive process. This differs from, but does not necessarily conflict with, the traditional quantitative approach of the statistician, which assumes that variables can be identified and measured relatively simply, and that norms and consensus can be derived from the data by deduction. Both have their place in information work; but let us not emphasize one at the expense of the other - or rather, let us not continue to over-emphasize one (the quantitative) at the expense of the other (the qualitative).

Remember the classic work of Webb and others on unobtrusive measures, Chapter 8 of which contains the impassioned plea to researchers to use 'all available weapons of attack'? More than thirty years later, it is high time that information professionals heeded this call and looked beyond their numbers to sources of potentially deeper meaning.

Bibliography

1. Hafner, A.W. (1998). Descriptive Statistical Techniques for Librarians. 2nd ed. Chicago: American Library Association.

2. Pratt, A., and Altman, E. (1997). 'Live by the Numbers, Die by the Numbers.' Library Journal, April 15: 48-49.

3. England, L., and Sumsion, J. (1995). Perspectives of Public Library Use: A Compendium of Survey Information. Loughborough: Library and Information Statistics Unit.

4. Cullen, R. (1998). 'Does Performance Measurement Improve Organizational Effectiveness? A Post-modern Analysis.' In: Proceedings of the 2nd Northumbria International Conference on Performance Measurement in Libraries and Information Services, Held at Longhirst Hall, Northumberland, England, September 7-11, 1997, 3-20. Newcastle upon Tyne: Information North.

5. Clapp, V.W., and Jordan, R.T. (1965). 'Quantitative Criteria for Adequacy of Academic Library Collections.' College and Research Libraries 26: 371-380.

6. Spiller, D., Creaser, C., and Murphy, A. (1998). Libraries in the Workplace. LISU Occasional Papers, 20. Loughborough: Loughborough University of Technology, Library and Information Statistics Unit.

7. Kellehear, A. (1993). The Unobtrusive Researcher: A Guide to Methods. St Leonards, NSW: Allen and Unwin.

8. Miles, M.B., and Huberman, A.M. (1994). Qualitative Data Analysis: An Expanded Sourcebook. 2nd ed. Thousand Oaks, CA: Sage Publications.

9. Dole, W.V., and Hurych, J.M. (1999). 'New Measurements for the Next Millennium: Evaluating Libraries in the Electronic Age.' Paper prepared for CoLIS3: The Third International Conference on Conceptions of Library and Information Science, Dubrovnik, Croatia, 23-27 May.

10. McClure, C.R., and Lopata, C. (1996). Assessing the Academic Networked Environment: Strategies and Options. February 1996.

11. Gorman, G.E., and Clayton, P.R. (1997). Qualitative Research for the Information Professional: A Practical Handbook. London: Library Association Publishing.

12. Webb, E., et al. (1966). Unobtrusive Measures: Non-Reactive Research in the Social Sciences. Chicago: Rand McNally.