If you’ve been reading the latest American Libraries Direct, you’ve seen the article about the “hate speech” tag that has been assigned to a significant number of works by Ann Coulter at Mount Prospect Public Library.
A patron, scrolling through a list of books by Ms. Coulter, discovered the books and complained:
“I don’t understand why the library is letting people make political statements on their site,” said Alaimo, a political conservative. “By not taking it off, the library is agreeing with it.”
Fortunately, as the article goes on to state, the library officials disagree with Mike Alaimo, the library patron who made the complaint. Unfortunately, many other library officials reading this are nodding their heads in affirmation that it was only a matter of time before a patron complained – after all, how can we “control” our catalogue and what goes into it if we allow users to generate their own tags and contribute to our catalogues?
This is a heated discussion that often comes up on AUTOCAT, with the most recent round occurring over Sarah Palin’s new book and the tags “I can see Russia” and, if memory serves, “Sea of Pee”.
I’ve had a look at Mount Prospect’s catalogue and records. Mount Prospect Public Library is using the discovery layer AquaBrowser (AB). I am very familiar with this product because our library is just finishing the AB implementation process. AB uses LibraryThing tags to populate the user tags in bib records. This is a perk, because many libraries don’t have the population or patron usage to make tagging successful without an underlying foundation.
However, as tags are added by users, just as in LibraryThing, they are weighted by the number of times they are used. Tags that the general user (or we) would consider absurd, ridiculous or inaccurate become buried as more and more users tag with similar or “better” tags. In the end, the tags that aren’t useful fall to the very bottom of the retrieval list as newer, more useful tags are added or reaffirmed. This system only fails when there are very few tags, or when the library decides to display all tags associated with an item rather than the top 5, 10 or 15. Is there a need to display more than 10 or 15 terms? Usually, the terms in the top 10 are the most useful and most frequently used.
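The weighting described above is essentially frequency ranking with a display cutoff. Here is a minimal sketch of that idea (the function name and sample tags are hypothetical, not AquaBrowser's actual implementation): each application of a tag raises its count, and only the top N are shown, so a lone provocative tag sinks below the cutoff as ordinary descriptive tags accumulate.

```python
from collections import Counter

def top_tags(tag_events, n=10):
    """Return the n most frequently applied tags, most popular first."""
    counts = Counter(tag_events)
    return [tag for tag, _ in counts.most_common(n)]

# Hypothetical example: six users tag "politics", four tag "essays",
# and one applies a fringe tag. With a top-2 display, the fringe tag
# never surfaces.
events = ["politics"] * 6 + ["essays"] * 4 + ["hate speech"]
print(top_tags(events, n=2))  # → ['politics', 'essays']
```

The design point is that no tag is deleted; unpopular tags simply stay below the display threshold, which is why the system struggles only when tag counts are tiny or the library shows the full list.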
AB also has an option for a “black list” that allows a library to build an index of terms that are not allowed. There is an existing standard list shared by many libraries using AB, and each individual library can add to or remove from that list as needed or desired. As a result, socially unacceptable terms (as determined by each library) are barred from appearing in user tags and reviews. Tags that reflect public opinion, emotions, ideas or views, however, should not be barred. After all, these are user tags, not access points created by the library – by professionals. And, despite many professionals’ concerns, user tags are not inserted into our records; they merely sit “on top”, like another layer of icing on a cake.
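Conceptually, the black list is just a shared baseline set that each library layers its own additions and removals onto before filtering user input. A minimal sketch, with placeholder terms and hypothetical function names (AquaBrowser's real configuration will differ):

```python
# Placeholder shared baseline list of barred terms.
BASE_BLACKLIST = {"offensiveword"}

def build_blacklist(base, add=(), remove=()):
    """Each library derives its local list from the shared baseline."""
    return (set(base) | set(add)) - set(remove)

def allowed(tag, blacklist):
    """A user tag is displayed only if it is not on the local list."""
    return tag.lower() not in blacklist

local_list = build_blacklist(BASE_BLACKLIST, add={"anotherterm"})
print(allowed("politics", local_list))       # → True: opinion tags pass
print(allowed("offensiveword", local_list))  # → False: barred term
```

Because only the term itself is checked, tags expressing opinions or emotions pass through untouched, which matches the distinction drawn above between unacceptable language and unpopular viewpoints.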
Studies have shown that user tagging converges on a consensus vocabulary acceptable to the users themselves. The masses outweigh the handful of individuals who tend toward the extremes – whether through individual points of view or a creative use of language to get around the black-listed words.
Steven Arakawa, Catalogue Librarian for Training and Documentation at Yale University, made a good point on AUTOCAT when speaking about the controversial tags assigned to Sarah Palin’s new book:
“It’s to be expected that political and cultural friction works will generate tags that push the envelope of decorum and often do more harm than good for the position being advocated. And the official “tags” provided by catalogers can introduce objectivity and neutrality which is a positive contribution if sometimes bland, like network news.
But no one seems to have put in a good word for taggers’ specialist knowledge–as opposed to emotional connection–regarding many niche subjects, knowledge that might very well go beyond the general knowledge of the average cataloger.”