Competency lists considered harmful. Can we rethink them?

Could we talk about skill and competency lists, please? They’re everywhere, inescapable as change. Professional organizations have made dozens. Dozens more come from the LIS literature as content analyses of collections of job ads or position descriptions. Whatever job you do or want to do in libraries, someone’s made a list of the skills you must supposedly have mastered.

I’m not convinced these lists are as useful as they could be. I’m completely convinced they do a lot of unnecessary harm. If we must have them, we could stand to rethink how we make and present them.

I avoid showing competency lists to my students because they reliably freak out over list length and complexity, never mind the highly prescriptive, even accusatory, tone in which the list’s surrounding text is often written. The more conscientious the students are, the worse they panic. Worse yet, I’ve seen their panic send them into Impostor Syndrome tailspins that sap their curiosity as well as their willingness to tackle exactly the new and growing areas where academic libraries most need them. I do my best to talk them down, but it doesn’t always work. Sadly, it’s commonly the brightest, most promising students who retreat fastest and furthest, afraid they’re in for nothing but harsh judgment and failure if they pursue jobs described in these lists. Frankly, I think these lists too often provide workplace grist for exactly the harsh judgments my students are desperate to avoid. Skill lists unaccompanied by information about available resources and job context make it easy to fall prey to the fundamental attribution error when something goes wrong, blaming a student or working librarian for not having enough (or the right) skills instead of doing the broad, honest analysis of the situation that might implicate the library in some or all of the difficulty.

A number of competency-list interventions, some easier to implement than others, could stem the unproductive panic. Some sense of priority, some ranking by centrality to the job or association with specific job tasks, would be an enormous relief. A roadmap would be even better: “start here, expand into this, eventually pick up that, but only the die-hards find that other thing useful.” My students experience lengthy, unranked, unprioritized laundry lists of skills as accusations that they can never learn enough or be good enough, or even subtextual gloating that they’ll never win jobs. Understandably wanting to dump the stress, they turn furiously on us instructors for yet another tired round of the theory-praxis wars. This is neither necessary nor useful. No one really expects students to pick up a lengthy career’s worth of knowledge in a mere twelve to fourteen three-credit courses! How tremendously insulting to longtime professionals such an expectation would be. The problem is that laundry lists of unranked skills imply precisely that expectation.

Another useful change, then, though it would take real research, would be an indication of how, and roughly when in their careers, practitioners acquire job-related skills and knowledge. Taking scholarly communication as an example, I learned to read journal-publication contracts by experience on the job, and I strongly doubt I’m alone among scholarly-communication specialists in that. The same goes for any number of technical chores, too numerous and boring to list, specific to the various roles I’ve undertaken. Not only would a sense of timing, optionality, and learning modality relieve my students’ (and consequently my) stress, it would also help librarians who need to update their skills, cross-train in something new to them, or change their specialty. It doesn’t always make sense to try to learn some things in classrooms, much less learn everything right away. It’d be awfully nice to know which skills belong where and when.

It doesn’t help that competency lists are written from the point of view of some sort of neo-Platonic universal library that does everything imaginable in-house and is simultaneously tiny, gigantic, and every size in between. In real academic libraries, the skills needed for what is nominally the same job are partial, context-based subsets of the whole. A library whose institutional repository runs on open-source software managed in-house will need different skills in its institutional-repository manager from a library that pays for a vendor’s software-as-a-service offering. A library working toward a campus open-access policy needs different people skills from one whose faculty have already implemented such a policy. When competency lists do not clearly tie listed skills to real-world tasks and situations, they fail to heed the contexts that shape the need for certain skills, much less help list users winnow the list wisely in accordance with their local context.

Distinguishing between a skill or knowledge that must always be at the librarian’s fingertips and one that can be looked up as needed would be nice. “Publisher self-archiving policies” often appear on scholarly-communication competency lists. Nobody in the field would ever go about memorizing them all, though, not least because they change on a whim. Looking them up as needed is what SHERPA/RoMEO is for, and when that service doesn’t come through, librarians investigate publisher websites or read example contracts at time of need. My students don’t know that, though, and it’s impossible for the inexperienced to tell the difference from the competency lists. The lack of differentiation between “know this” and “know where to look this up” doesn’t just panic my students, of course; library managers and search committees can be forgiven for letting competency lists send them on wild-goose chases for employees with encyclopedic knowledge on a topic that practitioners in the field actually just look up.

That leads me to job-ad content analyses in the LIS literature, a genre I honestly find exasperating. My problem isn’t so much with content-analysis technique as with the uncritical acceptance of job ads as realistic guides to employee skills. Stop me if you’ve heard this one before: a search-committee chair sends out a plea to librarian friends on social media, “We’re hiring a Library Shininess Specialist, which we’ve never had before. Somebody please tell me what I should put in the job ad!” Or this one: the skills and responsibilities sections of a job ad are nothing but giant laundry lists compiled from other job ads and content analyses from the LIS literature, coupled with stingy or even absent discussion of what resources the library will provide to whoever wins the job. When I see these social-media requests and patchwork ads, I make a mental note to warn my students against applying to the job. These ads come from libraries that have not thought hard enough about their context and their milestones, much less what a new Library Shininess Specialist needs from their library employer in order to succeed. I don’t want my new graduates to burn out and leave.

In other words, too many job ads are pure wishlists. Some are even wishlists patched together from other wishlists! Unfortunately, the cost of an unrealistic, naïvely-compiled laundry list of a job ad does not become evident until a search fails or a hire doesn’t work out, which is not enough to keep bad ads from being written and published in the first place. If the LIS literature has any way to tell the difference between a thoughtful, carefully-crafted job ad and a hasty, sloppy patchwork wishlist, I have yet to see it; bad ads are analyzed as though equivalent to excellent ones. Nor does the job-ad analysis literature assess job-ad outcomes. This is understandable, as gathering data would be fraught with human-resource confidentiality pitfalls, but the unfortunate result is that no one actually knows how well job ads do at attracting viable candidates, much less achieving successful hires. Why, then, do we grant job-ad analyses so much credence? In addition to feeding back into more bad job ads, these analyses also fuel competency lists, which is nothing if not troubling; a realistic competency list cannot be grounded in untested, assessment-free wishlists. In LIS education, these content analyses and the resulting competency lists become sticks to beat educators with, fueling staggeringly impractical expectations from students and practitioners about what a two-year master’s curriculum can realistically accomplish. Garbage in, garbage out, garbage everywhere!

I like the idea of competency lists, just not their present construction. In an ideal world, these lists would reduce anxiety in library-school students and practitioners committed to lifelong learning, channelling their energy productively by breaking down jargon-laden job titles into a sensible succession of digestible pieces. Properly coupled with task analysis, competency lists could also be useful professional advocacy tools, expressing clearly what librarians really do with their days. Finally, competency lists ought to be much better tools than they are for libraries and librarians working out how to implement new initiatives. If we reconceive these lists as tools to help librarians and library-school students plan their learning, and libraries plan their evolution, we can perhaps escape the anxiety, censorious finger-pointing, and poor planning such lists far too often incite today.

Note: This post is copyright 2014 by Library Journal. Reposted under the terms of my author’s agreement, which permits me to “reuse the work in whole or in part in your own professional activities and subsequent writings.”

Linked data in the creases: blinkered by BIBFRAME, have we missed the real story?

I keep you in the creases / I hide you in the folds / Protect you from the sunlight / Shield you from the cold. / Everybody said they were glad to see you go / But no one ever has to know.

—Amber Rubarth, “In the Creases”

American catalogers and systems librarians can be forgiven for thinking that all the linked-data action lies with the BIBFRAME development effort. BIBFRAME certainly represents the lion’s share of what I’ve bookmarked for next semester’s XML and linked-data course. All along, I’ve wondered where the digital librarians, metadata librarians, records managers, and archivists—information professionals who describe information resources but are at best peripheral to the MARC establishment—were hiding in the linked-data ferment, as BIBFRAME certainly isn’t paying them much attention. After attending Semantic Web in Libraries 2013 (abbreviated SWIB, from the German Bibliotheken, as the conference takes place in Germany), I know where they are and what they’re making: linked data that lives in the creases, building bridges across boundaries and canals through liminal spaces.

Because linked data is designed to bridge diverse communities, vocabularies, and standards, it doesn’t show to best advantage in siloed, heavily-standardized arenas such as the current MARC environment. If BIBFRAME sometimes feels uncompelling on its own, this is likely why! Linked data shines most where diverse sources and types of data are forced to rub elbows, an increasing pain point for many libraries trying to make one-stop-shopping discovery layers and portals. I first noticed an implementation that spoke to that truth in 2012, when the Missouri History Museum demonstrated their use of linked data as a translation layer between disparate digital collections with differing metadata schemes. SWIB13 offered plentiful examples of similar projects, including an important one from the US side of the pond. In building the AgriVIVO disciplinary search portal, Cornell University walked away from traditional crosswalks, instead finding the pieces of information they needed from whatever metadata their partners could give them and expressing those in linked data. This just-in-time aggregation approach lets AgriVIVO welcome and enhance any available metadata while avoiding tiresome and often fruitless arguments about standards and metadata quality.

What interests me most about this design pattern is how it neatly bypasses problems that led earlier aggregation projects to fail. The ambitious National Science Digital Library project of the mid-2000s foundered on the average science project’s inability to get to grips with XML, never mind setting up as an OAI-PMH provider. (Chapter 10 of Carl Lagoze’s dissertation offers the gory details, for those interested.) AgriVIVO, instead, takes Postel’s law to heart: it accepts whatever it is given, and gives the cleanest linked data it can back to the web. As this design pattern catches on, we could see less friction and standards-squabbling among information communities, which will be free to describe their materials as they see fit while still contributing to the growing interconnection of the cultural-heritage web. Librarians, archivists, and museum and gallery curators meshing together on the web while doing their own thing—what an opportunity!
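The just-in-time aggregation pattern described above can be sketched in a few lines: accept whichever fields each partner feed happens to supply, map the recognizable ones onto common properties, and emit clean triples, silently skipping whatever doesn’t map. This is a minimal illustration of the idea, not AgriVIVO’s actual code; every field name, prefix, and record below is hypothetical.

```python
# Postel's-law aggregation sketch: be liberal in what you accept
# (ad-hoc partner metadata), conservative in what you emit (triples).
# All schemas and values here are invented for illustration.

# Partner feeds use different, incomplete field names.
FIELD_MAP = {
    "title": "dct:title", "dc.title": "dct:title",
    "creator": "dct:creator", "author": "dct:creator",
    "topic": "dct:subject", "keywords": "dct:subject",
}

def to_triples(subject_uri, record):
    """Keep whatever maps cleanly onto a known property; skip the rest."""
    triples = []
    for field, value in record.items():
        prop = FIELD_MAP.get(field.lower())
        if prop and value:
            triples.append((subject_uri, prop, str(value).strip()))
    return triples

# Two partners, two ad-hoc schemas -- each contributes what it can.
feed_a = {"dc.title": "Soil Nitrogen Studies", "author": "A. Researcher"}
feed_b = {"title": "Crop Rotation Data", "keywords": "agronomy", "misc": "?"}

graph = to_triples("ex:rec1", feed_a) + to_triples("ex:rec2", feed_b)
for s, p, o in graph:
    print(s, p, o)
```

In a real implementation the target properties would be full URIs from vocabularies such as Dublin Core and the output would be serialized as RDF; the point of the pattern is that nothing a partner sends can make ingestion fail.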

It should surprise no one that the premier conference for semantic web technologies in libraries is held in Europe; European libraries have led actual linked-data implementation all along. If I had to guess why, I would point to their small size, small numbers, and resulting agility, as well as their clear and unchallenged technology leadership within their countries’ libraries. European national libraries, from what I can see, tend not to bog down as much as American library communities do in grindingly political, perfectionistic, top-down standards processes. Instead, they eye possibilities critically and solve problems however they think best, unconstrained by one-true-standard thinking.

This lent a delightfully grounded ambition to several of the development projects I saw at SWIB13. I was taken rather aback at first by the notion of an entire e-resource management system predicated on linked data—it struck me as frighteningly complex and fraught—but on second thought, if developer Leander Seige is solving a real data-integration problem for his library with the tools he has to hand, why not? Similarly, the ontology- and vocabulary-mapping projects at the Plattner Institute, Stuttgart Media University, and Mannheim University Library are not random pie-in-the-sky experiments, but active real-world problem-solving where linked data is the best-fit solution rather than just a trendy buzzword.

The presentation that most refined my thinking about linked data was Martin Malmsten’s “Decentralisation, Distribution, Disintegration—towards Linked Data as a First Class Citizen in Libraryland.” (I would link to the video if I could, as the slides capture very little of Malmsten’s compelling arguments.) Malmsten sold me at once when he related how the National Library of Sweden, sick of MARC behaving as a stumbling-block in many of their projects, declared “Linked Data or die!” and audaciously set about making it happen. Along the way, the Swedish developers discovered that serialization formats like MARC and XML, as well as standards like METS, constrain innovative thinking too much and invariably involve shoehorning data into forms and formats that don’t quite fit it.

What linked data let Malmsten and his compatriots do was express their data in the manner best befitting it, while “keep[ing] formats and monsters on the outside” by automating the re-expression of the data in older, staider standards as necessary—and only as necessary. If broadly adopted, the National Library of Sweden’s approach frees us from the eternal lipstick-on-pig question of how best to present eccentric, often inadequate, almost always expensively-homegrown data to patrons. Instead, we will put the patron experience first, asking “What data do patrons actually want to see or use, and before we go creating it, does it perhaps exist already in the vast existing web of data?”

Malmsten also made clear that “Linked Data + UX = actually useful data.” Linked data on the inside is a hard sell without obvious user-experience benefits on the outside for both patrons and librarians, a point my rather eccentric keynote entirely agreed with. For that reason, France’s OpenCat effort was my favorite linked-data project from SWIB13. Since the National Library of France has already done considerable linked-data authority control on names, subjects, and titles, they are now leveraging it to build lightweight, easy-maintenance, enriched OPACs for some of the smallest libraries in the country, libraries too small for MARC to be an easy or comfortable fit.

After SWIB13, I firmly believe that it isn’t the big standards-development efforts that will shape linked-data adoption in libraries. Linked data will grow in the creases, the folds, the cracks of our notoriously rickety metadata edifices. It will often grow in the dark unnoticed, shielded by its champions, as with a project I heard about informally (and won’t name) that nearly died by cold administrative fiat before its developer made it too amazing to kill off. As it quietly solves stubborn problems, empowers our smallest libraries, and connects libraries big and small with the larger web, linked data will remake more and more library data in its image—and if good interface-design practices come along for the ride, no one ever has to know!

Note: This post is copyright 2013 by Library Journal. Reposted under the terms of my author’s agreement, which permits me to “reuse the work in whole or in part in your own professional activities and subsequent writings.”

The reskilling muddle: wasted time, opportunity, and money

I’ve had some strange experiences teaching workshops and continuing-education courses over the last couple of years (details have been glossed over for obvious reasons):

  • The learner who finished one of my continuing-education courses and asked “So what do I do with this now?”
  • The learner who, upset and viscerally offended, demanded in a different continuing-education course to know why they had to learn a topic that happened to be outside their existing knowledge base
  • The would-be learners who pay for an online course they barely look at, much less complete

These challenges simply don’t happen in my regular library-school classrooms. Sometimes I can take them as a salient reminder to explain clearly the “why” behind the “what” in my teaching. More often, though, I find myself worried, both for these learners and for the state of the overall pool of professional skill.

It’s no particular surprise—often it’s necessary—for a library-school student to take a course they don’t entirely understand the “why” of. This is why library schools have advising; students who don’t yet have a coherent professional body of knowledge need guidance from someone with an overall concept map of LIS and the patience and intuition to match them with courses they can’t know they need, or will enjoy. In the classroom, students who don’t entirely know where I’m going with something—which is reasonable—are willing to spot me some time to make it clear, or explain the applications.

So what’s different in the continuing-education context that makes the occasional working professional either completely check out or go on a tear? In all honesty, I don’t know. I started on this train of thought because I felt so surprised and helpless over the situations described above. I do have some highly tentative guesses to share for discussion.

Library-school students have made a choice—more or less well-informed, of course, but still a choice—to be where they are, and the fact of that choice predisposes them to be open-minded about what I can offer them. Curiously, though, the more steeped in the library or archive environment a new student is, the less tolerance they seem to have for knowledge outside their experience. They aren’t quite as explosive or avoidant about it as the learners I discussed above, but they’re quite definitely on the same wavelength. I can’t help but suspect that the inflexible boundedness of many information positions, especially but hardly exclusively on the paraprofessional level, is damaging professional curiosity in some people. This certainly seems a shame.

Some of my continuing-education learners are being pushed into my courses by their management. We don’t currently ask our non-completers their reasons for dropping out, but in interactions with some who drop halfway I have seen extrinsic rather than intrinsic motivations at work. I hear three messages in that: first, that some professionals are having trouble motivating themselves to learn; second, that trying to force them into it does not create that motivation; and third and least surprising, without that motivation, learning can fail altogether. Another factor is time, naturally enough. I am always disturbed when a learner tells me they are in the course because their employer told them to, but their employer is giving them no work time to complete the coursework.

I also see a species of magical thinking around reskilling generally and technology-related learning in particular. Some professionals hear acronyms, buzzwords, or hype and immediately leap to buy training, without doing enough groundwork to understand how the topic might fit into their professional environment, or to know whether they themselves are genuinely curious about it. A few view the training almost as a drowning man might view a life-raft, hoping desperately that the topic (whatever it is) will insulate them from change. That’s an awful lot to expect of a four- to eight-week class.

As I was evaluating applicants for the last Digital Humanities Data Curation institute, I saw another odd phenomenon: workshop applicants (and there were many, some of them librarians) whose wealth of experience suggested they had little if anything to learn from the workshop. To me, this suggests a strong desire for some sort of credential as external evidence of existing knowledge. That desire aligns somewhat with the magical thinking about training described above: a defensive move to demonstrate value to employers, rather than a true need or desire to learn.

Whether my guesses turn out right or wrong—and I would dearly love to see more contemporary research on how professionals choose learning opportunities—what’s already clear to me is that an awful lot of time and money is being wasted on futile, unneeded, or bad-fit training. Clarifying, as a profession, what we expect in terms of lifelong learning, and sorting out how best to guide professionals in choosing what to learn and how to learn it, would be a terrific start.

Note: This post is copyright 2013 by Library Journal. Reposted under the terms of my author’s agreement, which permits me to “reuse the work in whole or in part in your own professional activities and subsequent writings.”

Breaking the panopticon: who’s watching library patrons, and can we stop them?

Teaching from the real world is pure joy most of the time. Students love it when they see something from class in the pixels of library journals and magazines, the mass media, or the technology press. Most of the time, discussing change while it’s happening is a visceral lesson in professional adaptability and continuous learning. I could have done without having to teach technology-related privacy issues to my “Digital Trends, Tools, and Debates” students in the shadow of the NSA’s newly-revealed surveillance practices, however.

Those who watch my Twitter feed have lately endured many 140-character howls of helpless dismay as I read the tech press in the late afternoons. Leaving that anger aside as I wrote and recorded lectures nearly broke me. Boiling immensely complex facts based on technologies no less complex into a snappy and comprehensible lecture is hard enough, but it’s a challenge I’m well-used to; disciplining myself to avoid bursting into spittle-flecked rants was the hard part.

As I always do, I explained to my students why I chose to teach them about this. My own visceral outrage aside, the simplest reasons call back to parts of the ALA Code of Ethics:

II. We uphold the principles of intellectual freedom and resist all efforts to censor library resources.

III. We protect each library user’s right to privacy and confidentiality with respect to information sought or received and resources consulted, borrowed, acquired or transmitted.

V. We treat co-workers and other colleagues with respect, fairness, and good faith, and advocate conditions of employment that safeguard the rights and welfare of all employees of our institutions.

VI. We do not advance private interests at the expense of library users, colleagues, or our employing institutions.

What price intellectual freedom and freedom to read, never mind privacy and confidentiality, when the NSA has built weaknesses into security standards and frameworks that could help other snoops grab every byte passing through a library computer, or over the library wireless network? When Amazon tracks library checkouts to Kindle devices, creepily attaching buy-this-book come-ons to due-date notices? When any number of commercial data warehouses track patron information behavior on the computers and wifi networks libraries provide?

The Internet in general and the web in particular have become Jeremy Bentham’s panopticon. That panopticon unquestionably surveils us and our patrons. If libraries are truly to be the privacy-protecting, commercial-free civic spaces they aim to be, shouldn’t we librarians extend the principles of the ALA Code of Ethics to digital environments as well? What would that take?

The scope of the problem

This isn’t only about circling library wagons against the NSA. Who surveils library staff computers? In many K-12 environments, the answer is obvious and in some ways troublesome. To my surprise, however, schools do not harbor the only library environment where employer surveillance may appear. When I asked, several librarians in academic and public libraries privately voiced suspicions to me that either library IT or the IT establishment in the library’s parent organization was logging behavior on work computers. Even more troublesome: they did not know what was and wasn’t logged, had no available policy on the question, and could not find out more. I don’t find this uncertainty indicative of what the Code of Ethics terms “respect, fairness, and good faith.”

My entirely unscientific and not-to-be-relied-upon information gathering for this column suggested that surveillance may be commoner when library IT is not controlled by the library. This makes intuitive sense. Not only do many corporate, government, and academic IT centers not share library ethics, they operate under different constraints and directives. A library, for example, can push back against overreaching copyright enforcement directives; we understand fair use and consider fair-use advocacy part of our mission. When the RIAA, a major serials publisher or aggregator, or similar copyright-owner interests lean on IT, however, IT has little choice but to make the problem go away with minimal hassle and minimal legal risk to the larger institution. This is liable to mean surveillance (in the form of log monitoring at minimum) and no-longer-neutral web access.

As for warding off surveillance from private interests, I’ve been teaching my Digital Trends students about the commercial web-tracking establishment and available techniques to defeat it for years. When I asked Twitter and FriendFeed whether any libraries had defended against this surveillance by adding anti-tracking plugins to the web browsers in stock patron or staff computer configurations, however, I came up completely empty. I found that both unexpected and troubling. I would dearly love comments here from librarians who have considered this issue and implemented privacy-protecting measures in their libraries!

Ignorance is part of the problem, certainly. My own wake-up call came a couple of weeks ago, when I interviewed Brendan O’Connor, a student in the UW-Madison School of Law, about the cheap, Tarot-deck-sized wifi surveillance box he calls the “F-BOMB” along with its monitoring system CreepyDOL, built as a proof-of-concept assessment of the privacy threats involved in much normal everyday network use. Before talking to Brendan, I hadn’t any notion how much data wifi-enabled devices such as laptops, tablets, and smartphones regularly and unstoppably leak, nor how oblivious to personal-data leakage many websites (including librarian favorites such as newsfeed-readers) are. Supposedly I teach technology! If there’s this much I don’t know, when I make constant and regular effort to keep up with technology-related privacy issues, I can’t help but be concerned about the level of awareness in librarianship generally. How can we decide what to do about a phenomenon we don’t understand?

What to do?

That we as a profession have a duty to advocate with legislators and technology providers for better privacy protection in communication protocols, on websites, and in mobile platforms seems beyond question. Frustrated with the stalemate he perceives in the technology establishment around personal privacy, Brendan O’Connor suggested to me that privacy protection could be usefully framed as a consumer-safety issue. I think that a promising approach, but I see no reason standard library ethical stances around personal privacy as an inescapable component of intellectual freedom and citizenship cannot make themselves heard as well. Available fixes are highly technical, of course, but the needed advocacy to force the technology establishment into making those fixes relies on exactly the sort of ethical suasion that libraries and their professional organizations excel at.

What immediate technical fixes could libraries implement? When I brought up browsing privacy on FriendFeed, librarian Aaron Tay of the National University of Singapore wondered whether I was advocating that all libraries place their computers on the (possibly NSA-compromised, but still best-of-breed) Tor anonymity network. I’ve used Tor now and then, so I know it stresses bandwidth and degrades the apparent responsiveness of web browsing somewhat; I don’t doubt many of our patrons would find this an unacceptable tradeoff. Stephen Francoeur of Baruch College noted that anti-tracking browser plugins, if poorly chosen or poorly configured, could block cookies that some websites require in order to function properly. Both critiques have merit.

To my mind, libraries can consider a continuum of responses, with universal Tor implementation, perhaps allied with a draconian Javascript-killer like NoScript that is known to break many websites, on the extreme (doubtless infeasible) end. On the other end of the continuum lies pure education: block nothing, explain everything. The website “Terms of Service; Didn’t Read,” which grades the quality of the privacy policies at many commonly-used websites, offers a plugin for many popular browsers (Internet Explorer excluded, unfortunately) that puts its grades right in the browser interface for perusal. Some anti-tracking plugins, Ghostery for example, can be configured not to block, but to display information about which trackers are active during a web browsing session. I encourage everyone who works in libraries to investigate and test these plugins, at home if not at work! Let us share what we learn, so that librarianship as a whole starts to frame a digital-privacy strategy.

Where is the middle course? The “Do Not Track” browser preference, lackadaisical though support for it is, is worth triggering by default just as a statement of intent. Anti-tracking plugins are also well worth considering for library staff and patron machines. I’ve been using them for some years, and hardly ever notice browsing problems. On the rare occasion a site does break, the fix is generally a two-click temporary disabling of the anti-tracking plugin for that site, something I hope could be taught relatively easily to reference and tech-support staff. Wifi security is still rather weak, and implementing it unquestionably creates tech-support issues, but with a heavy heart I confess that secured access points now seem preferable to me over open ones.

As for surveillance closer to home, at minimum libraries owe their staff transparent policy and procedure. Even libraries with no choice but to surveil staff, as in many schools, should be straightforward about what is happening. Even libraries that don’t control their own computers can challenge IT to be transparent and to protect privacy whenever possible. We can at least avoid turning into mini-NSAs, hiding snooping behind silence and obfuscation!

It is true that some anti-tracking technologies create browsing hassles. It’s also true that institutions we favor and rely upon, such as news media, themselves rely on tracking to improve their balance sheets as they move online. Finally, it’s true that some digital invasions of privacy are well beyond our control. As I thought about all this, though, I found myself repeating “not in libraries, not here” over and over again under my breath. The NSA may, legally or no, track the web traffic of foreign nationals, catching many American citizens in the backwash, but not here. Advertisers may compile behavior portfolios for promiscuous sale, but not here. Social media may track their users across the entire web, but not here. Digital panopticons may spring up like weeds, but not here. Not here. Here, in libraries, privacy should be the default.

I am grateful to Myron Groover, the Library Society of the World, and Twitter correspondents who wish not to be named for giving me examples of library-computer surveillance and helping me shape my thinking. I am not affiliated in any way with the websites or browser plugins mentioned herein, except as user and classroom demonstrator.

Note: This post is copyright 2013 by Library Journal. Reposted under the terms of my author’s agreement, which permits me to “reuse the work in whole or in part in your own professional activities and subsequent writings.”

Seizing inopportune moments

Or to look at it another way—we are little men, we don’t know the ins and outs of the matter, there are wheels within wheels, etcetera—it would be presumptuous of us to interfere with the designs of fate or even of kings.
—“Guildenstern,” Rosencrantz and Guildenstern Are Dead by Tom Stoppard. All epigraphs in this article quoted from the 1967 edition published by Grove Press.

Last weekend I went to Spring Green, Wisconsin, for a treat I’d been anticipating most of a year: a double bill of Shakespeare’s Hamlet and Stoppard’s Rosencrantz and Guildenstern Are Dead at American Players Theatre. I’ve loved the latter play since high school, but until then I had never seen it performed live. Ryan Imhoff as childlike Rosencrantz and John Pribyl as the scenery-chewing Player especially delighted me among a uniformly strong, quick-witted cast—but I cannot repurpose this column for a theatre review, tempting though that is. I drove home from the theatre with lines and themes from the play pulling together disparate threads in my mind: opportune moments and their opposites, MIT’s report on its behavior during Aaron Swartz’s prosecution, the Biss bill as the latest twist in the movement toward open access to the scholarly literature, and sundry other past and present information-related struggles in academe. I want to share some of my musings.

My question about the MIT report is simple: where were MIT librarians? Where were the rest of us, for that matter? The repeated mass downloads were handled precisely as an academic librarian would expect them to be, but once campus access to JSTOR was restored, the MIT Libraries exited the drama, cooperating with subpoenas as needed and otherwise claiming an inability to speak except to campus legal counsel (III.A.4).

Several issues raised by Swartz’s prosecution—the impact of our licensing decisions on our patrons, information access (including for unaffiliated walk-in library users), the consequences to information users of computer-trespass law and zealous copyright enforcement—fall squarely within our professional boundaries. Yet we were silent—just about all of us, not only MIT’s librarians—until Swartz’s suicide lent us an opportune moment. We were so silent that the MIT report does not even bother to list librarians among MIT’s several silent constituencies (p. 14, list item 4). Did it not occur to the report authors that we’d have something to say? If it didn’t, I find that a terrifying omen for the influence of academic librarianship on the academy and its information practices.

Was it an inopportune moment to speak? Certainly it was for MIT librarians, so much so that even I (scenery-chewing Player that I often am) can’t fault them. The rest of us have no such excuse, and it’s our turf—and our credibility and mindshare around our turf—at stake. I regret personally that I did not speak more loudly. I hope I am not the only one.

The Biss bill

We have not been… picked out… simply to be abandoned… set loose to find our own way… We are entitled to some direction… I would have thought.
—“Guildenstern”

Illinois is facing a scholarly-communication novelty it would likely rather have avoided: strong pressure on public institutions from the state legislature to institute an open-access policy along the general lines of Harvard’s. Unlike grant funders such as the National Institutes of Health, the legislature does not legitimize its demand by waving money directly at faculty research; and unlike faculty under Harvard-style policies, Illinois faculty are not deciding entirely on their own initiative to support open access.

For academic librarians caught in the middle, this is a positively paradigmatic inopportune moment to promote open access. Faculty at public institutions all over the U.S. tend to distrust state legislatures, owing largely to ongoing defunding, and faculty distrust of Illinois’s legislature runs even deeper, owing to poorly handled state budget-management issues during the recession as well as benefits reductions for state employees. Biss-mandated debates about open access are therefore liable to be less about the merits and challenges of open access than about variations on the theme “how dare they? If they want this, we don’t!”

Living in neighboring Wisconsin, I have quite a few librarian friends at public Illinois institutions, several of whom work directly on scholarly-communication issues. I’ve even taught for Illinois’s Graduate School of Library and Information Studies a few times. I hope, and I believe, that my friends there have the courage to continue to support open access despite the inchoate faculty anger that could so easily shift from its current targets to them. I know they have the communicative skill to explain and defend their stance. I am well aware it won’t be easy for them; despite the inopportuneness of the moment, I believe it best that we support them in making their stance and the reasons for it clear.

The alternative—not just in Illinois, but for all of us—is to abdicate academic-library leadership on academe’s information issues, instead passively waiting for someone to tell us what to do, as Guildenstern does. Faculty status or no, tenure or no, why should anyone respect or heed us then?

Big Deals past and present

There must have been a moment, at the beginning, where we could have said—no. But somehow we missed it.
—“Guildenstern”

Did we have to arrive at Biss, at Swartz’s suicide, at the confusion surrounding the OSTP Memo? When could we have said no to the serials Big Deal, reasserted our privilege of journal choice? We can’t say we weren’t warned about the Big Deal’s eventual consequences. That’s past, though, and past remedy. Can we say no right now? Can we say no to the ridiculous inflation, the budget distortions by discipline, the erasure of monographs, the destruction of small independent scholarly publishers?

Some of us can. Some of us have, and lived to tell the tale. As best I can tell, what distinguishes those of us who can and have from those of us who feel they can’t is, once again, resolutely explaining the problem to our local constituencies and championing necessary change despite its unpopularity. I believe it’s better to do this work, unpalatable though it is, well before flat budgets and still-inflating costs force us to. Though such moments feel inopportune, and are, they’re still an improvement on reading the letter from some Hamlet or other on the faculty that seals our doom because we chose, like Stoppard’s Guildenstern, not to warn Hamlet of Claudius’s treachery.

As longtime readers of this column know, many universities are staring at another Big Deal in e-textbooks that boasts no structural reason to play out any better than serials did. Some academic libraries are strategically throwing their weight behind open educational resources, and good for them. What are the rest of us waiting for? As inopportune moments go, this one is as opportune as it gets.

Lights out?

We can move, of course, change direction, rattle about, but our movement is contained within a larger one that carries us along as inexorably as the wind and current…
—“Guildenstern”

I’m certainly not prepared to assert that academic librarians have become Stoppard’s hapless Danish courtiers, bereft of all sense of direction and self-propelled, purposeful action. That would be pointless panicmongering, and worse, altogether false.

I worry, though. I worry about information seekers and information sharers in heavily-surveilled digital environments, if academic librarians cannot work out how to defend them. I worry about my friends in Illinois and elsewhere. Will they have the defenders and support they need to stand firm? I worry a lot about my more idealistic and purpose-driven library-school students, who are all too likely to find themselves firmly slapped down in academic libraries even when explicitly hired as innovators and change agents. Will their library careers survive the disillusionment and frustration when mine didn’t? If not—and even in my short teaching career I’ve seen a few of my best march away from libraries—how much do we all lose? Lastly I worry about academic librarianship, that it will dwindle to darkness and death like a pair of bit players, because not enough of us could nerve ourselves to speak and act boldly.

I love Guildenstern, but I don’t want to be him.

Note: This post is copyright 2013 by Library Journal. Reposted under the terms of my author’s agreement, which permits me to “reuse the work in whole or in part in your own professional activities and subsequent writings.”

Going where the jobs are: For the future of library education, watch today’s “topics” courses

I’m celebrating this week: after three years of teaching it, my Digital Curation course has at last graduated to the dignity of its very own course number! Welcome to the world, LIS 668!

When I first suggested the course to the formidable Louise Robbins, then director of SLIS, she immediately shot back “Where are the jobs, Dorothea?” Louise always has had a pragmatic sense of mission! It wasn’t nearly as easy to find apropos job listings then as it is now, but I dug up a few, so Louise agreed to let me pilot the course in spring 2011 under one of SLIS’s generic “topics” numbers. Two more years of course repetitions, a SLIS Curriculum Committee meeting, and a great deal of non-SLIS red tape later, the course is now an accepted, expected part of our curriculum.

This is not an unusual story, not at all. Scratch a library school—any library school—find a curriculum committee and a whole lot of “topics” numbers under which new courses get their start in life. Most schools will tell you, correctly, that this system is imposed upon them from above; the red tape involved in changing so much as a course name in the larger institution’s course catalogue is intimidating indeed. That’s not the whole story, though. There is method in this madness.

Think back to the year 2007, when Second Life hype was at its height. Hey, what a great time to put a course about virtual-reality librarianship on the books, right?

Right?

Well, perhaps not. How silly would a library school with a VR-librarianship course on the permanent books look now, with Second Life a moribund husk of its former self? To some extent, the red tape around curriculum change reins in faddishness, as well it should, selecting for courses with staying power. Topics courses change all the time—that’s what they’re for—so library schools that might have taken a flyer on a topics course in VR librarianship around 2008 merely stop teaching it a year or two later, probably in response to lack of student demand, and life goes on.

Louise’s question to me was also a useful corrective, of course. With the ongoing diversification of the information professions, packing enough knowledge into two short years to graduate students with realistic job-market hopes is already a tricky balancing act. We haven’t the luxury of adding permanent courses “just for fun,” much less on the off-chance jobs will materialize, or a given skill will turn up in job ads. Louise forced me to practice evidence-based curriculum change, and I’m grateful for it as I continue to think up and build new courses. This does mean that SLIS’s curriculum can’t lead the job market, but on the whole, I prefer that compromise to wasting student time and money on crystal-ball job-market predictions that don’t pan out.

Pragmatic logistics issues play into curriculum decisions, as they must. Digital Curation wouldn’t have gone up for its own number if I hadn’t joined SLIS permanently in 2011; it’s risky and difficult to have a permanent course on the books that’s mostly or always taught by adjuncts. (Not that it doesn’t happen, especially in schools with small faculties! It’s not even a bad thing—practitioners often make wonderful instructors, as their real-world experience informs their teaching—but it’s still logistically taxing.) Enrollment and budget constraints also govern what courses are on offer. The best course in the world can’t sustainably be offered if only three students ever sign up for it, though cooperative efforts like the WISE Consortium do help aggregate scattered demand. Core courses aside, curricula also bend to take advantage of the existing interests, often research interests, of permanent faculty and staff. Insofar as different LIS schools specialize, this is how and why: playing to local strengths.

The biggest problem with the topics-course system from my perspective as an LIS instructor is poor public relations: it makes curricular innovation (and the processes underlying it) invisible to most practitioners and LIS researchers. Looking at a library school’s website, or the formal course catalogue, provides only a partial, highly conservative sense of the courses actually offered at that moment. (Until this very week, UW-Madison’s course catalogue gave the false impression that SLIS doesn’t offer any coursework in research-data management or digital preservation!) This, in turn, fuels omnipresent complaint about the obsolescence of LIS curricula.

I don’t have a simple solution to this. I do wish that LIS journals and their peer reviewers would refuse to publish coursework surveys whose authors limit themselves to passively reading websites and course catalogues, as this method recalls the old joke about looking for a lost watch under a streetlight because it’s bright there, even though the watch was lost half a block away. (One such survey using this method missed what is now LIS 668 altogether.) Not only does it overlook topics courses and WISE Consortium courses, it assumes that the three-credit course is the only unit of instruction, such that an LIS school without a dedicated course in something does not teach it at all. This also is a long way from the truth. SLIS didn’t have a dedicated project-management course until this year, but I learned its basics back in 2005 from Ed Cortez’s systems-analysis course, and I’ve been teaching it and making students apply its techniques in two of my courses for some time. SLIS still doesn’t have a course dedicated to linked data, but I teach about it in the core organization-of-information course, my introductory technology course, and the database-design course (where I introduce the SPARQL query language by demonstrating its similarity to SQL). I will inherit SLIS’s hands-on XML course next spring, and I anticipate adding extensive linked-data work to that as well.
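For readers curious what that SQL-to-SPARQL analogy looks like in practice, here is a minimal sketch. The table, column, and vocabulary names are all invented for illustration; only the general shape of the two query languages is the point.

```
-- SQL: titles of books published after 2010, from a
-- hypothetical "books" table
SELECT title, pub_year
FROM books
WHERE pub_year > 2010;

# SPARQL: the same question asked of a triple store, using an
# invented ex: vocabulary. SELECT names the output variables;
# the WHERE block matches graph patterns; FILTER constrains values.
PREFIX ex: <http://example.org/vocab#>
SELECT ?title ?year
WHERE {
  ?book ex:title   ?title ;
        ex:pubYear ?year .
  FILTER (?year > 2010)
}
```

The familiar SELECT/WHERE skeleton carries over almost verbatim; the chief conceptual shift is that SQL’s table joins become SPARQL’s shared variables across graph patterns.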

We in library schools could certainly do a better job of drawing attention to our kaleidoscope of topics courses. Keeping and updating lists is assuredly work, but I think our public-relations problem around apparent curricular obsolescence is severe enough to justify that work. Professional bodies like ALISE or ALA’s Committee on Accreditation, survey-based research projects like the multi-year WILIS, or publications like Library Journal could goose us into action by asking questions about recent topics courses. Indeed, I would call topics-course churn a vitally important indicator of the health and currency of a library school’s curriculum. Accreditors and other LIS-education watchdogs should pay attention to it!

Until then, I ask practitioners and researchers not to jump to conclusions about LIS curricula based on hopelessly incomplete information. Want to know if we’re teaching something? Whether we’re going where the jobs are? Please ask us!

Note: This post is copyright 2013 by Library Journal. Reposted under the terms of my author’s agreement, which permits me to “reuse the work in whole or in part in your own professional activities and subsequent writings.”

Data curation’s dirty little secret

When the Research Data Services group I helped inaugurate worked out a response process for data-management-plan assistance requests, we were careful to respect the disciplinary expertise among our members. After all, even in late 2010 it was a truism that the barrier skill for helping researchers manage data was disciplinary expertise. “In practice,” wrote Alma Swan and Sheridan Brown in 2008, “data scientists need a wide range of skills: domain expertise and computing skills are prerequisites…”

Data curation’s dirty little secret is that this isn’t always true. It isn’t even often true.

Swan and Brown’s own evidence directly contradicted their words. They wrote, quite truthfully, that aside from domain experts who teach themselves digital data management and analysis techniques, another typical data scientist or data manager “originat[ed] as a computer scientist who has acquired domain knowledge over time.” Domain knowledge, then, is not a prerequisite exactly; it can be learned on the job. This being true for computer scientists, why wouldn’t it be true for information professionals as well?

Researchers themselves are the authority for the claim that disciplinary knowledge is required for proper data management, in Swan and Brown as in many successor reports and articles. I must say, I don’t find researchers a reliable source on this point. If researchers knew what skills and techniques are necessary to manage and work with digital data, wouldn’t they be doing it better than they are? Would they even need help with data-management planning? Would they be leaving data management to wet-behind-the-ears graduate students at the very bottom of the lab hierarchy, as I have so often witnessed them doing? Would they be dumping the digital equivalent of moldy boxes from spiderwebby garages on librarians’ desks to the extent they are?

That said, some researchers do believe fiercely in the indispensability of disciplinary knowledge. The last-but-one data-management brownbag that Research Data Services sponsored prominently featured work that two groups of my digital-curation students did to help the Living Environments Laboratory (LEL) store, describe, track, and search/browse the individual images and other digital materials from which virtual-reality scenarios are built. I had to bite my tongue hard when a researcher in attendance incredulously questioned the speaker about my students’ lack of disciplinary expertise. Surely they couldn’t have done that work? The work they had demonstrably done? Myths die hard… and while they live, they cause librarians needless headaches.

I have sent a round dozen groups of students out to solve digital-data problems in the three years I’ve been teaching digital curation. In addition to the LEL researchers, my students have helped a linguist, an art historian, student artists, a demographer, a radio station with media-archiving issues, and more. I’ve also sent interns and practicum students into a campus microscopy lab, our local Forest Service research outpost, and our local Geological Survey office. I match disciplinary expertise when I can, but I usually can’t. It’s never mattered. They do fine. They’ve all done fine.

For my own part, I’ve taught basic data management to engineers, physicists, biologists, historians, clinicians, and computer scientists, and I’ve critiqued data-management plans from even more disciplines than that. My own disciplinary background is in literary analysis and historical linguistics. I can count the questions and situations I haven’t been able to resolve singlehandedly without moving from my left hand to my right. The number I failed to resolve at all? One, that I remember—a confusing workflow in instrument biology, and it was my own fault for not calling in someone else to resolve my confusion before responding.

Are disciplinary differences irrelevant to research-data management? Well, no, but the salient disciplinary differences I’ve seen come in around idiosyncratic research processes and tools. I confess to considerable skepticism, for example, about the possibility of an electronic laboratory notebook software package that will work across the entire breadth of a campus’s research initiatives. Lab notebooks are tightly tied to idiosyncratic, ungeneralizable, often project-specific processes, and my experience with researchers suggests that they expect digital notebooks to conform to their processes equally tightly, and will brook no interference. I hope I’m wrong—an 80/20 solution seems vaguely within the realm of possibility, perhaps—but we’ll just have to see.

For the advising and consulting around data management that libraries would like to do, of course disciplinary knowledge is useful! No question about it. If nothing else, a little disciplinary knowledge helps convince researchers that librarians are useful people to talk to. (I find that a tiny bit of research before a scheduled meeting allows me to fake it convincingly.) No matter how often researchers claim it is, however, “useful” is not the same thing as “needful.” As libraries work through how we will help researchers with data management, we can take comfort, I hope, in the mythbusting I’ve just done. We don’t have to have all the disciplinary knowledge scattered across campus within our library walls before we start to help.

I once chatted with the inimitable Diane Hillmann at ALA about scholarly communication and data curation. When the disciplinary-expertise canard came up, she said judiciously, “They all think they’re special snowflakes. They’re not.” I’ve never forgotten that. I believe my students and I have abundantly proven it, and I believe academic libraries can—and should—go right on proving it.

Note: This post is copyright 2013 by Library Journal. Reposted under the terms of my author’s agreement, which permits me to “reuse the work in whole or in part in your own professional activities and subsequent writings.”

Can information professionals afford apprenticeships? A thought experiment

I have a gift for picking despised professional niches. I used to run institutional repositories, and if there’s a niche in academic librarianship more despised than that, I’m honestly not sure what it might be. From the frying pan into the fire—now I teach library school. If nothing else, I’ve greatly expanded the universe of librarians and archivists who despise my work!

Critiques of library school, even fairly savage ones, are nothing new; they’re an ordinary occupational hazard. (I used to write them myself, back in the day.) What’s more, critiques often come from the best, the brightest, and the most employable, those whose skillsets pre-MLS overlap most with what many libraries need: the Ph.D.s, the expert programmers and sysadmins, the experienced administrators. Given that, the worst response I could possibly muster would be empty Trithemius-like chest-thumping transparently aimed at protecting my own job: but tradition! but ethics! but the intellectual core of the profession! The best and the brightest would eviscerate me, and I’d deserve it. If what I do doesn’t serve the information professions, much less those who work in them, I have outlived my usefulness and should be put out by the curb for recycling. That’s only right.

So I’ve no intention of defending library-school curricula or library-school pedagogy in this column. That horse has been beaten to an indiscriminate zombie pulp already, many a time. Besides, I can’t reasonably defend everyone’s curriculum or everyone’s pedagogy; like any self-reflective teacher, I have days I wonder whether I even dare defend my own. I won’t assert that every library job currently requiring or desiring an MLS needs someone with MLS training, either; that’s an immediately indefensible argument, considering the strong presence of non-MLS workers from rural libraries to research libraries.

What I’d rather do is examine a named alternative, apprenticeship, a little more closely. The grass may indeed be greener, but if it’s anything like my yard now that spring has finally sprung in earnest, it’s also full of choking weeds.

Let’s first recognize that apprenticeships already exist, beyond the practicums that many library schools offer (and some, the one I teach in included, require). Quite a few academic libraries have two-year post-MLS internships, for example, and many archives offer internships as well. Should these be the new on-ramp into the profession? I see no show-stopping reason internships couldn’t be extended into public and special libraries as well. Some positions aren’t suitable—K-12 and public-library youth services, with their strict in loco parentis obligations, might well prefer some pre-weeding of their employment pools—but many are.

I’ve had former students do brilliantly in post-MLS internships in academic and government libraries. What I notice about the internship programs in which my students have excelled is that they’re carefully designed to fulfill interns’ need to make their mark quickly within the profession: earmarked professional-development funds, readymade mentoring, tough-love requirements for professional authorship and service. Absent the MLS, I believe the training and mentoring needs, and associated costs, inherent to these apprenticeships would only increase. (They wouldn’t increase for every conceivable apprentice, true. Across the entire pool of apprentices, I do believe employer costs would increase, if only because of the longer initial learning curve, not to mention the need to cross-train for career flexibility if we want professionals and not mere Taylorist worker bees. I’m open to challenge on this point!) This means that the best-designed apprenticeships will shift the costs of building professional onramps, from the future professional currently paying library-school tuition to the professions themselves. Can the professions afford that? I don’t know.

I also notice that good internships are not the only kind of internships out there. Some of my archives-track students lament that internships are all they can find once they graduate—not just all they can land, but all they can find in the job listings, suggesting that at least some archives are consciously and intentionally relying on contingent labor. If that weren’t bad enough, a worrisome number of those internships are Dickensianly abusive, not in the slightest aimed at making interns more competitive for permanent employment. (The same, my former students tell me, is sometimes true of other apprenticeship-like arrangements in archives, such as part-time and limited-term jobs.) In other words, apprenticeships offer considerable opportunity to exploit those who wish to work in libraries and archives. This is nothing new in business and industry, of course, in the chase to reduce labor costs as near zero as makes no odds; the Internal Revenue Service, ironically, is the exploited intern’s last line of defense these days. Academe knows the phenomenon as well: the adjunctification of undergraduate teaching. I’m not at all sure “as cheap as possible” is a labor model we want to see penetrate the information professions any more than it already has, however. It doesn’t just harm the interns themselves, unconscionable though that is; it drives down the price of all professionals, as their putative employers rely on intern-mills instead.

It’s no coincidence that the technologically-savvy often show the worst contempt for library schools, incidentally. They would likely gain most by the elimination of the MLS, since libraries and archives do not usually pay market price for technologists; the MLS serves as a clear signal that libraries and archives need not do so, in fact, because MLS-holding technologists wouldn’t bother with the MLS if all they wanted was market price for their existing skills! Even where libraries and archives do manage to squeeze out more dollars for technologists, even where they prefer but do not require the MLS, the results are not typically competitive in the larger labor market. Without the MLS, however, market logic would return in force. Technologists would rejoice! Everyone else in libraries and archives… well, the money pool isn’t expanding, so the math would seem obvious.

Perhaps appropriate formal and informal supervision by the profession could stop the abuses, existing and potential, of “labor-lite” models such as paid and unpaid internships. It hasn’t entirely halted them so far (am I the only one who remembers the howling over Library Corps?), but in an MLS-less world, the scrutiny currently lavished on keeping library schools honest, from job-placement surveys to top-ten lists to accreditation, could be trained on apprenticeships instead. But we have all this oversight now, one might object, and library schools are still awful! Indeed. Does that mean apprenticeships—including for those desperately-wanted technologists—will be awful as well? It seems a Faustian bargain, particularly if the information professions want the best and brightest, rather than merely the cheapest and most vulnerable to abuse.

Apprenticeships, particularly unpaid ones, raise another spectre as well: privilege and social justice within the information professions’ labor force. If we glance next door to computing and its supposed “meritocracy,” we quickly find that women, ethnic minorities, and other oppressed groups are shut out for various reasons, some of which the information professions have unfortunately inherited as they import non-MLS technologists. Library schools haven’t solved these issues as they relate to access to the information professions, but we do work to mitigate them: targeted recruiting, scholarships, prizes, and grants are important parts of the picture, of course, but so are time-shifted distance education and technology education designed for the non-programmer. Can apprenticeships escape the bonds of geography, as distance programs do? Can they promote appropriate representation within the professions without falling afoul of anti-affirmative-action movements in state legislatures? How free can apprenticeship’s selection processes be of unconscious bias, given that hiring committees meet their prospects before judging them? Can they improve on library-school admissions, where evaluators generally do not meet candidates until well after admitting them? (Granted, I wish the applications I review had names redacted, since I’m as vulnerable to unconscious bias as any; even so, I do appreciate what I don’t know about those applicants, and I also work hard to acknowledge and minimize my own unconscious bias.) Whatever qualification system the information professions choose, I think we can agree that we don’t want it to reinscribe the same old patterns of oppression that libraries and archives exist in part to help redress.

My last words to the many, many professionals who despise my work, then: I’m sorry, but you are not my most pressing clientele, you who would do just fine in the information professions without me. Make no mistake, I enjoy teaching you, I learn a lot from you that I pass on to my other students, and I do earnestly try not to waste your time and money. Substantially, though, my mission—and, I think, that of the MLS, however poorly we manage it in practice—aligns with that of a community college: to open doors to as many as possible, not just those already suited to the profession because of prior privilege or even prior experience, not just those who can buy their way in through donating unpaid labor. Groups that would be disadvantaged by an apprenticeship system, or any system that lets some but not others bypass the MLS, may not include you, but I will make bold to assert that they do include some of your colleagues, and you are not necessarily more important to the profession than they are.

Apprenticeships might be able to serve such folk as well; paid apprenticeships, if the profession can afford them, might even hold an inherent advantage over library schools. But it will take a great deal of conscious thought and design first.

Note: This post is copyright 2013 by Library Journal. Reposted under the terms of my author’s agreement, which permits me to “reuse the work in whole or in part in your own professional activities and subsequent writings.”

E-textbooks redux: what does Kirtsaeng mean to the market?

Librarians rejoice! The Supreme Court of the United States insisted in its Kirtsaeng v. Wiley decision that we can legally lend foreign-manufactured materials! The media noticed, too, at least the education media: both the Chronicle of Higher Education and Inside Higher Ed mentioned library lending prominently in their Kirtsaeng v. Wiley headlines and ledes.

The case was about textbooks and textbook-market arbitrage, though. That’s worth keeping sight of, as Andrew Albanese’s Publishers Weekly coverage does. Extrapolating from reactions on all sides, what does the Kirtsaeng v. Wiley decision likely mean for the textbook-publishing business, and what can textbook publishers and libraries do if they don’t like that?

What’s incontrovertibly clear now is that the importation of physical textbooks into the United States from countries where they are cheaper to buy is legal. My impression is that textbook importation has until now been a semi-underground industry, mostly leveraging online auction services rather than hanging out online shingles of its own. That seems likely to change, as would-be Supap Kirtsaengs build legal businesses openly. In macro-economic terms, this could mean that textbook prices in the US will be knocked down to the lowest-available worldwide price plus shipping costs and a markup for the importer—which sounds expensive, but as Kirtsaeng vividly demonstrated, can be considerably cheaper than current US prices for the same books.

What can textbook decision-makers do to keep their income high? Possibilities include:

  • Raising print-textbook prices to US levels worldwide. US prices are not tenable in many markets, so textbooks would not sell and publishers would make less money. This doesn’t seem a likely tactic.
  • Refusing to make print textbooks available anywhere but the US, as suggested in the Association of American Publishers’ reaction to the decision. This might well produce a short-term income gain, especially in a post-Kirtsaeng world. Education markets are growing so much faster overseas than in the US, however, that this strategy bids fair to lose publishers their most promising markets permanently.
  • Changing print textbooks sold abroad just enough to be poor substitutes for domestic books. (Hat tip to Andy Woodworth.) This is feasible, but far from costless, and it risks both those potentially-lucrative foreign markets and a public-relations backlash.
  • Restricting print-textbook supply in foreign countries, perhaps insisting upon demonstration of student status or enrollment in a specific class. This would have stopped Kirtsaeng’s relatives from purchasing the textbooks he resold. It’s leaky, though; what is to stop students from sharing a copy while buying one apiece for profitable resale in the US?
  • Legislative redress. Given existing agitation from students and parents over textbook prices, this seems unlikely to work, but if Maria Pallante genuinely does spur legislative activity around copyright rewriting, textbook publishers are likely to find help from her.
  • Copyright-treaty redress. The international copyright treaty space actually offers textbook decision-makers significant hope, since it’s where copyright maximalists and infringement-enforcement hawks are focusing their effort. I would not be at all surprised to see restriction of textbook arbitrage attempted in a future ACTA.
  • Moving away from print (and the ownership of print that allows first sale to come into play) toward electronic textbooks, where lucrative information-leasing is vastly more common, and DRM limits (though cannot entirely prevent) leakage.

I have already expressed significant concern in these pixels about that final possibility, which the Kirtsaeng decision motivates textbook publishers to pursue even more strongly than they already are. I don’t care to repeat myself, not least to avoid another dunking in the hot water I got into then! Instead, I’d like to argue that open-textbook programs offer a feasible, student-friendlier alternative to (or augmentation of) Big E-Textbook Deals, even for universities pursuing those deals.

At the recent Library Technology Conference 2013, reference librarian Kristina De Voe described Temple University Libraries’ pragmatic pilot program introducing open textbooks to faculty. While some pilot-program participants used the same textbook-avoidance bricolage techniques I do—cobbling together open-access journal articles, gray literature, news articles, top-drawer blog posts, digital collections, and suchlike to round out a nicely up-to-date syllabus—others dipped their toes into actual open-textbook waters and found them inviting. Crucially, Temple faculty found that their students not only saved money, but engaged more with the materials. Temple hasn’t demonstrated clear learning gains from open textbooks yet, but faculty haven’t seen learning losses either, putting paid to the oft-heard concern that electronic textbooks automatically lead to decreased learning.

Even before state legislatures force some of us to, even before most of us decide to help fund open-textbook creation, helping faculty work with open textbooks and other open readings only makes sense. At minimum, open readings mean that no student will suffer academically from the decision not to purchase a (print or electronic) textbook, as is sometimes happening now. For institutions considering (or already part of) Big E-Textbook Deals, programmatic campus use of open textbooks increases negotiating power with publishers and platforms: prices had better stay reasonable, and allied services must be usable and worthwhile, or the institution can and will switch to open alternatives.

As Temple’s example demonstrates, academic libraries can lead open-textbook programs, even though we have historically avoided involvement with textbooks and their issues. Materially helping many students should be incentive enough, but if more is needed, working directly with faculty lets librarians inject more library-purchased and library-digitized materials (including primary sources) into classrooms. Temple found the rewards well worth the effort; I strongly believe other libraries will as well.

Note: This post is copyright 2013 by Library Journal. Reposted under the terms of my author’s agreement, which permits me to “reuse the work in whole or in part in your own professional activities and subsequent writings.”

How I teach technology

Roy Tennant’s recent series on assimilating new technology (start here to read it) spurs me to talk about helping library-school students do that. My workhorse course, the one I first developed in 2007 and have taught ever since, is an introduction to computer-based technologies in libraries called “Digital Tools, Trends, and Debates.” You are all welcome to browse its most recent syllabus.

Most students who take this course are at the office-suite-jockey level of computer savvy, though they range from rather less than that (brave souls!) all the way through computer-science majors and professional network administrators. Building a course that’s useful to that entire gamut taxes my ingenuity every time I sit down to revise the syllabus.

My fifteen-week course can’t turn office-suite jockeys into programmers and sysadmins; frankly, I’m not convinced an entire LIS program can. Real programmers and sysadmins have entire CS and MIS training programs, after all, never mind all the alphabet-soup certifications! Nor can I focus solely on honing the skills of the few programmers and sysadmins who take the course, enjoyable though that would be; they’re the tiny cherry atop a much larger confection.

What the course must do, I decided early on, is turn all my students, at every level of existing knowledge, into confident, self-directed applied learners of technology-related skills. They must know they’ll have to assimilate and use new technology and work through its societal implications throughout their careers, and crucially, they must know they can do that. If they don’t leave with that scaffolding, the course fails, no matter what else they learn from it.

Learned helplessness, worsened by the well-known technology gender gap, is the chief barrier to self-directed learning I’ve seen among my students. Oddly enough, my best weapon against it is recoverable failure. I tell them openly (since many of them don’t know) that tech experts are made, not born: made by falling down, making messes, and seeking help. Classroom tech snafus become teachable moments, object lessons in troubleshooting and graceful recovery. Next year, I’m planning to add more hands-on metatechnology exercises: writing a useful bug report (and navigating online bug trackers), troubleshooting opaque error messages, hunting for API specifications, looking for and at source code, diagnosing spear-phishing attempts, and so on.

I have learned that emphasizing the “library” in “library technology” keeps the course from being too discouragingly geekery-intense for technology novices, while serving the already-expert very well indeed. I refuse to let students out of my class, for example, without a basic non-lawyerly sense of US information law, how the technology world bends it and is bent by it, and how it impacts library programs and services from digitization to e-reserves to ebook procurement to social-media use. Similarly, they all need to know the basic parts of an ILS and how add-ins and “discovery layers” are morphing those basic parts out of all recognition, whether they’ll be running ILSes, choosing and paying for them, or simply using them.

I’m still learning how to walk this high-stakes tightrope; sometimes my experiments fail. I taught the basics of regular expressions in fall 2011, but without an advance organizer explaining clearly why, I caused considerably more frustration and less enlightenment than I meant to. I took regexes back out for the latest iteration, but I want to find a way to re-include them, since even in its failed-experiment state, the exercise helped several project groups convert finding aids to EAD and Project Gutenberg plaintext to .epub ebooks. Teaching is like technology in at least one crucial way: errors happen, and the only thing to do is sort out what went wrong and patch them up.
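As a purely illustrative sketch of the kind of regex work those project groups did (the START/END marker strings here are my assumption, based on common Project Gutenberg conventions, not anything taken from the course), a few lines of Python can strip the license boilerplate from a plaintext ebook before conversion:

```python
import re

def strip_gutenberg_boilerplate(raw: str) -> str:
    """Return only the body text between the conventional Project Gutenberg
    START/END markers; fall back to the whole text if markers are absent."""
    match = re.search(
        r"\*\*\* START OF (?:THE|THIS) PROJECT GUTENBERG EBOOK .*?\*\*\*"
        r"(.*?)"
        r"\*\*\* END OF (?:THE|THIS) PROJECT GUTENBERG EBOOK",
        raw,
        flags=re.DOTALL,  # let .*? cross line breaks
    )
    return match.group(1).strip() if match else raw.strip()

sample = (
    "*** START OF THE PROJECT GUTENBERG EBOOK EXAMPLE ***\n"
    "Call me Ishmael.\n"
    "*** END OF THE PROJECT GUTENBERG EBOOK EXAMPLE ***\n"
)
print(strip_gutenberg_boilerplate(sample))  # prints: Call me Ishmael.
```

The same pattern-matching idea generalizes to the other cleanup chores that precede .epub packaging, such as normalizing whitespace or isolating chapter headings.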

The gradual evolution of the final group project bears witness to my try-then-patch approach; I owe Jason Griffey in particular much gratitude for showing me how to improve it. It is split into two parts: the “project plan” student groups produce introduces them to budget, staffing, training, software and hardware selection, scheduling, project management, and other “soft” issues surrounding technology implementation in libraries and archives, while their “technology implementation” makes them come to grips with unfamiliar and non-trivial technology.

I always warn them that they’ll find themselves frustrated, and they nearly always do. Mid-semester check-ins occasionally contain politely-worded howls of anguish. Now and then I have to hint at workarounds, or help a group think through a roadblock. Most of the time, though, they power around or through whatever the problem is without my help. They learn that frustration isn’t the end, that research leads to recovery from errors and other failures—sometimes, that “breaking things” is fun! (One group intentionally broke built-in hotkeys on a keyboard—not, thankfully, one belonging to the school—so that the hotkeys couldn’t be used to circumvent certain security measures on the patron-destined Linux box they’d built. That’s dedication!)

What I’ve taken to heart in the five-year life of this course is that students excel when I turn them loose and express confidence in them. I’ve almost never had a student group turn in a final project that I thought was lazy or poorly-done. More often, they blow my expectations entirely out of the water. I ask them to make a patron-ready Linux installation; they come back with well-thought-through security measures, carefully-chosen open-source software for well-defined patron needs, and tailored recovery CDs for instant reinstallation. I ask for an Omeka collection, and receive works of impressive web-design artistry and complaints that they’ve pushed the limits of the software! I’ve gotten mind-bendingly complex EADs, beautiful picture-book .epubs, well-edited digital videos, and fully-functional Drupal websites, every last project with meaningful participation from students who started out as office-suite jockeys or less.

Best of all is their obvious pride as they demo their work for me, and the oft-repeated refrain, “I never thought I could do anything like this!” When they learn they’re capable of much more than they thought, they’ve learned the most important lesson I can teach them.

Note: This post is copyright 2013 by Library Journal. Reposted under the terms of my author’s agreement, which permits me to “reuse the work in whole or in part in your own professional activities and subsequent writings.”