Hello, I’m Dorothea Salo, and I teach lots of things for the library school and its continuing-education program in Madison. I’ve been asked here to talk about how the relatively new emphasis on research data is changing how researchers and scholars communicate. Since we’re nearly to Valentine’s Day, I have to tell you, I love this topic! So please excuse my overenthusiasm, and let’s go on a date-a together!
When you think scholarly communication, chances are you think about books and journals, the classic, measurable, bought-and-sold artifacts of the research process. Or if you’re hip to the digital jive, you think about JSTOR and other massive databases and archives of the journal literature, or HathiTrust, a massive archive of library books, government documents, and other such materials. Or if open access is your thing, you might even think about open-access journal publishers like Public Library of Science.
And when you think about data, chances are you think about bar graphs and other sorts of graphs and charts, such as you might find on the pages or pixels of a published journal article, or funky pictures that capture phenomena as moments in time, once again part of the published record. Or if you’re a hipster data consumer, maybe you’re into pretty infographics.
What all these common notions of data share is that they’re not actually data! The publication process takes pieces of the data—not all the data by any means—and reduces them to a graph or a table or a chart that tells a story. Part of the reason this is done is that until quite recently, the scholarly publishing process was print-based, and printing most data makes absolutely zero sense, economically or in terms of scholarly reuse. Another part of the reason is that we human beings read articles to understand the stories they’re telling, and graphs and charts and tables tell stories much better than the actual data usually do. So a graph or a table or a chart or an infographic is data trapped in amber. It’s very beautiful, and human beings appreciate that beauty, but you can’t get those little particles of data back out, much less do anything useful with them if you did.
So if we want the data without all that amber in the way, can we go back in the research process and capture it? Well, here’s where we run into another paper-based problem: the lab notebook. The data’s there, but the work to reuse it is just inconceivable—and that’s if we have the space to store all those notebooks to begin with! For anyone interested in data, paper is a problem.
But that’s changing! You know, just like everything else, right?
As data are collected and stored digitally—and that’s already happening, in practically any discipline you name—suddenly research can take advantage of the different affordances of digital materials. So the way all of us treat data is changing, and changing such that data are becoming first-class citizens of scholarly communication, every bit as important as books and journals!
In some ways this dance is a little bit like those Reese’s peanut-butter-and-chocolate commercials people who are as old as I am remember from childhood. Hey! You got your data in my scholarly communication! Hey! You got your scholarly communication in my data! Let’s see both of those in action, and talk about where we fit in.
As we all know, the literature’s pretty much gone digital. That gives us the option of treating it like a great big digital dataset. The computer-science and information-retrieval folks have been finding interesting ways to throw computers at text for decades; they often call it “text mining” or “data mining.” And it goes without saying that the research literature is a pretty rich and fascinating dataset! What’s more, it’s got pretty good metadata, as these things go, which makes it an especially juicy target.
This raises the question of how anybody gets their hands on enough of the scholarly literature or its metadata to run analyses on, given that it lives in lots of different places and is owned by lots of different people and organizations. Guess what, this is one place that libraries are stepping in to help! At Madison, for example, we had a group of digital humanists who wanted to text-mine the Early English Books Online database, which is licensed. So our librarians called EEBO, worked something out, made security arrangements for the data, and got the researchers what they needed. This was great, but it was also hideously laborious. There’s no way we can pull that off for everybody we license content from, not least because a lot of them will say no! So this is another reason open access is important to the academy: it makes it so much easier to treat the scholarly literature as data and extract more knowledge from it.
So once you get beyond the licensing issues (if you do), fascinating things can happen. Just by way of example, one thing folks are looking at doing is network analysis of literature authorship—who publishes with whom, who publishes where, who’s citing whom and why, when and how ideas jump disciplines or become truly interdisciplinary, that kind of thing.
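Just to make that concrete, here’s a minimal sketch of what a co-authorship network might look like in code, assuming you’ve already pulled author lists out of article metadata. The papers and names below are invented for illustration; a real analysis would feed in thousands of records.

    # Minimal sketch of a co-authorship network; papers and authors are invented.
    from itertools import combinations
    import networkx as nx

    papers = [
        {"title": "Paper A", "authors": ["Li", "Okafor", "Smith"]},
        {"title": "Paper B", "authors": ["Smith", "Garcia"]},
        {"title": "Paper C", "authors": ["Li", "Garcia"]},
    ]

    G = nx.Graph()
    for paper in papers:
        # Link every pair of co-authors; repeat collaborations add weight.
        for a, b in combinations(paper["authors"], 2):
            if G.has_edge(a, b):
                G[a][b]["weight"] += 1
            else:
                G.add_edge(a, b, weight=1)

    # Centrality measures hint at who sits between research communities.
    print(nx.degree_centrality(G))
    print(nx.betweenness_centrality(G, weight="weight"))

From a graph like this you can start asking who bridges disciplines and where ideas cross over, which is exactly the kind of question the network-analysis folks are after.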
And there’s lots more, including enriching the literature by having computers identify and mark up important classes of things in the literature, like gene names, chemical names, place names, whatever—so that it becomes easier to find and collocate all there is on certain kinds of topics.
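For the curious, here’s a minimal sketch of that kind of enrichment using an off-the-shelf named-entity recognizer. The sentence is invented, and real gene or chemical tagging would need domain-trained models rather than this general-purpose one.

    # Minimal sketch of literature enrichment with a general-purpose NER model.
    # Assumes the model has been installed: python -m spacy download en_core_web_sm
    import spacy

    nlp = spacy.load("en_core_web_sm")
    text = ("Field samples collected near Lake Mendota in Wisconsin were "
            "analyzed at the University of Chicago in 2011.")

    doc = nlp(text)
    for ent in doc.ents:
        # Each entity comes back as a text span plus a label such as GPE, ORG, or DATE.
        print(ent.text, ent.label_)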
I was caught by this tweet from the Academic Publishing in Europe conference, for example, because I think this kind of enrichment is exactly what the chemistry industry wants, and they might need open access to get it! All of this is an active area of research, far too active for me to give you a whole lot of detail on, but it does illustrate what Sarah [Shreeves] said about there being a lot of stakeholders and a lot of different drivers.
I could talk about lots of applications of data-mining the scholarly literature, but since we’re focusing on scholarly communication today, I want to talk about data-mining the use of the scholarly literature, because that’s bidding fair to change how our faculty face tenure and promotion committees. We know about the journal impact factor, and we also know (if we’re honest with ourselves) that the way it’s used to judge faculty publication records is a complete crock: statistically flawed, built on opaque data, and useless for measuring people rather than journals. It’s seriously embarrassing to the entire academy that this hideously flawed and incomplete measure is determining people’s careers!
Seriously. Please. Anywhere you see journal impact factor figuring in tenure and promotion decisions, make that stop. I’ll be more than happy to point you to support for that decision.
But if we don’t use and abuse journal impact factor, what should we be paying attention to? That’s the question that “alternative metrics” investigators are asking, and starting to offer potential answers to. I’m talking about ImpactStory today because they’re pulling together a lot of usage information from a lot of sources, and they’re showing their work, and I think that’s important, because part of the conversation about tenure and promotion is, what should we value, and how do we collect evidence about it?
ImpactStory separates use by scholars and students from use by the general public. For scholarly use, it looks at saves to bibliographic tools like Mendeley, citations, and recommendations from review services like Faculty of 1000. For public use, it looks for social-media mentions, bookmarks, and citations in public venues like Wikipedia. What I like about this is that it’s nuanced. Some disciplines couldn’t give a flying flip what the public thinks of them. Great! They get to pretend the entire column about public mentions isn’t even there. But consider translational medicine, whose whole reason for being is getting research out of the lab and into medical practice. Do they want their information to reach the public? You bet! And altmetrics promises to give them a way to measure how well they’re doing at that, and use it to assess career accomplishments.
Where are libraries in the altmetrics arena? Well, it turns out we have a dog in this hunt, some of us, as we’ve taken on the job of making sure our faculty show to their best advantage on the Web. One of the best-known projects in this area is VIVO, which started at Cornell and has found takers elsewhere; it lets libraries aggregate their faculty’s publication histories and derive data about what they write about, when, and with whom.
Here at Marquette you can see traces of this too, with the Most Popular Papers page in the library’s digital repository of papers. This kind of work, as you heard the provost say, makes Marquette more visible to folks outside the library, which is great—but there’s another step we can take, once the infrastructure is there. There’s data behind this page somewhere, usage data and usage logs! But it’s hidden, except in this very boiled-down story-telling-for-humans form. Altmetrics challenges us to go beyond telling stories to humans: to free our data, as Sarah said, so that machines can aggregate usage data from all over the web, and so that our usage data shows up in places like ImpactStory. And conversely, we can help our faculty show off their impact by including ImpactStory’s number crunching in our faculty showcases, as the ImpactStory API makes possible.
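Just as a sketch of the idea, not of ImpactStory’s actual interface: the URL and response fields below are hypothetical placeholders, so check ImpactStory’s current API documentation before building anything like this.

    # Hypothetical sketch of pulling altmetrics for a DOI into a faculty showcase page.
    # The endpoint and JSON structure are placeholders, NOT ImpactStory's real API.
    import requests

    doi = "10.1371/journal.pone.0000000"  # placeholder DOI
    url = f"https://impactstory.example.org/api/item/doi/{doi}"  # hypothetical URL

    response = requests.get(url, timeout=10)
    response.raise_for_status()
    metrics = response.json()

    # Display whatever metric categories come back (saves, citations, mentions, and so on).
    for name, value in metrics.items():
        print(name, value)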
That’s a little bit about how scholarly communication is becoming conscious of itself as a data-producing and data-using enterprise. The other side of the coin is that researchers are generating data, and sometimes sharing it, and they naturally want appropriate credit for the work they put in and the value those data have, and they want it to count when they go up for tenure and promotion. And researchers who want to reassess and reuse data don’t want to have to go through the runaround of emailing the author, emailing again when they don’t get a response, not understanding the response they do get, even when it’s a “yes” (which it often isn’t), and so on.
In other words, data sharing is becoming a form of scholarly communication. So let’s talk about that.
As a librarian, I’m intimately aware of the immense, cathedral-like infrastructure we’ve built to deal with scholarly communication in the form of books and journals. It’s not just library buildings and shelves! It’s policies on copyright ownership and tenure-and-promotion. It’s assessment and quality-control modalities like journal peer review and book reviews and, much though I hate it, journal impact factor. It’s citation norms and style guides. It’s library cataloging, and journal-article databases.
For data, our infrastructure looks a little like Gaudí’s Sagrada Família in Barcelona, immense and impressive—but you can see a lot of scaffolding because it’s definitely still under construction. Libraries, archives, institutions, funders, journals, everybody’s scrambling to build systems and processes that work. So a lot that I’m telling you today will look completely different—and much, much better—in five years! But that’s okay; there’s still plenty to talk about now.
So let’s go on a really quick tour of some of this emerging infrastructure for governing, managing, sharing, publishing, and crediting data. We’ll start with data policy, which is really the foundation of our cathedral here.
Most people in this room probably know already about the shot heard ’round the research world, namely the NSF requirement for data-management plans in all grant applications. And from where I’m sitting there’s still a lot of confusion about this with respect to how researchers are expected to share and communicate about their data, which is only to be expected, because different NSF directorates have different guidelines about that. What’s clear, though, is that the NSF requirement is consciously trying to create a bias toward data sharing and publication wherever possible. I do think that emphasis will continue and in fact intensify, and certainly we’ll see more policies from other funders as well.
This has sent institutions scrambling to catch up with the new funding environment by building policies of their own. This is hard! It’s fraught with tension over data ownership, for one thing, which is an issue much more complicated than I have time to discuss here. And because the technology and training infrastructure on most campuses around data is still so fragmentary, data-retention policies can feel a little like unfunded mandates. So institutional policy development is hard and to some extent politically perilous work, but it has to be done and we’re doing it.
Now, I ran into a “Journal Research Data Policy Bank” project from the UK about a week ago, and it blew my mind. Since when have we had enough journal data policies to need this? But we do need it now, because journals in a lot of disciplines are waking up to data publication, maybe as a fraud-prevention measure, maybe partnering with a data repository to make life easier for authors and data reusers, maybe making it easier to link from a published article to a dataset held elsewhere. So if we’re talking about the naturalization of data into regular scholarly-communication processes, this is definitely one place it’s happening. But it’s scattershot. I can’t even give you a rule of thumb about which journals do and don’t have data policies, because it’s really all over the map. That’s why projects like this exist. The best I can tell you is, find out well before you submit an article to a journal!
One of the things I teach my students about data management is that you can’t just tack it on at the end of a project—that’s a recipe for disaster. The better-planned and more consistent your data management is while you’re doing the research, the more the data actually communicate at the end. The obvious corollary to this is that if datasets are to be full citizens in the scholarly-communication-verse, we have to care about the technology researchers use to store, clean, and analyze data.
We know, for example, that a great many researchers keep data in Excel—it’s convenient, it’s flexible, it’s widely available—but we also know that there are common poor practices in Excel use among researchers that make Excel data harder to understand and reuse. DataUp is a combination Excel plugin and web service that looks for those poor practices and also helps researchers describe their Excel spreadsheets so that other people can open the file and know a bit better what they’re looking at. I love this project—if you’re an Excel user, check it out!
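Just to illustrate the flavor of checks involved, here’s a toy sketch of my own (not DataUp’s actual code) that flags a couple of common spreadsheet problems using the openpyxl library; the filename is a placeholder.

    # Toy sketch of spreadsheet sanity checks, in the spirit of DataUp but not its code.
    from openpyxl import load_workbook

    wb = load_workbook("field_data.xlsx")  # placeholder filename
    for ws in wb.worksheets:
        # Merged cells are a common obstacle to machine reuse of tabular data.
        if ws.merged_cells.ranges:
            print(f"{ws.title}: contains merged cells: {list(ws.merged_cells.ranges)}")

        # Blank column headers in row 1 make the data hard to interpret later.
        header = next(ws.iter_rows(min_row=1, max_row=1, values_only=True), ())
        for col_index, value in enumerate(header, start=1):
            if value is None:
                print(f"{ws.title}: column {col_index} has no header")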
Another example of change in data storage and communication processes is what’s called “open notebook science,” UsefulChem being a good example of that. This movement to some extent is about researchers being very impatient with the lack of tools and storage on their campuses, or the low quality or appropriateness of services provided, such that they cobble together tools on their own. And as a communications modality, I love this, I do—but as a preservationist, I hate and fear it, because one server bobble or one cloud service going under, and a lot of valuable data is just gone! This is a key missing piece of our cathedral: campus data-storage and data-processing services that work.
But we also have some responsibly-run answers to data-storage. Figshare, for example, is a data-management startup that just signed a big supplementary-data-storage agreement with the open-access journal publisher Public Library of Science. The difference between Figshare and UsefulChem is that Figshare has signed another agreement too, this one with a well-known digital archiving cooperative called CLOCKSS, which will take care of all the data in Figshare if Figshare goes out of business or has a major technical breakdown.
Some institutions are starting to build reliable data environments as well. I’m showing you Purdue’s PURR because I love its name, but there’s also Penn State’s ScholarSphere, the University of Prince Edward Island’s Virtual Research Environments, Stanford’s SULAIR, and so on. These are typically both data-management AND data-communication environments, though the back-end details differ. I know I’ve been saying this a lot, but I do expect to see more of these, especially as pioneers like Purdue show us how best to build them.
So if there are all these data repositories and datasets swimming around out there, how do we find them? We haven’t communicated data if researchers who could use data can’t actually find it! That’s what Databib is about (and here’s where I disclose that I’m on Databib’s advisory board). Anybody can add a data repository to Databib after a short signup. Advisory board members then check the entry and approve it to go live. We’ve been around for about half a year, we hit 500 repositories at the end of last year, and we’re hoping for more, so help us out! And when you’re not sure where data might be hiding, Databib is one useful place to look.
So we’ve managed, processed, stored, archived, and published data. Now we get to the good bit: getting credit for it. And this is the good bit because it’s both carrot and stick. A lot of researchers, when the idea of data sharing is first suggested to them, say “What’s in it for me?” because cleaning up data for reuse is a lot of hard work, and altruism generally isn’t a sufficient motivator. So the practice of citation feeds into the credit cycle, which offers incentive. There’s an outfit called DataCite that’s issued some recommendations on how to cite and link to data as simply as possible, while still making it possible to do all the neat credit-gathering tricks I discussed earlier when I talked about ImpactStory.
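To give you the flavor, a DataCite-style data citation looks roughly like this (the creator, dataset, repository, and DOI below are invented for illustration):

    Creator (PublicationYear). Title [Data set]. Publisher. Identifier

    Researcher, A. B. (2012). Lake winter sampling data [Data set]. Example Data Repository. https://doi.org/10.1234/example

The key pieces are a persistent identifier, usually a DOI, that resolves to the dataset itself, plus enough creator and repository information for the credit-gathering machinery to do its job.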
Frankly, to some extent this is a job for journals and style guides to take up. All the recommendations in the world won’t make any difference if nobody uses them. So those of you who edit journals, be thinking about this, because the sooner it becomes a normal part of publishing, the better.
I have to tell a story here. I ran Wisconsin’s analogue to Marquette’s e-repository for several years, and one winter I decided I was going to clean up the author listings, because they were a mess—people listed with just their initials, the same person under as many as eight different versions of their name, stuff like that. And it was awful. The worst problem was graduate student co-authors who didn’t go on to publish anywhere else; finding their full names was a horrendous chore, when I could manage to do it at all. It’s not enough to clearly identify datasets; we need to identify the people who create them!
Now, libraries already do this for people who write books; it’s called “name authority control.” This is fine, but a lot of researchers never write books, so they escape library author directories. That’s what ORCID is about. Each researcher gets a unique identifier, which can then be tied to all the publications, presentations, datasets, and other research products they produce. ORCID is the plumbing under the data cathedral. You don’t necessarily see it all the time, but you really want to have it there.
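To give you a feel for that plumbing: an ORCID iD is just a sixteen-digit identifier, such as ORCID’s own documentation example 0000-0002-1825-0097, and anyone can look up the public record behind it. Here’s a minimal sketch assuming ORCID’s public REST API; check their documentation for the authoritative details.

    # Minimal sketch of fetching a researcher's public ORCID record.
    # Assumes ORCID's public REST API; see their documentation for specifics.
    import requests

    orcid_id = "0000-0002-1825-0097"  # ORCID's published example iD
    url = f"https://pub.orcid.org/v3.0/{orcid_id}/record"

    response = requests.get(url, headers={"Accept": "application/json"}, timeout=10)
    response.raise_for_status()
    record = response.json()

    # Top-level sections tie the iD to the person's names, works, and other outputs.
    print(list(record.keys()))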
And that’s our whirlwind tour of data in scholarly communication. Because I’ve been motormouthing about technology and policy all this time, I want to close by talking about how we improve the human infrastructure involved with data, because nothing matters without that. So up here is MANTRA, one of several research-data-management curricula that are emerging from various grant projects. And these grant projects are fine and great and wonderful, but as somebody who does data-management training for a living, I have to tell you, curriculum is not the problem! I know what to teach. What I need are venues and learners. So circling all the way back around to policy, I’m telling you, please, help me find the people I need to be teaching, and build the policies that route them through me!
Education is a vital part of scholarly communication. If we don’t prepare our undergraduates and graduate students to thrive within an environment in which research data are first-class citizens, we’re really doing them a disservice! So help me figure out how to train more and train better.
Thanks for sticking with me, and I hope to chat with people more at lunch and during breaks today!