Good morning, and thank you for coming. My name is Dorothea Salo, and I work for the University of Wisconsin System as an odd sort of digital archivist. I do have strong interests in the area of cyberinfrastructure, as I hope to prove to you today, and so Melissa [Woo] asked me to come here and talk to you a little bit about my angle on the whole cyberinfrastructure thing.
And I promise you will understand the title by the time I’m done talking. Cross my heart.
So, when we say the word cyberinfrastructure, some of the first things that come to mind are grid computing, in which we throw a whole lot of little computers working together at huge, massive computational problems, and data mining, in which we throw those computing resources at huge amounts of data on a scale we could never have considered before.
Of course, these processes create new data. Terabytes and petabytes of it. And now all the librarians listening to me are wincing, because our shock-and-awe sensors tripped as soon as you could fit the Library of Alexandria on a USB thumb drive, you know what I’m saying? And then the grid computing people start tossing around exabytes, and my brain just shuts down.
In the UK, what we call cyberinfrastructure is often called “e-science.” That name, of course, betrays an assumption: that only scientists are involved. We don’t use “e-science” here, because it’s not just the physicists and the astronomers and the climatologists; we say “e-research” instead, because the social sciences, the arts, and the humanities are certainly joining the party too. And with that, we add concerns about collaboration, especially across institutions and across disciplines—and cross-disciplinary collaboration raises sticky issues around identity and authorization, and it all gets very evil and nasty and complicated very quickly.
And while we’re at it, let’s not forget the data I mentioned. An emerging professional specialty, though exactly where it’s emerging is a really good question, is that of data curation. This brings up questions of metadata, a thing dear to librarian hearts that just made the IT professionals here cringe, and data standards. We have a few of those, in a few disciplines, but not nearly enough—and unstandardized, non-uniform data, I think we can all agree, makes everyone cringe!
And then there’s the question of who’s going to do data curation. Is it an IT function? Are faculty responsible? After all, it’s their data! And what about those libraries? And by this time much screaming has ensued and much hair is being torn out.
It’s simpler than that. Thank goodness!
Scholars are using computers, in a number of different form factors, from tiny smartphones to big old server racks, in their research. This, I am sure, is not news to anyone!
All this computation produces data, sometimes as the point of the exercise, sometimes as a sort of side effect. Data takes all kinds of forms; it’s not just numbers. Word-clouds, scanned manuscripts, maps, images on wildly different scales—it’s all bits-and-bytes; it’s all reusable and recomputable—it’s all data!
This is in addition to the books and journals that librarians are familiar with and already care for. But interestingly, as these materials themselves go digital, they too can be treated as data, grist for the computational mill. This doesn’t happen as much as it should, honestly, and the reason for that is that even when these materials are digital, they’re locked up behind pay-access firewalls to protect the current scholarly-publishing business model, so the computers can’t get in to crunch on them. This is a major argument for open access to the literature—and for those of you who know me and what I do, I hereby reassure you that it’s the only open-access argument I’m going to make in this presentation.
So to recap a bit, we have our researchers, and they’re using computers, and they’re generating data. And support for that, librarians, has to happen throughout the entire data lifecycle. And that support, IT professionals, is absolutely not limited to providing computational horsepower and storage. And that support, scholars and researchers, has to include verification and documentation of data-gathering methods, so that everyone knows that everything’s on the level, and it’s got to include ways to refer back to other people’s data that you’ve used; that’s what I mean by ‘certification’ here.
That’s all this is about. Really. And that’s the cyberinfrastructure puzzle as I see it. It’s all about data.
But what is data, exactly? We’re used to thinking of data as nice bar graphs and charts, with a nice key in the corner; you can imagine one on a web page or equally well on a print journal page. This is data, right?
No. Actually no, that’s not data, not data in the sense I mean it. Charts and graphs are dead data, data that’s been killed, cut in pieces, and ground up until it’s unrecognizable, just like hamburger. Data in charts and graphs is not revivable and not reusable. For optimum reusability, we need to save data before it’s distilled into charts and graphs and tables. In other words, we need to save the cows—before they become hamburger!
(In case you’re wondering, I owe the hamburger-and-cow image to XML expert Michael Kay, who once famously said “Converting PDF to XML is a bit like converting hamburgers into cows.”)
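To make the cow-and-hamburger point concrete in code: here is a small, hypothetical sketch (the numbers and variable names are invented for illustration) showing that you can always grind raw data down into a summary, but you cannot reconstitute the raw data from the summary.

```python
# Hypothetical raw observations -- the "cows".
raw = [12.1, 9.8, 14.3, 11.0, 10.6]

# The "hamburger": a summary statistic of the kind a chart reports.
mean = sum(raw) / len(raw)
print(round(mean, 2))  # 11.56 -- one number; the five observations are gone

# From the raw data we can still compute any new summary we like...
maximum = max(raw)

# ...but from `mean` alone there is no way back to `raw`:
# infinitely many different datasets share the same mean.
```

This is why saving only the charts and tables forecloses reuse: every new question requires going back to the herd.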
So in tight budget times, a very good question to ask is whether it’s actually necessary to solve this problem. Even if it is, do we have to solve it now? Do we have to keep all these data?
The answer is a resounding—sometimes. But I do want to add that even when it’s not absolutely required, it’s often a really good idea. On the Madison campus, we have collected a number of stories of researchers who wish they’d done a better job keeping their data, because a new use turned up for it, often years or decades later!
So in what cases is it mandatory? Funders may require it, as the National Institutes of Health (NIH) sometimes does. Just to be clear, that’s completely separate from the Public Access Policy requiring open access to journal articles published with NIH funds. Journals may require it. Most of the funders requiring open data are in Europe at the moment, but that’s not true of journals. I can’t give you a laundry list, because it’s very discipline-dependent and also very volatile, but we are seeing more and more science journals instituting data-retention policies.
Now, the data-retention policies I’ve seen have usually been time-limited; five or ten years is common. My question is this: if you’re going to do it for five or ten years, why not plan for longer? Sure, it makes sense to reassess every now and again, because some datasets do become obsolete. But don’t let your thinking be governed by journal requirements; most of the work of keeping a dataset happens before the bits hit storage, so keeping them longer often costs very little extra.
Here’s the catch. Some of these data stakeholders have built barns for the cows. Many haven’t. And guess who’s on the hook if they don’t? There’s nothing stopping a journal or a funder from creating an unfunded mandate to keep and preserve data. A few have. And we, collectively, researchers and librarians and IT professionals, are left dangling on the hook figuring out how to comply.
So that’s the stick. Now for the carrot. We’re keeping all these data. Why? What’s the use? What can be done with data?
- Experimental validation
- Meta-analysis, data-mining, mashups
- Interdisciplinary investigation
- Historical investigation
- Modeling and model validation
… the possibilities are endless—IF we have the cows—that is, the data.
Is all data from “big science”? I’ve answered this already for those who were listening at the beginning, but for anybody who came late, let me reiterate: there’s an image of cyberinfrastructure that assumes it’s all about the Higgs bosons of this world—physics, astronomy, and biomedicine. That’s who’s got all the data, just like they’ve got all the money.
Absolutely not. And they don’t even need our sloppy help.
A broader concern is so-called “small science,” science without the big bucks, which is frankly most scientists, not that that surprises anyone. The big guns have mostly worked out their data issues, as I’ve said. The small-science folks—a lot of them hardly seem to know where to begin.
And the sting in the tail here is that there are a lot more small-science researchers than big-science ones. This means that if you pile up all their data, there’s probably a lot more of it! Each individual data-herd is pretty small by comparison with the Large Hadron Collider, granted. But add all those herds together, and we are talking a lot of cows.
And my dearest loves, the arts and humanities, are hardly devoid of data. A digitized image is data. A digitized book is data, and can be computed upon. The performing arts are pushing out huge amounts of audio and video—and while we’re talking storage capacity, digital video is an unbelievable headache because of file sizes. I like to think about folklorists and ethnographers while I consider digital data in the arts and humanities. Anything you can imagine is grist for their analysis mill, and yes, they are both analyzing digital data and recording their conclusions digitally.
So we’ve all got data, one way or another. And here’s the catch with that: we don’t have a service-provision model for this. Not in libraries. Not in IT. Not in most regular research practice. Nobody’s sure how it’s going to get done yet. This is part of why I’m here today. UW-Milwaukee is busily trying to sort out how to do all this.
But we do know a few things…
Cows are dumb. They will not save themselves. We know that apathy is not a solution to data management. And here we often hear someone grumbling that if this were all just paper, it’d be fine; it’s this stupid digital stuff that’s the problem. Leaving aside that data on paper are completely useless as data, we shouldn’t ignore the incredibly complex safety net that libraries have built around paper. Paper doesn’t preserve itself either; librarians preserve it! Digital data are no different. We have to take intentional action to keep data viable.
It takes a village to save the cows. Right, so who’s we? Let’s have a show of hands around the room. Librarians? IT pros? Faculty and researchers? Research support, grant administrators and the like? Right. If you raised your hand at any point, part of this is probably your problem. Which part, I don’t know, and anybody who tells you they know is lying and probably trying to sell you something.
So, can you tell a Holstein from an Angus? (I’m just going to die if there’s a dairy researcher in the room.) No, I can’t either. But researchers know their cows! The point of this little parable is that we know absolutely that data curation can’t happen without researchers cooperating with the other people in the village. This is because data without context and interpretation are meaningless, like a spreadsheet with the header row chopped off. Librarians and IT pros don’t automatically understand how a given dataset fits together, how it was created, how other people will expect to search for it or use it, what different parts of it even mean. Researchers will have to learn to express these things, if they don’t already know how.
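Here is a tiny, hypothetical illustration of that chopped-off header row (the column names and values are all invented): the same rows of numbers are uninterpretable until the researcher supplies the context.

```python
import csv
import io

# The same data, with and without its header row.
headerless = "4.2,37,2009\n5.1,41,2010\n"
with_header = "rainfall_cm,herd_size,year\n" + headerless

# Without the header, rows are just anonymous tuples of numbers.
rows = list(csv.reader(io.StringIO(headerless)))
print(rows[0])  # ['4.2', '37', '2009'] -- but 37 of what? 2009 where?

# With the header (context only the researcher can supply),
# each value becomes meaningful and searchable.
records = list(csv.DictReader(io.StringIO(with_header)))
print(records[0]["herd_size"])  # '37' -- now identifiably a herd size
```

The header is the simplest possible metadata; real datasets need far more context than this, which is exactly why researchers must stay in the loop.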
IT pros, you’re going to be running the big iron, no surprises there. But there are surprises for you in this, such as time horizons you’re not used to, mass file format migrations, metadata internal and external and relational that we can hardly imagine yet… and so on. Don’t panic, we’re all in this together, and we have examples to work from, especially on the larger scales—but by the same token, don’t make the mistake of thinking you can just sail in and solve this one. It’s complicated.
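As one small sketch of what planning a mass file-format migration might look like in practice (the formats, file names, and the notion of an “at-risk” list here are all made up for illustration, not drawn from any real tool):

```python
from collections import Counter

# Hypothetical inventory of a data store: (path, format) pairs.
inventory = [
    ("survey/run1.sav", "spss"),
    ("survey/run2.sav", "spss"),
    ("interviews/a01.rm", "realmedia"),
    ("scans/page001.tiff", "tiff"),
]

# Formats we (hypothetically) judge to be at risk of obsolescence.
AT_RISK = {"spss", "realmedia"}

# Count holdings by format to size the migration effort.
by_format = Counter(fmt for _, fmt in inventory)

# Everything in an at-risk format goes on the migration worklist.
worklist = [path for path, fmt in inventory if fmt in AT_RISK]
print(by_format["spss"])  # 2
print(worklist)           # the three files needing migration
```

The hard parts, of course, are the judgment calls this sketch hides: deciding which formats are actually at risk, and what to migrate them to.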
Librarians, this is your call to arms. Step up and sit at the table, or the table is going to forget that we exist. This isn’t good for the table, and it’s not good for us, either. Sure, we’re used to dealing with the published literature, and we’re fond of its authority and finality. But we’re going to have to look earlier in the lifecycle for our greatest impact:
“But what I see happening is … this beautiful combination of understanding the structure of information, and understanding the code that goes behind it, and how to make it usable to the people who want to access it. I think that we used to talk about blended, or the hybrid librarian — now that’s the librarian.”
(Palmer et al., “Identifying Factors of Success in CIC Institutional Repository Development”)
Grant administrators: cows don’t corral themselves. Neither do researchers. We need you.
And then there’s the big gray area. When I said I didn’t know who would do all this? This is what I meant. Some researchers say that the solution to data-management deficits is to teach themselves—or up-and-coming newcomers—information-management skills so that they become informaticists. Some researchers say that the answer is for researchers to learn to code. All of this will probably happen, in some fields and at some levels. I don’t know how it will all shake out, in the long run. But cross-functional training, no matter what end of the research enterprise you’re on, is probably the wave of the future.
Great. So now what?
- Find use cases. Find the people with data problems needing solutions. I guarantee they exist.
- Plan for infrastructure. Data infrastructure is more than computers, let’s not forget. It’s also a policy and procedures infrastructure, without which none of this can happen. And finally, as I dearly hope I’ve made clear, infrastructure is people. Fancy supercomputers aren’t worth a penny without people to use them, care for them, and take care of what they compute.
- Build alliances. No one can do this alone.
- Keep an eye out for opportunities.
- Start conversations. Everyone in this room can do this, and I hope you will.
But, you may ask, what do you say to someone about their data? I recommend starting with Michael Witt and Jake Carlson’s Ten Questions:
- What is the story of your data?
- What form and format are the data in?
- What is the expected lifecycle of your data?
- How could your data be used, reused, and repurposed?
- How large is your dataset, and what is its rate of growth?
- Who are the potential audiences for your data?
- Who owns the data?
- Does the dataset include any sensitive information?
- What publications or discoveries have resulted from the data?
- How should the data be made accessible?
If this seems like common sense… good! It mostly is! Thank you—and save a cow today!