I am delighted not to have to do a technosocial rundown of everything that’s wrong with the contested code4lib journal article. Becky Yoose’s comment on it does a fine job of that (though I’d like to see the k-anonymity/l-diversity analysis, myself, because I want to learn how to do those properly). Being a bit of a process and organizational-behavior wonk, I want instead to look at the policy and process problems that brought about this unacceptable result, and how they can be improved—not just at this specific journal, but throughout the LIS literature.
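(For the curious: the bare-bones version of those checks is not hard to sketch, even if doing them properly on real data is. Here is a minimal, purely illustrative sketch in Python with pandas; the column names and toy records are hypothetical, not drawn from the contested article's data.)

```python
# Purely illustrative sketch of k-anonymity / l-diversity checks.
# Column names and toy records below are hypothetical, not real patron data.
import pandas as pd

def k_anonymity(df: pd.DataFrame, quasi_identifiers: list) -> int:
    """k = size of the smallest group of rows sharing the same quasi-identifier values."""
    return int(df.groupby(quasi_identifiers).size().min())

def l_diversity(df: pd.DataFrame, quasi_identifiers: list, sensitive: str) -> int:
    """l = fewest distinct sensitive values found in any such group."""
    return int(df.groupby(quasi_identifiers)[sensitive].nunique().min())

# Toy example: ZIP code + age bracket as quasi-identifiers, call-number class
# as the sensitive attribute.
records = pd.DataFrame({
    "zip": ["53703", "53703", "53703", "53715"],
    "age_bracket": ["18-24", "18-24", "18-24", "45-54"],
    "call_number_class": ["HQ", "HQ", "RC", "Z"],
})
print(k_anonymity(records, ["zip", "age_bracket"]))                       # 1: someone is uniquely identifiable
print(l_diversity(records, ["zip", "age_bracket"], "call_number_class"))  # 1: a group has only one sensitive value
```

A dataset is k-anonymous only if that first number is at least k, and l-diverse only if the second is at least l. Choosing the quasi-identifiers and the thresholds sensibly is the hard part, and exactly the part I would like to learn.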
It’s glaringly obvious that “ask a privacy expert for a volunteer last-minute ethics review after the paper has already been accepted; then ignore their review, publish the article anyway, and write a self-justifying editorial with a lot of low-wattage rationalizations and poorly-thought-through technobabble; then have part of the editorial board creep on and tone-police the Twitter discussion while whinging about the editors being volunteers, as though practically all journal editors in LIS weren’t volunteers” was the wrong way to go about this. What the right way might be, however, is considerably less obvious. So let’s think that one through.
I think Rule One is that no journal gets to be surprised by ethics snafus more than once. Arguably after this fiasco no LIS journal gets to be surprised by this particular class of privacy-ethics snafu ever! And as Becky has noted, code4lib journal failed this rule—this is the second piece with dubiously-ethical-if-ethical-at-all data collection and handling processes to have been published there and immediately called out by library-privacy folks.
There simply has to be a research-ethics policy for any LIS journal that attracts (or may attract) material that is ethically problematic by library standards—honestly, that’s all LIS journals everywhere. In my head, such a policy explains as clearly as possible what’s a no-go (recognizing that being clear about this is hard, of course; people do come up with near-infinite ways to exploit and abuse other people in the name of research) and lays out a binding ethics-review process for at least the articles that trip editors’ or reviewers’ sensors as potentially ethically problematic.
“Binding” has to mean “an article that fails ethics review is rejected.” Ethics review isn’t like the qualities peer review classically assesses (originality, methodological correctness, importance, writing quality, and so on)—shortfalls there can sometimes be ameliorated with a revise-and-resubmit. Ethical failure has to mean a rejection, though. The alternative is—well, it’s publishing something that’s ethically wrong, which is an ethically wrong (and potentially career-damaging for all concerned) thing to do! As Becky wrote this week, publishing ethically-tarnished work creates an information hazard by implying that ethics are disposable, to be broken without consequence.
But my head is not everyone’s (on balance, probably a good thing), and the journal portal: Libraries and the Academy is taking a different approach. (Bias alert: I’ve co-authored with this piece’s author, Kyle Jones, before. He is also PI on the Data Doubles research project I have been contributing to. It’s been a privilege and a pleasure working with Kyle—yet I think he would be the first to tell you that although I respect him highly, I am not his lackey; I disagree with him frequently, as he with me.) I want to reproduce one short paragraph from the piece before I discuss that approach, because the paragraph is just so good, I wish I had written it, and it is absolutely apropos to the code4lib journal situation:
Journals are not—or at least should not be—disinterested publishers. They have a responsibility in curating conversations on particular topics, and they also set expectations for structure, inclusive language, quality standards, and research ethics. The guidelines a publication establishes for potential authors signal its values and beliefs, and contributors must adhere to those principles should they choose to publish with a particular journal.
This is Rule Two, I think, this right here. The ethics buck stops with the editorial board. It’s their names on the masthead and About page, them getting the service credit and the résumé/CV line. It’s them setting journal policy. It’s them choosing reviewers and guest editors, and (for many LIS journals) making publish-or-not decisions. Pushing off responsibility with an airy Wikipedian “if you care so much, come fix it yourself!” is just not on (never mind that coming in to try to fix it because she cares is exactly what Becky Yoose did vis-à-vis the contested article, to zero effect). Your journal, your responsibility for what’s in it. Not mine. Not Becky’s. Not even article authors’, really. Yours.
Right, back to portal’s policy statement. In essence, it’s all about expectations: stating what the journal expects from authors by way of privacy-ethics discussion in the learning-analytics pieces they submit, and placing responsibility on reviewers and editors to make publishing decisions consonant with ethics. I like this a lot! It is good, and should be a clear enough signal to authors lacking in ethics that it will spare journal staff from having to read (and from having no grounds to reject) a lot of ethically-bankrupt dreck!
I’m… not thrilled that it only addresses learning analytics, though. Certainly that’s a dangerous ethics pain point in the LIS literature presently; those who know me well may correctly suspect that I am holding back a veritable flood of furious words about this. The contested code4lib journal article and my discoveries about University of Wisconsin circulation-record retention, though, say pretty clearly that LIS’s privacy-ethics problem is much larger than learning analytics. In fairness, portal is the first journal I know of to codify expectations with respect to privacy ethics at all; I don’t blame them one bit for wanting to start small, specific, and clear. I do think they’ll find themselves in more ethics quagmires, ones they might have avoided had this statement been broader.
I’m also not sure the statement is enough. If I were on portal’s editorial board, I’d want a clear internal process document to back this up. There may be one under development, of course! Nitty-gritty process stuff is inside-baseball enough that I wouldn’t expect a journal to make it public, though I think that given portal’s status as the cutting-edge leader on this, doing so would be a service to all of LIS. I would want such a process document to answer the following questions:
- How should an editor or reviewer handle a piece that may go against journal ethics policy? (Can an editor desk-reject? With or without consulting other editors, or the board?) How should authors of pieces rejected on the basis of poor ethics be notified of the reason for rejection? (I don’t think “not telling them” is an ethical option here.)
- For pieces that need an ethics review, how does the editor go about getting one? Who’s eligible to review for this, and on what basis? (Becky certainly can’t do it all!) I don’t hate the idea of reserving one spot on a journal’s editorial board for an ethics watchdog, frankly; just be sure to set standards for the position such that foxes don’t end up watching the henhouse.
- What is an ethics review actually supposed to accomplish? Just as with peer review, there needs to be a form for this, and it needs to accord with the editorial board’s policy on go/revise/no-go. (Unlike regular peer review, “revise” should be a vanishingly rare choice, for reasons discussed above.)
- For journals adopting a developmental-editing process (as code4lib journal does, I am given to understand), when does ethics review come into it? The earlier, the better: wasting a researcher’s effort on something that then fails ethics review is not great, and privacy-ethics harms often happen (as they did in the contested article) quite early in the research, at time of data collection. Can prospective authors request ethics review of their methods up-front, IRB-style? (I think that’d be a great idea LIS-wide, and I’d happily participate in reviewing.) How does that work, then?
- What are the criteria for determining whether the ethics are so bad that offices of research integrity, grant funders, or other authorities need to be notified? When notification is judged necessary, what is the process for doing so? I doubt this is super-likely in LIS, but stranger things have happened, and (as I keep saying) LIS research and assessment ethics are deteriorating and likely haven’t hit bottom.
- What happens if this process fails and an unethical article slips through? How does retraction work, and how should a retraction notice read? What’s the external communications procedure? Nobody loves “incident response” (as infosec calls it), but everybody needs a process for it, if only for peace of mind and avoiding off-the-cuff wildcat responses that make incidents even worse. An incident-response process should also include a post-mortem aimed at necessary policy and process revisions—nobody gets these completely right the first time.
I have very likely not thought of everything here! Please refine this, everyone; COPE probably has useful advice I didn’t think of.
For all that code4lib journal’s editorial board bumbled this situation in almost every way they could have, I want to say (as someone who has also seriously bumbled stuff yet lived to tell the tale) that there’s an opportunity for redemption here—not only redemption, even, but helping establish ethics standards for the rest of LIS publishing, as portal is doing.
I think that opportunity looks a lot like retracting the article and editorial with a public apology (especially though not exclusively to Becky Yoose), then doing the policy and process work I have laid out here. I’m not the right person to help, given my history with code4lib—I can’t trust them and they can’t trust me—but if they do it, especially if they do it well and sincerely instead of as a mere butt-covering exercise, I will absolutely cheer them on from the sidelines. LIS needs this.