Plagiarism is perhaps the mildest academic sin, as well as the easiest to detect. There are innumerable cases of more serious forms of misconduct — such as the falsification and fabrication of data — that have stained the reputations of universities all over the world. If academia really wants to tackle the problem, it has to rethink the way it judges and rewards research — and learn to tell good work from bad.
In Claudine Gay’s case, the plagiarism — and I think it qualifies as plagiarism — seems a venial sin rather than a mortal one. Yes, her doctoral dissertation and several of her academic papers appear to duplicate the language of other scholars in a way that fails to give sufficient credit. But that in itself isn’t irredeemable; when a couple of punctuation marks or a footnote can be all that separates vice from virtue, there’s a lot of room for interpretation and for honest error. However, plagiarism is a signifier of potentially much more damning sloppiness: Even when, as here, it isn’t an egregious case of trying to claim credit for someone else’s ideas, it can be a sign that the work has more fundamental problems. It’s a signal to advisers and peers to give that work extra scrutiny, scrutiny that is sadly lacking.
This is a huge issue because those same advisers and peers are the ones who determine the value — or lack of it — of research, largely through their role in publishing it. The coin of the realm in academia is typically the peer-reviewed paper; an academic gets credit for the research she performs when she publishes the results in a scholarly journal. For the most part, these journals will do a quick assessment of a paper’s worthiness and then send the manuscript out to a small number of subject-matter experts (often three) to gauge the quality and importance of the work. But peer reviewers have little incentive to do a thorough job. While universities richly reward a professor’s own research output, they care almost nothing about their professors’ role in checking others’ work. Nor are academics typically paid by the journals (which make money from publishing researchers’ work); and, given the imperfect anonymity of the process, a thorough, critical review can even damage the researcher’s relationship with other scientists. As a result, countless professors, when asked to perform a peer review for a journal, fob the work off to their hapless grad students, so it’s often not the seasoned academic judging the quality of research but the greenest in the field. And given the proliferation of academic journals — and the increase in the number of academic papers published each year — the academic review process is getting more threadbare by the year.
A truly thorough review of Dr. Gay’s papers by peers should have caught the plagiarism; spot-checking every single citation in a paper takes time, but it’s a great way of catching not just plagiarism but errors in interpretation. And that’s the easy stuff. Falsification or fabrication of data is even harder to catch, but it can often be detected given enough time and effort: Another college president, Stanford’s Marc Tessier-Lavigne, resigned after it was revealed that his lab published reports with manipulated data. (A review of the allegations said there was no evidence that Dr. Tessier-Lavigne knowingly falsified data, but that his work “fell below customary standards of scientific rigor and process.”) The problems were evident in the papers published in journals — and should have raised flags earlier.