Peer Evaluation: What Is the Impact on Plagiarism in Science?

Posted on: August 19, 2010

Science cherishes traditions, jargon, and standards for research and writing. Peer evaluation is a valued tradition of scientific research, and scientists whose articles have survived the peer evaluation process usually speak of this with pride. However, the current peer evaluation method may be inadequate for the challenges of new research technology and the growing temptation to commit plagiarism.
 
Peer Evaluation: What is it?

  • “Peer”: equal, often in a very specialized field.
  • “Evaluation”: assessment according to some standard.
    This is roughly synonymous with peer review. (However, the term peer evaluation also applies, for example, to grading student oral presentations, or periodic performance reviews.)
  • Scholarly journals requiring peer evaluation are the most prestigious.
  • Grant proposals that disburse taxpayer funds often require peer evaluation.

Weaknesses of peer evaluation:
 

  1. The procedure is not without critics, among them Richard Horton of The Lancet and Horace Judson in the Medical Journal of Australia (http://www.ama-assn.org/public/peer/7_13_94/pv3112x.htm, http://www.mja.com.au/public/issues/172_04_210200/horton/horton.html).
  2. The anonymous Wikipedia author (merely a starting point) contends that peer evaluation has historically assumed author integrity, but provides no citation (http://en.wikipedia.org/wiki/Peer_review).
  3. A study detailed in Bioinformatics illuminates this deficit in peer evaluation. Researchers applied eTBLAST to a random sample from Medline, a database rich in journals using peer evaluation, and about 70,000 highly similar citations appeared. The researchers' close reading of a sample revealed 207 possible plagiarism cases (Bioinformatics 2010 26(11):1453-1457; doi:10.1093/bioinformatics/btq146).
  4. According to a summary, these researchers anonymously surveyed the journal editors and both the original and subsequent authors (http://www.scientificblogging.com/news_releases/what_happens_cases_peer_review_plagiarism).

Of those contacted:

  • 93% of authors were unaware they had been copied
  • 35% admitted copying
  • 28% denied borrowing
  • 22% had not participated in the write-up of a collaborative project
  • A puzzling 17% were unaware that they were listed at all!
  • Eleven journal editors had no prior experience with, or idea how to handle, plagiarism that slipped past the peer evaluation process!

The software noted above, and other tools like it, can powerfully supplement the human efforts of peer evaluation referees. Peer evaluation is primed to benefit from advances in technology.
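To make the idea concrete, here is a minimal sketch of automated similarity screening. It does not reproduce eTBLAST's actual algorithm (eTBLAST uses its own weighted word-matching against Medline); the n-gram Jaccard measure, the threshold value, and the function names below are illustrative assumptions only. Pairs scoring above the threshold would be flagged for human review, not judged automatically.

```python
def ngrams(text, n=3):
    """Return the set of word n-grams in the lowercased text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b, n=3):
    """Jaccard similarity of the two texts' word n-gram sets (0.0 to 1.0)."""
    ga, gb = ngrams(a, n), ngrams(b, n)
    if not ga or not gb:
        return 0.0
    return len(ga & gb) / len(ga | gb)

def flag_similar(abstracts, threshold=0.5):
    """Return (i, j, score) for every pair of abstracts above the threshold."""
    hits = []
    for i in range(len(abstracts)):
        for j in range(i + 1, len(abstracts)):
            score = jaccard(abstracts[i], abstracts[j])
            if score >= threshold:
                hits.append((i, j, round(score, 2)))
    return hits
```

Even a crude screen like this catches verbatim and near-verbatim reuse; the hard cases, as the survey results above suggest, still require editors and referees to read and judge.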
 
Complexities of peer evaluation in science:

  1. Publish-or-perish pressure impacts peer evaluation’s effectiveness. Researchers may be pressed to publish similar material in different journals, or to subdivide research findings for presentation in different outlets or “platforms” (http://www.nbi.dk/natphil/kur/phd/3.Fraud_def_ex_01a.ppt).
  2. Can or should peer evaluation manage this intellectual gray zone?
  3. The demands for precision and exactness lead to the re-use and over-use of boiler-plate tropes. Should peer evaluation pick up on this uniformity of style?
  4. Science can be very self-referential; it has been for decades.
  5. For example, Richard S. Greeley, PhD, employed reference figures issued by the NBS (National Bureau of Standards) for measurements at room temperature in his 1959 research. He determined the pH of HCl solutions at elevated temperatures by electrical measurement. His results were published, after peer evaluation, in The Journal of Physical Chemistry (DOI: 10.1021/j100841a013; http://pubs.acs.org/doi/pdf/10.1021/j100841a013).
     
    Greeley’s findings were referenced by others utilizing thermal techniques. Later, a staffer from the NBS approached Greeley, thanking him for corroborating their reference figures. Was any of this plagiarism? Perhaps not, but the reasoning was decidedly circular, and peer evaluation evidently did not pick it up.
     
    The drive to publish first motivates both scholars and the journals that use peer evaluation. This may constitute a perverse incentive for peer evaluation referees to neglect searching for prior similar works.

  6. “Prejudice based on language makes it difficult for non-Anglophone scientists to publish” (Myers, at http://www.tesl-ej.org/wordpress/issues/volume3/ej10/ej10a2/). Peer evaluation is challenged to distinguish between deliberate copying and well-nigh fatally weak writing. Both were represented in several documented Chinese cases in the early 1990s.
  7. Gareth Hughes of Oxford points out science writing’s “more formal structure of discourse and reduced level of verbal reasoning (i.e. a tendency to use proofs rather than arguments).”
  8. Nothing novel under Sol

Plagiarism is a long-standing issue in science – consider the following prominent (and pre-peer-evaluation) examples of conflict over precedence:
 
  • Isaac Newton versus Gottfried Leibniz
  • Charles Darwin versus Alfred Russel Wallace
  • Gregor Mendel, Carl Correns, and Hugo de Vries

Equipped with evolving plagiarism detection tools, peer evaluation can more effectively ensure intellectual rigor and integrity. The outcomes of scientific research are serious and affect us all.
