
Friday, November 16, 2007

Evaluation by "peers" (in little slippers, Welington Salgado style, por supuesto...) in the concentration-camp world of Brazilian and international research.

Brazil is forever dilettantish in political analysis, owing to the sectarian mentality of the overwhelming majority of its intellectuals. Organization into interest groups (some of them spanning every region of the country) is nothing new when compared with the academic conglomerates of the wider world. In the United States, in France (see the delightful "Les intellocrates, une expédition en haute intelligentsia" by H. Hamon and P. Rotman, published in 1981 but still current), in Italy, intellectuals assemble gangs to secure both word and funding, in a dark communion whose strategies and tactics escape ordinary mortals, the simple taxpayers.

In the scramble to grab resources, the cunning sectarians seize posts and advisory positions and from there dispense favors to the faithful and their allies. In secret they strip grants from enemies and critics, anonymously slander adversaries, interrupt research projects, and try by such sordid means to wound anyone who dares denounce their conclaves. Beware! An academic sect is not guided exclusively by ideology — quite the contrary! Nor should one imagine that such sects include or exclude only those who diverge from the methods, foundations, and aims of their noetic attitudes. Not infrequently, academics of the supposed left are backed, in the fight for posts and positions, by allies of the so-called right.

The ideological surface merely conceals the division of power and resources, both in research and teaching institutions and in the agencies that fund them. The secrecy of ad hoc referees is one of the most unethical features of the scientific world. Cowards hide in the shadows of irresponsibility, withholding their names from the public that pays all the bills. The phenomenon is old, but it has worsened in recent times, in Brazil and around the world. The ignorant and the ill-intentioned (the two are almost always a tautology) "ignore" the problem while their companions control the posts, the money, and the factory of defamation.

Yet there is a vast literature on the international scene, grounded in sound research. An excellent text was published by Gordon Moran, "Silencing Scientists and Scholars in Other Fields: Power Paradigm Controls, Peer Review, and Scholarly Communication" (Ablex Publishing Corporation, Greenwich, 1998). The enormous title does, however, cover the political, ideological, methodological, and other problems of "peer" review. The author cites case studies from the most diverse fields and highlights the "dirty tricks" used by referees against adversaries, or simply against those who do not fit into the sects.

In academic journals, for example, the most common artifice is rejecting an article "because it is too long." The writer whose text was refused opens the issue in which his work was supposed to appear and... (no surprise) finds articles twice the length of his own. No surprise either in discovering that the lucky author belongs to the sect that runs the journal — the same people who appointed the "objective and neutral referee" who sent the adversary into orbit. Other dirty tricks? Alleged failures to cite "the important national authors" (not infrequently, the ad hoc referee himself), or repetitions of theses already published somewhere (a somewhere never named, por supuesto...).

Moran adds that one commonplace trick (Moran would not be needed if professors and researchers paid attention to what goes on under their noses, or even to themselves) is the rejection of many works on the basis of "flaws" they do not have. The alleged flaws are never specified, yet the project is nonetheless refused "because of" the insinuated flaws. L. Z. Leslie sums it up well: "Almost everyone who has ever submitted articles to the judgment of a journal has a horror story to tell" ("Manuscript review: a view from below," Scholarly Publishing, 20, pp. 123-128).

The "ethics" of the secrecy maintained by agencies and journals is rather peculiar: it decrees the equality of researchers, yet casts the wolf in the role of arbitrary judge — and the wolf, in turn, devours sheep. The sheep, for their part, upon finding a flock that is also strong in secrecy, tear apart without pity the ex-wolf whose gang has lost its posts of command and coordination. And so turns the wretched wheel of the "peers," equal only in their timidity, in their flight from public light. There are wolves, lions, hyenas, and lapdogs in the academic universe. The masks serve only to determine who will be the executioner of the hour and who the victim. As Alexandre Kojève aptly put it, the realm of the intellectuals is "the empire of robbed thieves."

Unlike ordinary justice (even an inefficient one, like Brazil's), the right of appeal here is more than a simulacrum: it is itself an excuse. Consider: the wolf, backed by the power of his pack in the agencies or journals, rejects a project or article, even allowing himself to slander the applicant who does not belong to the number of the "chosen." And what is the modus operandi of the agencies and journals? They send the request for reconsideration back to the same wolf, which yields more anonymous slander, more cowardice, and so on. And cowardice breeds cowardice. Faced with such injustice, few researchers or professors muster the courage to challenge so iniquitous a system.

I place the word "peers" in quotation marks because, in the general war between powerful sects, individuals or groups without a "godfather" to defend them or "godchildren" to help them cease to be peers and become "the enemy to be destroyed by any means." When the jurist of Nazism, Carl Schmitt, asserted that the concept of the political rests on the archaic notion of the "enemy," he knew perfectly well what he was talking about. Since he belonged to the winning group — the one that seized public resources and expelled critical colleagues from the Hitlerite government — he knew the procedures of "intellectual politics" firsthand. His colleague Martin Heidegger was also a winner, and in the rituals of victory he did not spare his old masters, such as Edmund Husserl. See the tremendous book by Emmanuel Faye: Heidegger, l'introduction du nazisme dans la philosophie. Autour des séminaires inédits de 1933-1935 (Paris, Albin Michel, 2005).

Under the Brazilian military dictatorship — the Black Book of USP cannot be denied — "realist" intellectuals ensconced in posts offered by the government laughed uproariously at the "naïve" who believed in democracy. The newspaper "O Pasquim" coined the slogan suited to the collaborators of the hour: "I need to survive, you understand?" A phrase that would not be out of place in the mouths of Heidegger, Carl Schmitt, and their like. A phrase perfectly suited to those who today, under cover of anonymity, seek to destroy the reputations and careers of their "peers."

I refrain from analyzing the plagiarisms (always absolved when the culprit belongs "to our side"), the strategic citations meant to flatter the leader of an intellectual sect, the two-facedness in dealings with colleagues. Once, at an important symposium, I was listening from the last remaining rows to my wife's lecture on liberal thought and the genesis of the Brazilian state. Beside me sat a lady who whispered in my ear, at every statement the speaker made, all the horror aroused in her by that heap of philosophers and sociologists who "ventured onto the terrain of History." I listened patiently to the whispered philippics. The person obviously did not know who I was. When the lecture ended, the very same lady whispered in my ear: "And yet, despite everything, I need to ask a favor of that USP philosopher." I followed the censor to the speakers' table. Then I smiled and said: "Allow me to introduce my wife." At least that favor was never requested. One fewer in the immense communion of favors, the favored, and the disfavored in the miserable Brazilian university world.




====================================================================




The Scientist 20(2) 26, 2006

Submissions are up, reviewers are overtaxed, and authors are lodging complaint after complaint about the process at top-tier journals. What's wrong with peer review?
BY ALISON MCCOOK

Peter Lawrence, a developmental biologist who is also an editor at the journal Development and former editorial board member at Cell, has been publishing papers in academic journals for 40 years. His first 70 or so papers were "never rejected," he says, but that's all changed. Now, he has significantly more trouble getting articles into the first journal he submits them to.
"The rising [rejections] means an increase in angry authors."
-Drummond Rennie

Lawrence, based at the MRC Laboratory of Molecular Biology at Cambridge, UK, says his earlier papers were always published because he and his colleagues first submitted them to the journals they believed were most appropriate for the work. Now, because of the intense pressure to get into a handful of top journals, instead of sending less-than-groundbreaking work to second- or third-tier journals, more scientists are first sending their work to elite publications, where they often clearly don't belong.

Consequently, across the board, editors at top-tier journals say they are receiving more submissions every year, leading in many cases to more rejections, appeals, and complaints about the system overall. "We reject approximately 6,000 papers per year" before peer review, and submissions are steadily increasing, says Donald Kennedy, editor-in-chief of Science. "There's a lot of potential for complaints."

Everyone, it seems, has a problem with peer review at top-tier journals. The recent discrediting of stem cell work by Woo-Suk Hwang at Seoul National University sparked media debates about the system's failure to detect fraud. Authors, meanwhile, are lodging a range of complaints: Reviewers sabotage papers that compete with their own, strong papers are sent to sister journals to boost their profiles, and editors at commercial journals are too young and invariably make mistakes about which papers to reject or accept (see Truth or Myth?). Still, even senior scientists are reluctant to give specific examples of being shortchanged by peer review, worrying that the move could jeopardize their future publications.

So, do those complaints stem from valid concerns, or from the minds of disgruntled scientists who know they need to publish in Science or Nature to advance in their careers? "The rising [rejections] means an increase in angry authors," says Drummond Rennie, deputy editor at Journal of the American Medical Association (JAMA). The timing is right to take a good hard look at peer review, which, says Rennie, is "expensive, difficult, and blamed for everything."

What's wrong with the current system? What could make it better? Does it even work at all?
TOO MANY SUBMISSIONS

Editors at high-impact journals are reporting that the number of submissions is increasing every year (see "Facts and Figures", the table below). Researchers, it seems, want to get their data into a limited number of pages, sometimes taking extra measures to boost their success. Lately, academia seems to place a higher value on the quality of the journals that accept researchers' data, rather than the quality of the data itself. In many countries, scientists are judged by how many papers they have published in top-tier journals; the more publications they rack up, the more funding they receive.
ARTICLE EXTRAS

Related Articles:
Truth or Myth?
We presented three common complaints about peer review at top-tier journals to editors at some of those journals. Here are their responses.

What about fast-track?
Despite a high profile incident, preliminary evidence suggests the practice does not change peer-review quality or rejection rates

Consequently, Lawrence says he believes more authors are going to desperate measures to get their results accepted by top journals. An increasing number of scientists are spending more time networking with editors, given that "it's quite hard to reject a paper by a friend of yours," says Lawrence. Overworked editors need something flashy to get their attention, and many authors are exaggerating their results, stuffing reports with findings, or stretching implications to human diseases, as those papers often rack up extra references. "I think that's happening more and more," Lawrence says. In fact, in a paper presented at the 2005 International Congress on Peer Review and Biomedical Publication, a prospective review of 1,107 manuscripts submitted to the Annals of Internal Medicine, British Medical Journal (BMJ), and The Lancet in 2003 showed that many major changes to the text demanded by peer review included toning down the manuscript's conclusions and highlighting the paper's limitations. This study suggests that boosting findings may cause more problems by overburdening reviewers even further.

Indeed, sorting through hype can make a reviewer's job at a top journal even more difficult than it already is. At high-impact journals, reviewers need to judge whether a paper belongs in the top one percent of submissions from a particular field - an impossible task, says Hemai Parthasarathy, managing editor at Public Library of Science (PLoS) Biology. Consequently, editors and reviewers sometimes make mistakes, she notes, perhaps publishing something that is really in the top 10%, or passing on a really strong paper. To an outsider, this pattern can look like "noise," where some relatively weak papers are accepted when others aren't, inspiring rejected authors to complain. But, it's an inevitable result of the system, she notes.
THE RELIGION OF PEER REVIEW

Despite a lack of evidence that peer review works, most scientists (by nature a skeptical lot) appear to believe in peer review. It's something that's held "absolutely sacred" in a field where people rarely accept anything with "blind faith," says Richard Smith, former editor of the BMJ and now CEO of UnitedHealth Europe and board member of PLoS. "It's very unscientific, really."
What's wrong with the current system?
What could make it better?
Does it even work at all?

Indeed, an abundance of data from a range of journals suggests peer review does little to improve papers. In one 1998 experiment designed to test what peer review uncovers, researchers intentionally introduced eight errors into a research paper. More than 200 reviewers identified an average of only two errors. That same year, a paper in the Annals of Emergency Medicine showed that reviewers couldn't spot two-thirds of the major errors in a fake manuscript. In July 2005, an article in JAMA showed that among recent clinical research articles published in major journals, 16% of the reports showing an intervention was effective were contradicted by later findings, suggesting reviewers may have missed major flaws.

Some critics argue that peer review is inherently biased, because reviewers favor studies with statistically significant results. Research also suggests that statistical results published in many top journals aren't even correct, again highlighting what reviewers often miss. "There's a lot of evidence to (peer review's) downside," says Smith. "Even the very best journals have published rubbish they wish they'd never published at all. Peer review doesn't stop that." Moreover, peer review can also err in the other direction, passing on promising work: Some of the most highly cited papers were rejected by the first journals to see them.

The literature is also full of reports highlighting reviewers' potential limitations and biases. An abstract presented at the 2005 Peer Review Congress, held in Chicago in September, suggested that reviewers were less likely to reject a paper if it cited their work, although the trend was not statistically significant. Another paper at the same meeting showed that many journals lack policies on reviewer conflicts of interest; less than half of 91 biomedical journals say they have a policy at all, and only three percent say they publish conflict disclosures from peer reviewers. Still another study demonstrated that only 37% of reviewers agreed on the manuscripts that should be published. Peer review is a "lottery to some extent," says Smith.
Facts and Figures
Statistics are from editors at Journal of the American Medical Association (JAMA), Public Library of Science (PLoS) Biology, Science, Nature, and the New England Journal of Medicine (NEJM). The Scientist also contacted editors at Cell, The Lancet, and the Proceedings of the National Academy of Sciences; all declined to comment.

Statistics by journal (fields: Submissions; Acceptance Rate; Workload; Review Criteria; Editor Demographics):

JAMA
Submissions: 6,000 major manuscripts in 2005, double the 2000 figure.
Acceptance rate: Approximately 6%. Close to two-thirds are rejected before peer review.
Workload: All papers that are eventually accepted are first presented and discussed at a twice-weekly manuscript meeting, attended by the editor-in-chief, other decision-making editors, and statistical editors.
Review criteria: In addition to scientific rigor, the journal triages submissions before review according to importance and to ensure the subject has general medical interest.
Editor demographics: There are 25 decision-making editors; the age range is 40-70.

PLoS Biology
Submissions: Doubled in the last six months.
Acceptance rate: 15%; this fluctuates wildly because the publication is so new.
Workload: Each paper has a hybrid team of one academic and one professional editor. Most reviewers are asked to complete reviews within seven working days.
Editor demographics: Editorial board contains ~120 members.

Science
Submissions: 12,000 per year, increasing "at a rate of growth rivaling the rate of Chinese economic growth," says editor Don Kennedy.
Acceptance rate: 8%; about half are rejected before peer review.
Workload: Papers are reviewed by an editor and two members of the board of reviewing editors before peer review. Most reviewers are asked to return comments within one to two weeks.
Editor demographics: Editorial board contains ~120 members (26 PhD editors). Median age: mid-40s.

Nature Cell Biology (NCB)
Submissions: Increasing by 10% each year.
Acceptance rate: All Nature journals have an acceptance rate of less than 10%.
Workload: Each editor sees an average of 470 papers per year.
Review criteria: Besides scientific rigor, the journals look for general interest (especially at Nature), conceptual advance, and breadth/scope of study.
Editor demographics: NCB has four editors; Nature journals have no editorial boards. Average age: mid-30s.

New England Journal of Medicine (NEJM)
Submissions: Received 5,000 submissions in 2005, as of press time; submissions increase 10% to 15% each year.
Acceptance rate: 6% of submissions are eventually published; approximately 50% of papers are rejected before peer review.
Workload: A deputy editor must approve the assigned editor's decision to reject before review.
Review criteria: Other than scientific rigor, editors judge submissions according to "suitability and editorial consistency," says editor Jeffrey Drazen. For instance, the journal does not publish animal studies.
Editor demographics: The average age of editors is in the mid-50s; the age range is 40-78. There are 10 deputy editors and 10 associate editors.



TRYING TO CHANGE

A number of editors are working to improve the system. In recent years, BMJ has required that all reviewers sign their reviews. All comments go to the authors, excluding only "very confidential information," says Sara Schroter, research coordinator at BMJ, who has studied peer review.

Studies of whether signed reviews improve the quality of what's sent back have shown conflicting results, and most detected only minor effects, Schroter notes. One report presented at this year's Peer Review Congress showed that, in a non-English-language journal, signed reviews were judged superior on a number of factors, including tone and constructiveness, by two blinded editors. However, another study published in BMJ in 1999 found that signed reviews were no better than anonymous comments, and that asking reviewers to identify themselves only increased the chance they would decline to participate.

Still, Schroter says the journal decided to introduce its policy of signed reviews based on the logic that signed reviews might be more constructive and helpful, and anecdotally, the editors at BMJ say that is the case. JAMA's Rennie says he doesn't need research data to tell him that signing reviews makes them better. "I've always signed every review I've ever done," he says, "because I know if I sign something, I'm more accountable." Juries are not anonymous, he argues, and neither are people who write letters to the editor, so why are peer reviewers? "I think it'll be as quaint in 20 years' time to have anonymous reviewers as it would be to send anonymous letters to the editor," he predicts.

But not all editors agree. Lawrence, for one, says he believes anonymity helps reviewers stay objective. Others argue that junior reviewers might become hesitant to conduct honest reviews, fearing negative comments might spark repercussions from more senior-level authors. At Science, reviewers submit one set of comments to editors, and a separate, unsigned set of comments to authors - a system that's not going to change anytime soon, says Kennedy. "I think candor flourishes when referees know" that not all their comments will reach the authors, he notes. Indeed, in another study presented at this year's peer review congress, researchers found that reviewers hesitated to identify themselves to authors when recommending the study be rejected. Nature journals let reviewers sign reviews, says Bernd Pulverer, editor of Nature Cell Biology, but fewer than one percent do. "In principle" signed reviews should work, he says, but the competitive nature of biology interferes. "I would find it unlikely that a junior person would write a terse, critical review for a Nobel prize-winning author," he says.

However, since BMJ switched to a system of signed reviews, Smith says there have been no "serious problems." Only a handful of reviewers decided not to continue with the journal as a result, and the only "adverse effect" reported by authors and reviewers involved authors exposing reviewers' conflicts of interest, which is actually a "good thing," Smith notes.

Another option editors are exploring is open publishing, in which editors post papers on the Internet, allowing multiple experts to weigh in on the results and incrementally improve the study. Having more sets of eyes means more chances for improvement, and in some cases, the debate over the paper may be more interesting than the paper itself, says Smith. He argues that if everyone can read the exchange between authors and reviewers, this would return science to its original form, when experiments were presented at meetings and met with open debate. The transition could transform peer review from a slow, tedious process to a scientific discourse, Smith suggests. "The whole process could happen in front of your eyes."

However, there are concerns about the feasibility of open reviews. For instance, if each journal posted every submission it received, the Internet would be flooded with data, some of which the media would report. If a journal ultimately passed on a paper, who else would accept it, given that the information's been made public? How could the journals make any money? There's an argument for both closed and open reviews, says Patrick Bateson, who led a Royal Society investigation into science and the public interest, "and it's not clear what should be done about it."

Many authors are now recommending that editors use (or avoid) particular reviewers for their manuscripts; and some research suggests this step may help authors get their papers published. An abstract at the last Peer Review Congress reported that papers were more likely to be accepted if authors recommended reviewers, or asked that certain reviewers not participate. Kennedy, for one, says he believes it's "perfectly respectable" for authors to bar reviewers, although he says he does not always adhere to authors' requests, such as occasions when authors in particularly narrow specialties submit an overly long list of reviewers to bar.

Lawrence suggests that, to ease the current publishing crunch, senior scientists should occasionally submit their studies to lesser journals. However, he says he's tried this tactic, and it "hasn't helped [his] career any." Consequently, there should be major changes in how work is evaluated, he says, so researchers are not penalized for publishing in second- or third-tier journals.

Anecdotally, Parthasarathy says this is already happening. In some cases, scientists who are being evaluated simply submit their top three papers, instead of counting the number of high-impact submissions. She adds that one of the purposes of open access (the founding principle of PLoS) is to change the all-importance of where people publish. If every scientist has access to papers, she says, they can judge the paper by its contents, not just its citation. "We have to get away from [the idea that] where the paper is published [is] the be all and end all," Parthasarathy says.

Despite the number of complaints lodged at peer review, and the lack of research to show that it works, it remains a valued system, says Rennie. Scientists sigh when they're asked to review a paper, but they get upset if they're not asked, he notes. Reviewing articles is a good exercise, Rennie says, and it enables reviewers to stay abreast of what's going on. Peer review "has many imperfections, but I think it's probably the best system we've got," says Bateson.

Experts also acknowledge that peer review is hardly ever to blame when fraud is published, since thoroughly checking data could take as much time as creating it in the first place. Still, Pulverer says he has seen reviewers work on papers to the point where they deserve to be listed as coauthors. "I think everyone in biology would agree that peer review is a good thing," he says. "I would challenge anyone to say it hasn't improved their papers."

Correction (posted February 9): When originally posted, this package of stories contained two errors. Due to a production error, the JAMA acceptance rate in "Facts and Figures" read approximately 55% rather than 5.5%. According to JAMA, the figure is "about 6%."

In addition, the related article "What about fast-track?" reported that the International Congress on Peer Review and Biomedical Publication happens every year. The Congress takes place every four years.

The Scientist regrets these errors.
