Has Scientific Truth Been Lost?

Where have so many honest scientists gone? I understand that some scientists go to work afraid of management or their masters; I also understand that a scientist's obligation is family first and then science. Only tenured scientists, with years and years of service to their institutions, can speak freely without being terminated, as other truth-tellers are terminated. I have found that only three scientists out of 1,000 can speak the truth; the rest are talking heads repeating what their masters tell them. Most don't even realise the harm they are doing. What you should do is repeat not what you have learned and been told, but what you know! "Education is what remains after one has forgotten everything he learned in school." - Einstein

"It Difficult to get a man to understand something when his salary depends upon his not understanding it." Upton Sinclair

"I've seen scientistswho were persecuted. ridiculed, deprived of jobs, income. simply because of the facts they discovered that led to an inconvenient truth, that they insisted on Telling" Al Gore

Al Gore said this before his Climategate. How ironic these words are, coming from a man invested in climate-change funding and in the profits from its research.

Who are we to believe? Labels don't tell the truth, courts are being lied to, and health is being compromised. There are so many victims!

Naughty Scientists link: the article is also reproduced below.

HealthPartners Research Foundation survey: http://pubs.acs.org/cen/science/83/8326sci4.html

Scientists behaving badly

Brian C. Martinson1, Melissa S. Anderson2 and Raymond de Vries3

  1. Brian C. Martinson is at the HealthPartners Research Foundation, 8100 34th Avenue South, PO Box 1524, Mailstop 21111R, Minneapolis, Minnesota 55440-1524, USA.

  2. Melissa S. Anderson is at the University of Minnesota, Educational Policy and Administration, 330 Wulling Hall, Minneapolis, Minnesota 55455, USA.

  3. Raymond de Vries is at the University of Minnesota, Center for Bioethics, N504 Boynton, Minneapolis, Minnesota 55455, USA.

Abstract

To protect the integrity of science, we must look beyond falsification, fabrication and plagiarism, to a wider range of questionable research practices, argue Brian C. Martinson, Melissa S. Anderson and Raymond de Vries.

Serious misbehaviour in research is important for many reasons, not least because it damages the reputation of, and undermines public support for, science. Historically, professionals and the public have focused on headline-grabbing cases of scientific misconduct, but we believe that researchers can no longer afford to ignore a wider range of questionable behaviour that threatens the integrity of science.

We surveyed several thousand early- and mid-career scientists, who are based in the United States and funded by the National Institutes of Health (NIH), and asked them to report their own behaviours. Our findings reveal a range of questionable practices that are striking in their breadth and prevalence. This is the first time such behaviours have been analysed quantitatively, so we cannot know whether the current situation has always been the case or whether the challenges of doing science today create new stresses. Nevertheless, our evidence suggests that mundane 'regular' misbehaviours present greater threats to the scientific enterprise than those caused by high-profile misconduct cases such as fraud.



As recently as December 2000, the US Office of Science and Technology Policy (OSTP) defined research misconduct as "fabrication, falsification, or plagiarism (FFP) in proposing, performing, or reviewing research, or in reporting research results"1. In 2002, the Federation of American Societies for Experimental Biology and the Association of American Medical Colleges objected to a proposal by the US Office of Research Integrity (ORI) to conduct a survey that would collect empirical evidence of behaviours that can undermine research integrity, but which fall outside the OSTP's narrow definition of misconduct2, 3. We believe that a valuable opportunity was wasted as a result.

A proper understanding of misbehaviour requires that attention be given to the negative aspects of the research environment. The modern scientist faces intense competition, and is further burdened by difficult, sometimes unreasonable, regulatory, social, and managerial demands4. This mix of pressures creates many possibilities for the compromise of scientific integrity that extend well beyond FFP.

We are not the first to call attention to these issues — debates have been ongoing since questionable research practices and scientific integrity were linked in a 1992 report by the National Academy of Sciences5. But we are the first to provide empirical evidence, based on self-reports from large and representative samples of US scientists, that documents the occurrence of a broad range of misbehaviours.

The few empirical studies that have explored misbehaviour among scientists rely on confirmed cases of misconduct, on scientists' perceptions of colleagues' behaviour, or on small, non-representative samples of respondents. Although inconclusive, previous estimates of the prevalence of FFP range from 1% to 2%. Our 2002 survey was based on large, random samples of scientists drawn from two databases that are maintained by the NIH Office of Extramural Research. The mid-career sample of 3,600 scientists received their first research-project (R01) grant between 1999 and 2001. The early-career sample of 4,160 NIH-supported postdoctoral trainees received either individual (F32) or institutional (T32) postdoctoral training during 2000 or 2001.

Getting data

To assure anonymity, the survey responses were never linked to respondents' identities. Of the 3,600 surveys mailed to mid-career scientists, 3,409 were deliverable and 1,768 yielded usable data, giving a 52% response rate. Of the 4,160 surveys sent to early-career scientists, 3,475 were deliverable, yielding 1,479 usable responses, a response rate of 43%.
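As a quick check on the arithmetic, here is a minimal sketch that recomputes the two response rates from the counts reported above (the function name is ours, chosen for illustration):

```python
# Minimal sketch verifying the response-rate arithmetic reported above.
def response_rate(usable: int, deliverable: int) -> float:
    """Response rate as a percentage of deliverable surveys."""
    return 100 * usable / deliverable

# Mid-career: 1,768 usable responses out of 3,409 deliverable surveys.
print(f"mid-career:   {response_rate(1768, 3409):.0f}%")   # ~52%
# Early-career: 1,479 usable responses out of 3,475 deliverable surveys.
print(f"early-career: {response_rate(1479, 3475):.0f}%")   # ~43%
```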

Our response rates are comparable to those of other mail-based surveys of professional populations (such as a 54% mean response rate from physicians10). But our approach certainly leaves room for potential non-response bias; misbehaving scientists may have been less likely than others to respond to our survey, perhaps for fear of discovery and potential sanction. This, combined with the fact that there is probably some under-reporting of misbehaviours among respondents, would suggest that our estimates of misbehaviour are conservative.

Our survey was carried out independently of, but at around the same time as, the ORI proposal. The specific behaviours we chose to examine arose from six focus-group discussions held with 51 scientists from several top-tier research universities, who told us which misbehaviours were of greatest concern to them. The scientists expressed concern about a broad range of specific, potentially sanctionable behaviours that may affect the integrity of research.

To affirm the serious nature of the behaviours included in the survey, and to separate potentially sanctionable offences from less serious behaviours, we consulted six compliance officers at five major research universities and one independent research organization in the United States. We asked these compliance officers to assess the likelihood that each behaviour, if discovered, would get a scientist into trouble at the institutional or federal level. The first ten behaviours on the resulting list were seen as the most serious: all the officers judged them as likely to be sanctionable, and at least four of the six officers judged them as very likely to be sanctionable. Among the other behaviours are several that may best be classified as carelessness (behaviours 14 to 16).

Admitting to misconduct

Survey respondents were asked to report in each case whether or not ('yes' or 'no') they themselves had engaged in the specified behaviour during the past three years; we then tabulated the percentage of respondents who admitted to each behaviour. For six of the behaviours, reported frequencies are under 2%, including falsification (behaviour 1) and plagiarism (behaviour 5). This finding is consistent with previous estimates derived from less robust evidence about misconduct. However, the frequencies for the remaining behaviours are 5% or above; most exceed 10%. Overall, 33% of the respondents said they had engaged in at least one of the top ten behaviours during the previous three years. Among mid-career respondents, this proportion was 38%; in the early-career group, it was 28%. This is a significant difference (χ² = 36.34, d.f. = 1, P < 0.001). For each behaviour where mid- and early-career scientists' percentages differ significantly, the former are higher than the latter.
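To make the reported test concrete, here is a minimal sketch that rebuilds the 2×2 contingency table from the published percentages and usable-response counts (1,768 mid-career, 1,479 early-career) and recomputes the chi-squared statistic; the small discrepancy from the reported χ² = 36.34 comes from rounding the percentages:

```python
# Minimal sketch: recompute the chi-squared test comparing the share of
# mid- vs early-career respondents admitting at least one top-ten behaviour.
# Counts are reconstructed from the published percentages (38% of 1,768
# mid-career; 28% of 1,479 early-career), so the result only approximates
# the reported value.
from scipy.stats import chi2_contingency

mid_total, early_total = 1768, 1479
mid_yes = round(0.38 * mid_total)      # ~672 admitted at least one behaviour
early_yes = round(0.28 * early_total)  # ~414 admitted at least one behaviour

table = [
    [mid_yes, mid_total - mid_yes],        # mid-career: yes / no
    [early_yes, early_total - early_yes],  # early-career: yes / no
]

chi2, p, dof, _ = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.2f}, d.f. = {dof}, P = {p:.2e}")  # chi2 ~ 36, d.f. = 1, P < 0.001
```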

Although we can only speculate about the observed sub-group differences, several explanations are plausible. For example, opportunities to misbehave, and perceptions of the likelihood or consequences of being caught, may change during a scientist's career. Or it may be that these groups received their education, training, and work experience in eras that had different behavioural standards. The mid-career respondents are, on average, nine years older than their early-career counterparts (44 compared with 35 years) and have held doctoral degrees for nine years longer.

Another possible explanation for sub-group differences is the under-reporting of misbehaviours by those in relatively tenuous, early-career positions. Over half (51%) of the mid-career respondents have positions at the associate-professor level or above, whereas 58% of our early-career sample are post-doctoral fellows.

Addressing the problem

Our findings suggest that US scientists engage in a range of behaviours extending far beyond FFP that can damage the integrity of science. Attempts to foster integrity that focus only on FFP therefore miss a great deal. We assume that our reliance on self-reports of behaviour is likely to lead to under-reporting, and therefore to conservative estimates, despite assurances of anonymity. With as many as 33% of our survey respondents admitting to one or more of the top-ten behaviours, the scientific community can no longer remain complacent about such misbehaviour.

Early approaches to scientific misconduct focused on 'bad apples'. Consequently, analyses of misbehaviour were limited to discussions of individual traits and local (laboratory and departmental) contexts as the most likely determinants. The 1992 academy report5 helped shift attention from individuals with 'bad traits' towards general scientific integrity and the 'responsible conduct of research.'

Over the past decade, government agencies and professional associations interested in promoting integrity have focused on responsible conduct in research5, 11, 12. However, these efforts still prioritize the immediate laboratory and departmental contexts of scientists' work, and are typically confined to 'fixing' the behaviour of individuals.

Missing from current analyses of scientific integrity is a consideration of the wider research environment, including institutional and systemic structures. A 2002 report from the Institute of Medicine directed attention to the environments in which scientists work, and recommended an institutional (primarily university-level) approach to promoting responsible research13. The institute's report also noted the potential importance of the broader scientific environment, including regulatory and funding agencies, and the peer-review system, in fostering or hindering integrity, but remained mostly silent on this issue owing to a dearth of evidence.

In our view, certain features of the research working environment may have unexpected and potentially detrimental effects on the ethical dimensions of scientists' work. In particular, we are concerned about scientists' perceptions of the functioning of resource distribution processes. These processes are embodied in professional societies, through peer-review systems and other features of the funding and publishing environment, and through markets for research positions, graduate students, journal pages and grants. In ongoing analyses, not yet published, we find significant associations between scientific misbehaviour and perceptions of inequities in the resource distribution processes in science. We believe that acknowledging the existence of such perceptions and recognizing that they may negatively affect scientists' behaviours will help in the search for new ways to promote integrity in science.

Little attention has so far been paid to the role of the broader research environment in compromising scientific integrity. It is now time for the scientific community to consider what aspects of this environment are most salient to research integrity, which aspects are most amenable to change, and what changes are likely to be the most fruitful in ensuring integrity in science.

Acknowledgments

This research was supported by the Research on Research Integrity Program, an ORI/NIH collaboration, with financial support from the National Institute of Nursing Research and an NIH Mentored Research Scientist Award to R.d.V. We thank the three anonymous reviewers, N. H. Steneck and M. Sheetz for their insightful input and responses to earlier drafts.

References

  1. OSTP Federal Policy on Research Misconduct http://www.ostp.gov/html/001207_3.html (2005).

  2. Teitelbaum, S. L. Nature 420, 739−740 (2002).

  3. Korn, D. Nature 420, 739 (2002).

  4. Freeman, R., Weinstein, E., Marincola, E., Rosenbaum, J. & Solomon, F. Science 294, 2293−2294 (2001).

  5. Panel on Scientific Responsibility and the Conduct of Research (Natl Acad., Washington DC, 1992).

  6. Steneck, N. H. ORI Introduction to the Responsible Conduct of Research (US Government Printing Office, Washington DC, 2004).

  7. Swazey, J. M., Anderson, M. S. & Louis, K. S. Am. Sci. 81, 542−553 (1993).

  8. Ranstam, J. et al. Control Clin. Trials 21, 415−427 (2000).

  9. Geggie, D. J. Med. Ethics 27, 344−346 (2001).

  10. Asch, D. A., Jedrziewski, M. K. & Christakis, N. A. J. Clin. Epidemiol. 50, 1129−1136 (1997).

  11. Committee on Science Engineering and Public Policy On Being a Scientist: Responsible Conduct in Research (Natl Acad., Washington DC, 1995).

  12. Panel on Scientific Responsibility and the Conduct of Research (Natl Acad., Washington DC, 1993).

  13. Institute of Medicine and National Research Council Committee on Assessing Integrity in Research Environments Integrity in Scientific Research: Creating an Environment that Promotes Responsible Conduct (Natl Acad., Washington DC, 2002).

June 27, 2005, Volume 83, Number 26, p. 50

NAUGHTY SCIENTISTS

One-third of scientists in a recent survey admitted to questionable practices; should we be worried?

BY RON DAGANI

Okay, the jig's up. Scientists, it turns out, are neither perfect nor perfectly ethical. Some are flawed human beings, not unlike some businesspeople, journalists, politicians, entertainers, law-enforcement officials, and clergy (just to mention a few other groups that have been touched by scandal in recent times).

Scientists' flaws include engaging in a wide range of questionable research practices. Everyone's heard of cases of gross scientific misconduct, such as fabricating or plagiarizing results; these can end up in the headlines. But there are many other behaviors that can compromise the integrity of research, and those were thrust into the limelight earlier this month when Nature (2005, 435, 737) published the results of a survey in which more than 3,200 scientists fessed up to "behaving badly."

The survey--conducted in 2002 by Brian C. Martinson of HealthPartners Research Foundation, in Minneapolis, and two colleagues at the University of Minnesota, Twin Cities--focused on early- and mid-career U.S. researchers who were supported by the National Institutes of Health. Survey respondents were asked by mail to report whether or not they had engaged in a number of behaviors during the previous three years.

One-third of the respondents said they had engaged in at least one of the 10 most serious behaviors on the list--those that a sampling of university compliance officers regarded as likely to be sanctionable. Among those "top 10" behaviors and the percentage of respondents admitting to them are the following:

  • Falsifying or "cooking" research data (0.3%).

  • Ignoring major rules protecting human subjects (0.3%).

  • Engaging in relationships with students, research subjects, or clients that may be interpreted as questionable (1.4%).

  • Using another's ideas without obtaining permission or giving due credit (1.4%).

  • Failing to present data that contradict one's own previous research (6%).

  • Overlooking others' use of flawed data or questionable interpretation of data (12.5%).

  • Changing the design, methodology, or results of a study in response to pressure from a funding source (15.5%).

Other behaviors (not in the top 10) include inappropriately assigning authorship credit (10%), dropping data points from an analysis based on a gut feeling that they were inaccurate (15.3%), and keeping inadequate records related to research projects (27.5%).

These findings certainly are not good news for the scientific enterprise, but they're not particularly surprising either. As the researchers note in their report: "The modern scientist faces intense competition, and is further burdened by difficult, sometimes unreasonable regulatory, social, and managerial demands. This mix of pressures creates many possibilities for the compromise of scientific integrity" that extend well beyond the official definition of research misconduct, which is "fabrication, falsification, or plagiarism [FFP] in proposing, performing, or reviewing research, or in reporting research results."

Nevertheless, the survey results are worrisome because they reveal a pervasive breakdown in the ethical practice of science. Furthermore, it's possible that these research behaviors are being underreported, with the worst offenders being reluctant to participate, despite assurances of anonymity.

But have these behaviors actually hurt science in a significant way? The survey offers no clue. I doubt that ethics training will have any effect.

Commentators have pointed out other deficiencies of the survey. Many of the questions were worded so vaguely that they could also refer to actions that aren't objectionable. For example, a researcher might decide to modify the design of an experiment to improve it, based on a legitimate suggestion from a funding source; yet on the survey, a "yes" answer to that question would count as admitting to a no-no.

It's also unclear whether ethical lapses and other questionable behaviors have become more common as science has become more competitive. In the absence of retrospective data, I think it's likely that these behaviors have occurred widely for a long time--indeed, for as long as people have tried to get ahead with less work, less attention to detail, or less regard for ethics.

There are some indications, though, that certain kinds of misconduct are on the rise. For instance, chemistry journal editors are seeing a growing number of cases in which authors are trying to publish essentially the same manuscript in different journals, a practice known as duplicate submission, or are plagiarizing their own previously published papers--self-plagiarism.

So are ethical standards collapsing? Or are many scientists--particularly newly minted ones--just unaware of what's right and wrong in the lab? Further research on scientists' behaviors, particularly the factors that motivate them to misbehave, could shed light on these questions.

Also, perhaps it's time to make ethics education more widely available and even mandatory for all budding young scientists. Some universities have long offered courses or classes in research ethics, but not everyone who should be exposed to them is.

In addition, it may be time for the official definition of misconduct in research to be expanded beyond FFP. This is a controversial notion, but it would send a strong signal that other ethical lapses will no longer be tolerated.

With any luck, such initiatives may help to curb the darker side of human nature.

Chemical & Engineering News, ISSN 0009-2347 Copyright © 2005