
Do we have too many misbehaving scientists for a safe world? This study says yes. Who profits, and are other industries possibly involved? Do we blame the scientists, or the employers who give them no choice if they want to keep their jobs? Are the results predetermined before the work even begins? Can we believe their results, their expert testimony in court, their presentations, books, and papers? If bad scientists were just a few, that would be one thing, but these studies show there are enough to warrant a serious review of their work, ethics, and credibility. Some scientists seem to manufacture a crisis just to create jobs; that is troubling. History has shown us that many scientists were terminated or arrested because they did not comply. Can scientists no longer be believed at all? American television is now full of 1-800-BAD-DRUG advertisements, and the only consequence companies face for harm or death is money paid, never prison. How far does this go, and what other industries are affected: pesticides, cleaning products, and the more than 100,000 chemicals with questionable toxicology results? The world used to trust scientists, but their work is now called into question, and these studies were conducted by US government agencies. What happened to honest science? It was the foundation of truth in the world, not meant to be messed with.

Education is what remains after one has forgotten what one has learned in school. ...Albert Einstein
We can't solve problems by using the same kind of thinking we used when we created them. ...Albert Einstein
The only thing that interferes with my learning is my education. ...Albert Einstein
The only source of knowledge is experience. ...Albert Einstein
Insanity is doing the same thing over and over again and expecting different results. ...Albert Einstein
I fear the day that technology will surpass our human interaction. The world will have a generation of idiots. ...Albert Einstein
It is difficult to get a man to understand something, when his salary depends upon his not understanding it. ...Upton Sinclair
I've seen scientists who were persecuted, ridiculed, deprived of jobs and income, simply because the facts they discovered led to an inconvenient truth that they insisted on telling. ...Al Gore


Brian C. Martinson1, Melissa S. Anderson2 and Raymond de Vries3

  1. Brian C. Martinson is at the HealthPartners Research Foundation, 8100 34th Avenue South, PO Box 1524, Mailstop 21111R, Minneapolis, Minnesota 55440-1524, USA.
  2. Melissa S. Anderson is at the University of Minnesota, Educational Policy and Administration, 330 Wulling Hall, Minneapolis, Minnesota 55455, USA.
  3. Raymond de Vries is at the University of Minnesota, Center for Bioethics, N504 Boynton, Minneapolis, Minnesota 55455, USA.

To protect the integrity of science, we must look beyond falsification, fabrication and plagiarism, to a wider range of questionable research practices, argue Brian C. Martinson, Melissa S. Anderson and Raymond de Vries.

Serious misbehaviour in research is important for many reasons, not least because it damages the reputation of, and undermines public support for, science. Historically, professionals and the public have focused on headline-grabbing cases of scientific misconduct, but we believe that researchers can no longer afford to ignore a wider range of questionable behaviour that threatens the integrity of science.

We surveyed several thousand early- and mid-career scientists, who are based in the United States and funded by the National Institutes of Health (NIH), and asked them to report their own behaviours. Our findings reveal a range of questionable practices that are striking in their breadth and prevalence (Table 1). This is the first time such behaviours have been analysed quantitatively, so we cannot know whether the current situation has always been the case or whether the challenges of doing science today create new stresses. Nevertheless, our evidence suggests that mundane 'regular' misbehaviours present greater threats to the scientific enterprise than those caused by high-profile misconduct cases such as fraud.


As recently as December 2000, the US Office of Science and Technology Policy (OSTP) defined research misconduct as "fabrication, falsification, or plagiarism (FFP) in proposing, performing, or reviewing research, or in reporting research results"1. In 2002, the Federation of American Societies for Experimental Biology and the Association of American Medical Colleges objected to a proposal by the US Office of Research Integrity (ORI) to conduct a survey that would collect empirical evidence of behaviours that can undermine research integrity, but which fall outside the OSTP's narrow definition of misconduct2, 3. We believe that a valuable opportunity was wasted as a result.

A proper understanding of misbehaviour requires that attention be given to the negative aspects of the research environment. The modern scientist faces intense competition, and is further burdened by difficult, sometimes unreasonable, regulatory, social, and managerial demands4. This mix of pressures creates many possibilities for the compromise of scientific integrity that extend well beyond FFP.

We are not the first to call attention to these issues — debates have been ongoing since questionable research practices and scientific integrity were linked in a 1992 report by the National Academy of Sciences5. But we are the first to provide empirical evidence, based on self-reports from large and representative samples of US scientists, that documents the occurrence of a broad range of misbehaviours.

The few empirical studies that have explored misbehaviour among scientists rely on confirmed cases of misconduct6 or on scientists' perceptions of colleagues' behaviour7, 8, 9, or have used small, non-representative samples of respondents8, 9. Although inconclusive, previous estimates of the prevalence of FFP range from 1% to 2%. Our 2002 survey was based on large, random samples of scientists drawn from two databases that are maintained by the NIH Office of Extramural Research. The mid-career sample of 3,600 scientists received their first research-project (R01) grant between 1999 and 2001. The early-career sample of 4,160 NIH-supported postdoctoral trainees received either individual (F32) or institutional (T32) postdoctoral training during 2000 or 2001.

Getting data

To assure anonymity, the survey responses were never linked to respondents' identities. Of the 3,600 surveys mailed to mid-career scientists, 3,409 were deliverable and 1,768 yielded usable data, giving a 52% response rate. Of the 4,160 surveys sent to early-career scientists, 3,475 were deliverable, yielding 1,479 usable responses, a response rate of 43%.
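As a check on the arithmetic above, a response rate here is usable responses divided by deliverable surveys (not surveys mailed). A minimal sketch reproducing the reported figures:

```python
# Response rate = usable responses / deliverable surveys,
# using the figures reported in the text.
def response_rate(usable: int, deliverable: int) -> float:
    """Return the response rate as a percentage."""
    return 100.0 * usable / deliverable

mid_career = response_rate(1768, 3409)    # 3,600 mailed, 3,409 deliverable
early_career = response_rate(1479, 3475)  # 4,160 mailed, 3,475 deliverable

print(f"mid-career: {mid_career:.0f}%, early-career: {early_career:.0f}%")
# → mid-career: 52%, early-career: 43%
```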

Our response rates are comparable to those of other mail-based surveys of professional populations (such as a 54% mean response rate from physicians10). But our approach certainly leaves room for potential non-response bias; misbehaving scientists may have been less likely than others to respond to our survey, perhaps for fear of discovery and potential sanction. This, combined with the fact that there is probably some under-reporting of misbehaviours among respondents, would suggest that our estimates of misbehaviour are conservative.

Our survey was carried out independently of, but at around the same time as, the ORI proposal. The specific behaviours we chose to examine arose from six focus-group discussions held with 51 scientists from several top-tier research universities, who told us which misbehaviours were of greatest concern to them. The scientists expressed concern about a broad range of specific behaviours that may affect the integrity of research.

To affirm the serious nature of the behaviours included in the survey, and to separate potentially sanctionable offences from less serious behaviours, we consulted six compliance officers at five major research universities and one independent research organization in the United States. We asked these compliance officers to assess the likelihood that each behaviour, if discovered, would get a scientist into trouble at the institutional or federal level. The first ten behaviours listed in Table 1 were seen as the most serious: all the officers judged them as likely to be sanctionable, and at least four of the six officers judged them as very likely to be sanctionable. Among the other behaviours are several that may best be classified as carelessness (behaviours 14 to 16).

Admitting to misconduct

Survey respondents were asked to report in each case whether or not ('yes' or 'no') they themselves had engaged in the specified behaviour during the past three years. Table 1 reports the percentages of respondents who said they had engaged in each behaviour. For six of the behaviours, reported frequencies are under 2%, including falsification (behaviour 1) and plagiarism (behaviour 5). This finding is consistent with previous estimates derived from less robust evidence about misconduct. However, the frequencies for the remaining behaviours are 5% or above; most exceed 10%. Overall, 33% of the respondents said they had engaged in at least one of the top ten behaviours during the previous three years. Among mid-career respondents, this proportion was 38%; in the early-career group, it was 28%. This is a significant difference (χ² = 36.34, d.f. = 1, P < 0.001). For each behaviour where mid- and early-career scientists' percentages differ significantly, the former are higher than the latter.
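The significance test above is a standard Pearson chi-squared test of independence on a 2×2 table. The sketch below reconstructs approximate counts from the reported percentages and sample sizes (38% of 1,768 mid-career and 28% of 1,479 early-career respondents); these reconstructed counts are our assumption, not the authors' raw data, so the statistic comes out close to, but not exactly, the reported 36.34.

```python
from math import erfc, sqrt

# Counts reconstructed from the reported percentages and sample sizes
# (an approximation; the authors' raw counts are not published here).
mid_yes, mid_total = round(0.38 * 1768), 1768      # ≈ 672 of 1,768
early_yes, early_total = round(0.28 * 1479), 1479  # ≈ 414 of 1,479

# 2x2 contingency table: rows = career stage, columns = yes / no
observed = [
    [mid_yes, mid_total - mid_yes],
    [early_yes, early_total - early_yes],
]

# Pearson's chi-squared test of independence (no continuity correction):
# chi2 = sum over cells of (observed - expected)^2 / expected,
# where expected = row_total * column_total / grand_total.
row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand = sum(row_totals)

chi2 = sum(
    (observed[i][j] - row_totals[i] * col_totals[j] / grand) ** 2
    / (row_totals[i] * col_totals[j] / grand)
    for i in range(2)
    for j in range(2)
)

# With 1 degree of freedom, the p-value has a closed form via erfc.
p_value = erfc(sqrt(chi2 / 2))

print(f"chi2 = {chi2:.2f}, p = {p_value:.2e}")
```

Running this yields a chi-squared statistic of roughly 36.3 with a vanishingly small p-value, consistent with the reported result.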

Although we can only speculate about the observed sub-group differences, several explanations are plausible. For example, opportunities to misbehave, and perceptions of the likelihood or consequences of being caught, may change during a scientist's career. Or it may be that these groups received their education, training, and work experience in eras that had different behavioural standards. The mid-career respondents are, on average, nine years older than their early-career counterparts (44 compared with 35 years) and have held doctoral degrees for nine years longer.

Another possible explanation for sub-group differences is the under-reporting of misbehaviours by those in relatively tenuous, early-career positions. Over half (51%) of the mid-career respondents have positions at the associate-professor level or above, whereas 58% of our early-career sample are post-doctoral fellows.

Addressing the problem

Our findings suggest that US scientists engage in a range of behaviours extending far beyond FFP that can damage the integrity of science. Attempts to foster integrity that focus only on FFP therefore miss a great deal. We assume that our reliance on self reports of behaviour is likely to lead to under-reporting and therefore to conservative estimates, despite assurances of anonymity. With as many as 33% of our survey respondents admitting to one or more of the top-ten behaviours, the scientific community can no longer remain complacent about such misbehaviour.

Early approaches to scientific misconduct focused on 'bad apples'. Consequently, analyses of misbehaviour were limited to discussions of individual traits and local (laboratory and departmental) contexts as the most likely determinants. The 1992 academy report5 helped shift attention from individuals with 'bad traits' towards general scientific integrity and the 'responsible conduct of research.'

Over the past decade, government agencies and professional associations interested in promoting integrity have focused on responsible conduct in research5, 11, 12. However, these efforts still prioritize the immediate laboratory and departmental contexts of scientists' work, and are typically confined to 'fixing' the behaviour of individuals.

Missing from current analyses of scientific integrity is a consideration of the wider research environment, including institutional and systemic structures. A 2002 report from the Institute of Medicine directed attention to the environments in which scientists work, and recommended an institutional (primarily university-level) approach to promoting responsible research13. The institute's report also noted the potential importance of the broader scientific environment, including regulatory and funding agencies, and the peer-review system, in fostering or hindering integrity, but remained mostly silent on this issue owing to a dearth of evidence.

In our view, certain features of the research working environment may have unexpected and potentially detrimental effects on the ethical dimensions of scientists' work. In particular, we are concerned about scientists' perceptions of the functioning of resource distribution processes. These processes are embodied in professional societies, through peer-review systems and other features of the funding and publishing environment, and through markets for research positions, graduate students, journal pages and grants. In ongoing analyses, not yet published, we find significant associations between scientific misbehaviour and perceptions of inequities in the resource distribution processes in science. We believe that acknowledging the existence of such perceptions and recognizing that they may negatively affect scientists' behaviours will help in the search for new ways to promote integrity in science.

Little attention has so far been paid to the role of the broader research environment in compromising scientific integrity. It is now time for the scientific community to consider what aspects of this environment are most salient to research integrity, which aspects are most amenable to change, and what changes are likely to be the most fruitful in ensuring integrity in science.




Ranstam J. Controlled Clinical Trials 2000 Oct;21(5):415-427.

The characteristics of scientific fraud and its impact on medical research are in general not well known. However, the interest in the phenomenon has increased steadily during the last decade. Biostatisticians routinely work closely with physicians and scientists in many branches of medical research and have therefore unique insight into data. In addition, they have methodological competence to detect fraud and could be expected to have a professional interest in valid results. Biostatisticians therefore are likely to provide reliable information on the characteristics of fraud in medical research. The objective of this survey of biostatisticians, who were members of the International Society for Clinical Biostatistics, was to assess the characteristics of fraud in medical research. The survey was performed between April and July 1998. The participation rate was only 37%. We report the results because a majority (51%) of the participants knew about fraudulent projects, and many did not know whether the organization they work for has a formal system for handling suspected fraud or not. Different forms of fraud (e.g., fabrication and falsification of data, deceptive reporting of results, suppression of data, and deceptive design or analysis) had been observed in fairly similar numbers. We conclude that fraud is not a negligible phenomenon in medical research, and that increased awareness of the forms in which it is expressed seems appropriate. Further research, however, is needed to assess the prevalence of different types of fraud, as well as its impact on the validity of results published in the medical literature.

Geggie D. Journal of Medical Ethics 2001 Oct;27(5):344-346.

OBJECTIVE: To determine the prevalence of, and attitudes towards, observed and personal research misconduct among newly appointed medical consultants. DESIGN: Questionnaire study. SETTING: Mersey region, United Kingdom. PARTICIPANTS: Medical consultants appointed between Jan 1995 and Jan 2000 in seven different hospital trusts (from lists provided by each hospital's personnel department). MAIN OUTCOME MEASURES: Reported observed misconduct, reported past personal misconduct and reported possible future misconduct. RESULTS: One hundred and ninety-four replies were received (a response rate of 63.6%); 55.7% of respondents had observed some form of research misconduct; 5.7% of respondents admitted to past personal misconduct; 18% of respondents were either willing to commit or unsure about possible future research misconduct. Only 17% of the respondents reported having received any training in research ethics. Anaesthetists reported a lower incidence of observed research misconduct (33.3%) than the rest of the respondents (61.5%). CONCLUSION: There is a higher prevalence of observed and possible future misconduct among newly appointed consultants in the UK than in the comparable study of biomedical trainees in California. Although there is a need for more extensive studies, this survey suggests that there is a real and potential problem of research misconduct in the UK.


Acknowledgments

This research was supported by the Research on Research Integrity Program, an ORI/NIH collaboration, with financial support from the National Institute of Nursing Research and an NIH Mentored Research Scientist Award to R.d.V. We thank the three anonymous reviewers, Nick N. Steneck and M. Sheetz for their insightful input and responses to earlier drafts.



                                





