
vinmantoo

04/14/13 10:52 PM

#159865 RE: iwfal #159858

iwfal, {{The problem is a human one and is, IMO, ubiquitous whenever there is opportunity - which is whenever there is fuzz (aka noise) in the system that allows people to see what they want to see. (and note that some areas inherently have more noise than others - but I would suggest none are noise-free.)}}

I fully agree. However, that isn't the issue in my point at all. The articles JQ1234 posted were about cancer research, but I got the impression that he, or the articles, were extrapolating the 50% "failure" to reproduce results to all areas of science, and that is completely wrong. In one of the articles jq1234 posted, there was a quote from a supposed researcher who said he or she did the experiment 5 or 6 times and got the published answer only once. That IS fraud, and the company should have reported it to the journal and to the lab head, and told them in no uncertain terms that if this was key evidence for the paper, the paper must be retracted. At the very least, that data must be retracted and a detailed review made of all the other primary data in the lab notebooks to see if the rest of the paper holds up.
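To put a rough, purely illustrative number on why that practice is so corrosive (my arithmetic, not anything from the articles): if the effect is truly absent and you quietly repeat the experiment, the odds of getting at least one "significant" run to report climb fast.

# Illustrative sketch only: conventional p < 0.05 threshold, effect truly absent.
# Run the experiment n times and report only the run that "worked":
alpha = 0.05  # per-experiment false positive rate

for n in (1, 5, 6):
    p_hit = 1 - (1 - alpha) ** n
    print(f"{n} attempts: P(at least one 'significant' result) = {p_hit:.1%}")

# Six attempts already gives roughly a 26% chance of a spurious "positive"
# to publish, before any other bias enters the picture.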

The whole tenor of the articles and commentary is toward the idea that systemic fraud is rampant in all of science, and I don't buy it. The issue of signal to noise is essential, but the inherent genetic variability in a model system or cell line can make one lab get the "right" answer and another lab the wrong one. For example, a former colleague of mine was working on transposable elements and how they are regulated during mouse meiosis. He got a clear and powerful effect in his lab's strain background when he did siRNA knockdowns. He decided to check two other mouse strain backgrounds; one didn't show any effect at all and the other was intermediate. If he had only worked in the initial background, where the effect was fully penetrant and robust, would that make him wrong, or the research wrong, if subsequent labs, or even his own, used the other backgrounds and saw intermediate effects or no effect at all? Of course not. It shows that there are genetic modifiers that can provide redundant regulation. He is now trying to map those modifiers to better understand how they affect the phenotype.
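A toy illustration of that point (the penetrance numbers below are invented for the sketch, not my colleague's data): the very same knockdown can look fully penetrant, intermediate, or silent depending only on which background carries the modifiers.

import random

random.seed(1)

# Hypothetical penetrance of the same siRNA knockdown phenotype in three
# strain backgrounds that differ only in their (unmapped) modifier loci:
penetrance = {"background 1": 0.95, "background 2": 0.50, "background 3": 0.02}

n = 40  # animals scored per background
for strain, p in penetrance.items():
    affected = sum(random.random() < p for _ in range(n))
    print(f"{strain}: {affected}/{n} show the phenotype")

# Three labs each working in one of these backgrounds would report three
# different "results" without anyone having done anything wrong.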

I think this is more of what is going on with the cancer research and model organisms the biotech reports were about. There was a study on one cancer cell line, MCF7, that involved collecting MCF7 lines from labs around the world. There was an astonishing amount of variation in what was called MCF7, so much so that it would likely affect results. Sure, it is of great concern and needs to be dealt with, or at least acknowledged. There were even lines that turned out to be HeLa cells, not MCF7 cells. The latter is shoddy science, and in the age of PCR and genomic sequencing it is a disgrace and unacceptable. There might well need to be standard tests applied to cancer cell lines to ensure they are "true" MCF7 cells, or some variant thereof, and of course not restricted to this particular line.
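For what it's worth, the usual approach for human lines is STR (short tandem repeat) profiling against a reference profile. A minimal sketch of the idea follows; the loci names are real STR markers, but the allele values are made up for illustration (not real MCF7 reference data), and the ~80% cutoff is just the commonly cited match threshold.

# Sketch of STR-based cell line authentication: compare a lab's line
# against a reference allele profile and flag mismatches.
REFERENCE_MCF7 = {  # allele values here are hypothetical
    "D5S818": {11, 12}, "D13S317": {11}, "D7S820": {8, 9},
    "TH01": {6}, "vWA": {14, 15}, "TPOX": {9, 12},
}

def match_fraction(sample: dict) -> float:
    """Fraction of reference loci whose alleles the sample shares."""
    shared = sum(1 for locus, alleles in REFERENCE_MCF7.items()
                 if sample.get(locus) == alleles)
    return shared / len(REFERENCE_MCF7)

# A hypothetical lab sample that has drifted (or been swapped) at two loci:
lab_sample = {
    "D5S818": {11, 12}, "D13S317": {11}, "D7S820": {10},
    "TH01": {7}, "vWA": {14, 15}, "TPOX": {9, 12},
}

score = match_fraction(lab_sample)
print(f"match: {score:.0%}")
print("PASS" if score >= 0.8 else "FLAG: possible misidentification or drift")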

Many of these issues are absent or minimized in other model systems where there is more genetic uniformity. Yet even in budding yeast, differences in background can crop up and give different results. A gene deletion can be lethal in one haploid background but leave cells not even very sick in another. It isn't fraud or bad science, but genetic variation rearing its head, and it provides opportunities for additional insights.

vinmantoo

04/14/13 11:11 PM

#159866 RE: iwfal #159858

{{To be honest I am surprised that anyone reading this board regularly would think some particular area was somehow much cleaner than other areas (see JQ's list).}}

I answered this in the other post I just made.


{{And I would further add the concept that if researchers in one particular area think they are immune it may be a good indicator that they are not - skepticism is a powerful tool.}}

I never said any area was immune; one would have to be a fool to think so. I am the king of skeptics. Certain research areas are simply far more prone to variation, and certain key assays have a low signal-to-noise ratio.

I am also fully aware that there are pressures and egos that one must deal with and control. The way I approach reading a published manuscript, and the way I teach students to do it, is to assume it is a complete POS. Look at the data carefully and ask how conclusive it is, whether it has the right controls, and which key controls, if any, are missing. Equally important is figuring out which experiments are missing. I also tell students not to read the authors' conclusions, as they aren't relevant. And I make it clear to technicians and students that I don't care what the data say; I just want them to be as sure as possible of what they are reporting. When I get results that fit my models, that is when I get worried.

I read the book by Richard Feynman. There was a quote that always stuck with me. He said something along the lines of, "You have to be careful, as you are the easiest one to fool."