This post is going to touch on a point I have often thought about: how objective is science really?
At the moment I am reading Unmasking Europa: The Search for Life on Jupiter's Ocean Moon by Richard Greenberg. Like so many of these seemingly academic titles, this one has unexpected depth in revealing the human component of large research programs.
One of the major points the author makes in this book is how science is coloured by research agencies, culture, funding bodies, individuals and the politics of science. In short, the human influence on science is considerable, and it brings the danger of data being misrepresented, of politics seeping into what gets funded, and of accepted doctrines becoming established that are hard to break.
Let me say one thing first up: I worked as a scientist, and I can emphatically say that deliberate misrepresentation of data and wilful falsification are extremely rare, and when discovered, dealt with harshly.
But in terms of experiment design and research priorities, the human factor has a huge, and often subconscious, influence. This is because a human scientist decides on the hypothesis which an experiment will be designed to test.
I’m going to use an example which I found on the web, with the caveat that I have no idea if what I describe below is what actually happened.
The hypothesis: complex living organisms do not survive exposure to vacuum conditions for longer than a few minutes.
We hold this hypothesis based on similar research with multi-celled organisms.
A rigorous design for an experiment to test the effect of vacuum exposure on plants would require a range of exposure times, varying from a non-exposed control treatment up to a level where all the plants are going to die, say half an hour. You could then position extra exposure times at five- or ten-minute intervals, depending on your budget. Or, since you're expecting most deaths to occur after four minutes, you could have treatments at one, two, three, four and five minutes, another at fifteen, and then the outlier at 30. All sorts of designs are possible, but the key is that your expectation (hypothesis) is that plants won't last more than a few minutes.
So what if you were entirely wrong? Oops! All the plants survived, even the ones exposed for 30 minutes. You cannot draw a nice graph of exposure time vs. survival and find the point where most plants die (I am of course not sure that this was the design of the actual experiment, but having worked in science, I'd hazard a fair guess that this would have been the aim).
Oh. Right. Lots of headscratching ensues. This is interesting. Very interesting indeed, but it does mean your experiment is screwed and your results are essentially worthless. Your experiment was designed around a hypothesis that proved so false that the results became statistically meaningless, however interesting. You'll have to do it again with different, and longer, exposure times.
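The design pitfall above can be sketched in a few lines. This is a hypothetical toy, not the actual experiment: the exposure times and survival fractions are invented to illustrate why an all-survived outcome leaves nothing to analyse.

```python
# Hypothetical sketch of the dose-response design described above.
# Exposure times (minutes): a control, a cluster around the expected
# die-off point, and a couple of long outliers.
exposure_minutes = [0, 1, 2, 3, 4, 5, 15, 30]

def can_fit_dose_response(survival_fractions):
    """A curve relating exposure to mortality needs variation in the
    outcome; all-survived (or all-died) data is uninformative."""
    return len(set(survival_fractions)) > 1

# What the hypothesis predicted: survival drops off after ~4 minutes.
expected = [1.0, 1.0, 0.9, 0.6, 0.2, 0.1, 0.0, 0.0]

# What (in this story) actually happened: every plant survived.
observed = [1.0] * len(exposure_minutes)

print(can_fit_dose_response(expected))  # a curve could be fitted
print(can_fit_dose_response(observed))  # no curve: redesign needed
```

The point is structural: because every treatment, including the 30-minute outlier, sat below the true survival threshold, the data contain no variation to fit a curve to, and the only remedy is a new experiment with longer exposures.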
Of course, the failure of an experiment in this way doesn't suggest misrepresentation and affords little opportunity for data-fudging, because the results are provable and measurable. In many sciences, such as astronomy or the social sciences, results are much less provable. For example: does an atmospheric model that fits one particular set of data really explain what is going on? Does a picture of the surface of an icy moon explain what goes on underneath?
No. Human scientists tend to look at these amorphous blobs of data with a hypothesis already in mind. There is a result they believe they'll find, or a result they think will be very newsworthy to find. Pressed for results, funding and performance reviews, the project supervisors may have a vested interest in the results pointing in a certain direction, or it may even be politically undesirable to find anything contradictory.
Or, rather than any of the above subversions, it may be that scientific doctrine has favoured a particular result for many, many years. The established opinion becomes so entrenched that scientists find it hard to interpret results any other way. Impossible? Well, look at the hoops observant 17th-century astronomers went through to make their observations fit the Earth-centric model.
As a more recent example, look at how long it took medical researchers to come up with the idea that stomach ulcers were caused by a bacterium. Sure, surgery fixed the problem, and therefore no one questioned the cause as much as they should have.
That said, the scientists I know do their utmost to be as objective as possible, but they are part of this society, so it is a fallacy to think that science is not coloured by human opinions, by current trends and beliefs and, sadly, by politics.
Hear, hear. Science, as with anything else, is inevitably influenced and affected by the beliefs, desires and biases of its participants, observers and detractors.
Hence the importance of understanding the difference between an observation, a theory, a hypothesis, a model and a law.
We exist in Plato's Cave, and science is one of the key pillars of understanding what really is out there, but as our tools improve and our knowledge increases, we must be willing to let go of old approximations and embrace new ones.
But change is hard, in science as in so much else.
Totally, but it’s those human aspects the general public doesn’t see.
Experiments are often a compromise, and funding is determined by what is the latest hot research area.
I’ll write a story about that 😉
Wow, snap! I’m reading the same book. 🙂 I’m up to page 147. As for falsification and misdirection in science being extremely rare, I have to disagree with you, Patty. I’ve worked for academic institutions as a technical editor and my husband just finished up with a university last year as a research scientist and there were some shenanigans going on, oh yes. People got away with it because of “arrangements” with other universities overseas, because a grant proposal was up for renewal, because a lucrative consulting gig was in the making, because the regional government wanted something splashy to put in their annual report, because the Head of the Department didn’t care, because a university’s standing in the Top 100 depended on a favourable paper, because Our Group has more “cachet” than Their Group … I could go on.
Different countries, different disciplines, same crap.
Misdirection, yes, but straight falsification with ill intent, no.
Maybe it depends what industry you’re in, because in biological sciences, grants are neither lucrative nor newsworthy. They merely keep people who love what they’re doing in a job.