SCIENCE



Overview


It seems as though irrationality is spreading like a crippling, infectious disease through Western civilization. The late Professor Quigley of Georgetown University may have considered this a purely natural stage in the progression of a civilization, but I am inclined to the opinion that it is not inevitable.

The antidote is to return to teaching science as the great adventure it truly is.


My Views


Did your Science class always put you to sleep after lunch in school?

Then you had a bad teacher!

On the contrary, science can be exciting, and it ought to be taught by people who understand that. After ages of darkness and superstition and sheer guesswork that killed as often as it enlightened, we slowly and painfully developed a means by which the physical secrets of the entire universe can be unlocked: a cosmic key.

Do you remember how you felt when you learned to drive and got your own car? Do you recall the rush at realizing the opportunities that suddenly opened up for you? That is what discovering the scientific method is like for someone inclined to ask why reality works the way it does. It's like being given a key to a chest that contains every amazing thing in the world. It's even better than discovering cool things--it's discovering why they're cool.

Someone asked me recently: "Given what I know about Scientific Studies in general, it isn't all that unusual for research and/or organized studies to prove the original theory, right? And then another study comes out disproving the exact same data?"

This is a good question because it helps to pinpoint the central insight of the scientific method.

It's this: Science is an approximation to Truth. Humans are fallible creatures; we make mistakes. And we're limited creatures; even if we made no mistakes, we still couldn't know enough to understand everything perfectly. Therefore it just isn't possible for any of us to ever hope to understand any phenomenon completely, all the way to the level of perfect Truth.

The best we can do is approximate it. We can't hit 100 percent, but we can try to come as close as possible. And that's what science permits through the application of what is known as the scientific method. No experiment ever "proves" anything, not in the sense of perfect proof. (Quick definitions--an "experiment" is the testing of a hypothesis, and a "hypothesis" is a statement about cause and effect: "Mechanism A is responsible when doing X makes Y happen.")

What a proper experiment does do is provide evidence that a proposed cause-and-effect relationship may exist. If the experiment actually does test the hypothesis expressed (which isn't always the case), then a successful outcome is not absolute proof, but it is what's called a "confirming instance"--an example that, sure enough, after doing X where mechanism A was solely involved, Y happened.

And what the scientific method tells us is that, if when you do an experiment the predicted result happens... and when *I* repeat that experiment and the same thing happens... and when Professor Gdsnrjx in Gdansk gets the same result... and so on for anyone who repeats the experiment... then the summation of all those confirming instances adds up to what's called a "preponderance of the evidence." In other words, the likelihood that a glimmering of Truth has been correctly identified, because anybody who performs the same experiment gets the same result. And that only works if objective reality is in fact independent of subjective belief.
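The arithmetic behind that "preponderance" is worth making concrete. Here is a minimal sketch (mine, not from the essay): if we assume each independent replication has some small chance of succeeding by pure fluke--the 5 percent figure below is just an illustrative, conventional value--then the odds that *every* replication was a fluke shrink geometrically as confirming instances pile up.

```python
def odds_all_flukes(fluke_rate: float, replications: int) -> float:
    """Probability that every one of `replications` independent
    confirming instances happened by chance alone.

    Assumes the replications are genuinely independent -- which is
    exactly what having different labs repeat the experiment buys you.
    """
    return fluke_rate ** replications

# One lab confirming could easily be luck; ten independent labs, far less so.
for n in (1, 3, 10):
    print(f"{n} confirming instance(s): odds of pure fluke = {odds_all_flukes(0.05, n):.2e}")
```

This is why replication by strangers (even hostile strangers) matters more than any single spectacular result: the confidence comes from the multiplication, not from any one experiment.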

Scientists live to demonstrate that.

We can never be entirely sure of the cause-and-effect mechanism we think we've isolated; we can never reach 100 percent. But often enough we can come close enough to 100 percent that, on a practical level (the level of engineering), we can accomplish wonders.

Now, back to the popular notion of "proving" and "disproving." The reason scientists sometimes reach different conclusions is that their hypotheses are not falsifiable (Karl Popper's term). Instead of saying, "Only when I remove all possibility of Mechanism A occurring will doing X fail to make Y happen," too many scientists say, "Doing X makes Y happen."

Obviously, the latter approach is a lot simpler. Scientists are people, too; sometimes they choose the easy road. Where this becomes a problem is when multiple mechanisms are required to produce an observed result. Sure, mechanism A might be necessary for doing X to make Y happen... but maybe mechanism B is necessary, too. (For example, most plants need sunlight to grow, but they also need water.)

Let's say you run an experiment where both mechanisms A and B are present. You do X, and sure enough, Y happens. But when Professor Billy-Bob down the road tries to run that same experiment, mechanism A is present but B (for whatever reason) isn't. When the good Professor does X, Y doesn't happen.
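The trap above can be captured in a toy model (the names A, B, X, and Y follow the essay; the boolean formulation is my own illustration). Y actually requires *both* mechanisms, but the sloppy hypothesis "doing X makes Y happen" never mentions B at all:

```python
def do_x(mechanism_a: bool, mechanism_b: bool) -> bool:
    """Toy model: Y happens only when BOTH mechanisms are present
    at the moment we do X -- like the plant needing sunlight AND water."""
    return mechanism_a and mechanism_b

# Your lab: both mechanisms happen to be present, so the experiment
# "confirms" the sloppy hypothesis that X alone causes Y.
print(do_x(mechanism_a=True, mechanism_b=True))   # Y happens

# Professor Billy-Bob's lab: mechanism B is absent for whatever reason.
# Same procedure, opposite result -- and neither lab knows why.
print(do_x(mechanism_a=True, mechanism_b=False))  # Y doesn't happen
```

Neither lab did anything dishonest; the hypothesis simply failed to isolate a lone mechanism, so two faithful replications disagree.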

Because the hypothesis wasn't falsifiable--because it didn't isolate a lone mechanism to test--scientists who appear to be duplicating each other's experiments can wind up getting different results. Instead of (as Sherlock Holmes more or less put it) "eliminating the impossible," these scientists tried to shortcut all the way to identifying the relevant mechanism in one fell swoop, and so they get inconsistent results.

Then we in the public get to listen to them squabble like spoiled brats. (If you're not recalling the sound and fury over "cold fusion" right now, you should be.)

What, then, is a person to think when reasonable-sounding evidence is presented from some apparently reputable person or group which calls into question or claims to disprove an established idea? Or some other alternative is presented, with "scientific facts" to back its claims?

There are a couple of factors at work here. First and foremost is that "preponderance of the evidence" thing. Whatever you might prefer to believe one way or the other, if the preponderance of the evidence lies on one side, then you can say tentatively that that's the explanation of reality most likely to be correct. It may, in fact, not be correct. But when the available data are few, then it's the best option until more and better data come along.

By contrast, if there's a preponderance of evidence supporting a particular claim, then you don't need to be so tentative. You still need to remember that absolute truth is impossible. That claim might still be wrong. But a preponderance of evidence means that the odds of error are low; it's OK to go ahead and conduct your life as though that claim were proven. Just be willing to consider new evidence if it comes along.

The other factor concerning conflicting claims is trustability. Most serious science goes through what's called "peer review." This consists of a scientist's forming a falsifiable hypothesis; performing one or more experiments that try but fail to disprove that hypothesis; writing up a scholarly article on the research findings; and then allowing that scientist's peers (by which is meant other scientists in the same field, some of whom may hate the first scientist's guts) to try in every way their devious little minds can dream up to show how what the first scientist did was 1) a fluke, 2) bad science, or 3) attributable to some other factor. If the objections and jealous whining of the author's peers aren't sufficiently convincing to the editor of the prestigious science journal to which the article was submitted, and the article fits editorial requirements, then it gets published.

Most science that survives this bruising process is generally (though not always) fairly trustable. Far more so than, say, findings of "telepathy" that are published only in "parapsychology" journals, which don't apply the full-blown general-science peer review process.

So does all this mean you should trust the folks in the lab coats with all you hold dear?

No. You should always maintain a fair but principled skepticism about all claims, including scientific claims. Again, scientists are human: not only can they be swayed by political considerations (as in the disputed claim of "global warming"), they're also fallible. They can screw up just like the rest of us.

But, ultimately, just because you can't always trust scientists to be right doesn't mean that the scientific process itself is flawed. Disagreement among scientists may seem to mean that it's unsafe to believe anything they say. But in fact, it is that very disagreement that makes science healthy and causes it to deserve your respect at the least.

Because if you can get a bunch of objective but egotistical scientists to agree on something, you probably won't go wrong accepting their conclusion. They may still prove to be mistaken... but their insistence on seeing reality as it actually is can only work in your favor.


Resources


WWW


Fox News Science and Technology News: popularized science news.

Science Daily: the latest research news.

Bad Astronomy: ostensibly dedicated to debunking bad astronomy (unfounded beliefs and the like), but it also offers good information on critical thinking in general.

The Observatorium: a public information server for NASA data (cooler than it sounds).


