Chapter 3

Experiment

The experiment is the initial test you perform to show that your hypothesis is correct. Technically, it should be a test to show that the explanation you have guessed at does in fact imply the observations you have made so far; but in practice it functions not only to do this but to do what the "verification" mode does: it treats the hypothesis as a little theory, looks at what is implied in it, including what has not so far been observed, and checks to see whether those things (which must be facts if the hypothesis is true) turn out to be the case.

It is sad, in a way, that all of the exciting part of a scientific investigation has by this stage already gone by, and most of what happens from here on in is drudgery--and in fact a kind of dogged attempt to prove that your hypothesis or theory is wrong. The reason is that the theory is of the form "p implies q," as I said, where "p" is the explanation you hope or think is the cause,(1) and "q" is the observed effect that depends on it (or the predictions that follow from it). But by the logic of the hypothetical syllogism, nothing follows from knowing the truth of the consequent, and the only thing there is in the real world is the consequent; your hypothesis is a situation you made up. Even if it is a real situation, you still made it up insofar as it explains the effect in question.

Hence, there is no way to verify the hypothesis. You can only falsify it, by showing that something implied by it is not in fact the case, because a false consequent means either that the "p" doesn't exist or that it isn't connected to "q" by way of implication.
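A minimal sketch makes the asymmetry plain, assuming nothing beyond the ordinary truth table for "p implies q" (the Python below, with its names, is offered purely as an illustration): from "p implies q" and "not q" you may conclude "not p," but from "p implies q" and "q" you may conclude nothing about "p."

```python
from itertools import product

# Brute-force check over the truth table of "p implies q".
def implies(p, q):
    return (not p) or q

rows = list(product([True, False], repeat=2))

# Modus tollens: in every row where (p implies q) holds and q is false, p is false.
modus_tollens_valid = all(not p for p, q in rows if implies(p, q) and not q)

# "Affirming the consequent": in every row where (p implies q) holds and q is true,
# is p true? No -- the row p=False, q=True is a counterexample.
affirming_consequent_valid = all(p for p, q in rows if implies(p, q) and q)

print(modus_tollens_valid)          # True: falsifying the consequent refutes "p"
print(affirming_consequent_valid)   # False: verifying the consequent proves nothing
```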

It isn't quite as cut-and-dried as this, however. In an extra-logical sense, insofar as other explanations than the one you have chosen are unlikely or impossible, the one you picked gains in credibility; and in the limit, it has to be true if it is the only explanation that makes sense. As Sherlock Holmes said somewhere, "When you have ruled out every other explanation, my dear Watson, the one remaining, however improbable, must be the truth."

There is one kind of experiment, called the "gedanken experiment" or "thought experiment," that deserves mentioning. This is one, as the name implies, that isn't actually performed, because the conditions for its performance either aren't actually possible or are so obvious as not to need bothering with. In either case, it is dangerous; in the latter because reality can sometimes be capricious and not behave the way you are convinced it will behave; and in the former because the conditions that make the actual experiment impossible are apt to be extremes, and bodies do strange things in extreme conditions (as witness the surprise of scientists who cooled objects down near absolute zero and found that they suddenly became superconducting).

I can give an illustration of a thought experiment if I treat the other topic I said I would discuss: that of probability and statistics.

The effect connected with probability is that probability deals with what is random, and yet it provides laws governing the random behavior. But "laws" are descriptions of constant, invariant behavior, and what is random is precisely what is not constant. How can there be constant inconstancy?

The first hypothesis you might offer for this is that the behavior is not really random, but only seems so; and probability reveals its non-randomness. But this won't really work. If you take one of a pair of dice (to make things simple), roll it, and get an ace every time, then you examine the die to find out whether it was weighted on the side opposite the one-spot, or the edges were rounded, or it was in some other way altered so that it does not behave randomly. The laws of probability will not work if something is favoring one side over the others as coming out on top; there must be an equal chance for every side to come up each time.(2)

When this is the case, then you can say that in the long run, the die will show an ace one-sixth of the time. Let me explain "in the long run." As Bernard Lonergan mentions in Insight somewhere, it means that there is no systematic divergence from the ratio in question--in this case, between the number of throws and the number of times the one-spot appears on top. There may very well be a "run" of some one side's showing up more often than a sixth of the time; but it will be counterbalanced at some other time by that side's appearing less frequently than the law predicts--and of course, these balances will be random also. The result is that as the number of rolls of the die becomes quite large, the proportion of times each side appears on top will come closer and closer to the probability ratio.
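A minimal simulation makes the point visible, assuming nothing more than a fair six-sided die; the seed and the checkpoints below are arbitrary choices for illustration.

```python
import random

# Simulate a fair six-sided die and watch the fraction of aces drift toward 1/6
# as the number of rolls grows. Seed and checkpoints are arbitrary.
random.seed(1)
aces = 0
checkpoints = {60, 600, 6_000, 60_000, 600_000}
for n in range(1, 600_001):
    if random.randrange(1, 7) == 1:
        aces += 1
    if n in checkpoints:
        print(f"{n:>7} rolls: fraction of aces = {aces / n:.4f}")
# Early figures can stray well away from 0.1667; the later ones settle near it.
```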

But why is that ratio one-sixth with the die? Is this connected with the die's having six faces, only one of which can appear on top at any one roll? Suppose we make this our hypothesis: the fact that the die has six faces, only one of which can show up on top, causes its behavior to be such that a given one will appear on top a sixth of the time.

And here is our thought experiment. We find that with a coin which has two faces, heads will come up half the time. (You can see why this experiment doesn't have to be performed; it's already been done often enough.) And with a dodecahedron, any given side will come up a twelfth of the time, and so on.

We can now formulate a more refined hypothesis: It is the constancy of the structure underlying the random behavior that forces the behavior not to be totally random. This solves the basic effect. It isn't that the behavior is not random; there is nothing that picks which side will come up at any given roll of the die. Still, it isn't the randomness itself which is lawful, but the fact that constraints are placed on it by the structure of what is behaving randomly; and these constraints prevent totally random behavior, leading to the probability ratio between what appears on top and the number of rolls.

If this is true, then the laws of probability are not "laws of chance," but the laws of something constant that prevents chance from being complete randomness.

Let us test this hypothesis with a thought experiment. Imagine now that you have a "die" made of soft plastic, which will be deformed as it hits the table you are rolling it on. You place a spot on it somewhere, and then roll it many times randomly; and at each roll, it ends up having a different number of "faces" from what it had on the last roll, ranging anywhere from one (a sphere or oval), through two (a lens), up to infinity (which would again be something like a sphere, and so would be equivalent to one). Now, what will be the ratio of the spot's coming up on top to the number of rolls? There is no answer, because now everything connected with the rolls is random.

We could test it again by another experiment. Suppose your die was such that at any given throw it could have 4, 5, 6, 7, or 8 faces, but no others. Would the laws of probability apply? Yes. Without trying to figure out the actual ratio (I am terrible at applied mathematics), one time out of five the die will have four faces, and the probability of the face you are interested in coming out on top would be one in four during those times. One time out of five, it will have five faces, and the probability during these times will be one in five--and so on. If you combine all of these according to the laws of probability, you will come up with a number. Again, there is a constraint on absolute randomness because of the constant underlying structure; only in this case, the structure is whatever always keeps the die from having more than this set of numbers of faces.
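Here is that combination worked out, on the simplifying assumption that each of the five face-counts is equally likely and that the face you are interested in is always one of the n faces; the figures are only there to show that a definite number does come out.

```python
from fractions import Fraction

# Law of total probability: average the chance 1/n of the marked face coming up
# over the five equally likely face counts.
face_counts = [4, 5, 6, 7, 8]
p = sum(Fraction(1, n) for n in face_counts) / len(face_counts)
print(p, "≈", float(p))   # 743/4200, roughly 0.177
```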

So our problem is solved and reason is once again vindicated. Now we can state the theory explaining why probability works.

Theory: The laws of probability are due to the fact that some kind of constant structure behind what behaves randomly prevents the behavior from being completely random.

But there are a couple of things to note here. There is no logical necessity (as mathematicians seem to think) connecting, say, the fact that there are six faces on a die and only one can come up at any given roll with the prediction that in the long run the die will show an ace a sixth of the time. It "stands to reason" that this would be the case, but there are a lot of things that "stand to reason" that aren't true. It "stands to reason" that a ten-pound weight will fall down faster than a two-ounce weight; but you won't find it doing this if you discount air resistance and so on.

There is no reason why, even if the die had only six faces all the time, the one-spot's appearance couldn't in fact be totally random; you wouldn't expect it to be, under these circumstances, but there's nothing that makes it a contradiction for it not to be. After all, the ratio predicted is a ratio between the number of events of a certain type and the total number of events, and the ratio you have discovered is a ratio between actualities and possibilities for a single event. And possibilities are just possibilities; there is no necessity for all possibilities eventually to be realized, any more than the fact that a man can have sex means that he can't be celibate forever, or the fact that you could be a philosopher means that you eventually have to be one.

This lays bare the silliness of people who say that if you put enough monkeys banging away at enough typewriters, one of them would eventually type out the whole script of Hamlet, just because one of them could, by chance, do it. The reason it is silly is that it is also possible that this particular combination of letters would never be hit on by any of them (because at any given try, it is possible both to type out the script and not to do so); and so if all possibilities must be realized given an infinite number of tries, then it will eventually be true both that some monkey will do it and that no monkey will do it.

So although it seems reasonable to say that a structural constraint would lead to a probability ratio among behaviors, it isn't positively unreasonable to say that then again it might not. But it turns out that in practice, the theory works.

That is, people have tested it, and found that in the long run dice do behave as probability says they will (which is what keeps casinos in business); and so what "stands to reason" also turns out to be a law of nature. Hence, the laws of probability are basically empirical laws, not strictly mathematical ones. That is, the mathematics prescinds from what actually goes on in the world; but the fact that it applies to the world has to be empirically verified.

Note, by the way, that there is a "law" that also "stands to reason" but in fact isn't verified: the "law of averages." It reasons this way: "Heads on this coin has to come up half the time in the long run. There have just been twenty heads in a row. Therefore, compensation must set in, and in the future it is more than a fifty-fifty chance that tails will come up."

Many is the man who has lost his shirt based on this fallacy. True, in the long run, the probability ratio has to obtain; and this is predictive for the total number of flips of the coin, if that number is very large. But it has no predictive value for the next flip. Why? The answer usually given is that the coin doesn't know that it's had a run of twenty heads. True, but it doesn't know about the total number of flips either; so why does the ratio work out with the total number and not with the next flip?

That is, if the odds against getting twenty heads in a row are, say, a thousand to one, the odds against getting twenty-one heads in a row are even greater--let us say ten thousand to one. Once again I am just putting in figures, but the principle is valid. Then why can't you bet using the much smaller probability based on the twenty-one in a row?

The "answer" is that most of the "unlikelihood" of the twenty-one heads has been used up in the twenty heads in a row. And, the probability theorists tell us (and it is verified again in casinos every day), the likelihood left over for the twenty-first flip after the twenty heads in a row is just exactly fifty-fifty. If you take all twenty-one together, it is enormously unlikely to happen; but if you take twenty-one after having twenty, it is a tossup. Sorry. There is no "law of averages," but there are laws of probability. But note that this is due to the fact that this is the way things actually work; there is no special reason why it can't be the case that a long run of one possibility will not be compensated for in the near future.

So what I am saying is that the universe is so built that the laws of probability work, and the law of averages doesn't.

Note that if this theory of the foundations of probability is true, those who say that the world evolved "just by chance" are dead wrong, given that the laws of probability govern evolution. If it came about just by chance, then there would be no way to apply these laws. No, once the laws of probability operate, they are laws of some nature that prevents the behavior from being totally random; and so evolution as a process is precisely not due "just" to chance but to what it is about the evolving universe that (a) enables it to perform a certain range of behaviors, (b) prevents it from doing anything outside that range, and (c) doesn't pick out which behavior in that range is going to occur at any given time.

The chance element, therefore, is only one out of three necessary conditions for evolution to occur. If the first weren't there, obviously nothing would happen. If the second weren't there, there would be no predictability at all about what had happened and what will happen. Of course, if the third weren't there, then evolution would be totally predictable, à la Laplace's discredited view that if we knew absolutely everything about the positions and motions of the particles of the universe at one instant, we would know the whole past history of the universe and be able to predict everything that will happen in the future.

What I am saying here is that there is no way dogs can evolve into jellyfish (at least I presume that no matter how much a bitch's genes are interfered with, it simply is not possible for her to give birth to a jellyfish). And animals evolved from other animals because chance alterations in the genetic structure were such that the resulting organism could still live and survive--which is a tall order, given how tenuous our hold on this super-high energy level is. So the genetic structure of any organism exercises constraints on what can come from it (in fact, it normally excludes anything but the same form of life, as we saw), as well as making it possible for some new living body to come from it. Just imagine a stage of evolution that resulted in the next generation's being sterile like mules. End of evolution.

Because probability (and its inverse, statistics) plays such a large role in our lives and in science now, people have been mesmerized by the chance element of it and have said that because of this there is no such thing as a "nature" any more, and everything is just random. But probability proves "natures"; it doesn't deny them. It is just that the natures don't directly constrain the action to be one single, inflexible act every time; they constrain the acts, enabling several but no more than a fixed number of possible acts.

One final remark about probability, and then I will talk about statistics. Probability doesn't really have anything to do with likelihood as opposed to certainty. If you recall our discussion of certainty back in Chapter 5 of Section 1 of the first part (1.1.5), I said that certainty and likelihood had nothing to do with probability in the sense of the "laws of probability." (Incidentally, I said there that I would discuss probability "much later." You had no idea how much later, did you?)

Certainty is the knowledge that you are not mistaken, and is the lack of evidence against what you think is true, coupled with some evidence for it. Likelihood (which implies doubt) supposes that there are reasons for saying that what you think is true might not in fact be true; but the reasons for saying that it is true outweigh the reasons against it.

But probability doesn't deal with this. First of all, the laws of probability are certain (given their empirical verification), not likely. There is reason for saying that they are true, and no reason or experience that would indicate that they are not.

But they don't deal with reasons for saying that a given event is a fact; they only deal with the relation between a given actualization and the total number of tries; and that relation is certain, not likely.

You can say that it's fifty per cent likely that your coin will come up heads; but that doesn't really mean more than that there are two possibilities only one of which can be actualized; and you are certain of that. It has no real predictive value for what will happen on the next flip. On the next flip either heads will come up or it won't; and it doesn't make sense to say "half the time" it will come up, because you are talking about this definite flip, not a number of them.

Hence, probability should not be confused with likelihood. It's all right to talk about a "sixty per cent chance of rain tomorrow" in a kind of loose sense, informing the public that there is a weather situation that allows something corresponding to a hundred possibilities, sixty of them rain, and letting them figure out whether the likelihood of rain (it is likely) means that they should take their umbrellas. What I am saying is not that probabilities don't generate likelihoods in people's minds; it is just that the likelihood, strictly speaking, doesn't have a number attachable to it corresponding to the probability ratio. In one sense, if something has a sixty per cent chance of happening, there is reason to expect it; but it would be hard to say that there are sixty reasons out of a hundred for expecting it.

Statistics, then, as the inverse of probability, works this way: First, the scientist notices some correlation between events and the objects involved in the events, and suspects (as we saw in induction) that this is not a chance correlation. Smokers take smoke into their lungs, and there seem to be a lot of lung-cancer patients who are smokers.

The observation then establishes the correlation itself: that indeed smokers are over-represented in the population of lung-cancer victims. That is, the proportion of smokers in the general population is, let us say, one in twenty; but the proportion of smokers among lung-cancer victims is one in ten. These are figures I am making up just to give you the idea.

For the statistician now to assure himself that this correlation is not chance, he must formulate a hypothesis that there is something in the nature of the object in question that allows one to expect the behavior observed.

This step is crucial. If it can't be done, then there's no reason for expecting probability to be at work here; and pure chance can come up with correlations that have no foundation. People, they say, have found very high correlations between such things as the number of reports of hearing the mating call of the male caribou in Washington State and the number of immigrants into the Port of New York. In fact, what the tobacco companies have been arguing for years is that the correlation between smoking and lung cancer is like this.

But of course, it stands to reason that if you take a substance known to be toxic into your lungs, it won't do your lungs any good; and experiments with animals show that the things in tobacco tar produce cancers when rubbed on the animals' skin or forced into their lungs. This is part of the experiment stage: to find what it is about the nature in question that produces the constraint on events that causes the probabilistic correlation.

The other experimental test of the hypothesis consists in showing that the same correlation stands up constantly. For instance, there are fewer smokers now than there were twenty years ago, and more lung-cancer victims than there were twenty years ago. But it is still true that the smokers are over-represented in the population of lung-cancer victims. That is, today, let us say, the proportion of smokers in the general population is one in a hundred; but the proportion of smokers among lung-cancer victims is one in fifty. There are still twice as many smokers, proportionately, in the lung-cancer group as there are in the general population. The only thing the increase in lung-cancer victims and the decrease in smokers proves is that there are more things that give people lung cancer nowadays than there used to be.
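A quick calculation with the figures invented above shows what it is that stays constant; the numbers themselves are only the made-up ones, used for illustration.

```python
from fractions import Fraction

# Over-representation factor: proportion of smokers among lung-cancer victims
# divided by the proportion of smokers in the general population.
scenarios = {
    "twenty years ago": (Fraction(1, 20), Fraction(1, 10)),
    "today":            (Fraction(1, 100), Fraction(1, 50)),
}
for label, (in_population, among_victims) in scenarios.items():
    print(label, "->", among_victims / in_population)
# Both print 2: the correlation holds up even though both raw figures changed.
```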

But this shows why it is important when you use statistics to know what is behind the correlation, so that you can isolate the correlation from all the extraneous factors that have nothing to do with what you are focusing on.

Generally speaking, when you are dealing with statistics, the cause of the effect in question (the correlation) is some very abstract property of a number of different things. For instance, the cause of lung cancer is "a carcinogen taken into lungs that can't overcome it." But there are all kinds of different substances that are carcinogenic and can find their way into people's lungs, and there are, presumably, all kinds of levels of resistance to the activity of various carcinogens. Hence, you would be able to predict from this situation that you couldn't set up a one-for-one correspondence between getting cigarette smoke into your lungs and getting lung cancer (the way you can say that having your head removed is invariably fatal); the relationship is bound to be probabilistic. You have found the nature; but the nature allows several different behaviors, though only a limited range of them.

Theory: The use of statistics is valid when the user knows that there is something about the nature of what has a correlation attached to it that (a) allows several different behaviors, but (b) constrains them to be only these several behaviors.



Notes

1. The cause is the real explanation, you will remember.

2. If this still leaves some randomness, probability can take the weighting into account.