When numerous people over thousands of years observe something like the law of gravity, we tend to believe that it is true with very high probability. This type of reasoning could be summarized by the principle of induction: if an instance X is observed that is consistent with a theory T, then the probability that T is true increases.
Now comes the problem. The statement "all ravens are black" is logically equivalent to the statement "all non-black-things are non-ravens". If we observe a red apple, that is consistent with that statement. A red apple is a non-black-thing, and when we examine it, we observe that it is a non-raven. So by the principle of induction, observing a red apple should increase our belief that all ravens are black! This problem has been humorously summarized as:
    I never saw a purple cow
    But if I were to see one
    Would the probability ravens are black
    Have a better chance to be one?
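Spelled out symbolically (writing Raven(x) and Black(x) for the obvious predicates), the equivalence that drives the paradox is ordinary contraposition:

<math>\forall x\,(\mathrm{Raven}(x) \rightarrow \mathrm{Black}(x)) \;\equiv\; \forall x\,(\lnot\mathrm{Black}(x) \rightarrow \lnot\mathrm{Raven}(x))</math>

Both statements rule out exactly the same objects, namely non-black ravens, which is why they must stand or fall together.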
Philosophers have offered many solutions to this violation of intuition. The American logician Nelson Goodman has suggested adding restrictions to our reasoning, such as never considering an instance as support for "All P are Q" if it would also support "No P are Q".
Other philosophers have questioned the "principle of equivalence". Perhaps the red apple should increase our belief in the theory "all non-black-things are non-ravens" without increasing our belief that "all ravens are black". That suggestion has been criticized, though, on the grounds that you cannot hold different degrees of belief in two statements when you know that they are either both true or both false.
Some philosophers have argued that it's only our intuition that is flawed. Observing a red apple really does increase the probability that all ravens are black! After all, if someone gave you all the non-black things in the universe, and you noticed that there were no ravens in the collection, then you could indeed conclude that all ravens are black. The example only seems counterintuitive because the set of non-black-things is far, far larger than the set of ravens. Thus observing one more non-black-thing which is not a raven should make a tiny difference to our degree of belief in the proposition compared to the difference made by observing one more raven which is black.
There is an alternative to the "principle of induction" described above.
Let X represent an instance of theory T, and I represent all of our background information.
Let <math>\Pr(\bullet | \circ)</math> represent the probability of <math>\bullet</math> given <math>\circ</math>. Then

<math>\Pr(T | X, I) = \Pr(T | I)\,\frac{\Pr(X | T, I)}{\Pr(X | I)}</math>
This principle is known as "Bayes' theorem". It is foundational to the mathematics of probability and statistics. When scientists publish analyses of experimental results and calculate that they are "statistically significant", they are implicitly using this principle. It could be argued that this principle is a better representation of how scientists actually reason than the original "principle of induction" described above.
Using this principle, the paradox does not arise. If you ask someone to select an apple at random and show it to you, then the probability of seeing a red apple is independent of the colors of ravens. The numerator will equal the denominator, the ratio will equal one, and the probability will remain unchanged. Seeing a red apple will not affect your belief about whether all ravens are black.
If you ask someone to select a non-black-thing at random, and they show you a red apple, then the numerator will exceed the denominator by an extremely small amount. Seeing the red apple will only slightly increase your belief that all ravens are black. You'll have to see almost every non-black-thing in the universe (and see they're all non-ravens) before your belief in "all ravens are black" increases appreciably. In both cases, the result agrees with intuition.
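To make the two cases concrete, here is a small numerical sketch of the calculation. The specific figures (a universe containing a billion non-black things, a 50/50 prior, and a rival hypothesis on which exactly one raven is non-black) are illustrative assumptions, not part of the argument above.

# A toy Bayesian model of the two sampling scenarios described above.
# All specific numbers here are illustrative assumptions.

N_NON_BLACK = 1_000_000_000    # assumed count of non-black things in the universe
K_NON_BLACK_RAVENS = 1         # under the rival hypothesis, this many ravens are not black

prior_all_black = 0.5          # Pr(T | I): prior belief that all ravens are black
prior_rival = 1 - prior_all_black

def posterior(lik_all_black, lik_rival):
    """Bayes' theorem: Pr(T | X, I) = Pr(T | I) * Pr(X | T, I) / Pr(X | I)."""
    evidence = prior_all_black * lik_all_black + prior_rival * lik_rival   # Pr(X | I)
    return prior_all_black * lik_all_black / evidence

# Case 1: an apple chosen at random turns out to be red.
# Whatever the chance of that observation is, it is the same under both
# hypotheses, so the ratio Pr(X | T, I) / Pr(X | I) is one and belief is unchanged.
p_red_apple = 0.3              # arbitrary value; identical under both hypotheses
print(posterior(p_red_apple, p_red_apple))   # 0.5

# Case 2: a non-black thing chosen at random turns out to be a non-raven.
# Certain if all ravens are black; very slightly less than certain otherwise.
lik_rival = (N_NON_BLACK - K_NON_BLACK_RAVENS) / N_NON_BLACK
print(posterior(1.0, lik_rival))             # ~0.50000000025, a minuscule increase

Holding the total number of non-black things fixed under both hypotheses is a simplification made only to keep the sketch short.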