Thursday, February 25, 2021

Critical Thinking - a Simple and Concise Look full series

 

Critical Thinking - a Simple and Concise Look

I have written a few articles about critical thinking, read a few books on the topic, and gone through numerous other articles and videos.

I have found that bits and pieces from many subjects are important and necessary parts of critical thinking. Despite wanting to be brief, I ended up including eight different topics in my series of articles on Cornerstones of Critical Thinking because the ideas are so crucial to the subject.

I have described several videos by Richard Paul in detail because the information he goes over is so vital to good critical thinking.


I had resigned myself to the fate of having to dive into many related subjects to begin to tackle critical thinking. It just involves so much.


I was delighted to discover one book that puts forward much of the most important and essential information on critical thinking in very few words, with a thorough yet understandable and consistent model.


I enjoyed the book and the presentation of its ideas and model so much that I decided to write this series of posts describing the work, to encourage others to read the book, to consider the ideas presented, and to explore the subject of critical thinking as a serious study. I think it's well worth the effort.


The book I am describing is How to Think about Weird Things: Critical Thinking for a New Age (Fifth Edition) by Theodore Schick, Jr. and Lewis Vaughn.


The book was recommended, by the way, by David Kyle Johnson when he appeared as a guest on The Sensibly Speaking podcast with Chris Shelton.


I have to include some of the ideas used to lay groundwork before getting to the model itself, but compared to other subjects this is quite brief and straightforward.


I want to start with a central question in critical thinking - why? Why do we believe one idea and not another? 


I have explored this idea in great depth in other posts and series including The Knowledge Illusion and Cornerstones of Critical Thinking.


The Knowledge Illusion full series 1 - 16


Cornerstones of Critical Thinking 1 - 8 Introducti...


The authors describe it this way:


"Without good whys, humans have no hope of understanding all that we fondly call weird - or anything else, for that matter. Without good whys, our beliefs are simply arbitrary, with no more claim to knowledge than the random choice of a playing card. Without good whys to guide us, our beliefs lose their value in a world where beliefs are already a dime a dozen." (Page 2)


"The big question then is why? Why do you believe or disbelieve? Belief alone - without good whys - can't help us get one inch closer to the truth. A hasty rejection or acceptance of a claim can't help us tell the difference between what's actually likely to true (or false) and what we merely want to be true (or false). Beliefs that do not stand on our best reasons and evidence simply dangle in thin air, signifying nothing except our transient feelings or personal preferences." (Page 3)


"Aliens, spirits, miracle cures, mind over matter, life after death: wonders all. The world would be a more wonderful place, if these things existed. We wouldn't be alone in the universe, we would have more control over our own lives, and we would be immortal. Our desire to live in such a world undoubtedly plays a role in the widespread belief in these things. But the fact that we would like something to be true is no good reason to believe that it is. To get to the truth of the matter we must go beyond wishful thinking to critical thinking. We must learn to set aside our prejudices and preconceptions and examine the evidence fairly and impartially. Only then can we distinguish reality from fantasy." (Page 12)


"If we can't tell the difference between reasonable and unreasonable claims, we become susceptible to the claims of charlatans, scoundrels, and mountebanks." (Page 13)


(Mountebank: 1: a person who sells quack medicines from a platform. 2: a boastful, unscrupulous pretender; a charlatan. Merriam-Webster)


"The historian Thomas Kuhn, in his seminal work The Structure of Scientific Revolutions, has shown that science advances only by recognizing and dealing with anomalies (phenomena that don't seem to obey known laws). According to Kuhn, all scientific investigation takes place within a paradigm, or theoretical framework, that determines what questions are worth asking and what methods should be used to answer them. From time to time, however certain phenomena are discovered that don't fit into the established paradigm, that is they can't be explained by the current theory. At first, as in the case of meteorites, the scientific community is forced to abandon the old paradigm and adopt a new one. In such a case, the scientific community is said to have undergone a paradigm shift." (Page 15 -16)


"LOGICAL POSSIBILITY VERSUS PHYSICAL POSSIBILITY


Although it's fashionable to claim that anything is possible, such a claim is mistaken, for there are some things that can't possibly be false, and others that can't possibly be true. The former - such as "2 + 2 = 4," "All bachelors are unmarried," and "Red is a color" - are called necessary truths, while the latter - such as "2 + 2 = 5," "All bachelors are married," and "Red is not a color" - are called necessary falsehoods. The Greek philosopher Aristotle (Plato's pupil) was the first to systematize our knowledge of necessary truths. The most fundamental of them - the ones upon which all other truths rest - are often called the laws of thought. They are:


The law of noncontradiction: Nothing can both have a property and lack it at the same time.


The law of identity: Everything is identical to itself.


The law of the excluded middle: For any particular property, everything either has it or lacks it.


These principles are called the laws of thought because without them thought - as well as communication - would be impossible. In order to think or communicate, our thoughts and sentences must have a specific content; they must be about one thing rather than another. If the law of noncontradiction didn't hold, there would be no way to distinguish one thought or sentence from another. Whatever was true of one would be true of the other. Every claim would be equally true (and false). Thus, those who deny the law of noncontradiction can't claim that their position is superior to that of those who accept that law.

One of the most effective techniques of refuting  a position is known as reductio ad absurdum: reduction to absurdity. If you can show that a position has absurd consequences, you've provided a powerful reason for rejecting it. The consequences of denying the law of noncontradiction are about as absurd as they get. Any position that makes thought and communication theoretically impossible is, to say the least, suspect. Aristotle, in Book IV of the Metaphysics, put the point this way:


If all are alike both wrong and right, one who is in this condition will not be able either to speak or to say anything intelligible; for he says at the time both "yes" and "no." And if he makes no judgement but "thinks" and "does not think," indifferently, what difference will there be between him and a vegetable? (end quote)


What difference indeed. Without the law of noncontradiction, we can't believe things to be one way rather than another. But if we can't believe things to be one way rather than another, we can't think at all.


Logic is the study of correct thinking. As a result, the laws of thought are often referred to as the laws of logic. Anything that violates these laws is said to be logically impossible, and whatever is logically impossible can't exist. We know, for example, that there are no round squares, no married bachelors, and no largest number because such things violate the law of noncontradiction - they attribute both a property and its negation to a thing and are thus self-contradictory. The laws of thought, then, determine the bounds of the real. Whatever is real must obey the law of noncontradiction. That is why the great German logician Gottlob Frege called logic "the study of the laws of the laws of science." The laws of science must obey the laws of logic. Thus, von Daniken is mistaken. Some things are logically impossible, and whatever is logically impossible cannot exist." (Page 16-17)
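The laws of thought can be made concrete with a quick truth-table check. The little Python sketch below is my own illustration, not something from the book, and the function name and examples are made up. It simply runs a claim through every possible assignment of truth values, showing that a self-contradiction comes out false in every case while an instance of the excluded middle comes out true in every case.

```python
from itertools import product

# My own toy illustration (not from the book): evaluate a propositional claim
# under every possible assignment of truth values to its variables.
def truth_values(claim, num_vars):
    return [claim(*assignment) for assignment in product([True, False], repeat=num_vars)]

# Law of noncontradiction: "p and not p" is false no matter what - a necessary falsehood.
print(truth_values(lambda p: p and not p, 1))   # [False, False]

# Law of the excluded middle: "p or not p" is true no matter what - a necessary truth.
print(truth_values(lambda p: p or not p, 1))    # [True, True]
```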


"Psychokinesis, the ability to move external objects with the power of one's mind, seems to be physically impossible because it seems to imply the existence of an unknown force. Science has identified only two forces whose effects can be felt over long distances: electromagnetism and gravity. The brain, however, is not capable of producing enough of either of these forces to directly affect objects outside of the body. So psychokinesis seems to violate the laws of science.


The notion that we have been visited by ancient astronauts or aliens from outer space seems technologically impossible because the amount of energy needed to travel to the stars is astronomical. In Beyond Star Trek, physicist Lawrence Krauss considers some of the practical problems associated with interstellar travel. A spaceship traveling to Alpha Centauri (the nearest star) at 25 percent the speed of light and using conventional rocket fuel, he claims, would have to carry more fuel than is available from all the matter in the universe.


A spaceship using an unconventional propulsion system like warp drive would require a generator capable of producing energy equivalent to 10 billion times the mass of the visible universe. So if Krauss is right, interstellar travel will probably forever be beyond our technological capabilities.


Contrary to what Von Daniken would have us believe, it is possible to apply the word impossible to some things. Some things are logically impossible, others are  physically impossible, and still others are technologically impossible. And as Krauss's example of interstellar travel shows, even if something is physically possible, it doesn't necessarily follow that it will ever be actual. The principle that should guide our thinking in these matters, then is this:


Just because something is logically or physically possible doesn't mean that it is, or ever will be, actual.


If logical or physical possibility were grounds for eventual actuality, we could look forward to a world containing moon-jumping cows or egg-laying bunnies. To determine whether something is actual, we have to examine the evidence in its favor.


There are those, however, who measure the credibility of a claim not in terms of the evidence for it, but in terms of the lack of evidence against it. They argue that since there is no evidence refuting their position, it must be true. Although such arguments have great psychological appeal, they are logically fallacious. Their conclusions don't follow from their premises because a lack of evidence is no evidence at all. Arguments of this type are said to commit the fallacy of appeal to ignorance. Here are some examples:

No one has shown that Jones is lying. Therefore he must be telling the truth.


No one has shown that there are no ghosts. Therefore they must exist.


No one has shown that ESP is impossible. Therefore it must be possible.


All a lack of evidence shows is our ignorance; it doesn't provide a reason for believing anything." (Page 20-21)


"If a lack of evidence against a claim actually constituted evidence for it, all sorts of weird claims would be well founded. For example, the existence of mermaids, unicorns, and centaurs - not to mention Bigfoot, the Loch Ness monster, and the abominable snowman - would be beyond question. Unfortunately, substantiating A claim is not that easy. The principle here is this:

Just because a claim hasn't been conclusively refuted doesn't mean that it is true.


A claim's truth is established by the amount of evidence in its favor, not by the lack of evidence against it.


In addition, the strategy of placing the burden of proof on the non-believer is unfair in so far as it asks him to do the impossible, namely, prove a universal negative. A universal negative is a claim to the effect that nothing of a certain sort exists. Suppose it's claimed that there are no white ravens. In support of this claim, suppose it's pointed out that no one has ever reported seeing a white raven. From this it doesn't follow that there are no white ravens, for no one may have looked in the right place. Or if somebody saw one, it may not have been reported. To prove a universal negative, you would have to be able to exhaustively investigate all of time and space. Since none of us can do that, it's unreasonable to demand it of anyone. Whenever someone proposes something novel - whether it be a policy, a fact, or a theory - the burden of proof is on her to provide reasons for accepting it.


It's not only true believers who commit the fallacy of appeal to ignorance, however. Skeptics often argue like this: No one has proven that ESP exists; therefore it doesn't. Again, this is fallacious reasoning; it's an attempt to get something for nothing. The operative principle here is the converse of the one cited above:


Just because a claim hasn't been conclusively proven doesn't mean it is false.

Even if no one has yet found a proof of ESP, we can't conclude that none will ever be found. Someone could find one tomorrow. So even if there is no good evidence for ESP, we can't claim that it doesn't exist. We can claim, however, that there is no compelling reason for thinking that it does exist. " (Page 21-22)


"Just because you can't explain something doesn't mean that it's supernatural. " (Page 24)


SUMMARY


It is not the case that anything is possible, as some people claim. Anything that violates the laws of logic is said to be logically impossible, and whatever is logically impossible can't exist. Such things as round squares and married bachelors are logically impossible, for they attribute both a  property and its negation to a thing and are therefore self-contradictory. Many extraordinary things such as ESP, alien abduction, and out-of-body experiences are logically possible - they are not self-contradictory. But if they violate the laws of science, they are physically impossible. Anything that is inconsistent with the laws of science, or nature, is physically impossible; and visitation by aliens from outer space seems to be technologically impossible. The principle to keep in mind about such things is that just because something is logically or physically possible doesn't mean that it is, or ever will be, actual.


 We must approach claims of physical impossibility with caution, for it isn't phenomena themselves that contradict physical law, but rather our theories about them - and our theories may be mistaken. " (Page 31)



Critical Thinking - a Simple and Concise Look 2

 "Arguments, Good, Bad, and Weird


The central focus of critical thinking is formulation and evaluation of arguments - and this is true whether the subject matter is ordinary or as weird as can be. Usually when we are doing critical thinking, we are trying either to devise arguments or to assess them. We are trying either (1) to demonstrate that a claim, or proposition, is true or (2) to determine whether in fact a claim is true. In either case, if we are successful, we are likely to increase our knowledge and expand our understanding - which is, after all, the main reason we use critical thinking in the first place." (Page 35)


"As noted earlier, we are entitled to believe a claim when we have good reasons to believe it. The reasons for accepting A claim are themselves stated as claims. The combination of claims - a claim (or claims) supposedly giving reasons for accepting another claim is known as an argument. Or to put it another way, when claims (reasons) provide support for another claim, we have an argument.


People sometimes use the word argument to refer to a quarrel or verbal fight. But this meaning has little to do with critical thinking. In critical thinking, an argument is defined as above - reasons supporting a claim. 


To be more precise, claims (or reasons) intended to support another claim are known as premises. The claim that the premises are intended to support is known as the conclusion." (Page 36)


"There are also different kinds of arguments. Arguments can be either deductive or inductive. Deductive arguments are intended to provide conclusive support for their conclusions. Inductive arguments are intended to provide probable support for their conclusions. A deductive argument that succeeds in providing conclusive support is said to be valid. A deductive argument that fails to provide such support is said to be invalid. A valid deductive argument has this characteristic: If its premises are true, its conclusion must be true. In other words, it is impossible for a deductively valid argument to have true premises and a false conclusion. Notice that the term valid as used here is not a synonym for true. Valid refers to a deductive argument's logical - it refers to to an argument structure that guarantees the truth of the conclusion if the premises are true. If an argument is valid, we say that the conclusion follows from the premises. Because a deductively valid argument guarantees the truth of the conclusion if the premises are true, it is said to be truth-preserving.


Here's a classic deductively valid argument:


All men are mortal.

Socrates is a man.

Therefore, Socrates is mortal.

And here's another one:

If you have scars on your body, then you have been abducted by space aliens. You obviously do have scars on your body. Therefore, you have been abducted by space aliens.


Notice that in each of these, if the premises are true, the conclusion must be true. If the premises are true, the conclusion cannot possibly be false. This would be the case regardless of the order of the premises and regardless of whether the conclusion came first or last.


Now here are deductively invalid versions of these arguments:


If Socrates is a dog, he is mortal.

Socrates is not a dog.

Therefore, Socrates is not mortal.

If you have scars on your body, then you have been abducted by space aliens. You have been abducted by space aliens. Therefore, you have scars on your body.


These arguments are invalid. In each, the conclusion does not follow from the premises." (Page 39-40)


"An inductive argument that succeeds in giving probable support to its conclusion is said to be strong. An inductive argument that fails to do this is said to be weak. In an inductively strong argument, if the premises are true, the conclusion is probably or likely to be true. The logical structure structure of an inductively strong argument can only render the conclusion probably true if the premises are true. Unlike a deductively valid argument, an inductively strong argument cannot guarantee the truth of the conclusion if the premises are true. So inductive arguments are not truth-preserving.


Here are two inductively strong arguments:


If Socrates is a man, he is most likely mortal.

He is a man.

Therefore, Socrates is probably mortal.


If you have scars on your body, there is a 90 percent chance that you have been abducted by space aliens. You have scars on your body. So you have probably been abducted by space aliens.


Look at the first inductive argument. Notice that it's possible for the premises to be true and the conclusion false. After all, the first premise says that there is no guarantee that Socrates is mortal just because he's a man. He's only likely to be mortal. Also, in the second argument, there is no guarantee that you have been abducted by space aliens if you have scars on your body. If you have scars on your body, there's still a 10 percent chance that you have not been abducted.


Good arguments must be valid or strong - but they also must have true premises. A good argument is one that has the proper logical structure and true premises. Consider this argument:


All dogs can lay eggs.

The prime minister is a dog.

Therefore, the prime minister can lay eggs.


This is a valid argument, but the premises are false. The conclusion follows logically from the premises - even though the premises are false. So the argument is not a good one. A deductively valid argument with true premises is said to be sound. A sound argument is a good argument. A good argument gives you good reasons for accepting the conclusion. Likewise, a good inductive argument must be logically strong and have true premises. An inductively strong argument with true premises is said to be cogent. A cogent argument is a good argument, which provides good reasons for accepting the conclusion." (Page 40-41)


"DEDUCTIVE ARGUMENTS


Whether a deductive argument is valid depends on its form or structure. We can see the form most easily if we represent it by using letters to substitute for the argument's statements. Consider this deductive argument:


1. If the soul is immortal, then thinking doesn't depend on brain activity.

2. The soul is immortal.

3. Therefore, thinking doesn't depend on brain activity.


By using letters to represent each statement, we can symbolize the argument like this:


If p then q.

p.

Therefore, q.


The first line is a compound statement consisting of two constituent statements, each of which is assigned a letter: p or q. Such a compound statement is known as a conditional, or if-then, statement. The statement following the if is called the antecedent, and the statement after the then is called the consequent. The whole argument is referred to as a conditional argument because it contains at least one conditional statement (If p then q.)


Conditional arguments are common. In fact, many conditional argument patterns are so common that they have been given names. These prevalent forms are worth getting to know because they can help you quickly judge the validity of arguments you encounter. Since the validity of an argument depends on its form, if you know that a particular common form is always valid (or invalid), then you know that any argument having that same form must also be valid (or invalid).


For example, the argument just examined is cast in the common form known as affirming the antecedent, or modus ponens. Any argument in this form is always valid. We may drop whatever statements we please into this form, and the argument will remain unshakably valid - whether or not the premises are true. Now consider this modus ponens argument:


1. If one human is made of tin, then every human is made of tin.

2. One human is made of tin.

3. Therefore, every human is made of tin.


The premises and conclusion of this argument are false. Nevertheless, this argument is valid because if the premises were true, then the conclusion would have to be true. A valid argument can have false premises and a false conclusion, false premises and a true conclusion, or true premises and a true conclusion. The one thing it cannot have is true premises and a false conclusion.


Here is another frequently occurring conditional form:

If p then q.

Not q.

Therefore, not p.


For example:


1. If the soul is immortal, then thinking doesn't depend on brain activity.

2. Thinking does depend on brain activity.

3. Therefore, the soul is not immortal.


This form is known as denying the consequent, or modus tollens. Any argument patterned this way - regardless of the topic or truth of the premises - is valid.


A valid hypothetical form that people often employ to think critically about a series of events is known as hypothetical syllogism. (Hypothetical is a synonym for conditional; a syllogism is simply a deductive argument consisting of two premises and a conclusion.) In this form, every statement is conditional. See:


If p then q.

If q then r.

Therefore, if p then r.


For example:

1. If the floor creaks, someone is standing in the hallway.

2. If someone is standing in the hallway, there's a burglar in the house.

3. Therefore, if the floor creaks, there's a burglar in the house.


As you might expect, some very common argument forms are invalid. The following form is known as denying the antecedent:

If p then q.

Not p.

Therefore, not q.


1. If Joe is a bachelor, then Joe is a male.

2. Joe is not a bachelor.

3. Therefore, Joe is not a male.

The invalidity of the argument seems obvious. But consider this specimen in the same form:


1. If scientists can prove the existence of ghosts, then ghosts are real.

2. But scientists cannot prove the existence of ghosts.

3. Therefore, ghosts are not real.


The dead giveaway of invalidity here is that it's possible for both premises to be true and the conclusion false. Even if scientists cannot prove the existence of ghosts, that doesn't show that ghosts are not real. Perhaps ghosts exist despite the failure of science to prove it.

Another popular invalid form is affirming the consequent:

If p then q.

q.

Therefore, p.

1. If Chicago is the capital of Illinois, then Chicago is in Illinois.

2. Chicago is in Illinois.

3. Therefore, Chicago is the capital of Illinois.

We can see immediately that this argument is invalid because, you will recall, it's impossible for a valid argument to have true premises and a false conclusion - and this argument clearly does have true premises and a false conclusion.

Of course, not all common deductive arguments are conditional. Here's a nonconditional valid form known as disjunctive syllogism:


Either p or q.

Not p.

Therefore, q.

1. Either Jill faked the UFO landing or Jack did.

2. Jill did not fake the  UFO landing.

3. Therefore, Jack faked the UFO landing.

A statement in the p-or-q format of premise 1 is called a disjunction, and each statement in a disjunction (p or q) is called a disjunct. In a disjunctive syllogism, either one of the disjuncts can be denied, and the conclusion is that the undenied disjunct must be true.


Being familiar with these six argument forms can come in handy when you're trying to quickly determine the validity of an argument." (Page 41-43)
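One way to see for yourself why the valid forms are valid and the invalid ones are not is to hunt for a counterexample: an assignment of truth values that makes every premise true and the conclusion false. If no such assignment exists, the form is valid. The short Python sketch below is my own illustration of that idea, not anything from the book, and the helper names are invented.

```python
from itertools import product

# My own sketch (not from the book): a form is valid when no assignment of
# truth values makes all the premises true and the conclusion false.
def implies(a, b):
    return (not a) or b

def is_valid(premises, conclusion, num_vars=2):
    for values in product([True, False], repeat=num_vars):
        if all(p(*values) for p in premises) and not conclusion(*values):
            return False  # counterexample found
    return True

# Modus ponens (valid): If p then q. p. Therefore, q.
print(is_valid([lambda p, q: implies(p, q), lambda p, q: p], lambda p, q: q))   # True

# Affirming the consequent (invalid): If p then q. q. Therefore, p.
print(is_valid([lambda p, q: implies(p, q), lambda p, q: q], lambda p, q: p))   # False

# Disjunctive syllogism (valid): Either p or q. Not p. Therefore, q.
print(is_valid([lambda p, q: p or q, lambda p, q: not p], lambda p, q: q))      # True
```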


Critical Thinking - a Simple and Concise Look 3

"INDUCTIVE ARGUMENTS


Even though inductive arguments are not valid, they can still give us good reasons for believing their conclusions provided that certain conditions are met." (Page 44)


"Using inadequate sample size to draw a conclusion about a target group is a common mistake, A fallacy called a hasty generalization." (Page 45)

"But how large a sample is large enough? Generally, the larger the sample, the more reliably it signifies the nature of the target group. But sometimes even small samples can be telling. One guiding principle is that the more homogenous a target group is in characteristics relevant to the property being studied, the smaller the sample needs to be. We would require, for example, a very small sample of mallard ducks to determine whether they all have bills, because the physical properties of ducks vary little throughout the species. But if we want to know the buying habits of Canadians, we would need to survey a much larger sample - hundreds or thousands of Canadians." (Page 46)

"People differ dramatically in their social or psychological properties, so surveying  a handful of them to generalize about thousands or millions is usually pointless.


Samples must not only be the right size, but also representative - that is, they must be like the target group in all the relevant ways. A sample that is not properly representative of the target group is known as a biased sample. Biased samples make weak arguments. To reliably generalize about the paranormal beliefs of New Yorkers, we should not have our sample consist entirely of members of the local occult club. The members' views on the paranormal are not likely to be representative of those of New Yorkers generally. To draw a trustworthy conclusion about water pollution in Lake X, we should not draw all the water samples from the part of the lake polluted by the factory. That area is not representative of the lake as a whole.


A sample is properly representative of the target group if it possesses the same relevant characteristics in the same proportions exhibited by the target group. A characteristic is relevant if it can affect the relevant property." (Page 46)   


"National polling organizations have perfected techniques for generating representative samples of large target groups - all American adults, for example. Because of modern sampling procedures, these samples can contain fewer than 2,000 individuals (representing about 200 million people). Such small representative samples are possible through random sampling. This technique is based on the fact that the best way to devise a genuinely representative sample is to select the sample from the target group randomly. Random selection is assured if every member of the target group has an equal chance of being chosen for the sample. Selecting sample members nonrandomly produces a biased sample." (Page 47)

"Analogical Induction


When we show how one thing is similar to another, we draw an analogy between them. When we claim that two things that are similar in some respects are similar in some further respect, we make an analogical induction. For example, before the various missions to Mars, NASA scientists may have argued as follows: The Earth has air, water, and life. Mars is like the Earth in that it has air and water. Therefore, it's probable that Mars has life. The form of such analogical inductions can be represented as follows:

Object A has properties F, G, H, etc., as well as the property Z.

Object B has properties F, G, H, etc.

Therefore, object B probably has property Z.


Like all inductive arguments, analogical inductions can only establish their conclusions with a certain degree of probability. The more similarities between the two objects, the more probable the conclusion." (Page 48)


The authors point out the dissimilar traits of Mars, such as its thin atmosphere and the very little water available, which is trapped frozen at the poles. It is extremely unlikely that Mars supports life, especially complex life like human beings, at this time.
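One rough way to picture the strength (or weakness) of an analogical induction is to line up the cited similarities against the known dissimilarities. The snippet below is only a toy of my own, not a method from the book; real analogical arguments turn on how relevant the similarities and dissimilarities are, not on a simple count.

```python
# Toy heuristic (mine, not the authors'): represent each object as a set of
# known properties and compare the overlap cited in the analogy.
earth = {"air", "water", "moderate temperatures", "thick atmosphere", "surface life"}
mars = {"air", "water", "thin atmosphere", "frozen polar water"}

cited_similarities = {"air", "water"}
shared = cited_similarities & earth & mars
dissimilarities = (earth ^ mars) - cited_similarities   # properties the two don't share

print("shared:", sorted(shared))
print("dissimilarities:", sorted(dissimilarities))
# With only two cited similarities and several relevant dissimilarities, the
# inference "Mars probably has life" is a weak analogical induction.
```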

Others use analogical induction as well. In testing products on animals, the argument is made that the products may have similar effects on humans.

The legal system similarly uses analogies of past cases and legal rulings, known as precedents. These often have tremendous influence on the decisions made by courts and judges. 


"Hypothetical Induction

(Abduction, or Inference to the Best Explanation)

We attempt to understand the world by constructing explanations of it. Not all explanations are equally good, however. So even though we may have arrived at an explanation of something, it doesn't mean that we're justified in believing it. If other explanations are better, then we're not justified in believing it.

Inference to the best explanation has the following form:

Phenomena p.

Hypothesis h explains p.

No other hypothesis explains p as well as h.

Therefore, it's probable that h is true.

The great American philosopher Charles Sanders Peirce was the first to codify this kind of inference, and he dubbed it abduction to distinguish it from other forms of induction.

Inference to the best explanation may be the most widely used form of inference. Doctors, auto mechanics, and detectives - as well as the rest of us - use it almost daily. Anyone who tries to figure out why something happened uses inference to the best explanation. Sherlock Holmes was a master of inference to the best explanation. Here's Holmes at work in A Study in Scarlet: 

I knew you came from Afghanistan. From long habit the train of thoughts ran so swiftly through my mind that I arrived at the conclusion without being conscious of intermediate steps. There were such steps, however. The train of reasoning ran, "Here is a gentleman of a medical type, but with the air of a military man. Clearly an army doctor, then. He has just come from the tropics, for his face is dark, and that is not the natural tint of his skin, for his wrists are fair. He has undergone hardship and sickness, as his haggard face says clearly. His left arm has been injured. He holds it in a stiff and unnatural manner. Where in the tropics would an English army doctor have seen much hardship and got his arm wounded? Clearly in Afghanistan." The whole train of thought did not occupy a second. I then remarked that you came from Afghanistan, and you were astonished.


Although this passage appears in a chapter entitled "The Science of Deduction," Holmes is not using deduction here because the truth of the premises does not guarantee the truth of the conclusion. From the fact that Watson has a deep tan and a wounded arm, it doesn't necessarily follow that he has been in Afghanistan. He could have been in California and cut himself surfing. Properly speaking, Holmes is using abduction, or inference to the best explanation, because he arrives at his conclusion by citing a number of facts and coming up with the hypothesis that best explains them.

Often what makes inference to the best explanation difficult is not that no explanation can be found, but that too many explanations can be found. The trick is to identify which among all the possible explanations is the best. The goodness of an explanation is determined by the amount of understanding it produces, and the amount of understanding produced by an explanation is determined by how well it systematizes and unifies our knowledge. We begin to understand something when we see it as part of a pattern, and the more that pattern encompasses, the more understanding it produces. The extent to which a hypothesis systematizes and unifies our knowledge can be measured by various criteria of adequacy, such as simplicity, the number of assumptions made by a hypothesis; scope, the amount of diverse phenomena explained by the hypothesis; conservatism, how well the hypothesis fits with what we already know; and fruitfulness, the ability of a hypothesis to successfully predict novel phenomena. In chapter 6 we will see how these criteria are used to distinguish reasonable explanations from unreasonable ones. " (Page 49-50)
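To make the criteria of adequacy a little more tangible, here is a small sketch of my own that compares Holmes's hypothesis with the California surfing alternative mentioned above. The scores are invented purely for illustration; the book does not turn the criteria into arithmetic, and weighing them in real cases remains a matter of judgment.

```python
# Toy sketch (my own, not the book's method): compare rival explanations of the
# same facts on the four criteria of adequacy, each rated 0-5 for illustration.
CRITERIA = ("simplicity", "scope", "conservatism", "fruitfulness")

hypotheses = {
    "army doctor just back from Afghanistan": dict(simplicity=4, scope=5, conservatism=4, fruitfulness=3),
    "surfer who got tanned and cut in California": dict(simplicity=4, scope=2, conservatism=2, fruitfulness=1),
}

def total(ratings):
    return sum(ratings[c] for c in CRITERIA)

for name, ratings in sorted(hypotheses.items(), key=lambda item: -total(item[1])):
    print(f"{total(ratings):2d}  {name}")
# The hypothesis that explains more of the evidence with fewer strained
# assumptions ranks higher - that ranking is the "best explanation."
```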


That is quite a bit on inductive arguments and abduction, but these are crucial points for understanding the entire model presented by the authors. They just can't be skipped.



Critical Thinking - a Simple and Concise Look 4

 "INFORMAL FALLACIES

A fallacious argument is a bogus one, for it fails to do what it purports to do, namely, provide a good reason for accepting a claim. Unfortunately, logically fallacious arguments can be psychologically compelling. Since most people have never learned the difference between a good argument and a fallacious one, they are often persuaded to believe things for no good reason. To avoid holding irrational beliefs, then, it is important to understand the many ways in which an argument can fail.

An argument is fallacious if it contains (1) unacceptable premises, (2) irrelevant premises, or (3) insufficient premises. Premises are unacceptable if they are at least as dubious as the claim they are supposed to support. In a good argument, you see, the premises provide a firm basis for accepting the conclusion. If the premises are shaky, the argument is inconclusive. Premises are irrelevant if they have no bearing on the truth of the conclusion. In a good argument, the conclusion follows from the premises. If the premises are logically unrelated to the conclusion, they provide no reason to accept it. Premises are insufficient if they do not establish the conclusion beyond a reasonable doubt. If they fail to do this, they don't justify the conclusion.

So when someone gives you an argument, you should ask yourself: Are the premises acceptable? Are they relevant? Are they sufficient? If the answer to any of these questions is no, then the argument is not logically compelling.

Unacceptable Premises

Begging the Question    An argument begs the question - or argues in a circle - when its conclusion is used as one of its premises. For example, some people claim that one should believe that God exists because the Bible says so. But when asked why we should believe the Bible, they answer that we should believe it because God wrote it. Such people are begging the question, for they are assuming what they try to prove, namely, that God exists. Here's another example: "Jane has telepathy," says Susan. "How do you know?" asks Ami. "Because she can read my mind," replies Susan. Since telepathy is, by definition, the ability to read someone's mind, all Susan has told us is that she believes that Jane can read her mind because she believes that Jane can read her mind. Her reason merely reiterates her claim in different words. Consequently, her reason provides no additional justification for her claim.


False Dilemma    An argument proposes a false dilemma when it presumes that only two alternatives exist when in actuality there are more than two. For example: "Either science can explain how she was cured or it was a miracle. Science can't explain how she was cured. So it must be a miracle." These two alternatives do not exhaust all the possibilities. It's possible, for example, that she was cured by some natural cause that scientists don't yet understand. Because the argument doesn't take this possibility into account, it's fallacious. Again: "Either you have your horoscope charted by an astrologer or continue to stumble through life without knowing where you're going. You certainly don't want to continue your wayward ways. So you should have your horoscope charted by an astrologer." If someone is concerned about the direction his or her life is taking, there are things he or she can do about it other than consult an astrologer. Since there are other options, the argument is fallacious.

Irrelevant Premises


Equivocation    Equivocation occurs when a word is used in two different senses in an argument. For example, consider this argument: "(i) Only man is rational. (ii) No woman is a man. (iii) Therefore no woman is rational." The word man is used in two different senses here: In the first premise it means human being while in the second it means male. As a result, the conclusion doesn't follow from the premises. Here's another example: "It's the duty of the press to publish news that's in the public interest. There is great public interest in UFOs. Therefore the press fails in its first duty if it does not publish articles on UFOs." In the first premise, the phrase the public interest means the public welfare, but in the second, it means what the public is interested in. The switch in meaning invalidates the argument.


Composition    An argument may claim that what is true of the parts is also true of the whole; this is the fallacy of composition. For example, consider this argument: "Subatomic particles are lifeless. Therefore anything made out of them is lifeless." This argument is fallacious because a whole may be greater than the sum of its parts; that is, it may have properties not possessed by its parts. A property had by a whole but not by its parts is called an emergent property. Wetness, for example, is an emergent property. No individual water molecule is wet, but get enough of them together and wetness emerges.

Just as what's true of a part may not be true of the whole, what's true of a member of a group may not be true of the group itself. For example: "Belief in the supernatural makes Joe happy. Therefore, universal belief in the supernatural would make the nation happy." This argument doesn't follow because everybody's believing in the supernatural could have effects quite different from one person's believing in it. Not all arguments from part to whole are fallacious, but there are some properties that parts and wholes share. This fallacy lies in assuming that what's true of the parts is true of the whole.


Division   The fallacy of division is the converse of the fallacy of composition. It occurs when one assumes that what is true of a whole is also true of its parts. For example: "We are alive and we are made out of subatomic particles. So they must be alive too." To argue in this way is to ignore the very real difference between parts and wholes. Here's another example: "Society's interest in the occult is growing. Therefore Joe's interest in the occult is growing." Since groups can have properties that their members do not have, such an argument is fallacious.


Appeal to the Person    When someone tries to rebut an argument by criticizing or denigrating its presenter rather than by dealing with the argument itself, that person is guilty of the fallacy of appeal to the person. This fallacy is referred to as ad hominem, or "to the man." For example: "This theory has been proposed by a believer in the occult. Why should we take it seriously?" Or: "You can't believe Dr. Jones' claim that there is no evidence for life after death. After all, he's an atheist." The flaw in these arguments is obvious: An argument stands or falls on its own merits; who proposes it is irrelevant to its soundness. Crazy people can come up with perfectly sound arguments, and sane people can talk nonsense.


Genetic Fallacy     To argue that a claim is true or false on the basis of its origin is to commit the genetic fallacy. For example: "Juan's idea is the result of a mystical experience, so it must be false (or true)." Or: "Jane got that message from a Ouija board, so it must be false (or true)." These arguments are fallacious because the origin of a claim is irrelevant to its truth or falsity. Some of our greatest advances have originated in unusual ways. For example, the chemist August Kekule discovered the benzene ring while staring at a fire and seeing the image of a serpent biting its tail. The theory of evolution came to British naturalist Alfred Russel Wallace while he was in a delirium. Archimedes supposedly arrived at the principle of displacement while taking a bath, from which he leapt shouting, "Eureka!" The truth or falsity of an idea is determined not by where it came from, but by the evidence supporting it.


Appeal to Authority   We often try to support our views by citing experts. This sort of appeal to authority is perfectly legitimate - provided that the person cited really is an expert in the field in question. If not, it is fallacious. Celebrity endorsements, for example, often involve fallacious appeals to authority, because being famous doesn't necessarily give you any special expertise. The fact that Dionne Warwick is a great singer, for example, doesn't make her an expert on the efficacy of psychic hotlines. Similarly, the fact that Linus Pauling is a Nobel prize winner doesn't make him an expert on the efficacy of vitamin C. Pauling claimed that taking massive doses of vitamin C would help prevent colds and increase the life expectancy of people suffering from cancer. That may be the case, but the fact that he said it doesn't justify our believing it. Only rigorous clinical studies confirming these claims can do that.


Appeal to the Masses   A remarkably common but fallacious form of reasoning is, "It must be true (or good) because everybody believes it (or does it)." Mothers understand that this argument is a fallacy; they often counter it by asking, "If everyone else jumped off a cliff, would you do it, too?" Of course you wouldn't. What this response shows is that just because a lot of people believe something or like something doesn't mean that it's true or good. A lot of people used to believe that the Earth was flat, but that certainly didn't make it so. Similarly, a lot of people used to believe that women should not have the right to vote. Popularity is not a reliable indication of either reality or value.


Appeal to Tradition   We appeal to tradition when we argue that something must be true (or good) because it is part of an established tradition. For example: "Astrology has been around for ages, so there must be something to it." Or: "Mothers have always used chicken soup to fight colds, so it must be good for you." These arguments are fallacious because traditions can be wrong. This error becomes obvious when you consider that slavery was once an established tradition. The fact that people have always done or believed something is no reason for thinking that we should continue to do or believe it.


Appeal to Ignorance   The appeal to ignorance comes in two varieties: Using an opponent's inability to disprove a conclusion as proof of the conclusion's correctness, and using an opponent's inability to prove a conclusion as proof of its incorrectness. In the first case, the claim is that since there is no proof that something is true, it must be false. For example: "There is no proof that parapsychology experiments were fraudulent, so I'm sure they weren't." In the second case, the claim is that since there is no proof that something is false, it must be true. For example: "Bigfoot must exist because no one has been able to prove that he doesn't." The problem with these arguments is that they take a lack of evidence for one thing to be good evidence for another. A lack of evidence, however, proves nothing. In logic, as in life, you can't get something from nothing.


Appeal to Fear   To use the fear of harm to advance one's position is to commit the fallacy of the appeal to fear. It is also known as swinging the big stick. For example: "If you do not convict this criminal, one of you might be her next victim." This argument is fallacious because what a defendant might do in the future is irrelevant to determining whether she is responsible for a crime committed in the past. Or: "You should believe in God because if you don't you'll go to hell." Such an argument is fallacious because it gives us no reason to believe God exists. Threats extort, they do not help us arrive at the truth.


Straw Man       You indulge in the straw man fallacy when you misrepresent someone's claim to make it easier to dismiss or reject. Instead of addressing the actual claim presented, you concoct  a weak one to assault - a fake, or straw, man that can be easily struck down. Suppose Senator Brown asserts that she favors strong gun control measures, and against her view you argue this way: "Senator Brown says she wants to outlaw guns, an extreme position that flies in the face of the Second Amendment right to bear arms. But we should absolutely oppose any move to gut the Constitution." Your argument, however, would distort the senator's view. She says she wants the possession of firearms to be controlled, not outlawed altogether. You could, of course, use the straw man fallacy just as easily on the other side of the issue, arguing that someone opposed to strict gun control wants to put guns in the hands of every citizen. Another distortion. Either way, your argument would be fallacious - and irrelevant to the real issue.


Insufficient Premises


Hasty Generalization      You are guilty of hasty generalization, or jumping to conclusions, when you draw a general conclusion about all things of a certain type on the basis of evidence concerning only a few things of that type. For example: "Every medium that's been investigated has turned out to be a fraud. You can't trust any of them." Or: "I know one of those psychics. They're all a bunch of phonies." You can't make a valid generalization about an entire class of things from observing only one - or even a number of them. An inference from a sample of a group to the whole group is legitimate only if the sample is representative - that is, only if the sample is sufficiently large and every member of the group has an equal chance to be part of the sample.


Faulty Analogy   An argument from analogy claims that things that resemble one another in certain respects resemble one another in further respects. Recall our previous example: "The Earth has air, water, and living organisms. Mars has air and water. Therefore Mars has living organisms." The success of such arguments depends on the nature and extent of the similarities between the two objects. The greater their dissimilarities, the less convincing the argument will be. For example, consider this argument: "Astronauts wear helmets and fly in spaceships. The figure in this Mayan carving seems to be wearing a helmet and flying in a spaceship. Therefore it is a carving of an ancient astronaut." Although features of the carving may bear a resemblance to a helmet and a spaceship, they may bear a greater resemblance to a ceremonial mask and a fire. The problem is that any two things have some features in common. Consequently an argument from analogy can be successful only if the dissimilarities between the things being compared are insignificant.



False Cause      The fallacy of false cause consists of supposing that two events are causally connected when they are not. People often claim, for example, that because something occurred after something else it was caused by it. Latin scholars dubbed this argument the fallacy of post hoc, ergo propter hoc, which means "After this, therefore because of this." Such reasoning is fallacious, because from the fact that two events are constantly conjoined, it doesn't follow that they are causally related. Night follows day, but that doesn't mean that day causes night. Suppose that ever since you wore crystals around your neck you haven't caught a cold. From this action you can't conclude that the crystals caused you to stay healthy, because any number of other factors could be involved. Only if it has been established beyond a reasonable doubt that other factors were not involved - through a controlled study, for example - can you justifiably claim that there is a causal connection between the two events.


Slippery Slope   Sometimes people argue that performing a specific action will inexorably lead to an additional bad action (or actions), so you should not perform that first action. An initial wrong step starts an inevitable slide toward an unpleasant result that could have been avoided only if the first step had never been taken. This way of arguing is legitimate if there is good reason to believe that the chain of actions must happen as alleged. If not, it is an example of the fallacy of slippery slope. For example: "Teaching evolution in schools leads to loss of faith in God, and loss of faith leads to the weakening of moral values, which causes increases in crime and social disorder. Therefore, evolution should not be taught in schools." This argument is fallacious because there are no good reasons to believe that the sequence of calamities would happen as described. If there were good reasons, then the argument - though molded in the slippery-slope pattern - would not be fallacious.


SUMMARY


The combination of a claim (or claims) supposedly giving reasons for accepting another claim is known as an argument. Arguments can be either deductive or inductive. Deductive arguments are intended to provide conclusive support for their conclusions. An inductive argument is intended to provide probable support for its conclusions. A deductive argument that succeeds in providing conclusive support is said to be valid; one that fails to do it is said to be invalid. An inductive argument that succeeds in giving probable support to its conclusions is said to be strong; one that fails in this is said to be weak. A valid argument with true premises is sound; a strong argument with true premises is cogent.

There are several common deductive argument forms. Some are valid: modus ponens, modus tollens, hypothetical syllogism, and disjunctive syllogism. Some are invalid: denying the antecedent and affirming the consequent. Being familiar with these forms can help you quickly determine the validity of an argument. The counterexample method can also help.

Some common inductive argument forms are enumerative induction, analogical induction, and hypothetical induction (inference to the best explanation). Enumerative induction is the kind of reasoning we use when we arrive at a generalization about a group of things after observing only some of them. The group of things - the entire class of individuals we're interested in - is known as the target group. The observed members of the target group are the sample, and the characteristic we're studying is the relevant property. An enumerative induction is strong only if the sample is large enough and properly representative. Using an inadequate sample size to draw a conclusion about a target group is a common mistake, a fallacy called hasty generalization.

A fallacious argument, or fallacy, fails to provide a good reason for accepting a claim. An argument is fallacious if it contains (1) unacceptable premises, (2) irrelevant premises, or (3) insufficient premises. Fallacies with unacceptable premises: begging the question and false dilemma. Fallacies with irrelevant premises: equivocation, composition, division, appeal to the person, genetic fallacy, appeal to authority, appeal to the masses, appeal to tradition, appeal to ignorance, appeal to fear, and straw man. Fallacies with insufficient premises: hasty generalization, faulty analogy, false cause, and slippery slope." (Page 44-57)


Critical Thinking - a Simple and Concise Look 5

 "If knowledge requires certainty, however, there is little that we know, for precious few propositions are absolutely indubitable." (Page 66)


"To demand that a proposition be certain in order to be known, then, would severely restrict the extent of our knowledge, perhaps to the vanishing point." (Page 66)


"So if knowledge doesn't require certainty, how much evidence does it require? It doesn't require enough to put the claim beyond any possibility of doubt but, rather, enough to put it beyond any reasonable doubt. There comes a point beyond which doubt, although possible, is no longer reasonable. It's possible, for example, that our minds are being controlled by aliens from outer space, but to reject the evidence of our senses on that basis would not be reasonable. The mere possibility of error is not a genuine reason to doubt. To have knowledge, then, we must have adequate evidence, and our evidence is adequate when it puts the proposition in question beyond a reasonable doubt.


A proposition is beyond a reasonable doubt when it provides the best explanation of something. In Chapter 6, we spell out the notion of best explanation in some detail. For now, it's important to realize that a claim doesn't have to possess any particular degree of probability in order to be beyond a reasonable doubt. All that is required is that it explain the evidence and account for it better than any of its competitors.

Even though we can't be absolutely sure that we're not living in the Matrix, we're justified in believing that we're not because the matrix hypothesis does not provide the best explanation of our sense experience. The hypothesis that our sensations are caused by a computer that directly stimulates our brains is not as simple as the hypothesis that they are caused by physical objects; it raises more questions than it answers; and it makes no testable predictions. The acceptability of a hypothesis is determined by the amount of understanding it produces, and the amount of understanding produced by a hypothesis is determined by how well it systematizes and unifies our knowledge. Since the physical object hypothesis systematizes and unifies our knowledge better than the matrix hypothesis, we're justified in believing that we're not living in the Matrix.


We are justified in convicting someone of a crime if we have established his or her guilt beyond a reasonable doubt. Similarly, we are justified in believing  a proposition if we have established its truth beyond a reasonable doubt. But being justified in believing a proposition no more guarantees its truth than being justified in convicting someone guarantees his or her guilt. It is always possible that we have overlooked something that undermines our justification. Since we are not omniscient, we can never be sure that we have considered all the relevant evidence. Nevertheless, if we are justified in believing a proposition, we are claiming that it is true; indeed, we are justified in claiming that we know it. Such a claim could be mistaken, but it would not be improper, for our justification gives us the right to make such a claim. " (Page 66-67)


"If our belief in a proposition is not justified - if we have good reason to doubt it - then we have no right to claim that we know it. We have reasonable grounds for doubt when we have credible evidence to the contrary." (Page 68)


"In other words, if we have good reason for believing a proposition to be false, we are not justified in believing it to be true, even if all of our sensory evidence indicates that it is. When two propositions conflict with one another, we know that at least one of them must be false. Until we determine which one it is, we cannot claim to know either. Thus:


There is good reason to doubt a proposition if it conflicts with propositions we have good reason to believe.


The conflict of credible propositions provides reasonable grounds for doubt. And where there are reasonable grounds for doubt, there cannot be knowledge.

The search for knowledge, then, involves eliminating inconsistencies among our beliefs. When the conflict is between different reports of current observations, as in the case of the surface that appears to be pink, it's easy enough to find out which one is mistaken: Look more closely. When the conflict involves propositions that cannot be directly verified, finding the mistaken belief can be more difficult.


Sometimes we observe or are informed about things that seem to conflict with our background information - that vast system of well-supported beliefs we use to guide our thought and action, much of which falls under the heading "common sense." When this conflict happens, we have to decide whether the new piece of information is credible enough to make us give up some of our old beliefs. When we cannot directly verify a questionable claim, one way to assess its credibility is to determine how much is at stake in accepting it. When all other things are equal: 


The more background information a proposition conflicts with, the more reason there is to doubt it." (Page 68)


"The structure of our belief system can be compared to that of a tree. Just as certain branches support other branches, so certain beliefs support other beliefs. And just as bigger branches support more branches than little ones, so fundamental beliefs support more beliefs and ancillary ones. Accepting some dubious claims is equivalent to cutting off a twig, for it requires giving up only peripheral beliefs. Accepting others, however, is equivalent to cutting off a limb or even part of the trunk, for it requires giving up some of our most central beliefs." (Page 69)


"When there is good reason to doubt a proposition, we should proportion our belief to the evidence." (Page 70)


Here is a Bertrand Russell quote:


"(1) that when the experts are agreed, the opposite opinion cannot be held to be certain; (2) that when they are not agreed, no opinion can be regarded as certain by a non-expert; and (3) when they all hold that no sufficient grounds for a positive opinion exist, the ordinary man would do well to suspend his judgement." Bertrand Russell (Page 72)


"These propositions may seem mild, yet, if accepted, they would absolutely revolutionize human life.

The opinions for which people are willing to fight and persecute all belong to one of these three classes which this skepticism condemns. When there are rational grounds for an opinion, people are content to set them forth and wait for them to operate. In such cases, people do not hold their opinions with passion; they hold them calmly, and set forth their reasons quietly. The opinions that are held with passion are always those for which no good ground exists; indeed the passion is the measure of the holder's lack of rational conviction." Bertrand Russell (Page 72)


"There is good reason to doubt  a proposition if it conflicts with expert opinion." (Page 72)


"Just because someone is an expert in one field doesn't mean that he or she is an expert in another." (Page 73)


I want to contrast this with a quote from Carl Sagan:

“Arguments from authority carry little weight – authorities have made mistakes in the past. They will do so again in the future. Perhaps a better way to say it is that in science there are no authorities; at most, there are experts.”


― Carl Sagan, The Demon-Haunted World: Science as a Candle in the Dark


Sagan pointed out the crucial fact that accepting something as true merely because an authority said it is a road to accepting all kinds of things with no real basis. Sometimes authorities are right, sometimes they are wrong. Sometimes they agree on an issue (and could be right or wrong) and sometimes they disagree. You can have two authorities who, as far as you know, have equal credentials, experience and education, and they can still profoundly disagree.


Ultimately you have to be able to weigh evidence, consider arguments and use your own judgement. Sometimes we simply lack the education to form an educated opinion about a topic and need to decide which expert to listen to. Two doctors can disagree about a treatment, and we have to either take the treatment or not; we don't have time to become doctors before deciding. We sometimes have to pick whose advice to follow. If we do, we should understand what we actually know and what we are relying on someone else to understand. These are important distinctions.

As no authority or expert is infallible, no idea from an authority or expert should ever be accepted merely based on the source. Similarly, as non-expert opinions are sometimes correct, no idea should be rejected merely based on the source.


We may need to choose to trust people, and we are best off being very careful in this regard. We don't want to mistake an evaluation of expertise or of generally trustworthy character for knowledge and evidence.


If we don't have solid evidence to support a claim, then being too trusting of experts or of people we feel are trustworthy is not consistently reliable. We eventually get burned if we keep playing with this fire.

If at all possible we should find evidence to support people's claims, and we should understand that grounds for reasonable doubt do not disappear just because we wish it so.


COHERENCE AND JUSTIFICATION


"Ordinarily, if a proposition fails to cohere with the rest of our beliefs, we are not justified in believing it. So coherence is a necessary condition for justification. But is it also sufficient? If a proposition coheres with the rest of our beliefs, are we justified in believing it? Remarkably enough, the answer to this question is no. Just because a proposition coheres with our beliefs, it is not necessarily likely to be true." (Page 75)


"The traditional sources of knowledge - perception, introspection, memory, and reason - are not infallible guides to the truth, for our interpretations of them can be negatively affected by all sorts of conditions, many beyond our control. But if we have no reason to believe that such conditions are present, then we have no reason to doubt what these sources of knowledge tell us. The principle that emerges from these considerations is this:

If we have no reason to doubt what's disclosed to us through perception, introspection, memory, or reason, then we're justified in believing it.


In other words, the traditional sources of knowledge are innocent until proven guilty. Only if we have good reason for believing that they are not functioning properly should we doubt them." (Page 78)


Whole books and subjects are devoted to exploring the weaknesses and vulnerabilities in our perception, introspection, memory and reason.


I have read and written on quite a few of them, such as Subliminal by Leonard Mlodinow, A Theory of Cognitive Dissonance by Leon Festinger and Thinking, Fast and Slow by Daniel Kahneman. Suffice it to say the subject is well worth exploring, but the authors are correct - unless we know good reasons to NOT TRUST our perception, introspection, memory and reason, we should rely on them. After all, it would be very hard to know or believe anything if we didn't rely on and trust these essential tools.


"SUMMARY

Factual knowledge concerns the truth of propositions and is therefore referred to as propositional knowledge. We possess this kind of knowledge when we have a true belief supported by good reasons. Reasons confer probability on propositions. The better the reasons, the more likely it is that the proposition they support is true. Some think that to know a proposition, we must have reasons that establish it beyond a shadow of a doubt. But knowledge requires only that we have reasons good enough to put the proposition beyond a reasonable doubt. A proposition is beyond a reasonable doubt when it provides the best explanation of something.

If we have good reasons to doubt a proposition, then we cannot be said to know it. We have good reasons to doubt it if it conflicts with other propositions we have good reason to believe. We also have good reasons to doubt it if it conflicts with our background information - our massive system of well-supported beliefs, many of which we regard as common sense. The more background information a proposition conflicts with, the more reason there is to doubt it. Likewise, since the opinions of experts are generally reliable, we have good reason to doubt a proposition if it conflicts with such opinions. But we must be careful: Just because someone is an expert in one field doesn't mean that he or she is an expert in another.

The traditional sources of knowledge are perception, introspection, memory, and reason. They are not infallible guides to the truth, for our use of them can be distorted by many factors. But if we have no reason to doubt what's disclosed to us through these, then we're justified in believing it. Faith - unjustified belief - is often considered to be another source of knowledge. But unjustified belief cannot constitute knowledge. Intuition conceived as a kind of sixth sense like ESP cannot be regarded as a source of knowledge without evidence showing that it is in fact a reliable guide to truth. Intuition as a type of heightened sensory perception, however, has been shown to be actual. Some people consider mystical experiences reliable guides to deep truths. They may be correct, but we cannot simply assume that they are - we must corroborate the experiences by applying our usual tests of knowledge.

In light of this, we can ask whether we have good reasons for believing in astrology. The answer is no: The claim that astrology is true is not supported by any good evidence, and it conflicts with a tremendous amount of our background information. " (Page 95-96)



Critical Thinking - a Simple and Concise Look 6

 "The fact is, though our experiences (and our judgements about those experiences) are reliable enough for most practical purposes, they often mislead us in the strangest, most unexpected ways - especially when the experiences are exceptional or mysterious. This is because our perceptual capacities, our memories, our states of consciousness, our information-processing abilities have perfectly natural but amazing powers and limits. Apparently, most people are unaware of these powers and limits. But these odd characteristics of our minds are very influential. Because of them, as several psychologists have pointed out, we should expect to have many natural experiences that seem for all the world like supernatural or paranormal events. So even if the supernatural or paranormal didn't exist, weird things would still happen to us.

The point is not that every strange experience must indicate a natural phenomenon - nor is it that every weird happening must be supernatural. The point is that some ways of thinking about personal experience help increase our chances of getting to the truth of the matter. If our minds have peculiar characteristics that influence our experience and how we judge that experience, we need to know about those characteristics and understand how to think our way through them - all the way through, to conclusions that make sense. This feat involves critical thinking. But it also requires creative thinking - a grand leap powered by an open mind past the obvious answer, beyond the will to believe or disbelieve, toward new perspectives, to the best solution among several possibilities." (Page 103)


"Just because something seems (feels, appears) real doesn't mean that it is." (Page 103)


"We can't infer what is from what seems. To draw such a conclusion is to commit an elementary fallacy of reasoning. It's clearly fallacious to say, " This event or phenomenon seems real; therefore, it is real. " What's more, the peculiar nature of our minds guarantees that what seems will frequently not correspond to what is.


Now, in our daily routines, we usually do assume that what we see is reality - that seeming is being. And we're generally not disappointed. But we're at much greater risk for being dead wrong with such assumptions when (1) our experience is uncorroborated (no one else has shared our experience), (2) our conclusions are at odds with all known previous experience, or (3) any of the peculiarities of our minds could be at work. " (Page 104)


"PERCEIVING: WHY YOU CAN'T ALWAYS BELIEVE WHAT YOU SEE

The idea that our normal perceptions have a direct, one-to-one correspondence to external reality - that they are like photographs of the outer world - is wrong. Much research now suggests that perception is constructive, that it's in part something that our minds manufacture. Thus what we perceive is determined, not only by what our eyes and ears and other senses detect, but also by what we know, what we expect, what we believe, and what our physiological state is. This constructive tendency had survival value - it helps us make sense of the world. But it also means that seeing is often not believing - rather, the reverse is true." (Page 105)


"We sometimes perceive exactly what we expect to perceive, regardless of what's real.

Research has shown that when people expect to perceive a certain stimulus (for example, see a light or hear a tone), they often do perceive it - even when no stimulus is present. In one experiment, subjects were told to walk along a corridor until they saw a light flash. Sure enough, some of them stopped, saying they had seen a flash - but the light hadn't flashed at all. In other studies, subjects expected to experience an electric shock, or feel warmth, or smell certain odors, and many did experience what they expected even though none of the appropriate stimuli had been given. All that was really given was the suggestion that a stimulus might occur. The subjects had hallucinated (or perceived, or apparently perceived, objects or events that have no objective existence). So if we're normal, expectancy or suggestion can cause us to perceive what simply isn't there. Studies show that this is especially true when the stimulus is vague or ambiguous or when clear observation is difficult." (Page 107)


"Claims that conflict with expert opinion cannot be known, unless it can be shown beyond a reasonable doubt that the experts are mistaken." (Page 117)


Now this is important and worth considering. If you find experts who disagree such that a consensus or near consensus is unattainable, then the question is unsettled among the experts, at least as far as consensus goes. The experts can always simply be wrong, but if they do hold a consensus, then demonstrating that they are wrong becomes a difficult task because the standard of proof becomes extremely high.


There are many topics on which experts in the past held a consensus or near consensus that ultimately gave way to a different idea because strong evidence was presented. That does not mean we should say, "Experts have been proven wrong time and again, so I can dismiss their best arguments and evidence without serious and careful examination!" It does mean we should not accept ideas without ever doubting or questioning them merely because experts support them, even if they hold a consensus.

"all individuals are suggestible" (Page 120)


I am including this quote because I have read a bit about hypnosis and psychology and suggestible subjects and this may be the first time I have seen any author claim ALL individuals are suggestible. It may or may not be true. I honestly don't know.

I do know that every school of hypnosis I have ever examined, even slightly, has some version of the idea that hypnosis "works" on some people, works extremely well on some and is not effective at all in persuading some, no matter what method is used. Apparently lots of people in the thousands of years that people have tried to hypnotize each other have discovered subjects that their techniques simply didn't work on, try as they might. But the authors may not be only referring to hypnosis as such and other methods of suggestion may be capable of affecting these "hypnosis resistant" subjects. I honestly don't know.


"Hypnosis and sodium amytal administration ("truth serum") are unacceptable procedures for memory recovery. Courts reject hypnosis as a memory aid. Subjects receiving hypnosis or amytal as general memory aids (even in instances where there is no question of sexual abuse) will often generate false memories. Upon returning to their normal state of consciousness, subjects assume all their refreshed" memories " are equally true. " (Page 121)


"Psychologist Elizabeth Loftus, A prominent critic of the misguided therapy techniques that often result in False Memory Syndrome, says that the phenomenon has taken an enormous toll:" (Page 121)

I have written on the research of false memory expert Elizabeth Loftus before in several blog posts that address our malleable memories, including my series on the book Subliminal by Leonard Mlodinow, which digs deep into the topic with extensive descriptions of experiments and research showing that we have imperfect and changeable memories.

I highly recommend that anyone who is skeptical about this or interested check out the numerous articles and interviews with Elizabeth Loftus or even her books.


"REMEMBERING: WHY YOU CAN'T ALWAYS TRUST WHAT YOU RECALL" (Page 121)


"A lot of research now indicates that our memories aren't literal records or copies. Like our perceptual powers, our memories are constructive, or rather, creative. When we remember an experience, our brains reach for a representation of it; then, piece by piece, they reconstruct a memory based on this fragment. This reconstructive process is inherently inexact. It's also vulnerable to all kinds of influences that guarantee that our memories will frequently be inaccurate.

For an example of your memory's reconstructive powers, try this. Remember an instance when you were sitting today. Recall your surroundings, how you were dressed, how you positioned your legs and arms. Chances are, you see the scene from the perspective of someone looking at it, as though you were watching yourself from this perspective. You now remember certain pieces of the experience, and your brain constructed everything else, television perspective and all.


For well over half a century, research has been showing that the memory of witnesses can be unreliable, and the constructive nature of memory helps explain why. Studies demonstrate that the recall of eyewitnesses is sometimes wrong because they reconstruct events from memory fragments and then draw conclusions from the reconstruction. Those fragments can be a far cry from what actually transpired. Further, if eyewitnesses are under stress at the time of their observations, they may not be able to remember crucial details, or their recall may be distorted. Stress can even distort the memory of expert witnesses, which is one of several reasons why reports of UFOs, seances, and ghosts must be examined carefully: The experiences are stressful. Because memory is constructive and liable to warping, people can sincerely believe that their recall is perfectly accurate - and be perfectly wrong. They may report their memory as honestly as they can, but alas, it's been worked over.


Like perception, memory can be dramatically affected by expectancy and belief. Several studies show this effect, but a classic experiment illustrates the point best. Researchers asked students to describe what they had seen in a picture. It portrayed a white man and a black man talking to each other on the subway. In the white man's hand was an open straight razor. When the students recalled the picture, one-half of them reported that the razor was in the hand of the black man. Memory reconstruction was tampered with by expectancy or belief.


The same thing can happen in our successful "predictions." After some event has occurred, we may say, "I knew that would happen; I predicted it." And we may truly believe that we foretold the future. But research suggests that our desire to believe that we accurately predicted the future can sometimes alter our memories of the prediction." (Page 121-122)


"Past Life Remembered or Cryptomnesia


If, under hypnosis, you recall living 200 years ago and can vividly remember doing and seeing things that you've never experienced in your present life, isn't that proof you lived a "past life"? Isn't this evidence of reincarnation? Some people would think so. There is, however, another possibility, explained by Ted Schultz:


Beatle George Harrison got sued for rewriting the Chiffons' "He's So Fine" into "My Sweet Lord." He was the innocent victim of the psychological phenomenon of cryptomnesia. So was Helen Keller, the famous blind and deaf woman, when she wrote a story called "The Frost King." After it was published in 1892, she was accused of plagiarizing Margaret Canby's "The Frost Fairies," though Helen had no conscious memory of ever reading it. But, sure enough, inquiries revealed that Canby's story had been read to her (by touch) in 1888. She was devastated...

Cryptomnesia, or "hidden memory," refers to thoughts and ideas that seem new and original, but which are actually memories of things that you've forgotten you knew. The cryptomnesic ideas may be variations on the original memories, with details switched around and changed, but still recognizable.

Cryptomnesia is a professional problem for artists; it also plays an important role in past-life regression. In the midst of the hoopla surrounding the Bridey Murphy [reincarnation] case the Denver Post decided to send newsman William J. Barker to Ireland to try to find evidence of Bridey's existence. [Bridey was the alleged past-life personality of Virginia Tighe.] Unfortunately for reincarnation enthusiasts, careful checking failed to turn up anything conclusive. Barker couldn't locate the street Bridey said she lived on, he couldn't find any essays by Bridey's husband in the Belfast News-Letter between 1843 and 1864 (during which time Bridey said he was a contributor), and he couldn't find anyone who had heard of the "Morning Jig" that Bridey danced.

Research by reporters from the Chicago American and later by writer Melvin Harris finally uncovered the surprising source of housewife Virginia Tighe's past-life memories. As a teenager in Chicago, Virginia had lived across the street from an Irish woman named Mrs. Anthony Corkell, who had regaled her with tales about the old country. Mrs. Corkell's maiden name was Bridie Murphy! Furthermore, Virginia had been active in high school dramatics, at one point memorizing several Irish monologues which she learned to deliver with a heavy Irish brogue. Finally, the 1893 World's Columbian Exposition, staged in Chicago, had featured a life-size Irish Village, with fifteen cottages, a castle tower, and a population of genuine Irish women who danced jigs, spun cloth, and made butter. No doubt Virginia had heard stories of this exhibition from many of her neighbors while growing up in Chicago in the '20s.

Almost every other case of "past-life memory" that has been objectively investigated has followed the same pattern: the memories, often seemingly quite alien to the life experiences of the regressed subject, simply cannot be verified by historical research; on the other hand, they frequently prove to be the result of cryptomnesia." (Page 123)


"Research also shows that our memory of an event can be drastically changed if we later encounter new information about the event - even if the information is brief, subtle, and dead wrong. Here's a classic example: In one experiment, people were asked to watch a film depicting a car accident. Afterward, they were asked to recall what they had seen. Some of the subjects were asked, "About how fast were the cars going when they smashed into each other?" The others were asked the same question with a subtle difference. The word smashed was replaced by hit. Strangely enough, those who were asked the "smashed" question estimated higher speeds than those asked the "hit" question. Then, A week later, all the subjects were asked to recall whether they had seen broken glass in the film. Compared to the subjects who got the "hit" question, more than twice as many of those who got the "smashed" question said they had seen broken glass. But the film showed no broken glass at all. In a similar study, subjects recalled that they had seen a stop sign in another film of a car accident even though no stop sign had appeared in the film. The subjects had simply been asked a question that presupposed as stop sign and thus created the memory of one in their minds.

These studies put in doubt any long-term memory that's subjected to leading questions or is evoked after exposure to a lot of new, seemingly pertinent information." (Page 124)


"CONCEIVING: WHY YOU SOMETIMES SEE WHAT YOU BELIEVE


Our success as a species is due in large part to our ability to organize things into categories and to recognize patterns in the behavior of things. By formulating and testing hypotheses, we learn to predict and control our environment. Once we have hit upon a hypothesis that works, however, it can be very difficult to give it up. Francis Bacon was well aware of this bias in our thinking:


The human understanding when it has adopted an opinion... draws all things else to support and agree with it. And though there be a great number and weight of instances to be found on the other side, yet these it either neglects and despises, or else by some distinction sets aside, and rejects, in order that by this great and pernicious predetermination, the authority of its former conclusion may remain inviolate.


While this intellectual inertia can keep us from jumping to conclusions, it can also keep us from seeing the truth." 

(Page 126)


"Max Planck was well aware of how tenaciously we can cling to a hypothesis when we have invested a lot of time and effort in it. He once remarked, " A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it." (Page 127)


"Our ability to make sense of things is one of our most important abilities. But we are so good at it that we sometimes fool ourselves into thinking something's there when it's not. Only by subjecting our views to critical scrutiny can we avoid such self-delusion." (Page 131)


"Confirmation Bias


Not only do we have a tendency to ignore and misinterpret evidence that conflicts with our own views; we also have a tendency to look for and recognize only evidence that confirms them. A number of psychological studies have established this confirmation bias." (Page 133)


"A wise man knows his own ignorance, A fool knows everything - Charles Simmons" (Page 133)


"Facts do not cease to exist because they are ignored."

- Aldous Huxley


"This experiment demonstrates that we tend to look for confirming rather than disconfirming evidence, even though the later can often be far more revealing. Disconfirming evidence can be decisive when confirming evidence is not.

Consider the hypothesis: All swans are white. Each swan we see tends to confirm that hypothesis. But even if we've seen a million white swans, we can't be absolutely sure that all swans are white because there could be black swans in places we haven't looked. In fact, it was widely believed all swans were white until black swans were discovered in Australia. Thus:


When evaluating a claim, look for disconfirming as well as confirming evidence.


Our tendency to confirm rather than disconfirm our beliefs is reflected in many areas of our lives. Members of political parties tend to read literature supporting their positions. Owners of automobiles tend to pay attention to advertisements touting their makes of car. And all of us tend to hang out with people who share our views about ourselves.

One way to cut down on confirmation bias is to keep a number of different hypotheses in mind when evaluating a claim. In one experiment, subjects were shown a sequence of numbers - 2, 4, 6 - and were informed that it follows a certain rule. Their task was to identify this rule by proposing other triplets of numbers. If a proposed triplet fit the rule - or if it did not - the subjects were informed. They were not supposed to state the rule until they were sure of it.

Most subjects picked sets of even numbers like 8, 10, 12 or 102, 104, 106. When told these too followed the rule, subjects often announced that they knew the rule: Any three consecutive even numbers. But that rule was incorrect. This fact led some people to try out other triplets such as 7, 9, 11 or 93, 95, 97. When told that these triplets fit the rule, some claimed that the rule was any three numbers ascending by two. But that rule, too, was incorrect. What was the correct rule? Any three numbers in ascending order.

Why was this rule so difficult to spot? Because of confirmation bias: Subjects tried only to confirm their hypotheses; they did not try to disconfirm them.

If subjects were asked to keep two hypotheses in mind - such as, any three numbers in ascending order and any three numbers not in ascending order - they did much better. They picked a wider range of triplets, each of which confirmed or disconfirmed one of the rules. Thus, keeping a number of different hypotheses in mind can help you avoid confirmation bias. " (Page 135 - 136)
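
To make the point about disconfirmation concrete, here is a rough little sketch in Python - my own illustration, not anything from the book - of the 2, 4, 6 task. The hidden rule is any three numbers in ascending order. Triplets chosen only to confirm the narrower guess "even numbers ascending by two" all fit the rule, so they never expose the mistake; triplets chosen to try to break the guess are what actually teach us something.

# A toy version of the 2, 4, 6 experiment described above (my own sketch).
# The hidden rule, unknown to the subject, is: any three numbers in ascending order.
def fits_hidden_rule(triplet):
    a, b, c = triplet
    return a < b < c

# Guesses chosen only to CONFIRM the hypothesis "even numbers ascending by two":
confirming = [(8, 10, 12), (102, 104, 106), (20, 22, 24)]

# Guesses chosen to try to DISCONFIRM that hypothesis:
disconfirming = [(1, 2, 3), (5, 10, 100), (6, 4, 2)]

for triplet in confirming + disconfirming:
    print(triplet, "fits the hidden rule:", fits_hidden_rule(triplet))

# Every confirming guess fits, so the subject never learns the guess is too narrow.
# (1, 2, 3) and (5, 10, 100) also fit, revealing the rule is broader than the guess;
# (6, 4, 2) does not fit, showing that order is what matters.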


"The Availability Error


Confirmation bias can be exacerbated by the availability error. The availability error occurs when people base their judgements on evidence that's vivid or memorable instead of reliable or trustworthy." (Page 136)

"Mankind, in the gross, is a gaping monster that loves to be deceived and has seldom been disappointed."

 - Henry Mackenzie


"Those who base their judgements on psychologically available information often commit the fallacy of hasty generalization. To make a hasty generalization is to make a judgement about a group of things on the basis of evidence concerning only a few members of that group. It is fallacious, for example, to argue like this: " I know one of those insurance salespeople. You can't trust any of them. " Statisticians refer to this error as the failure to consider sample size. Accurate judgements about a group can be made on the basis of a sample only if the sample is sufficiently large and every member of the group has an equal chance to be part of the sample.

The availability error also leads us to misjudge the probability of various things. For example, you may think that amusement parks are dangerous places. After all, they are full of rides that hurl people around at high speeds, and sometimes those rides break. But statistics show that riding the rides at an amusement park is less dangerous than riding a bicycle on main roads. We tend to think that amusement parks are dangerous places because amusement park disasters are psychologically available - they are dramatic, emotionally charged, and easy to visualize. Because they stick in our minds, we misjudge their frequency.

When confirming evidence is more psychologically compelling than disconfirming evidence, we are likely to exhibit confirmation bias." (Page 138)
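
The "failure to consider sample size" is easy to see with a quick simulation. This is just my own rough sketch in Python, with a made-up 10 percent base rate: estimates built from a handful of cases scatter widely, while estimates from large samples settle near the true rate.

# My own illustration (numbers invented): why tiny samples invite hasty generalization.
import random

random.seed(0)
TRUE_RATE = 0.10  # assume 10% of the group actually has the trait in question

def estimated_rate(sample_size):
    hits = sum(random.random() < TRUE_RATE for _ in range(sample_size))
    return hits / sample_size

print("From samples of 5:  ", [round(estimated_rate(5), 2) for _ in range(5)])
print("From samples of 500:", [round(estimated_rate(500), 2) for _ in range(5)])
# The size-5 estimates can only land on 0.0, 0.2, 0.4, and so on, so they swing
# wildly from run to run; the size-500 estimates stay close to 0.10.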

"When evaluating a claim, look at all the relevant evidence, not just the psychologically available evidence." (Page 138)


"The availability error not only leads us to ignore relevant evidence; it also leads us to ignore relevant hypotheses. For any set of data, it is, in principle, possible to construct any number of different hypotheses to account for the data. In practice, however, it is often difficult to come up with many different hypotheses. As a result, we often end up choosing among only those hypotheses that come to mind - that are available.

In the case of unusual phenomena, the only explanations that come to mind are often supernatural or paranormal ones. Many people take the inability to come up with a natural or normal explanation for something as proof that it is supernatural or paranormal. "How else can you explain it?" they often ask.

This sort of reasoning is fallacious. It's an example of the appeal to ignorance. Just because you can't show that the supernatural or paranormal explanation is false doesn't mean that it is true. Unfortunately, although this reasoning is logically fallacious, it is psychologically compelling.

The extent to which the availability of alternate hypotheses can affect our judgements of probability was demonstrated in the following experiment. Subjects were presented with a list of possible causes of a car's failure to start. Their task was to estimate the probability of each of the possible causes listed. Included on every list was a catchall hypothesis labeled "all other problems [explanations]." Researchers discovered that the probability the subjects assigned to a hypothesis was determined by whether it was on the list - that is, by whether it was available. If more possibilities were added, subjects lowered the probability of the existing possibilities instead of changing the probability of the catchall hypothesis (which they should have done if they were acting rationally).

Although the unavailability of natural or normal explanations does not increase the probability of supernatural or paranormal ones, many people think that it does. To avoid this error, it's important to remember that just because you can't find a natural explanation for a phenomenon doesn't mean that the phenomenon is supernatural. Our inability to explain something may simply be due to our ignorance of the relevant laws or conditions. " (Page 140)
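
The rational response the researchers had in mind can be shown with some invented numbers - my own sketch in Python, not data from the study. When new specific causes are carved out of the catchall "all other problems," their probability should come out of the catchall, leaving the estimates for the causes already on the list alone.

# Hypothetical probability estimates for why a car won't start (numbers invented).
short_list = {"dead battery": 0.30, "out of fuel": 0.20,
              "bad ignition": 0.10, "all other problems": 0.40}

# Two causes formerly lumped into the catchall are now listed explicitly.
# Acting rationally, we move their probability out of the catchall and leave
# the original three estimates unchanged.
longer_list = {"dead battery": 0.30, "out of fuel": 0.20, "bad ignition": 0.10,
               "bad starter": 0.15, "bad alternator": 0.10,
               "all other problems": 0.15}

for name, estimates in [("short list", short_list), ("longer list", longer_list)]:
    print(name, "sums to", round(sum(estimates.values()), 2))
# Both sum to 1.0. What the subjects actually did was shrink the listed causes
# and leave the catchall roughly where it was.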

"Although supernatural or paranormal claims can be undercut by providing a natural or normal explanation of the phenomenon in question, there are other ways to cast doubt on such claims. A hypothesis is only acceptable if it fits the data. If the data are not what you would expect if the hypothesis were true, there is reason to believe that the hypothesis is false.

Take the case of the infamous Israeli psychic Uri Geller. Geller claims to have psychokinetic ability: the ability to directly manipulate objects with his mind. But the data, psychologist Nicholas Humphrey says, do not fit this hypothesis:

If Geller has been able to bend a spoon merely by mind-power, without his exerting any other sort of normal mechanical force, then it would immediately be proper to ask: Why has this power of Geller's worked only when applied to metal objects of a certain shape and size? Why indeed only to objects anyone with a strong hand could have bent if they had the opportunity (spoons or keys, say, but not pencils or pokers or round coins)? Why has he not been able to do it unless he has been permitted, however briefly, to pick the object up and have sole control of it? Why has he needed to touch the object with his fingers, rather than his feet or with his nose? Etcetera, etcetera. If Geller really does have the power of mind over matter, rather than muscle over metal, none of this would fit.


Humphrey calls this sort of skeptical argument the argument from "unwarranted design" or "unnecessary restrictions," because the phenomena observed are more limited or restricted than one would expect if the hypothesis were true. To be acceptable, a hypothesis must fit the data: This means not only that the hypothesis must explain the data, but also that the data explained must be consistent with what the hypothesis predicts. If the hypothesis makes predictions that are not borne out by the data, there is reason to doubt the hypothesis." (Page 140-141)


"The Representativeness Heuristic

Our attempt to comprehend the world is guided by certain rules of thumb known as heuristics. These heuristics speed up the decision-making process and allow us to deal with a massive amount of information in a short amount of time. But what we gain in speed we sometimes lose in accuracy. When the information we have to work with is inaccurate, incomplete, or irrelevant, the conclusions we draw from it can be mistaken.

One of the heuristics that governs both categorization and pattern recognition is this one: Like goes with like. Known as the representativeness heuristic, this rule encapsulates the principles that members of a category should resemble a prototype and that effects should resemble their causes. While these principles often lead to correct judgements, they can also lead us astray. A baseball game and a chess game are both games, but their dissimilarities may be greater than their similarities." (Page 141)


"He that is not aware of his ignorance will only be mislead by his knowledge." - Richard Whately (Page 141)


"Man's mind is so formed that it is far more susceptible to falsehood than truth." - Desiderius Erasmus (Page 142)

"Superstition, which is widespread among the nations, has taken advantage of human weakness to cast its spell over the mind of almost every man." - Cicero (Page 143)

"Man prefers to believe what he prefers to be true." -  Francis Bacon (Page 143)


"One problem is that most of us don't realize that because of ordinary statistical laws, incredible coincidences are common and must occur. An event that seems highly improbable can actually be highly probable - even virtually certain - given enough opportunities for it to occur. Drawing a royal flush in poker, getting heads five times in a row, winning the lottery - all these events seem incredibly unlikely in any instance. But they're virtually certain to happen sometime to someone. With enough chances for something to happen, it will happen." (Page 145)


"Rationalizing Homo Sapiens

People not only jump to conclusions, they frequently rationalize or defend whatever conclusions they jump to. Psychologist Barry Singer summarizes research findings that show just how good our rationalizing skills are: 

Numerous psychological experiments on problem solving and concept formation have shown that when people are given the task of selecting the right answer by being told whether particular guesses are right or wrong, they will tend to do the following:

1. They will immediately form a hypothesis and look only for examples to confirm it. They will not seek evidence to disprove their hypothesis, although this strategy would be just as effective, but will in fact try to ignore any evidence against it.

2. If the answer is secretly changed in the middle of the guessing process, they will be very slow to change the hypothesis that was once correct but has suddenly become wrong.

3. If one hypothesis fits the data fairly well, they will stick with it and not look for other hypotheses that might fit the data better.

4. If the information provided is too complex, people will cope by adopting overly simple hypotheses or strategies for solution, and by ignoring any evidence against them.

5. If there is no solution, if the problem is a trick and people are told "right" and "wrong" about their choices at random, people will nevertheless form all sorts of hypotheses about causal relationships they believe are inherent in the data, will believe their hypotheses through thick and thin, and will eventually convince themselves that their theories are absolutely correct. Causality will invariably be perceived even when it is not present.

It is astonishing that rats, pigeons, and small children are often better at solving these sorts of problems than are human adults. Pigeons and small children don't care so much whether they are always right, and they do not have such a developed capacity for convincing themselves they are right, no matter what the evidence is." (Page 147)

"It's reasonable to accept personal experience as reliable evidence only if there's no reason to doubt its reliability." (Page 147)

"Our beliefs may predispose us to misinterpret the facts, when ideally the facts should serve as the evidence upon which we base beliefs." - Alan M. MacRobert and Ted Schultz (Page148)


"When there's reason to think that any of these limitations or conditions may be present, our personal experience can't prove that something is true. In fact, when we're in situations where our subjective limitations could be operating, the experiences that are affected by those limitations not only can't give us proof that something is real or true; they can't even provide us with low-grade evidence. The reason is that at those moments, we can't tell where our experience begins and our limitations end. Is that an alien spacecraft in the night sky or Venus, embellished for us by our own high level of expectancy? Is that strange conjunction of events a case of cosmic synchronicity or just our inability to appreciate the true probabilities? If subjective limitations might be distorting our experience, our personal evidence is tainted and can't tell us much at all. That is why anecdotal evidence - evidence based on personal testimony - carries so little weight in scientific investigations. When we can't establish beyond a reasonable doubt that a person was not influenced by these limitations, we aren't justified in believing that what they report is real." (Page 148)


"Personal experience alone generally cannot establish the effectiveness of a treatment beyond a reasonable doubt.

There are three reasons why this principle is true: Many illnesses simply improve on their own; people sometimes improve even when given a treatment known to be ineffective; and other factors may cause the improvement in a person's condition." (Page 149)


"The power of suggestion to alter body function is well established by research with hypnosis. Blisters have been induced and warts made to disappear through suggestion." - William T. Jarvis (Page 151)

"Case reports are accounts of a doctor's observations of individual patients." (Page 154)

"Case reports are also vulnerable to several serious biases that controlled research is better able to deal with. One is called social desirability bias. It refers to patients' tendency to strongly wish to respond to treatment in what they perceive as a correct way. People will sometimes report improvement in their condition after treatment simply because they think that's the proper response or because they want to please the doctor.

Another bias can come from doctors themselves. Called investigator bias, it refers to the well-documented fact that investigators or clinicians sometimes see an effect in a patient because they want or expect to see it." (Page 155)


 "Case studies alone generally cannot establish the effectiveness of a treatment beyond a reasonable doubt." (Page 155)

"SUMMARY

An important principle to use when evaluating weird phenomena is that just because something seems real doesn't mean that it is. Part of the reason for this caution is the constructive nature of our perceptions. We often perceive exactly what we expect to perceive, regardless of what's real, and we sometimes experience the misperception of seeing distinct forms in vague and formless stimuli. These constructive processes are notoriously active in UFO sightings, where under poor viewing conditions average people mentally transform lights in a dark sky into alien spacecraft.

Our memories are also constructive and easily influenced by all sorts of factors: stress, expectation, belief, and the introduction of new information. Added to this is the selectivity of memory - we selectively remember certain things and ignore others, setting up a recall bias. No wonder the recall of eyewitnesses is often so unreliable.

How we conceive the data we encounter is also problematic. We often refuse to accept contrary evidence, a reluctance that can be found in just about everyone, including scientists and trained investigators. We have a tendency to believe that a very general personality description applies uniquely to ourselves, a phenomenon known as the Forer effect. The Forer effect is at work in the readings of astrology, biorhythms, fortune-telling, tarot cards, palmistry (palm-reading), and psychic performances. We are often prey to confirmation bias, the tendency to look for and recognize only evidence that confirms our own views. We fall for the availability error and base our judgements on evidence that's vivid or memorable instead of reliable or trustworthy. We are sometimes led astray by the representativeness heuristic, the rule of thumb that like goes with like. And we are generally poor judges of probability and randomness, which leads us to erroneously believe that an event could not possibly be a mere coincidence.

All this points to the fact that anecdotal evidence is not a reliable guide to the truth. Our principle should be that it's reasonable to accept personal experience as reliable evidence only if there's no reason to doubt its reliability. The problems with this kind of evidence are illustrated well in people's personal attempts to judge the effectiveness of treatments and health regimens. The reality is that personal experience alone generally cannot establish the effectiveness of a treatment beyond a reasonable doubt - but controlled scientific studies can." (Page 156)


Critical Thinking - a Simple and Concise Look 7

 "It is not what the man of science believes that distinguishes him, but how and why he believes it." Bertrand Russell

"SCIENCE AND DOGMA

It's tempting to say that what distinguishes science from all other modes of inquiry is that science takes nothing for granted. But this statement is not strictly true, for there is at least one proposition that must be accepted before any scientific investigation can take place - that the world is publicly understandable. This proposition means at least three things: (1) The world has a determinate structure; (2) we can know that structure; and (3) this knowledge is available to everyone. Let's examine each of these claims in turn.

If the world has no determinate structure - if it were formless and nondescript - it couldn't be understood scientifically because it couldn't be explained or predicted. Only where there is an identifiable pattern can there be explanation or prediction. If the world lacked a discernible pattern, it would be beyond our ken.

But a determinate structure is not enough for scientific understanding; we also need a means of apprehending it. As we've seen, humans possess at least four faculties that put us in touch with the world: perception, introspection, memory, and reason. There may be others, but at present, these are the only ones that have proven themselves to be reliable. They're not 100 percent reliable, but the beauty of the scientific method is that it can determine when they're not. The scientific method is self-correcting, and as a result it is our most reliable guide to the truth." (Page 165)


"What makes scientific understanding public is that the information upon which it is based is, in principle, available to everyone. All people willing to make the appropriate observations can see for themselves whether any particular claim is true. No one has to take anybody's word for anything. To be accepted as true, a scientific claim must be able to withstand the closest scrutiny, for only if it does can we be reasonably sure that it's not mistaken." (Page 166)


"SCIENTIFIC METHODOLOGY


The scientific method is often said to consist of the following four steps:


1. Observe

2. Induce general hypotheses or possible explanations for what we have observed

3. Deduce specific things that must also be true if our hypothesis is true

4. Test the hypothesis by checking out the deduced implications

But this conception of the scientific method provides a misleading picture of scientific inquiry. Scientific investigation can occur only after a hypothesis has been formulated, and induction is not the only way of formulating a hypothesis." (Page 167)


"A moment's reflection reveals that data collection in the absence of a hypothesis has little or no scientific value. Suppose, for example, that one day you decide to become a scientist and having read a standard account of the scientific method, you set out to collect some data. Where would you begin? Should you start by cataloguing all the items in your room, measuring them, weighing them, noting their color and composition, and so on? Should you then take these items apart and catalog their parts in a similar manner? Should you note the relationship of these objects to one another, to the fixtures in the room, to objects outside? Clearly there's enough data in your room to keep you busy for the rest of your life.

From a scientific point of view, collecting this data wouldn't be very useful because it wouldn't help us evaluate any scientific hypotheses. The goal of scientific inquiry is to identify principles that are both explanatory and predictive. Without a hypothesis to guide our investigations, there is no guarantee that the information gathered would help us accomplish that goal." (Page 167 - 168)

"Philosopher Karl Popper graphically demonstrated the importance of hypotheses for observation:

Twenty-five years ago I tried to bring home the same point to a group of physics students in Vienna by beginning a lecture with the following instructions: "Take pencil and paper; carefully observe, and write down what you have observed!" They asked, of course, what I wanted them to observe. Clearly the instruction, "Observe!" is absurd. (It is not even idiomatic, unless the object of the transitive verb can be taken as understood.) Observation is always selective. It needs a chosen object, a definite task, an interest, a point of view, a problem." (Page 168)

"Scientific inquiry begins with a problem - why did something occur? How are two or more things related? What is something made of? An observation, of course, is needed to recognize that a problem exists, but any such observation will have been guided by an earlier hypothesis. Hypotheses are needed for scientific observation because they tell us what to look for - they help us to distinguish relevant from irrelevant information.

Scientific hypotheses indicate what will happen if certain conditions are met. By producing these conditions in the laboratory or observing them in the field, we can assess the credibility of the hypotheses proposed. If the predicted results occur, we have reason to believe that the hypothesis in question is true. If not, we have reason to believe that it's false.

Although hypotheses are designed to account for data, they rarely can be derived from data. Contrary to what the traditional account of the scientific method would have us believe, inductive thinking is rarely used to generate hypotheses." (Page 168)

"Hypotheses are created, not discovered, and the process of their creation is just as open-ended as the process of artistic creation. There is no formula for the generating hypotheses. That's not to say that the process of theory construction is irrational, but it is to say that the process is not mechanical. In searching for the best explanation, scientists are guided by certain criteria, such as testability, fruitfulness, scope, simplicity, and conservatism. Fulfilling any one of these criteria, however, is neither a necessary nor A sufficient condition for being a good hypothesis. Science therefore is just as much a product of the imagination as it is of reason.

Even the most beautifully crafted hypotheses, however, can turn out to be false. That's why scientists insist on checking all hypotheses against reality. Let's examine how this check might be done in a particular kind of scientific work - medical research.

In medical research, clinical studies offer the strongest and clearest support for any claim that a treatment is effective because they can establish cause and effect beyond a reasonable doubt. Clinical trials allow scientists to control extraneous variables and test one factor at a time. Properly conducted clinical trials have become the gold standard of medical evidence, having proven themselves again and again." (Page 169)


"Finding the occasional straw of truth awash in a great ocean of confusion and bamboozle requires intelligence, vigilance, dedication and courage." Isaac Asimov (Page 172)

"Conducting medical research is exacting work, and many things can go wrong - and often do. Several scientific reviews of medical studies have concluded that a large proportion of published studies are seriously flawed. (In the words of one review: " The mere fact that research reports are published, even in the most prestigious journals, is no guarantee of their quality. " An expert on the medical literature cautions, "the odds are good that the authors [of published clinical research] have arrived at invalid conclusions.") Confounding variables and bias may creep in and skew results. The sample studied may be too small or not representative. The statistical analysis of data may be faulty. In rare cases, the data may even turn out to be faked or massaged. There may be many other detected or undetected inadequacies, and often these problems are serious enough to cripple A study and cast substantial doubt on its conclusions.

To minimize this potential for error, inadequacy or fraud, medical scientists seek replication. Several studies yielding essentially the same results can render a hypothesis more probable than would a lone study. "Two studies seldom have identical sources of error or bias," says epidemiologist Thomas Vogt. "With three or four studies, the chance is even less that the same flaws are shared." Replication means that evidence for or against a certain treatment generally accumulates slowly. Despite the impression often left by the media, medical breakthroughs arising out of a single study are extremely rare.

It should be clear from this sketch of medical research why the scientific method is such an effective means of acquiring knowledge. Knowledge, you will recall, requires the absence of reasonable doubt. By formulating their hypotheses precisely and controlling their observations carefully, scientists attempt to eliminate as many sources of doubt as possible. They can't remove them all, but often they can remove enough of them to give us knowledge.

Not all science can perform controlled experiments, because not all natural phenomena can be controlled. Much as we might like to, there's little we can do about earthquakes, volcanoes, and sinkholes, let alone comets, meteors, and asteroids. So geological and astronomical hypotheses can't usually be tested in the laboratory. They can be tested in the field, however. By looking for the conditions specified in their hypotheses, geologists and astronomers can determine whether the events predicted actually occur.

Since many legitimate sciences don't perform controlled experiments, the scientific method can't be identified with any particular procedure because there are many different ways to assess the credibility of a hypothesis. In general any procedure that serves systematically to eliminate reasonable grounds for doubt can be considered scientific." (Page 172 - 173)
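
Vogt's point about replication can be put in rough numbers. This is my own sketch in Python with invented figures, and it assumes each study has a 5 percent chance of a spurious positive result and that the studies share no common flaw - which, as he notes, is the assumption that matters most.

# Chance that n independent studies ALL reach the same false positive result by
# luck alone, assuming a 5% false-positive rate per study and no shared flaws.
p_false = 0.05

for n in (1, 2, 3, 4):
    print(n, "studies:", round(p_false ** n, 6))
# One study: 0.05. Three independent studies agreeing by chance: 0.000125.
# The arithmetic only holds to the extent the studies really do have different
# sources of error and bias, which is exactly the point about replication.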

"You don't have to be a scientist to use the scientific method. In fact, many of us use it every day; as biologist Thomas H. Huxley realized, " Science is simply common sense at its best - that is , rigidly accurate in observation, and merciless to fallacy in logic." When getting the right answer is important, we do everything we can to ensure that both our evidence and our explanations are as complete and accurate as possible. In doing so, we are using the scientific method. " (Page 173 - 174)

"Science is organized common sense where many a beautiful theory was killed by an ugly fact." Thomas H. Huxley (Page 174)


"CONFIRMING AND REFUTING HYPOTHESES

The results of scientific inquiry are never final and conclusive but are always provisional and open. No scientific hypothesis can be conclusively confirmed because the possibility of someday finding evidence to the contrary can't be ruled out. Scientific hypotheses always go beyond the information given. They not only explain what has been discovered; they also predict what will be discovered. Since there's no guarantee that these predictions will come true, we can never be absolutely sure that a scientific hypothesis is true.

Just as we can never conclusively confirm a scientific hypothesis, we can never conclusively refute one either. There is a widespread belief that negative results prove a hypothesis false. This belief would be true if predictions followed from individual hypotheses alone, but they don't. Predictions can be derived from a hypothesis only in conjunction with a background theory. This background theory provides information about the objects under study as well as the apparatus used to study them. If a prediction turns out to be false, we can always save the hypothesis by modifying the background theory." (Page 174)

"It's not true, however, that every hypothesis is as good as every other. Although no amount of evidence logically compels us to reject a hypothesis, maintaining a hypothesis in the face of adverse evidence can be manifestly unreasonable. So even if we cannot conclusively say that a hypothesis is false, we can conclusively say that it's unreasonable." (Page 176)


"A hypothesis threatened by recalcitrant data can often be saved by postulating entities or properties that account for the data. Such a move is legitimate if there's an independent means of verifying their existence. If there is no such means, the hypothesis is ad hoc.

Ad hoc literally means "for this case only." It's not simply that a hypothesis is designed to account for a particular phenomenon that makes it ad hoc (if that were the case, all hypotheses would be ad hoc). What makes a hypothesis ad hoc is that it can't be verified independently of the phenomenon it's supposed to explain." (Page 176)

"The real purpose of scientific method is to make sure Nature hasn't misled you into thinking you know something you don't actually know." Robert M. Pirsig (Page 178)


"The moral of this story is to offer a hypothesis to increase our knowledge, there must be some way to test it, for if there isn't, we have no way of telling whether or not the hypothesis is true." (Page 179)

"CRITERIA OF ADEQUACY

To explain something is to offer a hypothesis that helps us understand it. For example, we can explain why a penny left outside turns green by offering the hypothesis that the penny is made out of copper and that when copper oxidizes, it turns green. But for any set of facts, it's possible to devise any number of hypotheses to account for them. Suppose that someone wanted to know what makes fluorescent lights work. One hypothesis is that inside each tube is a little gremlin who creates light (sparks) by striking his pickax against the side of the tube. In addition to the one gremlin hypothesis, there is the two gremlin hypothesis, the three gremlin hypothesis, and so on. Because there is always more than one hypothesis to account for any set of facts and because no set of facts can conclusively confirm or refute any hypothesis, we must appeal to something besides the facts in order to decide which explanation is the best. What we appeal to are criteria of adequacy. As we saw in Chapter 3, these criteria are used in any inference to the best explanation to determine how well a hypothesis accomplishes the goal of increasing our understanding." (Page 179)

"Hypotheses produce understanding by systematizing and unifying our knowledge. They bring order and harmony to facts that may have seemed disjointed and unrelated. The extent to which a hypothesis systematizes and unifies our knowledge is determined by how well it meets the criteria of adequacy. In its search for understanding, science tries to identify those hypotheses that best meet these criteria. As anthropologist Marvin Harris puts it: " The aim of scientific research is to formulate explanatory theories which are (1) predictive (or retrodictive), (2) testable (or falsifiable), (3) parsimonious [simple], (4) of broad scope, and (5) integratable or cumulative within a coherent and expanding corpus of theories." The better a hypothesis meets these criteria, the more understanding it produces. Let's take a closer look at how these criteria work.

Testability

Since science seeks understanding, it's interested only in those hypotheses that can be tested - if a hypothesis can't be tested, there is no way to determine whether it's true or false. Hypotheses, however, can't be tested in isolation, for as we've seen, hypotheses have observable consequences only in the context of a background theory. So to be testable, a hypothesis, in conjunction with a background theory, must predict something more than what is predicted by the background theory alone. If a hypothesis doesn't go beyond the background theory, it doesn't expand our knowledge and hence is scientifically uninteresting.

Take the gremlin hypothesis, for example. To qualify as scientific, there must be some test we can perform - other than turning on the lights - to detect the presence of gremlins. Whether there is such a test will depend on what the hypothesis tells us about gremlins. If it tells us that they are visible to the naked eye, it can be tested by simply breaking open a fluorescent light and looking for them. If it tells us that they are invisible but sensitive to heat and capable of emitting sounds, it can be tested by putting a fluorescent light in boiling water and listening for tiny screams. But if it tells us that they are incorporeal or so shy that any attempt to detect them makes them disappear, it can't be tested and hence is not scientific.

Scientific hypotheses can be distinguished from nonscientific ones, then, by the following principle:

A hypothesis is scientific only if it is testable, that is, only if it predicts something more than what is predicted by the background theory alone."  (Page 180)
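Restated in my own notation (not the book's), with H the hypothesis, B the background theory, and P some observable prediction, the testability requirement says there must be at least one P such that

H \wedge B \models P \quad \text{while} \quad B \not\models P

In other words, the hypothesis has to license a prediction that the background theory could not deliver on its own.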

"The gremlin hypothesis predicts that if we turn on a fluorescent light, it will emit light. But this action doesn't mean that the gremlin hypothesis is testable, because the fact that fluorescent lights emit light is what the gremlin hypothesis was introduced to explain. That fact is part of its background theory. To be testable, a hypothesis must make a prediction that goes beyond its background theory. A prediction tells us that if certain conditions are realized, then certain results will be observed. If a prediction can be derived from a hypothesis and its background theory that cannot be derived from its background theory alone, then the hypothesis is testable.

Karl Popper realized long ago that untestable hypotheses cannot legitimately be called scientific. What distinguishes genuine scientific hypotheses from pseudoscientific ones, he claims, is that the former are falsifiable. Although his insight is a good one, it has two shortcomings: First, the term is unfortunate, for no hypothesis is, strictly speaking, falsifiable because it's always possible to maintain a hypothesis in the face of unfavorable evidence by making suitable alterations in the background theory.

The second weakness in Popper's theory is that it doesn't explain why we hold onto some hypotheses in the face of adverse evidence. When new hypotheses are first proposed, there is often a good deal of evidence against them. As philosopher of science Imre Lakatos notes, "When Newton published his Principia, it was common knowledge that it could not properly explain even the motion of the moon; in fact, lunar motion refuted Newton.... All hypotheses, in this sense, are born refuted and die refuted." Nonetheless, we give credence to some and not others. Popper's theory is hard-pressed to explain why this is so. Recognizing that other criteria play a role in evaluating hypotheses makes sense of this situation." (Page 181 - 182)

"Fruitfulness

One thing that makes some hypotheses attractive even in the face of adverse evidence is that they successfully predict new phenomena and thus open new lines of research. Such hypotheses possess the virtue of fruitfulness. For example, Einstein's theory of relativity predicts that light rays traveling near massive objects will appear to be bent because the space around them is curved. At the same time Einstein proposed his theory, common wisdom was that since light has no mass, light rays travel in Euclidean straight lines. To test Einstein's theory, physicist Sir Arthur Eddington mounted an expedition to Africa in 1919 to observe a total eclipse of the sun. If light rays are bent by massive objects, he reasoned, then the position of stars whose light passes near the sun should appear to be shifted from their true position. The shift should be detectable by comparing a photograph taken during the eclipse with one taken at night of the same portion of the sky. When Eddington compared the two photographs, he found that stars near the sun during the eclipse did appear to have moved more than those farther away and that the amount of their apparent movement was what Einstein's theory predicted. (Einstein's theory predicted a deflection of 1.75 seconds of arc. Eddington observed a deflection of 1.64 seconds of arc, well within the possible error of measurement.) Thus Einstein's theory had successfully predicted a phenomenon that no one had previously thought existed. In so doing, it expanded the frontiers of our knowledge." (Page 182)
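As a back-of-the-envelope check of my own (not from the book), the 1.75 arcsecond figure can be reproduced from the standard general relativistic formula for light grazing the sun, a deflection of 4GM/(c²R):

import math

# Rough check (mine, not the authors') of the 1.75 arcsecond prediction.
# General relativity predicts a deflection of 4GM/(c^2 * R) for a light ray
# grazing a body of mass M at radius R.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30   # mass of the sun, kg
R_sun = 6.957e8    # radius of the sun, m
c = 2.998e8        # speed of light, m/s

deflection_rad = 4 * G * M_sun / (c ** 2 * R_sun)
deflection_arcsec = math.degrees(deflection_rad) * 3600
print(f"Predicted deflection: {deflection_arcsec:.2f} arcseconds")  # about 1.75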


"Other things being equal, the best hypothesis is the one that is the most fruitful, that makes the most successful novel predictions.

If two hypotheses do equally well with regard to all the other criteria of adequacy, the one with greater fruitfulness is better.

Having greater fruitfulness by itself does not necessarily make a hypothesis superior to its rivals, however, because it might not do as well as they do with respect to other criteria of adequacy." (Page 184)


"SCOPE

The scope of a hypothesis - or the amount of diverse phenomena explained and predicted by it - is also an important measure of its adequacy: the more a hypothesis explains and predicts, the more it unifies and systematizes our knowledge and the less likely it is to be false. For example, one reason that Einstein's theory of relativity came to be preferred over Newton's theories of gravity and motion is that it had greater scope. It could explain and predict everything that Newton's theories could, as well as some things that they couldn't. For instance, Einstein's theory could explain a variation in Mercury's orbit, among other phenomena.

It had been known since the middle of the nineteenth century that the planet Mercury's perihelion (the point at which it is closest to the sun) does not remain constant - that the point rotates slowly, or precesses, around the sun at a rate of about 574 seconds of arc per century. Using Newton's laws of motion and gravity, it was possible to account for about 531 seconds of arc of this motion. Leverrier tried to account for the missing 43 seconds of arc in the same way he had accounted for the discrepancies in the orbit of Uranus - by postulating the existence of another planet between Mercury and the sun. He named this planet Vulcan (Star Trek fans take note), but repeated observations failed to find it. Einstein's theory of relativity, however, can account for the precession of Mercury's perihelion without postulating the existence of another planet. According to relativity theory, space is curved around massive objects. Since Mercury is so close to the sun, the space it travels through is more warped (again, Star Trek fans take note) than is the space that the rest of the planets travel through. Using relativity theory, it is possible to calculate the extent to which space is thus bent. It turns out to be just enough to account for the missing 43 seconds of arc in the precession of Mercury's perihelion." (Page 185 - 186)
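Again as a side calculation of my own (not from the book), the missing 43 seconds of arc can be estimated from the standard relativistic formula for perihelion advance, 6πGM/(a(1 − e²)c²) per orbit, multiplied by the number of orbits Mercury completes in a century:

import math

# Rough check (mine, not the authors') of the 43 arcseconds-per-century figure.
# General relativity adds 6*pi*G*M / (a * (1 - e^2) * c^2) radians of perihelion
# advance per orbit for a planet with semi-major axis a and eccentricity e.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30     # mass of the sun, kg
c = 2.998e8          # speed of light, m/s
a = 5.791e10         # Mercury's semi-major axis, m
e = 0.2056           # Mercury's orbital eccentricity
period_days = 87.969 # Mercury's orbital period

advance_per_orbit = 6 * math.pi * G * M_sun / (a * (1 - e ** 2) * c ** 2)
orbits_per_century = 100 * 365.25 / period_days
arcsec_per_century = math.degrees(advance_per_orbit) * 3600 * orbits_per_century
print(f"Relativistic precession: {arcsec_per_century:.0f} arcseconds per century")  # about 43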


"For Langevin, Einstein's theory is superior to Newton's because it has greater explanatory and predictive power. The principle he's relying on is this one:

Other things being equal, the best hypothesis is the one that has the greatest scope, that is, that explains and predicts the most diverse phenomena." (Page 186)  


"Simplicity

Interestingly enough, even though considerations of fruitfulness and scope loomed large in the minds of many of those scientists who accepted Einstein's theory, simplicity was what Einstein saw as its main virtue. He wrote, "I do not by any means find the chief significance of the general theory of relativity in the fact that it has predicted a few minute observable facts, but rather in the simplicity of its foundation and in its logical consistency." For Einstein, simplicity is a theoretical virtue par excellence.

Simplicity is notoriously difficult to define. For our purposes, we may say that the simpler of two hypotheses is the one that makes the fewest assumptions. Simplicity is valued for the same reason that scope is - the simpler a theory is, the more it unifies and systematizes our knowledge and the less likely it is to be false because there are fewer ways for it to go wrong." (Page 186 - 187)

"Other things being equal, the best hypothesis is the simplest one, that is, the one that makes the fewest assumptions.

As we've seen, hypotheses often explain phenomena by assuming that certain entities exist. The simplicity criterion tells us that, other things being equal, the fewer such assumptions a theory makes, the better it is. When searching for an explanation, then, it's wise to cleave to the principle known as Occam's Razor (in honor of the medieval philosopher, William of Occam, who formulated it): Do not multiply entities beyond necessity. In other words, assume no more than is required to explain the phenomenon in question. If there's no reason to assume that something exists, it's irrational to do so." (Page 188)

"Conservatism

Since consistency is a necessary condition of knowledge, we should be wary of accepting a hypothesis that conflicts with our background information. As we've seen, not only does accepting such a hypothesis undermine our claim to know; it also requires rejecting the beliefs it conflicts with. If those beliefs are well established, the chances of the new hypothesis being true are not good. In general, then, the more conservative a hypothesis is (that is, the fewer well-established beliefs it conflicts with), the more plausible it is. The criterion of conservatism can be stated as follows:

Other things being equal, the best hypothesis is the one that is the most conservative, that is, the one that fits best with established beliefs.

Things aren't always equal, however. It may be perfectly reasonable to accept a hypothesis that is not conservative provided that it possesses other criteria of adequacy. Unfortunately, there's no foolproof method for determining when conservatism should take a backseat to other criteria.

Indeed, there is no fixed formula for applying any of the criteria of adequacy. We can't quantify how well a hypothesis does with respect to any of them, nor can we definitively rank the criteria in order of importance. At times we may rate conservatism more highly than scope, especially if the hypothesis in question is lacking in fruitfulness. At other times we may rate simplicity more highly than conservatism, especially if the hypothesis has at least as much scope as our existing hypothesis. Choosing between theories is not the purely logical process it is often made out to be. Like judicial decision making, it relies on factors of human judgement that resist formalization.

The process of theory selection, however, is not subjective. There are many distinctions we can't quantify that nevertheless are perfectly objective. We can't say, for example, exactly when day turns into night or when a person with a full head of hair turns bald. Nevertheless, the distinctions between night and day or baldness and hirsuteness are as objective as they come." (Page 189)


"There are certainly borderline cases that reasonable people can disagree about, but there are also clear-cut cases where disagreement would be irrational. It would simply be wrong to believe that a person with a full head of (living) hair is bald. If you persisted in such a belief, you would be irrational. Similarly, it would simply be wrong to believe that the phlogiston theory is a good scientific theory. In general, if someone believes a theory that clearly fails to meet the criteria of adequacy, that person is irrational. " (Page 189 - 190)

"CREATIONISM,EVOLUTION, AND CRITERIA OF ADEQUACY

Criteria of adequacy are what we appeal to when trying to decide which hypothesis best explains a phenomenon. The best hypothesis is the one that explains the phenomenon and meets the criteria of adequacy better than any of its competitors. To make a rational choice among hypotheses, then, it's important to know what these criteria are and how to apply them. Philosopher and historian Thomas Kuhn agrees: "It is vitally important," he tells us, "that scientists be taught to value these characteristics and that they be provided with examples that illustrate them in practice." (Page 190)

"In recent years, a number of people (as well as a number of state legislatures) have claimed that the theory of creationism is just as good as the theory of evolution and thus should be given equal time in the classroom. Our discussion of the criteria of adequacy has given us the means to evaluate this claim. If creationism is just as good a theory as evolution, then it should fulfill the criteria of adequacy just as well as evolution does. Let's see if that is the case.

The theory of evolution, although not invented by Darwin, received its most impressive formulation at his hand. In 1859, he published The Origin of Species, in which he argued that the theory of evolution by natural selection provided the best explanation of a number of different phenomena:

It can hardly be supposed that a false theory would explain, in so satisfactory a manner as does the theory of natural selection, the several large classes of facts above specified. It has recently been objected that this is an unsafe method of arguing, but it is a method used in judging of the common events of life, and has often been used by the greatest natural philosophers.

Darwin found that organisms living in isolated habitats (such as islands) have forms related to but distinct from organisms living in neighboring habitats, that there are anatomical resemblances between closely related species, that the embryos of distantly related species resemble one another more than the adults of those species, and that fossils show a distinct progression from the simplest forms to the most complex. The best explanation of these facts, Darwin argued, was that organisms adapt to their environment through a process of natural selection. The hypothesis that all creatures were created by God in one fell swoop, he argued, offers no explanation for these facts.

Darwin realized that creatures possess many different physical characteristics, and that the characteristics they possess are often inherited from their parents. He reasoned that when an inherited characteristic (like an opposable thumb) increased an organism's chances of living long enough to reproduce, that characteristic would be passed to the next generation. As this process continued, the characteristic would become more prevalent in succeeding generations. This process Darwin called natural selection, which was the driving force behind evolution. Darwin was not aware of the mechanism by which these characteristics were transmitted. The discovery of that mechanism - the science of genetics - has further bolstered Darwin's theory, for it has been found that the number of chromosomes and their internal organization is similar among closely related species." (Page 190 - 191)

"A fact is a true statement. A theory is a statement about the way the world is. If the world is the way the theory says it is - if the theory is true - then the theory is a fact. For example, if the Copernican theory of the solar system is true - if planets revolve around the sun - then the Copernican theory is a fact. If Einstein's theory of relativity is true - if E = mc - then Einstein's theory of relativity is a fact. Similarly, if the theory of evolution is true, then it's a fact.

So the question arises: when are we justified in believing something to be true? We have already seen the answer: when it provides the best explanation of some phenomena. Biologists consider evolution to be a fact because, in the words of Theodosius Dobzhansky, "Nothing in biology makes sense except in the light of evolution." Evolution is a fact because it's the best theory of how biological change occurs over time.

What often goes unnoticed in these discussions is that every fact is a theory. Take the fact that you're reading a book right now, for example. You're justified in believing that to be a fact because it provides the best explanation of your sense experience. But it's not the only theory that explains your sense experience. After all, you could be dreaming, you could be hallucinating, you could be a brain in a vat, you could be plugged into the matrix, you could be receiving telepathic messages from extraterrestrials, and so on. All of those theories explain your sense experience. You shouldn't accept any of them, however, because none of them is as good an explanation as the ordinary one.

The Intelligent Design theory is on a par with the theory that extraterrestrials are putting thoughts in your head. It's a possible explanation of the evidence, but not a very good one because, like the extraterrestrial theory, it doesn't identify the designer nor does it tell us how the designer did it. Consequently, it doesn't meet the criteria of adequacy as well as the evolutionary theory does. In a court of law, no one would take seriously an explanation of a crime that didn't identify the criminal or how he committed the crime. Similarly, in a science classroom, no one should take seriously an explanation that doesn't identify the cause or how the cause brings about its effect. Evolution does both and does it better than any competing theory. So we're justified in believing it to be true." (Page 195)


"As Plato realized over 2,500 years ago, to say that "God did it" is not to offer an explanation, but to offer an excuse for not having an explanation (Cratylus 426a)." (Page 199)

"We should accept an extraordinary hypothesis only if no ordinary one will do."  (Page 211)


The authors described a problem that I have run into with true believers. They described how parapsychology researchers dismiss research from people who don't believe in psychic abilities. The researchers claim that a lack of belief in psychic abilities causes the abilities to fail.

"The ad hoc character of this hypothesis should be obvious. There's no way to test it because no possible data could count against it. Every apparent counterexample can be explained away by appeal to the unconscious. Moreover, accepting it would make the whole field of parapsychology untestable. No unsuccessful experiments could count against the existence of psi because they could simply be the result of experimenter bias. This sort of reasoning convinces many researchers that parapsychology is a pseudoscience." (Page 214)

"Can individually unconvincing studies be collectively convincing? No. What a study lacks in quality cannot be made up in quantity. The evidence generated by questionable studies remains questionable, no matter how many of them there are." (Page 215)

"The amount of understanding produced by a theory is determined by how well it meets the criteria of adequacy: testability (whether it can be tested), fruitfulness (whether it successfully predicts new phenomena), scope (the amount of diverse phenomena explained by it), simplicity (how many assumptions it makes), and conservatism (how well it fits with established beliefs)." (Page 220)


Critical Thinking - a Simple and Concise Look 8

 "We've seen that, in themselves, strong feelings of subjective certainty regarding a personal experience don't increase the reliability of that experience one bit. Only if we have no good reason to doubt a personal experience can we accept it as a reliable guide to what's real - whether about UFOs, ghosts, witches, or the curative power of vitamin C - personal experience is frequently shakier than we realize.

We've seen why we can't escape the fact that there is indeed such a thing as objective truth. There is a way the world is. The idea that truth is relative to individuals, to societies, or to conceptual schemes is unreasonable. Similarly, the fashionable notions that people create their own reality or create reality by group consensus have little to recommend them. 

We've also investigated what it means to say that we know something. We can know many things - including weird things - if we have good reasons to believe them and no good reasons to doubt them. We have good reasons to doubt a proposition when it conflicts with other propositions we have good reasons to believe, when it conflicts with well-established background information, or when it conflicts with expert opinion regarding the evidence. If we have good reason to doubt a proposition, we can't know it. The best thing we can do is proportion our belief to the evidence. If we don't know something, a leap of faith can never help us know it. We can't make something true just by believing it to be true. To accept a proposition on faith is to believe it without justification. Likewise, mystical experience doesn't provide us with a privileged way of knowing. Claims of knowledge based on mystical experience must pass the same rational tests as any other kind of experience.

We've explored why - even though the scientific method can never prove or disprove anything conclusively - science is our most reliable means of establishing an empirical proposition beyond a reasonable doubt. It offers us a model for assessing new hypotheses, or claims, about all manner of extraordinary events and entities - a model that can serve scientists and nonscientists alike. If we want to know whether a hypothesis is true, we'll need to use this model in one form or another. The model requires that we judge a new hypothesis in light of alternative, competing hypotheses and apply to each of these alternatives the best yardstick we have - the criteria of adequacy - to see which hypothesis measures up. Under pressure from the criteria of adequacy, some hypotheses may collapse from lack of sturdy evidence or sound reasons to support them. Other hypotheses may not tumble completely but will be shown to be built on weak and rickety foundations." (Page 228 - 229)


"One, though, may emerge as the best hypothesis of them all, strong and tall because it rests on a firm base of good reasons." (Page 229 - 230)

"Judge a man by his questions rather than his answers. - Voltaire" (Page 230)

"THE SEARCH FORMULA

Our formula for inquiry consists of four steps, which we represent by the acronym SEARCH. The letters stand for the key words in the four steps:

1. State the claim.

2. Examine the Evidence for the claim.

3. Consider Alternative hypotheses.

4. Rate, according to the Criteria of adequacy, each Hypothesis.

The acronym is arbitrary and artificial, but it may help you remember the formula's vital components. Go through these steps any time you're faced with an extraordinary claim.

Note that throughout this chapter we use the words hypothesis and claim interchangeably. We do so because any weird claim, like any claim about events and entities, can be viewed as a hypothesis - as an explanation of a particular phenomenon. Thinking of weird claims as hypotheses is important because effectively evaluating weird claims involves essentially the same hypothesis-assessing procedure used in science." (Page 230 - 231)

"Step 1: State the Claim

Before you can carefully examine a claim, you have to understand what it is. It's vital to state the claim in terms that are as clear and as specific as possible. "Ghosts are real" is not a good candidate for examination because it's vague and nonspecific. A better claim is "The disembodied spirits of dead persons exist and are visible to the human eye." Likewise, "Astrology is true" is not much to go on. It's better to say, "Astrologers can correctly identify someone's personality traits by using sun signs." Even these revised claims aren't as unambiguous and definitive as they should be. (Terms in the claims, for example, could be better defined. What is meant by "spirit"? What does it mean to "correctly identify someone's personality traits"?) But many of the extraordinary claims you run into are of this caliber. The point is that before examining any claim, you must achieve maximum clarity and specificity of what the claim is." (Page 231)

"Step 2: Examine the Evidence for the Claim

Ask yourself what reasons there are for accepting the claim. That is, what empirical evidence or logical arguments are there in the claim's favor? Answering this question entails taking inventory of both the quantity and quality of the reasons for believing that the claim is true. An honest and thorough appraisal of reasons must include:

1. Determining the exact nature and limitations of the empirical evidence. You should assess not only what the evidence is but whether there are any reasonable doubts regarding it. You have to try to find out if it's subject to any of the deficiencies we've discussed in this book - the distortions of human perception, memory, and judgement; the errors and biases of scientific research; the difficulties inherent in ambiguous data. Sometimes even a preliminary survey of the facts may force you to admit there really isn't anything mysterious that needs explaining. Or perhaps investigating a little mystery will lead to a bigger mystery. At any rate, attempting an objective assessment of the evidence takes courage. Many true believers have never taken this elementary step.

2. Discovering if any of these reasons deserve to be disqualified. As we've seen, people frequently offer considerations in support of a claim that should be discounted. These considerations include wishful thinking, faith, unfounded intuition, and subjective certainty. The problem is that these factors aren't reasons at all. In themselves, they can't provide any support for a claim.

3. Deciding whether the hypothesis in question actually explains the evidence. If it doesn't - if important factors are left out of account - the hypothesis is not a good one. In other words, a good hypothesis must be relevant to the evidence it's intended to explain. If it isn't, there's no reason to consider it any further." (Page 231 - 232)

"No man really becomes a fool until he stops asking questions. - Charles Steinmetz" (Page 232)

"Step 3: Consider Alternative Hypotheses

It's never enough to consider only the hypothesis in question and its reasons for acceptance. If you ever hope to discover the truth, you must also weigh alternative hypotheses and their reasons.

Take this hypothesis, for example: Rudolph the Red-Nosed Reindeer - Santa's funny, flying, furry headlight - is real and lives at the North Pole. As evidence for this hypothesis we could submit these facts: Millions of people (mostly children) believe Rudolph to be real; his likeness shows up everywhere during the Christmas holidays; given the multitude of reindeer in the world and their long history, it's likely that at some point in time a reindeer with flying capabilities would either evolve or be born with the necessary mutations; some people say that they have seen Rudolph with their own eyes. We could go on and on and build a fairly convincing case for the hypothesis - soon you may even come to believe that we were on to something.

The hypothesis sounds great by itself, but when considered alongside an alternative hypothesis - that Rudolph is a creature of the imagination created in a Christmas song - it looks ludicrous. The song hypothesis is supported by evidence that's overwhelming; it doesn't conflict with well-established theory in biology (as the real-Rudolph hypothesis does); and unlike its competitor, it requires no postulations about new entities.

This third step involves creativity and maintaining an open mind. It requires asking whether there are other ways to account for the phenomenon at hand and, if there are, what reasons are in favor of these alternative hypotheses. This step involves applying step 2 to all competing explanations.

It's also important to remember that when people are confronted with some extraordinary phenomenon they often immediately offer a hypothesis involving the paranormal or supernatural and then can't imagine a natural hypothesis to account for the facts. As a result, they assume that the paranormal or supernatural hypothesis must be right. But this assumption is unwarranted. Just because you can't think of a natural explanation doesn't mean there isn't one. It may be (as has often been the case throughout history) that you're simply unaware of the correct natural explanation. As pointed out in chapter 2, the most reasonable response to a mystifying fact is to keep looking for a natural explanation.

We all have a built-in bias that urges us to latch onto a favorite hypothesis and ignore or resist all alternatives. We may believe that we needn't look at other explanations since we know that our favorite one is correct. This tendency may make us happy (at least for a while), but it's also a good recipe for delusion. We must work to counteract this bias. Having an open mind means being willing to consider any possibility and changing your view in light of good reasons." (Page 232 - 233)

"Step 4: Rate, According to the Criteria of Adequacy, Each Hypothesis

Now it's time to weigh competing hypotheses and see which are found wanting and which are worthy of belief. Simply cataloging the evidence for each hypothesis isn't enough. We need to consider other factors that can put that evidence into perspective and help us weigh hypotheses when there's no evidence at all, which is often the case with weird things. To command our assent, extraordinary claims must provide exemplary explanations. That is, they must explain the phenomena better than any competing explanation. As we saw in Chapter 6, the way to determine which explanation is best is to apply the criteria of adequacy. By applying them to each hypothesis, we can often eliminate some hypotheses right away, give more weight to some than to others, and decide between hypotheses that may at first seem equally strong.

1. Testability. Ask: Can the hypothesis be tested? Is there any possible way to determine whether the hypothesis is true or false? Many hypotheses regarding extraordinary phenomena aren't testable. This does not mean they're false. It means they're worthless. They are merely assertions that we'll never be able to know. What if we claim that there is an invisible, undetectable gremlin in your head that sometimes causes you to have headaches? As an explanation for your headaches, this hypothesis is interesting but trivial. Since by definition there's no way to determine if this gremlin really exists, the hypothesis is amazingly uninformative. You can assign no weight to such a claim.

2. Fruitfulness. Ask: Does the hypothesis yield observable, surprising predictions that explain new phenomena? Any hypothesis that does so gets extra points. Other things being equal, hypotheses that make accurate, unexpected predictions are more likely to be true than hypotheses that don't. (Of course, if they yield no predictions, this in itself doesn't show that they're false.) Most hypotheses regarding weird things don't make observable predictions.

3. Scope. Ask: How many different phenomena can the hypothesis explain? Other things being equal, the more it explains, the less likely it is to be mistaken. In Chapter 5 we discussed the well-confirmed hypothesis that human perception is constructive. As we pointed out, the hypothesis explains a broad range of phenomena, including perceptual size constancy, misperception of stimuli, hallucinations, pareidolia, certain UFO sightings, and more. A hypothesis that explains only one of these phenomena (for example, the hypothesis that UFO sightings are caused by actual alien spacecraft) would be much less impressive - unless it had other things in its favor like compelling evidence.

4. Simplicity. Ask: Is this hypothesis the simplest explanation for the phenomenon? Generally, the simplest hypothesis that explains the phenomenon is the best, the one least likely to be false. Simplest means makes the fewest assumptions. In the realm of weird things, simplicity is often a matter of postulating the existence of the fewest entities. Let's say you get into your car one morning, put the key in the ignition, and try to start the engine but find that it won't start. One hypothesis for this phenomenon is that the car battery is dead. Another is that a poltergeist (a mischievous spirit) has somehow caused your car not to start. The battery hypothesis is the simplest (in addition to being testable, able to yield predictions, and capable of explaining several phenomena) because it doesn't require postulating the existence of any mysterious entities. The poltergeist hypothesis, though, does postulate the existence of an entity (as well as assuming that the entity has certain capabilities and tendencies). Thus the criterion of simplicity shows us that the battery hypothesis has the greater chance of being right.

5. Conservatism. Ask: Is the hypothesis consistent with our well-founded beliefs? That is, is it consistent with the empirical evidence - with results from trustworthy observations and scientific tests, with natural laws, or with well-established theory? Trying to answer this question takes you beyond merely cataloging evidence for hypotheses to actually assigning weight to hypotheses in light of all the available evidence. Other things being equal, the hypothesis most consistent with the entire corpus of our knowledge is the best bet, the one most likely to be true." (Page 233 - 234)
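To make step 4 concrete, here is a minimal sketch of my own (not from the book) showing how the comparison might be tabulated, using the dead-battery versus poltergeist example above. The 0-2 scores are illustrative guesses, and as the authors stress the criteria can't really be quantified or ranked by formula; the sketch only shows the shape of the comparison.

# Minimal sketch of step 4 of SEARCH (mine, not the authors'). Each hypothesis
# gets a rough 0-2 score on each criterion of adequacy; the numbers are
# illustrative guesses, not measurements.
CRITERIA = ["testability", "fruitfulness", "scope", "simplicity", "conservatism"]

hypotheses = {
    "dead battery": {"testability": 2, "fruitfulness": 2, "scope": 2,
                     "simplicity": 2, "conservatism": 2},
    "poltergeist":  {"testability": 0, "fruitfulness": 0, "scope": 1,
                     "simplicity": 0, "conservatism": 0},
}

for name, scores in hypotheses.items():
    total = sum(scores[c] for c in CRITERIA)
    detail = ", ".join(f"{c}={scores[c]}" for c in CRITERIA)
    print(f"{name}: {total}/10 ({detail})")

A table like this is an aid to thinking, not a substitute for the judgment the authors describe.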

"All is mystery; but he is a slave who will not struggle to penetrate the dark veil. - Benjamin Disraeli" (Page 233)

"The mind is like the stomach. It is not how much you put into it that counts, but how much it digests. - Albert Jay Nock" (Page 235)

"I honestly believe it is better to know nothing than to know what ain't so - Josh Billings" (Page 240)

"He that will not reason is a bigot; he that cannot reason is a fool; and he that does not reason is a slave. - William Drummond" (Page 246)

"As for hypnosis, it's not the revealer of truth that many believe it to be. Research has shown that even deeply hypnotized people can willfully lie and that a person can fake hypnosis and fool even very experienced hypnotists. More to the point, research also shows that when hypnotized subjects are asked to recall a past event, they will fantasize freely, creating memories of things that never happened. Martin T. Orne, one of the world's leading experts on the use of hypnosis to obtain information about past events, sums up the situation like this: 

The hypnotic suggestion to relive a past event, particularly when accompanied by questions about specific details, puts pressure on the subject to provide information . . . This situation may jog the subject's memory and produce some increased recall, but it will also cause him to fill in details that are plausible but consist of memories or fantasies from other times. It is extremely difficult to know which aspects of hypnotized recall are historically accurate and which aspects have been confabulated [made up and confused with real events]. . . .There is no way, however, by which anyone - even a psychologist or psychiatrist with extensive training in the field of hypnosis - can for any particular piece of information determine whether it is actual memory versus a confabulation unless there is independent verification.

Orne and other experts have also emphasized how extremely suggestible hypnotic subjects are and how easy it is for a hypnotist to unintentionally induce pseudomemories  in the subject:

If a witness is hypnotized and has factual information casually gleaned from newspapers or inadvertent comments made during prior interrogation or in discussion with others who might have knowledge about the facts, many of these bits of knowledge will become incorporated and form the basis of any pseudo-memories that develop. . . . If the hypnotist has beliefs about what actually occurred, it is exceedingly difficult for him to prevent himself from inadvertently guiding the subject's recall so that [the subject] will eventually "remember" what he, the hypnotist, believes actually happened.


Orne describes a simple experiment he has repeatedly conducted that shows the limits of hypnotism. First he verifies that a subject went to bed at a certain time at night and slept straight through until morning. Then he hypnotizes the subject and asks her to relive that night. Orne asks the subject if she heard two loud noises during the night (noises that didn't, in fact, happen). Typically, the subject says that she was awakened by the noises and then describes how she arose from bed to investigate. If Orne asks her to look at the clock, the subject identifies a specific time - at which point the subject was actually asleep and in bed. After hypnosis, the subject remembers the non-event as though it actually happened. A pseudomemory was thus created by a leading question that may seem perfectly neutral.

A study has even been conducted to see if people who had never seen a UFO nor were well informed about UFOs could, under hypnosis, tell "realistic" stories about being abducted by aliens. The conclusion was that they can. The imaginary abductees easily and eagerly invented many specific details of abductions. The researchers found "no substantive differences" between these descriptions and those given by people who have claimed to be abducted.

Research also suggests that hypnosis not only induces pseudo-memories, but also increases the likelihood that they'll become firmly established. As psychologist Terence Hines says:

What hypnosis does do - and this is especially relevant to the UFO cases - is to greatly increase hypnotized subjects' confidence that their hypnotically induced memories are true. This increase in confidence occurs for both correct and incorrect memories. Thus, hypnosis can create false memories, but the individual will be especially convinced that those memories are true. Their belief, of course, does not indicate whether the memory is actually true or false." (Page 247 - 249)

The authors go on at length with numerous details about certain personality types being likely to have vivid fantasies and dreams that can be mistaken for actual occurrences. Research in this area is used to explore both UFO abduction claims and ghost sightings. The authors describe this in great detail, and I have seen very plausible explanations along these lines in other literature as well.


The authors also debunk the 9/11 truther claims in detail. I can assure you that if you want to dig into these specific claims, the book is well worth your time.


" Weird things are events or objects that seem impossible, given what we know about the world. To explain these things, people often postulate powers or properties that are just as weird. We're justified in believing in these powers or properties, however, only if they provide the best explanation of the phenomena in question. We can evaluate how good competing explanations are in relation to each other by determining how much understanding they produce. The amount of understanding produced by an explanation is determined by the extent to which it systematizes and unifies our knowledge, and that is determined by how well it meets the criteria of adequacy: simplicity, conservatism, scope, and fruitfulness. The SEARCH method highlights the steps that should be taken when evaluating an explanation: State the claim, examine the Evidence for it, consider Alternative explanations, and Rate according to the Criteria of adequacy, each Hypothesis. " (Page 300)








