Errors in Inductive Reasoning

Author’s note: The following is excerpted from Chapter 6 of my book in progress, “The Inductive Method in Physics.”

In contrast to perception, thinking is a fallible process. This fact gives rise to our need for the method of logic.

Logic, when properly applied, enables us to arrive at true conclusions. But it comes with no guarantee that we will apply the method correctly. The laws of deduction were identified by Aristotle more than two millennia ago, and yet people still commit deductive fallacies. If one remains attentive to the evidence, however, further use of logic leads to the correction of these errors. The same is true of false generalizations reached by induction. Although even the best thinkers can commit inductive errors, such errors wither and die in the light shed by continued application of observation and logic.

During the past century, however, many philosophers have rejected the validity of induction and argued that every generalization is an error. For example, Karl Popper claimed that all the laws of Kepler, Galileo, and Newton have been “falsified”; in his view, no laws or generalizations have ever been or can ever be proven true. By demanding that a true generalization must apply with unlimited precision to an unlimited domain, Popper upheld a mystical view of “truth” that is forever outside the reach of man and accessible only to an omniscient god. In the end, he was left with two types of generalizations: those that have been proven false and those that will be proven false. He was then accused by later philosophers of being too optimistic; they insisted that nothing can be proven, not even a generalization’s falsehood.

Such skeptics commit—on a grand scale—the fallacy of dropping context. The meaning of our generalizations is determined by the context that gives rise to them; to claim that a generalization is true is to claim that it applies within a specific context. The data subsumed by that context are necessarily limited in both range and precision.

Galileo, for example, committed no error when he identified the parabolic nature of trajectories. Obviously, he was not referring to the 6,000-mile path of an intercontinental ballistic missile (to which his law does not apply). He was referring to terrestrial bodies that could be observed and studied in his era—all of which remained close to the surface of the earth, traveled perhaps a few hundred feet, and moved in accordance with his law. Similarly, when Newton spoke of bodies and their motion, he was not referring to the movement of an electron in an atom or of a proton in a modern accelerator. He was referring to observable, macroscopic bodies, ranging from pebbles to stars. The available context of knowledge determines the referents of the concepts that are causally related in a generalization.

The context also includes the accuracy of the data integrated by a law. For example, Kepler’s laws of planetary motion are true; they correctly identify causal relationships that explain and integrate the data available to Kepler. By Newton’s era, however, the measurement errors in astronomical data had been reduced by more than a factor of ten, and today they have been reduced by another factor of ten. In order to explain the more accurate data, one must grasp not only that the sun exerts a force on the planets, but also that the planets exert forces on each other and on the sun. The truths discovered by Kepler were essential to these later discoveries; they made it possible to identify deviations of the new data from the original laws, which in turn made it possible to identify the additional causal factors and develop a more general theory.

In cases where the data are insufficient to support a conclusion, it is important to look closely at the exact nature of the scientist’s claim. He does not commit an error simply by proposing a hypothesis that is later proven wrong, provided that he correctly identified the hypothetical status of the idea. If he can cite some supporting evidence, and he has not overlooked any data that contradict it, and he rejects the idea when counterevidence is discovered, then his thinking is flawlessly logical. An example is provided by the work of Albert Ladenburg, a 19th-century German chemist, who proposed a triangular prism structure of the benzene molecule.1 Ladenburg’s hypothesis was consistent with the data available in the 1860s, but it was rejected a few years later when it clashed with Jacobus Henricus van’t Hoff’s discovery of the symmetrical arrangement of carbon bonds. In such cases, the scientist’s thinking is guided by the evidence at every step, and he deserves nothing but praise from the logician.

A true generalization states a causal relationship that has been induced from observational data and integrated within the whole of one’s knowledge (which, in terms of essentials, spans the range of facts subsumed by the generalization). A scientist makes an error when he asserts a generalization without achieving such an integration. In such cases, the supporting evidence is insufficient, and often the scientist has overlooked counterevidence.

I make no attempt here to give an exhaustive list of the essential inductive fallacies. Rather, I have chosen five interesting cases in which scientists have investigated a complex phenomenon and reached false generalizations by deviating from the principles of induction. In each case, I examine the context of knowledge available to the scientist and seek to identify the factors that cast doubt on his conclusion.

Plant Growth

In the early 17th century, the Flemish chemist J. B. van Helmont investigated the cause of plant growth. Most people of the time thought that plants absorb material from the soil and convert it to wood and foliage, but this was merely a plausible assumption. Van Helmont attempted to reach a definitive answer to the question by performing a quantitative experiment.

He filled a large planter with two hundred pounds of dry soil. He then planted a willow sapling, which weighed five pounds, and covered the soil to prevent any accumulation of dust from the air. For five years, he added only distilled water or rainwater to the planter. When he finally removed the willow tree, he found that it weighed 169 pounds, but the soil had lost only a few ounces in weight. Van Helmont concluded: “Therefore 164 pounds of wood, bark, and root have arisen from water alone.”2 In general, he concluded that plant growth is a process in which water is transformed into the substances making up plants.

The experiment used the method of difference: Van Helmont focused on the fact that he had added one factor—water—and the result was the growth of the willow tree. He certainly did prove that very little of the tree’s additional weight came from the soil. We now know, however, that only about half of fresh willow wood is water. Van Helmont’s error was to reject the possibility that plants absorb material from the air. Much later, in the 1770s, experiments performed by Joseph Priestley and Jan Ingenhousz showed that plants in sunlight absorb carbon dioxide and release oxygen.3 A large portion of their weight is carbon, which they obtain from the air.

Ironically, it was van Helmont who originated the concept “gas” and identified the gas we now call carbon dioxide as a product of burning charcoal or wood. So why did he dismiss the possibility that a gas in the air could be an essential cause of plant growth?

Some of van Helmont’s experiments led him to conclude that gases “can neither be retained in vessels nor reduced to a visible form.”4 When he mixed nitric acid and sal ammoniac in a closed glass vessel, for example, he found that gases were produced that burst the vessel. When he compressed air in an “air-gun” and then released it, he found that the air explosively expanded with a force sufficient to propel a ball through a board. To him, this “wild” and “untamable” nature of gases seemed to preclude the possibility that they could be absorbed to become part of a plant.

Everyone knew, of course, that land animals must breathe air in order to live. The nature of respiration, however, was not yet understood. Van Helmont thought that the air “intermingled” with the blood in our lungs and played an essential role in heating it, but he did not grasp that part of the air (oxygen) is absorbed and another gas (carbon dioxide) is exhaled. Similarly, when he studied combustion, he did not grasp that it involves the consumption of air. He observed that a candle burning in a closed vessel causes a decrease in the air volume, but he thought that the flame was consuming something that existed in the spaces between particles of air. So his hypothetical ideas about respiration and combustion were made to be consistent with his claim that gases could not be converted into liquid or solid substances.

Other phenomena appeared to contradict van Helmont’s view of gases as “wild” and “untamable.” Steam obviously condenses into water, and he knew of gaseous acids that dissolved in water. In these cases, however, he attempted to protect his view by introducing a distinction between “condensable vapors” and true “gases.” But the protection offered by this false distinction was illusory. In regard to plant growth, it simply modified the question, which now became: How did he know that a plant does not absorb “condensable vapors” that exist in the air? Van Helmont offered no convincing argument to eliminate the air as a causal factor in plant growth. His conclusion about the nature of gases was not an integration of all available data, but a leap from a few facts to a broad generalization.

Wider issues guided van Helmont’s thinking. Like many natural philosophers before him, he was struck by the ubiquitous role of water in nature. Water exists as a vapor, liquid, and solid; it fills great oceans, falls from the sky, forms rivers, and shapes our world; it reacts with and/or dissolves many substances; and it is essential to all life. Following a long tradition that began with the pre-Socratic Greek philosopher Thales, van Helmont identified water as the fundamental element that can be modified in countless ways. He even connected his views on the nature of water and gas to his metaphysical views on the relation of matter and spirit.5 These connections made up part of the background that predisposed him to accept water as the sole source of plant growth.

Notice the complexity of the context that is relevant to interpreting an apparently simple experiment. The elasticity of gases, the nature of respiration and combustion, the similarities and differences between gases and their proper conceptualization, the prominent role of water in natural processes, the relation between the matter and the “essence” of a body—all these considerations influenced van Helmont in arriving at his conclusion. Such is the nature of inductive reasoning; the results of a particular experiment are interpreted by means of one’s entire conceptual framework. This is why induction is difficult, and why it is valid when properly performed. Van Helmont was led astray on the issue of plant growth by the errors in his conceptual framework, which contained numerous elements that did not follow from observed facts—in other words, that were not reached by properly applying induction.

Acidity

A scientist with a valid conceptual framework can still commit an error. A good example is provided by Antoine Lavoisier’s analysis of the cause of acidity.

Acids are compounds that are corrosive, have a sour taste, turn blue litmus red, and react with bases to form neutral substances. Lavoisier hypothesized that these properties derive from some one element that all acids have in common. Thus his approach was to use the method of agreement; he studied the known acids and sought to identify their common element.

He discovered that some substances are transformed into acids when they are burned in the presence of water vapor. Combustion of phosphorus, sulfur, and carbon led to phosphoric acid, sulfuric acid, and carbonic acid. Thus it appeared that the element that is absorbed in combustion—oxygen—might also be the cause of acidity.

Lavoisier’s investigation of nitric acid seemed to provide further support for this idea. In 1776, he combined nitric acid with mercury to form a white salt (mercuric nitrate), which decomposed to form red mercury oxide and nitric oxide gas. On further heating, the red oxide was decomposed into metallic mercury and oxygen gas. He collected the gases in bell jars over water. When he combined the nitric oxide and the oxygen in the presence of water, he came full circle and regenerated the original nitric acid. But Lavoisier misinterpreted this result; he overlooked the crucial presence of water and assumed that the acid was a product of the two gases. Later, in 1783, he discovered that water is a compound of hydrogen and oxygen. By overlooking the role of water in his synthesis of acids, he was neglecting the presence of hydrogen—which left oxygen as the only candidate for the common element in acids.

Lavoisier continued to accumulate evidence that seemed to support his idea that oxygen is essential to acidity. He investigated two organic acids (acetic and oxalic) and proved that both contain oxygen. Furthermore, he showed that sulfur forms two acids, and the one with the higher oxygen content is the stronger acid (today we express this result by saying that H2SO4 is more acidic than H2SO3).

Lavoisier’s oxygen theory of acidity faced one major obstacle. A well-known and very strong acid, referred to as “muriatic” acid, had not been shown to contain oxygen. Muriatic acid decomposed into hydrogen gas and a green gas. Lavoisier referred to the green gas as “oxymuriatic” acid, thereby making explicit his assumption that it was the part of muriatic acid in which oxygen would eventually be found. But the years went by, and nobody managed to extract oxygen from the “oxymuriatic” gas. Finally, in 1810, after the most effective methods of extracting oxygen had been tried in vain, Humphry Davy declared that the green gas is an element and recommended that it be called “chlorine.” So hydrochloric acid provided the counterexample that refuted Lavoisier’s oxygen theory.

In form, Lavoisier’s error is like the old joke about the man who vowed to stop getting drunk at parties. The man remembered that at one party he had been drinking bourbon and soda; at another, scotch and soda; at yet another, brandy and soda. Obviously, soda was the common factor and the cause of his intoxication. So he resolved to drink his liquor straight.

This type of error is relatively easy to correct. When the man gets drunk on straight bourbon at the next party, he will realize that the soda was irrelevant. Then he will look for a factor common to bourbon, scotch, and brandy. Similarly, when it was discovered that all of Lavoisier’s oxygen-containing acids also have hydrogen, and that muriatic acid contains hydrogen but no oxygen, then hydrogen was recognized as the only element common to all known acids. Later, the hydrogen theory of acidity was proven when it was discovered that bases neutralize acids by absorbing a hydrogen ion from them (which typically combines with a hydroxyl ion to form water).

Lavoisier’s theory of acidity illustrates the precarious nature of a generalization that is derived from an observed regularity rather than a causal connection. Lavoisier had no evidence that bases act on the oxygen when they neutralize an acid. In the absence of such knowledge, he did not have sufficient grounds to assert that oxygen would necessarily be found in all acids. Thus we can characterize his error as the fallacy of substituting a regularity for a cause.

Electric Current

Scientists were greatly interested in electricity during the Enlightenment. It was discovered that some materials conduct electricity and others do not; that electric charge exists in two varieties, called positive and negative; that charge can be stored in “Leyden jars,” which can then be discharged through a conductor; that lightning is an atmospheric discharge; and that opposite charges attract and like charges repel with a force that varies as the inverse square of the distance. But even after decades of intensive study, the only known way to generate electricity was by rubbing together dissimilar materials, and the only known movement of electricity was the momentary discharge that occurred during a time interval too short to measure. Near the end of the 18th century, however, the science of electricity took a giant leap forward with a breakthrough discovery made by Luigi Galvani.

Galvani was a professor of anatomy at the University of Bologna who became interested in the effects of electricity on animals. It had been discovered previously that electrical discharges through animals could cause muscular contractions, and Galvani investigated this phenomenon using dissected frogs and discharges from a static electricity generator. His breakthrough came, however, when he was not using the generator at all. In one experiment, he found that when a dead frog was held by means of a bronze hook through its spinal cord and its feet were placed on a silver box, then connecting the hook and box caused muscular contractions that made the frog appear to jump and dance. Galvani realized that electricity was moving through the frog’s leg muscles, but its source was a mystery.

This discovery had crucial implications for both physics and biology. From the perspective of the physicist, Galvani had discovered a new way to generate a flow of electricity. From the perspective of the biologist, he had apparently discovered the physical mechanism controlling the movement of our bodies: The contraction of our muscles is somehow caused by electricity that can flow through our nerves.

Because Galvani was a biologist, it is not surprising that his primary focus was on the frog rather than on the bronze hook or the silver box. He developed a theory in which the source of the electricity is in the animal, whereas the metals played the passive role of mere conductors that allowed the electricity to flow. His theory claimed that muscles store electricity in much the same way as Leyden jars, and that when a conducting circuit is completed the resulting discharge causes the contraction.

Galvani noted that strong muscular contractions were observed only when two different metals were used (e.g., bronze and silver). When he placed the frog on an iron surface and used an iron hook, the effect did not occur. But he failed to appreciate the significance of this fact, and his theory offers no explanation for it. If the metals act simply as conductors, then he should have observed the muscle contractions in the experiment that used only iron. Initially, Galvani did not seem to recognize that the requirement of two different metals posed a severe problem for his theory.

Alessandro Volta, a physics professor at the University of Pavia, seized on the fact that was overlooked by Galvani’s theory. Volta became convinced that the source of electricity was the different properties of the two metals, and that the frog played a passive role of merely providing a conducting fluid between the metals. In a series of experiments, he proved that the further apart the two metals stood in the following series—zinc, tin, lead, iron, copper, platinum, gold, silver—the greater the electrical current.

In an attempt to prove that the frog had no part in producing the electricity, Volta performed an experiment in which the frog (and any other conducting fluid) was entirely eliminated. He attached a copper disk and a zinc disk to insulating handles, and then pressed the disks together. When they were separated, he used a delicate electroscope to demonstrate that both disks had acquired an electric charge (the zinc was positive and the copper negative). So a transfer of electric charge is caused by the mere contact of two different metals. This, Volta claimed, was what had occurred in Galvani’s experiments: The contact of the two metals had caused a flow of electricity that in turn had caused the muscle contractions of the frog.

Galvani was unconvinced, and he responded by performing an experiment in which the metals were entirely eliminated. When he held a dissected frog by one foot and swung it vigorously so that the sciatic nerve touched the muscle of the other leg, he observed contractions of the muscle. Here was a case in which the contact between nerve and muscle caused a flow of electricity and muscle contractions, without any metals present. Galvani regarded this experiment as a decisive refutation of Volta’s theory.

Volta and Galvani had each committed a similar error. In their efforts to localize the cause in either the metals or the frog, they changed the experimental conditions in a way that introduced causal factors not present in the original experiment. Volta brought large metallic surface areas into contact, but no such contact between dissimilar metals was necessary in the “dancing frog” experiment; the effect occurs when the experimenter grips the bronze hook with one hand while touching the silver box with the other hand (in other words, the experimenter himself can be the conducting path between the two metals). Likewise, Galvani was misled by his experiment in which the metals were eliminated; his vigorous swinging of the frog had caused muscle injury, which had stimulated the nerve and caused contractions. But the original experiment had involved no such injury; it had been a simple hop-step, not a swing dance.

Although Volta’s “contact theory” was untenable, his investigations did refute Galvani’s idea that the phenomenon was caused by a special capacity of animals to store and discharge electricity. While keeping other relevant conditions the same, Volta showed that the animal could be replaced by a salt or acid solution between the two metals, and the effect—a flow of electricity—still occurred. This discovery led to his invention of the electric battery. In March of 1800, he wrote a paper in which he described how to generate a continuous electrical current with zinc and silver disks separated by cardboard soaked in salt water.

When Volta announced his invention, the cause of the electrical current was still unknown. But it did not remain a mystery for long. A month after receiving Volta’s paper, Anthony Carlisle and William Nicholson constructed a battery and observed evidence of chemical reactions occurring at the metallic surfaces. They used their battery to perform the first electrolysis experiment, decomposing water into hydrogen and oxygen gas. This landmark experiment inspired Humphry Davy to investigate the phenomenon. Only seven months later, Davy wrote: “[The battery] acts only when the conducting substance between the plates is capable of oxidizing the zinc; and that, in proportion as a greater quantity of oxygen enters into combination with the zinc in a given time, so in proportion is the power of the [battery]. It seems therefore reasonable to conclude, although with our present quantity of facts we are unable to explain the exact mode of operation, that the oxidation of the zinc in the battery, and the chemical changes connected with it, are somehow the cause of the electrical effects it produces.”6 It took decades to identify the “exact mode of operation”—that is, the dissociation of molecules into electrically charged ions and the reaction of those ions at the electrodes—but the essential cause was understood in 1800: The electrical current is generated by a chemical reaction involving the metals and the fluid connecting them.

So Galvani had been right to claim that the frog in his experiments played an indispensable role in causing the electrical current: The frog’s fluids provided the salt solution essential to the reaction. But Volta had been right to claim that the metals have a crucial role in generating the electricity, not merely carrying it. Both erred only in denying the claim of the other. The cause could not be found in one of the factors, but only in the chemical interaction of the two.

The main lesson illustrated by these errors is the importance of proper experimental controls. Galvani and Volta both thought they had performed crucial experiments that refuted the claim of the other, but the experiments were flawed. When Galvani eliminated the metals and still observed an effect, and when Volta eliminated everything but the metals and still saw an effect, both changed the conditions of the original “dancing frog” experiment in a way that left the interpretation of their results ambiguous.

On a broader level, we can see the potential danger of being prejudiced by one’s specialized background. As a biologist, Galvani seemed predisposed to find the cause in the animal; as a physicist, Volta seemed predisposed to find the cause in the physical properties of the metals. It was Davy, a chemist, who correctly identified the cause as a complex interaction involving both factors.

Age of Earth

Let us examine another famous controversy involving a clash between different sciences. During the last four decades of the 19th century, the British physicist Lord Kelvin engaged in a spirited debate with geologists. In order to explain a fast-growing body of evidence, the geologists were proposing an ever-longer history of the Earth. They had discovered that the natural processes shaping our world occur very slowly, and therefore their science had a basic requirement: time. But they found themselves in conflict with one of the most prominent physicists of the era. Kelvin would not give them the time they needed; he became convinced that the fundamental laws of physics implied a very restrictive upper limit on the age of the Earth.

For most of human history, people attempted to understand the world around them as the result of sudden, global, cataclysmic events in the past (usually of supernatural origin). In the late 18th century, however, James Hutton identified the principle that gave rise to modern geology: “The present,” he wrote, “is the key to the past . . . No powers are to be employed that are not natural to the globe, no action to be admitted of except those of which we know the principle, and no extraordinary events to be alleged in order to explain a common appearance.”7 Hutton and the geologists who followed his lead explained the features of Earth by means of natural forces we observe today: wind, rain, chemical reactions, ocean and river currents, expansion and contraction caused by temperature changes, the uplift of land areas caused by sinking ocean sediment, the slow movements of glaciers, and the cumulative effects of local volcanoes and earthquakes.

It takes a great deal of time, of course, for water erosion to carve out a valley and for mechanical pressures to lift a mountain range. Determining how much time was a central issue for 19th-century geology. By estimating rates of erosion and sediment deposit, geologists began to construct the time line for the formation of the various strata they observed in the Earth’s crust. They conducted detailed studies of the world’s great river basins, measuring and analyzing the sedimentary contents being washed out to sea. By the 1870s, they had reached agreement about the average rate of continental erosion. They also collected data from around the globe on the rates of processes that result in the renewal of land masses. The data led to a consensus among geologists that the Earth’s crust we observe today could not have formed in less than 100 million years.
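For readers who want to see the kind of arithmetic involved, the following minimal sketch shows how an estimate of this sort can be obtained from rates of deposition. The specific figures are assumptions chosen purely for illustration; they are not the 19th-century survey data.

```python
# Illustrative sketch of the kind of arithmetic behind the geologists' estimate.
# The figures below are assumptions chosen for illustration, not the actual
# 19th-century survey data.
total_strata_thickness_ft = 100_000      # assumed cumulative thickness of sedimentary strata
deposition_rate_ft_per_year = 1 / 1000   # assumed average rate: one foot per thousand years

implied_age_years = total_strata_thickness_ft / deposition_rate_ft_per_year
print(f"Implied minimum age of the crust: {implied_age_years:.0e} years")  # ~1e8 years
```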

To his credit, Kelvin was the first to recognize a potential conflict between the laws of physics and the new geology. Geologists were claiming that the temperature and other physical conditions on Earth had remained roughly constant over the past 100 million years. The sun and Earth, however, have a limited amount of energy, which they are expending at a prodigious rate. This energy dissipation must eventually cause a decrease of temperature that will leave Earth barren and lifeless. For Kelvin, the question was: Do the laws of physics sanction or veto the time line proposed by geologists?

To answer the question, Kelvin began by considering the possible sources of solar and terrestrial energy. He quickly convinced himself that the energy released in exothermic (heat-releasing) chemical reactions was far too little to play any significant role. Furthermore, because the sun and Earth are made of electrically neutral matter, the energy could not be electromagnetic in origin. That seemed to leave only one possibility: The primary source of energy in the solar system must be gravitational in nature.

The solar system, Kelvin argued, must have begun as a large gaseous nebula. As the matter condensed, gravitational potential energy was converted into kinetic energy or heat. Thus Earth was originally an extremely hot molten ball, which has been cooling ever since. “We may follow in imagination,” he wrote, “the whole process of shrinking from gaseous nebula to liquid lava and metals, and the solidification of liquid from the central regions outward.”8

In the 1860s, Kelvin performed his first analysis of the rate at which Earth is losing heat. He acknowledged that the parameters required for the calculation were not precisely known, but he argued that enough was known to make a reasonable estimate. For the temperature of Earth’s core, he used the melting point of surface rocks; for the thermal conductivity of Earth, he used the measured value for surface rocks; for the temperature gradient at Earth’s surface, he used a measurement of about one degree Fahrenheit per fifty feet. With these parameters, he arrived at a rate of heat loss that implied the Earth’s crust had formed less than 100 million years ago. Contrary to the geologists, the conditions observed today could have existed for only a small fraction of that time. Kelvin stated his conclusion unequivocally: “It is quite certain that a great mistake has been made—that British popular geology at present is in direct opposition to the principles of natural philosophy.”9
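The order of magnitude of Kelvin’s result can be checked with the textbook solution for the cooling of a uniformly hot half-space whose surface is suddenly held cold: the surface temperature gradient falls off as one over the square root of time, so a measured gradient implies a cooling age. The sketch below uses rough values of the kind described above; the exact figures are assumptions for illustration, not Kelvin’s own numbers.

```python
import math

# Rough reconstruction of Kelvin's conductive-cooling estimate. The parameter
# values are illustrative assumptions of the kind described above, not
# Kelvin's exact figures.
#
# For a half-space initially at uniform temperature T0 whose surface is then
# held at zero, the surface temperature gradient after time t is
#     G = T0 / sqrt(pi * kappa * t)
# so a measured gradient G implies a cooling time
#     t = T0**2 / (pi * kappa * G**2)

T0 = 3900.0              # assumed initial temperature, kelvins (~ melting point of rock)
kappa = 1.2e-6           # assumed thermal diffusivity of surface rock, m^2/s
G = (5.0 / 9.0) / 15.24  # one degree Fahrenheit per fifty feet, converted to K/m

t_seconds = T0**2 / (math.pi * kappa * G**2)
t_years = t_seconds / 3.156e7  # seconds per year

print(f"Implied age of the crust: {t_years:.1e} years")  # ~1e8 years
```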

In the decades that followed, Kelvin extended and refined his calculations in ways that escalated the conflict. His estimates of Earth’s age steadily decreased, and even more importantly, he arrived at a very restrictive upper limit on the age of the sun. Even when he assumed that the sun’s energy was partly replenished by falling meteors, the energy lost in radiation was of such magnitude that he was compelled to conclude: “It would, I think, be exceedingly rash to assume as probable anything more than twenty million years of the sun’s light in the past history of the earth, or to reckon on more than five or six million years of sunlight to come.”10
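Kelvin’s solar figure can likewise be checked in order of magnitude. If the sun’s radiation is powered only by gravitational contraction, then dividing the roughly GM²/R of gravitational energy available by the present luminosity gives what is now called the Kelvin-Helmholtz timescale. A minimal sketch, using modern values purely for illustration:

```python
# Order-of-magnitude check of the gravitational-contraction ("Kelvin-Helmholtz")
# lifetime of the sun, using modern values purely for illustration.
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 1.989e30    # mass of the sun, kg
R = 6.96e8      # radius of the sun, m
L = 3.83e26     # luminosity of the sun, W

available_energy = G * M**2 / R   # rough total gravitational energy released by contraction, J
lifetime_seconds = available_energy / L
lifetime_years = lifetime_seconds / 3.156e7

print(f"Gravitational-contraction lifetime: {lifetime_years:.1e} years")  # ~3e7 years
```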

Kelvin was a brilliant mathematical physicist, and his calculations were essentially correct. Putting aside quibbles, we must grant that his conclusion follows from his premises. The entire analysis, however, rested on the generalization that the energy of stars and their satellite planets derives from the gravitational potential energy of the primeval gaseous nebula (supplemented by falling meteors). If this were true, solar systems would become cold and dark within a relatively short time (tens of millions of years). So, in evaluating Kelvin’s view, the key question is: How strong was the argument for this basic premise?

The form of the argument was a process of elimination. Only three possible sources of the internal energy of the sun and Earth were known at the time: chemical, electromagnetic, and gravitational. Kelvin cited good reasons for dismissing the first two, which left gravitational energy as the only viable candidate. This type of argument can be valid, but it carries a heavy burden of proof. One must be able to argue that all the possibilities have been identified; there can be no reason to suspect the existence of any further sources of energy.

The evidence cited by geologists, however, cast doubt on Kelvin’s argument. Geologists had not concocted an arbitrary theory; their conclusions integrated an impressive range of observations, including careful studies of strata in the Earth’s crust and measured rates of erosion and deposit. Here Kelvin’s error was to adopt an attitude that can be described as “elitist”; he seemed to think that evidence from physics trumps evidence from geology. But facts are facts, and all demand equal respect. Physics is the fundamental science of matter, which means that it integrates the widest range of facts about the physical world. But this does not imply that the facts of geology are subservient to the facts of physics. In this case, the facts of geology provided some (indirect) evidence for the existence of an undiscovered source of energy omitted from Kelvin’s analysis.

There was another reason for doubting that gravitation provided the only possible source of energy. The atomic theory had opened a new frontier in physical science. A large body of data—dealing with such topics as chemical bonding, electrical affinities, ionization, and light emission—provided evidence that atoms have a complex structure. Yet little was known about this structure. What is the nature of the parts composing atoms, how is this subatomic matter distributed within the atom, and what forces hold it together? In the late 19th century, various discoveries had raised these fundamental questions but had not yet shed light on the answers. In this context, Kelvin could not reasonably rule out the possibility that the internal energy of atoms may provide a major source of heat.

The 19th-century American geologist Thomas Chamberlin made precisely this point. He wrote: “Is our present knowledge relative to the behavior of matter under such extraordinary conditions as obtain in the interior of the sun sufficiently exhaustive to warrant the assertion that no unrecognized sources of heat reside there? What the internal constitution of the atoms may be is yet an open question. It is not improbable that they are complex organizations and seats of enormous energies. Certainly no careful chemist would affirm either that the atoms are really elementary or that there may not be locked up in them energies of the first order of magnitude. . . . Nor would they probably be prepared to affirm or deny that the extraordinary conditions which reside at the center of the sun may not set free a portion of this energy.”11

In the early 20th century, physicists proved that the possibility suggested by Chamberlin was a reality. Marie and Pierre Curie discovered that an extraordinary amount of energy is released in the decay of radioactive atoms. This major source of terrestrial heat had been omitted from Kelvin’s analysis. Furthermore, Ernest Rutherford—the discoverer of the atomic nucleus—wrote in 1913: “At the enormous temperature of the sun, it appears possible that a process of transmutation may take place in ordinary elements analogous to that observed in the well-known radio-elements.” Therefore, he concluded, “The time that the sun may continue to emit heat at the present rate may be much longer than the value computed from ordinary dynamical data.”12 The source of the seemingly inexhaustible supply of solar heat was identified in the 1930s when physicists discovered nuclear fusion.

The emerging field of nuclear physics not only provided the energy missing from Kelvin’s analysis, it also provided an accurate means of calculating the age of Earth. Radioactive elements decay at invariable rates into known products. So the age of a rock can be determined from the relative abundance of its radioactive element and its decay products. In 1904, Rutherford analyzed a piece of uranium ore and calculated its age to be 700 million years.13 The next year, the British physicist Robert Strutt measured the helium content of a radium bromide salt and estimated its age to be 2 billion years.14 Suddenly, the situation was reversed: Physicists were insisting that Earth is much older than the geologists had dared to suggest.
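The logic of such a dating calculation is straightforward: a radioactive parent decays into its daughter products at an invariable rate, so the measured ratio of daughter to parent fixes the elapsed time. Here is a minimal sketch of the principle; the half-life is the modern value for uranium-238, and the measured ratio is a hypothetical number chosen only for illustration (these are not Rutherford’s or Strutt’s actual data).

```python
import math

# The principle of radiometric dating described above: a radioactive parent
# decays into stable daughter products at an invariable rate, so the ratio of
# daughter to parent in a rock fixes its age.
#
#   N_parent(t) = N0 * exp(-decay_constant * t)
#   t = ln(1 + N_daughter / N_parent) / decay_constant

def age_in_years(daughter_to_parent_ratio, half_life_years):
    """Age of a sample from its daughter/parent ratio and the parent's half-life."""
    decay_constant = math.log(2) / half_life_years
    return math.log(1.0 + daughter_to_parent_ratio) / decay_constant

# Uranium-238 has a half-life of about 4.5 billion years; a hypothetical
# measured daughter/parent ratio of 0.12 would imply an age of roughly
# 700 million years.
print(f"{age_in_years(0.12, 4.5e9):.1e} years")
```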

Kelvin assumed that the basic laws of physics were already known in the late 19th century. He was reluctant to concede the possibility that investigations on the frontiers of physics—including investigations of atomic structure—could lead to the discovery of new types of forces and energy. In 1894, this attitude was expressed by the American physicist Albert Michelson: “[I]t seems probable that most of the grand underlying principles have been firmly established and that further advances are to be sought chiefly in the rigorous application of these principles to all the phenomena which come under our notice. . . . An eminent physicist [Lord Kelvin] has remarked that the future truths of physical science are to be looked for in the sixth place of decimals.”15

Thus Kelvin’s basic error can be described as the fallacy of “cognitive fixation.” It is instructive to contrast his attitude with that of Isaac Newton, the foremost champion of the inductive method. Newton always regarded his laws of motion and gravitation as a foundation to build upon, never as the completed edifice of physics. He was keenly aware of the vast range of phenomena that remained unexplained. He began his career with many questions—and throughout his life, despite the fact that he discovered so many answers, his list of questions only grew. When Newton surveyed the frontiers of physical science, he saw many areas of investigation—for example, electricity, magnetism, light, heat, chemistry—from which he expected new principles to emerge. Kelvin, on the other hand, had a more “deductive” frame of mind; in his view, the primary task of the physicist is to find further applications of the known principles. This attitude led him to conclude that the energy of a solar system must be gravitational in origin, which, in turn, led to his losing battle with modern geology.

Cold Fusion

It is possible to commit the opposite kind of error, which can be called the fallacy of “cognitive promiscuity.” A scientist commits this error when he chooses to embrace a new idea despite weak evidence and a context that makes the idea implausible. A clear example of this fallacy was provided by the recent proponents of “cold” nuclear fusion.

In 1989, Stanley Pons and Martin Fleischmann announced that they had achieved a sustained deuterium fusion reaction in a room-temperature electrolysis experiment. The experiment consisted of passing an electric current between a palladium electrode and a platinum electrode submerged in a bath of heavy water containing some lithium. The two chemists reported that the heat generated in such experiments was far greater than could be explained by any chemical reaction. In one case, they claimed, the palladium electrode melted and burned a hole through the laboratory floor. They concluded that deuterium from the heavy water was being absorbed within the lattice of palladium atoms, where the deuterium nuclei were squeezed together with sufficient pressure to cause fusion.

To put it mildly, this was a radical idea. Physicists had been studying deuterium fusion since the 1930s and the process was well understood. The reaction occurs only when the nuclei are extremely close together, and this requires enormous energy in order to overcome the electrical repulsion between the protons. Such reactions occur within the sun because the core temperature is over ten million degrees and thus the required energy is available. But how could the deuterium nuclei approach each other so closely in a room-temperature electrolysis experiment?
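The scale of the difficulty can be made concrete with a rough classical estimate of the electrical barrier between two deuterium nuclei, ignoring quantum tunneling (which eases the problem but does not remove it). In the sketch below, the separation at which fusion becomes possible is an assumed round number of a few femtometers, and the sun’s core temperature is taken as roughly fifteen million kelvins.

```python
# Rough classical estimate of the Coulomb barrier between two deuterium nuclei,
# ignoring quantum tunneling (which eases but does not remove the problem).
# The separation r is an assumed round number of a few femtometers.
e = 1.602e-19    # elementary charge, C
k_e = 8.988e9    # Coulomb constant, N m^2 / C^2
k_B = 1.381e-23  # Boltzmann constant, J/K
r = 3.0e-15      # assumed separation at which fusion becomes possible, m

barrier_joules = k_e * e**2 / r
barrier_keV = barrier_joules / 1.602e-19 / 1e3

room_temperature_eV = k_B * 300 / 1.602e-19   # typical thermal energy at ~300 K
sun_core_eV = k_B * 1.5e7 / 1.602e-19         # typical thermal energy at ~15 million K

print(f"Coulomb barrier:                    ~{barrier_keV:.0f} keV")
print(f"Thermal energy at room temperature: ~{room_temperature_eV:.3f} eV")
print(f"Thermal energy in the sun's core:   ~{sun_core_eV:.0f} eV")
# Even in the sun, fusion depends on the high-energy tail of the distribution
# and on tunneling; room-temperature energies fall short of the barrier by
# roughly seven orders of magnitude.
```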

Pons and Fleischmann offered no answer to this basic question. As experimentalists, their goal was to demonstrate that the effect occurred. They were content to let the theorists wrestle with the problematic question of how it could possibly occur. It was a difficult assignment for the theorists, as cold fusion seemed to contradict everything known about nuclear physics.

For experimental evidence, Pons and Fleischmann relied primarily on their observations of excess heat. Scientific theory, however, cannot be turned upside down every time an unexplained explosion occurs in a chemistry lab. The evidence required to support a claim of deuterium fusion is clear: One must detect the products of the reaction. These products include helium, neutrons, and gamma rays with a specific energy. The helium should have been embedded in the palladium electrode, and a lethal quantity of neutrons and gammas should have been flying through the laboratory. All efforts to detect these products failed, and no researchers suffered any ill effects from radiation.

The cold fusion episode quickly became a media circus in which politics and dreams of Nobel prizes took precedence over scientific facts. Pons and Fleischmann had shouted “Fire!” and caused a lot of needless excitement. Initially, many scientists took the claims seriously. Laboratories across the country performed their own cold fusion experiments in an attempt to reproduce the results. These scientists thought it best to adopt an open-minded attitude. As one researcher put it, “[Pons and Fleischmann] said this could be some hitherto unknown nuclear process. Who knows? If it is an unknown process, maybe it doesn’t produce neutrons. You can always rationalize anything. . . . One way or the other there has to be a definitive proof, and we wanted to be the first ones to definitively prove it.”16

But it is not true that one can “rationalize anything”—not if “rationalize” means giving a rational argument for it. Nor does the demand for a rational argument make one “closed-minded”—unless that phrase means closed to the irrational. A scientific mind is focused on the facts and is not swayed by paeans to “open-mindedness,” which is a “package deal” intended to equate the thinker with the skeptic, and thereby to give sanction to the latter. But the thinker who actively integrates evidence to arrive at new ideas has nothing in common with the skeptic, who feels free to assert possibilities without the required evidence. A mind that is open to any “possibility,” regardless of its relation to the total context of knowledge, is a mind detached from reality and therefore closed to knowledge.

Pons and Fleischmann suggested the existence of a new source of energy that had not yet been identified by physicists, just as geologists did in the late 19th century. But Pons and Fleischmann were unwarranted in doing so, whereas the geologists had been warranted. The difference in the two cases lies in the context of knowledge. In the late 19th century, physicists were just beginning to explore the structure of the atom and the hidden energy it contains; they could not rule out the possibility that such energy might play a major role in heating the sun and Earth. In contrast, Pons and Fleischmann trespassed into an area of physics that was already thoroughly investigated. Their inference from “excess heat” to a new type of deuterium fusion—a type that was allegedly overlooked by an army of nuclear physicists who had studied this reaction for more than fifty years—was not justified by the context of knowledge.

The idea of cold fusion persisted longer than it should have: it was initially given more credence than it deserved. But scientists soon applied the proper standards of experimental evidence, and within a few months the idea was discredited and discarded.

Induction as Self-Corrective

In each of the above cases, we saw the commission of an inductive error. Van Helmont committed a type of error that was common in the pre-Newton era: Lacking proper standards of proof, he leaped from a few facts to conclusions about the basic nature of water and gases. Lavoisier generalized about acids on the basis of an observed regularity, without sufficient evidence of a causal connection. In their investigations of electric current, Galvani and Volta both eliminated an essential part of the cause by reasoning from ambiguous experiments that lacked proper controls. Kelvin assumed that the basic principles of physics were already known, and thus he disregarded evidence that supported the possibility of a nongravitational source of solar and terrestrial heat. Finally, the advocates of cold fusion neglected a large context of knowledge that made their idea implausible. A false generalization is always the result of a failure to identify and/or to properly apply the principles of inductive logic.

A scientist’s context of knowledge can be limited, of course, in a way that makes it easy to overlook a relevant factor. In my article “Induction and Experimental Method” (TOS, Spring 2007), I cited the example of Galileo’s failure to distinguish between sliding and rolling balls in his inclined plane experiments. The importance of this distinction was much more obvious after the development of Newtonian mechanics. Today, any student of physics can grasp that the speed of the rolling ball is reduced because some of the gravitational potential energy is converted into rotational motion. Galileo did not have the benefit of the concepts of “energy” and “gravity.” Even in his less advanced context, however, he could have recognized the difference in the two cases. Rolling is caused by friction, and Galileo certainly knew that friction impedes motion. Furthermore, he had the means to measure the final speed of the ball, and thereby to discover that it was less than predicted by his law (which applies only to the case of frictionless sliding). So the error was detectable, and it did result from a failure to integrate his law with the total of his knowledge.

When the relevant context of knowledge is in a primitive state, an error may remain undetected for a long time. For example, van Helmont investigated plant growth before chemistry had developed into a science. Thus it is not surprising that the causal factor he overlooked (carbon dioxide in the air) was not identified until 150 years later. On the other hand, when the relevant context of knowledge is in an advanced state, errors are usually short-lived (as we saw in the case of cold fusion).

The inductive method is self-corrective. This feature of the method follows from the demand that every idea must be induced from observational evidence and integrated without contradiction into the whole of available knowledge. A false idea cannot live up to this standard. Further investigations will bring to light facts that undercut rather than support it; the idea will lead to predictions of events that are not observed, or it will contradict events that are or have been observed, or it will contradict other ideas for which there is strong evidence. A proper method keeps one in cognitive contact with reality, and therefore any clash between a false idea and reality is eventually revealed.

Thus misapplications of induction pose no significant threat to the progress of science. They provide only routine setbacks that are overcome in the normal course of further research.

Author’s note: The remainder of this chapter discusses the two basic ways of abandoning proper inductive method: rationalism and empiricism. I argue that these corrupt methods are not self-corrective.

Endnotes

1 W. G. Palmer, A History of the Concept of Valence to 1930 (London: Cambridge University Press, 1965), p. 66.


2 The Beginnings of Modern Science, edited by Holmes Boynton (New York: Walter J. Black, Inc., 1948), pp. 393–94.

3 Ibid., pp. 443–61.

4 J. R. Partington, A Short History of Chemistry (New York: Dover Publications, Inc., 1989), p. 48.

5 Walter Pagel, The Religious and Philosophical Aspects of van Helmont’s Science and Medicine (Baltimore: Johns Hopkins Press, 1944), pp. 16–22.

6 Edmund Whittaker, A History of the Theories of Aether and Electricity (New York: Thomas Nelson and Sons, 1951), p. 75.

7 A. E. E. McKenzie, The Major Achievements of Science (Cambridge: Cambridge University Press, 1960), p. 111.

8 Ruth Moore, The Earth We Live On (New York: Alfred A. Knopf, Inc., 1956), p. 268.

9 Joe D. Burchfield, Lord Kelvin and the Age of the Earth (New York: Science History Publications, 1975), p. 81.

10 Ibid., p. 42.

11 Ibid., pp. 143–44.

12 Ibid., p. 168.

13 Moore, The Earth We Live On, p. 385.

14 Burchfield, Lord Kelvin, p. 176.

15 Steven Weinberg, Dreams of a Final Theory (New York: Vintage Books, 1992), p. 13.

16 Gary Taubes, Bad Science: The Short Life and Weird Times of Cold Fusion (New York: Random House, 1993), p. 127.
