SFAA On-Line

There are two principal reasons for including some knowledge of history among the recommendations. One reason is that generalizations about how the scientific enterprise operates would be empty without concrete examples. Consider, for example, the proposition that new ideas are limited by the context in which they are conceived; are often rejected by the scientific establishment; sometimes spring from unexpected findings; and usually grow slowly, through contributions from many different investigators. Without historical examples, these generalizations would be no more than slogans, however well they might be remembered. For this purpose, any number of episodes might have been selected.

A second reason is that some episodes in the history of the scientific endeavor are of surpassing significance to our cultural heritage. Such episodes certainly include Galileo's role in changing our perception of our place in the universe; Newton's demonstration that the same laws apply to motion in the heavens and on earth; Darwin's long observations of the variety and relatedness of life forms that led to his postulating a mechanism for how they came about; Lyell's careful documentation of the unbelievable age of the earth; and Pasteur's identification of infectious disease with tiny organisms that could be seen only with a microscope. These stories stand among the milestones of the development of all thought in Western civilization.

All human cultures have included study of nature—the movement of heavenly bodies, the behavior of animals, the properties of materials, the medicinal properties of plants. The recommendations in this chapter focus on the development of science, mathematics, and technology in Western culture, but not on how that development drew on ideas from earlier Egyptian, Chinese, Greek, and Arabic cultures. The sciences accounted for in this report are largely part of a tradition of thought that happened to develop in Europe during the last 500 years—a tradition to which people from all cultures contribute today.

The emphasis here is on ten accounts of significant discoveries and changes that exemplify the evolution and impact of scientific knowledge: the planetary earth, universal gravitation, relativity, geologic time, plate tectonics, the conservation of matter, radioactivity and nuclear fission, the evolution of species, the nature of disease, and the Industrial Revolution. Although other choices may be equally valid, these clearly fit our dual criteria of exemplifying historical themes and having cultural salience.



To observers on the earth, it appears that the earth stands still and everything else moves around it. Thus, in trying to imagine how the universe works, it made good sense to people in ancient times to start with those apparent truths. The ancient Greek thinkers, particularly Aristotle, set a pattern that was to last for about 2,000 years: a large, stationary earth at the center of the universe, and—positioned around the earth—the sun, the moon, and tiny stars arrayed in a perfect sphere, with all these bodies orbiting along perfect circles at constant speeds. Shortly after the beginning of the Christian era, that basic concept was transformed into a powerful mathematical model by an Egyptian astronomer, Ptolemy. His model of perfect circular motions served well for predicting the positions of the sun, moon, and stars. It even accounted for some motions in the heavens that appeared distinctly irregular. A few "wandering stars"—the planets—appeared not to circle perfectly around the earth but rather to change speed and sometimes even go into reverse, following odd loop-the-loop paths. This behavior was accounted for in Ptolemy's model by adding more circles, which spun on the main circles.

Over the following centuries, as astronomical data accumulated and became more accurate, this model was refined and complicated by many astronomers, including Arabs and Europeans. As clever as the refinements of perfect-circles models were, they did not involve any physical explanations of why heavenly bodies should so move. The principles of motion in the heavens were considered to be quite different from those of motion on earth.

Shortly after the discovery of the Americas, a Polish astronomer named Nicolaus Copernicus, a contemporary of Martin Luther and Leonardo da Vinci, proposed a different model of the universe. Discarding the premise of a stationary earth, he showed that if the earth and planets all circled around the sun, the apparent erratic motion of the planets could be accounted for just as well, and in a more intellectually pleasing way. But Copernicus' model still used perfect circular motions and was nearly as complicated as the old earth-centered model. Moreover, his model violated the prevailing common-sense notions about the world, in that it required the apparently immobile earth to spin completely around on its axis once a day, the universe to be far larger than had been imagined, and—worst of all—the earth to become commonplace by losing its position at the center of the universe. Further, an orbiting and spinning earth was thought to be inconsistent with some biblical passages. Most scholars perceived too little advantage in a sun-centered model—and too high a cost in giving up the many other ideas associated with the traditional earth-centered model.

As astronomical measurements continued to become more precise, it became clear that neither the sun-centered nor the earth-centered system quite worked as long as all bodies had to have uniform circular motion. A German astronomer, Johannes Kepler, who lived at the same time as Galileo, developed a mathematical model of planetary motion that discarded both venerable premises—a stationary earth and circular motion. He postulated three laws, the most revolutionary of which was that planets naturally move in elliptical orbits at predictable but varying speeds. Although this law turned out to be correct, the calculations for ellipses were difficult with the mathematics known at the time, and Kepler offered no explanation for why the planets would move that way.
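Kepler's third law, the harmonic relation between a planet's period and its distance from the sun, can be checked with simple arithmetic. A minimal sketch, using approximate modern values rather than Kepler's own data:

```python
# Kepler's third law: the square of a planet's orbital period is
# proportional to the cube of its mean distance from the sun.
# Distances in astronomical units, periods in years (approximate).
planets = {
    "Earth":   (1.000, 1.000),
    "Mars":    (1.524, 1.881),
    "Jupiter": (5.203, 11.862),
}
for name, (a, T) in planets.items():
    print(f"{name:8s} T^2 = {T**2:8.2f}   a^3 = {a**3:8.2f}")
# T^2 and a^3 agree to within a fraction of a percent for each planet.
```

With distance measured in astronomical units and time in years, the two columns come out nearly identical, which is exactly the kind of regularity Kepler's laws captured without explaining.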

The many contributions of Italian scientist Galileo, who lived at the same time as Shakespeare and Rubens, were of great significance in the development of physics and astronomy. As an astronomer, he built and used the newly invented telescope to study the sun, moon, planets, and stars, and he made a host of discoveries that supported Copernicus' basic idea of planetary movement. Perhaps the most telling of these was his discovery of four moons that orbited around the planet Jupiter, demonstrating that the earth was not the only center of heavenly motion. With the telescope, he also discovered the inexplicable phenomena of craters and mountains on the moon, spots on the sun, moonlike phases of Venus, and vast numbers of stars not visible to the unaided eye.

Galileo's other great contribution to the cosmological revolution was in taking it to the public. He presented the new view in a form and language (Italian) that made it accessible to all educated people in his time. He also rebutted many popular arguments against an orbiting and spinning earth and showed inconsistencies in the Aristotelian account of motion. Criticism from clergy who still believed in Ptolemy's model—and Galileo's subsequent trial by the Inquisition for his allegedly heretical beliefs—only heightened the attention paid to the issues and thereby accelerated the process of changing generally accepted ideas on what constituted common sense. It also revealed some of the tensions that are bound to occur whenever scientists come up with radically new ideas.



But it remained for Isaac Newton, an English scientist, to bring all of those strands together, and go far beyond them, to create the idea of the new universe. In his Mathematical Principles of Natural Philosophy, published near the end of the seventeenth century and destined to become one of the most influential books ever written, Newton presented a seamless mathematical view of the world that brought together knowledge of the motion of objects on earth and of the distant motions of heavenly bodies.

The Newtonian world was a surprisingly simple one: Using a few key concepts (mass, momentum, acceleration, and force), three laws of motion (inertia, the dependence of acceleration on force and mass, and action and reaction), and the mathematical law of how the force of gravity between all masses depends on distance, Newton was able to give rigorous explanations for motion on the earth and in the heavens. With a single set of ideas, he was able to account for the observed orbits of planets and moons, the motion of comets, the irregular motion of the moon, the motion of falling objects at the earth's surface, weight, ocean tides, and the earth's slight equatorial bulge. Newton made the earth part of an understandable universe, a universe elegant in its simplicity and majestic in its architecture—a universe that ran automatically by itself according to the action of forces between its parts.

Newton's system prevailed as a scientific and philosophical view of the world for 200 years. Its early acceptance was dramatically ensured by the verification of Edmund Halley's prediction, made many years earlier, that a certain comet would reappear on a particular date calculated from Newton's principles. Belief in Newton's system was continually reinforced by its usefulness in science and in practical endeavors, right up to (and including) the exploration of space in the twentieth century. Albert Einstein's theories of relativity—revolutionary in their own right—did not overthrow the world of Newton, but modified some of its most fundamental concepts.

The science of Newton was so successful that its influence spread far beyond physics and astronomy. Physical principles and Newton's mathematical way of deriving consequences from them together became the model for all other sciences. The belief grew that eventually all of nature could be explained in terms of physics and mathematics and that nature therefore could run by itself, without the help or attention of gods—although Newton himself saw his physics as demonstrating the hand of God acting on the universe. Social thinkers considered whether governments could be designed like a Newtonian solar system, with a balance of forces and actions that would ensure regular operation and long-term stability.

Philosophers in and outside of science were troubled by the implication that if everything from stars to atoms runs according to precise mechanical laws, the human notion of free will might be only an illusion. Could all human history, from thoughts to social upheavals, be only the playing out of a completely determined sequence of events? Social thinkers raised questions about free will and the organization of social systems that were widely debated in the eighteenth and nineteenth centuries. In the twentieth century, the appearance of basic unpredictability in the behavior of atoms relieved some of those concerns—but also raised new philosophical questions.



As elaborate and successful as it was, however, the Newtonian world view finally had to undergo some fundamental revisions around the beginning of the twentieth century. Still only in his twenties, German-born Albert Einstein published theoretical ideas that made revolutionary contributions to the understanding of nature. One of these was the special theory of relativity, in which Einstein considered space and time to be closely linked dimensions rather than, as Newton had thought, to be completely different dimensions.

Relativity theory had several surprising implications. One is that the speed of light is measured to be the same by all observers, no matter how they or the source of light happen to be moving. This is not true for the motion of other things, for their measured speed always depends on the motion of the observer. Moreover, the speed of light in empty space is the greatest speed possible—nothing can be accelerated up to that speed or observed moving faster.

The special theory of relativity is best known for asserting the equivalence of mass and energy—that is, any form of energy has mass, and matter itself is a form of energy. This is expressed in the famous equation E = mc², in which E stands for energy, m for mass, and c for the speed of light. Since c is approximately 186,000 miles per second, the transformation of even a tiny amount of mass releases an enormous amount of energy. That is what happens in the nuclear fission reactions that produce heat energy in nuclear reactors, and also in the nuclear fusion reactions that produce the energy given off by the sun.
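A small worked example of the mass-energy relation, using round numbers in SI units:

```python
# Energy equivalent of one gram of mass, E = m * c^2 (SI units).
c = 3.0e8      # speed of light, m/s (approximate)
m = 0.001      # mass in kg, i.e. one gram

E = m * c ** 2
print(f"E = {E:.1e} J")   # 9.0e13 joules
```

Nine times ten to the thirteenth joules from a single gram: roughly the energy of a large nuclear explosion, which is why even the minute mass changes in nuclear reactions release such enormous energies.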

About a decade later, Einstein published what is regarded as his crowning achievement and one of the most profound accomplishments of the human mind in all of history: the theory of general relativity. The theory has to do with the relationship between gravity and time and space, in which Newton's gravitational force is interpreted as a distortion in the geometry of space and time. Relativity theory has been tested over and over again by checking predictions based on it, and it has never failed. Nor has a more powerful theory of the architecture of the universe replaced it. But many physicists are looking for ways to come up with a more complete theory still, one that will link general relativity to the quantum theory of atomic behavior.



The age of the earth was not at issue for most of human history. Until the nineteenth century, nearly everyone in Western cultures believed that the earth was only a few thousand years old, and that the face of the earth was fixed—the mountains, valleys, oceans, and rivers were as they always had been since their instantaneous creation.

From time to time, individuals speculated on the possibility that the earth's surface had been shaped by the kind of slow change processes they could observe occurring; in that case, the earth might have to be older than most people believed. If valleys were formed from erosion by rivers, and if layered rock originated in layers of sediment from erosion, one could estimate that millions of years would have been required to produce today's landscape. But the argument made only very gradual headway until English geologist Charles Lyell published the first edition of his masterpiece, Principles of Geology, early in the nineteenth century. The success of Lyell's book stemmed from its wealth of observations of the patterns of rock layers in mountains and the locations of various kinds of fossils, and from the close reasoning he used in drawing inferences from those data.
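The erosion argument is simple arithmetic. A sketch with purely illustrative numbers, where both the depth and the rate are assumed only to show the scale of the answer:

```python
# If a river deepens its valley at a slow but observable rate, the age
# of the valley follows by division. Both numbers below are assumed
# purely for illustration.
valley_depth_m = 300.0             # assumed depth of the valley
erosion_rate_mm_per_year = 0.05    # assumed rate of downcutting

years = valley_depth_m * 1000 / erosion_rate_mm_per_year
print(f"{years:,.0f} years")       # millions of years, not thousands
```

Any plausible choice of depth and rate leads to the same qualitative conclusion: the landscape demands an earth vastly older than a few thousand years.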

Principles of Geology went through many editions and was studied by several generations of geology students, who came to accept Lyell's philosophy and to adopt his methods of investigation and reasoning. Moreover, Lyell's book also influenced Charles Darwin, who read it while on his worldwide voyages studying the diversity of species. As Darwin developed his concept of biological evolution, he adopted Lyell's premises about the age of the earth and Lyell's style of buttressing his argument with massive evidence.

As often happens in science, Lyell's revolutionary new view that so opened up thought about the world also came to restrict his own thinking. Lyell took the idea of very slow change to imply that the earth never changed in sudden ways—and in fact really never changed much in its general features at all, perpetually cycling through similar sequences of small-scale changes. However, new evidence continued to accumulate; by the middle of the twentieth century, geologists believed that such minor cycles were only part of a complex process that also included abrupt or even cataclysmic changes and long-term evolution into new states.



As soon as fairly accurate world maps began to appear, some people noticed that the continents of Africa and South America looked as though they might fit together, like a giant jigsaw puzzle. Could they once have been part of a single giant landmass that broke into pieces and then drifted apart? The idea was repeatedly suggested, but was rejected for lack of evidence. Such a notion seemed fanciful in view of the size, mass, and rigidity of the continents and ocean basins and their apparent immobility.

Early in the twentieth century, however, the idea was again introduced, by German scientist Alfred Wegener, with new evidence: The outlines of the underwater edges of continents fit together even better than the above-water outlines; the plants, animals, and fossils on the edge of one continent were like those on the facing edge of the matching continent; and—most important—measurements showed that Greenland and Europe were slowly moving farther apart. Yet the idea had little acceptance (and strong opposition) until—with the development of new techniques and instruments—still more evidence accumulated. Further matches of continental shelves and ocean features were found by exploration of the composition and shape of the floor of the Atlantic Ocean, radioactive dating of continents and plates, and study both of deep samples of rocks from the continental shelves and of geologic faults.

By the 1960s, a great amount and variety of data were all consistent with the idea that the earth's crust is made up of a few huge, slowly moving plates on which the continents and ocean basins ride. The most difficult argument to overcome—that the surface of the earth is too rigid for continents to move—had proved incorrect. The earth's hot interior keeps a layer of rock beneath the plates soft enough to flow slowly, and convection currents in that layer move the plates. In the 1960s, continental drift in the form of the theory of plate tectonics became widely accepted in science and provided geology with a powerful unifying concept.
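The measured drift rates, a few centimeters per year, already imply the enormous time scales involved. A back-of-the-envelope sketch with illustrative round numbers:

```python
# At a drift rate of a few centimeters per year, how long would an
# Atlantic-sized ocean take to open? Round numbers for illustration.
atlantic_width_km = 5000
drift_cm_per_year = 2.5

years = atlantic_width_km * 1e5 / drift_cm_per_year   # km -> cm
print(f"about {years:.0e} years")  # on the order of 10^8 years
```

Hundreds of millions of years to open an ocean: motions far too slow to notice in a human lifetime, yet fully consistent with the geologic time scale established by Lyell and his successors.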

The theory of plate tectonics was finally accepted because it was supported by the evidence and because it explained so much that had previously seemed obscure or controversial. Such diverse and seemingly unrelated phenomena as earthquakes, volcanoes, the formation of mountain systems and oceans, the shrinking of the Pacific and the widening of the Atlantic, and even some major changes in the earth's climate can now be seen as consequences of the movement of crustal plates.



For much of human history, fire was thought to be one of the four basic elements—along with earth, water, and air—out of which everything was made. Burning materials were thought to release the fire that they already contained. Until the eighteenth century, the prevailing scientific theory was that when any object burned, it gave off a substance that carried away weight. This view confirmed what people saw: When a heavy piece of wood was burned, all that was left was a residue of light ashes.

Antoine Lavoisier, a French scientist who made most of his discoveries in the two decades after the American Revolution and was later executed as a victim of the French Revolution, conducted a series of experiments in which he accurately measured all of the substances involved in burning, including the gases used and the gases given off. His measurements demonstrated that the burning process was just the opposite of what people thought. He showed that when substances burn, there is no net gain or loss of weight. When wood burns, for example, the carbon and hydrogen in it combine with oxygen from the air to form water vapor and carbon dioxide, both invisible gases that escape into the air. The total weight of materials produced by burning (gases and ashes) is the same as the total weight of the reacting materials (wood and oxygen).

In unraveling the mystery of burning (a form of combustion), Lavoisier established the modern science of chemistry. Its predecessor, alchemy, had been a search for ways to transform matter—especially to turn lead into gold and to produce an elixir that would confer everlasting life. The search resulted in the accumulation of some descriptive knowledge about materials and processes, but it was unable to lead to an understanding of the nature of materials and how they interact.

Lavoisier invented a whole new enterprise based on a theory of materials, physical laws, and quantitative methods. The intellectual centerpiece of the new science was the concept of the conservation of matter: Combustion and all other chemical processes consist of the interaction of substances such that the total mass of material after the reaction is exactly the same as before it.

For such a radical change, the acceptance of the new chemistry was relatively rapid. One reason was that Lavoisier devised a system for naming substances and for describing their reactions with each other. Being able to make such explicit statements was itself an important step forward, for it encouraged quantitative studies and made possible the widespread dissemination of chemical discoveries without ambiguity. Furthermore, burning came to be seen simply as one example of a category of chemical reactions—oxidation—in which oxygen combines with other elements or compounds and releases energy.

Another reason for the acceptance of the new chemistry was that it fit well with the atomic theory developed by English scientist John Dalton after reading Lavoisier's publications. Dalton elaborated on and refined the ancient Greek ideas of element, compound, atom, and molecule—concepts that Lavoisier had incorporated into his system. This mechanism for developing chemical combinations gave even more specificity to Lavoisier's system of principles. It provided the basis for expressing chemical behavior in quantitative terms.

Thus, for example, when wood burns, each atom of the element carbon combines with two atoms of the element oxygen to form one molecule of the compound carbon dioxide, releasing energy in the process. Flames or high temperatures, however, need not be involved in oxidation reactions. Rusting and the metabolism of sugars in the body are examples of oxidation that occurs at room temperature.
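The bookkeeping behind conservation of matter can be shown with atomic masses. A sketch using methane as a stand-in fuel (far simpler than the actual chemistry of wood) and approximate standard atomic masses:

```python
# Mass balance for a simple combustion reaction:
#   CH4 + 2 O2 -> CO2 + 2 H2O
# Approximate atomic masses in atomic mass units.
H, C, O = 1.008, 12.011, 15.999

mass_reactants = (C + 4 * H) + 2 * (2 * O)        # methane + oxygen
mass_products = (C + 2 * O) + 2 * (2 * H + O)     # carbon dioxide + water

print(mass_reactants, mass_products)  # equal: the same atoms, regrouped
```

Both sides total the same mass because the reaction merely regroups the same atoms, which is Lavoisier's conservation principle in quantitative form.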

In the two centuries since Lavoisier and Dalton, the system has been vastly extended to account for the configuration taken by atoms when they bond to one another and to describe the inner workings of atoms that account for why they bond as they do.



A new chapter in our understanding of the structure of matter began at the end of the nineteenth century with the accidental discovery in France that a compound of uranium somehow darkened a wrapped and unexposed photographic plate. Thus began a scientific search for an explanation of this "radioactivity." The pioneer researcher in the new field was Marie Curie, a young Polish-born scientist married to French physicist Pierre Curie. Believing that the radioactivity of uranium-bearing minerals resulted from very small amounts of some highly radioactive substance, Marie Curie attempted, in a series of chemical steps, to produce a pure sample of the substance and to identify it. Her husband put aside his own research to help in the enormous task of separating out an elusive trace from an immense amount of raw material. The result was their discovery of two new elements, both highly radioactive, which they named polonium and radium.

The Curies, who won the Nobel Prize in physics for their research in radioactivity, chose not to exploit their discoveries commercially. In fact, they made radium available to the scientific community so that the nature of radioactivity could be studied further. After Pierre Curie died, Marie Curie continued her research, confident that she could succeed despite the widespread prejudice against women in physical science. She did succeed: She won the 1911 Nobel Prize in chemistry, becoming the first person to win a second Nobel Prize.

Meanwhile, other scientists with better facilities than Marie Curie had available were making major discoveries about radioactivity and proposing bold new theories about it. Ernest Rutherford, a New Zealand-born British physicist, quickly became the leader in this fast-moving field. He and his colleagues discovered that naturally occurring radioactivity in uranium consists of a uranium atom emitting a particle that becomes an atom of the very light element helium, and that what is left behind is no longer a uranium atom but a slightly lighter atom of a different element. Further research indicated that this transmutation was one of a series ending up with a stable isotope of lead. Radium was just one element in the radioactive series.

This transmutation process was a turning point in scientific discovery, for it revealed that atoms are not actually the most basic units of matter; rather, atoms themselves have a structure, consisting of a small, massive nucleus made up of protons and neutrons, surrounded by light electrons. Radioactivity changes the nucleus, whereas chemical reactions affect only the outer electrons.

But the uranium story was far from over. Just before World War II, several German and Austrian scientists showed that when uranium is irradiated by neutrons, isotopes of various elements are produced that have about half the atomic mass of uranium. They were reluctant to accept what now seems the obvious conclusion—that the nucleus of uranium had been induced to split into two roughly equal smaller nuclei. This conclusion was soon proposed by Austrian-born physicist and mathematician Lise Meitner and her nephew Otto Frisch, who introduced the term "fission." They noted, consistent with Einstein's special relativity theory, that if the fission products together had less mass than the original uranium atom, enormous amounts of energy would be released.

Because fission also releases some extra neutrons, which can induce more fissions, it seemed possible that a chain reaction could occur, continually releasing huge amounts of energy. During World War II, a U.S. scientific team led by Italian-born physicist Enrico Fermi demonstrated that if enough uranium were piled together—under carefully controlled conditions—a chain reaction could indeed be sustained. That discovery became the basis of a secret U.S. government project set up to develop nuclear weapons. By the end of the war, the power of an uncontrolled fission reaction had been demonstrated by the explosion of two U.S. fission bombs over Japan. Since the war, fission has continued to be a major component of strategic nuclear weapons developed by several countries, and it has been widely used in the controlled release of energy for transformation into electric power.
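The arithmetic of a chain reaction is simple exponential growth. A toy sketch, in which the function and numbers are purely illustrative rather than actual reactor physics:

```python
# If each fission triggers, on average, k further fissions, the count
# per generation grows when k > 1, holds steady when k = 1, and dies
# away when k < 1. Purely illustrative numbers.
def generations(k, n0=1, steps=10):
    counts, n = [], n0
    for _ in range(steps):
        counts.append(n)
        n *= k
    return counts

print(generations(2))    # supercritical: doubles every generation
print(generations(0.5))  # subcritical: fades toward zero
```

Keeping the average number of induced fissions near one is what "carefully controlled conditions" meant in Fermi's pile: above one, the reaction runs away; below one, it dies out.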



The intellectual revolution initiated by Darwin sparked great debates. At issue scientifically was how to explain the great diversity of living organisms and of previous organisms evident in the fossil record. The earth was known to be populated with many thousands of different kinds of organisms, and there was abundant evidence that there had once existed many kinds that had become extinct. How did they all get here? Prior to Darwin's time, the prevailing view was that species did not change, that since the beginning of time all known species had been exactly as they were in the present. Perhaps, on rare occasions, an entire species might disappear owing to some catastrophe or by losing out to other species in the competition for food; but no new species could appear.

Nevertheless, in the early nineteenth century, the idea of evolution of species was starting to appear. One line of thought was that organisms would change slightly during their lifetimes in response to environmental conditions, and that those changes could be passed on to their offspring. (One view, for example, was that by stretching to reach leaves high on trees, giraffes—over successive generations—had developed long necks.) Darwin offered a very different mechanism of evolution. He theorized that inherited variations among individuals within a species made some of them more likely than others to survive and have offspring, and that their offspring would inherit those advantages. (Giraffes who had inherited longer necks, therefore, would be more likely to survive and have offspring.) Over successive generations, advantageous characteristics would crowd out others and, under some circumstances, give rise to new species.

Darwin presented his theory, together with a great amount of supporting evidence collected over many years, in a book entitled Origin of Species, published in the mid-nineteenth century. Its dramatic effect on biology can be traced to several factors: The argument Darwin presented was sweeping, yet clear and understandable; his line of argument was supported at every point with a wealth of biological and fossil evidence; his comparison of natural selection to the "artificial selection" used in animal breeding was persuasive; and the argument provided a unifying framework for guiding future research.

The scientists who opposed the Darwinian model did so because they disputed some of the mechanisms he proposed for natural selection, or because they believed that it was not predictive in the way Newtonian science was. By the beginning of the twentieth century, however, most biologists had accepted the basic premise that species gradually change, even though the mechanism for biological inheritance was still not altogether understood. Today the debate is no longer about whether evolution occurs but about the details of the mechanisms by which it takes place.

In the general public, there are some people who altogether reject the concept of evolution—not on scientific grounds but on the basis of what they take to be its unacceptable implications: that human beings and other species have common ancestors and are therefore related; that humans and other organisms might have resulted from a process that lacks direction and purpose; and that human beings, like the lower animals, are engaged in a struggle for survival and reproduction. And for some people, the concept of evolution violates the biblical account of the special (and separate) creation of humans and all other species.

At the beginning of the twentieth century, the work of Austrian experimenter Gregor Mendel on inherited characteristics was rediscovered after having passed unnoticed for many years. It held that the traits an organism inherits do not result from a blending of the fluids of the parents but from the transmission of discrete particles—now called genes—from each parent. If organisms have a large number of such particles and some process of random sorting occurs during reproduction, then the variation of individuals within a species—essential for Darwinian evolution—would follow naturally.
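The idea that random sorting of discrete particles yields variation can be illustrated with a toy simulation; the names and numbers here are invented purely for illustration:

```python
import random

# Each offspring receives one randomly chosen copy of each gene from
# each parent; variation among offspring then follows automatically.
def offspring(parent_a, parent_b):
    return [(random.choice(pa), random.choice(pb))
            for pa, pb in zip(parent_a, parent_b)]

# Two parents, five genes each, with two alleles ("copies") per gene
a = [("A", "a")] * 5
b = [("B", "b")] * 5
print(offspring(a, b))
print(offspring(a, b))  # usually a different combination
```

With only five genes there are already hundreds of possible combinations; with thousands of genes, no two offspring are likely to be alike, supplying exactly the individual variation Darwin's mechanism requires.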

Within a quarter of a century of the rediscovery of Mendel's work, discoveries with the microscope showed that genes are organized in strands that split and recombine in ways that furnish each egg or sperm cell with a different combination of genes. By the middle of the twentieth century, genes had been found to be part of DNA molecules, which control the manufacture of the essential materials out of which organisms are made. Study of the chemistry of DNA has brought dramatic chemical support for biological evolution: The genetic code found in DNA is the same for almost all species of organisms, from bacteria to humans.



Throughout history, people have created explanations for disease. Many diseases have been seen as being spiritual in origin—a punishment for a person's sins or as the capricious behavior of gods or spirits. From ancient times, the most commonly held biological theory was that illness was attributable to some sort of imbalance of body humors (hypothetical fluids that were described by their effects, but not identified chemically). Hence, for thousands of years the treatment of disease consisted of appealing to supernatural powers through offerings, sacrifice, and prayer, or of trying to adjust the body humors by inducing vomiting, bleeding, or purging. However, the introduction of germ theory in the nineteenth century radically changed the explanation of what causes diseases, as well as the nature of their treatment.

As early as the sixteenth century, there was speculation that diseases had natural causes and that the agents of disease were external to the body, and therefore that medical science should consist of identifying those agents and finding chemicals to counteract them. But no one suspected that some of the disease-causing agents might be invisible organisms, since such organisms had not yet been discovered, or even imagined. The improvement of microscope lenses and design in the seventeenth century led to discovery of a vast new world of microscopically small plants and animals, among them bacteria and yeasts. The discovery of those microorganisms, however, did not suggest what effects they might have on humans and other organisms.

The name most closely associated with the germ theory of disease is that of Louis Pasteur, a French chemist. The connection between microorganisms and disease is not immediately apparent—especially since (as we know now) most microorganisms do not cause disease and many are beneficial to us. Pasteur came to the discovery of the role of microorganisms through his studies of what causes milk and wine to spoil. He proved that spoilage and fermentation occur when microorganisms enter milk and wine from the air, multiplying rapidly and producing waste products. He showed that food would not spoil if microorganisms were kept out of it or if they were destroyed by heat.

Turning to the study of animal diseases to find practical cures, Pasteur again showed that microorganisms were involved. In the process, he found that infection by disease organisms—germs—caused the body to build up an immunity against subsequent infection by the same organisms, and that it was possible to produce vaccines that would induce the body to build immunity to a disease without actually causing the disease itself. Pasteur did not actually demonstrate rigorously that a particular disease was caused by a particular, identifiable germ; that work was soon accomplished, however, by other scientists.

The consequences of the acceptance of the germ theory of disease were enormous for both science and society. Biologists turned to the identification and investigation of microorganisms, discovering thousands of different bacteria and viruses and gaining a deeper understanding of the interactions between organisms. The practical result was a gradual change in human health practices—the safe handling of food and water; pasteurization of milk; and the use of sanitation measures, quarantine, immunization, and antiseptic surgical procedures—as well as the virtual elimination of some diseases. Today, the modern technologies of high-power imaging and biotechnology make it possible to investigate how microorganisms cause disease, how the immune system combats them, and even how they can be manipulated genetically.



The term "Industrial Revolution" refers to a long period in history during which vast changes occurred in how things were made and in how society was organized. The shift was from a rural handicraft economy to an urban, manufacturing one.

The first changes occurred in the British textile industry in the eighteenth century. Until then, fabrics were made in homes, using essentially the same techniques and tools that had been used for centuries. The machines—like all of the tools of the time—were small, handmade, and powered by muscle, wind, or running water. That picture was radically and irreversibly changed by a series of inventions for spinning and weaving and for using energy resources. Machinery replaced some human crafts; coal replaced humans and animals as the source of power to run machines; and the centralized factory system replaced the distributed, home-centered system of production.

At the heart of the Industrial Revolution was the invention and improvement of the steam engine. A steam engine is a device for changing chemical energy into mechanical work: Fuel is burned, and the heat it gives off is used to turn water into steam, which in turn is used to drive wheels or levers. Steam engines were first developed by inventors in response to the practical need to pump floodwater out of coal and ore mines. After Scottish inventor James Watt greatly improved the steam engine, it also quickly came to be used to drive machines in factories; to move coal in coal mines; and to power railroad locomotives, ships, and later the first automobiles.

The Industrial Revolution happened first in Great Britain—for several reasons: the British inclination to apply scientific knowledge to practical affairs; a political system that favored industrial development; availability of raw materials, especially from the many parts of the British Empire; and the world's greatest merchant fleet, which gave the British access to additional raw materials (such as cotton and wood) and to huge markets for selling textiles. The British also had experienced the introduction of innovations in agriculture, such as cheap plows, which made it possible for fewer workers to produce more food, freeing others to work in the new factories.

The economic and social consequences were profound. Because the new machines of production were expensive, they were accessible mainly to people with large amounts of money, which excluded most families. Workshops outside the home that brought workers and machines together grew into factories—first in textiles and then in other industries. Unlike the traditional crafts, which required skills learned through long apprenticeship, the new machines could be tended by relatively unskilled workers, so surplus farm workers and children could be employed for wages.

The Industrial Revolution spread throughout Western Europe and across the Atlantic to North America. Consequently, the nineteenth century was marked in the Western world by increased productivity and the ascendancy of the capitalistic organization of industry. The changes were accompanied by the growth of large, complex, and interrelated industries, by rapid growth in total population, and by a shift of people from rural areas to cities. There arose a growing tension between, on the one hand, those who controlled and profited from production and, on the other hand, the laborers who worked for wages that were barely enough to sustain life. To a substantial degree, the major political ideologies of the twentieth century grew out of the economic manifestations of the Industrial Revolution.

In a narrow sense, the Industrial Revolution refers to a particular episode in history. But looked at more broadly, it is far from over. From its beginnings in Great Britain, industrialization spread to some parts of the world much faster than to others, and is only now reaching some. As it reaches new countries, its economic, political, and social effects have usually been as dramatic as those that occurred in nineteenth-century Europe and North America, but with details shaped by local circumstances.

Moreover, the revolution expanded beyond steam power and the textile industry to incorporate a series of new technological developments, each of which has had its own enormous impact on how people live. In turn, electric, electronic, and computer technologies have radically changed transportation, communications, manufacturing, and health and other technologies; have changed patterns of work and recreation; and have led to greater knowledge of how the world works. (The pace of change in newly industrializing countries may be even greater because the successive waves of innovation arrive more closely spaced in time.) In its own way, each of these continuations of the Industrial Revolution has exhibited the inevitable and growing interdependence of science and technology.


Copyright © 1989, 1990 by American Association for the Advancement of Science