Proceedings of the First AAAS Technology Education Research Conference

Towards a Research Agenda

Andrew Ahlgren
Project 2061
American Association for the Advancement of Science

On at least two occasions I have been involved in research-agenda conferences in science education. In both, the output was to endorse everything that anyone was already doing. It is entirely possible to spend a career doing research for the fun of it (or the publication credit) without seriously considering how it might be useful—or what might be more useful. Because there are nowhere near enough time and resources for all the research we can think of, or even all that we would like to do, not having priorities is tantamount to saying that the research doesn’t really matter much.

A model in science education?

Science-education research has been far more voluminous than technology-education research, but is not necessarily a good model. The mass of the research before 1970 was of very little value, because educators just had not recognized the most powerful tool for understanding teaching and learning: what and how students think and learn. The vigorous "quantitative optimism" of researchers was premature, for they had not yet identified the important variables to study. Method-A-vs.-method-B studies had little hope of shedding light on learning (and usually resulted in "no significant difference"). Just finding out what students already think is difficult.

The source of students’ difficulties in learning even one idea or skill can be very puzzling; researchers find that it may take a year or more of studying students to figure out just what the problem is. And even then there is no guarantee that the nail has been hit on the head. Moreover, identifying common problems in student thinking is not tantamount to knowing what to do about them. Researchers who think they have uncovered a problem and modify instruction to ameliorate it are often frustrated by the resilience of students’ naïve ideas.

Yet, it should be said, lack of progress did not keep science-education researchers from flourishing professionally. Camps sprang up, journals proliferated, national conferences were established, and a cultural identity evolved, all with little enlightenment on how to teach students well. A good example: studies of the value of laboratory work in school science have been going on for over a hundred years, and the literature is a mostly incoherent collection of different goals, different settings, different measures, and conflicting results. By and large, no benefits (in understanding ideas, appreciating the nature of science, or developing positive attitudes toward learning science) have been demonstrated other than "finger skills." (For example, the skill of pouring water from one test tube into another will obviously improve with practice.) No reliable answer yet.

So even good research does not point inexorably to good instructional practice. Nevertheless, some excellent research has been done in the last two decades or so on how students understand topics in science and mathematics. It is this aspect of science-education research that can be most profitably copied in technology education.

Consider, for example, this passage from the Benchmarks for Science Literacy (1993) chapter "The Research Base":

Newton's laws of motion. Students believe constant speed needs some cause to sustain it. In addition, students believe that the amount of motion is proportional to the amount of force; that if a body is not moving, there is no force acting on it; and that if a body is moving there is a force acting on it in the direction of the motion (Gunstone & Watts, 1985). Students also believe that objects resist acceleration from the state of rest because of friction—that is, they confound inertia with friction (Jung et al., 1981; Brown & Clement, 1992). Students tend to hold onto these ideas even after instruction in high-school or college physics (McDermott, 1983). Specially designed instruction does help high-school students change their ideas (Brown & Clement, 1992; Minstrell, 1989; Dykstra et al., 1992).

Research has shown less success in changing middle-school students' ideas about force and motion (Champagne, Gunstone & Klopfer, 1985). Nevertheless, some research indicates that middle-school students can start understanding the effect of constant forces to speed up, slow down, or change the direction of motion of an object. This research also suggests it is possible to change middle-school students' belief that a force always acts in the direction of motion (White & Horwitz, 1987; White, 1990). (p. 339)

This passage illustrates some important aspects of a research program: multiple researchers, in multiple institutions, are involved in studying the same important concept; the study is sustained over time; and cognitive investigation is supplemented by trials of instruction that take account of it (and typically lead to more of it). Another important point is evident here: educators often assume that things can’t be too bad, since students are, after all, learning some things well enough; in science, at least, that assumption is obviously faulty.

The text of Benchmarks for Science Literacy—which includes specific learning goals in the chapters "The Nature of Technology," "The Designed World," "Common Themes," and "Habits of Mind"—can be found at the AAAS Project 2061 web site.

Some simplistic examples of coherent agendas for research

So what agendas for research in technology education might make sense? The main premises are these: there are far more interesting research projects than there are researchers and time to carry them out, and we can expect useful results only if we focus on a limited set of strategies. So tough decisions have to be made if there is to be any substantial progress. Here are some admittedly simplistic examples of what coherent agendas for focusing research might look like:

A. Replicate promising findings in many different labs and contexts, to see how generally valid they are. (This is in contrast to everyone choosing his or her own favorite, and perhaps unique, topics, activities, and measures.)

B. Just the opposite. Cover the widest possible range of research questions that can be imagined. (Little progress would likely be made for a long time in any of them, but general patterns for methods could be scoped out.)

C. Track students over multiple years, to be able to describe how student ideas and skills develop over time—and get hints about what may have helped. (This costs time, and delays research papers, but may in the long run be a very valuable direction.)

All of these examples assume that student learning is by far the highest priority to study. (If we don’t know how students learn, then all the potentially supporting attention to attitudes, variations on design activities, curriculum surveys, professional development, and social-setting research seems pretty pointless.) But this was only an exercise, not yet a proposal for any particular research agenda.

Clearly, it will not be acceptable to endorse doing all the things we are already doing, and maybe more besides. No doubt multiple methods can be helpful. Case studies, for example, are probably going to be more useful for a while than statistical surveys of whatever happens to be done now. But case studies should not be driven by fondness for particular activities, as seems most often the case now, but rather by how students learn particular ideas and skills. The rationale is not "Here is this activity I like, so I’ll study how it works" but rather "Here is this skill I think is important, and I’ll study how it is possible for students to learn it." In either case, particular activities will almost certainly have to be involved, but the perspective on them will be very different.