Proceedings of the Second AAAS Technology Education Research Conference

Setting Research Agendas in Science, Mathematics, and Technology Education: The National Research Council's How People Learn Report

James W. Pellegrino

University of Illinois at Chicago

Thank you for inviting me to address this group. I must start by admitting that I am not in the field of technology education. Rather, I am a cognitive psychologist who has worked in the fields of mathematics and science education on the design of technology-enhanced learning environments in collaboration with my colleagues at the Vanderbilt Learning Technology Center. I have also had the opportunity, over the last several years, to work on projects with the National Research Council which led to published reports on how people learn and how to think about issues of assessment. What I have tried to do for this presentation is to bring some perspectives from the 1999 How People Learn (HPL) reports as they apply to general research issues that cut across multiple areas of education. I have had the chance to read the summary of your last conference as well as the piece that Fernando Cajas wrote on directions for research. I have also looked at the benchmarks in the science literacy maps for technology. These have helped me structure the presentation to match some of the issues you are confronting. What I hope to do today is to provoke some thinking about the elements of a research agenda in technology education. Much of what I have to say will be consistent with ideas that I found in the aforementioned reports. Hopefully my remarks will help frame things in a way that reinforces some of the discussion you have been having while also contributing to some new thinking.

 

My presentation has four different parts. The first part provides a general perspective on where knowledge about How People Learn fits relative to overall issues of curriculum, instruction, and assessment. The second part looks at what we know from research on learning and considers what it implies for questions that need to be addressed in technology education research. In the third part, I want to discuss using the HPL principles for the design of powerful learning environments. If I have the time, I will say a bit about a project that I am involved in at Vanderbilt in the area of bioengineering and biomedical engineering education. It is a domain example that helps concretize some of the more general ideas about the design of learning environments. I think it has relevance to what you are trying to do in technology education. And finally, I will cover issues that go beyond what we currently know, focusing on how to go about connecting research to educational practice using How People Learn as an organizing schema.

 

So, let me start with the first general issue. I think it is very important to realize that no matter what field of education we are discussing, there are three interconnected things that should always concern us. They constitute the C-I-A triad: curriculum, instruction, and assessment. In many educational endeavors we deal with these in terms of separate pairs. For example, we go back and forth between defining what the curriculum is supposed to be in a given area and asking how instruction matches that curriculum, or alternatively we define the curriculum and then ask what the assessment should be. Or perhaps we ask how assessment lines up with instruction. In many instances in science education, math education, and other curricular areas, it appears that we end up racing around the edges of this triangle, with the result that these three things are often poorly coordinated. It is also very important to recognize what is implicit in the middle of the C-I-A triangle. What sits there is some theory of learning and knowing, whether we can articulate it or not. That theory of learning and knowing impacts how we frame the curriculum. It impacts the methods of instruction we use, and it impacts, in one fashion or another, the forms of assessment we employ.

 

As an example, we can consider some of the consequences of the learning theory that dominated much of the 20th century. I am referring to the behaviorist-associationist learning theory which came to dominate a great deal of curriculum development, instructional method, and assessment. This approach treated knowledge in a generic form, and it conceptualized knowledge in terms of discrete elements. The consequence was that the curriculum often came to be construed as a set of separate pieces. Essentially, you assemble a curriculum by identifying the multiple elements, and you mix and match them to put it together. In terms of instruction, the model was basically knowledge telling, what is often called a transmission model, and learning was viewed as a process of absorption: I tell, you listen, you learn. A great little example of this approach is a favorite piece of video from the movie Ferris Bueller's Day Off:

 

In 1930, the Republican-controlled House of Representatives, in an effort to alleviate the effects of the... anyone? anyone?... the Great Depression, passed the... anyone? anyone?... a tariff bill, the Hawley-Smoot Tariff Act, which... anyone?... raised or lowered?... raised tariffs in an effort to collect more revenue for the federal government. Did it work? Anyone? Anyone know the effects? It did not work, and the United States sank deeper into the Great Depression.

 

Of course, none of us teaches like that. We all have a much more enlightened and interactive model of instruction. The third concern is what the behaviorist theoretical framing about learning led to in terms of assessment. Assessment at the classroom level, as well as in the form of standardized tests, relies on a simple statistical and conceptual scheme. Problems are treated as independent entities, and statistical item difficulty is the primary concern, not what the item is supposedly assessing conceptually. Scores are often computed on unidimensional item response theory scales designed to maximally differentiate among persons, because the underlying scaling/measurement conception presumes differences in how much knowledge people have.
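
To see what that scaling conception amounts to, consider the one-parameter (Rasch) item response model, which I offer only as a representative sketch of this family of models, not as the particular model any given test uses. The probability that person \(j\) answers item \(i\) correctly depends on a single proficiency \(\theta_j\) and a single item difficulty \(b_i\):

\[
P(X_{ij} = 1 \mid \theta_j, b_i) = \frac{e^{\theta_j - b_i}}{1 + e^{\theta_j - b_i}}
\]

Notice what such a model assumes: a person's knowledge is one number, items differ only in difficulty, and nothing in the machinery cares what any item assesses conceptually.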

 

These are some of the consequences, whether we realize it or not, of the kind of theory that tended to dominate much of education throughout the 20th century. I would argue that such a theory is still prevalent today in many classrooms, including many university classrooms. For example, if you engage faculty members in various disciplines in a discussion of their underlying epistemology of teaching and learning, you may well discover that it is not very far from what I have just characterized. That is sad given what we know to the contrary.

 

So, let me now turn to what we do know about How People Learn and its implications for instruction. I am going to be drawing upon ideas from both the original report, How People Learn: Brain, Mind, Experience, and School (Bransford, Brown, and Cocking, 1999), and the subsequent report, How People Learn: Bridging Research and Practice (Donovan, Bransford, and Pellegrino, 1999). The two reports have been put together in an expanded edition which is currently available from the National Academy Press.

 

What are the implications of contemporary knowledge about how people learn? I will argue that we want to put domain-based models of learning and understanding in the middle of the C-I-A triangle rather than a generic, associationist, behaviorist view. Those models should drive how we conceptualize the curriculum, the methods of instruction, how those two are aligned, and also the kinds of assessments that are appropriate relative to our curricular goals and instructional strategies. A theory of HPL really has to do with all the C-I-A correspondences and their interaction. Everything should be mediated through knowledge of HPL in a domain rather than going around the edges of the C-I-A triangle.

 

An interesting question for us to answer is what is the status of knowledge about how people learn in the field of technology and technology education? What I can infer from various reports is that the field is grappling with the fact that the knowledge base is not as well developed as you need it to be. For example, I found it interesting that there were relatively few research citations in the Project 2061 Benchmarks for Science Literacy (AAAS, 1993) or in the science literacy maps for the area of technology (Atlas of Science Literacy, AAAS, 2001). This is in contrast to many of the other areas that were discussed in the Project 2061 Benchmarks. That is an indication of the state of empirical knowledge about how people acquire the core concepts in the field of technology.

 

The real core of the research agenda in technology education, or in any instructional field for that matter, is derived from study of the nature of competence and the development of expertise in particular areas of the larger intellectual domain. In any of the areas that we might consider studying, what we know about expertise and the development of competence must be driven by analyses that are specific to that domain. Fortunately, when we summarize across multiple areas we can see general characteristics of learning and knowing that are important. Given that this is the case, I will summarize some of the main ideas about How People Learn and the questions they yield for research that needs to be done in the field of technology education.

 

We begin with the matter of knowledge organization. We know from studies of multiple disciplines that effective knowledge organization in any area of inquiry means that individuals have a deep foundation of knowledge: factual knowledge and procedural knowledge. But more importantly, it is not just how much knowledge or what knowledge one has, but how it is organized into conceptual frameworks and schemas that facilitate retrieval and application in contexts of use. So, the question in the field of technology is: what defines the key conceptual knowledge and schemas for areas of technology and technology education? To what extent are the 2061 science literacy maps an adequate starting point? How should they be expanded to capture core concepts?

 

Another issue has to do with expertise and its development. We know that experts have well-organized knowledge. We talk about it as organized in ways to support understanding. It is conditionalized for use and highly tailored to the conditions in which it is intended to be applied. That is what makes it so useful. It is one of the reasons why experts often are very poor at communicating their knowledge: they often fail to recognize how thoroughly it is contextualized, proceduralized, and conditionalized. We know that experts have fluent access to their knowledge. What they do is recognize patterns and chunks. Whereas individuals lacking expertise see bits and pieces, experts see whole patterns. Experts also develop domain-specific problem-solving strategies. We know that expertise is not based on general problem-solving strategies. Those are weak methods.

 

Expertise is based on domain-specific strategies and schemas which are essential to success. Another thing that we often forget is that expertise is acquired over time. This requires multiple contextualized experiences. The contextualization supports a process of generalization and discrimination that allows knowledge to become appropriately conditionalized.

 

What are examples of expertise and what are the consequences in the technology domain? First, can we define what expertise looks like? Second, what is the relevance of doing so for how we would set up technology education? What assumptions can we make about the conditions necessary to support an appropriate course of acquiring expertise in specific areas of technology?

 

A third issue has to do with the concept of metacognition. One of the characteristics of expertise is that competent performers consciously monitor their own thinking. They adjust their understanding to the local conditions, and while learning they are constantly checking whether they are understanding things or not, and whether they are making progress. We know that self-aware learners can explain which strategies they are using and why they are using them, whereas poor learners and less competent students often monitor their thinking sporadically and ineffectively. One of the most interesting things about metacognition is that it often takes the form of a kind of internal conversation one has with oneself. Many students do not realize that proficient learners are using these metacognitive strategies and, as a result, the strategies often have to be modeled for the less expert.

 

The question then in the area of technology is: how does metacognition develop in specific areas of technology? What does this monitoring look like? What is specific about technology as a domain? While we can talk about metacognition in terms of general strategies, individuals must develop domain-specific metacognitive monitoring processes. So, if you are solving certain kinds of problems in physics, you are monitoring your work in a different way than if you are doing something in history or in literature. If you have to acquire domain-specific metacognitive rules and strategies, which ones are appropriate to areas of technology?

 

Now I want to turn to another issue, that of multiple paths to competence. This may come as no great surprise to you, but not all children learn in the same way or follow the same paths to competence. We know that children's problem-solving strategies and schemas tend to become more effective and proficient over time and with practice. What we also know from studies in mathematics and other curricular areas is that the growth process is not a simple uniform progression. It is certainly not a transition from an error-prone state to a completely accurate state. Students often go through all sorts of intermediate stages. They use partially correct strategies. They invent things. One of the implications is that we need to know where students are in terms of this progression, particularly in areas of problem solving. What does this look like in areas of technology education? What patterns exist in the growth of understanding and competency? For example, the Project 2061 literacy maps (AAAS, 2001) are an attempt to lay out a kind of progression. Are they real? What do things really look like as students move along the developmental continuum? There are methods for doing such work, such as microgenetic analysis. Researchers who use this approach do very rich and detailed analyses of what students are thinking as they are acquiring knowledge about certain parts of the curriculum. I suspect that few such analyses exist for aspects of technology.

 

Now let us consider another topic, that of preconceptions and mental models. We know from studies of physics, history, and a host of other fields that students come to the classroom with various models of how the world works; students are anything but blank slates. Often those mental models are partially correct. Many times they are seriously flawed. If we do not get at these forms of student understanding, get them out on the table so to speak, we may end up with findings such as the demonstration that students can pass a course in physics at Harvard but then exhibit the same misconceptions that students have in the sixth grade. The successful college students behave as if they have two different conceptual systems. There is the physics of the classroom at Harvard, and there is the physics of the real world, and the two have no obvious reciprocal connection.

 

What preconceptions and mental models apply to the domain of technology? What do people believe about the concept of constraints? What do people believe about systems or design, and to what extent are those ideas correct or incorrect? Do they hold beliefs or possess representations that we can build on? Must we systematically intervene and modify those states of knowledge because they include misconceptions that will get in the way of what we want people to understand? Which of these things are really serious concerns for future learning in the area of technology?

 

It may come as a surprise to many people that in the domain of history one of the impediments to learning what history is all about is that students have a wrong conception of the domain. Either they believe that it is just a set of pre-established facts, and everybody agrees on the facts, or that history is a conflict between the good guys and the bad guys. It is just a matter of figuring out who the good guys are and who the bad guys are. In history, physics, biology, math, and other subject areas, we have to determine where and how individuals are developing these preconceptions and how to modify them through the course of instruction.

 

The last issue about How People Learn I will mention is something we have only come to appreciate much more deeply in the last 10 or 15 years. It is the fact that our knowledge is very much situated in context. I will talk about two features of situated knowing. One is the fact that knowledge often develops in highly contextualized and inflexible forms, which means that it does not transfer very easily or very effectively. Transfer depends on the development of an explicit understanding of when to apply what we know. It is not just that you know something; it is knowing when to use it. What then constitutes evidence of transfer in technology education? How context-bound is the knowledge base in this area? And how do current educational practices constrain transfer? We need to look at the ways in which individuals acquire their knowledge of technology and whether the process is contextualizing their knowledge in particular ways that we do not intend.

 

The other part of situated knowledge and expertise is the fact that individuals do not just learn on their own; rather, they learn in larger social contexts. There are important relationships among learners and the contexts in which they learn which define parts of knowing and expertise. We can study any expert group and discover that they are part of a community of practice. They have certain critical and shared ways in which they think about things within their field. They have certain tools and approaches. Experts learn these through interaction with their peers. They build communities of practice and these practices are critical to defining what it means to know something. Thus, the issue in a field or sub-field of technology is what are the communal and participatory practices? What are the rules that constitute knowing and behaving effectively in the field of technology and technology education? What are the tools that people must learn to use for participating effectively in that community? How is community established in this field?

 

Clearly there are many questions and issues in the field of technology education that derive from important topics about how people learn. If we are going to put knowledge of how people learn at the core of curriculum, instruction, and assessment, we have to have considerable knowledge about domain-based models of learning and understanding for those areas of technology that are of key interest. This is a major part of the core research agenda.

 

Let me mention quickly some further implications of HPL for the curriculum, instruction, and assessment triad. First, focusing on the core knowledge base allows us to transcend false dichotomies about the goals of instruction, such as teaching facts versus thinking skills. Basically, it causes us to focus on determining what the key concepts and skills are and how they are organized. What are the big ideas? It also allows us to get past the debates about best instructional strategies. If we focus on what we want people to know, then there are a variety of strategies that can be used depending upon the aspects of knowledge being emphasized. There is not one single best way to teach. Rather, there is a multitude of approaches that can be mapped against what we want students to learn in light of their current level of understanding. This avoids debates that occur all the time in the field of education.

 

Consider what has to be one of the most frustrating situations for those of us who work with technology in education. This is when somebody asks whether computers work to improve learning. It is the wrong question, not unlike asking whether a book works to improve learning. It assumes that the tool is a generic solution for producing better learning. We should be asking under what circumstances a computer, or any other form of technology such as a book, facilitates learning. There are a variety of ways technology can do so. But people want to ask the simple question, partly because they want to know whether they should invest in the technology. They do not want to deal with the complexity associated with issues regarding the nature of knowledge and instruction.

 

We have also come to understand how knowledge of how people learn helps us to think about what to assess, how to assess it, and why assessment is so critical to the learning process. I want to offer a general way to think about assessment that builds from this base of understanding.

 

Assessment can always be construed as a process of trying to reason from evidence. Whenever we are engaged in a situation of educational assessment, we are trying to reason from the evidence of what somebody says or does and infer what they know and understand. There are three elements to this process. First, we need to have a domain-based model of learning and understanding. That is the conceptual base that defines what it is we are trying to make inferences about. That leads to obtaining certain kinds of observations that can be mapped against the conceptual model. But that is not enough. You need an interpretation scheme. Sometimes interpretation requires a complex statistical or computational scheme to see how the data and observations match up with the underlying conceptual model. Intelligent tutoring systems are wonderful examples of the marriage of these three elements. In intelligent tutors there is often a rich domain-based model. There is a set of tasks that students perform, and underlying everything is an elaborate statistical model, sometimes a Bayesian inference net, that assists in judging the extent to which an individual's actions match up against the underlying model. This guides decisions about where instruction should go next.
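
To make this concrete, here is a minimal sketch of one simple member of this family of models, Bayesian knowledge tracing. I should stress that no particular tutoring system is being quoted here; the parameter values and the helper function below are hypothetical, chosen only to show what reasoning from evidence looks like computationally.

```python
# A minimal sketch of Bayesian knowledge tracing (BKT), one simple
# member of the family of statistical models used in intelligent
# tutors. All parameter values here are hypothetical, chosen only
# to illustrate reasoning from evidence; they are not drawn from
# any particular tutoring system.

def bkt_update(p_know, correct, p_slip=0.10, p_guess=0.20, p_learn=0.15):
    """Update the estimated probability that a student knows a skill,
    given one observed response (the evidence)."""
    if correct:
        # P(knew the skill | correct answer), via Bayes' rule
        num = p_know * (1 - p_slip)
        den = num + (1 - p_know) * p_guess
    else:
        # P(knew the skill | incorrect answer)
        num = p_know * p_slip
        den = num + (1 - p_know) * (1 - p_guess)
    p_given_evidence = num / den
    # Allow for learning between practice opportunities
    return p_given_evidence + (1 - p_given_evidence) * p_learn

# A run of observed responses updates the model of what the student
# knows, and that estimate can guide where instruction goes next.
p = 0.30  # prior estimate that the skill is known
for response in [True, False, True, True]:
    p = bkt_update(p, response)
    print(f"observed {'correct' if response else 'incorrect'}: P(know) = {p:.2f}")
```

The point of the sketch is the shape of the reasoning, not the particular numbers: a conceptual model (the skill), observations (the responses), and an interpretation scheme (the Bayesian update) working together.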

 

What I am arguing is that assessment is a critical feature of supporting learning. If you want to know more about these and many other issues of assessment, there is a National Research Council report called Knowing What Students Know: The Science and Design of Educational Assessment (Pellegrino, Chudowsky, and Glaser, 2001). In this report we bring together contemporary knowledge from cognitive science and contemporary advances in measurement and psychometrics, and show what they imply for the design and use of assessments, including the role of technology in facilitating the entire process. The details constitute a separate talk, but the point for us today is that knowledge about how people learn is critical in helping us think about multiple issues encompassing curriculum, instruction, and assessment, including problems that we continually encounter in trying to effect better synchrony among C-I-A. HPL also provides a framework for designing enhanced classroom learning environments and for effectively integrating technologies into the teaching and learning process.

 

What I have considered to this point are those aspects of how people learn that focus on research about the nature of learning, knowing, and understanding. A critical question is how to use the ideas about how people learn as principles for the design of powerful learning environments. The HPL report describes a framework for designing and evaluating the effectiveness of learning environments using four overlapping and intersecting components, called the learner-centered, knowledge-centered, assessment-centered, and community-centered components. When we consider the knowledge-centered elements of a learning environment, we are looking at the extent to which careful attention has been given to what is being taught. This includes: (1) identifying the central subject matter concepts, (2) whether they are being taught to support learning with understanding, and (3) the extent to which the environment is sensitive to the nature of competence and mastery. This leads to a principle of instruction which should resonate with many of you.

 

Instruction should be organized around meaningful problems with appropriate goals. There are a variety of reasons for doing so. For one thing, it helps individuals overcome the inert knowledge problem described by Whitehead long ago, in which students learn things in the classroom but their knowledge remains inert in the sense that it is not connected to anything meaningful and thus cannot be applied when it is needed. The other thing that meaningful problems with appropriate goals accomplish is to increase motivation for learning and student interest. A challenge is how to create the kinds of problems that permit such learning, problems that permit sustained inquiry and the development of understanding. One of the ways in which information technologies have proven very powerful is in allowing the design and implementation of such problems, particularly those where the learner can exercise some degree of control. Use of multimedia has proven extremely effective in our development of these types of problems in math and science at the middle school level.

 

When we consider the learner-centered elements of an environment, we are looking at whether attention has been paid to what the learner brings to the situation and where the learner is during the course of instruction. Are we building on what students know? This too leads to a principle: instruction must provide scaffolds for solving meaningful problems and supporting learning with understanding. It is not sufficient, and in fact it is sometimes very problematic, to present complex and interesting problems, because learners are often relative novices who have to be helped in dealing with the problem complexity. So, we need to have scaffolds which are consistent with concepts like the zone of proximal development. This is a place where information technologies can help. They can provide a variety of tools to assist the process of scaffolding learning.

 

When we consider the assessment-centered elements, we are looking at frequent opportunities to make students' thinking visible through processes of formative assessment. While we know this is valuable, we also know how difficult it is to do in normal instruction. Thus, the third principle is that instruction should provide opportunities for practice with feedback, revision, and reflection. This is critical for individuals to develop metacognitive skill and understanding, to take control over their own learning, and to develop the metacognitive strategies appropriate to the domain in which they are working. Unfortunately, there is a dilemma here as well: novices need to be assisted through modeling of the appropriate monitoring and self-regulation skills. Fortunately, there are some wonderful examples of technology-based systems that provide such support and that include diagnostic assessment and feedback while students are engaged in complex problem solving.

 

Finally, we have the community-centered elements, which consider the extent to which a community of learners has been established and how it functions. The fourth principle is that the social arrangement of instruction needs to promote collaboration and distributed expertise as well as independent learning. One of the hallmarks of expertise is the fact that knowledge is distributed. How can we recognize such a reality while at the same time ensuring that individuals acquire specific content knowledge? It is not just a matter of having people work in groups, but rather of using methods that allow individuals to make their thinking visible to themselves and to others. Here too there are a variety of technology tools to support this process.

 

What are the implications of the preceding ideas for research on instructional environments? One way to use knowledge about How People Learn is to look at existing instructional materials, practices, and settings, as well as the learner outcomes, and see how well they correspond with the previously mentioned HPL principles. Thus, we can use knowledge of HPL as a lens for analysis and revision. After students in my class on cognition and instruction learned the principles about how people learn, I had them select a unit of instruction in an area that interested them. Their job was to pick it apart and critique it. All of a sudden, they had to face the issue of whether something is good or poor instruction, and why. They had to pull things apart and consider how well the pieces worked together, using learner-centered, knowledge-centered, assessment-centered, and community-centered issues as individual and collective analytic lenses.

 

There are also ways to use the HPL principles as a means for designing new materials, practices and settings and then studying the impact on the learner outcomes. This often involves conducting "design experiments." I know that Janet Kolodner has previously talked about design experiments with this group. Let me give you a quick example of what we mean by such a design experiment. My example comes from VaNTH, an NSF-supported, multi-university bioengineering and biomedical engineering research center involving Vanderbilt, Northwestern, University of Texas at Austin, Harvard, and MIT. The idea behind this center's work is to impact the nature and quality of bioengineering and biomedical engineering education. The VaNTH approach is to combine domain experts in bioengineering and biomedical engineering with learning scientists and educational technologists and see how well this collective expertise can be blended together to design effective learning environments.

 

The approach that is being pursued in VaNTH is applicable to many domains. These are the components in brief. First is execution of a domain analysis. Learning scientists work with bioengineers to develop taxonomies that try to identify the knowledge and competencies necessary for success in several emergent and hybrid fields of bioengineering. For example, when working with someone who is teaching biomechanics, we get the domain experts to articulate the important concepts and skills. This leads to a partial list of topics for introductory biomechanics. Such a list covers basic principles of physics, principles of biological systems, and principles governing how those two areas fit together. Such an analysis poses an interesting challenge because the disciplinary experts typically do not think about their domains in this way. It is a real challenge for them to sort out the elements of the taxonomies first and then say how all the content is related conceptually.

 

On the one hand there is the production of a taxonomy, and on the other the production of a conceptual map. It takes a while for people to realize that the two are not the same. Such an analysis becomes part of the process of determining what it is that we want students to know. Then we apply HPL principles to several aspects of course design. This very naturally leads to conducting design experiments which simultaneously pursue research on instructional processes and student learning outcomes.
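
One way to see why the two are not the same is to think about the structures involved: a taxonomy is a tree of topics and subtopics, while a conceptual map is a graph whose links carry meaning. The brief sketch below illustrates the distinction; the biomechanics fragment in it is hypothetical, invented only for illustration and not drawn from the actual VaNTH taxonomies.

```python
# Hypothetical fragment contrasting a taxonomy with a conceptual map.
# The biomechanics content is invented for illustration only; it is
# not drawn from the actual VaNTH taxonomies.

# A taxonomy is a tree: each topic simply lists its subtopics.
taxonomy = {
    "introductory biomechanics": ["principles of physics",
                                  "principles of biological systems"],
    "principles of physics": ["statics", "dynamics"],
    "principles of biological systems": ["tissue properties"],
}

# A conceptual map is a labeled graph: its links say HOW ideas
# relate, which is exactly the information a taxonomy leaves out.
concept_map = [
    ("statics", "constrains", "joint loading"),
    ("tissue properties", "determine the response to", "joint loading"),
    ("dynamics", "extends", "statics"),
]

for source, relation, target in concept_map:
    print(f"{source} --[{relation}]--> {target}")
```

The taxonomy tells you what the pieces are; the conceptual map tells you how they hang together, which is the harder thing to get experts to articulate.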

 

In summary, VaNTH is coming at bioengineering domains from three different angles. What is the important content in the domain? How can that be connected with HPL principles to design better instruction, and then how can studies be done of whether the students are learning the content in the desired ways?

 

Let me conclude by considering some general issues of linking research to educational practice. Much of the research on learning and teaching has a very weak connection to actual educational practice. Partly that is because the greater part of what we publish is not picked up by teachers, often because they do not have the time to search and translate the research literature into guides for practice. There are, however, some cases where the link is more direct. This typically comes in the form of design experiments which involve collaborations with teachers to change educational practice. The work of VaNTH is one such example. But mostly, research impacts practice indirectly by influencing one of four mediating arenas which then in turn influence actual practice.

 

For example, research often leads to the design of educational materials that incorporate ideas from research. Or research finds its way into the content and design of teacher education programs. Sometimes research impacts policy. An example is educational testing, much of which is still rooted in the behaviorist model I discussed earlier. And finally, research finds its way into the public arena. Sometimes this occurs very badly, as when neuroscience research is popularized and inappropriate implications are drawn for instructional practice. What we need is to build a cumulative knowledge base which serves both research and practice, is rooted in both, and which becomes the common frame of reference for impacting all four mediating arenas.

 

In technology education the same four mediating arenas exist and impact practice. Part of your agenda is building a cumulative knowledge base that supports learning and teaching about technology. It means defining the core knowledge constructs, conducting research on fundamental learning and teaching issues, as well as doing research on current instructional practices. It also means applying HPL to the systematic analysis of your existing educational materials, your teacher education practices, and educational policies influencing technology's role in the P-16 curriculum. A final piece, not to be underestimated, is public understanding of technology as a field, including the extent to which such understanding influences educational practice.

 

My final few comments focus on the idea of situating your research in Pasteur's quadrant. The term comes from the title of a book by Donald Stokes (1997) which provides an analysis of America's science and technology policy, including the model used to guide research at the NSF. The argument Stokes makes is that we often think of research as falling along a one-dimensional scale that ranges from basic to applied. In contrast, Stokes maps research in a two-dimensional space, with research being low to high in terms of its pursuit of general theoretical principles, and low to high in terms of its attempt to solve practical problems. Pasteur's work serves as the prototype for research that operates at the high end of each scale. His work typifies the high-high quadrant. The contrasting quadrants are named after Bohr, whose work was high on theory but low on application, and Edison, whose work was low on theory but high on application. The final quadrant, defined by being low on both scales, remains unnamed. I surmise that this is the area where many doctoral studies belong. Besides, who would like to have a quadrant named after them which implies that one's work has neither practical nor theoretical value?

 

What is the larger point of mentioning Pasteur's quadrant? The point is that much of the work that needs to be pursued in the field of technology education, and in many fields of education, sits squarely in Pasteur's quadrant. It is work that should be high in its contributions to theory building. At the same time the research should contribute to solving practical problems of learning and instruction.

 

References

 

American Association for the Advancement of Science. (1993). Benchmarks for science literacy. New York: Oxford University Press.

American Association for the Advancement of Science. (2001). Atlas of science literacy. Washington, DC: Author.

Bransford, J. D., Brown, A. L., and Cocking, R. R. (Eds.). (1999). How people learn: Brain, mind, experience, and school. Washington, DC: National Academy Press.

Donovan, M. S., Bransford, J. D., and Pellegrino, J. W. (Eds.). (1999). How people learn: Bridging research and practice. Washington, DC: National Academy Press.

Pellegrino, J. W., Chudowsky, N., and Glaser, R. (Eds.). (2001). Knowing what students know: The science and design of educational assessment. Washington, DC: National Academy Press.

Stokes, D. (1997). Pasteur's quadrant: Basic science and technological innovation. Washington, DC: Brookings Institution Press.

Discussion

 

Voice: This question is actually stolen from Ed, who was sitting next to me and just left. What we are wondering is whether any research exists, or what research exists, about developmental issues in learning to understand patterns. Clearly, finding, recognizing, and identifying patterns is at the core of expertise in these fields. As teachers, it seems to me that a lot of what we try to do is help students understand more patterns, but what we do not know, I think, is what the developmental stages are. What is developmentally appropriate in teaching children about patterns? Or is that too general a question?

 

Pellegrino: No. Your question is fine. I think at issue is the fact that there is not a general prescription for teaching children about patterns. The issue is identifying the patterns that are important in a particular area and then, how to help build such a knowledge representation.

 

I think that one problem is that we try to give students the end state, the final pattern that experts see and that shows how everything fits together, rather than something that might be intermediate and not as complex, but still pedagogically appropriate because it is not a distortion. It is a simplification that can be built on to go further. I think another problem is knowing what kids know already and then not overloading their working memory. One of the key issues is realizing that what we want is for individuals to activate patterns from long-term memory. We often mediate this by presenting a lot of material for them to comprehend and hold in working memory, but we have not built up the long-term memory representations to help in this process. So, the answer is to figure out what the patterns are and whether there is a simplified sequence that helps develop that form of knowledge representation. Part of this concerns general principles for knowledge development and overcoming working memory capacity limits. And always you have to deal with the domain-specific content patterns. Sometimes, the externalization of content patterns reduces the working memory load and allows students to begin to develop the proper pattern recognition processes.

 

Voice: One thing that is interesting, and a bit of a problem, is reconciling talk of core understandings and core knowledge with the standards movement. With this plethora of standards and indicators of student performance, certain assumptions are made about the need for and validity of all that material, and it becomes so overwhelming that it is very difficult to begin to develop a curriculum based on that kind of thinking.

 

Pellegrino: I think the problem is not about standards per se. There is a problem with standards when standards become lengthy enumerations of topics rather than attempts to identify the big ideas in certain areas of science or math or history. Examples in science include principles of balance and systems, concepts around which you can build significant understanding and on which you can then hang the facts. The problem comes when the curriculum gets dominated by the facts rather than the conceptual schemes. You can teach about principles of systems and balance very early on, and you can keep elaborating them in the areas of biology, ecology, and so on.

 

Although they are getting better, I think the problem with a lot of the standards is that they do not seriously attempt to identify the core concepts and big ideas and how to substantiate them in curriculum and instruction. If students have the idea of systems and balance, they can understand the body. They can understand ecology. They can understand many aspects of the physical and biological world, and they can also understand issues at the atomic level. If the standards identify a bunch of topics such that curriculum and instruction become focused on topics rather than core concepts, then that is what will become part of students' knowledge structures. In contrast, if you look at what experts know, their knowledge is organized around big ideas and big principles, and everything else hangs off of that.

 

Voice: Concerning the issue of Harvard students not knowing all kinds of things one would assume they would, I trace some of that back to assessment issues. We just published a study on high school physics and its contribution to college physics, and we found that taking a high school physics course has very little impact on how kids do when they get to college physics. One of the biggest findings is that the kids who studied the fewest topics in high school physics did the best when they got to college.

 

Pellegrino: I guess I am not surprised at that and I think you are right. It has to do with what is happening in terms of the kinds of instruction and assessment that are prevalent in high school and college.

 

Voice: The killer piece of the assessment story is that the assessments teachers make up are probably the worst that could be given to these kids, because there are so many contextual cues that kids use to do well on those kinds of tests. Even the standardized tests we have are better than the tests that teachers make up.

 

Pellegrino: Well, in the report on assessment, we talk very much about classroom assessment and the fact that one of its problems is that it frequently mirrors standardized assessment. It operates on the same weak conceptual base about learning and measurement, and yes, it contributes in some pernicious ways to students' lack of understanding. We could do a whole lot better. One of our arguments is that the power of connecting cognitive theory with technology and measurement is realized much more completely at the classroom level than at the large-scale assessment level, and that more emphasis needs to be placed on assessment in the service of learning at the classroom level. Fortunately, there are some wonderful systems which diagnose conceptual understanding. I am familiar with Minstrell's work in physics, which was designed and built from a cognitive analysis of students' states of knowing.

 

Voice: The problem with those diagnostic tests and items is that they do not fit the standard psychometric profiles that were developed in the 1950s. And so, we are not going to see them on standardized tests until the psychometrics change.

 

Pellegrino: Well, it is not just the psychometrics that need to be changed. It is also the constraints that are involved in standardized testing. There are many facets of testing that need to change.

 

Voice: I would like you to comment on the fact that we are now facing the first generation of students who have grown up with the internet and so on. Many of us in the classroom feel that these students are somehow more visual, that their attention spans are shorter, and so forth. Can you comment on how this is affecting student thinking?

 

Pellegrino: I do not know that we have hard evidence suggesting any fundamental change in the way cognition operates in kids today versus the past populations we have been studying. I think if there is a difference, it may be that they are more facile with certain kinds of representations and certain symbolic systems, which is perhaps what you are sensing. And part of it may be the fact that we are used to modes of instruction that are heavily verbal and text based, while they may be more facile at making meaning from other forms of media. It is not that there is some fundamental shift in processing. Think of it in terms of the research on expertise. Students may be more expert at doing certain kinds of inferential processing now than previous cohorts of students. And so, the question is, to what extent have our materials adapted to students' capacities? Perhaps we need to adapt our materials to forms of representation that students are now capable of processing. If we want them to make use of alternate kinds of instructional materials, then we have to help them develop expertise in working within that medium. By the way, it is important to note that many of the instructional texts we use are poorly organized, and they put a heavy load on working memory and inferential processing.