Proceedings of the First AAAS Technology Education Research Conference

The Design Experiment as a Research Methodology for Technology Education

Janet Kolodner
Georgia Institute of Technology

Let me start with a few questions and then try to tell you what I mean by using the design experiment as a research methodology for technology education.

  1. What should students be learning and when?
  2. How can students best learn each of those things?
  3. What are the developmental progressions of their understanding of concepts?
  4. How do those understandings get better over time, and how do their practices get better over time?
  5. What's the relationship between knowing and doing? How can we capture implicit knowledge?
  6. Where do students have difficulties and why?
  7. Where do teachers have difficulties and why?

We have heard about different methodologies. But what we do depends on what questions we are trying to ask. Each requires different methods. So, for example, the question of what students should be learning and when requires going back to the design cognition literature and seeing what it says about designing, then asking what we want students to be learning, then going to the developmental literature and looking at what it is they might be able to learn at certain times in order to build up to the thing you want to get to.

Perhaps it requires going to our engineering faculty and understanding from them what they feel kids are missing when they get to engineering school, and therefore what they need to be learning before they get there. It's a critical question, and it's not qualitative research, and it's not quantitative research. It really requires doing a lot of interviewing and a lot of making sense of the literature that's there, finding holes in it, and perhaps going out and filling those holes as well.

How can students best learn each of these things? Well, how do we go about answering that? The first thing we might have to do is map out the development of each of these practices or concepts that we want them to know, take what we know about educational practices, take what we know about learning in general, concept formation, for example, and make hypotheses. Then take it to the classroom and see how it works. If we want to answer that question well, maybe we need to compare classes that are doing it one way with classes that are doing it another way.

What is the progression, and why is that progression happening?

We want to understand the developmental progressions of students' understanding of certain things. So what do we want to do? Probably a combination of surveys of large groups of students to see the trends, with a pretest, some midway tests, and a posttest, so that we can see the longitudinal picture and how the trends tend to go.

But then maybe we want to do an in-depth analysis of a few students, by means of any one of a number of different methodologies that are available: discourse analysis, verbal protocol analysis, interaction analysis, clinical interviews. Each one lets you focus in a different place. Interaction analysis focuses on the interactions between different agents in a learning situation and how those interactions affect the learning.

Verbal protocol analysis allows you to look at what people are doing and what they are saying as they do it. It helps you draw out, based on what they say, and in addition based on what they do, the things they know at any given time.

How would we figure out how to capture implicit knowledge? We have to have kids draw, do, design, and talk aloud, and look at all of those things. Then we try to figure out what we can pull out of them. We have to do something exploratory, collecting lots of different kinds of data to figure out the combination of assessment strategies we might use to get at the implicit knowledge.

Where do students have difficulties, and why? We might want to interview some students and do ethnographies of classrooms. Where do teachers have difficulties? Again, we might want to do ethnographies of a few teachers, and we might want to interview a whole bunch of them. So in fact, the kinds of methodologies we want to use depend very much on the questions.

The big issue for me with respect to putting the research agenda together is whether you are going to find out these things in context or in the lab. And, luckily, the technology education community does not have psychology's history of finding out everything in the lab. So in fact, this community is free to come up with methodologies whereby we can find things out in context. The very best of those that I know about is something called the design experiment. The best article on design experiments is in The Journal of the Learning Sciences (Brown, 1992).

Design Experiments

A design experiment bases research in classrooms. Basically, what happens in a design experiment is that you engineer the environment to promote learning. You take what you know about learning, theories of learning, and what you know about practice, and you put those together to figure out what needs to go on in the classroom in order for learning to happen. You make many predictions about what is going to happen as a result of the different things that are going on, and then you answer questions within that environment.

Presumably you engineer the environment so that you are going to find out the things you want to learn. Design experiments allow you to do research on learning and, at the same time, to test, analyze, and refine the learning environment, that is, the curriculum, the set of activities kids do, and the materials you use. You get to do your formative analysis of those at the same time that you are learning about learning. And in fact, it could be a very nice model for disseminating materials.

Think of your classroom environment. Assume you are writing the units that you want to happen in the classroom. You are also teaching the teachers how to do it. You have the materials that you create for the teachers and for the students, the books and the activities they are going to do. You engineer the environment based on all of that. Then, within that environment, you study the learning and you see how well your materials are working.

Now, it is really hard. I have more hours going into evaluation right now in my project than I do into the learning materials, the books, and everything else.

There are some interesting things about design experiments. Sometimes you are comparing things across classes. We need to worry about who our audience is as well, and we need to be able to tell people that our kids who are now learning by design are actually learning science, and learning it better than kids in normal classrooms. So in fact, we do make comparisons to other classrooms.

But in gathering explanatory information, that is, understanding why something is happening the way it's happening, you do not have controls. What you need to do is gather corroborating evidence at the same time. This evidence comes from interviews with kids and teachers. It also comes from the design diaries that students keep as they work on their projects, from the products they come up with in the end, and from the tests they take.

Reference

Brown, A. (1992). Design experiments: Theoretical and methodological challenges in creating complex interventions in classroom settings. The Journal of the Learning Sciences, 2(2), 141-178.