Proceedings of the Second AAAS Technology Education Research Conference

Assessment in Technology Education: What, Why, and How?

 

Malcolm Welch

Queen's University

 

Introduction

 

Technology education is the school subject in which students learn to design and make products that are useful both to themselves and to other people. It is both intellectual and practical. It introduces students to the powerful process of designing; a process in which new ideas are conceived and taken from the mind's eye into the made world. It requires creativity and problem-solving abilities. It develops hand-eye coordination in the precise use of tools and materials. It fosters the ability to make decisions, plan a course of action and carry it out working as an individual or as a member of a team. It introduces students to the world of technology outside school, a world in which they will need to operate effectively.

 

Now clearly, the assessment of student achievement is a characteristic and significant component of formal instruction in technology education. Completing tests, assignments, projects, and portfolios for evaluation purposes is a typical student activity in the classroom (Anderson, 1999). It is understood that a teacher will take examples of a student's work, mark them, and then combine these marks into a final grade. But how does a teacher translate students' products into marks or grades? This question is not well researched. And which examples of a student's work should be used to determine his or her final mark? What is to be assessed? One's answer to this latter question will reflect what one considers the important learning in technology education. And this seems to me to be a critical issue, one that continues to be debated in the technology education literature. When I think about this issue I am reminded of Young and Wilson's claim that "assessment is a public declaration of what is valued" (2000, p. ii). What learning do we value in technology education? And how can it be assessed?

 

This paper will address, albeit briefly, the following three questions:

  • What is important learning in technology education that needs to be assessed?
  • What key ideas in assessment are important for our discussion?
  • What sorts of evidence are available to teachers, and researchers, to enable assessment of students' work?

 

The paper will continue with a description of the assessment strategies, and some issues arising from them, in three new technology curricula in Ontario. The paper ends with some suggestions for research that might enhance our understanding of assessment, and hence teaching and learning, in technology education.

 

What is important learning in technology education?

Technology education is concerned with developing students' capability. This capability requires students to combine their designing skills with knowledge, skill and understanding in order to design and make products. Kimbell (1997) has defined capability as "that combination of skills, knowledge and motivation that transcends understanding and enables pupils creatively to intervene in the world and 'improve' it" (p. 46). This is quite different from students acquiring a range of separate skills and abilities as achievements in their own right. This is not to deny, however, that capability depends to some extent on the acquisition of appropriate knowledge and skills.

 

This is not a new idea. As long ago as 1970, Project Technology, a UK research project launched in the 1960s, identified the centrality of the process of design and development in technology education. In a 1970 report it was noted that "technology is . . . an activity, not a readily definable area of knowledge" (Schools Council, 1970, p. 4). A second national research initiative, the Design and Craft Education Project at Keele University under the direction of Eggleston, expressed the view that if the process of designing is to be the core concern, then the content must be a secondary matter.

 

When we think of a capable student in technology education we envision one who is able to reflect while taking action and who can act on his or her reflections. As they demonstrate their capability, students will draw on a developing repertoire of skills and knowledge that includes designing skills, making skills, and knowledge and understanding of materials and components, of structures, and of existing products.

 

As an end note to this section of the paper, I want to remind us of what Roberts, at Loughborough University, wrote: "the purpose of teaching children to design is not to bring about change in the made world, but change in the student's cognitive skills. Designing artifacts is a vehicle for the educative ends of engaging students in modeling ideas" (Roberts, 1994, p. 172).

 

Some key ideas in assessment

 

In this second part of the paper I want to respond to the question: Why do we assess students' work?

 

In an influential document in Canada entitled The Principles for Fair Student Assessment Practices for Education in Canada (Joint Advisory Committee, 1993), assessment is broadly defined as:

The process of collecting and interpreting information that can be used (i) to inform students, and their parents . . . about the progress they are making toward attaining the knowledge, skills, attitudes and behaviors to be learned or acquired, and (ii) to inform the various personnel who make educational decisions . . . about students. (p. 3)

 

In other words, assessment can be used to support learning and to report learning. My focus is on the first of these: supporting student learning.

 

Shulha (1999) reminded us that whether the purpose is to diagnose learning, provide feedback to students, make decisions about next steps, or report to parents and other stakeholders in education, it is the assessment of student growth and achievement that is central to the ongoing activity of teacher practice.

 

The Assessment Reform Group (1999), a task group of the British Educational Research Association, noted that they "have become more and more convinced of the crucial link between assessment, as carried out in classrooms, and learning and teaching" (p. 1).

 

I am reiterating a principle that we know well, but which is sometimes overlooked in the scramble toward being accountable to many stakeholders: the central purpose of assessment is to provide feedback to the learner and the teacher and to guide growth (Wilson, 1999a). Assessment by both students and teachers should be used to shape decisions concerning the curriculum and what happens in the classroom.

 

Assessment in technology education

 

Because we are aiming to assess students' capability, assessment in technology education is complex. To assess capability is complex because we are looking for a whole that is more than the sum of its constituent parts, much more than displaying knowledge, or understanding, or manual skills. Capability includes the processes that students experience, as well as the skill and understanding developed and employed.

 

Kimbell (1997) stated that "design and technology activity is so integrative, the approach to the assessment of pupil performance in this area should ideally be holistic" (p. 73). Wilson and Shulha (2001) reported that teachers participating in their research "were adamant that quality assessment requires forming a holistic understanding of students" (p. 8). Kimbell (1997) has identified and described in great detail the difficulties of atomized assessment. He wrote "the assumption that it is possible to use small, clear discriminators as a means for assessment in design and technology is a snare and a delusion" (p. 37). According to Kimbell, teachers are at their most reliable when assessing holism and at their worst when assessing the bits.

 

Let me turn now to the question of evidence. What evidence is available to the teacher that would enable capability to be assessed? There are two types of evidence available to any teacher: transitory evidence and permanent evidence. Transitory evidence may be collected through teacher observation of students, as well as through teacher interaction with students. Transitory evidence is often left as a gestalt impression of the student inside the mind's eye of the teacher. So it is only available to the teacher, but the impression could be open to scrutiny if some attempt at record keeping is made.

 

The centrality of this transitory evidence in teachers' assessment practices is not to be underestimated. Wilson (1999b) has shown that the most important evidence that teachers collect for assessing students' growth and achievement comes from observations. Bachor and Anderson (1994) have shown that teachers report the phenomenon of gut feeling, in which the teacher would somehow develop a global estimate of the performance or achievement level of students in the class and all assessment results for an individual student would be related to this global assessment.

 

In a four-year research project conducted by Wilson and his colleagues at Queen's University (Wilson, 1999b; Wilson & Shulha, 2001), objective evidence about a student's performance did not in and of itself determine that student's grade. Novice teachers allowed their expectations about how a student might do to affect their judgements about performance. For example, if a student were showing improvement over the term, this would be rewarded with higher grades. While teachers' support for assessment based on observations, done spontaneously and without records, lacks reliability and validity in psychometric terms, it fits well with their orientation toward a growth model for assessment.

 

Permanent evidence may be collected (a) about the process of designing and making, and (b) about the final product submitted by the student. It is the assessment of this permanent evidence that I will discuss in the next section of the paper.

 

There is an emerging consensus that the most appropriate form of permanent evidence for the assessment of a student's capability with the process of technology is through the use of a design portfolio. So let me now turn to a brief discussion of the use of portfolios in technology education.

 

Evidence from the design and make activity: The designer's portfolio

 

There is no doubt that designing is an intensely personal business. A designer's drawings from preliminary doodles to finished renderings and accurate plans are in some ways as intimate as an artist's sketch book. This is particularly so for the early work, where the ideas are emerging and developing into an as yet incomplete and uncertain design.

 

Evidence from my work with both teachers and teacher candidates suggests that a designer's portfolio will provide evidence of the student's struggle to bring ideas in the mind to the reality of a product. It will provide evidence of the intellectual and practical endeavors that turn ideas into products that can be used and evaluated. The designer's portfolio can tell a clear, internally consistent story of the decisions the student made as they were designing and making a product.

 

In my work with teacher candidates, I require that the portfolio include:

•  A title page

•  A description of the context for the designing and making

•  A description of the problem

•  A design brief

•  A description of the user

•  Evidence of research that investigates existing products

•  A list of specifications for the product to be designed

•  Evidence of the generation of ideas using 2D and 3D modeling techniques

•  Evidence of the development of ideas using 2D and 3D modeling techniques

•  Critical reflection on those ideas

•  An appropriate use of communication techniques

•  Evidence of a plan for making the product

•  A description of how the product will be tested

•  Evidence of testing

•  Results of testing and reflection on those results

 

The aim of this structure is not to produce a uniform work across the class: quite the opposite. The structure provided allows students to concentrate on developing their own ideas to the full, not in isolation but as part of a class in which there is a culture of sharing and cooperation to everyone's benefit. The individual signature of each student will be developed and revealed; every student gains from the sharing of ideas and working with a partner. The worth of the work in the designer's portfolio will be recognized and valued. The teacher finds the situation manageable and sees students making progress. The contents of a designer's portfolio will provide insights into the mind of the student.

 

My experience indicates that a successful portfolio will only be achieved by a student who has ownership of his or her portfolio. There is some research to support this idea. For example, Notman (2000) showed that his use of portfolio assessment at the high school level, combined with student-led conferencing, provided his learners "with a high degree of ownership and control [that], in turn, [had] a positive effect on their learning, motivation, and behavior" (p. 2).

 

For the portfolio to provide an accurate record of the student's capability, there needs to be a great deal of teacher/student interaction. Every such interaction provides an opportunity for the teacher to glean transitory evidence of the student's progress and capability.

 

Barlex (1995) prefers the term Designer's Notebook rather than Designer's Portfolio. As Barlex describes it, a Designer's Notebook is a working design diary in which the student can record all that is necessary to tell the story of their designing and making as it happens. Note the last three words: "as it happens." In his words, "there is no room in this book for neat nonsense or retrospective titivation" (personal communication).

 

In one of our conversations, Barlex described how in England there are two schools of thought about the most appropriate form for a Designer's Notebook. Givens, a lecturer in technology education at Exeter University, is very keen on it being a book, with a hard cover and alternating blank and lined pages. According to Givens, the key requirement is that the pages stay in the right order and nothing can get lost. There is a complete record. Parker, a technology advisor, takes a different view. Parker advocates that a Designer's Notebook should be like a largish Filofax, containing all sorts of different papers: plain, lined, colored, and graph. This makes it possible to stick in all sorts of samples, either directly onto a page or in a plastic pocket. There is a need for technology educators to discuss issues around the form and use of a designer's portfolio or notebook.

 

Let me now turn to the issue of how a teacher might be expected to assign a grade to a portfolio. To begin this discussion, I will refer to three technology education curriculum documents in Ontario, each of which provides teachers with a rubric. I will describe these rubrics, point out some difficulties, suggest some ways forward, and identify some of the ongoing dilemmas.

 

There are three new technology curricula in Ontario:

•  Grade 1-8 Science and Technology

•  Grade 9-10 Technological Education

•  Grade 11-12 Technological Education

 

The approach to assessment is essentially identical in all three. In each curriculum document a rubric is provided that describes four levels of achievement on four criteria. In the Grade 1-8 Science and Technology curriculum, these criteria are as follows:

 

•  Understanding of basic concepts

•  Inquiry and design skills

•  Communication of required knowledge

•  Relating science and technology to each other and to the world outside school

 

Each level in the rubric contains a brief description of degrees of achievement on which teachers will base their assessment of students' work. Reduced to its simplest form, the rubric reads as follows.

 

 

| | Level 1 | Level 2 | Level 3 | Level 4 |
| --- | --- | --- | --- | --- |
| Understanding basic concepts | Shows understanding of few basic concepts | Shows understanding of some basic concepts | Shows understanding of most basic concepts | Shows understanding of all basic concepts |
| Inquiry & design skills | Applies few of the required skills | Applies some of the required skills | Applies most of the required skills | Applies all of the required skills |
| Communication of knowledge | Communicates with little clarity | Communicates with some clarity | Communicates with most clarity | Consistently communicates with clarity |
| Relating S&T to each other & world | Shows little understanding of connections between S&T | Shows some understanding of connections between S&T | Shows understanding of connections between S&T in familiar contexts | Shows understanding of connections between S&T in both familiar & unfamiliar contexts |

Notice the language of the rubric. The words few, little, some, most, and all are indicative of the quantitative nature of the rubric and how it is used. Applying this particular rubric to students' work would result in a quantitative analysis of the work, enabling teachers to make some inferences about each student's current level of competency. And while this snapshot of student learning does provide information about the student's current performance, the nature of the descriptors does not permit inferences to be made that would guide decisions about how to improve. The rubric has little use as a formative aid for instructional and learning purposes (Young & Wilson, 2000). These rubrics reflect a behaviorist approach to learning: more is equivalent to better.

 

Nor does the rubric allow a teacher to report progression in terms of capability. Take, for example, a Grade 1 student who is assessed at Level 3 (knows "most" of the material). Now picture this same student at Grade 4. He or she is once again assessed at Level 3 (again, knows "most" of the material). But Grade 1 "most" cannot be the same as a Grade 4 "most." The demands of the assigned tasks (Design and Make Activities) have increased, as has the amount and complexity of the subject matter. But none of this is evident in the "grade" assigned (Level 3). This could continue until the student completes Grade 8. In other words, "levelness" is grade dependent. It does not reflect progression in capability.
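To make this limitation concrete, the sketch below is a minimal illustration in Python; the data structures and names are my own invention, not part of the Ontario documents. It shows what a level-based record actually captures: one quantitative descriptor per criterion, and nothing about the demands of the Design and Make Activity at that grade. A Grade 1 student and a Grade 4 student, both judged at Level 3, produce indistinguishable profiles.

```python
# A minimal, invented sketch of what a level-based rubric records:
# one of four quantitative descriptors per criterion.
LEVEL_DESCRIPTORS = {
    "Understanding basic concepts": {
        1: "few basic concepts", 2: "some basic concepts",
        3: "most basic concepts", 4: "all basic concepts",
    },
    "Inquiry & design skills": {
        1: "few of the required skills", 2: "some of the required skills",
        3: "most of the required skills", 4: "all of the required skills",
    },
}

def record_assessment(grade: int, levels: dict) -> dict:
    """Store what the rubric captures: a level (1-4) per criterion.

    Nothing about the demands of the task at this grade is carried in the
    record, which is the source of the "levelness is grade dependent"
    problem described above.
    """
    return {"grade": grade, "levels": levels}

# Both students are judged at Level 3 on every criterion; only the grade
# label differs, so the record cannot show progression in capability.
grade1_record = record_assessment(1, {c: 3 for c in LEVEL_DESCRIPTORS})
grade4_record = record_assessment(4, {c: 3 for c in LEVEL_DESCRIPTORS})
print(grade1_record["levels"] == grade4_record["levels"])  # True
```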

 

The Elementary Science and Technology (EST) approach

We are working to overcome the problems I have just described. As part of a much larger project to provide professional development for elementary teachers as they write curriculum materials to implement the curriculum, I am working with four faculty colleagues and 17 teachers to rewrite the rubric. We will move away from a behaviorist approach and toward an approach that reflects the developmental nature of learning.

 

We have begun to develop what we call "an assessment toolkit." It contains six steps, but essentially involves adding one more column to the rubric, which we are calling Key features of designing and making.

| Areas for assessment | Key features of designing & making | Level 1 | Level 2 | Level 3 | Level 4 |
| --- | --- | --- | --- | --- | --- |
| Understanding of basic concepts | Technical matters | | | | |
| Design skills | User needs | | | | |
| | Generating, Developing, & Communicating design ideas | | | | |
| | Making | | | | |
| | Safety | | | | |
| Communication of knowledge | 2D & 3D modeling | | | | |
| Relating S&T to each other & world | Evaluation of product | | | | |

We are writing descriptors for each key feature at each of the four levels that will reflect progression in capability in technology education as we understand it.
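For illustration only, here is a small sketch in Python of the structure the added column provides; the toolkit itself is a paper document, not software, so this is simply one way of picturing it. Each area for assessment is unpacked into the key features of designing and making for which level descriptors are being written.

```python
# The mapping added by the EST assessment toolkit: each area for
# assessment is unpacked into key features of designing and making
# (names taken from the table above).
KEY_FEATURES = {
    "Understanding of basic concepts": ["Technical matters"],
    "Design skills": [
        "User needs",
        "Generating, Developing, & Communicating design ideas",
        "Making",
        "Safety",
    ],
    "Communication of knowledge": ["2D & 3D modeling"],
    "Relating S&T to each other & world": ["Evaluation of product"],
}

# Descriptors are written per key feature and per level (1-4), so a
# completed assessment becomes a profile of capability across key
# features rather than a single quantitative judgement per area.
descriptors = {
    (feature, level): "descriptor to be written by the project team"
    for features in KEY_FEATURES.values()
    for feature in features
    for level in range(1, 5)
}
```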

 

Establishing a research agenda to address issues of assessment in technology education

 

In this section of the paper, I want to address some of the research issues that derive from what I have said so far about technology education and about assessment.

 

There appears to be very little research that focuses on assessment in technology education. In a count conducted for this paper, the last five volumes (that's five years) of the International Journal of Technology and Design Education contain just four articles that address the topic. Clearly there is much to be done!

 

When Jenkins wrote very recently about research in science education, he could just as easily have been writing about technology education. Jenkins suggested that "research in science education . . . is concerned with that which critically informs . . . judgments and decisions in order to improve action in the field of . . . education" (Jenkins, 2001, p. 11). If one substitutes the term "technology" for "science," I think we have an important objective for our work: improving practice.

 

I think it important that as we formulate a program of research about assessment in technology education, we remain aware of some common research assumptions. First, for example, in the educational literature on student assessment, there is the tradition of providing teachers with "best practice" as defined by those with expertise in measurement and evaluation. This conceptual model assumes that if classroom assessment practice is logically derived from psychometric practice, then better judgements and decisions for students will result, thereby enhancing student learning. There is a great deal of literature written from this Mode 1 view of the world. However, research by Philipp and his colleagues (1994) has demonstrated the pitfalls of this approach.

 

Second, a word about research design. Quite recently Haynie (1998) called for more case studies and more experimental research in technology education. I suggest we adopt what Phillips (1992) calls a non-foundationalist stance. By this he means that a study should not be anchored in any one research paradigm. We must allow the research problem and not a particular research paradigm to give form to the questions, designs and strategies for data management and analysis. This approach does not ask whether a study should be quantitative or qualitative. Instead, we should consider the information we would need for a more informed discussion of our problems. The priority must be to gather information that will help us understand assessment in technology education.

 

Earlier in this paper, I discussed the power of portfolios as part of a student's demonstration of their capability. In a series of studies Shulha (1999), Shulha, Wilson and Anderson (1999), and Wilson (1999b) demonstrated that using controlled portfolios is a powerful method of investigating complex phenomena such as capability. Structured portfolios, which are an ecologically valid research instrument, may be a promising research tool for studying both capability and assessment of student achievement in technology education.

 

I want to suggest the need to conduct research in classrooms to identify teachers' assessment practices. Research into teachers' assessment practices would need to uncover how participants thought about their students, what shaped these perceptions, and how this thinking led to decisions about achievement. And if our research results are to help teachers move their assessment practices from administration-driven to instruction-driven, then we must examine how assessment can best be integrated with teaching and learning in the technology classroom.

 

I also think that teachers must increasingly be seen as co-principals in undertaking research into teaching, learning and assessment in technology education. This was suggested in the presentation earlier today by Barlex, when he quoted Hargreaves (1998), "knowledge creation and dissemination in education must now move into Mode 2: teacher-centered knowledge creation through partnerships."

 

The National Research Council (1999) also called for more collaborative forms of research. In contrast to a traditional, linear progression from research to development and dissemination, the authors of this document argue for investing in research projects that would advance fundamental understandings at the same time that they would work to solve practical problems in real-world settings, i.e., classrooms. To consider how particular classroom assessment strategies might be used to improve achievement, teams of teachers in schools might collaborate on projects aimed at investigating the efficacy of new and existing assessment practices.

 

I also think that what we learn about assessment in schools should have a direct bearing on teacher education. Brookhart (1999) argued that "classroom assessment must be taught to aspiring teachers in relation to both instruction and classroom management, not simply as decontextualized application of measurement principles" (p. 13).

 

I will end with another quotation from Kimbell (1997): "if we are seriously concerned with raising standards in technology, then it is the understanding of teachers, the experience of teachers and the practice of teachers that we should be supporting" (p. 102). I would suggest that a major focus of an extended research program should be to investigate the understanding, the experience and the practice of teachers' assessment strategies.

 

References

 

Anderson, J. O. (1999). Modeling the development of student assessment. The Alberta Journal of Educational Research, 45, 278-287.

 

Assessment Reform Group. (1999). Assessment for learning: Beyond the black box. Cambridge: University of Cambridge School of Education.

 

Bachor, D. G., & Anderson, J. O. (1994). Perspectives on assessment practices in the elementary classroom in British Columbia, Canada. Assessment in Education: Principles, Policy and Practice, 1, 65-95.

 

Barlex, D. (1995). Nuffield design and technology: Teacher's guide. Harlow, UK: Longman.

 

Barlex, D. (2001, April). Possibilities for research in technology education. Paper presented at the Second American Association for the Advancement of Science Research in Technology Education conference, Washington, DC.

 

Bogdan, R. C., & Biklen, S. K. (1982). Qualitative research for education: An introduction to theory and methods. Boston, MA: Allyn & Bacon.

 

Brookhart, S. M. (1999). Teaching about communicating assessment results and grading. Educational Measurement: Issues and Practice, 18(1), 5-13.

 

Dewey, J. (1957). Reconstruction in philosophy. Boston, MA: Beacon.

 

Hargreaves, D. (1998). Creative professionalism: The role of teachers in the knowledge society. London: Demos.

 

Haynie, W. J. (1998). Experimental research in technology education: Where is it? Journal of Technology Education, 9(2). http://scholar.lib.vt.edu/ejournals/JTE/v9n2/haynie.html

 

Herman, J. L., Aschbacher, P. R., & Winters, L. (1992). A practical guide to alternative assessment. Alexandria, VA: Association for Supervision and Curriculum Development.

 

Jenkins, E. (2001). Science education as a field of research. Canadian Journal of Science, Mathematics and Technology Education, 1(1), 9-21.

 

Joint Advisory Committee. (1993). Principles for fair student assessment practices for education in Canada. Edmonton, AB: Centre for Research in Applied Measurement and Evaluation.

 

Kimbell, R. (1997). Assessing technology: International trends in curriculum and assessment. Buckingham, UK: Open University.

 

Ministry of Education and Training. (1998). The Ontario Curriculum Grades 1-8: Science and Technology. Toronto: Queen's Printer for Ontario.

 

National Research Council. (1999). Improving student learning: A strategic plan for education research and its utilization. Washington, DC: National Academy Press.

 

Notman, D. (2000, May). The effects of portfolio assessment and student-led conferences on ownership and control. Paper presented at the annual meeting of the Canadian Society for Studies in Education, Edmonton, AB.

 

Phillips, D. C. (1992). The social scientist's bestiary: A guide to fabled threats to, and defenses of, naturalistic social science. New York: Pergamon.

 

Philipp, R. A., Flores, A., Sowder, J. T., & Schappelle, B. P. (1994). Conceptions and practices of extraordinary mathematics teachers. Journal of Mathematical Behavior, 13, 155-180.

 

Roberts, P. (1994). The place of design in technology education. In D. Layton (Ed.), Innovations in science and technology education (Vol. 5, pp. 171-179). Paris: UNESCO.

 

Schools Council. (1970). The next two years. Nottingham, UK: National Centre for School Technology.

 

Shulha, L. M. (1999). Understanding novice teachers' thinking about student assessment. The Alberta Journal of Educational Research, 45, 288-303.

 

Shulha, L. M., Wilson, R. J., & Anderson, J. O. (1999). Investigating teachers' assessment practices: Exploratory non-foundationalist, mixed-method research. The Alberta Journal of Educational Research, 45, 304-313.

 

Wilson, R. J. (1999a, May). Assessment as an integrative activity. Paper presented at the annual meeting of the Canadian Society for Studies in Education, Montreal.

 

Wilson, R. J. (1999b). Factors affecting the assessment of student achievement. The Alberta Journal of Educational Research, 45, 267-277.

 

Wilson, R. J., & Shulha, L. M. (2001, April). Effects of method on conclusions about teachers' assessment practices. Paper presented at the annual meeting of the American Educational Research Association, Seattle, WA.

 

Young, S. F., & Wilson, R. J. (2000). Assessment & learning: The ICE model. Winnipeg, MB: Portage & Main.