Education Development Center, Inc.
Center for Children and Technology

Toward a Design Science of Education

CTE Technical Report Issue No. 1
January 1990

Prepared by:
Alan Collins
Bolt Beranek and Newman

Many technologies have been introduced in classrooms all over the world, but these innovations have provided remarkably little systematic knowledge or accumulated wisdom to guide the development of future innovations. Bolt Beranek and Newman (BBN) is part of the new Center for Technology in Education located at Bank Street College of Education in New York City. The Center's goals are to synthesize research on technological innovations; to develop a methodology for carrying out design experiments; to study different ways of using technology in classrooms and schools; and to begin to construct a systematic science of how to design educational environments so that new technologies can be introduced successfully.

Historically, some of the best minds in the world have addressed themselves to education; for example, Plato, Rousseau, Dewey, Bruner, and Illich. But they addressed education essentially as theorists, even when they tried to design schools or curricula to implement their ideas. Today, some of the best minds in the world are addressing themselves to education as experimentalists. Their goal is to compare different designs to see what affects what. Technology provides us with powerful tools to try out different designs so that, instead of theories of education, we can begin to develop a science of education. However, it cannot be an analytic science, such as physics or psychology, but rather a design science, such as aeronautics or artificial intelligence. For example, in aeronautics the goal is to elucidate how different designs contribute to lift, drag, and maneuverability. Similarly, a design science of education must determine how different designs of learning environments contribute to learning, cooperation, and motivation.

Unfortunately, major problems with current design experiments prevent our gaining much information from them. For the most part, these experiments are carried out by the designers of a technological innovation who have a vested interest in seeing that it works. Typically, they look only for significant effects (which can be very small) and test only one design, rather than trying to compare the size of effects for different designs or innovations. Furthermore, such experiments are so variable in their design and implementation that it is difficult to draw conclusions about the design process by comparing different experiments. Finally, they are carried out without any underlying theory; thus, the results are largely uninterpretable with respect to constructing a design theory of technological innovation in education. Although we plan to look at past experiments in detail, we believe that the conclusions to be drawn from them are very limited.

Our goals, then, will be (a) to construct a more systematic methodology for conducting design experiments, and (b) to develop a design theory that can guide implementation of future innovations. We anticipate a methodology that will involve working with teachers as co-investigators to compare multiple innovations (media and software) at one site and with no vested interest in the outcome. The design theory we envision will identify all the variables that affect the success or failure of any innovation, and will specify critical values and combinations of values with respect to these variables.

Methodology for Design Experiments

While we will describe our initial ideas about a methodology for carrying out design experiments, we expect to make refinements during the first years of the project. First, there is a huge space of possible designs that might be tried out in schools. Thus, a major goal of such a methodology must be to explore systematically the space of designs in relatively few experiments in order to extrapolate into the regions of the space that cannot be tested directly. Second, a large number of constraints, which derive from the school setting and the capabilities of administrators, teachers, and students to deal with new technologies, limit our ability to try out different designs. Therefore, the goal must be to maximize the information gained within the limitations of any particular experiment.

There are several desiderata that we think are critical in developing such a methodology:

1. Teachers as co-investigators. To be successful, the experiments must work within the constraints defined by the teachers and must address their questions. Hence, it is critical that teachers take on the role of co-investigators, helping to formulate the questions to be addressed and the designs to be tested, making refinements in the designs as the experiment progresses, evaluating the effects of the different aspects of the experiment, and reporting the results of the experiment to other teachers and researchers.

2. Comparison of multiple innovations. In order to assess the relative effects of different innovations, it is important to try out multiple innovations within and across sites. Within a site, it is possible to hold constant such factors as the teachers, the students, and the school culture in order to make comparisons. Across sites, it is possible to vary these same factors systematically.

3. Objective evaluation. In order to develop a design theory, we want to break the pattern of developers' testing their own innovations. In order to address questions of how well different innovations work and under what circumstances, we need to view these innovations objectively. While we will be testing some of our own technologies, we will do so in situations where they can be compared with other technologies, and where the developer is not included in the design team for that site.

4. Testing of technologies most likely to succeed first. In school settings, tool-based technologies such as word processors or graphing packages are most likely to have wide application and be used most successfully because they do not require the restructuring of the school milieu.

5. Multiple expertise in design. In any design of a classroom (or larger unit), a vast number of variables may affect the outcome. The goal should be to optimize these variables within the constraints of the setting. Accomplishing this requires an interdisciplinary team of experts: teachers, designers, technologists, anthropologists, and psychologists.

6. Systematic variation within sites. In order to test hypotheses about particular design questions, it is best to make specific comparisons within a site. In this way, most variables can be held constant while addressing such questions as the structure of the classroom, the role of the teacher, or the activities using a particular technology. The teacher(s) must be interested but neutral about questions addressed, and confident that they can execute the two variations successfully.

7. Flexible design revision. It may often happen early in the school year that the teachers or researchers feel that a particular design is not working. It is important to analyze the reasons for failure and to take steps to fix them. It is critical to document the nature of the failures and the attempted revisions, as well as the overall results of the experiment, because this information informs the path to success.

8. Multiple evaluation of success or failure. Success or failure of an innovation cannot be evaluated simply in terms of how much students learn on some criterion measure. A number of questions must be addressed, such as: How sustainable is the design after the researchers leave? How easy is it to realize the design in practice? How much does the design emphasize reasoning as opposed to rote learning? How does the design affect the attitudes and motivation of teachers and students? How much does the design encourage students to help other students learn? To evaluate these variables, it is necessary to use a variety of evaluation techniques, including standardized pre- and post-tests and ongoing evaluations of the classroom milieu. For these latter evaluations, we anticipate using both observation and interview techniques and, perhaps, primary trait scoring based on videotapes of the classrooms. Issues such as sustainability require follow-up studies to see what happens to the design in later years.

A major goal of the Center, then, will be to develop a specific methodology incorporating these desiderata (and others discovered in the course of our research). The design experiment described below gives an idea of the kind of design we think might be viable in sites we have worked with in the past. It is not final because the teachers and researchers must arrive at a final design within the constraints of a particular setting. But it concretizes the abstract principles described above.

What are Design Experiments?

The best way to describe design experiments is to give an example of an experiment we may carry out. We have been thinking about developing a technology-based unit on the relative motion of the earth and sun and the seasons; that is, why it is warmer in the summer and colder in the winter. Several of us have been working with fourth grade classrooms in Cambridge (with large numbers of minority children) observing teachers, developing materials, and interviewing students about the seasons. Philip Sadler, who interviewed 24 graduating seniors at Harvard, found that only one understood the causes of the seasons. Clearly, this is a topic that students are failing to learn in school, although the seasons are taught in most K-12 curricula.

We propose to consider five technologies in developing a unit about the seasons: (1) The television series The Voyage of the Mimi 2, developed at Bank Street College, has several programs devoted to astronomy, in particular the relative motions of the earth and sun. (2) Associated with The Voyage of the Mimi series, Bank Street has developed a series of computer programs that allow students to explore different views of the earth-sun relationship (e.g., an orbital view with earth rotation and day/night cycles; a view out of a window in New York showing the sun at different times of the year; a dome-of-the-sky view showing how the sun moves across the sky relative to New York and Cape Town at different times of year; and a view of projected shadows at different times of the year). (3) The ELASTIC program, developed at BBN, teaches students how to construct tables of data and to graph them in different ways. (4) A computer network, such as Earthlab or Kidnet, can encourage students to communicate with other students about their findings. (5) Word processors and drawing programs allow students to produce documents about their findings.

Our first step would be to observe a number of teachers and to choose two who are interested in using technology to teach students about the seasons. The teachers must be comparably effective, but must have different teaching styles; for example, one might work with activity centers in the classroom and the other with the entire class. Ideally, the teachers should have comparable populations of students.

We plan to devise a unit that optimally integrates the available technology. For example, we might have students watch The Voyage of the Mimi episodes and then work with the various computer views. Students might then be encouraged to collect data on the sun's position as seen at different times from their school and put these data in ELASTIC. They could then compare their data with those in the window-view program from Bank Street, and perhaps with students in another location. Finally, they might produce books explaining their observations and understanding of the movements of the earth and sun and the causes of the seasons.

Assuming that both teachers teach a number of classes, we would ask each to teach half her classes using the design we have developed. In the other classes, we would help the teacher design her own unit on the seasons using these various technologies, one that is carefully crafted to fit with her normal teaching style.

In evaluating the results of the experiment, we would look at a number of different aspects.

One of the purposes of the study is to determine the form a design theory should take: Can it try to characterize the most effective designs in terms of activities and technologies, or must the theory differentiate designs for different teaching styles? Similar issues are raised in the next section.

While the grain size of this experiment is at the individual classroom level, design experiments should also be done at the grade, school, and district levels. Such larger experiments would permit variation in factors such as cooperation between teachers, length of class period, peer tutoring across grade levels, and relations of community to school, which cannot be viably altered at the classroom level.

A Design Theory for Educational Innovations

Our long-term goal in studying various technological innovations in schools and in carrying out a series of design experiments is to construct a design theory for technology innovation. This design theory will attempt to specify all the variables that affect the success or failure of different designs. Furthermore, it will attempt to specify what values on these variables maximize chances for success, and how different variables interact in creating successful designs. Crafting such a design theory for technological innovation in education has not been attempted heretofore, but we think it is the most critical role for a national center for educational technology.

The first phase of our work in constructing such a theory will be to identify all the relevant variables: dependent variables, by which we measure the success or failure of any innovation; and independent variables, which are the variables we control in creating any design. Identifying the relevant variables will be a major goal of our analysis of different innovations that have been attempted to date. Because they have been so varied in their designs, they should have uncovered most of the critical variables needed for a design theory.

Some of the dependent variables we think are important are listed above in the section on multiple evaluations. The independent variables cover a wide range that includes the technologies, software, and associated activities; the number of machines and their configuration in the classroom; the roles that students and teachers play in working with the technologies; the maintenance and other kinds of support for teachers using technology; the amount of planning time and preparation for using the technologies; and the organization of time and activities in the class period. While neither the list of independent variables nor the list of dependent variables is complete, they do give a flavor of the space over which a design theory will be constructed.

The second phase of our work will specify how the independent variables interact to produce success or failure with respect to the dependent variables. A vast array of issues surrounds the interaction of variables.

Tables 1 and 2, which are based on interviews with Denis Newman and Andee Rubin, illustrate our first attempts to evolve a design theory. The interviews sought to determine what the respondents thought were the critical factors affecting the success of technology in classrooms. What emerged was a set of principles, each of which tacitly specified three things: (a) the scope of the principle (e.g., network-based software, computer technology); (b) the dependent variable affected by the factor (e.g., adoption, continued use, learning); and (c) the independent variable or factor itself (e.g., student-computer ratio, restart capability). Andee Rubin, who had done some prior analysis, began to group together factors that affect a particular variable, such as adoption. This kind of analysis leads toward a systems-dynamics model, such as the models used in econometrics or climatology.
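To make the systems-dynamics framing concrete, the sketch below simulates how a few of the interview factors might jointly drive adoption over a school year. It is purely illustrative and is not from the report: the variable names echo the factors in Tables 1 and 2, but the weights, the update rule, and the thresholds are our own assumptions.

```python
# A toy systems-dynamics model of classroom technology adoption.
# Factor names follow Tables 1 and 2; the weights and the update rule
# are hypothetical, chosen only to illustrate the modeling style.

def simulate(factors, months=9):
    """Return the adoption level (0-1) at the end of each month of a school year."""
    adoption = 0.0
    history = []
    # Adoption pressure: the four adoption variables from Table 2, equally weighted.
    pressure = (factors["teacher_interest"]
                + factors["subject_matter_payoff"]
                + factors["career_enhancement"]
                + factors["interest_in_experimentation"]) / 4
    for _ in range(months):
        # Continued-use feedback: student enthusiasm and visible learning (Table 2),
        # discounted when the student-computer ratio falls outside the
        # 2:1 to 4:1 band suggested in Table 1.
        retention = (factors["student_enthusiasm"] + factors["student_learning"]) / 2
        if not 2 <= factors["student_computer_ratio"] <= 4:
            retention *= 0.5
        # Adoption moves toward the pressure level, at a rate set by retention.
        adoption += 0.5 * (pressure - adoption) * retention
        history.append(round(adoption, 3))
    return history
```

With all factors near 1.0, adoption climbs quickly toward its ceiling; with weak student enthusiasm and little visible learning, it stays low all year. The point is only that, once the factors are identified, their interactions can be stated precisely enough to be modeled and tested against classroom data.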

These factors are meant only to illustrate the kinds of issues a design theory must address. Many such issues have important consequences for how we should deploy the technologies we develop, and it is important that we start addressing them in a systematic way.

Table 1. Factors Affecting Success of Technology
1. For all technology, adoption depends on whether the teacher has a lot of activities or is starved for innovative things to do. This variable might be thought of as activity saturation and depends on how much the teacher values the activities currently used.

2. Network-based software (e.g., Earthlab, Kidnet) takes coercion to reach critical mass and simultaneously to achieve continued use. There must be enough people communicating from the beginning to hold people's interest. Critical mass requires enough machines (20) and enough participants.

3. All technology used in projects must have the ability to stop work and restart easily on another machine (portability or restart capability) in order to achieve continued use.

4. All computer technology requires multiple users for each machine (optimal between 2 to 1 and 4 to 1) in order to achieve cooperative learning (or kids teaching each other). This variable ought to be called student-computer ratio.

Format: Scope, Dependent variable, Independent variable.

Source: Denis Newman.

Table 2. Factors Affecting Success of Technology
There are 4 variables that affect the likelihood of adoption for any technology:

1. Teacher interest in technology. Some male teachers tend to be motivated by this variable, particularly if they have a computer at home.

2. Enhanced subject-matter learning. If the teacher feels technology can help students learn a particular subject better, she is more likely to adopt the technology.

3. Teaching career enhancement. If the teacher feels administrators expect or would value her using technology, she is more likely to do it.

4. Teacher interest in experimentation. If the teacher wants to try something new, then the technology has appeal.

There are at least 5 variables that affect institutionalization and continued use of any technology:

1. Coordination between decision makers. Computer coordinators, curriculum specialists, and teachers are all involved in making decisions about how technology is used. Sometimes they are at different levels in the school district, which makes coordination difficult. Various decisions, such as who orders software, are assigned to different people in different systems.

2. Powerful advocate. To the degree that a budget-controlling administrator is a strong advocate for the technology, institutionalization is more likely to occur.

3. Student enthusiasm. To the degree that teachers see that students are enthusiastic and self-motivated to work on tasks, teachers are rewarded and likely to continue use.

4. Student learning. Not only do teachers want to see students enthusiastic, but in time (about a month) they want to see some tangible effects on learning. Again this affects continued use.

5. Teacher enthusiasm. If the teacher likes the technology and feels it improves her teaching, then she is likely to continue use.

There are some classroom management variables that affect both adoption and continued use of computer technology.

1. Activity-centered classrooms. If teachers structure classrooms around activity centers, then it is easy to incorporate computers into classrooms by adding one or two computers to the activity centers. This style allows for effective use in low student-computer ratio settings.

2. Whole-class teaching. If a teacher normally teaches to the whole class at one time, she has several options for trying to deal with the classroom management problem:

a. Some students miss the lesson. If there are one or two computers in the classroom, the teacher may let a few students, who can afford to miss the lesson, work on computers at the same time as she conducts the lesson with the class. This can lead to problems about making up work. Teachers do not like to do this because they feel their lessons are important for everyone, and so this strategy works against continued use.

b. Teacher works with the whole class on computers together. This is what happened in the Columbus ACOT classroom, which had a 1:1 student-computer ratio (the computers mostly sit idle). Normally this strategy is implemented by going to a computer lab, which is somewhat disruptive of lesson continuity. It works somewhat better than (a) for continued use.

c. Teacher uses computer for demonstrations. If there is only one computer, then by using large screen projection, the teacher can run demonstrations on the computer. This probably leads to very little student learning.

Continued use of any technology also depends on the teacher's level of use of the technology. Susan Loucks identifies seven levels of expertise teachers move through as they gain greater ease and sophistication. Teacher training and professional development need to help teachers move through each of these levels.

Source: Andee Rubin

To appear in Scanlon, E.,& O'Shea, T. (Eds.). (in press). New directions in educational technology. New York: Springer-Verlag. This work was supported by the Center for Technology in Education under Grant No. 1-135562167-A1 from the Office of Educational Research and Improvement, U.S. Department of Education to Bank Street College of Education.


©1996 Education Development Center, Inc. All Rights Reserved.