Integrating technology: What does it mean?

by Brian Drayton

A remarkable amount of policy around education (teaching, learning, and assessing) is interwoven with “technology,” by which is usually meant “digital technology,” and most often something Web-based and commercial. As you know well, our Information Age is said to have transformed the way people learn (or “consume knowledge”), and therefore the role of the teacher has to change (or, has changed). Moreover, the infiltration of the Web into every nook and cranny of life is generating Big Data, which will provide unheard-of insights into the education process, putting Utopian tools into the hands of schools, bureaucracies, and teachers (or whatever they will become).

There is, however, still nowhere near enough research to enable us to critique such claims, or even decide which widgets to buy. Moreover, most technology innovations (getting a classroom connected to the Internet, for example) are in fact several innovations running in parallel, which makes it hard to frame rigorous, or even well-formed, claims about impact. Such work takes time and lots of careful analysis, and time is a commodity rarely lavished on research once the sales have been made, the machines or software are installed, and actual people have used the new system for a solid length of time (something our team at TERC spent some time exploring a few years ago with respect to 1-to-1 computing).

Allow me, therefore, to direct your attention to a growing series of posts at Larry Cuban’s blog. Larry is (sort of) live-blogging a recent research project that tries to get at the mysteries of technology integration, and just what kind of impact it is having. In the first post, Larry makes the case for a series of closely observed case studies of integration. His research questions are:

How have classroom, school, and district exemplars of technology integration been fully implemented and put into classroom practice?
Have these exemplars made a difference in teaching practice?

In his second post, Cuban discusses how he decided to describe varying degrees of integration, a notoriously vexed term, and takes a bottom-up approach to creating a definition, asking practitioners to direct him to examples of “best cases”. From these, he derived a set of indicators for tech integration:

* District had provided wide access to devices and established infrastructure for use.
* District established structures for how schools can improve learning and reach desired outcomes through technology.
* Particular schools and teacher leaders had repeatedly requested personal devices and classroom computers for their students.
* Certain teachers and principals came regularly to professional development workshops on computer use in lessons.
* Students had used devices frequently in lessons.

In part 3, Cuban, noting that integration is not “all or nothing,” discusses some “stage models” of integration, reflecting on their assumptions about what is happening at each stage, and about what enables the shift from one stage to another (such as the popular but debatable notions of PCK, pedagogical content knowledge, and its technology-inflected extension TPCK). Most importantly, he notes that these models tend to assume that once a given level of use is reached, we can infer what the student is doing and, especially, what she is learning. As he says (in post #4), seeing lots of functioning technology used often by students tells you nothing about student learning, nor even about the pedagogy of the class.

Cuban fans will not be surprised that it is for this reason that Larry asks (his question #2): Has the integration of technology actually changed teaching in the classroom, and in what ways? As he writes,

Far too little research has been done in answering this question about changes in teaching practices. So in researching and writing this book, I, too, focus on the process of classroom change and not yet how much and to what degree students have learned from these lessons. Once changes in classroom practices can be documented, then, and only then, can one begin to research how much and to what degree students have learned content and skills.

Our “wireless high school” study examined teaching practices in high school science through case studies, looking at dimensions like curriculum content, pedagogical practices (with a particular interest in student inquiry), and assessment, all in relation to the intended goal of the technology innovation. We could not look at change in practice, since we had not done a “pre-test” on these classrooms, though we did ask teachers to report changes in their practice that they were aware of. Given how many kinds of instrumentation and technology (including things like microscopes, multimeters, and glassware) science teachers already have, the process of integrating the Web into all of that is pretty formidable. It represents not just “teacher learning” but, in a broader sense, teacher growth, as teachers make choices about what is most important for their students to encounter and wrestle with; the student experience, and the design of tasks, are constantly being revisited. It takes time to figure out your pedagogical values again, especially when the technology keeps changing. As Cuban is quite aware, case study work is still pretty important, because our models of the challenge posed by technology integration, and of the learning and experimentation needed to make good use of it, are quite incomplete.

Larry follows his 4-part reflection on his study design with a guest article by Mary Jo Madda of EdSurge, entitled “Did that ed tech tool really cause that growth?” In this post, Madda makes some recommendations for how to evaluate studies claiming student learning impacts from new technology.

First, for educators, she recommends

#1: Look for the caveat statements, because they might discredit the study.
#2: Be wary of studies that report huge growth without running a proper experiment or revealing complexities in the data.

Then, for tech companies:

#1: Consider getting your study or research reviewed.
#2: Continue conducting or orchestrating research experiments.

Each of these recommendations is accompanied by a helpful discussion, and all the posts in this series include many links to research and other resources. In coming weeks, I will review at least some of Larry’s cases. I encourage you to at least check out the posts I’ve described here (and the vigorous discussions accompanying them!).

Read the full conversation: http://hub.mspnet.org/index.cfm/31558