Dr. Ward Mitchell Cates
To wrap up the series, I'm starting 2015 with an interview with one of my favorite instructional design thinkers and my personal evaluation mentor, Ward M. Cates. As you probably guessed, our connection started at Lehigh U. Let's see what he has to say about innovation and evaluation. Looking forward to some thoughtful blog interactions. I know instructional designers and graduate students are pros at blog interactions. Right, AECT Graduate Student Assembly?
Dr. Cates is a past International President of the Association for Educational Communications and Technology (AECT), which awarded him its Distinguished Service Award in 2008, and is the current President of the ECT Foundation Board, the non-profit wing of AECT. He has just stepped down after reviewing for ETR&D for 20 years. His research focuses on instructional and interface design for educational software. A longtime associate dean for the College of Education at Lehigh University, Ward is just completing his 40th year as a tenure-track faculty member.
For more info, visit www.lehigh.edu/~wmc0
How do we recognize innovation in teaching and learning with technology?
“Innovation” simply means “new, not previously tried.” When we talk about “innovation in teaching and learning,” however, we typically mean more than that. We use the phrase to mean trying new delivery systems, new instructional and learning methods and assessments, and new approaches to learners that are both new and effective. Innovation without success is seldom worth talking about, except for purposes of case learning. Thus, what we really mean is (effective) innovation. I think the only real way to recognize (effective) innovation is to look at outcomes. That is, (effective) innovation produces enhanced outcomes that take the learner closer to what is desired, both by the instructor and the learner. Technology is the tool here, and using technology to do things that either have been done before or have failed to be effective is not (effective) innovation. Technology offers affordances and opportunities. If, however, the design of the teaching and learning episode(s) is not, by itself, innovative, adding technology will not make it so. (Effective) innovation in teaching and learning with technology seems to be about the confluence of the innovation of the design with the effects of the opportunities and affordances of the technology employed to produce enhanced outcomes.
How we evaluate outcomes has to be based on what is desired by both the instructor and the learner. Thus, we must begin with what each seeks and how the design and delivery help attain the desired outcomes. Secondarily, we should consider cost-effectiveness. That is, how has the (effective) innovation increased scalability, reduced staffing, or eased other similar financial limitations that hamper current practice? But we must be cautious not to sacrifice desired outcomes to cost analyses. Cost-effective but lower-expectation learning can sacrifice teaching and learning on a golden altar.
What challenges might practitioners and researchers face when evaluating innovation? Do you have ideas for how to overcome these challenges?
The greatest threat, it seems to me, is that one might mistake the creativity and power of the technology for an innovation in teaching and learning. We technophiles tend to be easily enthralled by new technologies’ ability to do things we could not even have imagined a few years earlier. Thus, we can be wowed by the flash of the technology and fail to recognize that it actually enables nothing that had not been done before its introduction, or that it simply adds a new toy without enhancing outcomes meaningfully.
“Meaningful” is a key criterion here. I often cite what I call the technology coefficient. It is a fraction whose numerator is the need for the new technology in terms of gains in teaching and learning, and whose denominator is how much the technology costs to implement and how difficult it is to learn and use. When the numerator (need/gain) is great and the denominator (cost/difficulty) is low, the resultant coefficient is large and the use of the technology is likely to be warranted. When the inverse is true, the coefficient is small and the “true cost” of the use of the technology exceeds its meaningful contribution. Think of the coefficient this way: educational need over technology cost and difficulty.
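The technology coefficient is just a ratio, and a quick sketch can make the arithmetic concrete. The Python function and the 1–10 rating scale below are my own illustration of the idea, not anything specified in the interview:

```python
def technology_coefficient(gain: float, cost: float, difficulty: float) -> float:
    """Illustrative sketch of the 'technology coefficient':
    educational need/gain in the numerator; implementation cost plus
    difficulty of learning and use in the denominator.
    The 1-10 rating scale is an assumption for illustration only."""
    return gain / (cost + difficulty)

# A tool with large gains that is cheap and easy to use scores high...
high = technology_coefficient(gain=9, cost=2, difficulty=1)   # 9 / 3 = 3.0
# ...while a flashy but costly, hard-to-learn tool scores low.
low = technology_coefficient(gain=3, cost=8, difficulty=7)    # 3 / 15 = 0.2
```

On this toy scale, a coefficient well above 1 suggests the technology is likely warranted; well below 1, its "true cost" exceeds its meaningful contribution.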
The way to overcome this threat is to focus on outcomes and not technology. One should look for what an (effective) innovation contributes and why. If one cannot explain WHY something is an innovation, one might be led to suspect the influence of the novelty effect. Further, we need to avoid being persuaded by champions or affiliating too strongly with the technology. Focus on design and desired outcomes. Even if the innovation produces new and interesting outcomes, we need to remember that we seek not just new outcomes, but DESIRED outcomes.
Can you point to some promising innovations in teaching and learning?
I am encouraged by the focus on depth versus breadth and by the periodic re-emphasis on “critical thinking.” I am not sure these are innovations but they represent foci that could lead to (effective) innovations. I fear the dictatorship of standardized testing acts to kill such innovation, however. While I am not in any way opposed to standardized testing, I have little regard for testing that measures low-level learning, memorization and simple calculation. If testing is to be a major measure of outcome, it must measure something we desire to see accomplished.
I recall something I overheard over the fence in my yard many, many years ago. A 12-year-old neighbor was talking with her 8-year-old brother about a standardized test they had both taken that day in school. The younger brother asked his sister what she had answered to the question, “How many feet are in a yard?” She told him, “3.” He then said, “Darn, I missed it. I wrote, ‘Depends on how many animals are standing in the yard.’” So, was he wrong? Was her answer a better answer? It seems to me, we need to decide what the desired outcome is and to design tests that allow us to measure those outcomes, not just the ones that are easy to ask and easy to score.
Are there some effective research initiatives or studies our readers should examine? If not, why do you think that is the case?
I am disinclined to name some initiatives/studies as “effective” and, by omission, imply others are not. Let me suggest instead that we not become so caught up in “presentism” that we fail to examine and explore the research done over the last 50+ years. I constantly encounter researchers who state as their topic of research something like “online video” but clearly have done no reading in the long history of research on related areas, such as film use. The medium/delivery method is less important than the affordances. We need to think broadly, rather than defining our topic narrowly to minimize what we need to read. Much excellent research has been done over the years, not just in the last 10 years. I suspect “presentism” is aided and abetted by online searching. Sometimes, one just has to go into the stacks and handle a paper source. Not everything worth reading has been scanned or converted.
Lastly, I think reading broadly to get context is crucial to true scholarship. An evaluator/researcher needs to have a grasp on a wide range of literature and previous work. Each thing one reads adds to one’s understanding and may enhance one’s arsenal of evaluation weapons.
Previous posts in the series: