Over the next several weeks, I will be inviting a number of educational researchers and practitioners to offer their perspectives on the role of evaluation in teaching, learning, and technology innovation. Each has been asked to respond freely to the same four questions. My hope is to foster discussions that enrich understanding, challenge assumptions, and strengthen processes for those who evaluate proposals or design studies. The goal is to compile a useful reference here.
Stephen C. Ehrmann, Ph.D.
Most recently, Stephen C. Ehrmann served as Vice Provost for Teaching & Learning at the George Washington University. Previously he was founding Vice President of the Teaching, Learning, and Technology Group (TLT Group) and Director of the Flashlight Program for the Evaluation of Educational Uses of Technology; Senior Program Officer with the Annenberg/CPB Projects; Program Officer with the Fund for the Improvement of Postsecondary Education (FIPSE); and Director of Educational Research and Assistance at The Evergreen State College. He has a Ph.D. in Management and Higher Education from MIT.
Blog and bio: http://sehrmann.blogspot.com
How do we recognize innovation in teaching and learning with technology?
It combines something unfamiliar with something promising.
What challenges might practitioners and researchers face when evaluating innovation? Do you have ideas for how to overcome these challenges?
- Evaluation is one of those things that everyone assumes he or she can do; the biggest challenge is not realizing that there is a history here: methods, previous findings, and so on.
- Because innovation is by definition somewhat unfamiliar, to some degree you don’t know what you’re looking for.
- Those in charge of the evaluation are often advocates of the project, a role that often leads them to counterproductive actions such as:
  - Delaying the creation of an evaluation plan until all the available money has been committed to other purposes and, if it's a grant proposal, until the proposal is due in a day or two. These are two major reasons why evaluation plans for grant-funded edtech innovations are often so weak.
  - Another reason: delaying the evaluation until "all the kinks have been worked out." This means the evaluation isn't being used to identify the kinks or to help figure out how to work them out. It also often means that the evaluation is delayed until a time when no one remembers to do it, or has any commitment to do it. The initiative can end (as an initiative) with stacks of data sitting somewhere, unanalyzed.
- Rapture of the technology: paying too much attention to the technology itself, as though, by itself, it drives change. That's almost never true, so technocentric evaluations can easily produce puzzling, misleading results. Imagine two institutions, each using a different brand of learning management system. Institution A uses LMS-A, while B uses LMS-B, a competitor. Institution A has a long tradition of learning communities, seminars, and group projects, while institution B, valuing access highly, historically has asked little group work of its commuting students and distant learners. An evaluation is done of online communication among students: How frequent? How productive? A beats B. A technocentric evaluation is designed on the assumption that it's the technology that determines the use of the technology, so the author concludes that LMS-A is better for online learning communities than LMS-B.
In an unpublished article, I've described 10 principles for doing a better job of evaluating innovations in education, especially innovations using technology. Briefly:
- Above all, do no harm. (For example, don't ask for data if you don't have a firm idea of how you can satisfy your survey respondents and other informants that helping you was, indeed, worth their valuable time.)
- Design the evaluation so that, no matter what you find, each stakeholder is somehow better off.
- Be ready to compare apples with oranges, because that kind of comparison is almost inevitable in a study of an educational innovation.
- Focus on what people DO most often with the program (their "activities") rather than looking only at the tools with which they do it.
- Study why those people do those things in those ways.
- Compare the program with the most valuable alternative, not with doing nothing.
- Study costs, not just benefits.
- Remember that education does not work like a well-oiled machine: doing the "same thing" often does not produce the "same results."
- Recognize that different people using the program often have different motives, perceptions, circumstances, and, therefore, outcomes. Realizing this leads to a dramatically different approach to assessment and evaluation: the students are doing different things (#4 above) because they have different reasons and different influences acting on them (#5 above), so the outcomes will differ from student to student.
- Start now! It's never too early in the life of an innovation, even before it's introduced, to do useful studies.
Can you point to some promising innovations in teaching and learning?
The use of questions designed to make students think, delivered with polling techniques (such as clickers or cell phones) and combined with peer instruction. See this great video for an example: https://www.youtube.com/watch?v=WwslBPj8GgI
Are there some effective research initiatives or studies our readers should examine? If not, why do you think that is the case?
This is a provocative study, done over 15 years ago: J. C. Wright, S. B. Millar, S. A. Kosciuk, D. L. Penberthy, P. H. Williams, and B. E. Wampold (1998), "A Novel Strategy for Assessing the Effects of Curriculum Reform on Student Competence," Journal of Chemical Education, vol. 75, pp. 986-992 (August).