
Thursday, July 24, 2014

Evaluation & Innovation: Steve's Perspective

Over the next several weeks, I will be inviting a number of educational researchers and practitioners to offer their perspectives on the role of evaluation in teaching, learning, and technology innovation. Each has been asked to respond freely to the same four questions. My hope is to foster discussions that enrich understanding, challenge assumptions, and strengthen processes for those who evaluate proposals or design studies. The goal is to compile a useful reference here.

Stephen C. Ehrmann, Ph.D.
Most recently, Stephen C. Ehrmann served as Vice Provost for Teaching & Learning at the George Washington University. Previously he was founding Vice President of the Teaching, Learning, and Technology Group (TLT Group) and Director of the Flashlight Program for the Evaluation of Educational Uses of Technology; Senior Program Officer with the Annenberg/CPB Projects; Program Officer with the Fund for the Improvement of Postsecondary Education (FIPSE); and Director of Educational Research and Assistance at The Evergreen State College. He has a Ph.D. in Management and Higher Education from MIT.

Blog and bio: http://sehrmann.blogspot.com 


How do we recognize innovation in teaching and learning with technology?
It combines something unfamiliar with something promising. 


What challenges might practitioners and researchers face when evaluating innovation? Do you have ideas for how to overcome these challenges? 
  • Evaluation is one of those things that everyone assumes he or she can do; the biggest challenge is not realizing there is history here – methods, previous findings, etc.
  • Because innovation is by definition somewhat unfamiliar, to some degree you don’t know what you’re looking for.
  • Those in charge of the evaluation are often advocates of the project. This role often leads them to counter-productive actions such as:

      ◦ Delaying the creation of an evaluation plan until all the available money has been committed to other purposes and, if it’s a grant proposal, until the proposal is due in a day or two. These are two major reasons why evaluation plans for grant-funded edtech innovations are often so weak.
      ◦ Another reason: delaying the evaluation until ‘all the kinks have been worked out.’ This means the evaluation isn’t being used to identify the kinks or to help figure out how to work them out. It also often means that the evaluation is delayed until a time when no one remembers to do it, or has any commitment to do it. The initiative can end (as an initiative) with stacks of data sitting somewhere, unanalyzed.
  • Rapture of the technology: paying too much attention to the technology itself, as though it drives change by itself. That’s almost never true, so technocentric evaluations can easily produce puzzling, misleading results. Imagine two institutions, each using a different brand of learning management system. Institution A uses LMS-A, while Institution B uses LMS-B, a competitor. Institution A has a long tradition of learning communities, seminars, and group projects, while Institution B, valuing access highly, has historically asked for little group work from its commuting students and distance learners. An evaluation is done of online communication among students – how frequent? How productive? A beats B. A technocentric evaluation is designed on the assumption that the technology determines how the technology is used, so the author concludes that LMS-A is better for online learning communities than LMS-B.

In an unpublished article, I’ve described 10 principles for doing a better job evaluating innovations in education, especially innovations using technology. Briefly:
  1. Above all, do no harm. (For example, don’t ask for data if you don’t have a firm idea of how you can satisfy your survey respondents and other informants that helping you was, indeed, worth their valuable time.)
  2. Design the evaluation so that, no matter what you find, each stakeholder is somehow better off.
  3. Be ready to compare apples with oranges because that kind of comparison is almost inevitable in a study of an educational innovation.
  4. Focus on what people DO most often with the program (their ‘activities’) rather than looking only at the tools with which they do it.
  5. Study why those people do those things in those ways.
  6. Compare the program with the most valuable alternative, not with doing nothing.
  7. Study costs, not just benefits.
  8. Remember that education does not work like a well-oiled machine – doing the ‘same thing’ often does not produce the ‘same results.’
  9. Recognize that different people using the program often have different motives, perceptions, circumstances, and, therefore, outcomes.  Realizing this leads to a dramatically different approach to assessment and evaluation – the students are doing different things (#4 above) because they have different reasons and different influences acting on them (#5 above). So the outcomes will differ from student to student.
  10. Start now! It’s never too early in the life of an innovation, even before it’s introduced, to do useful studies.

Can you point to some promising innovations in teaching and learning?
The use of questions designed to make students think, combined with polling techniques (such as clickers or cell phones) and peer instruction. See here for a great video on this: https://www.youtube.com/watch?v=WwslBPj8GgI


Are there some effective research initiatives or studies our readers should examine? If not, why do you think that is the case?
This is a provocative study, done over 15 years ago: J. C. Wright, S. B. Millar, S. A. Kosciuk, D. L. Penberthy, P. H. Williams, and B. E. Wampold (1998), “A Novel Strategy for Assessing the Effects of Curriculum Reform on Student Competence,” Journal of Chemical Education, v. 75, pp. 986-992 (August).


Thursday, July 10, 2014

New Interview Series on Evaluation Coming Soon!

I am lining up a series of researchers, practitioners, and leaders to share their thoughts on various types of evaluation. The first series of interviews will focus on how one evaluates innovation in teaching, learning, and/or research. So be sure to check back in a couple of weeks for the first post. I'll also be lining up contributors for the blog at the 30th Annual Conference on Distance Teaching & Learning, being held August 12-14 in Madison, Wisconsin. I'm presenting at the conference as well.

More about the Conference!

Sponsored by the University of Wisconsin-Madison, this event is a great place to hear leading experts and share best practices with colleagues in online education and training from around the world. Plus, Madison is a beautiful place to visit in the summer. If you’re not familiar with this event and want to find out more, I encourage you to visit the conference website at www.uwex.edu/disted/conference. Hope to see you there.