
Friday, May 22, 2015

Program Evaluation Resources




I haven't fallen off the face of the evaluation map or of social media, but my activity level needed to decrease. Not only are we finalizing the details of our move from PA to VA, but I've also been working in the building you see in this picture. What is she doing, you ask? I'm writing a book chapter: Massive Open Evaluation: The Potential Role of Crowd-Sourced Input to Improve E-learning.

During this journey, I've come across several valuable resources. Two were passed on to me by Tom Reeves, PhD, Professor Emeritus of Learning, Design, and Technology, UGA:


Here's Anita Baker's Evaluation Services website. It has excellent resources and tools.

Have a great summer!
  


Friday, March 6, 2015

Characteristics of an Effective Evaluator


Who wants to be an evaluator?

I'm fond of the saying: never take advice from someone you're not willing to trade places with in that area. Whom would you consult before your next evaluation project?

Why would you take technology program or e-learning evaluation advice from this source? What characteristics make that person or that evaluation resource valuable to you?

Share your thoughts here. 

Friday, January 30, 2015

Evaluation & Innovation: Ward's View



Dr. Ward Mitchell Cates
To wrap up the series, I'm starting 2015 with an interview with one of my favorite instructional design thinkers and my personal evaluation mentor, Ward M. Cates. As you probably guessed, our connection started at Lehigh University. Let's see what he has to say about innovation and evaluation. I'm looking forward to some thoughtful blog interaction; I know instructional designers and graduate students are pros at it. Right, AECT Graduate Student Assembly?

Dr. Cates is a past International President of the Association for Educational Communications and Technology, which awarded him its Distinguished Service Award in 2008, and is the current President of the ECT Foundation Board, the non-profit wing of AECT. He has just stepped down after reviewing for ETR&D for 20 years. His research focuses on instructional and interface design for educational software. Longtime associate dean of the College of Education at Lehigh University, Ward is just completing his 40th year as a tenure-track faculty member.
For more info, visit www.lehigh.edu/~wmc0

How do we recognize innovation in teaching and learning with technology?
“Innovation” simply means “new, not previously tried.”  When we talk about “innovation in teaching and learning,” however, we typically mean more than that.  We use the phrase to mean trying new delivery systems, new instructional and learning methods and assessments, and new approaches to learners that are both new and effective. Innovation without success is seldom worth talking about, except for purposes of case learning.  Thus, what we really mean is (effective) innovation.  I think the only real way to recognize (effective) innovation is to look at outcomes.  That is, (effective) innovation produces enhanced outcomes that take the learner closer to what is desired, both by the instructor and the learner.  Technology is the tool here, and using technology to do things that either have been done before or have failed to be effective is not (effective) innovation.  Technology offers affordances and opportunities.  If, however, the design of the teaching and learning episode(s) is not, by itself, innovative, adding technology will not make it so.  (Effective) innovation in teaching and learning with technology seems to be about the confluence of the innovation of the design with the effects of the opportunities and affordances of the technology employed to produce enhanced outcomes.
How we evaluate outcomes has to be based on what is desired by both the instructor and the learner.  Thus, we must begin with what each seeks and how the design and delivery help attain the desired outcomes. Secondarily, we should consider cost-effectiveness.  That is, how has the (effective) innovation increased scalability, reduced staffing, or eased other financial limitations that hamper current practice?  But we must be cautious not to sacrifice desired outcomes to cost analyses.  Cost-effective but lower-expectation learning can sacrifice teaching and learning on a golden altar.
What challenges might practitioners and researchers face when evaluating innovation? Do you have ideas for how to overcome these challenges?
The greatest threat, it seems to me, is that one might mistake the creativity and power of the technology for an innovation in teaching and learning.  We technophiles tend to be easily enthralled by new technologies’ ability to do things we could not have even imagined a few years earlier.  Thus, we can be wowed by the flash of the technology and fail to recognize that the technology actually enables nothing that had not been done before its introduction, or that it simply adds a new toy without meaningfully enhancing outcomes.
“Meaningful” is a key criterion here.  I often cite what I call the technology coefficient.  It is a fraction whose numerator is the need for the new technology in terms of gains in teaching and learning and whose denominator is how much the technology costs to implement and how difficult it is to learn and use. When the numerator (need/gain) is great and the denominator (cost/difficulty) is low, the resultant coefficient is large and the use of the technology is likely to be warranted.  When the inverse is true, the coefficient is small and the “true cost” of the use of the technology exceeds its meaningful contribution.  Think of the coefficient this way: Educational need over technology.
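To make the coefficient concrete, here is one minimal way to write it out, assuming the paired terms combine multiplicatively (the interview does not say exactly how need and gain, or cost and difficulty, are combined, and the symbols below are my shorthand, not Dr. Cates's notation):

\[
C_{\text{tech}} = \frac{\text{need for the technology} \times \text{gain in teaching and learning}}{\text{cost to implement} \times \text{difficulty to learn and use}}
\]

On this reading, a coefficient well above 1 suggests the technology's use is warranted, while a coefficient well below 1 suggests its “true cost” exceeds its meaningful contribution.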
The way to overcome this threat is to focus on outcomes and not technology.  One should look for what an (effective) innovation contributes and why.  If one cannot explain WHY something is an innovation, one might be led to suspect the influence of the novelty effect.  Further, we need to avoid being persuaded by champions or affiliating too strongly with the technology.  Focus on design and desired outcomes.  Even if the innovation produces new and interesting outcomes, we need to remember that we seek not just new outcomes, but DESIRED outcomes.
Can you point to some promising innovations in teaching and learning?
I am encouraged by the focus on depth versus breadth and by the periodic re-emphasis on “critical thinking.”  I am not sure these are innovations, but they represent foci that could lead to (effective) innovations.  I fear the dictatorship of standardized testing acts to kill such innovation, however.  While I am not in any way opposed to standardized testing, I have little regard for testing that measures low-level learning, memorization, and simple calculation.  If testing is to be a major measure of outcome, it must measure something we desire to see accomplished.
I recall something I overheard over the fence in my yard many, many years ago.  A 12-year-old neighbor was talking with her 8-year-old brother about a standardized test they had both taken that day in school.  The younger brother asked his sister what she had answered to the question, “How many feet are in a yard?”  She told him, “3.” He then said, “Darn, I missed it.  I wrote, ‘Depends on how many animals are standing in the yard.’”  So, was he wrong?  Was her answer a better answer?  It seems to me, we need to decide what the desired outcome is and to design tests that allow us to measure those outcomes, not just the ones that are easy to ask and easy to score.
Are there some effective research initiatives or studies our readers should examine? If not, why do you think that is the case?
I am disinclined to name some initiatives/studies as “effective” and, by omission, imply others are not.  Let me suggest instead that we not become so caught up in “presentism” that we fail to examine and explore the research done over the last 50+ years.  I constantly encounter researchers who state as their topic of research something like “online video” but clearly have done no reading in the long history of research on related areas, such as film use.  The medium/delivery method is less important than the affordances.  We need to consider our topic broadly rather than define it narrowly to minimize what we need to read.  Much excellent research has been done over the years, not just in the last 10 years.  I suspect “presentism” is aided and abetted by online searching.  Sometimes, one just has to go into the stacks and handle a paper source.  Not everything worth reading has been scanned or converted.
Lastly, I think reading broadly to get context is crucial to true scholarship.  An evaluator/researcher needs to have a grasp on a wide range of literature and previous work. Each thing one reads adds to one’s understanding and may enhance one’s arsenal of evaluation weapons.

Previous posts in the series:

Monday, December 29, 2014

Evaluation & Innovation: Dirk's View

The last post of 2014 belongs to Dirk Ifenthaler. Dirk and I ran into each other at AECT in Jacksonville, FL. In the midst of a hectic schedule, he took the time to share his thoughts. Although he would have liked to provide more insights, he says he has many more arguments ready for us, as needed. Ready to hear from Dirk?
Dr. Dirk Ifenthaler

Dirk Ifenthaler’s previous roles include Professor and Director of the Centre for Research in Digital Learning at Deakin University, Australia; Manager of Applied Research and Learning Analytics at Open Universities Australia; and Professor of Education and Interim Department Chair at the University of Mannheim, Germany.

He was a 2012 Fulbright Scholar-in-Residence at the Jeannine Rainbolt College of Education at the University of Oklahoma, USA. Dirk’s research outcomes include numerous co-authored books, book series, book chapters, journal articles, and international conference papers, as well as successful grant funding in Australia, Germany, and the USA. He is the Editor-in-Chief of the Springer journal Technology, Knowledge and Learning.
Website: http://www.ifenthaler.info
Technology, Knowledge and Learning: http://www.springer.com/10758/

How do we recognize innovation in teaching and learning with technology?
The major lesson to be learned from fifty years of research in the area of teaching and learning with technology is that what happens in learning environments is quite complex and multi-faceted. Students do not learn from technology; rather, technology is a vehicle that supports the processes of learning.
Due to the often very slow publication process, reports and results from innovative projects are published years later, which does not facilitate the often-discussed link between research and practice. Researchers’ recent activities on social media help to disseminate some aspects of innovation; however, important results are often held back for publication, which again slows down the innovation process.

What challenges might practitioners and researchers face when evaluating innovation? Do you have ideas for how to overcome these challenges?
There is a clear lack of empirical evidence focusing on innovation in teaching and learning with technology. We need to conduct multidisciplinary and multimethod research within innovative learning environments, involving researchers, education providers, and international institutions. Clearly, practitioners and researchers need to talk to each other. One place for that is AECT (www.aect.org), which brings these two groups together at its annual International Convention.

Can you point to some promising innovations in teaching and learning?
The current debate focusing on learning analytics is very promising. However, learning analytics is most often misunderstood as being merely a tool for reporting educational data. The real innovation of learning analytics is real-time support and feedback for all stakeholders involved in teaching and learning with technology. This will result in the facilitation of learning processes, the optimization of teaching practices and materials, and improved curricular designs and administrative processes.

Are there some effective research initiatives or studies our readers should examine? If not, why do you think that is the case?
Check out Technology, Knowledge and Learning (www.springer.com/10758). There, I support Work-in-Progress Studies and Emerging Technology Reports, two publication types meant to help disseminate results from innovative projects while they are still in their early phases.

Previous posts in the series:

Wednesday, November 19, 2014

Evaluation & Innovation: Anissa's View


Thanks to an ITForum nomination, I introduce to some and present to others this month's post by Dr. Anissa Lokey-Vega. As faculty at Kennesaw State University, Anissa serves as the coordinator of the Online Teaching Endorsement and Certificate programs for the Department of Instructional Technology. She is also founder and lead consultant at Curriculum Research and Development (CR&D), an educational technology program evaluation firm that specializes in serving the K12 sector. She holds a Ph.D. in Instructional Design and Technology from Georgia State University.
Curriculum Research & Development: http://www.curriculumrd.com/
Department of Instructional Technology at Kennesaw State University http://bagwell.kennesaw.edu/departments/itec

How do we recognize innovation in teaching and learning with technology?
I’m sure you’ve heard this before: if you look for a problem, you will find one; however, near that problem you will also likely find innovation. Whether in K12, higher education, or industry training, the teaching and learning process is in itself an ill-structured problem complicated by unique contexts, limited resources, and (those pesky) humans. People will innovate to solve the problems that pose the biggest barriers in their context. This doesn’t mean an innovation will always work or will always be adopted to saturation, but each innovation will stir debate and naysayers. This tension occurs because once an innovation has proven itself effective enough to be widely adopted in the environment, it is no longer innovative. It becomes the new status quo.

What challenges might practitioners and researchers face when evaluating innovation? Do you have ideas for how to overcome these challenges?
Each evaluation is unique. The combination of context, time, problem, and innovation is unique every single time. Yes, there are general principles and models that can guide an evaluation plan; however, flexibility is critically important to the process, and knowing when it is appropriate to change course can be challenging. Evaluation expertise is not always valued as much as it should be, and this can result in evaluation outcomes that offer little direction to organizational managers. Practitioners should read up on evaluation practices in their fields and knowledgeably look for evaluation expertise within their organization. If such expertise is not readily available, it is wise to consider outsourcing the process. While it is not always possible, the best evaluations are begun early in the innovation process, or maybe even before. This allows the data to drive the change process and optimize the outcomes.

Can you point to some promising innovations in teaching and learning?
I’m probably a bit biased on this one since I coordinate the online teaching certificate at Kennesaw State, but I think blended and online learning practices and tools have the most potential to disrupt teaching and learning in a positive and meaningful way.  Of course, if you teach at a fully online school, these tools and practices are now the status quo; however, many settings, particularly in the K12 sector, are not there yet. Online instruction grants the learner more power in the differentiation of instruction, and it puts a teacher’s instructional design on display for self-reflection. This self-review is not as easily achieved in a real-time, unrecorded class session. Also, teachers’ best designs are more easily replicated when stored in a digital format rather than lost with the passing of time. This allows teachers to put their best foot forward even on an off day.

Are there some effective research initiatives or studies our readers should examine? If not, why do you think that is the case?
Well, this is a bit challenging given that evaluation is context- and problem-type-specific.  This means it would be difficult to curate a research journal on the topic of innovation evaluation alone.  The audience would likely be scattered across contexts and problem types, leaving too few relevant articles to warrant devoted readership for each individual. I read about evaluation in journals across all disciplines. In fact, when I wrote my dissertation, there was no published method to evaluate one or more intended curricula; I had to develop that process on my own. That said, the evaluation-related books resting closest to me right now are also the ones that show the most wear: the works of Thomas Guskey and Joellen Killion. I have to modify their designs, as expected, for each situation, but they both promote excellent planning and structure in the evaluation process.  Each time I start to wrap my brain around a new program or innovation that I need to evaluate, their works serve as my evaluation muses.

Previous posts in the series: