
Wednesday, November 19, 2014

Evaluation & Innovation: Anissa's View


Thanks to an ITForum nomination, I'm pleased to introduce to some, and present to others, this month's post by Dr. Anissa Likey-Vega. As faculty at Kennesaw State University, Anissa serves as coordinator of the Online Teaching Endorsement and Certificate programs in the Department of Instructional Technology. She is also founder and lead consultant at Curriculum Research and Development (CR&D), an educational technology program evaluation firm that specializes in serving the K12 sector. She holds a Ph.D. in Instructional Design and Technology from Georgia State University.
Curriculum Research & Development: http://www.curriculumrd.com/
Department of Instructional Technology at Kennesaw State University http://bagwell.kennesaw.edu/departments/itec

How do we recognize innovation in teaching and learning with technology?
I’m sure you’ve heard this before: if you look for a problem, you will find one; however, near that problem you will also likely find innovation. Whether in K12, higher education, or industry training, the teaching and learning process is itself an ill-structured problem complicated by unique contexts, limited resources, and (those pesky) humans. People innovate to solve the problems that pose the biggest barriers in their context. This doesn’t mean every innovation will work or be adopted to saturation, but each one will stir debate and draw naysayers. This tension occurs because once an innovation has proven effective enough to be widely adopted in an environment, it is no longer innovative. It becomes the new status quo.

What challenges might practitioners and researchers face when evaluating innovation? Do you have ideas for how to overcome these challenges?
Each evaluation is unique. The combination of context, time, problem, and innovation is different every single time. Yes, there are general principles and models that can guide an evaluation plan; however, flexibility is critically important to the process, and knowing when it is appropriate to change course can be challenging. Evaluation expertise is not always valued as much as it should be, and this can result in evaluation outcomes that offer little direction to organizational managers. Practitioners should read up on evaluation practices in their fields and knowledgeably look for evaluation expertise within their organizations. If such expertise is not readily available, it is wise to consider outsourcing the process. While it is not always possible, the best evaluations begin early in the innovation process, or perhaps even before it. This allows the data to drive the change process and optimize the outcomes.

Can you point to some promising innovations in teaching and learning?
I’m probably a bit biased on this one since I coordinate the online teaching certificate at Kennesaw State, but I think blended and online learning practices and tools have the most potential to disrupt teaching and learning in a positive and meaningful way. Of course, if you teach at a fully online school, these tools and practices are now the status quo; however, many settings, particularly in the K12 sector, are not there yet. Online instruction grants the learner more power in the differentiation of instruction, and it puts a teacher’s instructional design on display for self-reflection. This self-review is not as easily achieved in a real-time, unrecorded class session. Also, a teacher’s best designs are more easily replicated when stored in a digital format rather than lost with the passing of time. This allows teachers to put their best foot forward even on an off day.

Are there some effective research initiatives or studies our readers should examine? If not, why do you think that is the case?
Well, this is a bit challenging given that evaluation is specific to context and problem type. That makes it difficult to curate a research journal on the topic of innovation evaluation alone: the audience would likely be scattered across contexts and problem types, leaving too few relevant articles to warrant any individual’s devoted readership. I read about evaluation in journals across all disciplines. In fact, when I wrote my dissertation there was no published method for evaluating one or more intended curricula; I had to develop that process on my own. That said, the evaluation-related books resting closest to me right now are also the ones that show the most wear: the works of Thomas Guskey and Joellen Killion. I have to modify their designs, as expected, for each situation, but both promote excellent planning and structure in the evaluation process. Each time I start to wrap my brain around a new program or innovation that I need to evaluate, their works serve as my evaluation muses.

Previous posts in the series: