I've had the privilege of collecting thoughts from various people on this blog about how we might approach evaluating innovation. Many regard e-learning as disruptive, with the potential to innovate teaching and learning. This year, I will start a new series on the potential role of crowdsourced data in evaluating innovation, instructional design, e-learning, and technology programs. As before, I'm looking for candidates to interview over the next five to six months. Please send me nominations.
In the meantime, I'm looking forward to discussions this fall with attendees of the NATO E-Learning conference (August) and the AECT conference (November). I posit that there may be opportunities for crowd data to inform our instructional designs. The wisdom of a defined crowd can be beneficial during instructional design and redesign processes. For an organization such as NATO, with its tremendous human capital, the crowd can help members solve their unique design problems and make decisions about the unknown or unfamiliar in ways essential to NATO's goals of promoting stability, security, and prosperity. Given the complexity of developing programs, services, policies, and support for e-learning, I maintain that leaders may find it challenging to evaluate programs regularly enough to improve quality. It's worth a conversation in a world of lifelong learners and MOOCs.
Do you agree? Let's talk at AECT in Jacksonville. Others, let's talk here. Read more about my thoughts on the topic in my latest book chapter, "Massive Open Program Evaluation: Crowdsourcing's Potential to Improve E-Learning Quality," in the book pictured below.