
Monday, September 11, 2017

Is there a Viable Role for Crowdsourced Data in Evaluation Studies? Steve’s Thoughts

Dr. Steve Ehrmann
I’m starting to engage researchers in a conversation about crowdsourced data. More formally, I started with the publication of a chapter in Revolutionizing Modern Education through Meaningful E-learning (Khan, B., ed., 2016). I will continue in Jacksonville at an AECT Research & Theory roundtable. Over the next four months, I’ll also post the thoughts of various instructional design and distance learning researchers, leaders, and practitioners on the potential role of crowdsourced data. We’ll look at e-learning, innovation, and technology program evaluations in particular. Some readers may have experience and feedback to share. I’m looking forward to the exchanges.

For our first installment, we’ll hear from Dr. Steve Ehrmann. Steve and I met when he was at George Washington University, and we later exchanged ideas around a couple of redesign initiatives that started during his time at the Kirwan Center for Academic Innovation at the University System of Maryland and continue today.
Steve shared thoughts below about how we might approach crowdsourcing and innovation. First take a look at his unpublished paper attached here and his description of the uniform impact and unique uses perspectives (pp. 9-11).  
The uniform impact perspective focuses on the same outcome for each user of the program (how well did they learn X, for example).  In contrast, the unique uses perspective assumes that each user may interpret the program differently, use it differently, and experience different results; this perspective assumes that results differ qualitatively from one user to the next. 
Crowdsourcing would be especially appropriate for a unique uses study of an innovation.  An evaluator or researcher might take the following steps:
1. Identify a crowd of users and figure out how to reach them online. 
2. Explain to them why it's in their interest to contribute their time to your inquiry (i.e., to respond thoughtfully to your message).  Intrinsically motivating your crowd produces more valid feedback than extrinsic rewards (e.g., entering contributors in a lottery). 
3. Ask each user to consider what's been most valuable for them about their use of the program.  What's been most burdensome, frustrating, or limiting about their use?  Explain that you need the crowd to produce responses that are (a) each quite important to the person involved and (b) qualitatively different from other benefits and problems suggested by others.  (In 1990, if you were studying uses of word processing with personal computers, the first answers might have to do with the benefits of multiple fonts and the ease of editing. But, after enough brainstorming, eventually someone might mention that their whole approach to rethinking has changed because rewriting is so easy.)

4.  Starting with these two long lists, begin a second round of inquiry about each item. Are there patterns? Are there connections? Do they suggest new ways of understanding the program itself?
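Step 3's requirement that responses be qualitatively different from one another could be supported with a very simple screening script. The sketch below is purely illustrative: the sample responses are hypothetical, and the Jaccard-similarity threshold of 0.3 is an arbitrary choice, not a standard from the evaluation literature.

```python
# Sketch: flag crowd responses that are qualitatively distinct from
# those already collected. Uses only word overlap, a deliberately
# rough proxy for "qualitatively different."

def words(text):
    """Lowercased word set, for a rough content comparison."""
    return set(text.lower().split())

def jaccard(a, b):
    """Overlap between two word sets (0 = no overlap, 1 = identical)."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def distinct_responses(responses, threshold=0.3):
    """Keep a response only if it is sufficiently unlike every kept one."""
    kept = []
    for text in responses:
        if all(jaccard(words(text), words(k)) < threshold for k in kept):
            kept.append(text)
    return kept

responses = [
    "Multiple fonts make my documents look professional",
    "Multiple fonts make documents look more professional",  # near-duplicate
    "Editing is so easy that I rethink my whole draft",
]
for r in distinct_responses(responses):
    print(r)  # the near-duplicate is filtered out
```

A real study would of course treat this only as triage: a human evaluator still judges whether two differently worded responses describe the same underlying use of the program.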

The closest I’ve seen to this approach occurred in a face-to-face discussion of perhaps 20 first-generation users of an educational innovation: the use of chat rooms by students in f2f composition courses (as I recall, this was in the early 1990s). The classes met in computer labs. Instead of talking, faculty and students would type to one another.  A couple of months into the first term, faculty from perhaps ten institutions met to discuss what was happening in their courses. The first 45 minutes focused on technical issues and on what the faculty liked.  Then one faculty member, ashamed, admitted that students in his course had erupted into a barrage of profanity and obscenity.  A long pause. Then two or three others said something similar had happened to them, too.  Cutting to the end of the discussion, one faculty member remarked, “Think about the French Revolution. Think about what happens when powerless people get power. Some windows get broken. But they’re investing energy into writing. That’s what you need most in a writing class. So the trick is not to crush the revolution but to figure out how to channel the energy!”  Later conversations revealed two additional, different ways to interpret this innovation, each with different insights about how to make more intentional, effective use of such chat rooms. 
Michael Scriven calls this a goal-free evaluation. What I’d emphasize is creating a process through which users can provide quite different pictures of what the program is for, how it can be used, and what benefits can be created at what cost. Those perspectives may be incompatible with each other, and at least some may come as a surprise to the people who created the program.


Thank you, Steve, for starting off our new discussion!

Sunday, September 3, 2017

Fall Series: Evaluation and the Wisdom of the Crowd

New opportunities exist for including stakeholders' and others' input in evaluation research. Can you imagine the improvements we might front-load into e-learning and instructional designs if we could incorporate qualitative data seamlessly into the decision-making process? I'd like to explore those possibilities this fall. 

Crowdsourcing can generate large volumes of high-velocity information, both structured (spreadsheets, databases) and unstructured (videos, images, audio). Qualitative research also involves large data sets, and we can now process them quickly in ways we could not in the past. For example, helpful applications for qualitative research and e-learning program evaluation include “big data” techniques for information retrieval (IR), audio analytics, and video analytics (Gandomi & Haider, 2014). The many techniques for acquiring, cleaning, aggregating, representing, and analyzing data help justify reconceptualizing our evaluation paradigms and models, especially our interpretivist and postmodernist ones. Should we use crowdsourced data? If so, under what conditions, given the challenges of internet security and crowd behavior? I've done some research and thinking about this over the last year. Let's talk.
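As one concrete example of the aggregation step mentioned above, a first pass over unstructured crowd feedback can be as modest as term frequency, the simplest of the information-retrieval techniques. This is a minimal sketch; the sample comments and stopword list are hypothetical and only illustrative.

```python
# Sketch: surface the most frequent non-stopword terms across a set
# of free-text crowd comments, as a starting point for qualitative coding.

from collections import Counter

STOPWORDS = {"the", "a", "is", "to", "and", "of", "in", "it", "was"}

def top_terms(comments, n=3):
    """Count non-stopword terms across all comments, most common first."""
    counts = Counter(
        word
        for comment in comments
        for word in comment.lower().split()
        if word not in STOPWORDS
    )
    return counts.most_common(n)

comments = [
    "The video lectures were clear and engaging",
    "Video quality was poor on mobile",
    "More video examples would help",
]
print(top_terms(comments))  # "video" should top the list
```

Even this crude count hints at where an evaluator might look next; real projects would add stemming, phrase detection, and human review before drawing any conclusions.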

Defining and Framing

Estellés-Arolas and González-Ladrón-de-Guevara’s (2012) literature review, which worked toward an integrated definition of crowdsourcing, included ten definitions with a problem-solving purpose. Their proposed definition is as follows: 

Crowdsourcing is a type of participative online activity in which an individual, an institution, a non-profit organization, or company proposes to a group of individuals of varying knowledge, heterogeneity, and number, via a flexible open call, the voluntary undertaking of a task.

This definition should be useful to our discussion, since the use of evaluation data in this context facilitates problem solving.


Crowdsourcing can change the way evaluators work, giving them access to just-in-time assistance in performing evaluation tasks. Researchers and practitioners will need more information to help determine if existing paradigms, approaches, and methods should be reassessed to accommodate crowdsourcing. Evaluators should also think carefully about what tasks might be appropriate for various program evaluation approaches, given some of the problems with crowdsourcing.

Additional questions frame this blog feature: Does e-learning program evaluation need to develop its own definition of crowdsourcing, or merely validate Estellés-Arolas and González-Ladrón-de-Guevara’s (2012) definition? Do members of the American Evaluation Association task force want to conduct a needs assessment to determine whether crowdsourced input will affect its guiding principles for evaluators (American Evaluation Association, 1995)?

Sources cited:
Estellés-Arolas, E., & González-Ladrón-de-Guevara, F. (2012). Towards an integrated crowdsourcing definition. Journal of Information Science, 38(2), 189-200.

Gandomi, A., & Haider, M. (2014). Beyond the hype: Big data concepts, methods, and analytics. International Journal of Information Management, 137-144.

Tuesday, August 29, 2017

Evaluating Innovation and the Wisdom of the Crowd

I've had the privilege of collecting thoughts from various people on this blog about how we might approach evaluating innovation. Many regard e-learning as disruptive, with the potential to transform teaching and learning. This year, I will start a new series on the potential role of crowdsourced data in evaluating innovation, instructional design, e-learning, and technology programs. As before, I'm looking for candidates to interview over the next five to six months. Please send me nominations.

In the meantime, I'm looking forward to discussions this fall with NATO E-Learning (August) and AECT conference (November) goers. I posit that crowd data may offer opportunities to inform our instructional designs. The wisdom of a defined crowd can be beneficial during instructional design and redesign processes. For an organization such as NATO, with its tremendous human capital, the crowd can help members solve their unique design problems and make decisions about the unknown or unfamiliar in ways essential to its goals of promoting stability, security, and prosperity. Given the complexity of developing programs, services, policies, and support for e-learning, leaders may find it challenging to evaluate programs regularly in order to improve quality. It's worth a conversation in a world of lifelong learners and MOOCs. 


Do you agree? Let's talk at AECT in Jacksonville. Others, let's talk here. Read more of my thoughts on the topic in my latest book chapter, Massive Open Program Evaluation: Crowdsourcing’s Potential to Improve E-Learning Quality, in the book pictured below.

Friday, May 22, 2015

Program Evaluation Resources




I haven't fallen off the face of the evaluation map or of social media; my activity level simply needed to decrease. Not only are we finalizing the details of our move from PA to VA, but I've also been working in the building you see in this picture. What am I doing there? I'm writing a book chapter: Massive Open Evaluation: The Potential Role of Crowd Sourced Input to Improve E-learning.

During this journey, I've come across several valuable resources. Two were passed on to me by Tom Reeves, PhD, Professor Emeritus of Learning, Design, and Technology, UGA:


Here's Anita Baker's Evaluation Services website. It has excellent resources and tools.

Have a great summer!
  


Friday, March 6, 2015

Characteristics of an Effective Evaluator


                  Who wants to be an evaluator?

I'm fond of the saying: never take advice from someone you wouldn't be willing to trade places with in that area. Whom would you consult before your next evaluation project?

Why would you take technology program or e-learning evaluation advice from that source? What characteristics make that person or evaluation resource valuable to you?

Share your thoughts here.