Monday, September 11, 2017

Is there a Viable Role for Crowdsourced Data in Evaluation Studies? Steve’s Thoughts

Dr. Steve Ehrmann
I’m starting to engage researchers in a conversation about crowdsourced data. More formally, I started with the publication of a chapter in Revolutionizing Modern Education through Meaningful E-learning (Khan, B., ed., 2016). I will continue in Jacksonville, at a Research & Theory AECT round table. Over the next four months, I’ll also post the thoughts of various instructional design and distance learning researchers, leaders, and practitioners on the potential role of crowdsourced data. We’ll look at e-learning, innovation, and technology program evaluations in particular. Some may already have experience and feedback to share. I’m looking forward to the exchanges.

For our first installment, we’ll hear from Dr. Steve Ehrmann. Steve and I met when he was at George Washington University, and we later exchanged ideas around a couple of redesign initiatives that began during his time at the Kirwan Center for Academic Innovation at the University System of Maryland and continue to this day.
Steve shared thoughts below about how we might approach crowdsourcing and innovation. First, take a look at his unpublished paper attached here and his description of the uniform impact and unique uses perspectives (pp. 9-11).
The uniform impact perspective focuses on the same outcome for each user of the program (how well did they learn X, for example).  In contrast, the unique uses perspective assumes that each user may interpret the program differently, use it differently, and experience different results; this perspective assumes that results differ qualitatively from one user to the next. 
Crowdsourcing would be especially appropriate for a unique uses study of an innovation.  An evaluator or researcher might take the following steps:
1. Identify a crowd of users and figure out how to reach them online. 
2. Explain to them why it's in their interest to contribute their time to your inquiry (i.e., to respond thoughtfully to your message). Intrinsically motivating your crowd produces more valid feedback than extrinsic rewards (e.g., entering them in a lottery if they contribute).
3. Ask each user to consider what's been most valuable for them about their use of the program.  What's been most burdensome, frustrating, or limiting about their use?  Explain that you need the crowd to produce responses that are (a) each quite important to the person involved and (b) qualitatively different from the benefits and problems suggested by others.  (In 1990, if you were studying uses of word processing with personal computers, the first answers might have to do with the benefits of multiple fonts and the ease of editing. But, after enough brainstorming, eventually someone might mention that their whole approach to writing has changed because rewriting is so easy.)

4. Starting with these two long lists, begin a second round of inquiry about each item. Are there patterns? Are there connections? Do they suggest new ways of understanding the program itself?
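The screening implied by step 3 — keeping only responses that are qualitatively different from those already collected — can be sketched in code. Here is a minimal Python illustration using the standard library's difflib; the function names, threshold, and sample responses are my own assumptions, not part of Steve's procedure, and real studies would judge similarity of meaning (e.g., by human coding), not just of wording:

```python
from difflib import SequenceMatcher

def is_novel(response, accepted, threshold=0.6):
    """Return True if `response` differs enough, on the surface,
    from every response already accepted into the long list."""
    return all(
        SequenceMatcher(None, response.lower(), prior.lower()).ratio() < threshold
        for prior in accepted
    )

def collect_unique_uses(responses, threshold=0.6):
    """Build the 'long list' of qualitatively distinct benefits or problems."""
    accepted = []
    for r in responses:
        if is_novel(r, accepted, threshold):
            accepted.append(r)
    return accepted

# Toy responses echoing the 1990 word-processing example above.
crowd = [
    "Multiple fonts make my documents look professional.",
    "Multiple fonts make documents look more professional.",  # near-duplicate
    "Editing is so easy that I revise far more often.",
    "Because rewriting is easy, my whole approach to writing changed.",
]
unique = collect_unique_uses(crowd)
```

A crude filter like this could only triage an online crowd's submissions; the second-round inquiry in step 4 still depends on human judgment about patterns and connections.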

The closest I’ve seen to this approach occurred in a face-to-face discussion of perhaps 20 first generation users of an educational innovation: the use of chat rooms by students in f2f composition courses (as I recall, this was in the early 1990s). The classes met in computer labs. Instead of talking, faculty and students would type to one another.  A couple of months into the first term, faculty from perhaps ten institutions met to discuss what was happening in their courses. The first 45 minutes focused on technical issues and on what the faculty liked.  Then one faculty member, ashamed, admitted that students in his course had erupted into a barrage of profanity and obscenity.  A long pause. Then two or three others said something similar had happened to them, too.  Cutting to the end of the discussion, one faculty member remarked, “Think about the French revolution. Think about what happens when powerless people get power. Some windows get broken. But they’re investing energy into writing. That’s what you need most in a writing class. So the trick is not to crush the revolution but to figure out how to channel the energy!”  Later conversations revealed two additional, different ways to interpret this innovation, each with different insights about how to make more intentional, effective use of such chat rooms. 
Michael Scriven calls this a goal-free evaluation. What I’d emphasize is creating a process through which users can provide quite different pictures of what the program is for, how it can be used, and what benefits can be created at what cost. Those perspectives may be incompatible with each other, and at least some may come as a surprise to the people who created the program.


Thank you, Steve, for starting off our new discussion!

Sunday, September 3, 2017

Fall Series: Evaluation and the Wisdom of the Crowd

New opportunities exist for including stakeholders' and others' input in evaluation research. Can you imagine the nature of the improvements we might front load in e-learning and instructional designs if we're able to incorporate qualitative data seamlessly into the decision making process? I'd like to explore those possibilities this fall. 

Crowdsourcing can generate large volumes of high-velocity information, both structured (spreadsheets, databases) and unstructured (videos, images, audio). Qualitative research also involves large data sets, and we can now process them quickly in ways we could not in the past. For example, some helpful applications for qualitative research and e-learning program evaluation include “big data” techniques for information retrieval (IR), audio analytics, and video analytics (Gandomi & Haider, 2014). The many available techniques to acquire, clean, aggregate, represent, and analyze data help justify the re-conceptualization of our evaluation paradigms and models--especially our interpretivist and postmodernist ones. Should we use crowdsourced data? If so, under what conditions, given the challenges of internet security and crowd behavior? I've done some research and thinking about this over the last year. Let's talk.
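To make the information-retrieval idea concrete, here is a deliberately tiny Python sketch of the simplest such technique — counting substantive terms across free-text evaluation comments. The stop-word list, sample comments, and function names are my own illustrative assumptions, not anything proposed by Gandomi and Haider (2014); real IR pipelines are far more sophisticated:

```python
import re
from collections import Counter

# A minimal, illustrative stop-word list.
STOP_WORDS = {"the", "a", "an", "is", "it", "to", "of", "and", "in", "for", "so", "that"}

def top_terms(comments, n=5):
    """Count the most frequent substantive words across free-text comments."""
    words = []
    for comment in comments:
        for w in re.findall(r"[a-z']+", comment.lower()):
            if w not in STOP_WORDS:
                words.append(w)
    return Counter(words).most_common(n)

# Hypothetical comments from an e-learning program evaluation.
comments = [
    "The chat room energized students who rarely spoke.",
    "Students wrote far more in the chat room than on paper.",
    "Moderating the chat room takes real effort.",
]
frequent = top_terms(comments, 3)
```

Even this toy version hints at why automation matters: with thousands of crowdsourced comments, surfacing recurring terms by hand would be impractical, while interpreting what the terms mean remains the evaluator's job.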

Defining and Framing

Estellés-Arolas and González-Ladrón-de-Guevara’s (2012) literature review toward an integrated definition of crowdsourcing examined existing definitions, including 10 with a problem-solving purpose. Their proposed definition is as follows:

Crowdsourcing is a type of participative online activity in which an individual, an institution, a non-profit organization, or company proposes to a group of individuals of varying knowledge, heterogeneity, and number, via a flexible open call, the voluntary undertaking of a task.

This definition should be useful to our discussion, since the use of evaluation data in this context facilitates problem solving.


Crowdsourcing can change the way evaluators work, giving them access to just-in-time assistance in performing evaluation tasks. Researchers and practitioners will need more information to help determine if existing paradigms, approaches, and methods should be reassessed to accommodate crowdsourcing. Evaluators should also think carefully about what tasks might be appropriate for various program evaluation approaches, given some of the problems with crowdsourcing.

Additional questions to frame this blog feature: Does e-learning program evaluation need its own definition of crowdsourcing, or merely a validation of Estellés-Arolas and González-Ladrón-de-Guevara’s (2012) definition? Do members of the American Evaluation Association task force want to conduct a needs assessment to determine whether crowdsourced input will impact the association's guiding principles for evaluators (American Evaluation Association, 1995)?

Sources cited:
Estellés-Arolas, E., & González-Ladrón-de-Guevara, F. (2012). Towards an integrated crowdsourcing definition. Journal of Information Science, 38(2), 189-200.

Gandomi, A., & Haider, M. (2014). Beyond the hype: Big data concepts, methods, and analytics. International Journal of Information Management, 35(2), 137-144.