Dr. Steve Ehrmann
For our first installment, we’ll hear from Dr. Steve Ehrmann. Steve and I met when he was at George Washington University, and we later exchanged ideas around a couple of redesign initiatives that began during his time at the Kirwan Center for Academic Innovation at the University System of Maryland and continue to this day.
Steve shared some thoughts below about how we might approach crowdsourcing and innovation. First, take a look at his unpublished paper, attached here, and his description of the uniform impact and unique uses perspectives (pp. 9-11).
The uniform impact perspective focuses on the same outcome for every user of the program (how well did they learn X, for example). In contrast, the unique uses perspective assumes that each user may interpret the program differently, use it differently, and experience different results: results that differ qualitatively from one user to the next.
Crowdsourcing would be especially appropriate for a unique uses study of an innovation. An evaluator or researcher might take the following steps:
1. Identify a crowd of users and figure out how to reach them online.
2. Explain to them why it's in their interest to contribute their time to your inquiry (i.e., to respond thoughtfully to your message). Intrinsically motivating your crowd produces more valid feedback than offering extrinsic rewards (e.g., entering them in a lottery if they contribute).
3. Ask each user to consider what's been most valuable for them about their use of the program. What's been most burdensome, frustrating, or limiting about their use? Explain that you need the crowd to produce responses that are (a) each quite important to the person involved and (b) qualitatively different from the benefits and problems suggested by others (a rough screening sketch follows this list). (In 1990, if you were studying uses of word processing with personal computers, the first answers might have to do with the benefits of multiple fonts and the ease of editing. But, after enough brainstorming, eventually someone might mention that their whole approach to rethinking has changed because rewriting is so easy.)
4. Starting with these two long lists, begin a second round of inquiry about each item. Are there patterns? Are there connections? Do they suggest new ways of understanding the program itself?
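As a concrete, entirely hypothetical aside on step 3(b): with a large crowd, an evaluator might pre-screen free-text responses so that near-duplicates are set aside and only qualitatively distinct items reach the second round. The minimal Python sketch below assumes raw text responses and an arbitrary similarity cutoff of 0.6; the function name `is_novel` and the sample answers (echoing the 1990 word-processing example above) are illustrative inventions, not part of Steve's method.

```python
# Hypothetical sketch (not part of Steve's method): screen crowdsourced
# free-text responses so only qualitatively distinct items reach round two.
from difflib import SequenceMatcher


def is_novel(candidate: str, accepted: list[str], cutoff: float = 0.6) -> bool:
    """True if the candidate is not a near-duplicate of any accepted response.

    The 0.6 cutoff is an arbitrary illustrative choice.
    """
    return all(
        SequenceMatcher(None, candidate.lower(), kept.lower()).ratio() < cutoff
        for kept in accepted
    )


# Sample responses echoing the 1990 word-processing example above.
responses = [
    "Editing is so much easier than on a typewriter.",
    "Editing is much easier than with a typewriter.",  # near-duplicate: dropped
    "Because rewriting is easy, my whole approach to rethinking has changed.",
]

distinct: list[str] = []
for r in responses:
    if is_novel(r, distinct):
        distinct.append(r)

print(distinct)  # two qualitatively different responses remain
```

SequenceMatcher ships with Python's standard library, so the sketch runs with no dependencies; surface string matching will miss paraphrases, which is why the human inquiry in step 4 still does the real work.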
The closest I’ve seen to this approach occurred in a face-to-face discussion among perhaps 20 first-generation users of an educational innovation: the use of chat rooms by students in face-to-face composition courses (as I recall, this was in the early 1990s). The classes met in computer labs. Instead of talking, faculty and students would type to one another. A couple of months into the first term, faculty from perhaps ten institutions met to discuss what was happening in their courses. The first 45 minutes focused on technical issues and on what the faculty liked. Then one faculty member, ashamed, admitted that students in his course had erupted into a barrage of profanity and obscenity. A long pause. Then two or three others said something similar had happened to them, too. Cutting to the end of the discussion, one faculty member remarked, “Think about the French Revolution. Think about what happens when powerless people get power. Some windows get broken. But they’re investing energy into writing. That’s what you need most in a writing class. So the trick is not to crush the revolution but to figure out how to channel the energy!” Later conversations revealed two additional, different ways to interpret this innovation, each with different insights about how to make more intentional, effective use of such chat rooms.
Michael Scriven calls this a goal-free evaluation. What I’d emphasize is creating a process through which users can provide quite different pictures of what the program is for, how it can be used, and what benefits can be created at what cost. Those perspectives may be incompatible with each other, and at least some may come as a surprise to the people who created the program.
Thank you, Steve, for starting off our new discussion!