New opportunities exist for including stakeholders' and others' input in evaluation research. Can you imagine the improvements we might front-load into e-learning and instructional designs if we could incorporate qualitative data seamlessly into the decision-making process? I'd like to explore those possibilities this fall.
Crowdsourcing involves gathering and processing large volumes of high-velocity information, both structured (spreadsheets, databases) and unstructured (videos, images, audio). Qualitative research also involves large data sets, and we can now process them at a speed and scale we could not in the past. For example, helpful “big data” techniques for qualitative research and e-learning program evaluation include information retrieval (IR), audio analytics, and video analytics (Gandomi & Haider, 2014). The many techniques for acquiring, cleaning, aggregating, representing, and analyzing data help justify re-conceptualizing our evaluation paradigms and models, especially our interpretivist and postmodernist ones. Should we use crowdsourced data? If so, under what conditions, given some of the challenges of internet security and crowds? I've done some research and thinking about this over the last year. Let's talk.
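To make the IR idea a little more concrete, here is a minimal sketch, assuming Python with scikit-learn, of how an evaluator might rank open-ended learner comments against a query. The comments, the query, and the variable names are hypothetical placeholders I've made up for illustration; this shows one generic retrieval technique, not a method prescribed by Gandomi and Haider (2014).

```python
# A minimal information-retrieval (IR) sketch: ranking open-ended,
# crowdsourced evaluation comments against an evaluator's query.
# Assumes scikit-learn is installed; the comments and query are
# hypothetical placeholders, not data from any real evaluation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

comments = [
    "The video lectures were hard to follow on a phone.",
    "Quizzes gave useful feedback right after each module.",
    "Audio quality in the recorded webinars was poor.",
    "The discussion forum helped me solve assignment problems.",
]

query = "problems with audio and video in the course"

# Represent comments and query as TF-IDF vectors, then rank by cosine similarity.
vectorizer = TfidfVectorizer(stop_words="english")
doc_vectors = vectorizer.fit_transform(comments)
query_vector = vectorizer.transform([query])
scores = cosine_similarity(query_vector, doc_vectors).ravel()

# Print comments from most to least relevant to the query.
for score, comment in sorted(zip(scores, comments), reverse=True):
    print(f"{score:.2f}  {comment}")
```

Even a toy example like this suggests how quickly a large body of unstructured stakeholder input could be triaged before deeper qualitative analysis.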
Defining and Framing
Estelles-Arolas and González-Ladrón-de-Guevara's (2012) literature review toward an integrated definition of crowdsourcing included 10 definitions with a problem-solving purpose. Their proposed definition is as follows:
Crowdsourcing is a type of participative online activity in which an individual, an institution, a non-profit organization, or company proposes to a group of individuals of varying knowledge, heterogeneity, and number, via a flexible open call, the voluntary undertaking of a task.

This definition should be useful to our discussion, since the use of evaluation data in this context facilitates problem solving.
Crowdsourcing can change the way evaluators work, giving them access to just-in-time assistance in performing evaluation tasks. Researchers and practitioners will need more information to help determine whether existing paradigms, approaches, and methods should be reassessed to accommodate crowdsourcing. Evaluators should also think carefully about which tasks might be appropriate for various program evaluation approaches, given some of the problems with crowdsourcing.
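As one concrete illustration of those problems, the sketch below, assuming only Python's standard library and entirely hypothetical ratings, shows a naive way an evaluator might screen out careless crowd contributors before aggregating their scores. It is just one simple heuristic I've sketched for discussion, not a recommended or validated evaluation method.

```python
# A minimal sketch of one "problem with crowds": noisy or careless raters.
# Assumes crowd contributors rated e-learning modules on a 1-5 scale; all
# ratings below are made up. Contributors who deviate strongly from the
# per-module median are dropped before averaging.
from statistics import median, mean

# rater -> {module: rating}; hypothetical example data
ratings = {
    "rater_a": {"module_1": 4, "module_2": 5, "module_3": 4},
    "rater_b": {"module_1": 4, "module_2": 4, "module_3": 5},
    "rater_c": {"module_1": 1, "module_2": 1, "module_3": 1},  # possible careless rater
}

# Per-module medians act as a rough consensus reference.
modules = sorted({m for r in ratings.values() for m in r})
consensus = {m: median(r[m] for r in ratings.values() if m in r) for m in modules}

# Keep raters whose mean absolute deviation from consensus is small.
def mean_deviation(rater_scores):
    return mean(abs(score - consensus[m]) for m, score in rater_scores.items())

trusted = {name: r for name, r in ratings.items() if mean_deviation(r) <= 1.0}

# Aggregate only trusted raters' scores per module.
aggregated = {m: mean(r[m] for r in trusted.values() if m in r) for m in modules}
print(aggregated)  # {'module_1': 4.0, 'module_2': 4.5, 'module_3': 4.5}
```

Whether this kind of automatic filtering is defensible, or instead silences legitimate dissenting voices, is exactly the sort of question evaluators would need to settle before relying on crowdsourced input.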
Additional questions to frame this blog feature: Does e-learning program evaluation need to develop its own definition of crowdsourcing, or merely validate Estelles-Arolas and González-Ladrón-de-Guevara's (2012) definition? Do members of the American Evaluation Association task force want to consider conducting a needs assessment to determine whether crowdsourced input will impact its guiding principles for evaluators (American Evaluation Association, 1995)?
Sources cited:
Estelles-Arolas, E., & González-Ladrón-de-Guevara, F. (2012). Towards an integrated crowdsourcing definition. Journal of Information Science, 38(2), 189-200.
Gandomi, A., & Haider, M. (2014). Beyond the hype: Big data concepts, methods, and analytics. International Journal of Information Management, 137-144.