
Monday, December 29, 2014

Evaluation & Innovation: Dirk's View

The last post for 2014 belongs to Dirk Ifenthaler. Dirk and I ran into each other in Jacksonville, FL, at AECT. In the midst of a hectic schedule, he took the time to share his thoughts. Although he would have liked to provide more insights, he says he has plenty more arguments ready for us should we need them. Ready to hear from Dirk?
Dr. Dirk Ifenthaler

Dirk Ifenthaler's previous roles include Professor and Director of the Centre for Research in Digital Learning at Deakin University, Australia; Manager of Applied Research and Learning Analytics at Open Universities Australia; and Professor of Education and Interim Department Chair at the University of Mannheim, Germany.

He was a 2012 Fulbright Scholar-in-Residence at the Jeannine Rainbolt College of Education at the University of Oklahoma, USA. Dirk's research outcomes include numerous co-authored books, book series, book chapters, journal articles, and international conference papers, as well as successful grant funding in Australia, Germany, and the USA. He is Editor-in-Chief of the Springer journal Technology, Knowledge and Learning.
Website: http://www.ifenthaler.info
Technology, Knowledge and Learning: http://www.springer.com/10758/

How do we recognize innovation in teaching and learning with technology?
The major lesson to be learned from fifty years of research in the area of teaching and learning with technology is that what happens in learning environments is quite complex and multi-faceted. Students cannot learn from technology; rather, technology is a vehicle that supports the processes of learning.
Because the publication process is often very slow, reports and results from innovative projects are published years later, which does little to strengthen the much-discussed link between research and practice. Researchers' recent activities on social media help to disseminate some aspects of innovation; however, important results are often held back for publication, which again slows down the innovation process.

What challenges might practitioners and researchers face when evaluating innovation? Do you have ideas for how to overcome these challenges?
There is a clear lack of empirical evidence focusing on innovation in teaching and learning with technology. We need to conduct multidisciplinary, multi-method research within innovative learning environments, involving researchers, education providers, and international institutions. Clearly, practitioners and researchers need to talk to each other. One place to do so is AECT (www.aect.org), which brings these two groups together at its annual International Convention.

Can you point to some promising innovations in teaching and learning?
The current debate focusing on learning analytics is very promising. However, learning analytics is most often misunderstood as merely a tool for reporting educational data. The real innovation of learning analytics is the real-time support and feedback it offers all stakeholders involved in teaching and learning with technology. This will result in better-facilitated learning processes, optimized teaching practices and materials, and improved curricular designs and administrative processes.

Are there some effective research initiatives or studies our readers should examine? If not, why do you think that is the case?
Check out Technology, Knowledge and Learning (www.springer.com/10758). There, I am supporting Work-in-Progress Studies and Emerging Technology Reports. These two publication types are intended to help disseminate results from innovative projects while they are still in their early phases.


Wednesday, November 19, 2014

Evaluation & Innovation: Anissa's View


Thanks to an ITForum nomination, I introduce to some and present to others this month's post by Dr. Anissa Lokey-Vega. As faculty at Kennesaw State University, Anissa serves as the coordinator for the Online Teaching Endorsement and Certificate programs in the Department of Instructional Technology. She is also founder and lead consultant at Curriculum Research and Development (CR&D), an educational technology program evaluation firm that specializes in serving the K12 sector. She holds a Ph.D. in Instructional Design and Technology from Georgia State University.
Curriculum Research & Development: http://www.curriculumrd.com/
Department of Instructional Technology at Kennesaw State University: http://bagwell.kennesaw.edu/departments/itec

How do we recognize innovation in teaching and learning with technology?
I’m sure you’ve heard this before: If you look for a problem, you will find one; however, near that problem you will also likely find innovation. Whether K12, higher education, or industry training, the teaching and learning process is in itself an ill-structured problem complicated by unique contexts, limited resources, and (those pesky) humans. People will be innovating to solve the problems that serve as the biggest barriers to them in their context. This doesn’t mean innovation will always work or will always be adopted to saturation, but each innovation will stir debate and nay-sayers. This tension occurs because once an innovation has proven itself effective enough to be widely adopted in the environment, it is no longer innovative. It becomes the new status quo.

What challenges might practitioners and researchers face when evaluating innovation? Do you have ideas for how to overcome these challenges?
Each evaluation is unique. The combination of context, time, problem, and innovation is unique every single time. Yes, there are general principles and models that can guide an evaluation plan; however, flexibility is critically important to the process, and knowing when it is appropriate to change course can be challenging. Evaluation expertise is not always valued as much as it should be, and this can result in evaluation outcomes that offer little direction to organizational managers. Practitioners should read up on evaluation practices in their fields and knowledgeably look for evaluation expertise within their organization. If such expertise is not readily available, then it is wise to consider outsourcing the process. While it is not always possible, the best evaluations are pursued early in the innovation process, or maybe even before. This allows the data to drive the change process and optimize the outcomes.

Can you point to some promising innovations in teaching and learning?
I'm probably a bit biased on this one since I coordinate the online teaching certificate at Kennesaw State, but I think blended and online learning practices and tools have the most potential to disrupt teaching and learning in a positive and meaningful way. Of course, if you teach at a fully online school, these tools and practices are now the status quo; however, many settings, particularly in the K12 sector, are not there yet. Online instruction grants the learner more power in the differentiation of instruction, and it puts a teacher's instructional design on display for self-reflection. This self-review is not as easily achieved in a real-time, unrecorded class session. Also, a teacher's best designs are more easily replicated when stored in a digital format rather than lost with the passing of time. This allows teachers to put their best foot forward even on an off day.

Are there some effective research initiatives or studies our readers should examine? If not, why do you think that is the case?
Well, this is a bit challenging given that evaluation is context and problem-type specific.  This means it would be difficult to curate a research journal on the topic of innovation evaluation alone.  The audience would likely be scattered in context and problem-type, resulting in few articles to warrant devoted readership for each individual. I read about evaluation in journals that cross all disciplines. In fact, when I wrote my dissertation there was no published method to evaluate one or more intended curricula. I had to develop that process on my own. That said, my evaluation-related books that are resting closest to me right now are also the ones that show the most wear. These are the works of Thomas Guskey and Joellen Killion. I have to modify their designs as expected for each situation, but they both promote excellent planning and structure in the evaluation process.  Each time I start to wrap my brain around a new program or innovation that I need to evaluate, their works serve as my evaluation muses.


Monday, October 20, 2014

Evaluation & Innovation: Bryan's Perspective

Dr. Bryan Alexander

I met Bryan in Atlanta at a NITLE symposium in April of 2013, but as those who know Bryan understand, it seems like I've known him forever.  I am a regular reader of his posts and enjoy his sense of humor. He was the first to volunteer to help with this blog. Although I started this series with my Lehigh students and AECT in mind, I'd like to hear from my fellow evaluation instructors and professors about how this series helps you with your students. 
Bryan Alexander is a futurist, researcher, writer, speaker, consultant, and teacher, working in the field of how technology transforms education. He completed his English language and literature PhD at the University of Michigan in 1997, with a dissertation on doppelgangers in Romantic-era fiction and poetry. Learn more about this NITLE fellow here: http://bryanalexander.org/bio/


How do we recognize innovation in teaching and learning with technology? 
Not well, and that's a problem.  Teaching in general is difficult to track for a variety of reasons, including the way course management systems lock classes away from observers.  For now we hear about innovative teaching through publications (statistically very rare), social media (a bit better), and personal contact.  Social media may be the best route for discovering innovation, as its ease of use lets practitioners share thoughts, reactions, and observations on the fly.
What challenges might practitioners and researchers face when evaluating innovation? Do you have ideas for how to overcome these challenges? 
There are many challenges, depending on the background of would-be evaluators.  Professional training can make it difficult to appreciate innovation, either when trying to perceive new work in someone else's field or in seeking to understand changes to one's own.  Observers can also focus too much on the first iteration of a project, rather than looking to its development and maturation over time.  It is also important to distinguish between achievement and learning versus other student responses (e.g., approval of change).
Overcoming these challenges is implicit in each one: pay more attention to innovation over time, and pay less attention to non-learning responses.  Additionally, evaluators would do well to use social media to develop their reflections.
Can you point to some promising innovations in teaching and learning? 
The flipped classroom is perhaps the most notable.  It is tied to no specific technology (though it usually relies on Web video or podcasting), and it is clearly focused on improving learning by enhancing the classroom experience.
Open education is growing steadily, as the amount of open material builds and the number of practitioners increases.
Mobile learning continues to transform teaching and learning, rendering nearly all locations potential connections to vast amounts of content and collaboration.
Gaming offers potentially huge and deep gains for learning.
Are there some effective research initiatives or studies our readers should examine? If not, why do you think that is the case? 
Columbia's Teachers College keeps publishing useful material.
There are a good number of books on mobile technology worth reading, even as the technology's rapid advances date them quickly: Rheingold's Smart Mobs, Katz and Aakhus's Perpetual Contact, and Ito's Personal, Portable, Pedestrian.

Let me return to social media.  Readers should curate sources on Twitter, the blogosphere, podcasting, etc., as innovators often inhabit these spaces to share their thoughts.


Saturday, September 13, 2014

Evaluation & Innovation: Rick's Perspective

Drs. Shearer and Amankwatia at DTL
     
Penn State University has been a good PA neighbor and friend to me and other practitioners in the distance education field. At the Distance Teaching and Learning Conference, I ran into Rick Shearer and asked him to lend his voice to our series. Rick is Director of Penn State's World Campus Learning Design. Dr. Shearer has been involved in the distance education field for over 25 years.

How do we recognize innovation in teaching and learning with technology?
I believe the best description of innovation in teaching and learning is that innovation is a process in which we use new technologies to solve existing problems in our learning environments. So, in essence, we likely have innovation happening everywhere, and we want it to happen organically rather than in a managed way. Instructional designers are using new tools daily to solve instructional problems in our online courses. What we need is a way to capture these innovations and better disseminate that information so folks do not have to reinvent the wheel.

What challenges might practitioners and researchers face when evaluating innovation? Do you have ideas for how to overcome these challenges?
The biggest challenge is finding a way to pilot an innovation and then being able to walk away from it if it doesn't work. How can you run small pilot studies on technology innovations in courses and then evaluate their effectiveness without getting into long-term contractual relationships with vendors? Often it is easy to get an innovative idea into a course, but much harder to stop using it if it does not help students reach the stated learning outcomes. Also, in piloting these innovations and technologies, we must allow time over several semesters to evaluate the impact, and we cannot get caught up in running out to adopt the next shiny object that comes along. We must take a more measured approach to our testing and evaluation of innovations.

Can you point to some promising innovations in teaching and learning?
If we already know of innovative solutions, they probably are not innovative anymore; a better question may be about the problems we still need technologies to address.

Key areas where I see a lot of innovative approaches happening in the future will be around learning analytics (not predictive analytics), where we explore ideas of personalized learning paths and mastery-based approaches. What do these look like in today's connected world, where we value the social aspect of learning?

Also, how will the distance education community move to provide verification of our students for the DOE, and possibly for upcoming requirements of the reauthorization of the Higher Education Act?

Another area that is already emerging, but needs more work, is the integration of social-presence tools into our courses in ways that are seamless but also protect the privacy and rights of our students.

Are there some effective research initiatives or studies our readers should examine? If not, why do you think that is the case?
Unfortunately, there are few places for practitioners to publish their work. Most journals are research oriented and require full studies that are difficult for practicing IDs to take on.
Although AECT has recently provided a venue for practical research and reports, we need more options available for these types of practical studies.


Friday, August 22, 2014

Evaluation & Innovation: Michael's Perspective

I recently returned from my regular trip to the Distance Teaching and Learning Conference in Madison, WI. One of my highlights was a pre-conference workshop on developmental evaluation with Michael Q. Patton. I asked Michael to lend his voice to our series.

I am the Evaluator - Patton
Michael Q. Patton, Ph.D.


Michael Quinn Patton is an independent evaluation consultant based in Minnesota. He is a former President of the American Evaluation Association (AEA). He has authored six evaluation books, including the 4th edition of Utilization-Focused Evaluation and the 3rd edition of Qualitative Research and Evaluation Methods. He is the recipient of the Alva and Gunnar Myrdal Award for "outstanding contributions to evaluation use and practice" and the Paul F. Lazarsfeld Award for lifetime contributions to evaluation theory, both from AEA. His latest book is Developmental Evaluation: Applying Complexity Concepts to Enhance Innovation and Use.
http://www.guilford.com/books/Developmental-Evaluation/Michael-Quinn-Patton/9781606238721

     

How do we recognize innovation in teaching and learning with technology?
Innovation is contextual, not absolute.  What is accepted practice in one context may be different and innovative in another.  Thus, innovation is a socially constructed notion that has to be understood and interpreted within a context.


What challenges might practitioners and researchers face when evaluating innovation? Do you have ideas for how to overcome these challenges?
The challenge is that traditional ways of doing evaluation (logic models, SMART goals, predetermined outcome indicators, mandated rubrics and metrics) inhibit innovation.  The solution is developmental evaluation, which accompanies innovators on the emergent journey of innovation, adapting the evaluation to emergent issues.  For details about developmental evaluation, see: http://www.guilford.com/books/Developmental-Evaluation/Michael-Quinn-Patton/9781606238721

Can you point to some promising innovations in teaching and learning?
Digital portfolios for online learners to archive their learning and products of their learning. These are accessible to the learners on a cumulative, long-term basis and can be shared for special purposes (e.g., a job opportunity).

Are there some effective research initiatives or studies our readers should examine? If not, why do you think that is the case?
Online learning, like all effective interventions, is most effective when geared to specific subgroups with identifiable needs.  For example, homeless youth have special needs.  Here is a study that identifies how to work effectively with homeless youth: http://www.terralunacollaborative.com/wp-content/uploads/2014/03/9-Evidence-Based-Principles-to-Help-Youth-Overcome-Homelessness-Webpublish.pdf

Previous Post in Series: Steve Ehrmann's Perspective

Thursday, July 24, 2014

Evaluation & Innovation: Steve's Perspective

Over the next several weeks, I will be inviting several educational researchers and practitioners to offer their perspectives on the role of evaluation in teaching, learning, and technology innovation. Each is asked to freely respond to the same four questions. My hope is to foster discussions that enrich understandings, challenge assumptions, and strengthen processes for those who evaluate proposals or design studies. The goal is to compile a useful reference here.

Steve Ehrmann photo
Stephen C. Ehrmann, Ph.D.
Most recently, Stephen C. Ehrmann served as Vice Provost for Teaching & Learning at the George Washington University.  Previously he was founding Vice President of the Teaching, Learning, and Technology Group (TLT Group) and Director of the Flashlight Program for the Evaluation of Educational Uses of Technology; Senior Program Officer with the Annenberg/CPB Projects; Program Officer with the Fund for the Improvement of Postsecondary Education (FIPSE); and Director of Educational Research and Assistance at The Evergreen State College. He has a Ph.D. in Management and Higher Education from MIT.

Blog and bio: http://sehrmann.blogspot.com 


How do we recognize innovation in teaching and learning with technology?
It combines something unfamiliar with something promising. 


What challenges might practitioners and researchers face when evaluating innovation? Do you have ideas for how to overcome these challenges? 
  • Evaluation is one of those things that everyone assumes he or she can do; the biggest challenge is not realizing there is history here – methods, previous findings, etc.
  • Because innovation is by definition somewhat unfamiliar, to some degree you don’t know what you’re looking for.
  • Those in charge of the evaluation are often advocates of the project. This role often leads them to counter-productive actions such as:

o   Delaying the creation of an evaluation plan until all the available money has been committed to other purposes and, if it's a grant proposal, the proposal is due in a day or two.  This is one of the two major reasons why evaluation plans for grant-funded edtech innovations are often so weak.
o   The other reason: delaying the evaluation until 'all the kinks have been worked out'.  This means the evaluation isn't being used to identify the kinks or to help figure out how to work them out.  It also often means that the evaluation is delayed until a time when no one remembers to do it, or has any commitment to do it.  The initiative can end (as an initiative) with stacks of data sitting somewhere, unanalyzed.
  • Rapture of the technology: paying too much attention to the technology itself as though, by itself, it drives change.  That's almost never true, so techno-centric evaluations can easily produce puzzling, misleading results.  Imagine two institutions, each using a different brand of learning management system.  Institution A uses LMS-A, while B uses LMS-B, a competitor.  Institution A has a long tradition of learning communities, seminars, and group projects, while institution B, valuing access highly, historically has asked little group work of its commuting students and distance learners.  An evaluation is done of online communication among students: how frequent? How productive?  A beats B. A technocentric evaluation is designed on the assumption that it's the technology that determines the use of the technology, so the author concludes that LMS-A is better for online learning communities than LMS-B.

In an unpublished article, I've described 10 principles for doing a better job evaluating innovations in education, especially innovations using technology. Briefly:
  1. Above all, do no harm. (For example, don’t ask for data if you don’t have a firm idea of how you can satisfy your survey respondents and other informants that helping you was, indeed, worth their valuable time.)
  2. Design the evaluation so that, no matter what you find, each stakeholder is somehow better off.
  3. Be ready to compare apples with oranges because that kind of comparison is almost inevitable in a study of an educational innovation.
  4. Focus on what people DO most often with the program (their ‘activities’) rather than looking only at the tools with which they do it.
  5. Study why those people do those things in those ways.
  6. Compare the program with the most valuable alternative, not with doing nothing.
  7. Study costs, not just benefits.
  8. Remember that education does not work like a well-oiled machine – doing the ‘same thing’ often does not produce the ‘same results.’
  9. Recognize that different people using the program often have different motives, perceptions, circumstances, and, therefore, outcomes.  Realizing this leads to a dramatically different approach to assessment and evaluation – the students are doing different things (#4 above) because they have different reasons and different influences acting on them (#5 above). So the outcomes will differ from student to student.
  10. Start now!  It's never too early in the life of an innovation, even going back to before it's introduced, to do useful studies.
Can you point to some promising innovations in teaching and learning?
The use of questions designed to make students think, combined with polling techniques (such as clickers or cell phones) and peer instruction.  See here for a great video on this: https://www.youtube.com/watch?v=WwslBPj8GgI
      
Are there some effective research initiatives or studies our readers should examine? If not, why do you think that is the case?
This is a provocative study, done over 15 years ago: J. C. Wright, S. B. Millar, S. A. Kosciuk, D. L. Penberthy, P. H. Williams, and B. E. Wampold (1998), "A Novel Strategy for Assessing the Effects of Curriculum Reform on Student Competence," Journal of Chemical Education, v. 75, pp. 986-992 (August).

Thursday, July 10, 2014

New Interview Series on Evaluation Coming Soon!

I am lining up a series of researchers, practitioners, and leaders to share their thoughts on various types of evaluation. The first series of interviews will focus on how one evaluates innovation in teaching, learning, and/or research. So be sure to check back in a couple of weeks for the first post. I'll also be lining folks up for the blog at the 30th Annual Conference on Distance Teaching & Learning, being held August 12-14 in Madison, Wisconsin. I'm presenting at the conference as well.

More about the Conference!

Sponsored by the University of Wisconsin-Madison, this event is a great place to hear leading experts and share best practices with colleagues from around the world in the field of online education and training. Plus, Madison is a beautiful place to visit in the summer. If you’re not familiar with this event, and want to find out more, I encourage you to visit their website at www.uwex.edu/disted/conference. Hope to see you there.

Saturday, May 10, 2014

Opportunity for Graduate Students with AEA

I still draw from my experiences as a graduate student participating in large-scale technology program evaluations. I could not function effectively in my current administrative role if I did not know how to use various types of evaluations to make decisions. Please encourage your graduate students to take advantage of opportunities such as this one by the American Evaluation Association.
CALL FOR APPLICATIONS
AEA GRADUATE EDUCATION DIVERSITY INTERNSHIP PROGRAM (GEDI) 
DEADLINE: Friday, June 6, 2014 
The American Evaluation Association welcomes applications for its Graduate Education Diversity Internship Program that provides paid internship and training opportunities during the academic year. The GEDI program works to engage and support students from groups traditionally under-represented in the field of evaluation. The goals of the GEDI Program are to:
  • Expand the pool of graduate students of color and from other under-represented groups who have extended their research capacities to evaluation.
  • Stimulate evaluation thinking concerning under-represented communities and culturally responsive evaluation.
  • Deepen the evaluation profession's capacity to work in racially, ethnically and culturally diverse settings.
Interns may come from a variety of disciplines, including public health, education, political science, anthropology, psychology, sociology, social work, and the natural sciences. Their commonality is a strong background in research skills, an interest in extending their capacities to the field of evaluation, and a commitment to thinking deeply about culturally responsive evaluation practice.

The Internship: Building on the training content described below, the interns work the equivalent of approximately two days per week at an internship site near their home institutions from approximately September 1 to July 1. The interns may work on a single evaluation project or multiple projects at the site, but all internship work is focused on building skills and confidence in real-world evaluation practices. Interns receive a stipend of $8,000 in recognition of their internship work based on completion of the internship and satisfactory finalization of program requirements, including any deliverables due to the host agency, progress reports, and reflections on the internship experience.

Training and Networking Components: It is assumed that students come to the program with basic qualitative and quantitative research skills. The GEDI Program then works to extend those skills to evaluation through multiple activities:
  
Fall Seminar. A five-day intensive seminar, held in Claremont, California, provides an orientation that expands the student's knowledge and understanding of critical issues in evaluation, including thinking about building evaluation capacities to work across cultures and diverse groups. The interns complete a self-assessment in the Fall, clarifying their own goals during program participation.

AEA Annual Conference. Interns will spend a week at the American Evaluation Association annual conference. While there, they attend (a) pre-conference workshops selected to fill gaps in their knowledge and skills, (b) conference sessions exploring the breadth and depth of the field, and (c) multiple networking events to connect them with senior colleagues. The interns also conduct a small service-learning project in the form of an evaluation of one component of the conference.

Winter Seminar. A three-day seminar, held in January or February, provides the students with additional training, coaching on their evaluation projects, and panel discussions with evaluation practitioners working in a range of contexts.

Evaluation Project. Interns will have the opportunity to provide support to an agency's evaluation activities in close proximity to their graduate institution. Interns will provide three updates on their evaluation project activities as part of the internship program, describing and reflecting on the application of their evaluation knowledge to the actual project activities.

Monthly Webinars. The students gather each month for a two-hour webinar to check in on evaluation projects and site placements, add to existing skill-sets, and learn from invited guest speakers.

AEA/CDC Summer Evaluation Institute. The program ends with attendance at the Summer Evaluation Institute held in Atlanta each June. There, students once again connect and finalize project reporting, attend training workshops, and participate in a graduation ceremony.

Specific Support Mechanisms: Interns are supported by colleagues at school, at their site placements, and within the sponsoring association:

An Academic Advisor. The academic advisor at the Intern's home institution supports and coordinates coursework and other activities, while helping to integrate the internship program with the student's plan of study.

A Sponsoring Agency. Students generally are matched with sponsoring agencies near their graduate institution that provide the opportunity to perform evaluation activities compatible with students' research interests and skills.

Supervising Mentor. A colleague at the host site with evaluation experience acts as a guide and mentor throughout the program.

GEDI Program Leadership. GEDI Program Director and AEA President-Elect (2015) Dr. Stewart Donaldson is an experienced evaluator. Working with a cadre of colleagues, he, Co-Director Dr. Ashaki M. Jackson, and Program Liaison John LaVelle oversee the curriculum and site placements. Throughout the internship the leadership are available to guide, advise, and support the interns in achieving their professional goals and the goals of the program.

AEA Staff Support. AEA staff provides logistical support throughout the internship. Post-internship, they work to connect program graduates with opportunities for leadership, participation, and networking within the association.

Online Community. The GEDI cohort uses an online community space for checking in, turning in updates, asking questions, and informal networking.

Student Benefits: Interns receive support from advisors and mentors, quality training focused on evaluation, real-world work experience, registration waivers and guidance at two professional evaluation conferences, and multiple opportunities for professional networking. In recognition of the time involved in the program (approximately 2 days per week), each intern also receives a stipend and is reimbursed for major travel expenses related to the program (airfare and shared hotel specifically), but is responsible for travel incidentals (to and from home/airport, to/from hotels, meals not taken together, etc.).

Eligibility: We seek students who are not already enrolled in an evaluation program/specialization or pursuing an evaluation degree who:

  • Are enrolled in a masters or doctoral-level program in the United States and have completed the equivalent of one full year of graduate level coursework;
  • Are residing in the United States;
  • Have already been exposed to research methods and substantive issues in their field of expertise;
  • Demonstrate via written essays the relevance of evaluation training to their career plans and their commitment to culturally responsive practice;
  • Are eligible to work for pay in the United States outside of an academic environment (non-U.S. citizens will be asked to provide documentation of current eligibility); and
  • Have support from his/her academic advisor.
Criteria for Selection: The interns will be selected based on their completed applications, materials provided, and subsequent finalist interviews focusing on:
  • Their thinking around and commitment to culturally responsive evaluation practice;
  • The alignment between their skills, aspirations, locale, and internship site placement needs;
  • The quality of their academic, extracurricular, and personal experiences as preparation for GEDI; and
  • Their capacity to carry out and complete the program, including support from an academic advisor.
To apply: Download the GEDI Application and return all requested materials via email, as described in that document, on or before Friday, June 6, 2014. Please note that it may take a few weeks to compile the requested information, and thus we recommend that you begin as soon as possible before the deadline.

Questions: We recommend beginning by reviewing our Frequently Asked Questions (FAQ) page. Should you have further questions, please contact Program Liaison John LaVelle via email at gedi@eval.org.

More about the program: Go to the GEDI homepage

Sunday, April 13, 2014

Awarding Badges of Honor


In the fall of 2013, Clif Mims asked members via Google Plus about their uses of badges. I posted that I would be experimenting with badges in my upcoming graduate course. This blog post is the first installment of my reporting on the badge experience. I am inviting others to share their stories.
Image from Mozilla OpenBadges site
I'm teaching an instructional technology evaluation course in Moodle this spring. Among other things, this graduate education course is designed to help learners:
  • Generate a technology program evaluation proposal for stakeholders. 
  • Plan and manage technology program evaluations. 
  • Acquire and employ effective data collection instruments and techniques. 
  • Analyze and draw conclusions from evaluation data. 
  • Report the results of technology program evaluation.
My goal in implementing badges was to start to establish a means by which researchers at Lehigh would be able to view students' skills across several courses, in order to help match students with various technology evaluation or research opportunities. I chose four related skills for which I would award badges: formulating research questions; choosing and utilizing reliable, valid instrumentation; sampling techniques; and formative evaluation design. It was important to me that learners had ample opportunities to demonstrate mastery of key concepts and the application of these skills. I had hoped to use Mozilla's Open Badges but didn't find getting started as straightforward as I had thought.
With a busy schedule, I found it easier to slightly modify graphics to quickly design my own badges. In Moodle, I found that the automatic awarding of badges based on criteria didn't always work. We're still investigating why. In the future, I think I will opt for manual awarding of all four.
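For readers curious about what sits behind an Open Badge, here is a minimal, illustrative Python sketch of the kind of hosted assertion the Open Badges 1.x specification describes. The URLs, email address, badge identifier, and salt below are hypothetical, and in practice the issuing platform (Moodle or a dedicated badge issuer) generates this record for you.

```python
import hashlib
import json
import time

def make_assertion(email, badge_url, assertion_url, salt="s3cr3t"):
    """Sketch of a hosted Open Badges 1.x assertion (illustrative only).

    The recipient's email is salted and hashed so the assertion can be
    published without exposing the student's address."""
    identity = "sha256$" + hashlib.sha256((email + salt).encode("utf-8")).hexdigest()
    return {
        "uid": "research-questions-2014-001",      # hypothetical unique id
        "recipient": {"type": "email", "hashed": True,
                      "salt": salt, "identity": identity},
        "badge": badge_url,                        # URL of the BadgeClass JSON
        "verify": {"type": "hosted", "url": assertion_url},
        "issuedOn": int(time.time()),
    }

# Example with made-up URLs:
print(json.dumps(make_assertion(
    "student@example.edu",
    "https://example.edu/badges/research-questions.json",
    "https://example.edu/assertions/research-questions-2014-001.json",
), indent=2))
```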

The good news is that I was able to actually use the badge data when approached to recommend students for a software research study. Overall, I find badges worthwhile and plan to look into OpenBadges for my next iteration. The other important outcome is that the graduate students seemed genuinely interested in earning them and helping me implement them effectively.

I'd love to hear how others have implemented badges to help assemble project or research teams.

Sunday, March 9, 2014

Got Thoughts on Rapid Evaluation for our Fast-Paced World?

Cates, Brill, Patton, Reeves, Merriam, and ASTD have had the greatest influence on me as an evaluator. I'd like to add some other voices to our Lehigh graduate course discourse on formative, effectiveness, and impact evaluations. Anyone with any rapid evaluation tips to share?

Like Patton, I can certainly see why some rapid evaluations with real-time data are deemed sloppy. In your efforts to help leaders and practitioners make decisions  to improve teaching and learning, how have you avoided sloppy evaluation planning, data collection, and reporting? How do we create reports that people actually review with data they can use?


Monday, February 3, 2014

Let's Talk Tech Evaluation Planning

Technology implementations and strategic plans should account for evaluation from the beginning. But how many really do?

 Author: K.tanimoto
Do you think administrators, educators, and stakeholders are becoming more invested in  instructional technology evaluation?
If so, what's the evidence that these various groups understand the benefits? Do you have a story that shows that evaluation planning was not an afterthought and has been integrated into institutional culture? What about an account of what happens when it's not?

Let's make a convincing case here in the responses to this blog post that other leaders and practitioners can reference. What are the benefits to teaching, instruction, and learning? Share your own story, ideas, and/or research here.

Monday, January 20, 2014

INFOGRAPHIC – 4 Reasons Why Higher Ed Institutions Must Invest in SEO | EDUniverse

Search Engine Optimization (SEO) has become an important part of planning for and evaluating an e-learning implementation. Can you add to or expand upon these reasons?