Sunday, April 5, 2015

EdTech 505: Week 11 - Text Review

Chapters 1-9 of the Boulmetis and Dutwin text - Summary


Remainder of assignment:


Discussion Posts

#1: Reflection on Course Material
As a few others have mentioned, the Chapter 4 description of the EPD, as well as the various charts and diagrams embedded throughout the B&D textbook, have been the most valuable to me. I am a very visual learner and appreciate the brevity and organization of a good diagram or chart. It makes the relationships among the various components/terms more approachable and unambiguous. For example, I find myself referencing Exhibit 2.1: Evaluation Design (similar to Exhibit 4.5) quite often as I seek out the best way to "tease out" ideas for my own evaluation project. I also appreciated Exhibit 4.1's example of program objectives being tied to evaluation questions. This is one area I'm still trying to refine in my own project. There must be a real art to it! (A side note -- a suggestion for the next 505 offering is to have a discussion early on where students share their own project's program objective/evaluation question alignment with each other and practice getting it "just right". This seems like one of the most important steps in designing an evaluation plan, and the more minds the merrier!)
Appendix B is super-valuable in offering up the meat and potatoes of what we are looking to achieve with our own evaluation projects. Having as many report exemplars as possible helps give context to the various components/terms we've learned about throughout the textbook. It also helps me "learn backwards", to some degree -- figuring out why the author did what they did in each section. In this case, I was surprised how extensive the "recommendations" section was (almost as long as the data review itself), and I wonder if mine will include an equally robust discussion.
I have also really enjoyed the storyline of the chapter introductions -- I find I learn a lot from case studies and real-life examples told through narrative. In this case, it makes the evaluation process more real -- not just stuck in theory and definitions. To be honest, though, I think the storyline could've been elaborated even further. Throughout the chapters, the authors could've discussed in detail what the participants did with their data and the discussions that followed, so we could see the project through to the end, up close at every level.

Reply to a peer: Yes, this is a challenging concept for me too (evaluation vs. research) -- the "differences" appear to overlap so much. It's tricky since I'm struggling with objectivity (not in the "personal bias" sense but in the "experiment design" sense) as I create my evaluation plan. (I know, I know, it's not research -- but still, I'm not accustomed to interpreting "objective" data in the social sciences. It's hard for me to grasp that there's no control group and no comparisons being made to one.)
Anyway, the biggest take-away for me is that research and evaluation differ in their purpose and audience. Research tests theories of one's own devising and publishes the findings for a wide, academic audience. Evaluation plans activities that test the merit of a specific group's program against its established objectives, with the audience limited to the program stakeholders requesting it. I don't think that "trying to improve the program" makes it research. I think that's just part of formative evaluation and, by extension, the program planning cycle. When you write your report you'll have to give "recommendations" based on your data, so that's where the improvements come into play. You'll be "future-looking", but not speculative: just reporting data results and indicating where there's wiggle room to better meet objectives the next time the program is carried out. I could be way off base, but that's how it currently sits in my head.

Current State of Final Project
