Sunday, February 22, 2015

EdTech 505: Week 6 - EPD

Summary of chapter 4 (Starting Point: Evaluator's Program Description) in the Boulmetis & Dutwin course text

Please view this PowToon I created.

Discussion posts: 
#1) Evaluation report update

My evaluation report is going to focus on a 4th grade 1:1 Chromebook initiative in our school district. I have decided to narrow down the goals for this program so that there is more coherence and focus when it comes to identifying evaluation questions and, subsequently, survey questions for the data collection process. Therefore, I will intentionally limit the scope of my evaluation. I understand now that surveys of important stakeholders will be the most important, if not the only, data source for my program evaluation, so I plan to create three different surveys for the principal participants in the program: 4th grade students, 4th grade teachers, and the librarian/media specialist. These won't be pre- and post- data so much as formative reflection. I hope that is acceptable. I also wonder about gathering objective, quantitative measures, and I look forward to seeing how I can incorporate those into my survey design.

It was helpful to work through the GAP analysis exercise last week, which helped me establish a more clearly defined context for this program. This was necessary since I came to the program when it was already in the early stages of implementation (which is when I began working in the district and first learned about the initiative). It's unfortunate that, as evaluator, I was not part of the Needs Assessment process, but discussions I've had with the teachers, the district superintendent, and our internal grant committee have shed light on the process by which this program came to be. They all identified the goals, and the teachers themselves discussed some of the program planning that took place. I am concerned, however, that there simply was not much program planning beyond providing Chromebooks for staff and students. Not only that, but there was no mention of evaluation at all in the early stages of program planning, which certainly surprised me. Our EPD-like exercise, although not as formal as those in our textbook, was a fruitful and necessary step to bring me up to speed and to help us identify the gaps that exist in the program's definition. Until the program activities are clearly defined, it's fruitless to align objectives to evaluation measures.

Luckily, there have been no major changes or unexpected problems identified. Everyone is still on board, and the district superintendent and the internal grant committee are eager to see the results. The results of this evaluation will surely inform future pilot programs, and they will also be helpful as we extend the Chromebook program into its second year.

Here are some of my lingering questions/concerns:
1. The “operation” of the program, in many ways, is simply the teachers carrying out all their lessons. Perhaps this program is unique in that it isn’t really a stand-alone event; it’s a full-scale integration. This part may be challenging because it’s hard to determine which procedures/materials are unique to the “program” and which are just part of general teaching practice.
2. Should I use the entire school year as the scope, as Sept. is the beginning point of the program?
3. Can I really get enough data from surveys alone? I had wanted to include anecdotes and observations, but I completely understand why they are not advisable (or allowed, for this assignment).
4. How many surveys are necessary/appropriate? Is it okay not to do a pre- and post- test?
5. My report is formative, so I don’t expect any of the objectives to be “met” right off the bat; rather, I expect this evaluation process to help improve the program so that the objectives ARE more effectively/efficiently met. (Is this okay?) That being said, it will be difficult to evaluate impact, which seems to be more of a summative assessment feature.

#2) AEA- There have been questions about evaluators - Who are they? What are their qualifications? What’s the training? Is there certification? Etc. Go to the American Evaluation Association Web site - http://www.eval.org/- and explore the site. “The American Evaluation Association is an international professional association of evaluators devoted to the application and exploration of program evaluation, personnel evaluation, technology, and many other forms of evaluation.” What did you find of interest? What's pertinent to EDTECH 505? Anything surprising? General impressions? Etc.

I was impressed with the social responsibility that the AEA takes on in its mission, goals, and values. The statements about "ethically defensible, culturally responsive evaluation," "under-represented groups," the "global and international evaluation community," "inclusiveness and diversity," and "responsive, transparent, and socially responsible association operations" all point to the proactive measures this association takes to engage in community-minded evaluation methods. It was interesting to note that, although it is an American association, over 60 foreign countries are represented in the membership. Members come from various professions and with various interests as well. In our increasingly globalized society, the more perspectives the better! I was motivated by the End Goal, "Evaluators have the skills and knowledge to be effective, culturally competent, contextually sensitive, and ethical professionals," and the subgoal, "Evaluators use a multicultural lens to engage diverse communities in evaluation effectively and with respect, to promote cultural, geographic, and economic inclusiveness, social justice, and equality." I'm curious how this looks in practice, though! These are ambitious goals, and ones that I would certainly be motivated to be a part of. How does an evaluator or evaluation process begin to tackle these social causes? At the end of the day, the evaluation must be true to the program objectives, right? I would be interested in learning more about how AEA evaluators have balanced these aims.
As others have mentioned, the Guiding Principles are helpful indicators of what makes a good evaluator. These are important characteristics for us all to keep in mind as we delve into our own projects.  Although we may not solve the world's problems, we can be systematic, competent, honest, respectful, and responsible for the public welfare as we carry out our task.  The "Learn" section, with its Coffee Breaks webinars and virtual conference, looks like a great hub for gathering more information from the field.
I was interested in the results of the search I conducted for evaluators in my state. I was surprised by the number of results! Education, non-profit organizations, and social work seem to be the main client areas for these firms. (In fact, I had never thought of evaluators as belonging to a firm before, but I suppose it makes sense.) Nevertheless, there aren't many job openings in Wisconsin right now; the only one is for a Program Evaluation professor at UW-Madison. I wonder if the supply outweighs the demand, or, in general, how often AEA evaluators are called upon. It's refreshing to look beyond my school windows at these things!
