Sunday, February 15, 2015

EdTech 505: Week 5 - Gap Analysis

Gap Analysis

Assignment for Chapter 3 in B&D text:  Apply Gap Analysis to the program/project you detailed in exercises in previous assignments. Explain Figure 3.1 Program Cycle (p. 51) in terms of the program that you have chosen to evaluate. Locate an Internet site that has something to do with “program cycle,” preferably something to do with education or your particular area/business. Use your discretion as to just how connected it is to “program cycle.” 
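(A side note for my own thinking: at its core, gap analysis just compares the current state of a program against its desired state, measure by measure, and the difference is the "gap." Here's a rough sketch in Python of how I picture it, using made-up measures and numbers for a hypothetical 1:1 Chromebook rollout -- nothing below comes from the B&D text.)

# Gap analysis sketch: desired state minus current state, per measure.
# All measure names and numbers here are hypothetical examples.

desired = {
    "students_with_device_pct": 100,
    "teachers_trained_pct": 90,
    "daily_classroom_use_pct": 75,
}

current = {
    "students_with_device_pct": 82,
    "teachers_trained_pct": 55,
    "daily_classroom_use_pct": 40,
}

# The "gap" for each measure is simply target minus actual;
# the biggest gaps hint at where planning and evaluation should focus.
for measure, target in desired.items():
    gap = target - current[measure]
    print(f"{measure}: current {current[measure]}%, target {target}%, gap {gap}%")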

[Figure 3.1: Program Cycle (Boulmetis & Dutwin, p. 51)]
Discussion Board Assignment: Discuss the graphic above (from p. 51) and/or Table 3.1 - Program Planning Cycle and Evaluation Activities (pp. 62-3). Pros/cons. Something you disagree with? Where does "monitoring" occur? Anything missing? Bring in any prior experience you might have with such a planning cycle. 

The Program Planning Cycle suggested in this diagram is an ideal model of evaluation that is a bit challenging to replicate in the real world (at least in the education world)! The foresight and hindsight required at each part of the process are so, so important, but (in my experience) so hard to coordinate and implement! Unless there is a specialist team member focused on best-practice evaluation, it's a challenge to get already worked-to-the-max staff members to organize and execute a strategic, multi-step, integrated evaluation of a new initiative. Perhaps larger school districts are more familiar with this process and address program evaluation in a more formulaic way, but in my teaching experience (mostly in smaller schools), most decision-making is left to subjective narrative, democracy of opinion, and cost analysis. That is not to say that I don't see the value in this cycle! I just have never seen it in action from a program perspective.

A big "pro" of this cycle is that it reflects an iterative process.  Our work is never done, and even after the seemingly "final" summative evaluation, there is room for reflection and reconsideration of program alignment to an organization's philosophy and goals (which can also be fluid). Changes can be made before the next implementation. The "wheel" of the planning cycle doesn't stay static -- it's in forward motion as it turns.  Another "pro" is the presence of evaluation procedures and measures in the stages of program planning, implementation and formative evaluation, and summative evaluation. It's integrated throughout, making it integral to every step (except, perhaps, in the "Needs Assessment" when we're concerned with defining goals).

From my understanding, monitoring takes place only in the implementation and formative evaluation stage, because it has a somewhat narrower scope and is less concerned with objective alignment and impact.

I believe this model will fit my proposed evaluation project (1:1 Chromebooks) except for the fact that I came on board just after the program planning phase, and I know no evaluation plan was involved at that stage. I also do not believe a formal Needs Assessment was ever carried out. Hopefully this will not be too big an obstacle! Also, the "summative evaluation" piece of the program cycle would most likely not come until after several years of running and formatively evaluating the program. So while I can refer back to program planning and "look forward" to summative evaluation, most of my action as evaluator right now will need to take place in the implementation and formative evaluation stage.

Discussion #2: Monitoring vs. formative evaluation
My response to a peer:
Hi Matt, after reading the chapter I, too, was a bit confused by the distinction between monitoring and formative evaluation. Trying to separate the two as completely different processes seemed a bit pedantic at first. I re-read the section a few times, and what I concluded is that monitoring is an AID to evaluation, but it can often go off on its own tangents and serve its own purposes. "Although some monitoring functions may be useful to an evaluator, much of the data they collect may be useful only to the monitors" (Boulmetis & Dutwin, 2011, p. 57). Monitoring a program may involve more detailed minutiae than formative evaluation needs (e.g., demographics of users, wording of documents, user discussions/testimonials, etc.). Some of this data could be useful to an evaluator, but some may be just for record keeping or compliance, or to see whether the program is even progressing according to its logistical plan. In this way, monitoring tasks have a more short-term, narrow focus. Unlike formative evaluation, monitoring doesn't seem concerned with impact or the alignment of objectives to outcomes. I am not sure if this is correct at all, and it's definitely still a bit hazy, but that's where these ideas are sticking in my brain right now!
