Thursday, April 30, 2015

EdTech 505: Week 15 - Final Evaluation Report

Yes, I'm finally in the home stretch!

Please view my final Program Evaluation Project by clicking here.

Discussion Board Reflection
At the beginning of the semester, I had no experience with evaluation in this format, although I looked forward to this course because I knew it would be a pivotal skill in my new (this year) role as Technology Integration Specialist. There were a number of pilot programs rolled out throughout the district with different devices, and I'm still trying to wrap my head around them all! Researching various tools, having "EPD"-style conversations with colleagues, and utilizing concrete data collection techniques to help determine a program's success or merit will certainly not end here for me! It has been a very, very busy semester and a very, very busy process all in all.

Sunday, April 26, 2015

EdTech 505: Week 14

This week we are continuing to collaborate with our small groups (asking questions, etc.) as we gather/analyze our data and begin to write our report.


Discussion Board Posts

Three crucial things to be included in an evaluation:
1. Regular, brief meetings about progress and action-points/deliverables -- ongoing communication throughout process
2. Well-crafted and varied data collection instruments and, similarly, transparent and easy-to-interpret results
3. Suggestions of areas for improvement based specifically on data results that are unique to our program (not just canned, generic responses)

One critical job skill: Organization (of people, resources, and time)

I would set up the evaluation to include:
- thoughts/ideas from all stakeholders at the very beginning (cast the net wide)
- an evaluation team internal to the company that helps work directly with the evaluator (but is a voice/go-between of the relevant stakeholders)
- early planning stages to include the evaluator
- the development of clear objectives tied to clear activities tied to clear evaluation measures/questions
- many avenues and formats for data collection
- a lot of time spent on careful data analysis
- a well-crafted report with a lot of visuals to make the relationships/patterns among the data very clear
- several formative evaluation processes before a summative evaluation
- transparency and communication as much as possible to encourage trust and a spirit of joint ownership

Reply to a peer:
Yes, gap analysis was also a big takeaway for me.  Taking stock of the present and the future, and identifying concrete steps to bridge the two, seems like a straightforward way to design a program with the "end" in mind.  The vision of the future would help build the backbone for the program objectives, which would then be aligned with evaluation questions/tasks. Being detail-oriented as an evaluator would help the program staff ensure none of these steps were left out of the process. It's also helpful that the evaluator is focused exclusively on these kinds of details instead of a bunch of other tasks/duties within the organization (which prevent one from seeing the "big picture").

Sunday, April 19, 2015

EdTech 505: Week 13 - Square Wheels

Discussion Board

Task: Generate as many thoughts as you can about this illustration and your reactions, relative to program evaluation.

Who/what does the person in front represent?
This is the project leader, who often ends up pulling more weight and doing more legwork than offering direction.  In this image the leader is barely looking beyond his/her own feet. They may not be seeing other possibilities on the horizon. There is not a lot of future-thinking because this person is arduously moving and pulling the program along just one little step at a time.
Who/what do the people in back represent?    
These are the project facilitators and teammates.  They, too, are working hard to get the job done.  They are even working together, making sure their steps are in sync and that no one falls behind.  Nevertheless, they can't see up ahead because of the size and stature of the program (the wagon) itself. In some ways, they feel like they are just following blindly and according to the directives from up ahead.  There is no time or opportunity to pause, catch one's breath, reflect on the process, or change course.
What does the body of the wagon represent?
This is the program.  It's a big task and has a lot of parts!  It requires a lot of coordination to implement and keep it running.  Nevertheless, the project stakeholders do whatever it takes to keep it moving in the only way that they know how.
What do the square wheels represent?
This is the status quo way of doing things. (It's the result of the first iteration of the program.)
What do the round wheels represent?
This is the potential for improvement.  It represents the collection of opportunities and resources to create a better program. This includes designated evaluation personnel, planning time, collaborative discussions, testing measures, pre-existing data, regular review/reflection of objectives, participant feedback, etc.  This potential always exists and is a part of every program.  It's just that sometimes we overlook this potential or don't take the effort to tap into it to improve our program delivery.
What is the overall vision or interpretation of the illustration?
Typically we have the capacity and tools to make our programs more efficient and effective, but we aren't utilizing them. Instead, we are carrying this "potential" around and carrying on with programs that are bulky, clunky, inefficient, and barely effective. If we recognized the importance of taking stock of our team's skills and opportunities for growth we could not only reduce the "weight" of our program, but also coordinate efforts to get things moving along more efficiently and effectively.  For example, instead of just sitting on data and shuffling it around year to year, we could comb through it and help it grow legs to get us from point A to B in a better way.
(Response to a peer:) I hadn't thought about the role of pride in this equation.  Interesting viewpoint.  In my case I haven't seen pride so much as laziness and comfort.  We know how to do something, and it keeps going forward somehow or another, so why stop to see if it can "go" better?  Some leaders or staff members might say if it ain't broke, why (look for reasons and ways to) fix it?  If they could press the pause button on life and fast forward to a better version of their program, then they would see the rationale.  But it's hard when you are in the present just pulling or pushing forward.

Saturday, April 18, 2015

EdTech 523: Final Discussion

Discussion Post Directions: Each subgroup in the discussion forum is named after a TV show. Click on the provided link of your choice and post why you feel that synchronous online learning is like that TV show, using examples from the readings and prior learning.

My Contribution: I, too, believe that Synchronous Learning is a lot like the show The Voice, and for many of the same reasons that have already been expressed. The element of coaching is a hallmark of this program. Each coach's team is made up of a diverse group of singers. This is similar to the diverse learning styles and interests in an online class. The coaches, professional (and famous) singers themselves, start with high expectations for their "students" and guide them throughout the season to be the best performers they can be. This is similar to the synchronous instructor. "Instruction encompasses any of the kinds of learning that happen when faculty members, knowledge experts, or facilitators meet with learners, usually in a planned manner in a specific online venue, to guide them through the achievement of learning objectives" (Finkelstein, 2006, p. 3). On The Voice, as in the synchronous online classroom, coaches make suggestions and offer additional resources (e.g. fellow celebrity singers) to help the contestants/students improve their craft. "Support is a crucial element for retaining and motivating learners, whether it is provided by just-in-time assistance from a peer, instructor, tutor, advisor, or librarian" (Finkelstein, p. 4). Feedback is prompt and bi-directional.

Another important similarity is the process of blind auditions, where the would-be coaches listen to the singers without the benefit of face-to-face communication. This is similar to synchronous learning in that the teacher does not share a physical presence with her students. Being a good listener is of utmost importance. Likewise, for the student, expressing oneself clearly and impressively is of paramount importance; it is all a teacher and classmates have to "go by". Developing communication skills that don't rely on body language, eye contact, real-time discussion, etc. is similar to what The Voice contestants do, performing with only the quality of their sound to represent themselves. The coaches (like the online instructors) are assessing the singers'/students' demonstration of "real-time skills and analytical thinking" (Finkelstein, p. 6) in every performance.

During The Voice season, there are various "rounds" that involve both competition (knock-out rounds) as well as teamwork (duets). This is similar to the kinds of activities that might be incorporated into a synchronous learning class. (However, these would typically err more on the side of cooperation and collaboration!) Each week is a new song, a new task, a new performance. The class, like the show, is dynamic and always looking to stretch the singer/learner's "range" or style. In both cases the audience is real (TV audience and Web 2.0) and exists in real time. "Real-time learning environments invite active learning" (Finkelstein, p. 22). In The Voice, as in good synchronous classrooms, activities are meaningful and grounded in real-life tasks. "The presence of a live instructor, combined with the use of the human voice and a rich set of facilitation and collaboration possibilities, opens up a new world..." (Finkelstein, p. 7).

Tuesday, April 14, 2015

EdTech 505: Week 12 - Critique of an Evaluation Report

Assignment: Critique an Evaluation Report ("The Maine Experience")


Discussion Posts:

#1: Program Evaluator Job Ads
Discuss what you find, or don't find, in these ads. What do you think of the jobs as described? How do you stack up to the qualifications and requirements? Think you could make "program evaluation" a PT or FT business? Other thoughts on the job ads?

These jobs certainly do reflect the skills we've overviewed and practiced in this course. Their descriptions include most of the components we included in our Response to RFP (meetings with stakeholders, designing various eval methods, traveling and overseeing the administration of the evaluations, collecting and analyzing data, offering a report and making suggestions if requested).  At first glance, they don't seem TOO far out of my league after working through this course (and from my previous experience & educational coursework).  I know a job poster is going to err on the side of the VERY specific! However, there are some things that made me feel a little ill-prepared to take on one of these positions.

First of all, many jobs ask the applicant to specify their evaluation skills (e.g. "experience employing social science empirical research methods, theories and analytical approaches" or "experience using a mixed-method approach").  Outside of learning the theory and ideal techniques of program evaluation and completing one formal evaluation, I simply don't have these kinds of extensive and varied experiences! In addition, very few of the jobs are seeking teachers or educational technologists.  A large number of positions also request expertise in statistical/data analysis, including special software packages.  This is one area that is definitely still quite out of my league!  The words "metrics", "audits" and "IT" come up quite often. I would have a big learning curve with those aspects.  Many are also looking for people with experience in project management, which I just don't possess. I haven't pursued any administration coursework or experiences in my career so far. It makes sense to me that many request experience with research or grant writing.  (I do have some of that.)

Overall, it seems that the ideal candidate being sought is one who not only has experience in the specific field (healthcare, social services, etc.), but also has experience being on the other side of the table (a "leadership" stakeholder -- instructional designer, grant-writer, program manager, etc.).  They also want an excellent writer (many request a sample of one's work), mathematician, designer, coordinator, etc.  These jobs seem pretty intense! I would've thought they would pay more than they do.  (At least in the case of the ones that post their salary...)  A lot of work is involved and a lot of expertise is expected. (Where are the entry-level positions?!)  For now, I'm content with using program evaluation skills within the context of my current teaching career.

#2: Discussion of end of book

Yes, the more resources (and exemplars) the merrier.  One big difference, I feel, between Appendix B's example report and the one we're required to do for this course is that it gives greater treatment to "recommendations" than we are required to provide in our own report's discussion. Appendix B also reports results according to each program objective; however, I don't believe that is expected of us in this course's final report (although, if it makes sense, I can see how that would help organize and frame the discussion of data).  Appendix B's report also includes a lot of data from focus groups and interviews, which I understand we were discouraged from using in our own report for this course.

Chapter 10 was a good pep talk for evaluators looking to break into the professional field. Being proactive in the search for work seems like a daunting task!  At least it ensures you are evaluating something you're interested and experienced in.  The discussion of "emergency resources" was interesting, as it got me to consider the ethical dilemma of evaluating -- which is more important, the relationship with the organization or the evaluation results? Definitely a tricky scenario!  The discussion of contract negotiation was definitely something I'd never experienced before. Program evaluation seems quite entrepreneurial in nature. A lot of self-direction, seeking out clients, and building a reputation is involved -- not to mention the evaluation processes themselves!  The first few years must be tricky.

Saturday, April 11, 2015

EdTech 523: Facilitating Active Engagement

Discussion Board Directions: "Cranium Check"
Although we may have profile pictures of students, we rarely know what they are thinking. Since we cannot see inside each person’s head (that would be scary), we need to understand that not every student is actively engaged within class, even if it appears that they are. Since we cannot interpret body language, tone, voice, etc., we must rely on other techniques. In the discussion forum this week, first select one person from the group pictured here. Identify the person by number and then create a scenario about the person. (Get creative - create an entire story or background of the person!) In other words, describe what is going on in the person’s head and correlate it to the way you perceive their engagement in the discussion pictured (the group is discussing whether or not online learning is superior to face-to-face instruction). Second, respond to one person’s scenario as the ‘instructor,’ using techniques we have learned so far (or ones you have found effective) to facilitate participation. Lastly, select a completely different discussion thread and contribute to it (i.e. play devil’s advocate, challenge the ‘instructor,’ etc.) Be sure to check back on your ‘instructor’ post as you may be challenged to respond to a ‘student!’

My Contribution to Discussion: Muriela the Go-Getter-in-Waiting... "Geez, it's tricky sharing ideas about instructional techniques when most of you guys are being stubborn and argumentative or just being grumpy nay-sayers who don't want to rock the boat or embrace change. I know I'm new to this profession, and most of you "seasoned veterans" think you have it all figured out, but I personally think there's more than one way to skin a cat here. I have a lot of ideas that would align with both face-to-face and online learning, but being a "newbie", I feel no one wants to listen to my rookie insights. It's hard to permeate this group. I sit in the front row, smile as much as possible, and try to patiently internalize all their "battles". They're nice people, but it's hard to make progress with them. They spend so much time arguing a theory and never just put something into action. I think I'll just keep my mouth shut, and just react to others' ideas for now. Being a newbie, I don't want to be ostracized! I don't like confrontation, and I also don't want my opinions to be shot down.

Alright, Pete just proposed an action plan. Hooray for Pete! We're getting somewhere! I'm gonna clap to a) celebrate, b) remind everyone that I'm still here, and c) get my blood flowing (instead of boiling!). If no one else jumps on his bandwagon, I may just have to speak up after all."

My Reply to a Peer #1: Mr. Grey is thinking that he's just not yet ready to share out about online learning. He hasn't tried it and isn't quite sure yet if it's "for him", but he's got an open mind. At this point, though, he's not ready to speak about experiences or defend a position. He just wants to gather more information. Maybe he'd prefer using this time learning a new tool or skill instead of all this group talk. Maybe he's best off, for now, taking part in the group by asking questions or just taking notes. After all, participation doesn't always have to equal generative contribution.

My Reply to a Peer #2:

Your grin is wise (mischievous?), like the Cheshire cat's. You clearly have something helpful to add to rescue us from our polarizing debate. Throw us a life ring - please! It's not always that the loudest voice wins. Sometimes the most sensitive, clairvoyant, and reasoned voice brings resolution. I know from past conversations that you have a very balanced perspective on this topic. Have you ever heard of "The Tyranny of the OR vs. the Genius of the AND"? This article, How to Avoid Tyrannical Decision-Making, helps articulate that idea. In fact, Jim Collins, who coined that term, says in his online conversation:

"Having one side of this dichotomy going without the other doesn't work. In a number of professions, such as law and medicine, in academia, and in industries such as healthcare and the utilities, people have traditionally had a very strong core ideology, a strong sense of what they are doing. But they didn't do the other side well, the side of stimulation, progress, and change. Then people began to see that the world is changing. "We have got to be more efficient and effective," they said. "We have got to think about things like markets and segmentation and costs and cycle times." And that's all true.

But they get caught up in what we call "The Tyranny of the Or," the belief that you cannot live with two seemingly contradictory ideas at the same time, that you can have change or stability, you can be conservative or bold, you can have low costs or high quality -- but never both. Our visionary companies all operate in what we call "The Genius of the And," the ferocious insistence that they can and must have both at once."

How do you think that idea can be applied here? Are we at the point where we MUST have both ideas (face-to-face and online learning) at once? Why do you think this has become a "tyranny of or"-style debate in the first place? I can see you making a list over there -- perhaps it's on topic, perhaps it's not. That is further evidence that you are one of the most organized members of this group... so help us get our thoughts in order! I challenge you to lead your colleagues into an exercise where every position statement includes an "and" clause. If that doesn't come naturally, help the team by writing the ideas on the board and assist us in drawing those "AND" lines. Clearly this group needs some guidance, and it may as well come from you!

[Peer Reply to Me: Well, Professor Fuhry. I don't want to do that because then they will think that I am a know-it-all. Plus, I don't like speaking in front of my colleagues and if I stand up and write something I will be the center of attention. Do you not see me sitting outside the circle? Sorry, I will do what I have to to be successful as long as it doesn't include me saying much aloud. Deal?]

My Subsequent Response: Alright, I hear you're not up for that kind of attention and leadership. Hopefully that confidence and voice will come in time. Perhaps you could work more "behind the scenes" gathering background information to support both kinds of teaching methodologies. That might help us organize our thoughts and keep us grounded in "best practice" that goes beyond everyone's personal opinions. Find a colleague or two to work with and share your findings with the group before the next time we meet. Regarding your comment above, I'm afraid your personal success is superfluous to this discussion. We are a team and we need to work together, helping each other and contributing according to our strengths. That will make US successful. Please find some way to take part.

Sunday, April 5, 2015

EdTech 505: Week 11 - Text Review

Chapters 1-9 Boulmetis and Dutwin text - Summary


Remainder of assignment:


Discussion Posts

#1: Reflection on Course Material
As a few others have mentioned, the Chapter 4 description of the EPD as well as the various charts and diagrams embedded throughout the B&D textbook have been the most valuable to me.  I am a very visual learner, and appreciate the brevity and organization of a good diagram or chart.  It makes the relationships among the various components/terms more approachable and unambiguous. For example, I find myself referencing Exhibit 2.1: Evaluation Design (similar to Exhibit 4.5) quite often as I seek out the best way to "tease out" ideas for my own evaluation project.  I also appreciated Exhibit 4.1's example of program objectives being tied to evaluation questions.  This is one area I'm still trying to refine in my own project.  There must be a real art to it!  (A side note -- a suggestion for the next 505 offering is to have a discussion early on where students share their own project's program objective/eval question alignment with each other and practice getting it "just right".  This seems like one of the most important steps in designing an evaluation plan, and the more minds the merrier!)
Appendix B is super-valuable in offering up the meat and potatoes of what we are looking to achieve with our own evaluation projects. Having as many report exemplars as possible helps give context to the various components/terms we've learned about throughout the textbook.  It also helps me "learn backwards", to some degree -- figuring out why the authors did what they did in each section. In this case, I was surprised how extensive the "recommendations" section was (almost as long as the data review itself), and I wonder if my own discussion will be equally robust.
I have also really enjoyed the storyline of the chapter introductions -- I find I learn a lot from case studies and real-life examples told through narrative.  In this case, it makes the evaluation process more real -- not just stuck in theory and definitions.  To be honest, though, I think the storyline could've been elaborated even further. Throughout the chapters, the authors could've discussed in detail what the participants did with their data and subsequent discussions so we could see the project through to the end, examined at every level.

Reply to a peer: Yes, this is a challenging concept for me too (Evaluation vs. Research) -- the "differences" appear to overlap so much.  It's tricky since I'm struggling with the objectivity (not in the "personal bias" sense but in the "experiment design" sense) as I create my evaluation plan. (I know, I know, it's not research -- but still, I'm not accustomed to interpreting "objective" data in the social sciences. It's hard for me to grasp that there's no control group and no comparisons being made to that control.)
Anyway, the biggest take-away for me is that research and evaluation are different in their purpose and audience. Research tests theories of one's own devising and publishes findings for a wide, academic audience.  Evaluation plans activities that test the merit of a group's specific program according to its established objectives, with the audience being only the program stakeholders requesting it. I don't think that "trying to improve the program" means it's research.  I think that's just part of formative evaluation and, by extension, the program planning cycle.  When you write your report you'll have to give "recommendations" based on your data, so that's where the improvements come into play. You'll be "future-looking", but not speculatively -- just reporting data results and indicating where there's wiggle room for better meeting the objectives the next time the program is carried out.  I could be way off base, but that's how it currently sits in my head.

Current State of Final Project

EdTech 523: Assessing Discussions

Assessing Discussions in the Online Class

Discussion Directions:
For this week’s activity, please refer to Analytic vs. Holistic Rubrics in Module 4. Summarize the main differences between analytic and holistic rubrics. Decide which type of rubric you would prefer to use for an online discussion and explain your reasoning. Share a rubric that someone in your group either created or found online and determine whether it is an analytic or holistic rubric. In your post, describe the evidence for your determination.
Our Group Discussion Post:




My reply to a peer: They may be faster to implement, but I know a lot of training and practice is required of a teacher/evaluator to properly/objectively utilize holistic rubrics to assess free-response standardized tests. Their scoring practices have to be "calibrated" to ensure uniformity as well. It's quite the process! I had to go through a bit of a process to become a Cambridge Primary Science exam scorer (which I never ended up doing), and I know the Alberta exams require a very elaborate training program for their scorers too. In my opinion, the analytic rubric makes things a little easier on the teacher -- and it's easier to be consistent. I hear what you're saying about the feedback, however. I think if the holistic rubric is worded right, it has many analytic elements built into its proficiency category descriptors. Plus, there's always the option to leave personalized comments.