Sunday, March 22, 2015

EdTech 505: Week 10 - Evaluation vs. Research

Is it Evaluation or Is It Research?

Assignment: complete two-part exercise on p. 184.




Discussion #1: Based on this week's readings, what are some ways you could choose that sample? What experiences have you had in choosing samples? What are some things to watch out for and/or avoid in selecting samples?
To choose a sample group for a survey with the most generalizable results, I would suggest one of three approaches: sampling all participants (as I'm hoping to do in my project, since the population is relatively small and comprises a purposive sample), picking names randomly (either simply or systematically), or using stratified random sampling, in which you take into account the proportion of specific groups (by gender, ethnicity, age, etc.) within the whole population so that the final sample reflects them. A judgment sample could also be useful (although it may be biased), since it is a nonprobability method of simply choosing the most available subject group.
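Just to make the mechanics of stratified sampling concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the roster, the "gender" stratum, and the 60/40 split are invented for illustration, not drawn from any real program.

```python
import random
from collections import defaultdict

def stratified_sample(population, stratum_key, sample_size, seed=42):
    """Draw a sample whose strata mirror their share of the population."""
    random.seed(seed)
    strata = defaultdict(list)
    for person in population:
        strata[person[stratum_key]].append(person)

    sample = []
    for members in strata.values():
        # Allocate slots in proportion to the stratum's share of the population.
        n = round(len(members) / len(population) * sample_size)
        sample.extend(random.sample(members, min(n, len(members))))
    return sample

# Hypothetical population: 60 women and 40 men.
roster = ([{"id": i, "gender": "F"} for i in range(60)] +
          [{"id": i, "gender": "M"} for i in range(60, 100)])
picked = stratified_sample(roster, "gender", sample_size=10)
# Yields ~6 women and ~4 men, mirroring the 60/40 split.
```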
I do not have experience in choosing samples, although this discussion does ring a bell in the shadowy portions of my mind from a (required) undergraduate course in Political Science.  I recall there being much debate about the validity and "generalizability" (I'm doubting that's a word!) of various polling techniques. This discussion also makes me think a little bit about focus groups in marketing, although again I have no experience in creating them (and I realize they serve a much different purpose).  
In my opinion, from a non-experienced perspective, the stratified random sampling method seems the most "fair".  However, I can certainly see the challenge in coming up with the characteristics for each sub-group (and determining whether they are indeed important to the evaluation question).  I can also see how it could be very challenging, without access to reports or prior information, to determine the characteristics of the entire population in the first place!
I would think when establishing a sample group one of the most important things to ask yourself is "who might I be excluding or underrepresenting by using this selection method?"
Discussion #2: Discuss "surveys" as used in program evaluations. Click here for a survey in SurveyMonkey.
Brief overview/assessment of survey:  As others have said, it's strange that there is not a descriptive title, explanation, or any kind of context to this survey at all. Participants should know what they are reporting about!  They might also like a reminder that their responses are anonymous and are used in aggregate to improve the program (or whatever the ultimate purpose is).
With the questions that use the Likert scale, as a survey participant I know I prefer to have a "neutral" option, but that's totally up to the survey designer. The jump between "good" and "poor" is pretty dramatic! I was also very confused by the word "rate", as in "how would you rate your instructor?".  What exactly am I rating? Something about their personality? Behavior? Instructional methods? I think these questions need more detail. The same goes for the program: perhaps it could be rephrased as "did the program activities meet your expectations?" or "would you recommend this program to others?" Another issue of wording I have is "acceptable", as in "was the program fee acceptable".  Again, that word needs more definition. Perhaps the question could read "was the program fee fair for the time and materials involved in the program?" or "was the fee what you would expect for this kind of program?" Any ambiguous, value-laden word needs to be either better defined or replaced with a more neutral and precise option.
I agree with others who've posted that the mixture of multiple-choice and free-text questions is important and well balanced here. Respondents value having a place to insert their opinions and suggestions.  That being said, the collector of these responses has their work cut out for them in reading through and reporting on the five areas of potential additional comments!
Paper vs. online: Paper is best for quick collection with a higher response rate, when participants are gathered together and the information is fresh in their minds. Online is best for a larger population that is not meeting synchronously; the results are easier to aggregate and analyze (with computers), as in the sketch below. However, making sure participants have access to the survey can be challenging, and reminding them to complete it can be more of a challenge with an online survey.
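Since I mentioned that computers make online results easier to aggregate, here's a minimal sketch of what that could look like, assuming a hypothetical CSV export of responses (the file name and "instructor_rating" column are invented for illustration, not SurveyMonkey's actual export format):

```python
import csv
from collections import Counter

# Hypothetical export of survey responses; file/column names are invented.
with open("survey_export.csv", newline="") as f:
    rows = list(csv.DictReader(f))

# Tally the Likert choices for one question and report percentages.
counts = Counter(row["instructor_rating"] for row in rows)
total = sum(counts.values())
for choice, n in counts.most_common():
    print(f"{choice}: {n} responses ({n / total:.0%})")
```

With paper surveys, the same tally would be a by-hand counting exercise for every question.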
Discussion #3: Collaborative Group Share about Program Evaluation Projects
Hi all! Just wondering about your thoughts on sharing links to working copies of our final Evaluation projects and, if desired, extra-credit projects as well.  We could also use this space to post our thoughts or links to other related topics of interest that come up.  After dabbling with a few of the suggested collaborative tools, I was thinking about trying out a Padlet in conjunction with Google Docs, Evernote, or Dropbox links to our documents. The notes on the Padlet wall could also just be thoughts-in-passing. It seems like it'd be a relatively efficient collaboration tool. Here's our shared wall: http://padlet.com/EricaWray/bhs8hhm2fouz. (Hopefully I set it up correctly!)  If any of you had a different idea, I'm very happy to go with the flow!

Sunday, March 15, 2015

EdTech 505: Week 9 - Data Sources and Data Analysis

Data Sources and Data Analysis
Chapters 6 & 7 in B&D text

Assignment: Answer the three questions on p. 141 in B & D text.



Bonus: Data Analysis website
URL:  http://onlinestatbook.com/index.html, specifically focusing on the "Interactive e-book" for iPad and OS X (selected since Java issues created problems accessing the online version): https://itunes.apple.com/us/book/introduction-to-statistics/id684001500?mt=11
Description: This e-book by David Lane (Rice University) functions as a combined textbook and workbook on the introductory statistics involved in data analysis.  It offers multimedia examples and an overview of basic principles of statistics including graphing and summarizing distributions, probability, estimation, testing means, analysis of variance, research examples, etc.  The e-book gets quite complex in its study of techniques for selecting, analyzing, interpreting, representing, and making decisions based on data.  There are plenty of mathematical equations featured in this book, but not to the point where it becomes inaccessible. The scope of this text certainly goes beyond the tools necessary in data analysis for program evaluation.  Nevertheless, it's easy enough to navigate by jumping to and skimming desired topics. The interactive workbook format is helpful because one can easily access glossary information (and create study cards of vocabulary terms), define a word, bookmark a page, view a video explanation or example, or take many self-evaluation quizzes. In addition to interactive explanations woven throughout the book, there are also external links to relevant, helpful websites to expound upon the topic of focus.
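As a taste of how one of the book's topics ("testing means") shows up in program-evaluation data analysis, here is a minimal sketch using SciPy's paired t-test; the pre/post scores are invented for illustration:

```python
from scipy import stats

# Hypothetical pre- and post-program scores for ten participants (invented data).
pre  = [62, 70, 55, 68, 74, 60, 65, 58, 71, 66]
post = [68, 75, 60, 70, 80, 66, 70, 63, 77, 70]

# Paired t-test: is the mean change after the program statistically significant?
t_stat, p_value = stats.ttest_rel(post, pre)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # a small p suggests a real shift
```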
**two other interesting Statistics Education sites:  http://wise.cgu.edu and https://www.usablestats.com



Discussion Posts

#1: "How Evaluation of Technology Was Born" Caveman Tale
Educational technology, like the banana leaf used as a loudspeaker, certainly provides more efficient, effective, (and magical?!) ways to reach the masses. It offers tools for increased productivity, creativity, collaboration, problem-solving... -- the "meat" of education. As many in the field of education teach students, colleagues, and administrators how to "hunt" for this kind of learning and reap the benefits of this meat, excitement is catching on like wildfire.  More and more Thoks are emerging as wise, forward-thinking guides who help spread a) enthusiasm for these new ways of learning and b) guidance on how to go about using new digital tools for this kind of learning.

There are Vals out there who do not understand and make no effort to embrace the "hunt" for this learning. They see the fruits of its labor as ineffective or inefficient.  This is perhaps because they don't know enough about educational technology to a) use it properly or b) gauge its impact properly. Assumptions guide attitudes and false "conclusions" about the value of these new tools and ways of learning.

Therefore it is the job of the Thoks to not only teach about and model the proper use of educational technology, but also show its value through a more systematic approach.  That's where evaluation fits in.  Educating others about educational technology means showing them evidence, which is gathered through data collection -- not just theory.  It's showing how much more "meat" can be gathered by employing these tools and these teaching styles through observations, surveys, inventories, objective discussions, statistics, etc.

We know that Val's conclusion is quite ridiculous because she didn't use the technology tool in any way remotely related to its intended purpose.  Her "data collection" was skewed from the get-go because it wasn't tied to any objectives, program, or process. Here is where Thok can dive in and help. He can plan an evaluation which we know will ultimately reveal that Val's banana leaf (aka ed tech) use is flawed. He can then help her change her plan for its use (by tying it to objectives) and evaluate it again. Val might then be more convinced of the role of technology in education, because she went through the rough and bumpy road of program planning and evaluation with her own two hands.  Just hearing about it wasn't enough for her. Or maybe Thok's "proclaiming" wasn't substantiated by enough evidence.  Sometimes we just have to see for ourselves!

Reply to peer post: I don't think students are uncomfortable having opportunities to find new tools to solve a problem or find new problems to be solved with a tool. I think students do have that spark of creativity, ingenuity, and pride that leads to great innovations.  Perhaps what they are uncomfortable with is that they, as learners, are going to be evaluated on their product. What if the whole concept fails, what if the design is impractical, what if it's too risky -- there's just too much at stake to take a chance (not to mention peer rapport, i.e., what will my friends think?).  So I guess I'm saying students may enjoy dabbling in the challenge of "reusing the leaf" in a new way; they just don't want to be judged on that first iteration.  And sometimes we just can't afford to give them the time (and grade-book real estate) for more iterations.

#2: National Center for Education Statistics (NCES) is the primary federal entity for collecting and analyzing data related to education. http://nces.ed.gov/ If you're into data and statistics about education, then this site is a great location. Review the site. What'd you find that's connected to EDTECH 505? To chapters 6 and 7? Find something useful?

Wow -- this site is sure robust!  First of all, I had not realized there was a "Congressional mandate to collect, collate, analyze, and report complete statistics on the condition of American education; conduct and publish reports; and review and report on education activities internationally", which is the mandate of the NCES.

I was impressed with the number of publications that (mostly appear to) address issues of economics and student profiles related to behaviors.  There are a TON of surveys and programs that look at beginning teachers, crime and safety, early childhood, high school and beyond, private schools, school staffing, urban education, etc. A variety of subgroups and educational activities are put under the microscope in these studies.  The Fast Facts section offers a myriad of statistics about these various data filters. (However, it's a bit challenging to work through these due to their "dryness" -- mostly text and a few very busy charts and graphs!) I feel a person really needs to know what they're looking for when they delve into this site.

It was quite interesting to conduct a "school search" to read about my own district's information, characteristics, and enrollment by race and by grade. This could be a great starting point for the "public records" aspect of antecedent data that is discussed in our textbook, although it doesn't go into great detail.  The college navigator was an interesting surprise, and definitely something I'll recommend to our HS guidance department as an objective tool for students exploring higher ed options.  Younger students might enjoy exploring the Kids Zone: http://nces.ed.gov/nceskids/, especially in math class working with graphing authentic data sets. It's also a great way to engage students in their own education process (learning about their school, comparing it with others, etc.)

I have never heard of our district's students participating in the National Assessment of Educational Progress (NAEP), which is "the largest nationally representative and continuing assessment of what America's students know and can do in various subject areas" -- I'd be curious to learn more about who participates in this (especially since it is a national, vs. state, assessment tool).

It would be helpful if the site had more information for students, educators, or administrators looking to conduct their own educational research. It appears this site is mostly fact-providing, not skill-focused.  A lot of the data tools appear very advanced and tailored to a specialist audience.  In addition, it's difficult to interpret what data collection tools were used in many cases.

Saturday, March 14, 2015

EdTech 523: Management Issues in the Online Course

Discussion Prompt:
This week, we will be discussing management issues in the online discussion forum, and strategies to address them. Please refer to two resources for this week's discussion. In Module 4, the Management Issues document describes different issues that could arise in the online discussion forum. In the Purposeful Engagement document, the author lists many different ways that an educator can manage online communication in an effective manner.

In your experience as either a teacher or a student in online courses, which method(s) mentioned in the Purposeful Engagement document have you found to be most effective and why?
Have you ever had issues such as the ones mentioned in the Management Issues resource come up in classes that you've taught or taken? If so, please discuss how you addressed it. If not, imagine that one of these issues did come up: how would you handle it (as a teacher or as a student)? Pick an issue, and describe how you would handle it.
 
My Contribution:
Purposeful Engagement Methods

Because participation in online course discussions is so open-ended to begin with, I find that any method that communicates structure (of time, set-up, and purpose) and sets clear expectations will be a helpful strategy to promote maximum participation. As a student (never a teacher) of online courses, I really appreciate clear guidance on minimum posting requirements. It helps take away some of the "antsiness" about knowing whether I've written enough content, posted frequently enough, and generally taken part as expected. It reminds me of Spanish class, when the teacher had a clipboard and jotted tallies for every comment we made, adding them up for participation points. It created a preoccupation with saying something just to say something! Throughout a week, a person can't be constantly checking or adding to a discussion board, nor should they be expected to, so if there are "check-ins" throughout the week, students can plan their time accordingly. They will also know that fresh content from other classmates is guaranteed by those check-ins, so discussions will not be bunched up at the end of the week.

One thing I would say is that I'm not convinced that discussion boards are really the same as attendance. Just as there would be "lurkers" (or listeners) in a face-to-face class, there may be instances of that in an online course as well. While a student might not spout out tremendous amounts of thoughts and opinions on a discussion board, they may be internalizing some of the information from their peers and using it in their course products or personal reflection journal. I suppose what I'm trying to say is that it's hard to truly document one's "attendance" in an online course, and I'm not 100% convinced that online discussions are the only way.

Management Issues

Although I haven't taught an online class before, I can imagine there would be many "Must-have-an-A" students. I do like the response from Ko and Rossen. In addition, in an effort to prevent these kinds of emails in the first place, I would make sure to be VERY upfront about what it takes to get an A in my course. I would keep all rubrics or scoring guides in a central place, and explicitly highlight the pathway to that grade. I would be sure to include exemplar products (A-worthy) for every assignment so students had a sense of the depth of thought and quality of work I was looking for as an evaluator. This is a difficult one, however, because we want to promote student-centered learning in our classes. How does student-centered learning fit with teacher-supplied A's, B's, and C's? Yes, self-reflection through self-assessment is important, and it should be encouraged as much as possible. Nevertheless, most students still see grades as "given" by their instructors (rather than earned by themselves). The instructor must give the message that they are objectively submitting grades based on how students met the criteria for the class (which should be publicly accessible, balanced, and easy to understand). If one "must have an A," then they can use those tools as a checklist of sorts, just like the instructor will.

[A Peer's Reply: Hi Erica. I agree with you that clear expectations and structure are helpful to promoting participation for online discussions. I found your opinion of participation in discussions linked to participation in class to be thought provoking. I've been that student in the traditional setting that would sit and listen to everyone else, and then used the assignments to show what was going on in my head. Because the focus needs to be placed on learning and the process it takes to get to the end goal, that personal reflection can be just as powerful a tool as the discussion boards. My question for you would be: if you did not use the discussion board as your tool for measuring "attendance" in class, how else would you measure their participation? ]


My Response: Yes, this is definitely the challenge (although in my opinion attendance is different from participation, even in the traditional classroom setting...). Perhaps measuring participation comes down to the student highlighting their best work throughout the course at the end of the semester, showing that they fulfilled the objectives to a certain degree. It wouldn't matter, then, whether it were discussion posts, reflections, snippets from assignments, etc. It's up to the student to select the assignment/format that best addresses each learning goal. I certainly appreciate that this takes away from the community-minded, collaborative style of the class, however. I wonder: if discussion boards weren't mandatory or grade-dependent, what would their use be like? Are there other ways that participation can be incorporated besides the whole-group threaded discussion board model? What about small-group discussions, partner tasks, collaborative document writing... It'd require lots of brainstorming, but I feel like there must be other ways to demonstrate and measure participation. I'm certainly not in any way against discussion boards, and I enjoy taking part in them... I just feel it's important to consider other ways of doing things and "measuring" things. As for attendance, daily "check-ins" or software like Moodle that can gauge the number of actions or time signed into the class could provide helpful statistics that take into account "lurkers" who are spending a long time reading and learning (and hopefully applying) vs. the students who quickly check in to post any old thing to fulfill a minimum requirement. There certainly is no perfect solution.

* * *
Reply to another Peer:
I also think that this reference is a great tool for showcasing exemplary and poor responses. These are good models for students for whom this kind of peer response might be unfamiliar. A sixth-grade teacher I work with, who was delving into blogging with her students, got them comfortable with the response process by offering sentence starters for them to pick from. Her students were discussing teen activists, and each student was responsible for showcasing a different person they'd read about. Responders had to pick from these replies:

-Another good thing I’ve heard this person has done is…..

-He/she is similar to _______ teen activist in these ways…

-A question I have about this teen activist is…

-An interesting article I found about this activist can be found here ________. It mentions…

-This surprises me because…

Sometimes it's just a matter of getting the students started -- scaffolding them to create an academic versus purely social reply.

Tuesday, March 10, 2015

EdTech 505: Week 8 - Response to RFP

Response to RFP

Assignment: Far West Laboratory for Educational & Research Development (FWL) has published a request for proposals (RFP) that asks private contractors to submit an evaluation plan for a training program - Determining Instructional Purposes (DIP). Your task is to submit a 5-6 page evaluation proposal in response.

Click here for my submitted "Great Lakes Evaluation Group" response

Saturday, March 7, 2015

EdTech 523: Voice and Tone in Discussion Board Facilitation

Discussion Prompt:  Directions: Voice and Tone Charades!

Below is a list of scenarios that take place in an online class. Each of the scenarios listed requires you, the instructor, to respond to posts made in the class discussions with the proper voice and tone. Read through each of the scenarios and select two to respond to. Make sure you apply different voice and tone to each response, using this resource from our class page as a guide. Label each of your posts with the scenario number you are intending to respond to, and feel free to be creative in terms of what your students may have posted. (For example, you can make up a quote from a student post or invent names, as necessary, and in order to respond effectively.) After you have responded to the scenarios, read at least two responses posted by others. See if you can identify the voice and tone! To maximize fun and participation, attempt to guess the voice and tone of any response that does not yet have any guesses, so that everyone’s post has at least one guess on it. If all responses have guesses, choose to agree or disagree with previous guesses, using text detail from the response to support your conclusion.

Please create your first responses by Saturday, March 7 and post guesses on the other responses by Tuesday, March 10.

At the end of this discussion, on Wednesday March 11, return to this discussion one more time to let those who guessed regarding your voice and tone know if they were correct. Be sure to see if you were right in your guesses, too!

Scenario 1: As instructor of an online class, you are disappointed with the discussion your class has been having. The class is currently reading their way through a novel. Instructions were given to the class to choose a character from the novel and post in the discussion forum a comparison between that character and the student. You are finding the comparisons the students are making to be very minimal and want to guide them to deeper and more meaningful comparisons. Reply with how you would respond to the students to guide the discussion in the direction you want it to go.

Scenario 2: As instructor of an online class, you have asked the students to discuss the main ideas of the assigned reading in a discussion forum. Although the students are posting their thoughts in the discussion forum, their postings are off the mark and missing the intended point. Reply with how you would respond to the students to redirect their thinking in the direction originally intended.

Scenario 3: As instructor of an online class, you have asked the students to watch a video provided by you, and post on the discussion board their thoughts on how the video related to the class discussion from last week. Half the class has posted their thoughts on the video and you are pleased with how the discussion is going. You want to compliment the students who have posted to encourage the other students to continue along the path. Reply with how you would respond to the students to encourage and/or compliment the posts made.

Scenario 4: You have asked your high school students to find, via an online news source, a current event, then to summarize the event and tie it to their governmental studies. Two students have begun to debate politics in the discussion forum, and participation from the rest of the class has stopped. You need to defuse the situation and restore productivity and focus in the discussion. Reply with how you would respond in order to do so.

Scenario 5: Your students are in the midst of a discussion about character motivation. One of your students is not a native English speaker, and it requires a little extra effort to understand the insightful observation made in his lengthy post. No other students have yet commented on his post. Reply with how you would respond to direct the discussion to consider his point.


My replies:
Scenario 5: Your students are in the midst of a discussion about character motivation. One of your students is not a native English speaker, and it requires a little extra effort to understand the insightful observation made in his lengthy post. No other students have yet commented on his post. Reply with how you would respond to direct the discussion to consider his point.

I'm very impressed so far with the multitude of perspectives you all have surrounding the motives for Penelope's actions. She is indeed a complex character and her "inner state" appears to change quite significantly as the plot thickens, so to speak. Well done everyone in exploring these various dimensions. In case you may have missed it, Maria brought up an interesting point about how Penelope's motivations are quite similar to Steven's in our previous novel. This is something that is certainly worth exploring! How are the two characters' motivation trajectories mirrored? What "inner state" changes do they share? It's always interesting/rewarding to find connections between unrelated characters in unrelated books!

...As a note to myself, as instructor, I would make sure to include this note in my syllabus or opening discussion: As a reminder to our learning community, it's important to skim through everyone's posts as they appear on our discussion board. Make sure to comment on a variety of classmates' posts throughout the semester, ensuring that everyone feels included in the discussion as much as possible. If you post later in the week, consider seeking out a classmate who has yet to receive a reply. If you ever have questions of your classmates or need clarification, don't hesitate to reach out to them!

Scenario 2: As instructor of an online class, you have asked the students to discuss the main ideas of the assigned reading in a discussion forum. Although the students are posting their thoughts in the discussion forum, their postings are off the mark and missing the intended point. Reply with how you would respond to the students to redirect their thinking in the direction originally intended.

Thank you for your eager and timely contributions to this forum, but I feel compelled to step in here to refocus this discussion somewhat. There appears to be some misinterpretation of the main ideas of the assigned article, as many of you are only considering the article's message within the context of local economies. While this is certainly one angle of analysis, it's important to remember that the article discusses many macroeconomic principles as they apply to global trade. I would encourage you all to reread the article as well as the assignment directions. Think about: How does the author frame the current state of affairs? What are some problems she identifies? What are some solutions? How does this apply to the "big picture" of global markets? It may help bolster and redirect our discussion if you add some of these considerations to your initial posts. Thanks!

Reply to a Peer: Yes, I agree this is a generative tone -- helpful for reframing the question or explaining in greater detail what is being asked. Another way to take this tone is to model how a student could elaborate on a post. You could create a fake post (something similar to what the students have done) and show the "before" and "after", the latter being what you did to improve it according to the question reminders. I also agree that your tone is neutral.

Sunday, March 1, 2015

EdTech 505: Week 7 - Evaluation Models

Evaluation Models



Here is the quiz (from the assignment above) that I generated to help others review the content from Chapter 5, aiding in their understanding of the differences and similarities between various evaluation models.
Link to Flash file
Weekly Discussion Posts
Metaphorical Story: Recently, I was talking with three exceptional (special) education teachers at a technology conference. These three colleagues described their classrooms to me. They invited me to visit. So I did.  I went into several classrooms. I approached one of the teachers and asked, "What are you doing?" "I'm teaching reading," he replied. Then I asked another teacher, "What are you doing?" "I'm showing these students how to have good study skills," she said. Then, I asked the third teacher, "What are you doing?" The woman put down her pen and said, "I'm helping all my students achieve their maximum potential in academics and social skills so that when they go out into the world they will be magnificent contributors."
Now, all three of these teachers had the same job, but only the last teacher had vision. She could see beyond the daily grind of teaching and see her students contributing mightily to our society. In our lives and in our jobs, sometimes it's hard for us to stay focused on the larger vision, to rise above the mundane, above the day-to-day.
In history, special people had that vision, one that has benefited us all. In my own work, I, too, sometimes get caught up in the details of the daily grind. I go to meetings, read reports, and talk to colleagues. But there are times when the big picture is as clear as day, when I feel truly connected to issues and ideas much larger than myself, larger than any job, larger than any single organization.
How is this story related to EDTECH 505 and, more specifically, to the readings for this week? Do you have to have vision to be a successful evaluator? How does vision fit with choosing the most appropriate evaluation model for a particular program?
My response: When approached, the three teachers in this story demonstrated that they were in full project implementation mode. They were in the process of putting program goals in action, whether teaching reading skills, study skills, or general academic/social skills.  Nevertheless, the third teacher was the only one to use the phrase "so that..."  This is what makes her stand out as focusing on the program's (in this case, education in general's) impact.  My students and I do X so that Y.  There is a clear cause and effect relationship implied in current activities and future behaviors.  Focusing on the goal (the reason for instruction) makes teacher #3's job more efficient and effective from a program implementation standpoint.  There is a sense of forward progress and being part of a "bigger picture" program cycle.  The teacher's actions are not the endpoint.
That being said, I'm not convinced that this inherently makes teacher #3 the better educator.  While it's good practice to articulate program objectives (as they align to program goals as well as activities), the fact that these two stakeholders (teachers #1&2) did not speak these impact objectives out loud does not mean they are not working towards them.  Perhaps the same "so that..." statement (as teacher #3) guides their lesson planning, assessment design, and conversations with their students.  We would have to talk more with these two teachers to get to know their motivations before assuming an absence of vision.  The same is true for a program evaluator -- we need to get to know all stakeholders, their beliefs and understanding of the program, before making conclusions based on observations of their behavior or brief one-off conversations.
Another point that interested me in this story was the fact that teacher #3's reported action was the most general/generic of the three, and therefore probably the hardest to evaluate. "Helping students achieve their maximum potential in academics and social skills" is a lofty and broad program goal that may need to be broken into more tangible sub-goals or objectives.  In order to measure and draw conclusions about teacher #3's program effectiveness, an evaluator would have to define and tease out evaluation questions regarding "maximum potential", "academic skills", "social skills", and "magnificent contributors".  Yes, this teacher has great vision (which is of supreme importance in program development and evaluation), but the vision cannot stand alone without tangible activities and measurable outcomes.
Reply #2: In my opinion, being a visionary and having vision are quite similar. Teacher #3 has demonstrated this personality trait ("being a visionary" perhaps being more perpetual and noticeable than "having vision").  Teacher #3 is motivated by his/her drive to create impact. They see the bigger picture and the purpose behind painting the picture.  Having vision is often accompanied by passion and an ability to motivate others to action. The evaluator's role is to help them look critically at the various components of this grandiose picture-painting exercise and determine whether they are efficient and effective in leading to the desired impact (a change in behavior/attitude/mental structure, in the case of education).  An evaluator's role is to help communicate the vision in objective, easy-to-understand ways.  An evaluator must also communicate the success of the current plan to attain this vision.
While it's terrific to be a visionary (with wisdom and creative plans for the future), it's not a precursor to having a good program plan.  Having A vision is a great start, and is necessary if one wishes to have meaning/purpose in their actions (their career?).  Even if there is only one single objective -- one carrot at the end of the stick -- there is a sense of achievement that propels these teachers forward.  There is a program cycle in place and in perpetual motion.  They might not have vision in other areas of their job, but for the purposes of one class or program, having a vision means there is something to work towards and something to evaluate and reflect upon at the end.

To be honest, I'm not sure an evaluator really needs to be a visionary or have vision. From my understanding and perspective, they are more in charge of restating, recording, and analyzing the various components of someone else's "vision plan".  As an objective bystander, they aren't creating or changing the objectives; they are reporting on their fulfillment.  Yes, they need to "have a vision" of what's going on, but they don't necessarily need to be active in determining what the "big picture" (goals) looks like.

Response to a peer: Derick, I must admit I feel the same way as you. To me, an evaluator's role is not really to have a vision. Yes, they must have a wide scope of what's going on and a "vision" of program goals, activities, and measurements. However, this is different from the vision of the program in the first place. This kind of broader, step-way-back vision embodies the goals and beliefs that led the group to adopt a certain program in the first place.  Like you say, as an outside entity, it's not really the evaluator's place to change that vision.  They are working with more deductive reasoning, whereas the organization is working with more open, generative, creative (not sure if it's quite "inductive") thinking.
This goes back to a previous week's discussion of the AEA, where I was very impressed but a little confused by the vision and value statements.  Where does this kind of "change-causing" fit into their job?  Isn't it against good evaluation practice to include measuring sticks and suggestions beyond the stated program goals and expected outcomes?  I'm all for evaluators being visionaries, but how would that actually work?