Sunday, March 22, 2015

EdTech 505: Week 10 - Evaluation vs. Research

Is it Evaluation or Is It Research?

Assignment: complete the two-part exercise on p. 184.

Discussion #1: Based on this week's readings, what are some ways you could choose that sample? What experiences have you had in choosing samples? What are some things to watch out for and/or avoid in selecting samples?
To choose a sample group for a survey with the most generalizable results, I would suggest either sampling all participants (as I'm hoping to do in my project, since the population is relatively small and comprises a purposive sample), picking names randomly (either simple or systematic random sampling), or using stratified random sampling, in which the sample must reflect the proportions of specific subgroups (by gender, ethnicity, age, etc.) within the whole population. A judgment sample could also be useful, though it may be biased, because it is a nonprobability method of simply choosing the most available subjects.
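To make the stratified approach concrete, here is a minimal sketch of how a stratified random sample could be drawn in Python. This is just my own illustration, not something from the readings, and the population data and "age_group" stratum field are made up for the example.

```python
import random
from collections import defaultdict

def stratified_sample(population, key, fraction, seed=None):
    """Draw the same fraction from each stratum, so the sample
    mirrors the population's subgroup proportions."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for person in population:
        strata[key(person)].append(person)
    sample = []
    for members in strata.values():
        k = max(1, round(len(members) * fraction))
        sample.extend(rng.sample(members, k))
    return sample

# Hypothetical population: each subject is a dict, with "age_group"
# serving as the stratum for this example.
population = [{"id": i, "age_group": g}
              for i, g in enumerate(["18-29", "30-49", "50+"] * 20)]
picked = stratified_sample(population, key=lambda p: p["age_group"],
                           fraction=0.2, seed=42)
print(picked)
```

Because each stratum contributes the same fraction of its members, the final sample keeps the population's subgroup proportions, which is exactly the "fairness" property discussed below.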
I do not have experience in choosing samples, although this discussion does ring a bell in the shadowy portions of my mind from a (required) undergraduate course in Political Science.  I recall there being much debate about the validity and "generalizability" (I'm doubting that's a word!) of various polling techniques. This discussion also makes me think a little bit about focus groups in marketing, although again I have no experience in creating them (and I realize they serve a much different purpose).  
In my opinion, from an inexperienced perspective, the stratified random sampling method seems the most "fair". However, I can certainly see the challenge in coming up with the characteristics for each sub-group (and determining whether they are indeed important to the evaluation question). Also, without access to reports or prior information, it could be very challenging to determine the characteristics of the entire population in the first place!
I would think that when establishing a sample group, one of the most important things to ask yourself is "who might I be excluding or underrepresenting by using this selection method?"
Discussion #2: Discuss "surveys" as used in program evaluations. Click here for a survey in SurveyMonkey.
Brief overview/assessment of the survey: As others have said, it's strange that this survey has no descriptive title, explanation, or context of any kind. Participants should know what they are reporting on! They might also appreciate a reminder that their responses are anonymous and will be used in aggregate to improve the program (or whatever the ultimate purpose is).
With the questions that use the Likert scale, as a survey participant I know I prefer to have a "neutral" option, but that's up to the survey designer. The jump between "good" and "poor" is pretty dramatic! I was also very confused by the word "rate," as in "How would you rate your instructor?" What exactly am I rating? Something about their personality? Behavior? Instructional methods? These questions need more detail. The same goes for the program question; perhaps it could be rephrased as "Did the program activities meet your expectations?" or "Would you recommend this program to others?" Another wording issue is "acceptable," as in "Was the program fee acceptable?" Again, that word needs more definition. Perhaps the question could read "Was the program fee fair for the time and materials involved in the program?" or "Was the fee what you would expect for this kind of program?" Any ambiguous, value-laden word needs to be either better defined or replaced with a more neutral and precise option.
I agree with others who've posted that the mixture of multiple-choice and free-text questions is important and well-balanced here. Respondents value having a place to insert their opinions and suggestions. That being said, whoever collects these responses has their work cut out for them in reading through and reporting on the five areas of potential additional comments!
Paper vs. online: Paper is best for quick collection with a higher response rate, when participants are gathered together and the information is fresh in their minds. Online is best for a larger population that is not meeting synchronously, and the results are easier to aggregate and analyze with computers (see the sketch below). On the other hand, making sure participants have access to an online survey can be challenging, and reminding them to complete it can be more of a struggle as well.
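On the "easier to aggregate and analyze" point, here is a minimal sketch of tallying an exported online survey. Again, this is just my own illustration; the responses.csv file and its columns are hypothetical stand-ins for whatever the survey tool exports.

```python
import csv
from collections import Counter

# Hypothetical export of online survey responses: one row per respondent,
# one column per question, with Likert-style answers like "Good" or "Poor".
with open("responses.csv", newline="") as f:
    rows = list(csv.DictReader(f))

# Tally each question's answers so the evaluator can report frequencies.
for question in rows[0].keys():
    counts = Counter(row[question] for row in rows)
    total = sum(counts.values())
    print(question)
    for answer, n in counts.most_common():
        print(f"  {answer}: {n} ({n / total:.0%})")
```

With paper surveys, this same tallying has to happen by hand before any analysis can start, which is part of why online collection scales better for large groups.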
Discussion #3: Collaborative Group Share about Program Evaluation Projects
Hi all! I just wondered what your thoughts were on sharing links to working copies of our final Evaluation projects and, if desired, extra credit projects as well. We could also use this space to post our thoughts on, or links to, other related topics of interest that come up. After dabbling with a few of the suggested collaborative tools, I was thinking about trying out a Padlet in conjunction with Google Docs, Evernote, or Dropbox links to our documents. The notes on the Padlet wall could also just be thoughts-in-passing. It seems like it'd be a relatively efficient collaboration tool. Here's our shared wall: http://padlet.com/EricaWray/bhs8hhm2fouz. (Hopefully I set it up correctly!) If any of you had a different idea, I'm very happy to go with the flow!
