Saturday, December 12, 2015

Final Project Working with Unity

Here is the project I created for middle school students learning about the Elements and Principles of Design. They are looking to improve their photography and graphic design for school reporting and publications. This online activity offers instruction and self-assessment to review this terminology and apply it to new images. With further development this project will include an interactive gallery to leverage the multimedia, 3D power of the Unity platform. The scripting, navigation, and organization of game elements made for a very labor-intensive process! It's definitely a pride-worthy product.


http://ewray-511-finalproject.weebly.com

Direct link to game (requires browser with Unity plug-in)

Friday, May 1, 2015

EdTech 523: Final Project Submission

Technology Integration for Teachers 101 - A Schoology Site

FINAL COURSE PROJECT

My Project plan

My Evaluation plan (links to surveys and rubrics are embedded within)

Project URL: https://www.schoology.com/course/235200272/materials Access code: 3FDXQ-7BF2M



Synchronous Presentation of Final Course Project

Please visit my Self-evaluation document

(I was delighted to have received an A+ on this project from Dr. Rice!)

Thursday, April 30, 2015

EdTech 505: Week 15 - Final Evaluation Report

Yes, I'm finally in the home stretch!

Please view my final Program Evaluation Project by clicking here.

Discussion Board Reflection
At the beginning of the semester, I had no experience in evaluation in this format, although I looked forward to this course because I knew it would be a pivotal skill in my new (this year) role as Technology Integration Specialist. There were a number of pilot programs rolled out throughout the district with different devices, and I'm still trying to wrap my head around them all! Researching various tools, having "EPD" style conversations with colleagues, and utilizing concrete data collection techniques to help determine a program's success or merit will certainly not end here for me! It has been a very busy semester and a very busy process, all in all.

Sunday, April 26, 2015

EdTech 505: Week 14

This week we are continuing to collaborate with our small groups (asking questions, etc.) as we gather/analyze our data and begin to write our report.


Discussion Board Posts

Three crucial things to be included in an evaluation:
1. Regular, brief meetings about progress and action-points/deliverables -- ongoing communication throughout process
2. Well-crafted and varied data collection instruments and, similarly, transparent and easy-to-interpret results
3. Suggestions of areas for improvement based specifically on data results that are unique to our program (not just canned, generic responses)

One critical job skill: Organization (of people, resources, and time)

I would set up the evaluation to include:
- thoughts/ideas from all stakeholders at the very beginning (cast the net wide)
- an evaluation team internal to the company that helps work directly with the evaluator (but is a voice/go-between of the relevant stakeholders)
- early planning stages to include the evaluator
- the development of clear objectives tied to clear activities tied to clear evaluation measures/questions
- many avenues and formats for data collection
- a lot of time spent on careful data analysis
- a well-crafted report with a lot of visuals to make the relationships/patterns among the data very clear
- several formative evaluation processes before a summative evaluation
- transparency and communication as much as possible to encourage trust and a spirit of joint ownership

Reply to a peer:
Yes, gap analysis was also a big takeaway for me.  Taking stock of present and future and concrete steps to bridge the two seems like a straightforward way to design a program with the "end" in mind.  The vision of the future would help build the backbone for the program objectives, which would then be aligned with evaluation questions/tasks. Being detail-oriented as an evaluator would help the program staff ensure none of these steps were left out in the process. It's also helpful that the evaluator is focused exclusively on these kinds of details instead of a bunch of other tasks/duties within the organization (which prevent one from seeing the "big picture").

Sunday, April 19, 2015

EdTech 505: Week 13 - Square Wheels

Discussion Board

Task: Generate as many thoughts as you can about this illustration and your reactions, relative to program evaluation.

Who/what does the person in front represent?
This is the project leader, who often ends up pulling more weight and doing more legwork than offering direction.  In this image the leader is barely looking beyond his/her own feet. They may not be seeing other possibilities on the horizon. There is not a lot of future-thinking because this person is arduously moving and pulling the program along just one little step at a time.
Who/what do the people in back represent?    
These are the project facilitators and teammates.  They, too, are working hard to get the job done.  They are even working together, making sure their steps are in sync and that no one falls behind.  Nevertheless, they can't see up ahead because of the size and stature of the program (the wagon) itself. In some ways, they feel like they are just following blindly and according to the directives from up ahead.  There is no time or opportunity to pause, catch one's breath, reflect on the process, or change course.
What does the body of the wagon represent?
This is the program.  It's a big task and has a lot of parts!  It requires a lot of coordination to implement and keep it running.  Nevertheless, the project stakeholders do whatever it takes to keep it moving in the only way that they know how.
What do the square wheels represent?
This is the status quo of doing things. (It's the result of the first iteration of the program.)
What do the round wheels represent?
This is the potential for improvement.  It represents the collection of opportunities and resources to create a better program. This includes designated evaluation personnel, planning time, collaborative discussions, testing measures, pre-existing data, regular review/reflection of objectives, participant feedback, etc.  This potential always exists and is a part of every program.  It's just that sometimes we overlook this potential or don't take the effort to tap into it to improve our program delivery.
What is the overall vision or interpretation of the illustration?
Typically we have the capacity and tools to make our programs more efficient and effective, but we aren't utilizing them. Instead, we are carrying this "potential" around and carrying on with programs that are bulky, clunky, inefficient, and barely effective. If we recognized the importance of taking stock of our team's skills and opportunities for growth we could not only reduce the "weight" of our program, but also coordinate efforts to get things moving along more efficiently and effectively.  For example, instead of just sitting on data and shuffling it around year to year, we could comb through it and help it grow legs to get us from point A to B in a better way.
(Response to a peer:) I hadn't thought about the role of pride in this equation.  Interesting viewpoint.  In my case I haven't seen pride so much as laziness and comfort.  We know how to do something, and it keeps going forward somehow or another, so why stop to see if it can "go" better?  Some leaders or staff members might say if it ain't broke, why (look for reasons and ways to) fix it?  If they could press the pause button on life and fast forward to a better version of their program, then they would see the rationale.  But it's hard when you are in the present just pulling or pushing forward.

Saturday, April 18, 2015

EdTech 523: Final Discussion

Discussion Post Directions: Each subgroup in the discussion forum is a TV show. Click on the provided link of your choice and post why you feel that synchronous online learning is like that TV show, using examples from the readings and prior learning.

My Contribution: I, too, believe that Synchronous Learning is a lot like the show The Voice, and for many similar reasons that have already been expressed. The element of coaching is a hallmark of this program. Each coach's team is made up of a diverse group of singers. This is similar to the diverse learning styles and interests in an online class. The coaches, professional (and famous) singers themselves, start with high expectations for their "students" and guide them throughout the season to be the best performers they can be. This is similar to the synchronous instructor. "Instruction encompasses any of the kinds of learning that happen when faculty members, knowledge experts, or facilitators meet with learners, usually in a planned manner in a specific online venue, to guide them through the achievement of learning objectives" (Finkelstein, 2006, p. 3). On The Voice, as in the synchronous online classroom, coaches make suggestions and offer additional resources (e.g. famous fellow celebrity singers) to help the contestants/students improve their craft. "Support is a crucial element for retaining and motivating learners, whether it is provided by just-in-time assistance from a peer, instructor, tutor, advisor, or librarian" (Finkelstein, p. 4). Feedback is prompt and bi-directional.

Another important similarity is the process of blind auditions where the would-be coaches are listening to the singers without the benefit of face-to-face communication. This is similar to synchronous learning in that the teacher does not have physical presence with her students. Being a good listener is of utmost importance. Likewise, for the student, expressing oneself clearly and impressively is of paramount importance; it is all a teacher and classmates have to "go by". Developing communication skills that don't rely on body language, eye contact, real-time discussion, etc. is similar to the Voice contestants who are performing with only the quality of their sound to represent themselves. The coaches (like the online instructors) are assessing the singer/students' demonstration of "real-time skills and analytical thinking" (Finkelstein, p. 6) in every performance. 

During The Voice season, there are various "rounds" that involve both competition (knock-out rounds) as well as teamwork (duets). This is similar to the kinds of activities that might be incorporated into a synchronous learning class. (However, this would typically err more on the side of cooperation and collaboration!) Each week is a new song, a new task, a new performance. The class, like the show, is dynamic and always looking to stretch the singer/learner's "range" or style. In both cases the audience is real (TV audience and Web 2.0) and exists in real-time. "Real-time learning environments invite active learning" (Finkelstein, p. 22). In The Voice as in good synchronous classrooms, activities are meaningful and grounded in real-life tasks. "The presence of a live instructor, combined with the use of the human voice and a rich set of facilitation and collaboration possibilities, opens up a new world..." (Finkelstein, p. 7).

Tuesday, April 14, 2015

EdTech 505: Week 12 - Critique of an Evaluation Report

Assignment: Critique an Evaluation Report ("The Maine Experience")


Discussion Posts:

#1: Program Evaluator Job Ads
Discuss what you find, or don’t find, in these ads. What do you think of the jobs as described? How do you stack up to the qualifications and requirements? Think you could make “program evaluation” a PT or FT business? Other thoughts on the job ads?

These jobs certainly do reflect the skills we've covered and practiced in this course. Their descriptions include most of the components we included in our Response to RFP (meetings with stakeholders, designing various eval methods, traveling and overseeing the administration of the evaluations, collecting and analyzing data, offering a report and making suggestions if requested).  Upon first glance, they don't seem TOO out of my league after working through this course (and from my previous experience & educational coursework).  I know a job poster is going to err on the side of the VERY specific! However, there are some things that made me feel a little ill-prepared to take on one of these positions.

First of all, many jobs ask for the applicant to specify their evaluation skills (e.g. "experience employing social science empirical research methods, theories and analytical approaches" or "experience using a mixed-method approach").  Outside of learning the theory and ideal techniques of program evaluation and completing one formal evaluation, I simply don't have these kinds of extensive and varied experiences! In addition, very few of the jobs are seeking teachers or educational technologists.  A large number of positions also request expertise in statistical/data analysis, including special software packages.  This is one area that is definitely still quite out of my league!  The words "metrics", "audits" and "IT" come up quite often. I would have a big learning curve with those aspects.  Many are also looking for people with experience in project management, which I just don't possess. I haven't pursued any administration coursework or experiences in my career so far. It makes sense to me that many request experience with research or grant writing.  (I do have some of that.)

Overall, it seems that the ideal candidate being sought after is one who not only has experience in the specific field (healthcare, social services, etc.), but also has experience being on the other side of the table (a "leadership" stakeholder -- instructional designer, grant-writer, program manager, etc.)  They also want an excellent writer (many requesting a sample of one's work), mathematician, designer, coordinator, etc.  These jobs seem pretty intense! I would've thought they would pay more than they do.  (At least in the case of the ones that post their salary...)  A lot of work is involved and a lot of expertise is expected. (Where are the entry-level positions?!)  For now, for me, I'm content with using program evaluation skills within the context of my current teaching career.

#2: Discussion of end of book

Yes, the more resources (and exemplars) the merrier.  One big difference, I feel, between Appendix B's example report and the one we're required to do for this course is that there is greater treatment of "recommendations" than we are required to provide in our own report's discussion. Appendix B also reports results according to each program objective; however, I don't believe that is expected of us in this course's final report (although, if it makes sense, I can see how that would help organize and frame the discussion of data).  Appendix B's report also includes a lot of data from focus groups and interviews, which I understand we were discouraged from using in our own report for this course.

Chapter 10 was a good pep talk for evaluators looking to break into the professional field. Being proactive in the search for work seems like a daunting task!  At least it ensures you are evaluating something you're interested and experienced in.  The discussion of "emergency resources" was interesting, as it got me to consider the ethical dilemma of evaluating -- is the relationship with the organization or the evaluation results more important? Definitely a tricky scenario!  The discussion of contract negotiation was definitely something I'd never experienced before. Program evaluation seems quite entrepreneurial in nature. A lot of self-direction, seeking out clients, and building a reputation is involved -- not to mention the evaluation processes themselves!  The first few years must be tricky.

Saturday, April 11, 2015

EdTech 523: Facilitating Active Engagement

Discussion Board Directions: "Cranium Check"
Although we may have profile pictures of students, we rarely know what they are thinking. Since we cannot see inside each person’s head (that would be scary), we need to understand that not every student is actively engaged within class, even if it appears that they are. Since we cannot interpret body language, tone, voice, etc., we must rely on other techniques. In the discussion forum this week, first select one person from the group pictured here. Identify the person by number and then create a scenario about the person. (Get creative - create an entire story or background of the person!) In other words, describe what is going on in the person’s head and correlate it to the way you perceive their engagement in the discussion pictured (the group is discussing whether or not online learning is superior to face-to-face instruction). Second, respond to one person’s scenario as the ‘instructor,’ using techniques we have learned so far (or ones you have found effective) to facilitate participation. Lastly, select a completely different discussion thread and contribute to it (i.e. play devil’s advocate, challenge the ‘instructor,’ etc.) Be sure to check back on your ‘instructor’ post as you may be challenged to respond to a ‘student!’

My Contribution to Discussion: Muriela the Go-Getter-in-Waiting... "Geez, it's tricky sharing ideas about instructional techniques when most of you guys are being stubborn and argumentative or just being grumpy nay-sayers who don't want to rock the boat or embrace change. I know I'm new to this profession, and most of you "seasoned veterans" think you have it all figured out, but I personally think there's more than one way to skin a cat here. I have a lot of ideas that would align with both face-to-face and online learning, but being a "newbie", I feel no one wants to listen to my rookie insights. It's hard to permeate this group. I sit in the front row, smile as much as possible, and try to patiently internalize all their "battles". They're nice people, but it's hard to make progress with them. They spend so much time arguing a theory and never just put something into action. I think I'll just keep my mouth shut, and just react to others' ideas for now. Being a newbie, I don't want to be ostracized! I don't like confrontation, and I also don't want my opinions to be shot down.

Alright, Pete just proposed an action plan. Hooray for Pete! We're getting somewhere! I'm gonna clap to a) celebrate, b) remind everyone that I'm still here, and c) get my blood flowing (instead of boiling!). If no one else jumps on his bandwagon, I may just have to speak up after all."

My Reply to a Peer #1: Mr. Grey is thinking that he's just not yet ready to share out about online learning. He hasn't tried it and isn't quite sure yet if it's "for him", but he's got an open mind. At this point, though, he's not ready to speak about experiences or defend a position. He just wants to gather more information. Maybe he'd prefer using this time learning a new tool or skill instead of all this group talk. Maybe he's best off, for now, taking part in the group by asking questions or just taking notes. After all, participation doesn't have to always equal generative contribution.

My Reply to a Peer #2:

Your grin is wise (mischievous?), like the Cheshire cat. You clearly have something helpful to add to rescue us from our polarizing debate. Throw us a life ring - please! It's not always that the loudest voice wins. Sometimes the most sensitive, clairvoyant, and reasoned voice brings resolution. I know from past conversations that you have a very balanced perspective on this topic. Have you ever heard of "The Tyranny of the 'OR' vs. the Genius of the 'AND'"? The article How to Avoid Tyrannical Decision-Making helps articulate that idea. In fact, Jim Collins, who coined that term, says in his online conversation:

"Having one side of this dichotomy going without the other doesn't work. In a number of professions, such as law and medicine, in academia, and in industries such as healthcare and the utilities, people have traditionally had a very strong core ideology, a strong sense of what they are doing. But they didn't do the other side well, the side of stimulation, progress, and change. Then people began to see that the world is changing. "We have got to be more efficient and effective," they said. "We have got to think about things like markets and segmentation and costs and cycle times." And that's all true.

But they get caught up in what we call "The Tyranny of the Or," the belief that you cannot live with two seemingly contradictory ideas at the same time, that you can have change or stability, you can be conservative or bold, you can have low costs or high quality -- but never both. Our visionary companies all operate in what we call "The Genius of the And," the ferocious insistence that they can and must have both at once."

How do you think that idea can be applied here? Are we at the point where we MUST have both ideas (face-to-face and online learning) at once? Why do you think this has become a "tyranny of or"-style debate in the first place? I can see you making a list over there -- perhaps it's on topic, perhaps it's not. That is further evidence that you are one of the most organized members of this group... so help us get our thoughts in order! I challenge you to lead your colleagues into an exercise where every position statement includes an "and" clause. If that doesn't come naturally, help the team by writing the ideas on the board and assist us in drawing those "AND" lines. Clearly this group needs some guidance, and it may as well come from you!

[Peer Reply to Me: Well, Professor Fuhry. I don't want to do that because then they will think that I am a know-it-all. Plus, I don't like speaking in front of my colleagues and if I stand up and write something I will be the center of attention. Do you not see me sitting outside the circle? Sorry, I will do what I have to to be successful as long as it doesn't include me saying much aloud. Deal?]

My Subsequent Response: Alright, I hear you're not up for that kind of attention and leadership. Hopefully that confidence and voice will come in time. Perhaps you could work more "behind the scenes" gathering background information to support both kinds of teaching methodologies. That might help us organize our thoughts and keep us grounded in "best practice" that goes beyond everyone's personal opinions. Find a colleague or two to work with and share your findings with the group before the next time we meet. Regarding your comment above, I'm afraid your personal success is superfluous to this discussion. We are a team and we need to work together, helping each other and contributing according to our strengths. That will make US successful. Please find some way to take part.

Sunday, April 5, 2015

EdTech 505: Week 11 - Text Review

Chapters 1-9 Boulmetis and Dutwin text - Summary


Remainder of assignment:


Discussion Posts

#1: Reflection on Course Material
As a few others have mentioned, the Chapter 4 description of the EPD as well as the various charts and diagrams embedded throughout the B&D textbook have been the most valuable to me.   I am a very visual learner, and appreciate the brevity and organization of a good diagram or chart.  It makes the relationships among the various components/terms more approachable and unambiguous. For example, I find myself referencing Exhibit 2.1: Evaluation Design (similar to Exhibit 4.5) quite often as I seek out the best way to "tease out" ideas for my own evaluation project.  I also appreciated Exhibit 4.1's example of program objectives being tied to evaluation questions.  This is one area I'm still trying to refine in my own project.  There must be a real art to it!  (A side note -- a suggestion for the next 505 offering is to have a discussion early on where students share their own project's program objectives/eval question alignment with each other and practice getting it "just right".  This seems like one of the most important steps in designing an evaluation plan, and the more minds the merrier!)
Appendix B is super-valuable in offering up the meat and potatoes of what we are looking to achieve with our own evaluation projects. Having as many report exemplars as possible helps give context to the various components/terms we've learned about throughout the textbook.  It also helps me "learn backwards", to some degree -- figuring out why the author did what they did in each section. In this case, I was surprised how extensive the "recommendations" section was (almost as much as the data review itself), and wonder if mine will include an equally robust discussion.
I have also really enjoyed the storyline of the chapter introductions -- I find I learn a lot from case studies and real-life examples told through narrative.  In this case, it makes the evaluation process more real -- and not just stuck in theory and definitions.  To be honest, though, I think the storyline could've been elaborated even further. Throughout the chapters, the authors could've discussed in detail what the participants did with their data and subsequent discussions so we could see the project through to the end, under a microscope at every level.

Reply to a peer: Yes, this is a challenging concept for me too (Evaluation vs. Research) -- the "differences" appear to overlap so much.  It's tricky since I'm struggling with the objectivity (not in the "personal bias" sense but in the "experiment design" sense) as I create my evaluation plan. (I know, I know, it's not research -- but still, I'm not accustomed to interpreting "objective" data in the social sciences. It's hard for me to grasp that there's no control group and no comparisons being made to that control.)
Anyway, the biggest take-away for me is that research and evaluation are different in their purpose and audience. Research is to test one's own theories and publish findings for a wide, academic audience.  Evaluation is to plan activities that test the merit of some group's specific program according to its established objectives, with the audience being only the program stakeholders requesting it. I don't think that "trying to improve the program" means it's research.  I think that's just part of formative evaluation and, by extension, the program planning cycle.  When you write your report you'll have to give "recommendations" based on your data, so that's where the improvements come into play. You'll be "future-looking", but not speculatively. Just reporting data results and indicating where there's wiggle room for better objective-meeting the next time the program is carried out.  I could be way off base, but that's how it currently sits in my head.

Current State of Final Project

EdTech 523: Assessing Discussions

Assessing Discussions in the Online Class

Discussion Directions:
For this week’s activity, please refer to Analytic vs. Holistic Rubrics in Module 4. Summarize the main differences between analytic and holistic rubrics. Decide which type of rubric you would prefer to use for an online discussion and explain your reasoning. Share a rubric that someone in your group either created or found online and determine whether it is an analytic or holistic rubric. In your post, describe the evidence for your determination.
Our Group Discussion Post:




My reply to a peer: They may be faster to implement, but I know a lot of training and practice is required of a teacher/evaluator to properly/objectively utilize holistic rubrics to assess free-response standardized tests. Their scoring practices have to be "calibrated" to ensure uniformity as well. It's quite the process! I had to go through a bit of a process to become a Cambridge Primary Science exam scorer (which I never ended up doing), and I know the Alberta exams require a very elaborate training program of their scorers too. In my opinion, I think the analytic rubric makes things a little easier on the teacher -- and it's easier to be consistent. I hear what you're saying about the feedback, however. I think if the holistic rubric is worded right, it has many analytic elements built into its proficiency category descriptors. Plus, there's always the option to leave personalized comments.

Sunday, March 22, 2015

EdTech 505: Week 10 - Evaluation vs. Research

Is it Evaluation or Is It Research?

Assignment: complete two-part exercise on p. 184.




Discussion #1: Based on this week's readings, what are some ways you could choose that sample? What experiences have you had in choosing samples? What are some things to watch out for and/or avoid in selecting samples?
To choose a sample group for a survey with the most generalizable results, I would suggest either sampling all participants (as I'm hoping to do in my project, as the population is relatively small and they comprise a purposive sample), picking names randomly (simply or systematically), or using stratified random sampling (by which you must take into account the proportion of specific groups (by gender, ethnicity, age, etc.) of the whole population to be reflected in the final sample population); a small sketch of the stratified approach appears at the end of this discussion. A judgement sample could be useful (although it may be biased) because it is a nonprobability method of simply choosing the most available subject group.
I do not have experience in choosing samples, although this discussion does ring a bell in the shadowy portions of my mind from a (required) undergraduate course in Political Science.  I recall there being much debate about the validity and "generalizability" (I'm doubting that's a word!) of various polling techniques. This discussion also makes me think a little bit about focus groups in marketing, although again I have no experience in creating them (and I realize they serve a much different purpose).  
In my opinion, from a non-experienced perspective, the stratified random sampling method seems the most "fair".  However, I can certainly see the challenge in coming up with the characteristics for each sub-group (and determining if they are indeed important to the evaluation question).  Also, I can see how it could be very challenging without access to reports/prior information to determine the characteristics of the entire population in the first place!
I would think when establishing a sample group one of the most important things to ask yourself is "who might I be excluding or underrepresenting by using this selection method?"
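Below is a minimal sketch of the proportional stratified approach mentioned above. It is my own illustration, not from the B&D text; the population, strata, and sample size are hypothetical. It just shows how each stratum's share of the sample mirrors its share of the population.

```python
# A sketch of proportional stratified random sampling (hypothetical data).
import random
from collections import defaultdict

random.seed(2015)  # reproducible draw for the example

# Hypothetical population: (participant id, grade-level stratum)
population = [(f"student_{i}", random.choice(["6th", "7th", "8th"]))
              for i in range(300)]

def stratified_sample(pop, stratum_of, sample_size):
    """Draw a sample whose strata proportions mirror the population's."""
    strata = defaultdict(list)
    for person in pop:
        strata[stratum_of(person)].append(person)
    sample = []
    for members in strata.values():
        # allocate seats to this stratum in proportion to its population share
        n = round(sample_size * len(members) / len(pop))
        sample.extend(random.sample(members, n))
    return sample

sample = stratified_sample(population, stratum_of=lambda p: p[1], sample_size=60)
print(len(sample), "drawn:",
      {g: sum(1 for _, s in sample if s == g) for g in ("6th", "7th", "8th")})
```

(Because of rounding, the final count can be off by one or two from the target size; in a real evaluation you'd adjust the largest stratum to compensate.)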
Discussion #2: Discuss "surveys" as used in program evaluations. Click here for a survey in SurveyMonkey.
Brief overview/assessment of survey:  As others have said, it's strange that there is not a descriptive title, explanation, or any kind of context to this survey at all. Participants should know what they are reporting about!  They might also like a reminder that their responses are anonymous and are used in aggregate to improve the program (or whatever the ultimate purpose is).
With the questions that use the Likert scale, as a survey participant I know I prefer to have a "neutral" option, but that's totally up to the survey designer. The jump between "good" and "poor" is pretty dramatic! I also was very confused by the word "rate", as in "how would you rate your instructor?".  What exactly am I rating? Something about their personality? Behavior? Instructional methods? I think these questions need to have more detail. Same for the program.  Perhaps it could be rephrased to "did the program activities meet your expectations?" or "would you recommend this program to others?"  Another issue of wording I have is "acceptable", as in "was the program fee acceptable".  Again, that word needs more definition. Perhaps the question could read "was the program fee fair for the time and materials involved in the program?" or "Was the fee what you would expect for this kind of program?" Any ambiguous, value-laden word needs to be either better defined or replaced with a more neutral and precise option.
I agree with others who've posted that the mixture of choice and free-text is important and well-balanced here. Respondents value having a place to insert their opinions and suggestions.  That being said, the collector of these responses has their work cut out to read through and report on the five areas of potential additional comments!
Paper vs. online: Paper = best for quick collection with a higher response rate, when participants are gathered together and their info is fresh on their minds. Online = best for a larger population when they are not meeting synchronously. The results are easier to aggregate and analyze (with computers). Making sure the participants have access to the survey can be challenging, and reminding them to complete it can be more of a challenge with an online survey.
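To illustrate that "easier to aggregate" point, here is a minimal sketch assuming a hypothetical CSV export (responses.csv) from an online survey tool, with a Likert-style column named instructor_rating; the file and column names are mine, not from any real survey.

```python
# Tally one Likert-style question from a hypothetical online-survey export.
import csv
from collections import Counter

with open("responses.csv", newline="") as f:
    rows = list(csv.DictReader(f))

counts = Counter(row["instructor_rating"] for row in rows)
total = sum(counts.values())
for choice, n in counts.most_common():
    print(f"{choice:>10}: {n:3d}  ({100 * n / total:.0f}%)")
```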
Discussion #3: Collaborative Group Share about Program Evaluation Projects
Hi all! Just wondered your thoughts on sharing links to working copies of our final Evaluation projects and, if desired, extra credit projects as well.  We could also use this place to post our thoughts or links to other related topics of interest that come up.  After dabbling with a few of the suggested collaborative tools, I was thinking about trying out a Padlet in conjunction with Google Docs, Evernote, or Dropbox links to our documents. The notes on the Padlet wall could also just be thoughts-in-passing. It seems like it'd be a relatively efficient collaboration tool. Here's our shared wall: http://padlet.com/EricaWray/bhs8hhm2fouz. (Hopefully I set it up correctly!)  If any of you had a different idea, I'm very happy to go with the flow!

Sunday, March 15, 2015

EdTech 505: Week 9 - Data Sources and Data Analysis

Data Sources and Data Analysis
Chapters 6 & 7 in B&D text

Assignment: Answer the three questions on p. 141 in B & D text.



Bonus: Data Analysis website
URL:  http://onlinestatbook.com/index.html, specifically focusing on the "Interactive e-book" for iPad and OS X (selected since Java issues created problems accessing the online version): https://itunes.apple.com/us/book/introduction-to-statistics/id684001500?mt=11
Description: This e-book by David Lane (Rice University) functions as a combined textbook and workbook on the introductory statistics involved in data analysis.  It offers multimedia examples and an overview of basic principles of statistics including graphing and summarizing distributions, probability, estimation, testing means, analysis of variance, research examples, etc.  The e-book gets quite complex in its study of techniques for selecting, analyzing, interpreting, representing, and making decisions based on data.  There are plenty of mathematical equations featured in this book, but not to the point where it becomes inaccessible. The scope of this text certainly goes beyond the tools necessary in data analysis for program evaluation.  Nevertheless, it's easy enough to navigate by jumping to and skimming desired topics. The interactive workbook format is helpful because one can easily access glossary information (and create study cards of vocabulary terms), define a word, bookmark a page, view a video explanation or example, or take many self-evaluation quizzes. In addition to interactive explanations woven throughout the book, there are also external links to relevant, helpful websites to expound upon the topic of focus.
**two other interesting Statistics Education sites:  http://wise.cgu.edu and https://www.usablestats.com



Discussion Posts

#1: "How Evaluation of Technology Was Born" Caveman Tale
Educational technology, like the banana leaf used as a loudspeaker, certainly provides more efficient, effective, (and magical?!) ways to reach the masses. It features tools for increased productivity, creativity, collaboration, problem-solving... -- the "meat" of education. As many in the field of education teach students, colleagues, and administrators how to "hunt" for this kind of learning and reap the benefits of this meat, excitement is catching on like wildfire.  More and more Thoks are emerging as wise, forward-thinking guides to help spread a) enthusiasm for these new ways of learning and b) guidance on how to go about using new digital tools for this kind of learning.

There are Vals out there who do not understand and make no effort to try to embrace the "hunt" for this learning. They see the fruits of its labor as ineffective or inefficient.  This is perhaps because they don't know enough about educational technology to a) use it properly or b) gauge its impact properly. Assumptions guide attitudes and false "conclusions" about the value of these new tools and ways of learning.

Therefore it is the job of the Thoks to not only teach about and model the proper use of educational technology, but also show its value through a more systematic approach.  That's where evaluation fits in.  Educating others about educational technology means showing them evidence, which is gathered through data collection -- not just theory.  It's showing how much more "meat" can be gathered by employing these tools and these teaching styles through observations, surveys, inventories, objective discussions, statistics, etc.

We know that Val's conclusion is quite ridiculous because she didn't use the technology tool in any way remotely related to its intended purpose.  Her "data collection" was skewed from the get-go because it wasn't tied to any objectives, program, or process. Here is where Thok can dive in and help. He can plan an evaluation which we know will ultimately reveal that Val's banana leaf (aka ed tech) use is flawed. He can then help her change her plan for its use (by tying it to objectives) and evaluate it again. Val might then be more convinced of the role of technology in education after that because she went through the rough and bumpy road of program planning and evaluation with her own two hands.  Just hearing about it wasn't enough for her. Or maybe Thok's "proclaiming" wasn't substantiated by enough evidence.  Sometimes we just have to see for ourselves!

Reply to peer post: I don't think students are uncomfortable having opportunities to find new tools to solve a problem or find new problems to be solved with a tool. I think students do have that spark of creativity, ingenuity, and pride that leads to great innovations.  Perhaps what they are uncomfortable with is that they, as learners, are going to be evaluated on their product. What if the whole concept fails, what if the design is impractical, what if it's too risky -- there's just too much at stake to take a chance (not to mention peer rapport (i.e. what will my friends think?)). So I guess I'm saying students may enjoy dabbling in the challenge of "reusing the leaf" in a new way, they just don't want to be judged on that first iteration.  And sometimes we just can't afford to give them time (and grade book real estate) for more iterations.

#2: National Center for Education Statistics (NCES) is the primary federal entity for collecting and analyzing data related to education. http://nces.ed.gov/ If you're into data and statistics about education, then this site is a great location. Review the site. What'd you find that's connected to EDTECH 505? To chapters 6 and 7? Find something useful?

Wow -- this site is sure robust!  First of all, I had not realized there was a "Congressional mandate to collect, collate, analyze, and report complete statistics on the condition of American education; conduct and publish reports; and review and report on education activities internationally", which is the mandate of the NCES.

I was impressed with the number of publications that (mostly appear to) address issues of economics and student profiles related to behaviors.  There are a TON of surveys and programs that look at beginning teachers, crime and safety, early childhood, high school and beyond, private schools, school staffing, urban education, etc. A variety of subgroups and educational activities are put under the microscope in these studies.  The Fast Facts section offers a myriad of statistics about these various data filters. (However, it's a bit challenging to work through these due to their "dryness" (mostly text and a few very busy charts and graphs!)) I feel a person really needs to know what they're looking for when they delve into this site.

It was quite interesting to conduct a "school search" to read about my own district's information, characteristics, enrollment by race and by grade. This could be a great starting point for the "public records" aspect of antecedent data that is discussed in our textbook, although it doesn't go into great detail.  The college navigator was an interesting surprise, and definitely something I'll recommend to our HS guidance department as an objective tool for students exploring higher ed options.  Younger students might enjoy exploring the Kids Zone: http://nces.ed.gov/nceskids/, especially in math class working with graphing authentic data sets. It's also a great way to engage students in their own education process (learning about their school, comparing it with others, etc.)

I have never heard of our district's students participating in the National Assessment of Educational Progress (NAEP), which is "the largest nationally representative and continuing assessment of what America's students know and can do in various subject areas" -- I'd be curious to learn more about who participates in this (especially since it is a national, rather than state, assessment tool).

It would be helpful if the site had more information for students, educators, or administrators looking to conduct their own educational research. It appears this site is mostly fact-providing, not skill-focused.  A lot of the data tools appear very advanced and tailored for a specialist audience.  In addition, it's difficult to interpret what data collection tools were used in many cases.

Saturday, March 14, 2015

EdTech 523: Management Issues in the Online Course

Discussion Prompt:
This week, we will be discussing management issues in the online discussion forum, and strategies to address them. Please refer to two resources for this week's discussion. In Module 4, the Management Issues document describes different issues that could arise in the online discussion forum. In the Purposeful Engagement document, the author lists many different ways that an educator can manage online communication in an effective manner.

In your experience as either a teacher or a student in online courses, which method(s) mentioned in the Purposeful Engagement document have you found to be most effective and why?
Have you ever had issues such as the ones mentioned in the Management Issues resource come up in classes that you've taught or taken? If so, please discuss how you addressed it. If not, imagine one of these issues did come up; how would you handle it (as a teacher or as a student)? Pick an issue, and describe how you would handle it.
 
My Contribution:
Purposeful Engagement Methods

Because participation in online course discussions is so open-ended to begin with, I find that any method that communicates structure (of time, set-up, and purpose) and sets clear expectations will be a helpful strategy to promote maximum participation. As a student (never a teacher) of online courses, I really appreciate clear guidance on minimum posting requirements. It helps take away some of the "antsyness" about knowing if I've written enough content, posted frequently enough, and generally taken part as is expected. It reminds me of Spanish class when the teacher had a clipboard, jotting tallies for every comment we made to add up participation points. It created a preoccupation to say something just to say something! Throughout a week a person can't be constantly checking or adding to a discussion board, nor should they be expected to, so if there are "check-ins" throughout the week, students can make sure to plan their time accordingly. They will also know that fresh content from other classmates will be guaranteed by those check-ins, so discussions will not be bunched up all at the end of the week. One thing I would say is that I'm not convinced that discussion boards are really the same as attendance. Just as there would be "lurkers" (or listeners) in a face-to-face class, there may be instances of that in an online course as well. While a student might not spout out tremendous amounts of thoughts and opinions on a discussion board, they may be internalizing some of the information from their peers and using it in their course products or personal reflection journal. I suppose what I'm trying to say is that it's hard to truly document one's "attendance" in an online course, and I'm not 100% convinced that online discussions are the only way.

Management Issues

Although I haven't taught an online class before, I can imagine there would be many “Must-have-an-A” students. I do like the response from Ko and Rossen. In addition, in an effort to prevent these kinds of emails in the first place, I would make sure to be VERY upfront about what it takes to get an A in my course. I would keep all rubrics or scoring guides in a central place, and explicitly highlight the pathway to that grade. I would be sure to include exemplar products (A-worthy) for every assignment so students had a sense for the depth of thought and quality of work I was looking for as an evaluator. This is a difficult one, however, because we want to promote student-centered learning in our classes. Well, how does student-centered learning fit nicely with teacher-supplied A's, B's, and C's? Yes, self-reflection through self-assessment is important, and it should be encouraged as much as possible. Nevertheless, most students still see grades as "given" by their instructors (rather than earned by themselves). The instructor must give the message that they are objectively submitting grades based on how students met the criteria for the class (which should be publicly-accessible, balanced, and easy-to-understand). If one "must have an A" then they can use those tools as a checklist of sorts, just like the instructor will.

[A Peer's Reply: Hi Erica. I agree with you that clear expectations and structure are helpful to promoting participation for online discussions. I found your opinion of participation in discussions linked to participation in class to be thought provoking. I've been that student in the traditional setting that would sit and listen to everyone else, and then used the assignments to show what was going on in my head. Because the focus needs to be placed on learning and the process it takes to get to the end goal, that personal reflection can be just as powerful a tool as the discussion boards. My question for you would be if you did not use the discussion board as your tool for measuring "attendance" in class, how else would you measure their participation? ]


My Response: Yes, this is definitely the challenge (although in my opinion attendance is different from participation, even in the traditional classroom setting...)  Perhaps measuring participation comes down to the student highlighting their best work throughout the course at the end of the semester, showing that they fulfilled the objectives to a certain degree... It wouldn't matter, then, if it were discussion posts, reflections, snippets from assignments, etc. It's up to the student to select the assignment/format that best addresses each learning goal. I certainly appreciate that this takes away from the community-minded collaborative style of the class, however. I wonder, if discussion boards weren't mandatory or grade-dependent, what would their use be like? Are there other ways that participation can be incorporated besides the whole-group threaded discussion board model? What about small group discussions, partner tasks, collaborative document writing... It'd require lots of brainstorming, but I feel like there must be other ways to demonstrate and measure participation. I'm certainly not in any way against discussion boards, and enjoy taking part in them... just feel it's important to consider other ways of doing things and "measuring" things. As for attendance, daily "check-ins" or software like Moodle that can gauge the number of actions or the time signed into the class could provide helpful statistics that take into account "lurkers" who are spending a long time reading and learning (and hopefully applying) vs. the students who quickly check in to post any old thing to fulfill a minimum requirement. There certainly is no perfect solution.
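As a rough illustration of that log-based idea, here is a minimal sketch using a made-up activity-log format (not Moodle's actual export); it counts each student's actions and estimates "time signed in" from the gaps between consecutive events.

```python
# Turn hypothetical LMS activity-log events into simple participation stats.
from datetime import datetime
from collections import defaultdict

# (student, timestamp) events, as they might appear in an activity log
log = [
    ("ana", "2015-03-09 18:02"), ("ana", "2015-03-09 18:15"),
    ("ana", "2015-03-11 20:40"), ("ben", "2015-03-10 07:05"),
    ("ben", "2015-03-10 07:06"),
]

SESSION_GAP_MIN = 30  # gaps longer than 30 minutes start a new "session"

events = defaultdict(list)
for student, stamp in log:
    events[student].append(datetime.strptime(stamp, "%Y-%m-%d %H:%M"))

for student, times in sorted(events.items()):
    times.sort()
    minutes = sum(
        (b - a).total_seconds() / 60
        for a, b in zip(times, times[1:])
        if (b - a).total_seconds() / 60 <= SESSION_GAP_MIN
    )
    print(f"{student}: {len(times)} actions, ~{minutes:.0f} min of logged activity")
```

Of course this only captures clicks, not thinking, which is exactly the "lurker" caveat above.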

* * *
Reply to another Peer:
I also think that this reference is a great tool for showcasing exemplary and poor responses. These are good models for students for whom this kind of peer response might be unfamiliar. A sixth-grade teacher I work with, who was delving into blogging with her students, got them comfortable with the response process by offering sentence starters for them to pick among. Her students were discussing teen activists, and each student was responsible for showcasing a different person they'd read about. Responders had to pick from these replies:

-Another good thing I’ve heard this person has done is…..

-He/she is similar to _______ teen activist in these ways…

-A question I have about this teen activist is…

-An interesting article I found about this activist can be found here ________. It mentions…

-This surprises me because…

Sometimes it's just a matter of getting the students started -- scaffolding them to create an academic versus purely social reply.