Sunday, February 8, 2015

EdTech 505: Week 4

Creating an outline for my Final Evaluation Report + additional activities.



A pertinent discussion post I contributed this week:
Here is my eval project outline (with lots of questions!) so far.
My project is to evaluate our district's year-long fourth grade Chromebook 1:1 pilot program (with a view to using the pilot as a means of testing the practicality and value of distributing Chromebooks to all students). Some of the goals of this program are to improve rigor (in terms of collaboration, creativity, and critical thinking skills), support and enhance student responsibility and independence in learning, offer differentiated curriculum via personalized remediation and enrichment, improve the keyboarding skills required by online standardized testing, and advance student learning along the SAMR model.  A sub-goal of the Chromebook initiative is to improve teaching pedagogy, with a focus on increased rigor and teacher collaboration (in learning the device, apps, and new teaching practices).
The fourth grade teachers themselves drove this pilot program by submitting an internal grant to a district-wide, teacher-led committee. Some of the student outcomes they cited were collaborative writing, blogging, multimedia and other creation tasks, practice with targeted apps for spelling, vocabulary, and math skills, and self-assessment techniques (particularly with reading). Teacher outcomes included working together to develop innovations in course content presentation and class structure.
I aim to conduct the evaluation by including all 75 fourth grade students and three teachers in one or two surveys.  As this is a year-long program (and has been in action for several months), I'm trying to figure out how to collect "pre-test" kinds of information... or is it best to draw a line in the sand, call the first survey the pre-test, and administer another survey at the end of the evaluation to serve as the post-test?  I don't think that would be very authentic, and I'm hoping to figure out a solid plan for handling the surveys.

As the evaluator, I also plan to conduct a few "snapshot" visits to classrooms, where I take stock of the kinds of activities I observe in order to draw out any patterns or trends.  I would like to use student-created work as a data source, but I'm not sure how to make that collection objective: I don't just want work that shows how the teachers could be using the devices for certain skills, but evidence of whether they actually are.  Keyboarding is the only skill I feel I can evaluate with an outside assessment (some kind of typing test), although even that is a bit artificial, because we are concerned with how students integrate keyboarding into their habitual, natural use of the Chromebooks (impact vs. effectiveness).

I understand we're meant to shy away from interviews, but I am hoping to distribute a structured questionnaire to the media specialist/librarian as well as the principal, to see if they can offer their data (thoughts? opinions? observations?) on students' independence and on teacher collaboration with the devices.  As you can see, I'm still struggling with the data sources and the best ways to collect data while staying as objective as possible.  The stakeholders involved in these tasks are definitely on board with this evaluation plan and are happy to accommodate.  I just need to reiterate to the teachers, especially, that this is not a judgmental/critical exercise!
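To make the pre-test/post-test question more concrete for myself, here is a minimal sketch (in Python, purely as a thinking aid; the file name, column names, and scoring scheme are all hypothetical) of how paired pre- and post-survey scores for the same students could be compared once both surveys exist:

    # Minimal sketch: compare paired pre/post survey scores per student.
    # Assumes a hypothetical CSV with columns: student_id, pre_score, post_score
    import csv
    from scipy.stats import ttest_rel

    pre, post = [], []
    with open("survey_scores.csv", newline="") as f:
        for row in csv.DictReader(f):
            pre.append(float(row["pre_score"]))
            post.append(float(row["post_score"]))

    # Paired t-test: did the same students' scores change over the pilot?
    result = ttest_rel(post, pre)
    mean_change = sum(b - a for a, b in zip(pre, post)) / len(pre)
    print(f"Mean change: {mean_change:.2f}, p-value: {result.pvalue:.3f}")

Of course, if the "pre" survey happens months into the pilot, any change measured runs from mid-pilot forward rather than from a true baseline, which is exactly the authenticity problem I described above.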
Completing the Outline of the Evaluation Report was a welcome exercise and helped me wrap my brain around this task.  I also completed assignment option B, which forced me to organize my "knowns" into their appropriate categories and to think critically about aligning evaluation data sources and tasks to evaluation questions.  I know this will require a lot of revision, but it was a helpful starting point.  I'm counting on our textbook and the contents of this course to offer more guidance in establishing good evaluation questions, selecting the best data collection methods, and properly (objectively and thoroughly) analyzing results.  It seems like a daunting task right now, as this is such a big project to scrutinize.
