Saturday, January 31, 2015

EdTech 505: Week 2

Week 2 Assignment

Directions: Complete exercise 1 on p. 22. Submit 200-400 words in the designated section.

Monday, January 26, 2015

EdTech 505: Defining Evaluation

Considering Two Definitions of Evaluation (from B&T text)

1) The purpose of evaluation is "to determine whether and to what degree the objectives have been or are being achieved" (p. 4).
2) "Evaluation is the systematic process of collecting and analyzing data in order to make a decision" (p. 4).


My response to a peer on our discussion board:
I had a bit of trouble discerning between the two definitions because I thought they could be interpreted in different ways.  In the text's discussion that followed the two examples, the authors pointed out that the first definition is more of an objective "okay, here's where things are".  The second definition seems to encompass "okay, here's where things are and here's where you may want to go".  To me, the second definition is more productive because it's concerned with the bigger picture, the next steps, and with making an impact.  It doesn't just take a snapshot in time and leave it at that.

I agree with everything in your interpretation of the definitions except for the part "the evaluator may be collecting and analyzing data to help stakeholders design project goals, unlike the other definition in which evaluation occurs after the goals are set".  In my opinion, it would not be wise to put the cart before the horse and establish goals AFTER evaluation.  What are they evaluating in that case?  To me, a big part of evaluation is measuring the success of the objectives, which are, in essence, a breakdown of the project goals.  Those should be embedded in the initial design of the project.


Now, it makes sense that after evaluation the stakeholders may decide to alter their goals or redefine their program.  In that case, they may want to do the implementation and evaluation process all over again with those revisions.  Here is where your great summary of the purpose of a systematic process comes into play.  After making decisions (definition #2), the program design is tweaked, and the evaluation process can be replicated so that, as you say, "an inconsistent process cannot be blamed for results".  In my understanding, the difference between the two definitions is not the ordering of setting goals and evaluating but rather to what degree (if at all) "next steps" are proposed.  Not sure if I have that right, though!

Saturday, January 24, 2015

EdTech 505: SMART evaluation

Scenario: A former college roommate of yours is a new technology coordinator at a school district in a rural part of Idaho. In the first few days on the job, in a meeting with the school administrators, she learns that "most teachers in our building resist using technology, even though we just spent $75,000 buying brand new SMART Boards for their classrooms." Those SMART Boards are installed in the classrooms but are getting very little of their intended use. The administrators are concerned on multiple levels, including being worried about what will happen if district taxpayers learn those purchases are mostly going unused. The administrators then ask her to perhaps identify teachers in classrooms or even other schools who would make better use of the equipment. But - your friend is intrigued. Rather than simply take away the SMART Boards from teachers who are not using them, she wants to know why the teachers in the elementary school are not using the new interactive whiteboards. However, she's not really sure how to go about doing this and wants your advice.

I believe this scenario is very common for school districts that are quick to innovate with tangible items without thinking through the rationale and complete process (going beyond logistics) of implementation. It sounds to me as though the generally less-than-ideal use of SMART Boards in this district is related either to a lack of teacher "buy-in" or to a lack of proper training. Both of these causes are fixable, so evaluating teachers' attitudes, teaching styles, and knowledge/comfort levels (among other indicators) will help ensure that the solution addresses the root issue in an efficient and effective way.
I think an important first step is to understand the big picture of this issue. The technology coordinator should speak with administration to gather facts about the goals for this initiative. What was the rationale for spending $75,000 on SMART Boards in the first place (i.e. what were the objectives)?  Who made that decision, and what involvement did teachers have? (E.g. were they made aware of the objectives and told of their accountability?) It would also be helpful to gather information about timelines for implementation and any training sessions that were conducted to get an objective sense of the behind-the-scenes preparation for this technology roll-out.

The technology coordinator should then gather data about the problem to help determine the root issue. First of all, she could try to discern whether there is a pedagogical reason for the lack of use. Stopping into classrooms and gathering this kind of data through observation checklists or field notes might reveal patterns. Are the teachers who are not using the SMART Boards using other kinds of technology in their place? Are they typically demonstrating other types of teaching strategies that do not involve sharing information on a screen?  Is the room set up for a different kind of interaction? It's possible that this technology is a mismatch for the teaching styles of many staff members.  Their lack of use may not stem from laziness or confusion at all, which is why it is so important not to jump to any conclusions before looking at the issue from as many angles as possible.

The tech coordinator would also want to find out whether the teachers do not understand the purpose or value of these machines. This could be due to a lack of understanding of how they work and/or their capabilities beyond those of a whiteboard. It could also be due to a perception that these devices are an inferior or less efficient/effective way to teach their students. Or perhaps there is simply a lack of time to learn about and explore the device features. The best way to gather data on these attitudes and perceptions is through surveys. The coordinator could generate questions that touch upon teachers' skill and comfort level with the SMART Boards as well as their feelings about the power and value of these devices as a learning tool.

The fact-based background information, observational data, and attitude/perception surveys should help shed light on the "real" matter at hand. From here, the administration, teachers, and technology coordinator can work together to reframe, retrain, or re-implement this roll-out as needed.

EdTech 523: Educational Innovation

Discussion 1: What does innovation mean to you? In your post, include what personalized learning means to you, characteristics of innovation that you have identified and the challenges associated with thinking outside the box.
Modernist architects hold to the design philosophy that form follows function. It would be prudent for educational architects (teachers, curriculum writers, and instructional designers) to agree that the form(at) of instruction (i.e. methods) must derive from the objectives--the purpose of a learning program. Efficiency and effectiveness are key. Due to advances in technology and shifts in societal norms, today's students learn in physiologically different ways. What is valued in education is also changing. Therefore, the "function" of school is different. Innovation is a necessary part of education's evolutionary process; to avoid obsolescence, new requirements must be met with better solutions. Innovations are not just improvements, however, and they are not just hypothetical idealizations. They are attention-grabbing and revolutionary shifts in direction. In the realm of education, one manifestation of innovation is flexible, student-centric learning.
The Christensen, Horn, and Staker article seems to support a disruptive version of innovation in education. In this model, "newness" arrives in a discontinuous versus incremental and evolutionary way. To them, disruptive innovations do not try to bring better products, but rather offer a new definition of what is good. They believe that the "deployment of disruptive blended-learning models will have a greater impact on replacing the classroom with a student-centric design" (para. 12). Reflecting on this assertion, I felt slightly uncomfortable. Isn't such a disruptive change too drastic and harsh? Isn't it too much too soon? What about students who value and succeed in other models of education? Are we really ready to completely redefine "what is good"? In the spirit of compromise, I am in support of hybrid models of personalized learning. To me, a "combo" offering is more sustainable and appealing to a wider audience. In this way, more needs are being met and more interest is piqued. Blended learning and personalized education are two examples of this hybridity.
To me, personalized education means student voice and choice. There is a shift in power and autonomy such that education is guided by the learner. This learner does not get lost in the crowd or herded like a sheep through the "system". Rather, this learner is motivated and empowered to explore, take risks, seek support, manage time and resources, and pursue mastery (or at least successful achievement of fulfilled objectives). In Michael Horn's TEDx presentation, he claims that in our current education model, time is the constant and learning is highly variable, but that student-centric learning flips this relationship on its head. (He uses this juxtaposition to promote competency-based learning.) Flexible settings, time parameters, and presentation modes are hallmarks of personalized learning. Assessment that is formative, helpful, and cyclical rather than summative, conclusive (and potentially punitive) is another feature. I see this starting to take place in my school district, where data from formative assessments is informing teachers where their students are on a trajectory of learning, and targeted interventions are designed around these areas to help students meet specific, individual learning needs. There is also a shift towards standards-based grading (a step towards competency-based grading), which has become universal at the elementary school and is now creeping into the middle school. This is also an example of the evolutionary model of innovation versus the "all right stop, collaborate, and listen", halting (and perhaps imposing) model of innovation. (That song just popped into my head, sorry it's out of context to this discussion!)
Change is in the air, and many schools are starting to offer options for those looking to embrace these "outside the box" methods and philosophies. Nevertheless, there are challenges and dangers associated with innovation. Some of these include the danger of giving up power to an unknown and fickle player, of letting the "shock" of the change overpower conversations about actual progress, of systemic failure, and of being detached from reality to some degree (by ignoring some of the important norms and expectations that are still "inside" the box, whether we like it or not). There are also the challenges of organization, funding, resources, and training. These are real variables that need addressing throughout the innovation process.

References
Christensen, C. M., Horn, M. B., & Staker, H. (2013, May). Is K–12 blended learning disruptive? An introduction to the theory of hybrids. Retrieved from http://www.christenseninstitute.org/wp-content/uploads/2014/06/Is-K-12-blended-learning-disruptive.pdf
Horn, M. [TEDx Talks]. (2011, November 21). Toward student-centric learning [Video file]. Retrieved from https://www.youtube.com/watch?v=hF8hNkEG0ig&feature=youtu.be

Sunday, January 18, 2015

EdTech 505: Week 1

Personal Introduction

Paint a "word picture" of who you are by telling us about your professional background, especially as it pertains to this course (e.g., Tell us about your current situation. Besides your current position, what else have you done/taught in education or elsewhere? Other jobs?). Also, indicate:
1. Your personal objectives for taking this course. What do you hope to get out of this course?
2. Any questions or concerns about the course or using this online medium to take a college course?
3. Ever take another online course? If so, please describe your experience (good/not so good aspects).
4. Describe your previous experience with "evaluation."
5. Other information you could include:
  • What's your geographical location? Where's your hometown?
  • What brought you to Boise State?
  • What are your academic goals? Job goals? Life goals?
  • Anything we should know about your hobbies, interests, etc.?

Greetings everyone! I'm writing from my new home in Sheboygan, WI -- about 45 miles from where I grew up near Milwaukee.  We moved here this fall from Bermuda, and I've taken on a new role as Technology Integration Coordinator for a small, quite rural public school district.  It's a new position for the school, so there have been a plethora of initiatives and projects on the go!  For example, the district chose to use this year as a tech device piloting year whereby interested teachers wrote internal grants -- leading to a collection of LearnPads, Chromebooks, iPads, and laptops to manage throughout K-12. I work closely with the IT manager, and our two-person team handles pretty much everything from district website design to teachers' troubleshooting questions (a la "what does this message mean?") to "how can I integrate Google Apps with my flipped learning ideas?" I know I don't have all the answers, but having the time to brainstorm, research, and plan alongside my new colleagues has been a rewarding experience.  A lot of the tools the MET program has offered have come in handy as well. I found Boise State's EdTech program simply through a Google search for reputable, affordable, and fully online EdTech programs -- fully online being especially important to me while working in a country without the infrastructure for higher ed.

Prior to becoming a tech-focused teacher/admin/coach, I was a classroom teacher across a variety of age groups and subject matters.  Until this fall, I had been living abroad since graduating from college (UW-Madison), so I have had a sampling of international, government, and private schools. I completed my student teaching in Ecuador, taught grades 2 and 3 as well as middle school Math and Science in Venezuela, and was a combined grade 1-2 teacher, elementary music teacher, 6-12 computer studies teacher, and grades 5-8 multi-age teacher in Bermuda (work permits and immigration were tough in that country!).  I've always tried to stretch the limits as much as possible with the technology available in my classrooms. My time as a computer studies teacher cemented my desire to pursue this field as a Master's degree.  I knew it would lead to increased professional opportunities and make me a better teacher.

My personal objectives for this course are quite practical.  I hope that learning about program evaluation will improve my effectiveness as a Tech Integration Coordinator. There are indeed many programs and initiatives the teachers and I are juggling this year, and I think that a systematic approach to analyzing their components and assessing their viability/products/success will lead to more informed, fair, and objective conclusions.  Those conclusions will help drive our next steps about purchasing certain devices and/or embracing certain apps/methods.  My previous experience with "evaluation" has always been as a classroom teacher, but in retrospect this was an assessment of my students' learning more than of my program or methods. It was much less formal and systematic than the evaluation required for program management or instructional design, and took place on a much smaller scale.  The closest I have ever gotten to this kind of evaluation process was the evaluation of my middle school Scratch programming unit that I conducted in the EdTech 503 course. All in all, I hope to apply the skills, theories, and best practices I learn in this class directly to my job.

As I've taken several online courses in this program already, I don't have any specific questions or concerns at this point. I am quite familiar with the format and have gotten used to text-based, asynchronous discussions. I do miss the "vibe" of a face-to-face class and the buzz that interaction and shared space can generate.  Sometimes I get frustrated with group work online because of scheduling issues, unfamiliarity with the other person, and shared grades -- however, I've had good experiences overall in that domain in my MET classes so far.  My favorite part of online classes is working at my own pace/time and being able to use a variety of multimedia to communicate my thoughts.

In my free time I enjoy doing creative and artistic projects and spending time outdoors.  I love music as well, and hope to find more time to sing and practice the piano again.  On the whole, my husband and I are getting used to our new neighborhood and life in the US.  It's definitely an exciting new adventure! I look forward to meeting you all online throughout this course.

Here is a link to the Course Syllabus for EdTech 505: Evaluation for Educational Technologists