Sunday, February 23, 2014

EdTech 503: Discussion #2



1) Research indicates that the single most important piece of information an instructional designer must consider is prior knowledge (Streamlined ID, p. 53). What are some strategies instructional designers can use to discover prior knowledge? How can educators access this information?

(February 19)
While observations and conversations with learners are convenient and popular ways to gauge students' prior knowledge, they can lead to a vague, subjective view of what learners already know rather than a true objective analysis.  Surveys can be appropriate for a quick overview of general skills, but they must be well-written to cover the different types and depths of knowledge required for a specific course, as it is often difficult for people to assess their own abilities.  Concrete evidence of prior knowledge could come in the form of performance tasks or key-concepts tests (which could easily be implemented through a multiple-choice quiz, by holding up answer cards (yes/no or colors), or by using response clickers).  Examples of these can be found here.  Another objective method, one that provides a picture of both the content and the organization of knowledge, is the use of concept maps, such as those discussed here.  View this example and this one too.  Concept maps include terms/concepts, the relationships between them, and a classification of those relationships (with linking words).  They should include examples as well as a hierarchy from general to specific terminology.  Although they may be complicated to explain at first, and a bit time-consuming for the learner to complete, they provide a good view of misconceptions and of the complexity of a learner's overall understanding of a topic.
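
Since a key-concepts test could be implemented as a multiple-choice quiz, here is a minimal sketch in Python of how the scoring might be tallied by concept area.  The question IDs, answer key, and concept labels ("fractions," "ratios") are hypothetical placeholders, not part of the original discussion.

    # Hypothetical answer key: question id -> (correct choice, concept area)
    ANSWER_KEY = {
        "q1": ("b", "fractions"),
        "q2": ("a", "fractions"),
        "q3": ("d", "ratios"),
    }

    def score_pretest(responses):
        """Count a learner's correct answers per concept area."""
        per_concept = {concept: 0 for _, concept in ANSWER_KEY.values()}
        for question, (correct, concept) in ANSWER_KEY.items():
            if responses.get(question) == correct:
                per_concept[concept] += 1
        return per_concept

    print(score_pretest({"q1": "b", "q2": "c", "q3": "d"}))
    # -> {'fractions': 1, 'ratios': 1}

A tally like this, collected before instruction, shows which concept areas the group already has and which need to be taught from scratch.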

I haven't used these types of intentional analytic pre-assessments in my classes before, but their value is clear.  Another appealing measurement tool is the use of Classroom Assessment Techniques (CATs).  They are quick, immediate, and useful for gauging levels of understanding across an entire group of learners (rather than individuals).  Although they are often used during instruction, they can be adapted for use beforehand as well.  Please visit this webpage for examples of CAT activities.  The "Minute Paper" sounds to me very similar to the "K" in a KWL chart, in which students have 1-2 minutes to write down everything they know about a topic.  The instructional designer can then group and classify the responses into three levels of understanding to derive an overall impression of background knowledge, as sketched below.
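
As an illustration, here is a minimal sketch of that grouping step, assuming the designer has already labeled each Minute Paper response with one of three (hypothetical) levels of understanding:

    from collections import Counter

    # Hypothetical labels the designer assigned while reading responses
    labeled_responses = ["novice", "developing", "proficient",
                         "developing", "novice", "novice"]

    tally = Counter(labeled_responses)
    total = len(labeled_responses)
    for level, count in tally.most_common():
        print(f"{level}: {count}/{total} ({100 * count / total:.0f}%)")
    # novice: 3/6 (50%), developing: 2/6 (33%), proficient: 1/6 (17%)

The percentages give the overall impression of background knowledge for the whole group at a glance.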

I believe conversations with and observations of learners about their prior knowledge need to be supplemented with concrete evidence that can be documented.  It's important to take time to construct carefully worded pre-assessments that shed light on the skills, concepts, types of knowledge, and connections among knowledge for the target topic.  Oftentimes these prior assessments look similar to regular formative assessments.

* * * * * * * * *
(February 23)
Caroline, I really appreciate that you brought up the idea of individualized instruction embedded in the ID process.  This is something I've been concerned about, as the ID process, at least initially, came across to me as very formulaic and prescriptive.  It seemed like designing an input-output machine.  In the analysis stage, we gather a lot of raw data and make many generalizations and conclusions.  But, like you, I feel it's important to remember the individuality of the learner.

Personalized learning involves acknowledging that students will bring different sets of prior knowledge to a task, and that the learning experience may have to be varied for them to succeed in achieving the desired objective (even though their products may look different).  Assessing prior knowledge is not a one-off task.  It's part of the continual, additive revision process of ID.

Caroline, I also really liked your idea of incorporating pre-assessment into the actual lesson plan of an instructionally designed class or activity.  This pre-assessment might be an abbreviated version of what the instructional designer went through in initially creating the activity.  Or it could be a sort of informal "streaming" activity that groups students by knowledge/ability level (with regard to the specific task objective).

I feel that designing different launching points for instruction based on the results of the pre-assessment activity ensures the plan is inclusive and appropriate for all.  The launching point could differ for different groups of individuals.  It could either help bring students "up to speed" enough to complete the task, or help them expand upon the upcoming task's requirements so that the experience is more enriching and "new."  (An extension would be appropriate if, say, the bulk of the lesson were teaching students how to make presentations with Google Docs and several already knew how.)  A rough sketch of this kind of streaming appears below.
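
To make the idea concrete, here is a minimal sketch of such a streaming step in Python.  The score thresholds, group names, and maximum score are hypothetical and would depend on the specific task objective.

    def launching_point(score, max_score=10):
        """Assign a learner to a launching point from a pre-assessment score."""
        pct = score / max_score
        if pct < 0.4:
            return "up to speed"   # review prerequisites before the task
        if pct < 0.8:
            return "core task"     # begin the task as designed
        return "extension"         # enrich the task (e.g., for students who
                                   # already know Google Docs presentations)

    scores = {"Ana": 3, "Ben": 7, "Chi": 9}
    print({name: launching_point(s) for name, s in scores.items()})
    # -> {'Ana': 'up to speed', 'Ben': 'core task', 'Chi': 'extension'}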

Another possibility for using different jumping-off points embedded in instructional design is a bit more student-directed.  Howard Gardner introduced the idea of "entry points" as a way for students to approach a new topic according to one of their interests or intelligences.  In his framework, students could enter an activity or topic by choosing the Narrational Window, the Logical/Quantitative ("numerical") Window, the Foundational ("existential") Window, the Experiential ("hands-on") Window, or the Aesthetic Window.  (I found an interesting overview with an example here, perhaps from another EdTech course?!)  Implementing an activity like this could shed insight into students' prior knowledge and experience while making them motivated and confident in completing the upcoming task.  This type of entry point, or launching point, would clearly work better in some disciplines than others, but the idea that students choose where/how to enter is another option.  The rest of the instruction could be more streamlined and universal, while always being conscious to value and incorporate students' preferences and intelligences.

Achieving instructional objectives is the target of any ID process, and we know that there are many, many ways to do that.  So an approach that plans for student diversity and includes performance flexibility is important.  It's not enough to address that diversity just "behind the scenes" or in a general sense.


2) a) In case study 7, critique the steps Maya took to identify the needs in the case. What are some things she did well? What would you suggest to improve her practice?
b) In general, is a needs analysis always necessary when planning an instructional design project? Why or why not?

(February 20)
a) Maya was a very empathetic, patient, and reflective thinker when it came to initiating an instructional redesign of Ruth Ann’s 7th grade math course.  I would have loved to have such a helpful colleague at my side at many points in my career! I believe she was a good sounding board and “needs-detective”; however, at the same time I can understand how Ruth Ann would have left the exchange feeling a bit unsatisfied and uncertain about the next steps.
Maya was overall quite thorough in her needs analysis process in that she systematically collected data from various resources and stakeholders. She:
  1. Identified teacher assumptions and beliefs (which included behavioral distractions, motivational struggles, and deficiencies in basic skills).
  2. Noted defining characteristics of students (overall demographics and home life).
  3. Recorded field notes from interviews with students (from two different ability levels and a lower grade level), the teacher (Ruth Ann), and the assistant principal regarding their:
    1. Attitudes and values about the subject and its relevance
    2. Opinions about materials (books, money, technology use) and methods (collaborative work, board work, problem solving)
    3. Thoughts on assessment measures (homework as well as state- and school-dictated)
  4. Noted which methods had been tried before and had proven unsuccessful, according to the teacher.
  5. Sought points of alignment between the materials (textbook) and state standards.
I believe Maya could have done a better job of collecting more objective data through surveys and other numerical/statistical tools (rating scales, frequency-count forms, checklists, etc.).  Measurable data would have shown patterns, trends, and priorities more clearly than conversational interviews, and it would have helped make a more compelling case for the focus of the solution (e.g., which problem was the most pressing or universal?).  Maya could also have helped paint a clearer picture at this stage by including a detailed list of the currently available resources as well as any constraints that would limit the new plan’s design, budget, and/or schedule.
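
For instance, here is a minimal sketch of the kind of numerical summary those tools could have given Maya, assuming hypothetical survey items and responses (a checklist of obstacles and a 1-5 relevance rating):

    from collections import Counter
    from statistics import mean

    # Hypothetical survey data
    biggest_obstacle = ["basic skills", "motivation", "basic skills",
                        "behavior", "basic skills", "motivation"]
    relevance_ratings = [2, 1, 3, 2, 2, 4]  # "math feels relevant to me", 1-5

    print(Counter(biggest_obstacle).most_common())
    # -> [('basic skills', 3), ('motivation', 2), ('behavior', 1)]
    print(f"mean relevance rating: {mean(relevance_ratings):.1f}/5")
    # -> mean relevance rating: 2.3/5

Even a tiny summary like this turns impressions ("the kids seem unmotivated") into a ranked, documentable picture of which problem is most widespread.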

Maya also started her analysis of the data well. First of all, she noted that the results showed a discrepancy between the textbook and the standards.  The analysis also pointed out that Ruth Ann, her “client,” did not really have a clear vision of what her class should be like.  In her mind, what did the students actually need to know, do, and believe?  The state standards weren’t going to teach themselves.  Yes, Ruth Ann wanted an innovative, “real-world” math environment, but she was scared to leave the sense of security and control that came from drill-and-practice techniques. So what was her priority: introducing open-ended and complex math problems, including more writing in the problem-solving process, improving overall motivation, incorporating more technology, or bolstering basic math skills? This lack of a focused definition of the problem made defining the performance gap quite difficult, or at best imprecise.  As a result, the goals of the new teaching plan were not anchored in anything concrete and clear-cut.

I thought it important to note that throughout the analysis process, Maya’s own expectations were also challenged. Just like any teacher, IDers have their own assumptions, which must be deconstructed in the analysis phase.  In this case there was a cultural context at work in the background of these students’ lives that she had not considered: resistance to education in general.  This observation prompted her to emphasize real-world connections in the plan she suggested going forward.

I believe Maya was a bit too quick to suggest a plan to Ruth Ann, and the plan she rather casually introduced lacked definition.  Integrating a variety of new curricular materials sounded like a promising fix, as it would hopefully address the issues of relevance and motivation.  However, Ruth Ann still had concerns, which included alignment to standards, a lack of targeted focus on improving basic math skills, an overemphasis on writing in the math curriculum (which was not motivating to her students), a lack of teacher training and comfort with the materials and pedagogy, assessment, and parent/community response.  A priority-mapping technique may have helped connect these concerns with the solutions embedded in the plan.  It would have weighted some concerns more heavily than others, which may have helped Ruth Ann feel less overwhelmed.  After all, no single solution was going to be able to address all the problems she articulated.
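
One simple way such a priority mapping could work, as a minimal sketch: rate each concern for impact and feasibility, then rank by a weighted score.  The concern names come from the case, but the ratings and weights here are hypothetical.

    # Hypothetical 1-5 ratings for each concern Ruth Ann raised
    concerns = {
        "alignment to standards": {"impact": 5, "feasibility": 4},
        "basic math skills gap":  {"impact": 5, "feasibility": 3},
        "teacher training":       {"impact": 4, "feasibility": 2},
        "writing emphasis":       {"impact": 2, "feasibility": 4},
    }
    WEIGHTS = {"impact": 0.7, "feasibility": 0.3}

    def priority(ratings):
        return sum(WEIGHTS[k] * v for k, v in ratings.items())

    ranked = sorted(concerns, key=lambda c: priority(concerns[c]), reverse=True)
    for name in ranked:
        print(f"{priority(concerns[name]):.1f}  {name}")
    # 4.7 alignment to standards, 4.4 basic math skills gap, ...

A ranked list like this would have let Maya and Ruth Ann agree to tackle the top one or two concerns first instead of everything at once.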

Maya’s plan needed a bit more evidence to support its implementation.  It needed to demonstrate how it was, in fact, a solution.  Ruth Ann was clearly having difficulty seeing how the new curricular materials were going to address the gaps in students’ skill levels.  She still trusted the recently adopted textbook and appeared fearful of diverging from it too readily.  While the discussion was healthy and helpful in that it “broke down” the problem, Maya could perhaps have done a better job of analyzing and prioritizing the pieces before suggesting a new plan in a relatively haphazard way.

b) I, too, believe a Needs Analysis is best practice for a thorough instructional design project.  It’s helpful because it forces the designer and client to come to some agreement about what is currently taking place and what should be taking place.  Only then can the needs be determined.  Coming up with the “what should be” was a difficult phase in the process for Maya and Ruth Ann, and I’m not convinced that they successfully bridged the gap.

* * * * * * * * *
(February 23)
What a helpful guided analysis process you shared with us, Mike. Thank you! Like you and many others here, I believe that in this (all-too-familiar) story of Maya and Ruth Ann, the needs analysis comprised too many narrative or subjective components. Raw data was missing at the front end, and a well-defined, objective evaluation plan was lacking at the back end. There were so many variables contributing to the problem (or, in my opinion, problems) that it was difficult to come up with a feasible plan.

I thought you brought up a great suggestion for Maya to seek and define “specific performance improvements (test scores, student reported feelings, etc)”. I feel that Maya offered a well-meaning and perhaps excellent solution to Ruth Ann (try new materials and a new way of teaching). However, it was such a tall order, and it was left rather open-ended, which means its implementation was likely to be poor. Not only that, but it seemed Ruth Ann had little faith in the prospect of the devised plan. I believe Maya shouldn’t have encouraged her to bite off more than she could chew. The plan needed to be broken down into pieces, each aligning with a specific performance or behavior. They should have devised a plan of attack that included “baby steps” to help transition from one teaching style, classroom structure, array of materials, and set of expectations to the new one. What Maya was suggesting was a pretty drastic shift in pedagogy. Instead, perhaps she could have selected and clearly defined one performance indicator to tackle each term.

Also, as you rightfully said, Mike, Maya’s understanding of the learning problem was holistic. She was able to absorb a wide variety of components from the atmosphere of the learning context. For example, she seemed to really “get” the students and their attitudinal/motivational problems. However, in my opinion, that’s where the success of this ID initiative trailed off. Maya needed to carefully tease out the specific problems she wanted Ruth Ann to address. I agree with the question you posed, Mike: “can the problem be defined in tangible, quantitative terms?” Despite all the analysis, I feel strongly that there was little definition of the actual problem. Or rather, there were too many distinct problems lumped together for a one-size-fits-all solution to address. As I said in my initial reply, there was no clear-cut performance gap on which the instructional design solution could anchor itself. Another factor was the lack of quantitative data to support the definition of the problem.

I must say that my approach to this problem would have been very similar to Maya’s. Despite my critique of various aspects of her game plan, in reality I would probably have gone through a similar thought process. I would try to gain a general, overall understanding of my students and their issues, and then try something drastic or wildly creative to transform their learning and get them to move beyond those “issues.” I am not inherently systematic in my approaches! However, by reflecting on our readings and various case studies, I am gaining a better appreciation of the scientific, problem-solving approach to instructional design. Focus is key. Data is important. Cause and effect needs to be at the heart of decision-making.
