Multiple choice or open-ended question . . . it doesn’t matter under the right circumstances
Nov 2nd, 2009 by Frank LaBanca, Ed.D.

The fact of the matter is that objective assessments are here for a while. How do we as teachers find the balance between objective assessments and authentic assessments? I am a strong proponent of authentic assessments:

  • position (critical stance) papers
  • lab reports
  • poster presentations
  • oral PowerPoint presentations
  • blog posts and responses

These better provide a realistic cognitive apprenticeship for students as they build their knowledge. But for better or worse, teachers have an obligation to work with students and prepare them to engage in more objective assessments: timed tests on specific content. I’ve often worked with teachers who indicate that they would NEVER use a multiple choice question. They spout off some nonsense about the nature of the question. However, I would only agree with them if the multiple choice question is a mere fact check.

I would classify the types of questions (whether objective or authentic) that teachers ask students into three major categories:

  • factual
  • conceptual
  • analytical

Factual questions are just that: checks of facts, isolated information that stands alone. They generally fall much lower on Bloom’s Taxonomy (knowledge/comprehension). Conceptual and analytical questions, though, fall under higher order thinking. Conceptual questions are ill-defined, allowing students to connect ideas and draw on their knowledge. Analytical questions are well-defined, challenging students to interpret information or data and make calculations.

I’ve seen essay questions that were just as factual as a factual multiple choice question. Conversely, when students are challenged to connect ideas or analyze information – that’s higher order thinking, no matter what the format.

I often think back to a teacher who would tell me that his midterm exam had 300 multiple choice questions for a 2-hour period. My students can barely complete 40 multiple choice questions in the same time frame. The reason is easy: my questions require more thinking and analysis; his only check facts. My students’ test booklets are covered with notes, comments, calculations, and figures. There is certainly something to that. The challenge for educators is to put more emphasis on HOTS (higher order thinking skills), no matter what the format. Authentic assessment can stink just as much as some forms of objective assessment if it isn’t pushing students to higher levels of intellectual engagement.

So, ultimately, it’s not what we ask students to do – it’s how we ask them to do it.

I’ve done more detailed posts about conceptual assessment here and here.

Bloom’s Taxonomy Paradox of Theoretical with Practical
Jun 9th, 2009 by Frank LaBanca, Ed.D.

I was recently speaking with a group of educators about using data to inform instruction. Specifically, my team at Oxford High School identified that students were having trouble with graph interpretation. Students could successfully construct a graph (title, label, and plot), both on paper and electronically, using data that they collected from experiments. Unfortunately, they were struggling to use a preconstructed graph to interpolate and extrapolate other information.

For example, we recently completed a DNA electrophoresis experiment, separating DNA to make DNA fingerprints. The fingerprints form banding patterns that need to be measured and then graphed. A specific control is used to determine a standard curve, which is then used to predict the sizes of the other bands in the gel. Graphs were made with little problem. However, when the students went to predict sizes based on the standard curve, things went awry.
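To make the arithmetic concrete, here is a minimal sketch of how such a standard curve works, using made-up ladder sizes and migration distances rather than our class data. In a gel, migration distance is roughly linear in the log of fragment size, so fitting a line to the control bands lets you estimate an unknown band’s size from its measured distance:

```python
import numpy as np

# Hypothetical ladder (control) data, for illustration only:
# fragment sizes in base pairs and the distance (mm) each band migrated.
ladder_sizes = np.array([10000, 5000, 2000, 1000, 500])
ladder_dist = np.array([12.0, 20.0, 31.0, 40.0, 49.0])

# Migration distance is roughly linear in log10(fragment size),
# so fit a straight line: distance = m * log10(size) + b.
m, b = np.polyfit(np.log10(ladder_sizes), ladder_dist, 1)

def predict_size(distance_mm):
    """Invert the standard curve to estimate a band's size in base pairs."""
    return 10 ** ((distance_mm - b) / m)

# Estimate the sizes of two unknown bands from their measured distances.
for d in (25.0, 44.0):
    print(f"band at {d:.1f} mm is roughly {predict_size(d):.0f} bp")
```

That inversion step, reading sizes back off the curve, is exactly the interpolation and extrapolation the students found difficult when working from the graph.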

This has been a consistent problem. I see the challenge: there is definite higher-order processing going on when students are trying to extract information from a data set, in this case a graph. We’ve focused on graph interpretation throughout the year, as we recognize this as a weak point for our students.

But this has got me thinking about Bloom’s Taxonomy.  A brief summary follows:

1. Knowledge (finding out)
a. Use – records, films, videos, models, events, media, diagrams, books…
b. Observed behavior – ask, match, discover, locate, observe, listen.

2. Comprehension (understanding)
a. Use – trends, consequences, tables, cartoons…
b. Observed behavior – chart, associate, contrast, interpret, compare.

3. Application (making use of the knowledge)
a. Use – collection, diary, photographs, sculpture, illustration…
b. Observed behavior – list, construct, teach, paint, manipulate, report.

4. Analysis (taking apart the known)
a. Use – graph, survey, diagram, chart, questionnaire, report…
b. Observed behavior – classify, categorize, dissect, advertise, survey.

5. Synthesis (putting things together in another way)
a. Use – article, radio show, video, puppet show, inventions, poetry, short story…
b. Observed behavior – combine, invent, compose, hypothesize, create, produce, write.

6. Evaluation (judging outcomes)
a. Use – letters, group with discussion panel, court trial, survey, self-evaluation, value, allusions…
b. Observed behavior – judge, debate, evaluate, editorialize, recommend.

If I consider the taxonomy, constructing a graph hits Level 4: Analysis. However, interpreting a previously constructed graph hits Level 2: Comprehension. This is interesting, because students are finding more success higher up the taxonomy and struggling lower on the continuum. There is supposed to be a higher level of thinking and processing associated with the higher educational objectives; however, practical experience tells me that this might not always be the case.

What ultimately is important is figuring out how to help students think and learn well.
