Expecting the “right” answers
Jan 19th, 2012 by Frank LaBanca, Ed.D.

I have long been an advocate for conceptual learning – big ideas. At the heart of good conceptual teaching is quality assessment. It is HARD to ask good questions of students. But I sometimes wonder if teachers are always looking for the “right” answer. I have always felt that it is better to find the “best” answer. Here’s a list of questions with some interesting answers. Of course, most of these questions are lower-order thinking factual recall. However, I love the divergent thinking!

Q1. In which battle did Napoleon die?
* his last battle

Q2. Where was the Declaration of Independence signed?
* at the bottom of the page

Q3. River Ravi flows in which state?
* liquid

Q4. What is the main reason for divorce?
* marriage

Q5. What is the main reason for failure?
* exams

Q6. What can you never eat for breakfast?
* Lunch & dinner

Q7. What looks like half an apple?
* The other half

Q8. If you throw a red stone into the blue sea, what will it become?
* Wet

Q9. How can a man go eight days without sleeping?
* He sleeps at night.

Q10. How can you lift an elephant with one hand?
* You will never find an elephant that has only one hand.

Q11. If you had three apples and four oranges in one hand and four apples and three oranges in the other hand, what would you have?
* Very large hands

Q12. If it took eight men ten hours to build a wall, how long would it take four men to build it?
* No time at all, the wall is already built.

Q13. How can you drop a raw egg onto a concrete floor without cracking it?
* Any way you want, concrete floors are very hard to crack.

Problem solving isn’t always obvious
Apr 26th, 2010 by Frank LaBanca, Ed.D.

from: kidsaccident.psy.uq.edu.au


As some might notice, I had a friend design a new header for my blog.  Mark maintains his consulting business at www.mokturtle.net.  He designed the header (which is similar to my homepage labanca.net), sent me some files, and then I had to figure out how to upload them and get them working on my WordPress blog.  I enjoyed the challenge of figuring out how to get it all to work. My problem solving involved several different techniques and cognitive mechanisms (from Wikipedia): 

  • Brainstorming: suggesting a large number of solutions or ideas and combining and developing them until an optimum is found.
  • Lateral thinking: approaching solutions indirectly and creatively.
  • Means-ends analysis: choosing an action at each step to move closer to the goal.
  • Morphological analysis: assessing the output and interactions of an entire system.
  • Research: employing existing ideas or adapting existing solutions to similar problems.
  • Trial-and-error: testing possible solutions until the right one is found.

Often, when some think of problem solving, especially from an educational standpoint, it comes down to: 

  • Hypothesis testing: assuming a possible explanation to the problem and trying to prove (or, in some contexts, disprove) the assumption.
This linear method may have applications at times, but doesn’t really allow for the creative potential that is often necessary when solving ill-defined problems:  problems that have more than one possible method of reaching the outcome, or perhaps problems that have more than one acceptable outcome. 

Enter a project that I conducted with my students:  Each student was required to create a short blog post, which had to include a graphic and a self-made media clip (audio or video) about a genetic disorder.  I created a blog (actually two:  here and here), established student accounts, and let them go.  In my usual style, I was intentionally vague so as to not limit the creative potential of the students. 

It was interesting to see that most of the questions I received as the students worked on their projects over the course of a week were focused on operating the blog platform.  The questions were simple, directed, and easy to support. Students had to troubleshoot the best ways to make their presentations work.  I think, though, they really could focus on the content without getting bogged down in the idiosyncrasies of technology.

What do I take away?

  1. The tools allow students to focus on content rather than the minutiae of form to create attractive products.
  2. Using the tools has its own challenges and allowing students to work through these problems is good problem solving.
  3. Quality of content is still important.  Glitz is no substitute for understanding.  Just because we made something fancy doesn’t mean that we can allow the quality of the concepts to slip.
  4. In the four years since I last gave this assignment, student IT skills have improved tremendously.  I needed to provide very little support for students to make their media components – they know how to do it, and most of them have the tools.  I did loan some digital voice recorders to some, but did NOT have to provide instructions for usage.
  5. Making and editing video has become incredibly easy, and there is a wide variety of tools to do it: webcams, digital cameras, cell phones, and video cameras; PC: Movie Maker; Mac: iMovie.

Allowing students to be creative producers is critical; these kinds of projects move us in the right direction.

Multiple choice or open-ended question . . . it doesn’t matter under the right circumstances
Nov 2nd, 2009 by Frank LaBanca, Ed.D.

The fact of the matter is that objective assessments are here for a while.  How do we as teachers find the balance between objective assessments and authentic assessments? I am a strong proponent of authentic assessments:

  • position (critical stance) papers
  • lab reports
  • poster presentations
  • oral PowerPoint presentations
  • Blog posts and responses

They provide a more realistic cognitive apprenticeship for students as they build their knowledge.  But for better or worse, there is an obligation for teachers to work with students and allow them to engage in more objective assessments: timed tests on specific content.  I’ve often worked with teachers who indicate that they would NEVER use a multiple choice question.  They spout off some nonsense about the nature of the question.  However, I would only agree with them if the multiple choice question is a fact check. 

I would classify types of questions (whether objective or authentic)  that teachers ask students into three major categories:

  • factual
  • conceptual
  • analytical

Factual questions are just that: checking facts – isolated information that stands alone.  They generally fall much lower on Bloom’s Taxonomy (knowledge/comprehension).  Conceptual and analytical questions, though, are higher order thinking questions.  Conceptual questions are ill-defined, allowing students to connect ideas together and draw on knowledge.  Analytical questions are well-defined, challenging students to interpret information or data and make calculations. 

I’ve seen essay questions that were just as factual as a factual multiple choice question.  Conversely, when students are challenged to connect ideas or analyze information – that’s higher order thinking no matter what the format.

I often think back to a teacher who would tell me that his midterm exam had 300 multiple choice questions for the 2-hour period.  My students can barely complete 40 multiple choice questions during the same time frame.  Easy reason: my questions require more thinking and analysis.  His only check facts.  My student test booklets are covered with notes, comments, calculations, and figures.  There certainly is something to it.  The challenge for educators is to put more emphasis on HOTS – no matter what the format.  Authentic assessment can stink just as much as some forms of objective assessment if it isn’t pushing students to higher levels of intellectual engagement.

So, ultimately, it’s not what we ask students to do – it’s how we ask them to do it.

I’ve done more detailed posts about conceptual assessment here and here.

Engineering project inspires creativity
Jul 13th, 2009 by Frank LaBanca, Ed.D.

As part of the curriculum I developed for Beacon Preservation’s Green Light Academy, students participated in a hands-on, minds-on activity to develop and build a small-scale solar still.  In true “guided inquiry” format, we gave the students some minor expository information about concepts of distillation for purifying salt water, and then asked them to design and build their own still using wood splints, plastic wrap, and different adhesives. 

I was absolutely amazed how engaged the students were.  They were building, asking questions, sketching, thinking, and really working hard.  They actually wound up working over an hour longer than we initially had planned.  No problems on my end.  When you are working with flexible time, and not confined to the “tyranny of the bell,” you can make great learning experiences occur.  Best of all, students were being creative, and NOT working under the traditional frameworks often associated with a science lab: 

  • a clear, defined procedure,
  • identifying variables, constants, and controls,
  • meticulous data collection.
from: rael.berkeley.edu


I think science instruction often focuses on logical/analytical processes.  However, this was an engineering project – build, develop, deliver.   And although there were logical and analytical thoughts, there was more of an emphasis on creativity.  There was no one design that would work (the well-conceived (structured) question), but rather an unlimited number of possibilities (the ill-conceived (open-ended) question).  Many students were in awe that we, as teachers, did not have a “right” answer in mind.

What has bothered me, however, was the evidence.  I think I somewhat dropped the ball, because I didn’t plan well to document student learning.  Sure, I anecdotally perceived student learning of concepts and creativity development, but how did I know it actually occurred?  I think it’s so important that we are able to show that students have, in fact, learned.  I have been thinking about ways to better document the concept learning and am curious about a good assessment method/mechanism for such a task.

Conceptual assessment increases science knowledge acquisition
Jun 25th, 2009 by Frank LaBanca, Ed.D.



I recently gave an objective test to my students on an Evolution Unit.  The test consisted of multiple choice questions and short answers.  I know many moan when they hear about multiple choice questions, and their groans are justified. 


Part I:  You see, multiple choice questions often test isolated facts – a knowledge/comprehension type of assessment, fairly low on Bloom’s Taxonomy.  However, well written multiple choice questions can be more conceptual or analytical.  Students are challenged to apply their knowledge using higher order thinking skills.  This is what I strive for in my assessment strategies.

Part II:  Objective tests are often used as end-points to learning.  Teacher and students engage in learning activities which result in content and concept acquisition, which are then summatively assessed.  Learning stops prior to the assessment.  I’ve often wondered why learning had to stop there and why it couldn’t continue after an assessment was given.  In my case, I allow students to debate and vote for the best answer for multiple choice questions – which allows for even more higher order thinking.  Please note that I say “BEST” answer.  Since the questions are conceptual in nature, sometimes other answer choices are factually accurate, but don’t answer the question in the best possible way.  We get AWAY from right and wrong.  After the debate, some students are not necessarily in agreement with their peers, in which case, they have the option to write a response to justify their disagreement.  At the same time, those who decide that their answers were also not the best have the option to demonstrate their learning in writing, and earn credit back. 

I was recently impressed by this evolution test, and the high-quality thinking that was associated with their understanding of the evolution concepts.  Please note, these questions are short, yet they stimulate deep, sophisticated understanding of concepts.  Don’t believe me?  Read some student responses.  This is about empowering students to be independent, self-directed, critical thinkers.  My role is clearly the facilitator, NOT the knowledge disseminator.


My question:

2. Insects with wing mutations that prevent flight (e.g., in fruit flies, some flies have crumpled wings throughout their lives) usually can’t survive long in nature. Flightlessness is selected against. But in three of the following environments the trait could actually be selected for. In which environment would useless wings NOT be selected for?

     a. an island where stiff winds blow some flying insects out to sea, never to return.

     b. a swamp full of frogs that can see and catch flying insects better than crawling insects.

     c. a forest full of bats that catch and eat insects while in flight.

     d. a cage with no predators, in which food is provided in low dishes.

     e. a cage with slippery walls that insects cannot climb and an electrified screen on top that electrocutes insects that touch it.


A student response, indicating that her answer was incorrect 

2.a The original answer selected was A, that insects with useless wings would not be selected for an island where stiff winds blow some flying insects out to sea, to never return. This answer was chosen because it seemed to be the worst environment for an insect with useless wings and the best environment for an insect with functional wings. This means that insects with functional wings would be selected for an environment where stiff winds blow while insects with useless wings would not be selected for this environment. Although insects with flying wings have the chance of flying out to sea in the winds, it was assumed that insects that could not fly would have a harder time escaping this stiff wind. This would make the environment more suitable to insects with functional wings. However this assumption was incorrect.

b. The class discussion involved many possible answers. There were various reasons behind each class member’s choice of answer. However, in the end, the possible answers were narrowed down to D, a cage with no predators, and E, a cage with slippery walls that insects cannot climb and an electrical screen on top that electrocutes insects that touch it. Reasoning behind D was that it was the most neutral answer. This environment would select insects with both functional and useless wings because food is readily available at low places which can be reached by both types of the insects. Reasoning behind E was that insects would have no source of food to survive on and therefore would not be selected. Finally, the class decided that D was the best answer because it suited both insects.

c. The correct answer is D. D is an environment in which both insects, with or without functional wings, would be selected. The question specifically asked in which environment useless wings would not be selected for. All other choices than D include situations where insects with useless wings would be selected for. In A, an island where stiff winds blow some flying insects out to sea, never to return, useless wings would keep an insect on the ground where it would be safe from the stiff winds. Therefore, the insects would be selected in this environment and A is not a correct choice. In B, a swamp full of frogs that can see and catch flying insects better than crawling insects, the insects with useless wings would have a better chance for survival over the insects with functional wings. Therefore, the insects with useless wings would be selected over insects with functional wings, so B is not a correct answer. In C, a forest full of bats that catch and eat insects while in flight, the insects with useless wings would not risk being caught because they do not fly while insects with wings do. Therefore, the insects with useless wings would be favored in this environment, so C is not the best answer. In E, a cage with slippery walls that insects cannot climb and an electrified screen on the top that electrocutes insects that touch it, insects with functional wings would try to fly to the top and then get electrocuted while insects with useless wings would remain safe on the bottom of the cage. Therefore, this environment would be favorable to insects with useless wings, so E is not the best answer. However, D is the best answer. In this environment, a cage with no predators in which food is provided in low dishes, neither of the insects, with or without functional wings, would be favored. Therefore, in this environment, insects with useless wings would not be selected over insects with functional wings.


My question:

7. A biologist studied a population of squirrels for 15 years. Over that time, the population was never fewer than 30 squirrels and never more than 45. Her data showed that over half of the squirrels born did not survive to reproduce, because of competition for food and predation. Suddenly, the population increased to 80. In a single generation, 90% of the squirrels that were born lived to reproduce.  What inferences might you make about that population?

          1. The amount of available food probably increased.

          2. The number of predators probably decreased.

          3. The young squirrels in the next generation will show greater levels of variation than in the previous generations because squirrels that would not have survived in the past are now surviving.

     a. 1, 2, and 3 are correct.

     b. 1

     c. 2

     d. 3

     e. Both 1 and 2 are reasonable inferences.

 A student response indicating that she disagreed with the class’ conclusion.


7) a. The original answer chosen was a. 1, 2, and 3 are correct. This answer was chosen based upon the belief that, if a population increases suddenly, reasonable inferences to be drawn from the information given would be that there would be more variation in genes in that population, predation probably decreased, and the amount of food available probably increased.

b. The class discussion focused upon the fact that large populations tend to have a stable gene pool and therefore, according to the class, the correct answer to the question would be e. both 1 and 2 are reasonable inferences. The class agreed with the original answer in that the lack of predation and the increase in food would be reasonable inferences to draw from the information given.

c. The class discussion was not convincing, and the best answer is still a. 1, 2, and 3 are correct for various reasons. The class discussion was based upon the fact that the gene pool of large populations is stable, but this fact does not address the amount of variation within a population.

A large population might have a stable gene pool, but that gene pool will still have a great amount of variation. If a population of squirrels increases sharply due to a lack of predation and an abundance of food, squirrels that might not have favorable characteristics will have a better chance of procreating. This reproduction will increase the amount of genetic variation within the population. Endangered species have reduced genetic variation because the population is so small; this is because many of the traits that were not favorable were lost due to the loss of many of the species. The opposite would be true with a species that was allowed to greatly increase in population. Many unfavorable traits would be allowed to flourish and this would increase genetic variation. Therefore, a. 1, 2, and 3 are correct is the best answer to the question.

Blogging Live
May 15th, 2009 by Frank LaBanca, Ed.D.


Right now, I am attending a professional development session with Dr. Katie Moirs.  She works with the CT State Department of Education.  Her presentation is entitled “Assessment for Learning Presentation”.  I will comment as her presentation goes, and will post at the end.   I am doing this to document the session, but to also experience what live blogging is like, for me.

She is beginning to speak about assessment literacy.  This is interesting to consider – a meaningful definition might emerge?

Use of assessment: In the old days, assessment meant standardized tests. No longer are we focused on standardized tests that rank order students.  What are we concerned with?  Think about a balanced assessment strategy. 

  • Institutional level:  e.g., CAPT, CMT – a bad thing is that we rank order schools.  Where do standardized tests fit within the big picture of assessment?  They DON’T help kids learn – rather, they are used for accountability.  They are reliable and valid, yet they are insular.  They measure a restricted skill set.  
  • Benchmark level:  program evaluation at a building or district level.  Common assessments fit into this category.  Within this school, this is how many kids are at a certain, measurable level.  It’s also an accountability measure, because it’s closer to home.  They are school/district specific.  There is still accountability, but they generally still don’t promote learning at the student level.  SRBI (scientific research-based intervention) operates at the benchmarking level. 
  • Classroom level:  Most neglected area of training, yet the most important.  Formal, informal, summative, or formative.  This is what helps kids learn.  What really promotes student achievement and learning is what happens in the classroom.  That’s why it’s so important to develop meaningful assessments.

Cognitive psychology approach and framework.  Think about the importance of assessment training at the undergraduate level. 

  • Crystallized to fluid ability. Students start at a basic level and acquire basic skills, basic procedures, and facts.  Simple, easily assimilated.  Easily automated – once achieved, they are crystallized.  From there, students move to fluid abilities: doing something with the knowledge acquired.  When they can apply it to novel situations, they can problem solve and tackle new things.  She refers to Picasso and developing skills. 
  • Novice to expert ability.  Moving from novice to expert problem solving.  The more knowledge acquired, the better problem finding and problem solving.  There are big differences between novice and expert English students. 
  • Anderson & Krathwohl.  A revised Bloom’s taxonomy to make it more useful for educators in various domains.  Cognitive process dimensions are mapped onto knowledge dimensions.


(Table: cognitive process dimensions mapped onto the knowledge dimension – not reproduced here.)
A website that focuses on knowledge dimensions and cognitive processes:  http://oregonstate.edu/instruct/coursedev/models/id/taxonomy/#table

  • Stiggins.  A practical way to use assessments.  Offers the following taxonomy:  (a) knowledge mastery, (b) reasoning proficiency, (c) skills, (d) ability to create products, (e) dispositions

Assessment can be divided into two categories: selected response and constructed response.  There are benefits to both.  Mapping assessment onto a continuum is critical to figuring out what’s going on, because it is necessary to make sure you are following a crystallized-to-fluid progression.

Pulling it together. You need foundational knowledge in order to do higher order thinking.    However, you can never assess anything perfectly.  Internal and external errors always exist. 

High reliability and high validity for selected responses (but they measure a limited, insular skill set) → low reliability and low validity for constructed responses (because there are no right or wrong answers).  If teachers develop students’ knowledge and skills, then the students should be successful on the standardized tests – there has to be a careful mesh of the two. 

  • Clear and appropriate learning targets.  Content and learning standards from the state.  Guidelines for schools of what students should know and be able to do.  How do I operationalize what I am measuring?  How can I take what students are learning and measure it?  Standards are limiting, but they present a starting point.  Backward mapping from assessments to teaching.
  • Observable indicators of performance.  When you think about what you are measuring – is it observable, defined, and measurable – but is it reliable and valid?
  • Appropriateness of assessment method.  Are skills and abilities aligned with assessment?  What do I want students to show, do, and know?  How do we map skills and knowledge onto assessment?
  • Trained assessors.  I am a team of 1 in my classroom.  If I teach X, Y, and Z, does my assessment test A, B, and C?  Need to be aligned – otherwise really low validity and reliability. 

Do teachers “believe in” data driven decision making?
Jan 22nd, 2009 by Frank LaBanca, Ed.D.

As teachers strive to increase the quality of instruction, more evidence-based practices have been implemented in classrooms. A recent trend challenges teachers to evaluate data to make decisions that will inform their instruction. My current district strives to collect student data, but I think we still struggle with what to do with that information. We can collect it, but do we do anything with it?

In my leadership role, I have done my due diligence with my department to really think about data in meaningful ways. After all, as scientists and science teachers, we strive to use natural empirical evidence with our students to draw meaningful conclusions. Should we do the same for ourselves as we measure achievement data? As a “pocket” in the faculty community of our district, I think we are taking great steps to use assessment as a meaningful tool to help students learn.

I am writing about this because over the past week I’ve watched an irony in all of this. Fortunately this irony does not apply to the teachers in my department. You see, the past week has been midterm exams. Many evaluate students using multiple choice questions. I feel strongly that multiple choice questions, if well written and conceptually based, can be very effective assessment instruments. (I’ve written about this before.) Many teachers use machine grade sheets to efficiently correct the papers.  I have no issue with this.  In fact, I provided a machine (a very affordable product from Apperson) connected to a laptop with data analysis software installed.

A teacher could log on to the laptop, start the data analysis software, scan his or her students’ exams, correct them, and have a full analysis of the questions in a matter of minutes.  This is effective use of teachers’ time.  They gather necessary information and learn about trends of student understanding.  Who could ask for more?

The irony?  No one except my department members and two other teachers has logged onto the laptop. Teachers are correcting their exams without a care for the analysis of the data.  The machine grades, puts a score on their students’ papers, and they walk away.  They are not collecting what could be the most valuable information of all: the item analysis.  I’m sure there are lots of reasons why.  I’m also sure that none of them are any good.  Anyone who has ever collected item analysis data knows how valuable it is.
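For anyone wondering what an item analysis actually computes, the two classic statistics are item difficulty (the proportion of students answering correctly) and item discrimination (how well the item separates high scorers from low scorers). A hypothetical sketch follows; the data and the top/bottom-27% grouping convention are illustrative assumptions, not the output of any particular scanner's software:

```python
# Classic item analysis: difficulty index and upper-lower discrimination.
# The score matrix is hypothetical, invented for illustration.

def item_analysis(responses):
    """responses: one list of 0/1 item scores per student.
    Returns (difficulty, discrimination), one value per item."""
    n_students = len(responses)
    n_items = len(responses[0])
    totals = [sum(r) for r in responses]

    # Difficulty index: proportion of students answering each item correctly.
    difficulty = [sum(r[i] for r in responses) / n_students
                  for i in range(n_items)]

    # Discrimination: proportion correct in the top 27% of students (by total
    # score) minus the proportion correct in the bottom 27%.
    ranked = sorted(range(n_students), key=lambda s: totals[s], reverse=True)
    k = max(1, round(0.27 * n_students))
    upper, lower = ranked[:k], ranked[-k:]
    discrimination = [
        sum(responses[s][i] for s in upper) / k
        - sum(responses[s][i] for s in lower) / k
        for i in range(n_items)
    ]
    return difficulty, discrimination

# Six students, four items (1 = correct) -- hypothetical data.
scores = [
    [1, 1, 1, 0],
    [1, 1, 0, 1],
    [1, 0, 1, 0],
    [1, 1, 0, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 0],
]
difficulty, discrimination = item_analysis(scores)
```

An item that nearly everyone gets right (difficulty near 1) or that high and low scorers answer alike (discrimination near 0) is a candidate for rewriting – exactly the trend information the scanned exams could have provided.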

In a time when we say that data is so important, I wonder how many actually, truly, and really believe it?

Conceptual multiple choice questions
Dec 12th, 2008 by Frank LaBanca, Ed.D.

One of my good friends and colleagues, Nick Kowgios, is perhaps the most innovative, thoughtful educator I have ever met.  He developed a method for assessment coined “Test Debate/Test Analysis” where students (i) take a multiple choice test; then, as a class, (ii) debate and vote on the answers to the test; and finally (iii) metacognitively write about the choices they made and the impressions they had.  This process is very Socratic and allows the teacher to truly be a facilitator. 

On the surface it sounds very odd. Students vote for the best answers and decide?  It would probably sound even odder if I told you that students have debated one multiple choice question for well over an hour.  However, Nick’s work has demonstrated that this method produces statistically significant increases on standardized tests (AP exams, state exams). 

I’ve used the method, and what strikes me is that assessment becomes more formative.  In other words, we often teach students concepts, learning stops, we assess, and we move on. In this format, we teach students concepts, we assess, and learning continues.  The key to the whole process is that assessment MUST be conceptual.  Nick and I were chatting about the application from his discipline (English/LA) to mine (Science), and some of the resistance he has encountered from science teachers.  Here’s part of what I wrote to him:

I would categorize science learning and assessment into three broad categories:

1. factual

2. conceptual

3. analytical


Factual is clearly a category where teachers are concerned with isolated facts out of context.  Conceptual is as you and I think about it – in science assessment, more so using big ideas to analyze scenarios and apply knowledge.  Analytical would be more of a computational problem solving approach.  I think of conceptual questions as ill-defined problems and analytical questions as well-defined problems.  Both are inquiry-based, but a conceptual question can have multiple possibilities (i.e., the BEST answer), where a well-defined question has one right answer (i.e., the CORRECT answer).


Most chemistry teachers use an analytical approach to their teaching, so they might not realize that they have to change the way they assess – they need questions that have best answers instead of questions that have right answers.  (Is my distinction OK and clear?)  Conceptual learning generally works better (easier? less work for the teacher?  less change in philosophy?) in a non-quantitative course like Biology.


Today we were doing debate and this question really challenged the kids (about 30 minutes on this one):


8. A scientist suspects that the food in an ecosystem may have been contaminated with radioactive nitrogen over a period of months.  Which of the following substances could be examined for radioactivity to test that hypothesis?

a. the cell walls of plants growing in the ecosystem

b. the hair produced by skunks living in the ecosystem

c. the sugars produced during photosynthesis by plants growing in the ecosystem

d. the cholesterol in the cell membranes of organisms living in the ecosystem

e. any of these choices would work well.


The context of the question comes from a unit on macromolecules.  We had learned the structure of carbohydrates, lipids, and proteins.  We had not discussed radioactivity in any sense.  They should have had previous exposure to radioactivity, but ultimately, it doesn’t matter too much in the context of the question.  I’ll give my impression on the thought process that should/might happen:


First, students have to recognize that nitrogen is an atom and nitrogen makes up only certain macromolecules. (This, by the way, didn’t happen for all students – they got stuck on radiation as some amorphous property that could “drift” from one place to another, instead of being a physical property of the nitrogen atom (i.e., additional neutrons)).

1. carbohydrates are made from carbon, hydrogen, and oxygen

2. lipids are made from carbon, hydrogen, and oxygen

3. proteins are made from nitrogen, carbon, hydrogen, oxygen and sulfur

(4. nucleic acids (DNA/RNA) are made from nitrogen, carbon, hydrogen, oxygen and phosphorus – I put this one in parentheses because we did not talk about nucleic acids, and there are no nucleic acids in the choices above.)


Now students have to decide which of the choices might contain proteins (no longer just nitrogen).

a. cell walls are primarily made of cellulose – cellulose is a carbohydrate – but some proteins are present.  The radioactivity is probably mostly in the plants; however, it’s in the proteins of the plants, and there’s not very much of that in a cell wall.

b. hair of the skunk is primarily made of protein.  Toxins tend to bioaccumulate, so as you go up the food chain there should be a higher concentration.  I think this is the best choice.

c. sugars are carbs – no nitrogen.  Interestingly, a student quoted a book saying something about radioactivity in the photosynthetic process.  He was quickly slapped by another student who commented that he was talking about radioactive carbon, not radioactive nitrogen.

d. cholesterol is a lipid (steroid) – again, no nitrogen.

e. they simply don’t all work

The class primarily debated the merits of a and b.  I actually stopped for five minutes to make them do some data hunting for better support – they hit the books and came back, still arguing.  Ultimately the class went for b, because the “a” supporters were having trouble poking holes in the “b” argument. 


Notice how much I can write about a multiple choice question.  The students are just as passionate.  And the learning that is taking place is powerful.  Consider the following question.  The students in my class are split over the best answer.  Read the comments and see how they interpret, support, provide evidence, analyze, and synthesize information:

15.  A reasonable conclusion from the Sponge – Bacterial Growth Lab based on class data would be

a. the zone of inhibition prevents bacterial growth

b. Lysol is an effective antibacterial agent

c. pathogenic bacteria grow on Petri dishes

d. a moist, 37°C incubator is the optimal growing environment for cultured bacteria

e. microwaving a sponge for 1 minute effectively kills bacteria

Andragogy offers an effective use of formative assessment
Oct 22nd, 2008 by Frank LaBanca, Ed.D.

Adults have different expectations in learning than children do.  Andragogy, the teaching of adults, rests on the following important components and tenets:

* Adult learning is voluntary and learner-oriented. 

* Education brings freedom to the learners as they assimilate learning with life experiences.

* Andragogy encourages divergent thinking and active learning. 

* Often the roles of the learner and the teacher are blurred in the process. 

* Often there is uncertainty about the outcome of learning, regardless of the curriculum content. 

I currently have the pleasure of working with many expert teachers in the quantitative statistics course I am teaching for WestConn.  Interestingly, though, the course I am teaching puts many of these expert students in an uncomfortable novice position. 

Research demonstrates that there is a difference in learning between novice professionals and expert professionals.  Three main aspects of performance change as learners move from novice to expert: 

* The novice professional’s work paradigm focuses on abstract principles, while the expert uses concrete past experiences.

* The novice often views situations discretely, where the expert sees situations as part of a whole.

* The novice is often a detached observer, where the expert is an involved performer (Daley, 1999). 

A striking difference between novices and experts is that novices are often hindered by the specifics of the job, where experts are often hindered by the system.  Novices prefer, and learn best in, formal settings, where experts learn best informally, often in conjunction with their peers.  Novice professionals favor learning strategies like memorization and therefore accumulate information, while expert professionals use dialogue to create a knowledge base (Daley, 1999).  When I consider my students, clearly from an andragogical standpoint, they behave as experts. 

Throughout the course, I have assigned work for the students to learn and master statistical techniques that may be useful to them as they begin to research their educational passions.  The assessments have been designed to be formative in nature.  As such, many students submit assignments, wait for meaningful feedback, make necessary changes, and resubmit.  I am glad that many feel comfortable presenting work, knowing that it may require revision.  After all, much learning takes place when there is dialogue (in this case, electronic dialogue).  Mistakes are just as valuable as successes.  In an adult learning environment, where students are motivated to learn, we can take advantage of the formative process.

In just a short while, they will begin to work on dissertations, and that is a totally formative process.  Glad we can enjoy it now!
