Blogging Live
May 15th, 2009 by Frank LaBanca, Ed.D.


Right now, I am attending a professional development session with Dr. Katie Moirs.  She works with the CT State Department of Education.  Her presentation is entitled “Assessment for Learning”.  I will comment as her presentation goes, and will post at the end.  I am doing this to document the session, but also to experience what live blogging is like for me.

She is beginning to speak about assessment literacy.  This is interesting to consider – a meaningful definition might emerge?

Use of assessment: In the old days, assessment meant standardized tests.  No longer are we focused on standardized tests that rank-order students.  What are we concerned with?  Think about a balanced assessment strategy.

  • Institutional level:  e.g., CAPT, CMT – a drawback is that we rank-order schools.  Where do standardized tests fit within the big picture of assessment?  They DON’T help kids learn – rather, they are used for accountability.  They are reliable and valid, yet insular.  They measure a restricted skill set.
  • Benchmark level:  program evaluation at a building or district level.  Common assessments fit into this category.  Within this school, this is how many kids are at a certain, measurable level.  It’s also an accountability measure, because it’s closer to home.  These assessments are school/district specific.  There is still accountability, but they generally still don’t promote student learning.  SRBI (Scientific Research-Based Intervention) operates at this benchmarking level.
  • Classroom level:  Most neglected area of training, yet the most important.  Formal, informal, summative, or formative.  This is what helps kids learn.  What really promotes student achievement and learning is what happens in the classroom.  That’s why it’s so important to develop meaningful assessments.

Cognitive psychology approach and framework.  Think about the importance of assessment training at the undergraduate level.

  • Crystallized to fluid ability.  Students start at a basic level and acquire basic skills, basic procedures, and facts.  These are simple, easily assimilated, and easily automated – once achieved, they are crystallized.  From there, students move to fluid abilities: doing something with the knowledge acquired.  When they can apply it to novel situations, they can problem solve and tackle new things.  She refers to Picasso and developing skills.
  • Novice to expert ability.  Moving from novice to expert problem solving.  The more knowledge acquired, the better the problem finding and problem solving.  There are big differences between novice and expert English students.
  • Anderson & Krathwohl.  A revised Bloom’s taxonomy to make it more useful for educators in various domains.  Cognitive process dimensions are mapped onto knowledge dimensions.


Knowledge dimensions (factual, conceptual, procedural, metacognitive) crossed with the cognitive process dimensions (remember, understand, apply, analyze, evaluate, create).
A website that focuses on knowledge dimensions and cognitive processes:  http://oregonstate.edu/instruct/coursedev/models/id/taxonomy/#table

  • Stiggins.  A practical way to use assessments.  Offers the following taxonomy:  (a) knowledge mastery, (b) reasoning proficiency, (c) skills, (d) ability to create products, (e) dispositions

Assessment can be divided into two categories:  selected response and constructed response.  There are benefits to both.  Mapping assessments onto this continuum is critical to figuring out what’s going on, because it is necessary for making sure you are following the progression from crystallized to fluid ability.

Pulling it together.  You need foundational knowledge in order to do higher-order thinking.  However, you can never assess anything perfectly.  Internal and external errors always exist.

Selected responses offer high reliability and high validity (but measure a limited, insular skill set); constructed responses offer lower reliability and validity (because there are no strictly right or wrong answers).  If teachers develop students’ knowledge and skills, then students should be successful on the standardized tests – there has to be a careful mesh of the two.

  • Clear and appropriate learning targets.  Content and learning standards from the state are guidelines for what students should know and be able to do.  How do I operationalize what I am measuring?  How can I take what students are learning and measure it?  Standards are limiting, but they present a starting point.  Backward-map from assessments to teaching.
  • Observable indicators of performance.  When you think about what you are measuring – is it observable, defined, and measurable – but is it reliable and valid?
  • Appropriateness of assessment method.  Are skills and abilities aligned with assessment?  What do I want students to show, do, and know?  How do we map skills and knowledge onto assessment?
  • Trained assessors.  I am a team of one in my classroom.  If I teach X, Y, and Z, does my assessment test A, B, and C?  The two need to be aligned – otherwise validity and reliability will be really low.
