Independent learning
May 28th, 2009 by Frank LaBanca, Ed.D.
from: www.rwd.com

I’ve been working on several projects lately that consider autonomy of learning, whether for students or adults.  Specifically, I am (a) working with the High Ability Inquiry Research group at McGill University trying to define the term inquiry literacy, (b) working with some of my Ed.D. colleagues from Western Connecticut State University on several independent publications from our dissertations, (c) preparing professional development programming for Oxford, (d) developing a Moodle site for a blended learning course I teach, and (e) working with my applied research students on their continued work.  These activities have me continually thinking about being a self-directed, self-effective, lifelong learner.

I was recently invited to view a fantastic wiki, written by my colleague, Donna Baratta, Library Media Specialist from Mildred E. Strang Middle School in Yorktown, NY.   Although I believe her wiki is currently private, it includes a wonderful explanation of models for professional development:

Five Models of Staff Development by Sparks and Loucks-Horsley may be used to differentiate instruction in order to meet the needs of teachers based on years of experience, level of technology use and/or mastery, and professional goals in conjunction with district initiatives, NYSED Standards and more. (This information also appears under the heading of Models and Activities on the Models page.)  Differentiation in regard to technology PD is particularly significant, as learners may vary from reluctant users to confident users of technology.  PD must be designed to meet the needs of all learners participating in the PD experience.

Five Models of Staff Development by Sparks and Loucks-Horsley

 1.  Individually Guided Staff Development

     A process through which teachers plan and implement their own activities to promote their own learning

 2.  Observation/Assessment

     This model provides objective data and feedback regarding classroom performance to produce growth or identify areas for growth

 3.  Involvement in a Development/Improvement Process

     Teachers engage in curriculum development, program design, or a school improvement process

 4.  Training

     Individual or group instruction that involves teachers in the acquisition of knowledge

 5.  Inquiry

     Teachers identify an area of instructional interest, collect data, and make changes in their instruction based on an interpretation of those data

(Sparks & Loucks-Horsley, 1989, p. 41)

 Further Reading:

Differentiation: Lessons from Master Teachers  

Recommended Reading: (Not available from ERIC in time for this posting)

Sparks, Dennis. Journal of Staff Development, Fall 2005, Vol. 26, Issue 4, p. 4. (AN 20217427)
Gregory, Gayle H. 2003. 132 pp. (ED476461)

I really like the progression presented, allowing for a continuum of growth as expertise increases.  We certainly should be aiming for teachers to be engaged in independent action research as part of professional growth, evaluation, and supervision.  I am convinced that this change process of teacher as researcher and practitioner is one of the necessary steps to allow for systemic increases in student achievement.  Best practices will continue to develop out of an evidence-based profession, not one based on anecdotal, feel-good, been-doin’-it-fer-years strategies.

I think this might have applications beyond the professional growth model, as we think about how to develop 21st-century skills in all learners, both educators and our students.

Citations responsibility
May 28th, 2009 by Frank LaBanca, Ed.D.
When I blog, I often include a photo, often searched from Google images, that I post on the right side of my posts.  Stylistically, it’s just what I’ve done over the years, and I don’t intend to change this practice.   What I do intend to change is my responsibility to identify my sources for images.  I’ve noticed that WordPress (my blogging platform) gives options to include a caption with each photo – I will start using that caption to include a citation. 

I teach my students the responsibility of giving credit to others.  I need to model good practice and do the same myself.

Blogging Live
May 15th, 2009 by Frank LaBanca, Ed.D.
from: www.thesharkbook.com

Right now, I am attending a professional development session with Dr. Katie Moirs.  She works with the CT State Department of Education.  Her presentation is entitled “Assessment for Learning Presentation.”  I will comment as her presentation goes and will post at the end.  I am doing this to document the session, but also to experience what live blogging is like for me.

She is beginning to speak about assessment literacy.  This is interesting to consider – a meaningful definition might emerge.

Use of assessment: In the old days, assessment meant standardized tests. No longer are we focused on standardized tests that rank order students.  What are we concerned with?  Think about a balanced assessment strategy. 

  • Institutional level:  e.g., CAPT, CMT – a bad thing is that we rank order schools.  Where do standardized tests fit within the big picture of assessment?  They DON’T help kids learn – rather, they are used for accountability.  They are reliable and valid, yet they are insular.  They measure a restricted skill set.  
  • Benchmark level:  program evaluation at a building or district level.  Common assessments fit into this category.  Within this school, this is how many kids are at a certain, measurable level.  It’s also an accountability measure, because it’s closer to home.  These assessments are school/district specific.  There is still accountability, but they generally still don’t directly promote student learning.  SRBI (scientific research-based intervention) operates at this benchmarking level. 
  • Classroom level:  Most neglected area of training, yet the most important.  Formal, informal, summative, or formative.  This is what helps kids learn.  What really promotes student achievement and learning is what happens in the classroom.  That’s why it’s so important to develop meaningful assessments.

Cognitive psychology approach and framework.  Think about the importance of assessment training at the undergraduate level. 

  • Crystallized to fluid ability. Students start at a basic level and acquire basic skills, basic procedures, and facts.  These are simple, easily assimilated, and easily automated – once achieved, they are crystallized.  Once there, students move to fluid abilities: doing something with the knowledge acquired.  When they can apply it to novel situations, they can problem solve and tackle new things.   She refers to Picasso and developing skills. 
  • Novice to expert ability.  Moving from novice to expert problem solving.  The more knowledge acquired, the better the problem finding and problem solving.  There are big differences between novice and expert English students. 
  • Anderson & Krathwohl.  A revised Bloom’s taxonomy to make it more useful for educators in various domains.  Cognitive process dimensions are mapped onto knowledge dimensions (see the sketch after the table below).

 

Knowledge dimension | Remember | Understand | Apply | Analyze | Evaluate | Create
--------------------+----------+------------+-------+---------+----------+-------
Factual             |          |            |       |         |          |
Conceptual          |          |            |       |         |          |
Procedural          |          |            |       |         |          |
Metacognitive       |          |            |       |         |          |

A website that focuses on knowledge dimensions and cognitive processes:  http://oregonstate.edu/instruct/coursedev/models/id/taxonomy/#table
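
Purely as an illustration (my own, not part of Dr. Moirs’ presentation), here is a minimal sketch in Python of how the two dimensions of the grid above intersect: each cell is a (knowledge dimension, cognitive process) pair, and a classroom objective can be filed into the cell it occupies.  The example objectives are hypothetical.

# Illustration only: the revised taxonomy's two dimensions as a simple grid.
# The example objectives below are hypothetical, not from the presentation.

KNOWLEDGE_DIMENSIONS = ("Factual", "Conceptual", "Procedural", "Metacognitive")
COGNITIVE_PROCESSES = ("Remember", "Understand", "Apply", "Analyze", "Evaluate", "Create")

# Each cell of the grid is a (knowledge dimension, cognitive process) pair.
grid = {(k, p): [] for k in KNOWLEDGE_DIMENSIONS for p in COGNITIVE_PROCESSES}

def classify(objective, knowledge, process):
    """File a learning objective into the cell it occupies."""
    grid[(knowledge, process)].append(objective)

classify("List the steps of the scientific method", "Factual", "Remember")
classify("Design an experiment to test a new hypothesis", "Procedural", "Create")
classify("Reflect on which study strategy worked best", "Metacognitive", "Evaluate")

# Print only the occupied cells.
for (knowledge, process), objectives in grid.items():
    if objectives:
        print(f"{knowledge} x {process}: {'; '.join(objectives)}")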

  • Stiggins.  A practical way to use assessments.  Offers the following taxonomy:  (a) knowledge mastery, (b) reasoning proficiency, (c) skills, (d) ability to create products, (e) dispositions

Assessment can be divided into two categories: selected response and constructed response.  There are benefits to both.  Mapping assessments onto this continuum is critical to figuring out what’s going on, because it is necessary for making sure you are following the progression from crystallized to fluid ability.

Pulling it together. You need foundational knowledge in order to do higher order thinking.    However, you can never assess anything perfectly.  Internal and external errors always exist. 

High reliability and high validity for selected responses (but they measure a limited, insular skill set) → low reliability and low validity for constructed responses (because there are no right or wrong answers).  If teachers develop students’ knowledge and skills, then students should be successful on the standardized tests – there has to be a careful mesh of the two. 

  • Clear and appropriate learning targets.  Content and learning standards come from the state – guidelines for schools about what students should know and be able to do.  How do I operationalize what I am measuring?  How can I take what students are learning and measure it?  Standards are limiting, but they present a starting point.  Backward mapping from assessments to teaching.
  • Observable indicators of performance.  When you think about what you are measuring: is it observable, defined, and measurable – and is it reliable and valid?
  • Appropriateness of assessment method.  Are skills and abilities aligned with the assessment?  What do I want students to show, do, and know?  How do we map skills and knowledge onto assessment?
  • Trained assessors.  I am a team of one in my classroom.  If I teach X, Y, and Z, does my assessment test A, B, and C?  They need to be aligned – otherwise validity and reliability are really low. 