Young, T. (2005). “Better data … better decisions.” Library Media Connection, 23(4), 14-19.
Better data collection can save our libraries, but we need to collect only meaningful and purposeful data. Young’s table contrasts quantitative and qualitative data. We should place more emphasis on collecting data on students’ wants and needs, collection quality, and higher-order thinking inquiries. This data can then be used to describe library media center services, evaluate library media center programs, and measure performance.
This sounds great, but I imagine it is hard to do; numbers are so much easier to provide. It would also be hard to assess these qualitative measures if this type of data has not been collected in the past and you have no baseline to work from. Nevertheless, when we present the data to principals and school board members, it must make sense. Simply stated, the data needs to reveal that our library contributes to improved student achievement “by providing up-to-date instructional materials aligned to the curriculum and instructional practices; library media specialist collaborating with and supporting teachers, administrators, and parents; and extending their hours of operation beyond the school day.” That’s all we need to prove in order to save ourselves ;)
Was he trying to sell his book to me at the end of the article???
Woolls, Chapter 13
I was grabbed by David V. Loertscher’s question, “If you were arrested for contributing to the education of a student, would there be enough evidence to convict you?” (Woolls, 201). That’s what school librarians need to provide: evidence. Evidence that their program has value and contributes to student learning. I think evaluating a school library would be an extremely useful (and demanding) project for this course. An evaluation of a media program (not just the library itself) would help others understand the value and purpose of the media program. If teachers, parents, and administrators can read a document that states, “This is what we should be doing, this is what we are doing, and this is how well we are doing it,” they will know exactly how the media program contributes to academic achievement through its inclusion in the curriculum in relation to the school district and the community. Evaluation should also reveal where improvements can be made. I like this.
The table on page 205 really highlighted the contrast between what is expected of most school libraries and what should be expected. The “what should be” measures should be posted in every media center for all to see (in BIG, BLACK LETTERS!):
· This Media Center is not just a place to send students; it is a place with open access for research and reading based on collaborative programming.
· This Media Specialist not only shares lists of available resources to use in instruction, but also collaborates with teachers in planning, conducting, teaching, and evaluating instruction.
Of course, other measures need to be considered, including staff performance and appraisal (Is it a friendly place? Are librarians eager to help?) and collection measurement (though I feel this will become less and less important as Internet resources and ILL are used more heavily), but the most important aspect to consider in evaluating a media program is its contribution to student learning. It’s up to the Media Specialist to develop tools such as checklists, rubrics, rating scales, logs, and student portfolios to assess and demonstrate the program’s impact. The evidence that a media program contributes to student learning should be “condemning.”
AND- it’s snowing outside:(
Mueller, J. (2005). “Authentic assessment in the classroom … and the library media center.” Library Media Connection, 23(7), 14-18.
I’m in LOVE! Finally, a discussion on standards that I can relate to. I’ll just say it, really fast: “Idon’tliketheAASL’sStandardsforthe21st-CenturyLearner!” They don’t mean anything to me. I keep going back to the MDE Technology standards; I’m a hands-on, process-oriented, step-by-step-to-the-end-goal kinda gal. I don’t like the attitude of “this is what we all need to know, but you figure out how you are going to get there, OK hon?”
Mueller’s step-by-step process for assessing standards is right in line with my beliefs about how to guide students toward measurable learning goals, and I thought there was something wrong with me.
He’s right: it is extremely difficult to assess “cross-curricular process skills such as self-assessment, information literacy, collaboration, or metacognitive skills,” but his step-by-step process for creating this type of assessment makes so much sense to me.
Step 1: Writing Skills as Standards
Identify a good standard and write it in observable, measurable language. Ask, “How could the students demonstrate that they understand the concept or process? What would that look like?” Then develop the skills that will accomplish the standard.
Step 2: Design Tasks to Assess the Skills in Step 1
How can students demonstrate that they have acquired this skill and apply it in relevant contexts? Give them opportunities to do so. Create simple or complex tasks by asking questions such as "When would someone ever use this skill?" or "Why would someone ever need to know how to do this?"
Step 3: Identify the Criteria for the Skill
Identify the specific criteria of good performance on a task. What are the behavioral indicators of proficiency on a particular skill?
Step 4: Create Rubrics for Rating Skill Performance
Once the criteria for a task have been identified, a rubric, or rating scale, can be used to judge how well someone has met the criteria for performance on that skill task. Authentic assessment of skills does not require a rubric, but the use of rubrics can increase the consistency of application of the criteria (Marzano, 2006). Additionally, by articulating the criteria and the characteristics of good performance at each level, those learning and performing the skill and those teaching and assessing it will share a clearly defined picture of what proficiency should look like.
This is exactly what I’ve been trying to express as my own opinion this entire semester, but I didn’t know how to explain it!
Jon, where have you been all my life?
Todd, R. J. (2003). “School libraries and evidence: Seize the day.” Library Media Connection, 22(1), 12-18.
Todd asks questions about the efficacy of a school library:
· How does an effective school library help students?
· How does it empower student learning in and out of school?
· What does an effective school library enable students to do and to become?
· What difference does an effective school library make to students and their learning?
These are the questions we need to ask ourselves instead of focusing on the barriers that keep us from functioning as school librarians. Todd points out that we need to move beyond “selling ourselves” and adopt an “evidence-based practice” approach. We should let the evidence speak for itself, acting as the voice of the profession.
To do so we need to answer the following questions:
· How does your school library make a difference to student learning outcomes?
· How do local outcomes affect decisions relating to staffing and budgets?
· How can we “analyze and synthesize research into meaningful generalizations with practical utility” in order to present relevant findings?
In doing so, we will hurdle the barriers we see in front of us and move our libraries (and the profession) forward. We need to focus on the big picture, not our individual problems. That sounds like a great idea to me.