
Sometime around 1996 I attended a conference on communication studies. I was working on a master’s degree in Comm Studies and this was my first conference in an area outside of librarianship. I was happy to discover a presentation on research related to libraries, specifically the nonverbal behaviors of reference librarians. As the researcher described her findings and quoted from student statements about their interactions with librarians, I experienced a range of emotions. Interest and pride soon gave way to embarrassment and frustration. The way I remember it now, there were a host of examples of poor interactions. “The librarian looked at me like I was from Mars,” that sort of thing. Most memorable to me was a comment/question from an audience member: “Librarians need to fix this. What are they going to do about it?” It was as though the study had uncovered a heretofore invisible problem that we should urgently address. (Did I mention feeling defensive, too?) I didn’t dispute the findings. What I struggled with was the sense that the people in the room thought that we librarians didn’t already know about the importance of effective communication and that we weren’t working on it. Was there room for improvement? For sure! But it wasn’t news to us.

I thought about that presentation again recently after viewing a webinar by Lisa Hinchliffe about her research project, Predictable Misunderstandings in Information Literacy: Anticipating Student Misconceptions To Improve Instruction. Using data from a survey of librarians who provide information literacy instruction to first-year students, Lisa and her team provisionally identified nine misconceptions that lead to errors in information literacy practice. For example, first-year students “believe research is a linear (uni-directional) process (and therefore do not see it as an iterative process and integrated into their work).” The project is a partnership with Credo. See the press release or view the webinar slides.

Dominique Turnbow is the Instructional Design Coordinator at University of California, San Diego Library, and she’s been a TATIL Board member since the beginning of the project in 2014. Dominique has been instrumental in drafting and revising outcomes and performance indicators as well as writing test items. Recently Dominique and her colleague at the University of Oregon, Annie Zeidman-Karpinski, published an article titled “Don’t Use a Hammer When You Need a Screwdriver: How to Use the Right Tools to Create Assessment that Matters” in Communications in Information Literacy. The article introduces Kirkpatrick’s Model of the four levels of assessment, a foundational model in the field of instructional design that has not yet been widely used by librarians.  

The article opens with advice about writing learning outcomes using the ABCD Model. Through our collaboration with Dominique, the ABCD Model gave us a useful structure for developing the performance indicators for the TATIL modules. The model is a set of elements to consider when writing outcomes and indicators; the acronym stands for Audience (of learners), Behavior (expected after the intervention), Condition (under which the learners will demonstrate the behavior), and Degree (to which the learners will perform the behavior). This structure helped us write clear, unambiguous indicators that we then used to create effective test questions.
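To make the structure concrete, here is a minimal sketch of how the four ABCD elements might be captured and assembled into an outcome statement. This is only an illustration of the model, not part of the TATIL tooling, and the class name, field names, and sample values are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class ABCDOutcome:
    """One learning outcome broken into the four ABCD elements."""
    audience: str   # A: who the learners are
    behavior: str   # B: what they should be able to do after the intervention
    condition: str  # C: circumstances under which they demonstrate the behavior
    degree: str     # D: how well they must perform

    def statement(self) -> str:
        # Assemble the four elements into one readable outcome statement.
        return f"{self.condition}, {self.audience} will {self.behavior} {self.degree}."

# Hypothetical example values, not an actual TATIL performance indicator:
outcome = ABCDOutcome(
    audience="first-year students",
    behavior="distinguish scholarly from popular sources",
    condition="Given a list of ten citations",
    degree="with at least 80% accuracy",
)
print(outcome.statement())
# Given a list of ten citations, first-year students will distinguish
# scholarly from popular sources with at least 80% accuracy.
```

Writing indicators this way makes it easy to spot when one of the four elements is missing or ambiguous before the outcome is turned into a test question.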

Kirkpatrick’s Model of the four levels of assessment is another useful tool for ensuring that we are operating with a shared understanding of the goals and purpose of our assessments. Dominique and Annie make a strong case for focusing classroom assessments of students’ learning during library instruction on the first two levels: Reaction and Learning. The question to ask at the first level is “How satisfied are learners with the lesson?” The question to ask at the second level is “What have learners learned?” Dominique and Annie offer examples of outcome statements and assessment instruments at both of these levels, making their article of great practical use to all librarians who teach.

They go on to explain that the third and fourth levels of assessment, according to Kirkpatrick’s Model, are Behavior and Results. The Behavior level asks whether learners apply what they have learned in practice. The Results level poses the question “Are learners information literate as a result of their learning and behavior?” As Dominique and Annie point out in their article, this is what “most instructors want to know” because the evidence would support our argument that “an instruction program and our teaching efforts are producing a solid return on investment of time, energy, and resources” (2016, 155). Unfortunately, as Dominique and Annie go on to explain, this level of insight into students’ learning is not possible after one or two instruction sessions.
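As a quick reference, the four levels and their guiding questions can be laid out in a simple lookup structure. This is a hedged sketch, not anything from the article: the Level 3 question is my paraphrase of “what learners can apply in practice,” and the helper function just encodes the article’s point that one-shot instruction can realistically assess only the first two levels.

```python
# Kirkpatrick's four levels, paired with the guiding questions discussed above.
# The Level 3 question is a paraphrase; the others quote the questions given above.
KIRKPATRICK_LEVELS = {
    1: ("Reaction", "How satisfied are learners with the lesson?"),
    2: ("Learning", "What have learners learned?"),
    3: ("Behavior", "Do learners apply what they learned in practice?"),
    4: ("Results", "Are learners information literate as a result of "
                   "their learning and behavior?"),
}

def assessable_in_one_shot() -> list[str]:
    """Levels that one or two instruction sessions can realistically assess."""
    return [name for level, (name, _) in KIRKPATRICK_LEVELS.items() if level <= 2]

print(assessable_in_one_shot())  # ['Reaction', 'Learning']
```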

Determining whether students are information literate requires a comprehensive assessment following years of students’ experiences learning and applying information literacy skills and concepts. In addition to the projects at Carleton College and the University of Washington that Dominique and Annie highlight in their article, Dominique also sees information literacy tests like TATIL and SAILS as key tools for assessing the results of students’ exposure to information literacy throughout college. Having the right tools to achieve your assessment goals strengthens your claims about the impact and value of your instruction and, by keeping you focused on the right level of assessment, reduces your workload.

If you’re attending ACRL, don’t miss Dominique’s contributed paper on the benefits of creating an instructional design team to meet the needs of a large academic library. She’s presenting with Amanda Roth at 4pm on Thursday, March 24.

We’re excited that this semester all four modules are available for field testing. Modules 1 and 2 now offer students feedback when they finish the tests. Modules 3 and 4, still in the first phase of field testing, do not yet provide immediate feedback to students. But that doesn’t mean that students shouldn’t reflect on their experience taking the test. When I have students take Module 3: Research & Scholarship and Module 4: The Value of Information, I create an online survey they can complete as soon as they’ve finished the last question. Setting up the test through www.thresholdachievement.com makes that easy by providing an option for directing students to a URL at the end of the test. You can view the brief survey that I give students.

When asking for students’ reflections on their experiences, whether for the TATIL modules or for any instructional interaction, I always rely on critical incident questionnaires as my starting point. Stephen Brookfield, an expert in adult learning and transformative education, has been promoting critical incident questionnaires since the 1990s. Building upon Dr. Brookfield’s work, faculty have used the instrument to survey students about their experiences in face-to-face classes as well as online. Read more about his work and the work of his colleagues here: http://www.stephenbrookfield.com/ciq/

If you would prefer to collect information about students’ perceptions of the test content rather than, or in addition to, their experience taking the test, consider survey questions like:

  • Where did you learn the skills and knowledge that you used on this test?
  • What do you think you should practice doing in order to improve your performance on this test in the future?
  • What were you asked about on this test that surprised you?

By surveying students at the end of the test, you lay the groundwork for class discussions about the challenges the test presented, areas of consensus among your students, and misconceptions that you may want to address. The test gives students a chance to focus on their information literacy knowledge and beliefs, something they do not always have the time or structure to do. Writing briefly about their experience taking the test while it is still fresh in their minds will help students identify the insights they have gained about their information literacy through the process of engaging with the test.

I was fortunate to attend ALA in Orlando. When I’m at ALA, I always make sure to attend the ACRL Instruction Section panel. This year I was especially interested because the panel took on Authority is Constructed and Contextual, a very rich concept in the Framework that we’ve discussed at length as we’ve worked on the first module of the test: Evaluating Process and Authority.

The panelists described how they have engaged with the concept of authority in their own teaching and how the Framework has inspired them to think about this concept in new ways. Though the panel itself raised many interesting questions, a comment from the audience particularly piqued my interest. Jessica Critten, from the University of West Georgia, highlighted a gap in librarians’ discourse: we ask students to evaluate information sources, but we say little about what constitutes evidence or about how students are taught to understand what they are doing with those sources. She clearly identified the implication of the Authority is Constructed and Contextual Frame: we evaluate authority for a purpose, and librarians need to engage in more meaningful discussion about those purposes if we are going to do more than leave students with the sense that everything is relative. Jessica has been thinking about these issues for a while. She co-authored a chapter called “Logical Fallacies and Sleight of Mind: Rhetorical Analysis as a Tool for Teaching Critical Thinking” in Not Just Where to Click: Teaching Students How to Think about Information.

Jessica’s remarks showed me a connection that we need to continue to strengthen between our work in libraries and our colleagues’ work in composition studies and rhetoric.  Especially at a time of increasing polarization in public discourse, the meaning of concepts like authority, facts, and evidence cannot be taken for granted as neutral constructions that we all define the same way.  When I got back from Orlando, I sat down with our Rhetoric and Composition consultant, Richard Hannon, to ask him to elaborate on the connection between the Framework and how he gets students to think critically about facts, evidence, and information sources.

We are now ready to start cognitive interviews to get students’ feedback about Module 3: Research & Scholarship. We are also starting to write items for our final module, Module 4: The Value of Information. That means we’re more than halfway through test development. And the more we work with the Framework, the more intrigued we become with its depth.

One of the exciting things about the Framework is the way the writers identified the “dispositions” that constitute the affective facets of information literacy. From the beginning of brainstorming about a new IL test way back in spring 2014, we’ve known that we wanted to address dispositions, as well as knowledge, in any new instrument we created. We found a way to do that with scenario-based problem-solving items. And we’ve continued to deepen our understanding of dispositions by searching the education literature.