Suppose that you think students should be knowledgeable about the rights and responsibilities of information creation. Furthermore, they should be able to recognize social, legal, and economic factors affecting access to information. These two statements form the basis of Module 4 – The Value of Information – of the Threshold Achievement Test for Information Literacy (TATIL). In this post, I will describe how we develop TATIL knowledge test questions. How do we go from a concept to a set of fully formed, sound test questions?

It begins with outcomes and performance indicators written by members of the TATIL advisory board and inspired by the ACRL Framework for Information Literacy. An iterative process of review and revision, guided by TATIL project leader Dr. April Cunningham, results in the foundation for writing test questions.


The cornerstone of the Threshold Achievement Test for Information Literacy is the set of outcomes and performance indicators we wrote, inspired by the ACRL Framework for Information Literacy for Higher Education.

Working with members of our Advisory Board, we first defined the information literacy skills, knowledge, dispositions, and misconceptions that students commonly demonstrate at key points in their education: entering college, completing their lower division or general education requirements, and preparing for graduation. These definitions laid the groundwork for analyzing the knowledge practices and dispositions in the Framework in order to define the core components that would become the focus of the test. Once we decided to combine frames into four test modules, we used the performance indicators to guide item writing for each module. Further investigation of the Framework dispositions through a structural analysis led us to identify and define information literacy dispositions for each module.


After three years of development, two years of field testing, and countless hours of creative innovation and hard work, Carrick Enterprises is proud to announce the availability of the Threshold Achievement Test for Information Literacy!

We were fortunate to work with many librarians, professors, measurement and evaluation experts, and other professionals on the development of this test. We are grateful for the opportunity to collaborate with these creative people and to benefit from their insights and wisdom.


Test Item Developers
Jennifer Fabbi – Cal State San Marcos
Hal Hannon – Palomar and Saddleback Colleges
Angela Henshilwood – University of Toronto
Lettycia Terrones – Los Angeles Public Library
Dominique Turnbow – UC San Diego
Silvia Vong – University of Toronto
Kelley Wantuch – Los Angeles Public Library

Test Item Reviewers
Joseph Aubele – CSU Long Beach
Liz Berilla – Misericordia University
Michelle Dunaway – Wayne State University
Nancy Jones – Encinitas Unified School District

Cognitive Interviewers
Joseph Aubele – CSU Long Beach
Sophie Bury – York University, Toronto
Carolyn Gardner – CSU Dominguez Hills
Jamie Johnson – CSU Northridge
Pearl Ly – Skyline College
Isabelle Ramos – CSU Northridge
Silvia Vong – University of Toronto

Field Test Participants
Andrew Asher – Indiana University
Joseph Aubele – California State University, Long Beach
Sofia Birden – University of Maine Fort Kent
Rebecca Brothers – Oakwood University
Sarah Burns Feyl – Pace University
Kathy Clarke – James Madison University
Jolene Cole – Georgia College
Gloria Creed-Dikeogu – Ottawa University
David Cruse – Adrian College
April Cunningham – Palomar College
Diane Dalrymple – Valencia College
Christopher Garcia – University of Guam
Rumi Graham – University of Lethbridge
Adrienne Harmer – Georgia Gwinnett College
Rosita Hopper – Johnson & Wales University
Suzanne Julian – Brigham Young University
Cynthia Kane – Emporia State University
Martha Kruy – Central Connecticut State University
Jane Liu – Pomona College
Talitha Matlin – California State University at San Marcos
Courtney Moore – Valencia College
Colleen Mullally – Pepperdine University
Dena Pastor – James Madison University
Benjamin Peck – Pace University
Carolyn Radcliff – Chapman University
Michelle Reed – University of Kansas
Stephanie Rosenblatt – Cerritos College
Heidi Senior – University of Portland
Chelsea Stripling – Florida Institute of Technology
Kathryn Sullivan – University of Maryland, Baltimore County
Rosalind Tedford – Wake Forest University
Sherry Tinerella – Arkansas Tech
Kim Whalen – Valparaiso University

Standard Setters
Joseph Aubele – California State University, Long Beach
Stephanie Brasley – California State University Dominguez Hills
Jennifer Fabbi – California State University San Marcos
Hal Hannon – Palomar and Saddleback Colleges
Elizabeth Horan – Coastline Community College
Monica Lopez – Cerritos College
Natalie Lopez – Palomar College
Talitha Matlin – California State University San Marcos
Cynthia Orozco – East Los Angeles College
Stephanie Rosenblatt – Cerritos College

The Threshold Achievement Test for Information Literacy (TATIL) measures student knowledge and dispositions regarding information literacy. The test is inspired by the Association of College and Research Libraries' Framework for Information Literacy for Higher Education and by expectations set by the nation's accrediting agencies. TATIL offers librarians and other educators a better understanding of the information literacy capabilities of their students. These insights identify areas for improvement, guide course instruction, affirm growth following instruction, and help prepare students to be successful in learning and life. Each test is made up of a combination of knowledge items and disposition items.

Last week I was fortunate to attend and present at LOEX 2017 in Lexington, KY. I'm excited to have joined the LOEX Board of Trustees this year, and it was great to see familiar faces and meet new, energized librarians, too.

I presented a one-hour workshop where I walked participants through a comparison of two common types of results reports from large-scale assessments.  We looked at an example of a rubric-based assessment report and a report from the Evaluating Process and Authority module of the Threshold Achievement Test.  We compared them on the criteria of timeliness, specificity, and actionability, and found that rubric results reports from large-scale assessments often lack the specificity that makes it possible to use assessment results to make plans for instructional improvement.  The TATIL results report, on the other hand, offered many ways to identify areas for improvement and to inform conversations about next steps.  Several librarians from institutions that are committed to using rubrics for large-scale assessment said at the end of the session that the decision between rubrics and tests now seemed more complicated than it had before.  Another librarian commented that rubrics seem like a good fit for assessing outcomes in a course, but perhaps are less useful for assessing outcomes across a program or a whole institution.  It was a rich conversation that also highlighted some confusing elements in the TATIL results report that we are looking forward to addressing in the next revision.

Overall, I came away from LOEX feeling excited about the future of instruction in the IL Framework era. While the Framework remains an enigma for some of us, presenters at LOEX this year found many ways to make practical, useful connections between their work and the five frames.

Dominique Turnbow is the Instructional Design Coordinator at the University of California, San Diego Library, and she has been a TATIL Board member since the beginning of the project in 2014. Dominique has been instrumental in drafting and revising outcomes and performance indicators as well as writing test items. Recently Dominique and her colleague at the University of Oregon, Annie Zeidman-Karpinski, published an article titled "Don't Use a Hammer When You Need a Screwdriver: How to Use the Right Tools to Create Assessment that Matters" in Communications in Information Literacy. The article introduces Kirkpatrick's Model of the four levels of assessment, a foundational model in the field of instructional design that has not yet been widely used by librarians.

The article opens with advice about writing learning outcomes using the ABCD Model. The acronym names the elements to consider when writing outcomes and indicators: Audience (of learners), Behavior (expected after the intervention), Condition (under which the learners will demonstrate the behavior), and Degree (to which the learners will perform the behavior). Through our collaboration with Dominique, the ABCD Model gave us a useful structure when we were developing the performance indicators for the TATIL modules, helping us write clear and unambiguous indicators that we used to create effective test questions.

Kirkpatrick’s Model of the four levels of assessment is another useful tool for ensuring that we are operating with a shared understanding of the goals and purpose of our assessments. Dominique and Annie make a strong case for focusing classroom assessments of students’ learning during library instruction on the first two levels: Reaction and Learning. The question to ask at the first level is “How satisfied are learners with the lesson?” The question to ask at the second level is “What have learners learned?” Dominique and Annie offer examples of outcomes statements and assessment instruments at both of these levels, making their article of great practical use to all librarians who teach.

They go on to explain that the third and fourth levels of assessment, according to Kirkpatrick’s Model, are Behavior and Results. Behavior includes what learners can apply in practice. The Results level poses the question “Are learners information literate as a result of their learning and behavior?” As Dominique and Annie point out in their article, this is what “most instructors want to know” because the evidence would support our argument that “an instruction program and our teaching efforts are producing a solid return on investment of time, energy, and resources” (2016, 155). Unfortunately, as Dominique and Annie go on to explain, this level of insight into students’ learning is not possible after one or two instruction sessions.  

Determining whether students are information literate requires a comprehensive assessment following years of students' experiences learning and applying information literacy skills and concepts. In addition to the projects at Carleton College and the University of Washington that Dominique and Annie highlight in their article, Dominique also sees information literacy tests like TATIL and SAILS as key tools for assessing the results of students' exposure to information literacy throughout college. Having the right tools to achieve your assessment goals strengthens your claims about the impact and value of your instruction while reducing your workload by ensuring you're focused on the right level of assessment.

If you’re attending ACRL, don’t miss Dominique’s contributed paper on the benefits of creating an instructional design team to meet the needs of a large academic library. She’s presenting with Amanda Roth at 4pm on Thursday, March 24.