
After three years of development, two years of field testing, and countless hours of creative innovation and hard work, Carrick Enterprises is proud to announce the availability of the Threshold Achievement Test for Information Literacy!

We were fortunate to work with many librarians, professors, measurement and evaluation experts, and other professionals on the development of this test. We are grateful for the opportunity to collaborate with these creative people and to benefit from their insights and wisdom.


Test Item Developers
Jennifer Fabbi – Cal State San Marcos
Hal Hannon – Palomar and Saddleback Colleges
Angela Henshilwood – University of Toronto
Lettycia Terrones – Los Angeles Public Library
Dominique Turnbow – UC San Diego
Silvia Vong – University of Toronto
Kelley Wantuch – Los Angeles Public Library

Test Item Reviewers
Joseph Aubele – CSU Long Beach
Liz Berilla – Misericordia University
Michelle Dunaway – Wayne State University
Nancy Jones – Encinitas Unified School District

Cognitive Interviewers
Joseph Aubele – CSU Long Beach
Sophie Bury – York University, Toronto
Carolyn Gardner – CSU Dominguez Hills
Jamie Johnson – CSU Northridge
Pearl Ly – Skyline College
Isabelle Ramos – CSU Northridge
Silvia Vong – University of Toronto

Field Test Participants
Andrew Asher – Indiana University
Joseph Aubele – California State University, Long Beach
Sofia Birden – University of Maine Fort Kent
Rebecca Brothers – Oakwood University
Sarah Burns Feyl – Pace University
Kathy Clarke – James Madison University
Jolene Cole – Georgia College
Gloria Creed-Dikeogu – Ottawa University
David Cruse – Adrian College
April Cunningham – Palomar College
Diane Dalrymple – Valencia College
Christopher Garcia – University of Guam
Rumi Graham – University of Lethbridge
Adrienne Harmer – Georgia Gwinnett College
Rosita Hopper – Johnson & Wales University
Suzanne Julian – Brigham Young University
Cynthia Kane – Emporia State University
Martha Kruy – Central Connecticut State University
Jane Liu – Pomona College
Talitha Matlin – California State University at San Marcos
Courtney Moore – Valencia College
Colleen Mullally – Pepperdine University
Dena Pastor – James Madison University
Benjamin Peck – Pace University
Carolyn Radcliff – Chapman University
Michelle Reed – University of Kansas
Stephanie Rosenblatt – Cerritos College
Heidi Senior – University of Portland
Chelsea Stripling – Florida Institute of Technology
Kathryn Sullivan – University of Maryland, Baltimore County
Rosalind Tedford – Wake Forest University
Sherry Tinerella – Arkansas Tech
Kim Whalen – Valparaiso University

Standard Setters
Joseph Aubele – California State University, Long Beach
Stephanie Brasley – California State University Dominguez Hills
Jennifer Fabbi – California State University San Marcos
Hal Hannon – Palomar and Saddleback Colleges
Elizabeth Horan – Coastline Community College
Monica Lopez – Cerritos College
Natalie Lopez – Palomar College
Talitha Matlin – California State University San Marcos
Cynthia Orozco – East Los Angeles College
Stephanie Rosenblatt – Cerritos College

The Threshold Achievement Test for Information Literacy (TATIL) measures student knowledge and dispositions regarding information literacy. The test is inspired by the Association of College and Research Libraries' Framework for Information Literacy for Higher Education and by expectations set by the nation's accrediting agencies. TATIL offers librarians and other educators a better understanding of the information literacy capabilities of their students. These insights inform instructors of improvement areas, guide course instruction, affirm growth following instruction, and prepare students to be successful in learning and life. Each test is made up of a combination of knowledge items and disposition items.

About the Test

The Threshold Achievement Test assesses students' ability to recall and apply their knowledge, as well as their metacognition about core information literacy dispositions that underlie their behaviors. Through this combination of knowledge and dispositional assessment, TATIL offers a unique and valuable measure of the complexities of information literacy.

The knowledge items in TATIL are based on information literacy outcomes and performance indicators created by the test developers and advisory board of librarians and other educators. Knowledge items assess an array of cognitive processes that college students develop as they transition from pre-college to college ready to research ready. Mental behaviors tested include understanding (facts, concepts, principles, procedures), problem solving (problem identification, problem definition, analysis, solution proposal), and critical thinking (evaluating, predicting, deductive and inductive thinking). The items are presented in a variety of structured response formats to assess students' information literacy knowledge, skills, and abilities.

Dispositions are at the heart of a student's temperament and play an important role in learning transfer. Dispositions constitute affective facets of information literacy and are essential to students' information literacy outcomes. They indicate students' willingness to consistently apply the skills they have learned in one setting to novel problems in new settings. While some dispositions can be seen as natural tendencies, they may also be cultivated over time through intentionally designed instruction and through exposure to tacit expectations for student behavior.

To address dispositions in the test, we use scenario-based problem solving items. Students are presented with a scenario describing an ill-defined information literacy challenge related to the content of the module. Following the scenario, students are presented with strategies for addressing the challenge. Students evaluate the usefulness of each strategy.

About the Reports

Threshold Achievement Test reports provide test managers with detailed and robust analyses of student performance. Sections include:

  • Summary results for knowledge and disposition dimensions
  • Detailed results for each knowledge outcome
  • Performance indicator rankings that identify students' relative strengths and weaknesses
  • Performance level indicators ranging from conditionally ready to college ready to research ready
  • Disposition results with descriptions that align with students' scores
  • Breakouts for subgroups such as first year students or transfer students
  • Cross-institutional comparisons with peer institutions and other institutional groupings
  • Suggestions for targeted readings that can assist in following up on the results

Test managers also receive a set of supporting files:

  • Test Item document. A PDF document with a description of each test item.
  • Raw data file. Contains all of the scores presented in the report.
  • Student data file. Contains scores for every student.
  • Student data codebook. Describes the demographic options that were configured for the test.
  • Student Report zip file. Contains a directory of PDF documents with an analysis of each student's performance.

Test managers have the option to present students with personalized reports upon completion of the test. As soon as the student finishes the test, a dynamically generated report is displayed describing the student’s performance and offering recommendations for improvement. The report content is connected directly with the knowledge outcomes, performance indicators, and dispositions of the module being tested.

About the Modules

Two TATIL modules are available now! Two more will come online in 2018. Read brief descriptions below and click on the module titles to see the outcomes, performance indicators, and dispositions. You may also download a PDF document with descriptions for all four modules.

Evaluating Process & Authority (the first module, available now!) focuses on the process of information creation and the constructed and contextual nature of source authority. It assesses how students understand and value authority, how they define their role in evaluating sources, and how they perceive the relative value of different types of sources for common academic needs.

Strategic Searching (the second module, also available now!) focuses on the process of planning, evaluating, and revising searches during strategic exploration. It tests students' ability to recall and apply their knowledge of searching and it tests their metacognition about a core information literacy disposition that underlies their searching behaviors.

Research & Scholarship is the third module and will be available in 2018. The test addresses students' ability to apply the research process to their college work in order to participate in the scholarly conversation and assesses how students understand and value their role within the scholarly community.

The Value of Information (fourth module, coming in 2018) assesses how students understand and value their role within the information ecosystem. It focuses on the norms of academic information creation and the factors that affect access to information. It tests students' ability to recall and apply their knowledge of information rights and responsibilities and it tests their metacognition about core information literacy dispositions that underlie their behaviors.

Learn More

The Threshold Achievement Test for Information Literacy (TATIL) is a unique and valuable tool to add to your assessment program. Explore the Threshold Achievement Test website to learn more about the test, the cost and requirements for administering the finished modules, and how to participate in field testing for the remaining two modules.

Last week I was fortunate to attend and present at LOEX 2017 in Lexington, KY.  I’m excited to have joined the LOEX Board of Trustees this year, and it was great to see familiar faces and meet new, energized librarians, too.

I presented a one-hour workshop where I walked participants through a comparison of two common types of results reports from large-scale assessments.  We looked at an example of a rubric-based assessment report and a report from the Evaluating Process and Authority module of the Threshold Achievement Test.  We compared them on the criteria of timeliness, specificity, and actionability, and found that rubric results reports from large-scale assessments often lack the specificity that makes it possible to use assessment results to make plans for instructional improvement.  The TATIL results report, on the other hand, offered many ways to identify areas for improvement and to inform conversations about next steps.  Several librarians from institutions that are committed to using rubrics for large-scale assessment said at the end of the session that the decision between rubrics and tests now seemed more complicated than it had before.  Another librarian commented that rubrics seem like a good fit for assessing outcomes in a course, but perhaps are less useful for assessing outcomes across a program or a whole institution.  It was a rich conversation that also highlighted some confusing elements in the TATIL results report that we are looking forward to addressing in the next revision.

Overall, I came away from LOEX feeling excited about the future of instruction in the IL Framework era.  While the Framework remains an enigma for some of us, presenters at LOEX this year found many ways to make practical, useful connections between their work and the five frames.

Dominique Turnbow is the Instructional Design Coordinator at University of California, San Diego Library, and she’s been a TATIL Board member since the beginning of the project in 2014. Dominique has been instrumental in drafting and revising outcomes and performance indicators as well as writing test items. Recently Dominique and her colleague at the University of Oregon, Annie Zeidman-Karpinski, published an article titled “Don’t Use a Hammer When You Need a Screwdriver: How to Use the Right Tools to Create Assessment that Matters” in Communications in Information Literacy. The article introduces Kirkpatrick’s Model of the four levels of assessment, a foundational model in the field of instructional design that has not yet been widely used by librarians.  

The article opens with advice about writing learning outcomes using the ABCD Model. Through our collaboration with Dominique, we found the ABCD Model to be a useful structure when we were developing the performance indicators for the TATIL modules. The model is a set of elements to consider when writing outcomes and indicators; the acronym stands for Audience (of learners), Behavior (expected after the intervention), Condition (under which the learners will demonstrate the behavior), and Degree (to which the learners will perform the behavior). This structure helped us write clear and unambiguous indicators that we used to create effective test questions.

Kirkpatrick’s Model of the four levels of assessment is another useful tool for ensuring that we are operating with a shared understanding of the goals and purpose of our assessments. Dominique and Annie make a strong case for focusing classroom assessments of students’ learning during library instruction on the first two levels: Reaction and Learning. The question to ask at the first level is “How satisfied are learners with the lesson?” The question to ask at the second level is “What have learners learned?” Dominique and Annie offer examples of outcomes statements and assessment instruments at both of these levels, making their article of great practical use to all librarians who teach.

They go on to explain that the third and fourth levels of assessment, according to Kirkpatrick’s Model, are Behavior and Results. Behavior includes what learners can apply in practice. The Results level poses the question “Are learners information literate as a result of their learning and behavior?” As Dominique and Annie point out in their article, this is what “most instructors want to know” because the evidence would support our argument that “an instruction program and our teaching efforts are producing a solid return on investment of time, energy, and resources” (2016, 155). Unfortunately, as Dominique and Annie go on to explain, this level of insight into students’ learning is not possible after one or two instruction sessions.  

Determining whether students are information literate requires a comprehensive assessment following years of students’ experiences learning and applying information literacy skills and concepts. In addition to the projects at Carleton College and the University of Washington that Dominique and Annie highlight in their article, Dominique also sees information literacy tests like TATIL and SAILS as key tools for assessing the results of students’ exposure to information literacy throughout college. Having the right tools for your assessment goals strengthens your claims about the impact and value of your instruction while reducing your workload by ensuring you’re focused on the right level of assessment.

If you’re attending ACRL, don’t miss Dominique’s contributed paper on the benefits of creating an instructional design team to meet the needs of a large academic library. She’s presenting with Amanda Roth at 4pm on Thursday, March 24.

We’re excited that this semester all four modules are available for field testing.  Modules 1 and 2 now offer students feedback when they finish the tests.  Modules 3 and 4, still in the first phase of field testing, do not yet provide immediate feedback to students.  But that doesn’t mean that students shouldn’t reflect on their experience taking the test.  When I have students take Module 3: Research & Scholarship and Module 4: The Value of Information, I create an online survey they can complete as soon as they’ve finished the last question.  Setting up the test through www.thresholdachievement.com makes that easy by providing an option for directing students to a URL at the end of the test.  You can view the brief survey that I give students.

When asking for students’ reflections on their experiences, whether for the TATIL modules or for any instructional interaction, I always rely on critical incident questionnaires as my starting point.  Stephen Brookfield, a transformative educator who is an expert in adult learning, has been promoting critical incident questionnaires since the 1990s.  Building upon Dr. Brookfield’s work, faculty have used the instrument to survey students about their experiences in face-to-face classes as well as online.  Read more about his work and the work of his colleagues here: http://www.stephenbrookfield.com/ciq/

If you would prefer to collect information about students’ perceptions of the test content, rather than or in addition to their experience taking the test, consider survey questions like:

  • Where did you learn the skills and knowledge that you used on this test?
  • What do you think you should practice doing in order to improve your performance on this test in the future?
  • What were you asked about on this test that surprised you?

By surveying students at the end of the test, you lay the groundwork for class discussions about the challenges the test presented, areas of consensus among your students, and misconceptions that you may want to address.  The test gives students a chance to focus on their information literacy knowledge and beliefs, which they do not always have the time or structure to do.  Writing briefly about their experience taking the test while it is still fresh in their mind will help students to identify the insights they have gained about their information literacy through the process of engaging with the test.

Download sample student report

This semester Carolyn Radcliff and I had the opportunity to discuss the test and the students’ results reports with our own classes or with students in our colleagues’ classes.  You can see an example of students’ personalized results reports by clicking the thumbnail to the right.  These reports are currently available for the field testing versions of modules 1 and 2 and will be available for field testing versions of modules 3 and 4 in 2017.

Students’ Responses to their Personalized Results

Our conversations with students gave us a new perspective on the test.   As with any test results, some students were disappointed by their results and others disagreed with the evaluation of their performance, but overall students found value in the reports.  Here are some samples of reflective responses from students:

  • I felt most engaged when the results said that I ‘have the habit of challenging (my) own assumptions.’ That’s something I definitely do and I was surprised that the test was able to detect that.
  • I was most surprised that the report said that I defer to particular kinds of authority a bit more than others; I will be sure to keep the recommendations in mind.
  • It was surprising that I wasn’t as proficient as I thought but I felt most engaged by the results when I learned that most college students are also at my level.
  • It was surprising that the results reminded me to seek out additional perspectives and not only ones that support my claim or topic.
  • The chart of my score was interesting.
  • I felt most engaged at the beginning [of the results report] when they analyzed my results directly by using [the pronoun] ‘you.’
  • The test was beneficial by making me think about the use of different sources.
  • Nothing was surprising, but I did agree with the recommendations to strengthen my writing/reading abilities, which I found very helpful.

Students appreciate having results immediately. In one class where we promised students their results, an error on my part during test set-up delayed their reports; students expressed disappointment and were relieved when they understood that they would still get their personalized reports later.  Nevertheless, we know that not every testing situation is intended to result in direct feedback to students, so the student reports are an optional feature that you can turn on or off each time you set up the test.
