
Carolyn Caffrey Gardner
Carolyn Caffrey Gardner, Information Literacy Coordinator at Cal State Dominguez Hills in Carson, California, USA

It can be a challenge to navigate accrediting bodies and their expectations for information literacy instruction and assessment. This is a snapshot of how folks at one campus tackled the self-study for WSCUC accreditation, including some takeaways that may help you on your own accreditation journey.

I joined California State University Dominguez Hills in May of 2016, in the midst of an accreditation preparation frenzy. As the new information literacy coordinator, I jumped right into the ongoing process of preparing for reaccreditation, which had started years in advance. In Fall 2015, as we geared up for our 2018 site visit, our campus created Core Task Forces. Each core task force was charged with analyzing a WSCUC core competency on our campus. These competencies are expected of every graduating student and include Information Literacy (IL).

Led by Library Dean Stephanie Brasley, the IL Task Force began with extensive discussions about how information literacy is defined and where we can identify these skills being taught on our campus. The committee was made up of a diverse cross-section of faculty and administrators, each with a different understanding of what information literacy is and how we can measure competency. While I wasn’t yet on campus for these discussions, the committee minutes and other documentation describe the task force’s adoption of the ACRL Framework definition of information literacy and the recommendation that we distribute that definition widely.

The IL Task Force then began identifying where IL competencies were taught on our campus. Ultimately, the task force felt that retroactive assessment of assignments not intended to teach or measure information literacy outcomes wouldn’t provide an authentic understanding of our students’ learning. For that reason, they opted not to conduct a one-time assessment project, such as applying an existing rubric (e.g., AAC&U) to collected student work, and instead chose to find existing evidence. The committee recruited students to participate in IL testing using Project SAILS, used existing NSSE data (from the general questions and not the information literacy module add-on), and explored program-level student learning outcomes assessment data.
...continue reading "CSU Dominguez Hills and the WASC Senior College and University Commission"

Sign reading Good Cheap Fast
Credit: cea+, www.flickr.com/photos/centralasian/4534292595, CC BY 2.0

It can be a challenge to decide which SAILS or TATIL test is the best one for your needs. Here I will take a few minutes to explain why we offer so many test options and how to determine which one is right for you.

The construct of information literacy is very broad. If you think about it as a light spectrum, it includes everything from infrared to ultraviolet. Many important concepts such as authority, intellectual property, search strategies, scholarship, and research are included. There is a lot to cover if you are going to assess your students’ information literacy capabilities. In order to make testing of these concepts manageable, we have grouped them in various ways.

Project SAILS has eight skill sets that we developed using the ACRL Information Literacy Competency Standards for Higher Education as a source for our learning objectives. There are 162 test questions across the eight skill sets. The skill sets allow for in-depth scoring.

Threshold Achievement Test for Information Literacy (TATIL) has four modules. Using the ACRL Framework for Information Literacy as a guide, our advisory board created performance indicators for the entire IL construct that we then combined into modules. There are a total of 101 test questions across the four modules. These modules allow for in-depth scoring.

We think it's important to make tests that can be administered in a standard class hour. This means we cannot ask a student to answer every SAILS question or every TATIL question. Instead students answer a subset of the full test question bank.

We would also like to give each student an individual score when possible. For many institutions, receiving individual student scores is necessary to achieve their goals. Having individual scores also means we can generate a custom report for each student, highlighting their strengths and making recommendations.

I have now covered the three aspects of information literacy testing, which we call Breadth, Depth, and Individualization. Breadth indicates how much of the IL construct is covered, from partial to complete. Depth indicates how granular the reporting is, from shallow to deep. Individualization indicates whether each student receives an individual score.

When having someone do a job for you, the old saying goes: Good, cheap, fast -- pick two. When deciding on a testing option you have a similar choice: Breadth, Depth, Individualization -- pick two. Here’s why:

...continue reading "SAILS and TATIL: Why Are There So Many Test Options?"

Cynthia Kane, Emporia State University
Cynthia Kane, Emporia State University, Kansas, USA

Cynthia Kane joined the Advisory Board of the Threshold Achievement Test for Information Literacy in 2017. Here she answers questions about her work and her passion for assessment.

Question: Please tell us about your current job. 

Cynthia: I am currently the Director of Assessment at the Emporia State University Libraries and Archives. I oversee all aspects of assessment initiatives in our program, including information literacy assessments. I also represent the Libraries and Archives on two university-wide committees: the Student Learning Assessment Council and the Higher Learning Commission Leadership Team. I really enjoy these last two opportunities because they give me a wider audience with which to highlight the impact of the academic library on students’ learning and success throughout their undergraduate and graduate careers.

Q: Do you teach? How has your approach to teaching changed since you started?

Cynthia: I have taught library instruction sessions in undergraduate and graduate courses for over 25 years. In addition, I served for years as an adjunct faculty member for ESU’s School of Library and Information Management. I presently coordinate the scheduling of, and teach sections of, UL100: Research Skills, Information and Technology. This course counts toward the “Information Technology” General Education requirement at ESU. My approach to teaching hasn’t really changed over the years – mainly, I stay aware that technology tools will change, but the need to know how to find and use information effectively never will!

Q: How has your library approached the Framework?

Cynthia: We’re working through that right now!  Our UL100 course will be 3 credit hours in Fall 2018 and we are reworking our course curriculum not only to accommodate ...continue reading "Meet the TATIL Advisory Board: Cynthia Kane"

Carrick Enterprises has begun to modernize the Project SAILS web site, administrator tools, and reports. This work will continue through the 2017-2018 academic year and will be put into production June 15, 2018. There will be no disruption of service during this work and all existing information will be migrated to the new system.

What’s new:

Peer institution scoring

You will select the tests from your peer institutions to include in a cross-institutional score. This score will be reported alongside all score reporting except your Custom Demographic Questions. You will continue to see cross-institutional scores by institution type; however, you will now be able to include multiple institution types in these scores.

On-demand Cohort report creation

Cohort reports will no longer be restricted to creation at the end of December and the beginning of June. Once you have stopped testing, you will be able to configure your report for production. As long as all of the tests included in your peer institution list are completed, your report will be generated overnight and available to you the following day. We will still need to receive your payment before you can download your report.

Student reports for Individual Scores

You will have the option to display an individualized analysis of each student’s performance when they complete the test. Students will have the option to download this report as a PDF document. If you choose not to display this report to your students, you will still receive the reports in your report download.

Detailed narrative report for Individual Scores

In addition to student data, you will receive a narrative report analyzing your students’ performance on the test. This report is something that can be shared with your faculty collaborators and your library administration.

Student activity monitoring

You will be able to monitor in real time how far along your students are as they take the test. You will see each student’s Student Identifier (which will be called the Student Key), start time, and the page number they are currently answering. You will still be able to download a list of Student Keys that have completed the test, which will continue to include the start time, end time, and number of seconds elapsed for each student.

What’s changing:

...continue reading "Project SAILS Enhancements in the Works"

Dominique Turnbow is the Instructional Design Coordinator at University of California, San Diego Library, and she’s been a TATIL Board member since the beginning of the project in 2014. Dominique has been instrumental in drafting and revising outcomes and performance indicators as well as writing test items. Recently Dominique and her colleague at the University of Oregon, Annie Zeidman-Karpinski, published an article titled “Don’t Use a Hammer When You Need a Screwdriver: How to Use the Right Tools to Create Assessment that Matters” in Communications in Information Literacy. The article introduces Kirkpatrick’s Model of the four levels of assessment, a foundational model in the field of instructional design that has not yet been widely used by librarians.  

The article opens with advice about writing learning outcomes using the ABCD Model. Through our collaboration with Dominique, the ABCD Model gave us a useful structure when we were developing the performance indicators for the TATIL modules. It is a set of elements to consider when writing outcomes and indicators; the acronym stands for Audience (of learners), Behavior (expected after the intervention), Condition (under which the learners will demonstrate the behavior), and Degree (to which the learners will perform the behavior). This structure helped us to write clear and unambiguous indicators that we used to create effective test questions.

Kirkpatrick’s Model of the four levels of assessment is another useful tool for ensuring that we are operating with a shared understanding of the goals and purpose of our assessments. Dominique and Annie make a strong case for focusing classroom assessments of students’ learning during library instruction on the first two levels: Reaction and Learning. The question to ask at the first level is “How satisfied are learners with the lesson?” The question to ask at the second level is “What have learners learned?” Dominique and Annie offer examples of outcomes statements and assessment instruments at both of these levels, making their article of great practical use to all librarians who teach.

They go on to explain that the third and fourth levels of assessment, according to Kirkpatrick’s Model, are Behavior and Results. The Behavior level asks what learners actually apply in practice. The Results level poses the question “Are learners information literate as a result of their learning and behavior?” As Dominique and Annie point out in their article, this is what “most instructors want to know” because the evidence would support our argument that “an instruction program and our teaching efforts are producing a solid return on investment of time, energy, and resources” (2016, 155). Unfortunately, as Dominique and Annie go on to explain, this level of insight into students’ learning is not possible after one or two instruction sessions.

Determining whether students are information literate requires a comprehensive assessment following years of students’ experiences learning and applying information literacy skills and concepts. In addition to the projects at Carleton College and the University of Washington that Dominique and Annie highlight in their article, Dominique also sees information literacy tests like TATIL and SAILS as key tools for assessing the results of students’ exposure to information literacy throughout college. Having the right tools to achieve your assessment goals increases the power of your claims about the impact and value of your instruction, while reducing your workload by ensuring you’re focused on the right level of assessment.

If you’re attending ACRL, don’t miss Dominique’s contributed paper on the benefits of creating an instructional design team to meet the needs of a large academic library. She’s presenting with Amanda Roth at 4pm on Thursday, March 24.