April Cunningham and Carolyn Radcliff at Library Assessment Conference 2016

We were honored to sponsor the 2016 Library Assessment Conference (LAC), October 31-November 2. As sponsors, we gave a lunchtime talk about the test and attended the conference sessions. Although Carolyn has been to this conference several times, most often presenting about the Standardized Assessment of Information Literacy Skills (SAILS), this was April’s first time attending LAC. The conference is a wonderful opportunity to gather with librarians from around the country and, increasingly, from around the world to learn about assessment methods and results that we can apply in our own settings. It was also a rich environment for conversations about the value of assessment data and what makes assessments meaningful.

Here are a few of the findings that stuck with us:

  • Representatives from ACRL’s Assessment in Action program shared the results of their interviews with leaders from throughout higher education, including the Lumina Foundation, Achieving the Dream, and the Association of American Colleges and Universities. They learned from those conversations that, as a profession, academic librarians already have strong data about how we affect students’ learning and which instructional models have the most impact. The higher education leaders advised ACRL to encourage deans, directors, and frontline librarians to make better use of the data we already have by telling our stories more effectively. You can read about the assessment results and instructional models they were referring to by visiting the Assessment in Action site.
  • Alan Carbery, founding advisory board member for the Threshold Achievement Test for Information Literacy (TATIL) and incoming chair of ACRL’s Value of Academic Libraries committee, co-presented with Lynn Connaway from OCLC. They announced the results of a study to identify an updated research agenda for librarians interested in demonstrating library value. Connaway and her research assistants analyzed nearly two hundred research articles from the past five years on student success and the role libraries play in it. Her key takeaway was that future research in our field should make more use of mixed methods as a way of deepening our understanding and triangulating our results to strengthen their reliability and add to their validity. The report is available on the project site.

...continue reading "November Update: Library Assessment Conference Debrief"

We’ve finished usability testing of the items for Module 4: The Value of Information with a diverse group of undergraduates at a variety of institutions. Soon we’ll have a version of the module ready for field testing. At that point, all four of the modules will be available for you to try out with your students.

We’re also preparing for our lunchtime presentation at the ARL Library Assessment Conference on Tuesday, November 1, so I’ve been thinking a lot about how TATIL can support many different kinds of assessment needs. Because of accreditation, we all need assessments that can compare students at different institutions, compare students over time, and compare students’ performance against selected standards or locally defined outcomes. We also know that for assessment results to improve teaching and learning, they need to be specific, immediate, and actionable. It can be hard to find assessments that work in all of these ways, and we’ve paid a lot of attention to making sure that TATIL is versatile, just like SAILS.

...continue reading "October Update: TATIL’s Versatility"

Thanks to the help of librarians from throughout southern California, we took a big step forward with test modules 1 and 2 this summer. Because TATIL is a criterion-referenced test (rather than a norm-referenced test like SAILS), we rely on the expertise of librarians and other educators to set performance standards so that we can report more than a raw score when students take the test. By setting standards, we can make and test claims about what students’ scores indicate about their exposure to and mastery of information literacy. This standard-setting process is iterative and will continue throughout the life of the test. By completing the first step in that ongoing effort, we now have two module result reports that provide constructive feedback to students and educators.

Standard setting plays an important role in enhancing the quality of the test. For more detailed information about a standard-setting method like the one we used, I recommend these slides from the Oregon Department of Education. The essence of this approach is that we used students’ responses from the first round of field testing to calculate the difficulty of each test item. The items were then printed out in order of how difficult they were for students. Expert panelists went through this ordered set, using their knowledge of student learning to identify points along the continuum of items where the knowledge or ability required to answer correctly seemed to cross a threshold. These thresholds mark the boundaries between beginning, intermediate, and expert performance. We then used the difficulty levels of the items at the thresholds to calculate the cut scores.
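
To make the arithmetic concrete, here is a minimal sketch in Python of how cut scores can emerge from a bookmark-style procedure like the one described above. The response data, item names, and bookmark positions are invented for illustration, and our operational analysis relies on the full field-test data and psychometric modeling rather than the simple proportion-correct difficulty used here.

# Minimal sketch of a bookmark-style standard-setting calculation.
# Hypothetical data: `responses` maps item IDs to 0/1 scores from field
# testing; `bookmarks` holds positions in the ordered item set where
# panelists judged that the required ability crossed a threshold.

from statistics import mean

responses = {
    "item_01": [1, 1, 0, 1, 1, 0, 1, 1],
    "item_02": [1, 0, 0, 1, 0, 0, 1, 0],
    "item_03": [0, 0, 1, 0, 0, 0, 1, 0],
    # ...one entry per field-tested item
}

# 1. Estimate each item's difficulty as the proportion of students who
#    missed it (an operational analysis would use a psychometric model).
difficulty = {item: 1 - mean(scores) for item, scores in responses.items()}

# 2. Order items from easiest to hardest, as in a printed ordered-item booklet.
ordered_items = sorted(difficulty, key=difficulty.get)

# 3. Panelists' bookmark placements (illustrative positions only).
bookmarks = {"beginning/intermediate": 1, "intermediate/expert": 2}

# 4. Use the difficulty of the bookmarked items to set the cut scores.
cut_scores = {
    threshold: difficulty[ordered_items[position]]
    for threshold, position in bookmarks.items()
}

print(cut_scores)

In our process, the item difficulties come from the field-test analysis and the bookmark judgments come from the in-person panels; the sketch simply shows how the two combine into cut scores.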

...continue reading "September Update: Our Standard Setting Process"

The Project SAILS tests were developed soon after the Association of College and Research Libraries adopted the “Information Literacy Competency Standards for Higher Education” in 2000. The Standards received wide attention and many academic libraries and their parent organizations embraced all or part of the Standards as guideposts for their information literacy programs.

The Standards were structured so that each of the five standards had performance indicators, and each performance indicator had outcomes. After the Standards were published, a task force created objectives for many of the outcomes. (See “Objectives for Information Literacy Instruction: A Model Statement for Academic Librarians.”) The resulting combination of standards, performance indicators, outcomes, and objectives served as the foundation of the SAILS tests, with test items based on most of the objectives (or, where no objective was written, on the outcomes).
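
For readers who find a concrete representation helpful, here is a small illustrative sketch in Python (not the actual SAILS data model) of how that hierarchy can be expressed, with each test item tied to an objective or, where no objective was written, directly to an outcome.

# Illustrative sketch of the Standards hierarchy and how a test item
# can be linked to it. All class names are hypothetical.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Objective:
    text: str

@dataclass
class Outcome:
    text: str
    objectives: List[Objective] = field(default_factory=list)

@dataclass
class PerformanceIndicator:
    text: str
    outcomes: List[Outcome] = field(default_factory=list)

@dataclass
class Standard:
    number: int
    text: str
    indicators: List[PerformanceIndicator] = field(default_factory=list)

@dataclass
class TestItem:
    prompt: str
    objective: Optional[Objective] = None  # preferred basis for an item
    outcome: Optional[Outcome] = None      # fallback when no objective exists

An actual item bank would of course add identifiers, metadata, and links back to the parent standard so that results can be reported at each level of the hierarchy.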

Since 2006, hundreds of colleges and universities have used the SAILS tests to measure the information literacy knowledge of their students. The Cohort version of the SAILS test was released in 2006 with the Individual Scores version becoming available in 2010. More recently, the Build Your Own Test (BYOT) option went live in 2016.

Carrick Enterprises assumed responsibility for the continued operation of Project SAILS in 2012. Since that time, we have repeatedly stated our intention to continue offering the SAILS tests as long as they prove useful to the higher education community. That promise continues to this day. The Association of College and Research Libraries rescinded the “Information Literacy Competency Standards for Higher Education” earlier this year, but we stand by our commitment to offer the SAILS tests well into the future. We know that many institutions want a long-term solution to information literacy assessment and SAILS is one such solution.

The SAILS tests will be available as long as they are needed. We continue to monitor how well the test items perform, to make updates to test items, and to improve the underlying systems. If you would like to discuss how the SAILS tests can help you and your institution, please contact us.

I was fortunate to get to attend ALA in Orlando.  When I’m at ALA, I make sure to always attend the ACRL Instruction Section panel.  This year, I was especially interested because the panel took on Authority is Constructed and Contextual, a very rich concept in the Framework that we’ve had many conversations about as we’ve worked on the first module of the test: Evaluating Process and Authority.

The panelists described how they have engaged with the concept of authority in their own teaching and how the Framework has inspired them to think about the concept in new ways. Though the panel itself raised many interesting questions, a comment from the audience particularly piqued my interest. Jessica Critten, from the University of West Georgia, highlighted a gap in librarians’ discourse about what constitutes evidence and about how students are taught to understand what they’re doing with the information sources we ask them to evaluate. She clearly identified the implication of the Authority is Constructed and Contextual frame: we evaluate authority for a purpose, and librarians need to engage in more meaningful discussion about those purposes if we are going to do more than leave students with the sense that everything is relative. Jessica has been thinking about these issues for a while. She co-authored a chapter called “Logical Fallacies and Sleight of Mind: Rhetorical Analysis as a Tool for Teaching Critical Thinking” in Not Just Where to Click: Teaching Students How to Think about Information.

Jessica’s remarks showed me a connection that we need to continue to strengthen between our work in libraries and our colleagues’ work in composition studies and rhetoric.  Especially at a time of increasing polarization in public discourse, the meaning of concepts like authority, facts, and evidence cannot be taken for granted as neutral constructions that we all define the same way.  When I got back from Orlando, I sat down with our Rhetoric and Composition consultant, Richard Hannon, to ask him to elaborate on the connection between the Framework and how he gets students to think critically about facts, evidence, and information sources.