We’ve finished usability testing of the items for Module 4: The Value of Information with a diverse group of undergraduates at a variety of institutions.  Soon we’ll have a version of the module ready for field testing.  At that point, all four modules will be available for you to try out with your students.

We’re also preparing for our lunchtime presentation at the ARL Library Assessment Conference on Tuesday, November 1, so I’ve been thinking a lot about how TATIL can be used to support many different kinds of assessment needs.  Because of accreditation, we all need assessments that can compare students at different institutions, compare students over time, and compare students’ performance to selected standards or locally defined outcomes.  We also know that for assessment results to improve teaching and learning, they need to be specific, immediate, and actionable.  It can be hard to find assessments that can be used in all of these ways, so we’ve paid a lot of attention to making sure that TATIL is versatile, just like SAILS.

...continue reading "October Update: TATIL’s Versatility"

Thanks to the help of librarians from throughout southern California, we took a big step forward with test modules 1 and 2 this summer.  Because TATIL is a criterion-referenced test (rather than a norm-referenced test like SAILS), we rely on the expertise of librarians and other educators to set performance standards so that we can report more than a raw score when students take the test.  By setting standards, we can make and test claims about what students’ scores indicate about their exposure to and mastery of information literacy.  This standard setting process is iterative and will continue throughout the life of the test.  By completing the first step in that ongoing effort, we now have two module result reports that provide constructive feedback to students and educators.

Standard setting plays an important role in enhancing the quality of the test.  For more detailed information about a standard setting method like the one we used, I recommend these slides from the Oregon Department of Education.  The essence of this approach is that we used students’ responses from the first round of field testing to calculate the difficulty of each test item.  The items were then printed out in order of how difficult they were for students.  Expert panelists went through these ordered item sets, using their knowledge of student learning to identify points along the continuum of items where the knowledge or ability required to answer correctly seemed to cross a threshold.  These thresholds mark the boundaries between beginning, intermediate, and expert students’ performance.  We then used the difficulty levels of the items at those thresholds to calculate the cut scores.
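To make the arithmetic concrete, here is a minimal sketch of that flow in Python.  It is purely illustrative and is not the project’s actual scoring code: it assumes item difficulty is estimated as the proportion of students who miss the item, and that panelists simply mark positions in the difficulty-ordered item list; the function names, variable names, and toy data are all hypothetical.

```python
# Illustrative sketch of a difficulty-ordered standard setting workflow.
# Assumptions (not taken from the TATIL project itself): difficulty is
# estimated as 1 minus the proportion of correct responses, and each cut
# score is the difficulty of the item panelists mark as a threshold.

def item_difficulties(responses):
    """responses[s][i] is 1 if student s answered item i correctly, else 0.
    Returns one difficulty value per item (higher = harder)."""
    n_students = len(responses)
    n_items = len(responses[0])
    return [1 - sum(r[i] for r in responses) / n_students
            for i in range(n_items)]

def ordered_items(difficulties):
    """Return item indices sorted from easiest to hardest."""
    return sorted(range(len(difficulties)), key=lambda i: difficulties[i])

def cut_scores(difficulties, order, bookmarks):
    """bookmarks are positions in the ordered list where panelists judged
    that the required knowledge crosses a threshold (e.g. the boundary
    between beginning and intermediate performance)."""
    return [difficulties[order[pos]] for pos in bookmarks]

# Toy example: 4 students by 5 items.
responses = [
    [1, 1, 1, 0, 0],
    [1, 1, 0, 0, 0],
    [1, 0, 1, 1, 0],
    [1, 1, 1, 0, 1],
]
diffs = item_difficulties(responses)
order = ordered_items(diffs)
print(cut_scores(diffs, order, bookmarks=[2, 4]))
```

In practice the difficulty estimates come from a psychometric model fit to the field-test responses and the panel process is iterative, so this sketch only captures the overall shape of the computation, not the details.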

...continue reading "September Update: Our Standard Setting Process"

The Project SAILS tests were developed soon after the Association of College and Research Libraries adopted the “Information Literacy Competency Standards for Higher Education” in 2000. The Standards received wide attention and many academic libraries and their parent organizations embraced all or part of the Standards as guideposts for their information literacy programs.

The Standards were structured so that each of the five standards had performance indicators, and each performance indicator had outcomes. Subsequent to the publication of the Standards, a task force created objectives for many of the outcomes. (See “Objectives for Information Literacy Instruction: A Model Statement for Academic Librarians.”) The resulting combination of standards, performance indicators, outcomes, and objectives served as the foundation of the SAILS tests, with test items based on most of the objectives (or, where no objective was written, on the outcomes themselves).

Since 2006, hundreds of colleges and universities have used the SAILS tests to measure the information literacy knowledge of their students. The Cohort version of the SAILS test was released in 2006 with the Individual Scores version becoming available in 2010. More recently, the Build Your Own Test (BYOT) option went live in 2016.

Carrick Enterprises assumed responsibility for the continued operation of Project SAILS in 2012. Since that time, we have repeatedly stated our intention to continue offering the SAILS tests as long as they prove useful to the higher education community. That promise continues to this day. The Association of College and Research Libraries rescinded the “Information Literacy Competency Standards for Higher Education” earlier this year, but we stand by our commitment to offer the SAILS tests well into the future. We know that many institutions want a long-term solution to information literacy assessment and SAILS is one such solution.

The SAILS tests will be available as long as they are needed. We continue to monitor how well the test items perform, to make updates to test items, and to improve the underlying systems. If you would like to discuss how the SAILS tests can help you and your institution, please contact us.

I was fortunate to attend ALA in Orlando.  When I’m at ALA, I always make sure to attend the ACRL Instruction Section panel.  This year I was especially interested because the panel took on Authority Is Constructed and Contextual, a very rich concept in the Framework that we’ve had many conversations about as we’ve worked on the first module of the test: Evaluating Process and Authority.

The panelists described how they have engaged with the concept of authority in their own teaching and how the Framework has inspired them to think about this concept in new ways.  Though the panel itself raised many interesting questions, a comment from the audience particularly piqued my interest.  Jessica Critten, from the University of West Georgia, highlighted the gap in librarians’ discourse about what constitutes evidence and how students are taught to understand what they’re doing with the information sources we’re asking them to evaluate.  She clearly identified the implication of the Authority Is Constructed and Contextual frame: we evaluate authority for a purpose, and librarians need to engage in more meaningful discussion about those purposes if we are going to do more than leave students with the sense that everything is relative.  Jessica has been thinking about these issues for a while.  She co-authored a chapter called “Logical Fallacies and Sleight of Mind: Rhetorical Analysis as a Tool for Teaching Critical Thinking” in Not Just Where to Click: Teaching Students How to Think about Information.

Jessica’s remarks showed me a connection that we need to continue to strengthen between our work in libraries and our colleagues’ work in composition studies and rhetoric.  Especially at a time of increasing polarization in public discourse, the meaning of concepts like authority, facts, and evidence cannot be taken for granted as neutral constructions that we all define the same way.  When I got back from Orlando, I sat down with our Rhetoric and Composition consultant, Richard Hannon, to ask him to elaborate on the connection between the Framework and how he gets students to think critically about facts, evidence, and information sources.

Orlando Train Station, by DanTD (own work, CC BY 3.0)

At ALA in Orlando on June 24 and 25, the final cohort of ACRL’s Assessment in Action team leaders will present the results of their assessment projects. This will be the culmination of 15 months of work that they have done on their own campuses and in our community of learners. For me, it will also be the culmination of about three and a half years of collaboration with Deb Gilchrist, Lisa Janicke Hinchliffe, Carrie Donovan, and Kara Malenfant, as well as John Watts and Eric Resnis, who joined the team in 2015.  I have been a facilitator and curriculum developer for Assessment in Action since the first cohort began in 2013, and I have learned so much about assessment by working with librarians as they designed and implemented their projects.

In particular, I have learned about the value of thinking carefully about my institutional culture and norms when I am weighing different methods of assessment.  Since there is no single right answer to the question of what type of assessment method or instrument we should use, the best guidance I have found has been to ask the question: “What will result in findings that we can use to ask new questions about our practice and that we can make meaningful to our colleagues?”  Keeping my institution’s priorities in mind helps me to manage the sometimes overwhelming variety of approaches to assessing learning.

I have also learned that perseverance and a willingness to treat assessment as serious play make it possible for librarians to sustain assessment projects over time.  We all know that assessment is not a one-and-done activity, no matter how well designed, so it is important to see it as a puzzle that we’ll get better at creating and solving as we become more practiced.  The most important step toward successful assessment is simply to get started, because the best assessments don’t just answer questions; they also raise new ones, which means there is never a final assessment project.  For the AiA team leaders, I know that the results they’re sharing at ALA are just the first step in an ongoing process of learning more about their own contributions to students’ success.