
April Cunningham and Carolyn Radcliff at Library Assessment Conference 2016

We were honored to sponsor the 2016 Library Assessment Conference (LAC), held October 31 through November 2. As sponsors, we gave a lunch-time talk about the test and also attended the conference. Although Carolyn has been to this conference several times, most often to present on the Standardized Assessment of Information Literacy Skills (SAILS), this was April’s first time at LAC. The conference is a wonderful opportunity to gather with librarians from around the country and, increasingly, from around the world to learn about assessment methods and results that we can apply in our own settings. It was also a rich environment for conversations about the value of assessment data and what makes assessments meaningful.

Here are a few of the findings that stuck with us:

  • Representatives from ACRL’s Assessment in Action program shared the results of their interviews with leaders from across higher education, including the Lumina Foundation, Achieving the Dream, and the Association of American Colleges and Universities. They learned from those conversations that academic librarians, as a profession, already have strong data about how we affect students’ learning and which instructional models have the most impact. The higher education leaders advised ACRL to encourage deans, directors, and frontline librarians to make better use of the data we already have by telling our stories more effectively. You can read about the assessment results and instructional models they were referring to by visiting the Assessment in Action site.
  • Alan Carbery, founding advisory board member for the Threshold Achievement Test for Information Literacy (TATIL) and incoming chair of ACRL’s Value of Academic Libraries committee, co-presented with Lynn Connaway from OCLC. They announced the results of a study to identify an updated research agenda for librarians interested in demonstrating library value. Connaway and her research assistants analyzed nearly two hundred research articles from the past five years on the library’s role in student success. Her key takeaway was that future research in our field should make more use of mixed methods as a way of deepening our understanding and triangulating our results to strengthen their reliability and validity. The report is available on the project site.

...continue reading "November Update: Library Assessment Conference Debrief"

We’ve finished usability testing of the Module 4: The Value of Information items with a diverse group of undergraduates at a variety of institutions.  Soon we’ll have a version of the module ready for field testing.  At that point, all four of the modules will be available for you to try out with your students.

We’re also preparing for our lunch-time presentation at the ARL Library Assessment Conference on Tuesday, November 1.  So I’ve been thinking a lot about how TATIL can be used to support many different kinds of assessment needs.  Because of accreditation requirements, we all need assessments that can compare students at different institutions, compare students over time, and compare students’ performance to selected standards or locally defined outcomes.  We also know that in order for assessment results to improve teaching and learning, they need to be specific, immediate, and actionable.  It can be hard to find assessments that work in all of these ways, so we’ve paid a lot of attention to making sure that TATIL is versatile, just like SAILS.

...continue reading "October Update: TATIL’s Versatility"

Thanks to the help of librarians from throughout southern California, we made a big step forward with test modules 1 and 2 this summer.  Because TATIL is a criterion-referenced test (rather than a norm-referenced test like SAILS), we rely on the expertise of librarians and other educators to set performance standards so that we can report more than a raw score when students take the test.  By setting standards, we can make and test claims about what students’ scores indicate about their exposure to and mastery of information literacy.  This standard setting process is iterative and will continue throughout the life of the test.  By completing the first step in that ongoing effort, we now have two module result reports that provide constructive feedback to students and educators.
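
To make the distinction concrete, here is a minimal sketch of the reporting difference. The scores, cut scores, and level names below are made up for illustration; they are not TATIL’s actual values.

```python
from bisect import bisect_right

# Hypothetical raw scores from a cohort of test takers.
cohort = [12, 18, 21, 25, 25, 27, 30, 33, 35, 38]

def percentile_rank(score, cohort):
    """Norm-referenced reporting: a score's standing relative to other
    test takers, which shifts whenever the cohort changes."""
    ordered = sorted(cohort)
    return 100 * bisect_right(ordered, score) / len(ordered)

def performance_level(score, cuts=(20, 31)):
    """Criterion-referenced reporting: a score compared to cut scores
    set by expert panels, independent of who else took the test."""
    levels = ["beginning", "intermediate", "advanced"]
    return levels[bisect_right(cuts, score)]

print(percentile_rank(25, cohort))  # 50.0 (depends on the cohort)
print(performance_level(25))        # 'intermediate' (depends on the cuts)
```

The second function is what makes the criterion-referenced report stable: a student’s level changes only if the panel-set standards change, not because of who else happened to take the test.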

Standard setting plays an important role in enhancing the quality of the test.  For more detail about a standard-setting method like the one we used, I recommend these slides from the Oregon Department of Education. The essence of this approach is that we used students’ responses from the first round of field testing to calculate the difficulty of each test item.  Then the test items were printed in order of how difficult they were for students.  Expert panelists went through these ordered item sets, using their knowledge of student learning to identify points in the continuum of items where the knowledge or ability required to answer correctly seemed to cross a threshold.  These thresholds mark the boundaries between the performance of beginning, intermediate, and expert students.  We then used the difficulty levels of the items at the thresholds to calculate the cut scores.
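
For readers who want to see the mechanics, here is a minimal sketch of that sequence. The assumptions are flagged in the comments: it uses proportion-incorrect as a stand-in for the difficulty index, simulated responses, and invented bookmark positions, whereas operational standard setting would typically rest on an IRT model and real panelist judgments.

```python
import numpy as np

# Simulated field-test data: rows are students, columns are items (1 = correct).
rng = np.random.default_rng(0)
responses = rng.integers(0, 2, size=(500, 40))

# Step 1: estimate each item's difficulty from students' responses.
# Here difficulty is simply the proportion answering incorrectly.
difficulty = 1.0 - responses.mean(axis=0)

# Step 2: order the items from easiest to hardest, mirroring the printed
# item sets the expert panelists reviewed.
ordered = np.argsort(difficulty)

# Step 3: panelists mark where the required knowledge or ability seems to
# cross a threshold; these bookmark positions are invented for illustration.
bookmarks = {"beginning/intermediate": 15, "intermediate/expert": 30}

# Step 4: derive each cut score from the difficulty of the items at the
# threshold (here, the midpoint of the two adjacent items' difficulties).
for label, b in bookmarks.items():
    cut = (difficulty[ordered[b - 1]] + difficulty[ordered[b]]) / 2
    print(f"{label}: cut score at difficulty {cut:.2f}")
```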

...continue reading "September Update: Our Standard Setting Process"

Orlando Train Station, by DanTD (own work, CC BY 3.0)

At ALA in Orlando on June 24 and 25, the final cohort of ACRL’s Assessment in Action team leaders will present the results of their assessment projects. This will be the culmination of 15 months of work that they have done on their own campuses and in our community of learners. For me, it will also be the culmination of about three and a half years of collaboration with Deb Gilchrist, Lisa Janicke Hinchliffe, Carrie Donovan, and Kara Malenfant, as well as John Watts and Eric Resnis, who joined the team in 2015.  I have been a facilitator and curriculum developer for Assessment in Action since the first cohort began in 2013, and I have learned so much about assessment by working with librarians as they designed and implemented their projects.

In particular, I have learned about the value of thinking carefully about my institutional culture and norms when I am weighing different methods of assessment.  Since there is no single right answer to the question of which assessment method or instrument to use, the best guidance I have found is to ask: “What will result in findings that we can use to ask new questions about our practice and that we can make meaningful to our colleagues?”  Keeping my institution’s priorities in mind helps me manage the sometimes overwhelming variety of approaches to assessing learning.

I have also learned that perseverance and a willingness to treat assessment as serious play make it possible for librarians to sustain assessment projects over time.  We all know that assessment is not a one-and-done activity, no matter how well designed, so it is important to see it as a puzzle that we’ll get better at creating and solving as we become more practiced.  The most important step toward successful assessment is simply to get started, because the best assessments don’t just answer questions; they raise new ones, which means there is never a final assessment project.  For the AiA team leaders, I know that the results they’re sharing at ALA are just the first step in an ongoing process of learning more about their own contributions to students’ success.

We're gearing up for big things this summer.  By the end of July we expect to have performance standards set for the first two modules.  We'll use the results of the field tests to establish criteria (i.e., cut scores) for how well we expect students to do on the test when they are entering college, when they've completed the bulk of their general education requirements, and when they're ready to graduate with a bachelor's degree.  This is a major step forward in making the test ready for course-level, program-level, and institutional assessment.

Over the past few months, we've also been thinking more about the role of dispositions in students' IL outcomes.  We know from the research on learning mindsets by Carol Dweck and her colleagues that it’s vitally important for educators to instill in students the belief that they can develop their aptitudes through consistent effort.  Students who believe that their intelligence or skills are fixed and cannot improve over time are more likely to struggle in their courses and may not persist to achieve their academic goals.