We’re excited that this semester all four modules are available for field testing.  Modules 1 and 2 now offer students feedback when they finish the tests.  Modules 3 and 4, still in the first phase of field testing, do not yet provide immediate feedback to students.  But that doesn’t mean that students shouldn’t reflect on their experience taking the test.  When I have students take Module 3: Research & Scholarship and Module 4: The Value of Information, I create an online survey they can complete as soon as they’ve finished the last question.  Setting up the test through www.thresholdachievement.com makes that easy by providing an option for directing students to a URL at the end of the test.  You can view the brief survey that I give students.

When asking for students’ reflections on their experiences, whether for the TATIL modules or for any instructional interaction, I always rely on critical incident questionnaires as my starting point.  Stephen Brookfield, a transformative educator and expert in adult learning, has been promoting critical incident questionnaires since the 1990s.  Building upon Dr. Brookfield’s work, faculty have used the instrument to survey students about their experiences in both face-to-face and online classes.  Read more about his work and the work of his colleagues here: http://www.stephenbrookfield.com/ciq/

If you would prefer to collect information about students’ perceptions of the test content rather than or in addition to their experience taking the test, consider survey questions like:

  • Where did you learn the skills and knowledge that you used on this test?
  • What do you think you should practice doing in order to improve your performance on this test in the future?
  • What were you asked about on this test that surprised you?

By surveying students at the end of the test, you lay the groundwork for class discussions about the challenges the test presented, areas of consensus among your students, and misconceptions that you may want to address.  The test gives students a chance to focus on their information literacy knowledge and beliefs, which they do not always have the time or structure to do.  Writing briefly about their experience taking the test while it is still fresh in their minds will help students identify the insights they have gained about their information literacy through the process of engaging with the test.

Investigating Information Literacy Among Occupational Therapy Students at Misericordia University Using SAILS Build-Your-Own-Test

BY: Elaina DaLomba, PhD, OTR/L, MSW
Assistant Professor, Occupational Therapy Department
Misericordia University

Information literacy (IL) skills, as a component of evidence-based practice (EBP), are critical for healthcare practitioners. Most Occupational Therapy programs and the American Occupational Therapy Association require that curricula address IL/EBP skills development. However, evidence shows that occupational therapists don’t use IL/EBP once they graduate. Therapists don’t feel they possess the resources or skills to find current and applicable evidence in the literature. At Misericordia University’s Occupational Therapy program we decided to look at our students’ IL/EBP skills and trial a different method of enhancing them. Measuring these constructs in a way that has clinical meaning is difficult. Misericordia uses SAILS for pre- and post-testing of all students’ IL skills development (during freshman and senior year), so it seemed a natural fit to use it within a research project. Because of time constraints, we didn’t want to collect unnecessary data, so we chose the Build Your Own Test (BYOT), with three questions from each of the first six SAILS skill sets. These 18 questions could be answered quickly, and the data would be analyzed for us. This freed us up to focus on the qualitative portions of our research. Although the SAILS BYOTs don’t have reliability and validity measures particular to them (because they are individually constructed), the overall metrics of SAILS are very good.

We designed an intensive embedded librarian model to explore its impact on students' skill development in IL standards one, two, and three, per the objectives of our Conceptual Foundations of Occupational Therapy course. The librarian handled all of the pre- and post-testing, having the students simply enter their SAILS unique identifier codes (UIC) on computers in the library’s lab. Students then used their SAILS UICs for all study-related protocols. The intervention started with an interactive lecture in the computer lab, with simple but thorough instructional sheets for the students to use throughout the semester. For each clinical topic introduced, the instructor used the librarian’s model to create and complete searches in vivo, allowing the students to add, modify, or eliminate words, Boolean operators, MeSH terms, etc. The librarian was an active presence on our Blackboard site and maintained office hours within the College of Health Sciences and Education. Students were also instructed to bring their database search strategies and results to the librarian for approval before writing their research papers, exposing them to her expertise even if they had initially chosen not to seek her assistance. The data will be analyzed in spring 2017, but data collection was a breeze!

The SAILS BYOT gave us meaningful, quantitative data in a quickly delivered format. While we might not conduct this same study again, we will continue to use the SAILS BYOT for program development and course assessment due to its ease of use and practical data.

Download sample student report

This semester Carolyn Radcliff and I had the opportunity to discuss the test and the students’ results reports with our own classes or with students in our colleagues’ classes.  You can see an example of students’ personalized results reports by clicking the thumbnail to the right.  These reports are currently available for the field testing versions of modules 1 and 2 and will be available for field testing versions of modules 3 and 4 in 2017.

Students’ Responses to their Personalized Results

Our conversations with students gave us a new perspective on the test.   As with any test results, some students were disappointed by their results and others disagreed with the evaluation of their performance, but overall students found value in the reports.  Here are some samples of reflective responses from students:

  • I felt most engaged when the results said that I ‘have the habit of challenging (my) own assumptions.’ That’s something I definitely do and I was surprised that the test was able to detect that.
  • I was most surprised that the report said that I defer to particular kinds of authority a bit more than others; I will be sure to keep the recommendations in mind.
  • It was surprising that I wasn’t as proficient as I thought but I felt most engaged by the results when I learned that most college students are also at my level.
  • It was surprising that the results reminded me to seek out additional perspectives and not only ones that support my claim or topic.
  • The chart of my score was interesting.
  • I felt most engaged at the beginning [of the results report] when they analyzed my results directly by using [the pronoun] ‘you.’
  • The test was beneficial by making me think about the use of different sources.
  • Nothing was surprising, but I did agree with the recommendations to strengthen my writing/reading abilities, which I found very helpful.

Students appreciate having results immediately.  In one class where we had promised results, an error on my part during test set-up delayed the reports; students expressed disappointment and were relieved to learn that they would still receive their personalized reports later.  Nevertheless, we know that not every testing situation is intended to give students direct feedback, so the student reports are an optional feature that you can turn on or off each time you set up the test.

...continue reading "December Update: How Students Experience the Test"

April Cunningham and Carolyn Radcliff at Library Assessment Conference 2016

We were honored to sponsor the 2016 Library Assessment Conference (LAC), October 31-November 2. As sponsors we gave a lunch-time talk about the test and we also attended the conference. Although Carolyn has been to this conference several times, most often presenting about the Standardized Assessment of Information Literacy Skills (SAILS), this was April’s first time attending LAC. The conference is a wonderful opportunity to gather with librarians from around the country and, increasingly, from around the world to learn about assessment methods and results that we can apply in our own settings. It was also a rich environment for engaging in conversations about the value of assessment data and what makes assessments meaningful.

Here are a few of the findings that stuck with us:

  • Representatives from ACRL’s Assessment in Action program shared the results of their interviews with leaders from throughout higher education including the Lumina Foundation, Achieving the Dream, and the Association of American Colleges and Universities. They learned from those conversations that as a profession, academic librarians already have strong data about how we affect students’ learning and which models have the most impact. The higher education leaders advised ACRL to encourage deans, directors, and front line librarians to make better use of the data we already have by telling our stories more effectively. You can read about the assessment results and instructional models they were referring to by visiting the Assessment in Action site.
  • Alan Carbery, founding advisory board member for the Threshold Achievement Test for Information Literacy (TATIL) and incoming chair of the Value of Academic Libraries committee for ACRL, co-presented with Lynn Connaway from OCLC. They announced the results of a study to identify an updated research agenda for librarians interested in demonstrating library value. Connaway and her research assistants analyzed nearly two hundred research articles from the past five years on the role of libraries in students’ success. Her key takeaway was that future research in our field should make more use of mixed methods as a way of deepening our understanding and triangulating our results to strengthen their reliability and add to their validity. The report is available on the project site.

...continue reading "November Update: Library Assessment Conference Debrief"

We’ve finished usability testing of the Module 4: The Value of Information items with a diverse group of undergraduates at a variety of institutions.  Soon we’ll have a version of the module ready for field testing.  At that point, all four of the modules will be available for you to try out with your students.

We’re also preparing for our lunch-time presentation at the ARL Library Assessment Conference on Tuesday, November 1.  So I’ve been thinking a lot about how TATIL can support many different kinds of assessment needs.  Because of accreditation, we all need assessments that can compare students at different institutions, compare students over time, and compare students’ performance to selected standards or locally defined outcomes.  We also know that in order for assessment results to improve teaching and learning, they need to be specific, immediate, and actionable.  It can be hard to find assessments that work in all of these ways, so we’ve paid close attention to making sure that TATIL is versatile, just like SAILS.

...continue reading "October Update: TATIL’s Versatility"