Investigating Information Literacy Among Occupational Therapy Students at Misericordia University Using SAILS Build-Your-Own-Test

BY: Elaina DaLomba, PhD, OTR/L, MSW
Assistant Professor, Occupational Therapy Department
Misericordia University

Information literacy (IL) skills, as a component of evidence-based practice (EBP), are critical for healthcare practitioners. Most occupational therapy programs and the American Occupational Therapy Association require that curricula address IL/EBP skills development. However, evidence shows that occupational therapists don't use IL/EBP once they graduate; therapists don't feel they have the resources or skills to find current, applicable evidence in the literature. At Misericordia University's Occupational Therapy program we decided to look at our students' IL/EBP skills and trial a different method to enhance them. Measuring these constructs in a way that has clinical meaning is difficult. Misericordia uses SAILS for pre- and post-testing of all students' IL skills development (during freshman and senior year), so it seemed a natural fit to use it within a research project. Because of time constraints we didn't want to collect unnecessary data, so we chose the Build Your Own Test (BYOT), with three questions from each of the first six SAILS skill sets. These 18 questions could be answered quickly, and the data would be analyzed for us, freeing us to focus on the qualitative portions of our research. Although the SAILS BYOTs don't have reliability and validity measures particular to them (because each is individually constructed), the overall metrics of SAILS are very good.

We designed an intensive embedded-librarian model to explore what impact this would have on students' skill development in IL standards one, two, and three, as per the objectives of our Conceptual Foundations of Occupational Therapy course. The librarian handled all of the pre- and post-testing, having the students simply enter their SAILS unique identifier codes (UIC) on computers in the library's lab; students then used their SAILS UIC for all study-related protocols. The intervention started with an interactive lecture in the computer lab, with simple but thorough instructional sheets for the students to use throughout the semester. For each clinical topic introduced, the instructor used the librarian's model to create and complete searches live ("in vivo"), allowing the students to add, modify, or eliminate keywords, Boolean operators, MeSH terms, etc. The librarian was an active presence on our Blackboard site and maintained office hours within the College of Health Sciences and Education. Students were also instructed to bring their database search strategies and results to the librarian for approval before writing their research papers, exposing them to her expertise even if they had chosen not to access her assistance initially. The data will be analyzed in spring 2017, but data collection was a breeze!
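
To give a concrete flavor of the kind of search strategy students practiced building, here is a minimal, entirely hypothetical sketch that combines MeSH terms, free-text keywords, and Boolean operators into one PubMed query and runs it with Biopython's Entrez module. The clinical topic, search terms, and email address are illustrative assumptions, not the actual course materials or the databases the class used.

    # Hypothetical example of a PubMed search combining MeSH terms, free-text
    # keywords, and Boolean operators, run through Biopython's Entrez module.
    # The clinical topic and search terms are illustrative only.
    from Bio import Entrez

    Entrez.email = "student@example.edu"  # NCBI requires a contact address

    query = (
        '("Occupational Therapy"[MeSH] OR "occupational therapy"[tiab]) '
        'AND ("Stroke Rehabilitation"[MeSH] OR "stroke rehabilitation"[tiab]) '
        'AND ("Upper Extremity"[MeSH] OR "upper limb"[tiab])'
    )

    handle = Entrez.esearch(db="pubmed", term=query, retmax=10)
    result = Entrez.read(handle)
    print(result["IdList"])  # PubMed IDs of the first ten matches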

The SAILS BYOT gave us meaningful, quantitative data in a quickly delivered format. While we might not conduct this same study again, we will continue to use the SAILS BYOT for program development and course assessment due to its ease of use and practical data.

This semester Carolyn Radcliff and I had the opportunity to discuss the test and the students' results reports with our own classes or with students in our colleagues' classes.  A sample of a student's personalized results report is available for download.  These reports are currently available for the field testing versions of modules 1 and 2 and will be available for the field testing versions of modules 3 and 4 in 2017.

Students’ Responses to their Personalized Results

Our conversations with students gave us a new perspective on the test.   As with any test results, some students were disappointed by their results and others disagreed with the evaluation of their performance, but overall students found value in the reports.  Here are some samples of reflective responses from students:

  • I felt most engaged when the results said that I ‘have the habit of challenging (my) own assumptions.’ That’s something I definitely do and I was surprised that the test was able to detect that.
  • I was most surprised that the report said that I defer to particular kinds of authority a bit more than others; I will be sure to keep the recommendations in mind.
  • It was surprising that I wasn’t as proficient as I thought but I felt most engaged by the results when I learned that most college students are also at my level.
  • It was surprising that the results reminded me to seek out additional perspectives and not only ones that support my claim or topic.
  • The chart of my score was interesting.
  • I felt most engaged at the beginning [of the results report] when they analyzed my results directly by using [the pronoun] ‘you.’
  • The test was beneficial by making me think about the use of different sources.
  • Nothing was surprising, but I did agree with the recommendations to strengthen my writing/reading abilities, which I found very helpful.

Students appreciate having results immediately. In one class where we had promised results, an error on my part during the test set-up delayed the reports; students expressed disappointment but were relieved when they understood that they would still receive their personalized reports later.  Nevertheless, we know that not every testing situation is intended to result in direct feedback to students, so the student reports are an optional feature that you can turn on or off each time you set up the test.


April Cunningham and Carolyn Radcliff at Library Assessment Conference 2016

We were honored to sponsor the 2016 Library Assessment Conference (LAC), October 31-November 2. As sponsors we gave a lunch-time talk about the test and we also attended the conference. Although Carolyn has been to this conference several times, most often presenting about the Standardized Assessment of Information Literacy Skills (SAILS), this was April’s first time attending LAC. The conference is a wonderful opportunity to gather with librarians from around the country and, increasingly, from around the world to learn about assessment methods and results that we can apply in our own settings. It was also a rich environment for engaging in conversations about the value of assessment data and what makes assessments meaningful.

Here are a few of the findings that stuck with us:

  • Representatives from ACRL's Assessment in Action program shared the results of their interviews with leaders from throughout higher education, including the Lumina Foundation, Achieving the Dream, and the Association of American Colleges and Universities. They learned from those conversations that, as a profession, academic librarians already have strong data about how we affect students' learning and which models have the most impact. The higher education leaders advised ACRL to encourage deans, directors, and front-line librarians to make better use of the data we already have by telling our stories more effectively. You can read about the assessment results and instructional models they were referring to by visiting the Assessment in Action site.
  • Alan Carbery, founding advisory board member for the Threshold Achievement Test for Information Literacy (TATIL) and incoming chair of ACRL's Value of Academic Libraries committee, co-presented with Lynn Connaway from OCLC. They announced the results of a study to identify an updated research agenda for librarians interested in demonstrating library value. Connaway and her research assistants analyzed nearly two hundred research articles from the past five years on the role of libraries and their effects on student success. Her key takeaway was that future research in our field should make more use of mixed methods as a way of deepening our understanding and triangulating our results to strengthen their reliability and add to their validity. The report is available on the project site.


We’ve finished usability testing of the Module 4: The Value of Information items with a diverse group of undergraduates at a variety of institutions.  Soon we’ll have a version of the module ready for field testing.  At that point, all four of the modules will be available for you to try out with your students.

We’re also preparing for our lunch-time presentation at the ARL Library Assessment Conference on Tuesday, November 1.  So I’ve been thinking a lot about how TATIL can be used to support many different kinds of assessment needs.  Because of accreditation, we all need assessments that can compare students at different institutions, compare students over time, and compare students’ performance to selected standards or locally defined outcomes.  We also know that in order for assessment results to improve teaching and learning, they need to be specific, immediate, and actionable.  It can be hard to find assessments that can be used in these multiple ways and we’ve paid a lot of attention to making sure that TATIL is versatile, just like SAILS.


Thanks to the help of librarians from throughout southern California, we took a big step forward with test modules 1 and 2 this summer.  Because TATIL is a criterion-referenced test (rather than a norm-referenced test like SAILS), we rely on the expertise of librarians and other educators to set performance standards so that we can report more than a raw score when students take the test.  By setting standards, we can make and test claims about what students' scores indicate about their exposure to and mastery of information literacy.  This standard setting process is iterative and will continue throughout the life of the test.  By completing the first step in that ongoing effort, we now have two module result reports that provide constructive feedback to students and educators.

Standard setting plays an important role in enhancing the quality of the test.  For more detailed information about a standard setting method like the one we used, I recommend these slides from the Oregon Department of Education.  The essence of this approach is that we used students' responses from the first round of field testing to calculate the difficulty of each test item.  The test items were then printed out in order of how difficult they were for students.  Expert panelists went through these item sets, using their knowledge of student learning to identify points in the continuum of items where the knowledge or ability required to answer the questions correctly seemed to cross a threshold.  These thresholds indicate the boundaries between beginning, intermediate, and expert students' performance.  We then used the difficulty levels of the items at the thresholds to calculate the cut scores.
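
To make that arithmetic concrete, here is a minimal sketch (in Python, with made-up data) of the bookmark-style logic described above: item difficulty is estimated from field-test responses as one minus the proportion of students answering correctly, items are ordered from easiest to hardest, and provisional cut scores come from the difficulty of the items where panelists placed their thresholds. Every number, name, and scale here is an assumption for illustration; the actual TATIL analysis uses the real field-test data and more sophisticated psychometric modeling.

    # Minimal sketch of the standard setting arithmetic described above.
    # Responses, panel bookmarks, and the difficulty scale are all made up;
    # the real process uses field-test data and psychometric modeling.
    import numpy as np

    rng = np.random.default_rng(seed=1)
    responses = rng.integers(0, 2, size=(200, 25))  # 200 students x 25 items, 1 = correct

    # Estimate each item's difficulty from field-test responses:
    # here, simply 1 minus the proportion of students who answered correctly.
    difficulty = 1 - responses.mean(axis=0)

    # Order the items from easiest to hardest, as in the printed item sets
    # that the expert panelists reviewed.
    ordered = np.argsort(difficulty)

    # Hypothetical bookmark placements: the position in the ordered item set
    # where each of three panelists judged performance to cross a threshold.
    bookmarks = {
        "beginning -> intermediate": [8, 9, 10],
        "intermediate -> expert": [17, 18, 18],
    }

    # A provisional cut score is the average difficulty of the bookmarked items.
    for threshold, placements in bookmarks.items():
        cut = np.mean([difficulty[ordered[p]] for p in placements])
        print(f"{threshold}: cut score ~ {cut:.2f} on the 0-1 difficulty scale")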
