Carrick Enterprises has begun to modernize the Project SAILS web site, administrator tools, and reports. This work will continue through the 2017-2018 academic year and will be put into production on June 15, 2018. There will be no disruption of service during this work, and all existing information will be migrated to the new system.

What’s new:

Peer institution scoring

You will select the tests from your peer institutions to include in a cross-institutional score. This score will be reported with all score reports except your Custom Demographic Questions. You will continue to see cross-institutional scores by institution type; however, you will now be able to include multiple institution types in these scores.

On-demand Cohort report creation

Cohort reports will no longer be restricted to being created at the end of December and the beginning of June. Once you have stopped testing, you will be able to configure your report for production. As long as all of the tests you have included in your peer institution list are completed, your report will be generated overnight and will be available to you the following day. We must still receive your payment before you can download your report.

Student reports for Individual Scores

You will have the option to display an individualized analysis of your students’ performance when they complete the test. Students will be able to download this report as a PDF document. If you choose not to display this report to your students, you will still receive the reports in your report download.

Detailed narrative report for Individual Scores

In addition to student data, you will receive a narrative report analyzing your students’ performance on the test. This report can be shared with your faculty collaborators and your library administration.

Student activity monitoring

You will be able to monitor in real time how far along your students are as they take the test. You will see the Student Identifier (which will be called the Student Key), the start time, and the page the student is currently answering. You will still be able to download a list of Student Keys that have completed the test; this list will continue to include the start time, end time, and number of seconds elapsed for each student.

What’s changing:

Additional changes are described in the full post, “Project SAILS Enhancements in the Works.”

Dominique Turnbow is the Instructional Design Coordinator at the University of California, San Diego Library, and she’s been a TATIL Board member since the beginning of the project in 2014. Dominique has been instrumental in drafting and revising outcomes and performance indicators as well as writing test items. Recently, Dominique and her colleague at the University of Oregon, Annie Zeidman-Karpinski, published an article titled “Don’t Use a Hammer When You Need a Screwdriver: How to Use the Right Tools to Create Assessment that Matters” in Communications in Information Literacy. The article introduces Kirkpatrick’s Model of the four levels of assessment, a foundational model in the field of instructional design that has not yet been widely used by librarians.

The article opens with advice about writing learning outcomes using the ABCD Model, a set of elements to consider when writing outcomes and indicators: the acronym stands for Audience (of learners), Behavior (expected after the intervention), Condition (under which the learners will demonstrate the behavior), and Degree (to which the learners will perform the behavior). Through our collaboration with Dominique, the ABCD Model provided us with a useful structure when we were developing the performance indicators for the TATIL modules. This structure helped us write clear, unambiguous indicators that we used to create effective test questions.

Kirkpatrick’s Model of the four levels of assessment is another useful tool for ensuring that we are operating with a shared understanding of the goals and purpose of our assessments. Dominique and Annie make a strong case for focusing classroom assessments of students’ learning during library instruction on the first two levels: Reaction and Learning. The question to ask at the first level is “How satisfied are learners with the lesson?” The question to ask at the second level is “What have learners learned?” Dominique and Annie offer examples of outcomes statements and assessment instruments at both of these levels, making their article of great practical use to all librarians who teach.

They go on to explain that the third and fourth levels of assessment, according to Kirkpatrick’s Model, are Behavior and Results. Behavior includes what learners can apply in practice. The Results level poses the question “Are learners information literate as a result of their learning and behavior?” As Dominique and Annie point out in their article, this is what “most instructors want to know” because the evidence would support our argument that “an instruction program and our teaching efforts are producing a solid return on investment of time, energy, and resources” (2016, 155). Unfortunately, as Dominique and Annie go on to explain, this level of insight into students’ learning is not possible after one or two instruction sessions.  

Determining whether students are information literate requires a comprehensive assessment following years of students’ experiences learning and applying information literacy skills and concepts. In addition to the projects at Carleton College and the University of Washington that Dominique and Annie highlight in their article, Dominique also sees information literacy tests like TATIL and SAILS as key tools for assessing the results of students’ exposure to information literacy throughout college. Having the right tools to achieve your assessment goals strengthens your claims about the impact and value of your instruction while reducing your workload by ensuring you’re focused on the right level of assessment.

If you’re attending ACRL, don’t miss Dominique’s contributed paper on the benefits of creating an instructional design team to meet the needs of a large academic library. She’s presenting with Amanda Roth at 4pm on Thursday, March 24.

Investigating Information Literacy Among Occupational Therapy Students at Misericordia University Using SAILS Build-Your-Own-Test

BY: Elaina DaLomba, PhD, OTR/L, MSW
Assistant Professor, Occupational Therapy Department
Misericordia University

Information literacy (IL) skills, as a component of evidence-based practice (EBP), are critical for healthcare practitioners. Most Occupational Therapy programs and the American Occupational Therapy Association require that curricula address IL/EBP skills development. However, evidence shows that occupational therapists don’t use IL/EBP once they graduate. Therapists don’t feel they possess the resources or skills to find current and applicable evidence in the literature. At Misericordia University’s Occupational Therapy program, we decided to look at our students’ IL/EBP skills and trial a different method of enhancing them. Measuring these constructs in a way that has clinical meaning is difficult. Misericordia uses SAILS for pre- and post-testing of all students’ IL skills development (during freshman and senior year), so it seemed a natural fit to use it within a research project. Because of time constraints, we didn’t want to collect unnecessary data, so we chose the Build Your Own Test (BYOT) with three questions from each of the first six skill sets of SAILS. These 18 questions could be answered quickly, and the data would be analyzed for us. This freed us up to focus on the qualitative portions of our research. Although the SAILS BYOTs don’t have reliability and validity measures particular to them (because each is individually constructed), the overall metrics of SAILS are very good.

We designed an intensive embedded-librarian model to explore what impact it would have on students' skill development in IL standards one, two, and three, per the objectives of our Conceptual Foundations of Occupational Therapy course. The librarian handled all of the pre- and post-testing, having the students simply enter their SAILS unique identifier codes (UICs) on computers in the library’s lab. Students then used their SAILS UICs for all study-related protocols. The intervention started with an interactive lecture in the computer lab, supported by simple but thorough instructional sheets for the students to use throughout the semester. For each clinical topic introduced, the instructor used the librarian’s model to create and complete searches in vivo, allowing the students to add, modify, or eliminate keywords, Boolean operators, MeSH terms, etc. The librarian was an active presence on our Blackboard site and maintained office hours within the College of Health Sciences and Education. Students were also instructed to bring their database search strategies and results to the librarian for approval prior to writing their research papers, exposing them to her knowledge even if they had chosen not to seek her assistance initially. The data will be analyzed in spring 2017, but data collection was a breeze!

The SAILS BYOT gave us meaningful, quantitative data in a quickly delivered format. While we might not conduct this same study again, we will continue to use the SAILS BYOT for program development and course assessment due to its ease of use and practical data.

April Cunningham and Carolyn Radcliff at Library Assessment Conference 2016

We were honored to sponsor the 2016 Library Assessment Conference (LAC), October 31-November 2. As sponsors we gave a lunch-time talk about the test and we also attended the conference. Although Carolyn has been to this conference several times, most often presenting about the Standardized Assessment of Information Literacy Skills (SAILS), this was April’s first time attending LAC. The conference is a wonderful opportunity to gather with librarians from around the country and, increasingly, from around the world to learn about assessment methods and results that we can apply in our own settings. It was also a rich environment for engaging in conversations about the value of assessment data and what makes assessments meaningful.

Here are a few of the findings that stuck with us:

  • Representatives from ACRL’s Assessment in Action program shared the results of their interviews with leaders from throughout higher education, including the Lumina Foundation, Achieving the Dream, and the Association of American Colleges and Universities. They learned from those conversations that, as a profession, academic librarians already have strong data about how we affect students’ learning and which models have the most impact. The higher education leaders advised ACRL to encourage deans, directors, and frontline librarians to make better use of the data we already have by telling our stories more effectively. You can read about the assessment results and instructional models they were referring to by visiting the Assessment in Action site.
  • Alan Carbery, founding advisory board member for the Threshold Achievement Test for Information Literacy (TATIL) and incoming chair of the Value of Academic Libraries committee for ACRL, co-presented with Lynn Connaway from OCLC. They announced the results of a study to identify an updated research agenda for librarians interested in demonstrating library value. Connaway and her research assistants analyzed nearly two hundred research articles from the past five years about effects on students’ success and the role of libraries. Her key takeaway was that future research in our field should make more use of mixed methods as a way of deepening our understanding and triangulating our results to strengthen their reliability and add to their validity. The report is available on the project site.


The Project SAILS tests were developed soon after the Association of College and Research Libraries adopted the “Information Literacy Competency Standards for Higher Education” in 2000. The Standards received wide attention and many academic libraries and their parent organizations embraced all or part of the Standards as guideposts for their information literacy programs.

The Standards were structured so that each of the five standards had performance indicators, and each performance indicator had outcomes. Subsequent to the publication of the Standards, a task force created the objectives for many of the outcomes. (See “Objectives for Information Literacy Instruction: A Model Statement for Academic Librarians.”) The resulting combination of standards, performance indicators, outcomes, and objectives served as the foundation of the SAILS tests, with test items based on most of the objectives (or, where no objective was written, on the outcome itself).
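To make that layering concrete, here is a minimal sketch, in Python, of how the hierarchy and the item-basis rule described above could be represented. The class and function names are illustrative assumptions for this post only, not an actual SAILS or ACRL data model.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical representation of the Standards' hierarchy described above.
# Names and structure are assumptions for illustration only.

@dataclass
class Outcome:
    text: str
    objectives: List[str] = field(default_factory=list)  # empty if no objective was written

@dataclass
class PerformanceIndicator:
    text: str
    outcomes: List[Outcome] = field(default_factory=list)

@dataclass
class Standard:
    number: int  # one of the five standards
    text: str
    indicators: List[PerformanceIndicator] = field(default_factory=list)

def item_basis(outcome: Outcome) -> List[str]:
    """A test item is written from an objective when one exists for the outcome;
    otherwise it falls back to the outcome statement itself."""
    return outcome.objectives if outcome.objectives else [outcome.text]
```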

Since 2006, hundreds of colleges and universities have used the SAILS tests to measure the information literacy knowledge of their students. The Cohort version of the SAILS test was released in 2006 with the Individual Scores version becoming available in 2010. More recently, the Build Your Own Test (BYOT) option went live in 2016.

Carrick Enterprises assumed responsibility for the continued operation of Project SAILS in 2012. Since that time, we have repeatedly stated our intention to continue offering the SAILS tests as long as they prove useful to the higher education community. That promise continues to this day. The Association of College and Research Libraries rescinded the “Information Literacy Competency Standards for Higher Education” earlier this year, but we stand by our commitment to offer the SAILS tests well into the future. We know that many institutions want a long-term solution to information literacy assessment and SAILS is one such solution.

The SAILS tests will be available as long as they are needed. We continue to monitor how well the test items perform, to make updates to test items, and to improve the underlying systems. If you would like to discuss how the SAILS tests can help you and your institution, please contact us.