vol. 15 no. 3, September, 2010
In the library science field, assessment is an ongoing and essential component of information literacy instruction, although it can be difficult to put into effective practice. In 2003, the Association of College and Research Libraries approved guidelines for information literacy best practices, which emphasize ten categories of instructional efficacy, including assessment and evaluation (American Library Association 2003). In addition to measurable assessment, the Association's best practices (American Library Association 2006a) outline the need for student learning outcomes that mirror the performance indicators of the profession's widely accepted Information Literacy Competency Standards for Higher Education (American Library Association 2006b). While most academic librarians agree that assessment is important, there is a lack of consistency when it comes to implementation (Donovan and Winterman 2009). However, particularly in the current budgetary climate, a uniform approach to assessment is critical to improving the instructional programme and providing a systematic basis for institutional support.
San Jose State University administrators and disciplinary faculty ascribe great importance to information literacy instruction as is evidenced by the University Academic Senate's recommendations (San Jose State University Academic Senate 2004), the mission and shared values of the university (San Jose State University 2007) and its strategic learning goals (San Jose State University Library 2007). The university's assessment programme is further guided by the Western Association of Schools and Colleges (WASC) criteria for review during the accreditation process (Western Association of Schools and Colleges 2008). Consequently, an evidence-based approach to instruction is not only helpful to inform effective teaching models, but is required to maintain the integrity of disciplinary programmes across campus (Bogel 2008).
Currently, there is no consistent or uniform way of assessing student information literacy skills at San Jose State University. While academic librarians assess students' comprehension of information literacy concepts in a number of different ways, multiple-choice tests, or surveys, are common tools (Williams 2000). However, a review of the literature reveals that many survey instruments in the field are created without a process of standardisation to yield statistically sound data (Cameron et al. 2007). Moreover, most instruments that are standardised provide large-scale, general measures of student information literacy competence. Such surveys, consequently, lack ties to institution-specific learning outcomes. This raises the question: how can librarians develop a locally relevant, psychometrically sound tool to assess student learning given the time and resource constraints of the current academic environment? In addressing this question, the authors explored the use of an online survey assessment tool that could measure student learning at San Jose State University.
There is much literature on the use of surveys as assessment tools. Allen and Babbie note that 'survey research is perhaps the most frequently used mode of observation in the social sciences' (Allen and Babbie 2008: 366). In the library and information science field, the Multimedia Educational Resource for Learning and Online Teaching (2009) site offers a number of survey assessment tools for educators. Additionally, Merz and Mark (2002) have compiled examples of information literacy assessment instruments developed by librarians at different higher education institutions for shared learning opportunities.
Many studies have shown that pre- and post-tests are a successful means of measuring library instructional efficacy at the institution level. Koehler and Swanson (1988) worked with international students over a three-year period using such an approach. This longitudinal study employed pre-tests to assess student readiness and post-tests to measure changes in information literacy comprehension over the study's duration. The authors reported a marked increase in student competence levels. Similarly, Knight's (2002) use of pre- and post-tests during student instruction resulted in student improvement on every question relating to library research skills. In another study, Jackson (2006) noted a six percent improvement in student comprehension of plagiarism concepts through an analysis of pre- and post-test scores related to an online tutorial.
At present, few authors have used standardised, local instruments to determine whether student respondents interpret survey questions correctly. Non-standardised questions run the risk of being confusing, misleading, or biased. Without appropriate methods of developing assessment instruments, test score improvement as an indicator of student learning is called into question. Cameron et al. note that 'there is a need for reliable and valid data on student learning outcomes' (Cameron et al. 2007: 230).
A few studies have applied survey assessment tools that have been expertly evaluated. Gilstrap and Dupree (2008) describe the use of Brookfield's Critical Incident Questionnaire (1995), a tool that prompts students to write open-ended responses to critical incidents, or moments of learning recognition, they experience during an instruction session. In another study, James Madison University librarians and faculty developed the Information-Seeking Skills Test, a discipline-based, online survey instrument for measuring student information literacy competence at the first-year level (Cameron et al. 2007). Moreover, at Central Michigan University, librarians used pre- and post-tests to measure the impact of library instruction on students' information literacy skills through the use of the Research Readiness Self-Assessment tool, which was evaluated for validity and reliability (Mathson and Lorenzen 2008).
While there are a few standardised instruments useful at the institution level, most are developed as tests of general information literacy skills geared towards the larger academic community. The Research Readiness Self-Assessment tool, originally developed for local institutional needs, has now been expanded for worldwide application by other academic institutions, whose specific needs may or may not be met (Mathson and Lorenzen 2008). Similarly, James Madison University educators developed their local Information-Seeking Skills Test into the Information Literacy Test for use by other academic institutions (Cameron et al. 2007). Blixrud (2003) explains that the Standard Assessment of Information Literacy Skills test measures data on student information literacy skills at academic institutions on a national level. The tool is further described as one that 'contains items not specific to a particular institution or library' (Kent State University 2008: para. 3). The Tool for Real-Time Assessment of Information Literacy, developed by the Institute for Library and Information Literacy Education and Kent State University Libraries, is a free, online survey instrument used to assess 'skills and concepts generally considered essential to information literacy' (Schloman and Gedeon 2007: 2). Additionally, the Information and Communication Technology Literacy Test (Kenney 2006), now called iSkills, is a large-scale test (Rockman and Smith 2005) available for purchase to assess students' general competency in information and communication technology (Educational Testing Service 2009) through the replication of real-world, online tasks (Somerville et al. 2008).
The California State University has endorsed iSkills as a tool for measuring student information communication technology proficiency (California State University 2007). However, as the University's Information/ICT Literacy Strategic Planning Committee has acknowledged the need for assessment at 'systemwide and campus levels,' a more precise method of measuring student learning at the local, San Jose State level is necessary (California State University [n.d.]: 2). A thoughtfully-designed, local tool could collectively address national, campus and departmental assessment standards in a cost-effective manner that more accurately identifies areas for programmatic improvement at the University Library.
To investigate this issue, an online application allowing librarians to select expertly-evaluated, multiple-choice questions for use during library instruction sessions was developed. The tool generates automated pre- and post-test surveys containing questions that match the specific learning outcomes of particular courses across campus disciplines at San Jose State University (see Table 1 for learning outcomes applied in this study). Using this tool, the authors launched a case study whose findings will be analysed in greater depth during future studies.
Specifically, the authors explored a programmatic model for information literacy assessment. This involved developing standardised, multiple-choice questions and entering them into the online assessment tool. An important piece of this process entailed gauging whether it was feasible to use the tool during one-off library instruction sessions and, if so, gaining some preliminary insight from test score comparisons.
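The internals of the assessment application are not described here, so the following Python sketch is purely illustrative of the general approach: questions tagged with learning outcomes are drawn from a bank to assemble the survey for a given session. The class, function and outcome names, as well as the answer choices shown, are assumptions, not features of the actual SJSU tool.

```python
# Hypothetical sketch of filtering a question bank tagged with learning outcomes
# into a pre-/post-test survey. Names, outcomes and answer choices are invented
# for illustration; they are not taken from the actual SJSU assessment tool.
from dataclasses import dataclass, field


@dataclass
class Question:
    text: str
    choices: list                                  # distracters plus a 'Not sure' option
    correct: str
    outcomes: set = field(default_factory=set)     # learning outcomes the item tests


def build_survey(bank, selected_outcomes):
    """Return the questions whose tagged outcomes overlap those chosen for a session."""
    return [q for q in bank if q.outcomes & set(selected_outcomes)]


if __name__ == "__main__":
    bank = [
        Question("What is an empirical study?",
                 ["A study based on observation or experiment", "A literature review",
                  "An opinion piece", "A book chapter", "Not sure"],
                 "A study based on observation or experiment",
                 {"distinguish research methods"}),
        Question("How can you tell you are reading a popular magazine?",
                 ["No cited references and a general audience", "Peer review",
                  "An abstract", "A methods section", "Not sure"],
                 "No cited references and a general audience",
                 {"popular vs scholarly literature"}),
    ]
    for q in build_survey(bank, ["popular vs scholarly literature"]):
        print(q.text)
```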
The authors chose a pre- and post-test survey research design to discern differences in student scores before and after a library instruction session. The intent was to gain measures of formative and summative assessment (evaluation of student achievement as well as instructional efficacy on the part of librarians) in making decisions about programme improvement (The Center for Effective Teaching and Learning at the University of Texas at El Paso [n.d.]).
Pre- and post-test surveys were created from a pool of multiple-choice questions covering various information literacy concepts in the social sciences, including citation analysis, identification of scholarly sources, appropriate subject databases and library Web site navigation (see Appendix A).
Psychology falls within the broader area of the social sciences and is part of the College of Social Sciences at San Jose State University. Consequently, the eleven questions were developed based on learning outcomes identified by a team of librarians specialising in the social sciences at San Jose State University. These librarians worked together to formulate information literacy learning outcomes reflecting general education needs across the social sciences. Reference statistics, from both the Main Reference Desk and in-person consultations, also influenced the process. For future studies, the authors plan to customise survey questions for particular subject areas whenever possible. Ultimately, all five junior-level psychology courses received the same eleven survey questions testing information literacy skills. See Table 1, below, for more details.
Additionally, all surveys administered to students in junior-level psychology courses contained six background questions before the eleven questions testing students' knowledge of research skills. These questions were meant to provide descriptive statistics and, through future studies, identify variables that might affect a student's incoming research experience independent of the actual instruction session. Background questions included whether students had received library instruction before, how often they conducted research at the library (both in person and remotely), student class level and student major, among others (see Appendix A).
The aforementioned questions were entered into the online assessment tool. The creation of survey questions involved consultation with San Jose State University librarians, existing information literacy tutorials and survey instruments such as the University of Texas Information Literacy Tutorial (1998), which has been adapted by various other universities. Each question was linked to applicable Association of College and Research Libraries standards and performance indicators (American Library Association 2006b), American Psychological Association undergraduate learning goals (American Psychological Association 2007) and social sciences learning outcomes developed by San Jose State University librarians for general education courses. See Table 1 for more details.
Question | Social Sciences Learning Outcomes | ACRL Standards / Performance Indicators | APA Learning Goals |
---|---|---|---|
1. Imagine you have an assignment to write a paper based on scholarly information. Which would be the most appropriate source to use? | Understand the difference between popular and scholarly literature | Articulate and apply initial criteria for evaluating both the information and its sources | Use selected sources after evaluating their suitability based on appropriateness, accuracy, quality, and value of the source |
2. How can you tell you are reading a popular magazine? | Understand the difference between popular and scholarly literature | Articulate and apply initial criteria for evaluating both the information and its sources | Use selected sources after evaluating their suitability based on appropriateness, accuracy, quality, and value of the source |
3. What is the name of the linking tool found in SJSU databases that may lead you to the full text of an article? | Determine local availability of cited item and use Link+ and interlibrary loan services as needed | Determine the availability of needed information and make decisions on broadening the information seeking process beyond local resources | Demonstrate information competence and the ability to use computers and other technology for many purposes |
4. In considering the following article citation, what does 64(20) represent? Kors, A. C. (1998). Morality on today's college campuses: The assault upon liberty and dignity. Vital Speeches of the Day, 64(20), 633-637. | Identify the parts of a citation and accurately craft bibliographical references | Differentiate between the types of sources cited and understand the elements and correct syntax of a citation for a wide range of resources | Quote, paraphrase and cite correctly from a variety of media sources |
5. In an online database which combination of keywords below would retrieve the greatest number of records? | Conduct database searches using Boolean strategy, controlled vocabulary and limit features | Construct and implement effectively-designed search strategies | Formulate a researchable topic that can be supported by database search strategies |
6. If you find a very good article on your topic, what is the most efficient source for finding related articles? | Follow cited references to obtain additional relevant information | Compare new knowledge with prior knowledge to determine the value added | Locate and use relevant databases…and interpret results of research studies |
7. What is an empirical study? | Distinguish among methods used in retrieved articles | Identify appropriate investigative methods | Explain different research methods used by psychologists |
8. Which area of the SJLibrary.org web site provides a list of core databases for different student majors? | Identify core databases in the discipline | Select the most appropriate investigative methods or information retrieval systems for accessing the needed information | Locate and choose relevant sources from appropriate media |
9. What does the following citation represent: Erzen, J. N. (2007). Islamic aesthetics: An alternative way to knowledge. Aesthetics and Art Criticism, 65(1), 69-75. | Identify the parts of a citation and accurately craft bibliographical references | Differentiate between the types of sources cited and understand the elements and correct syntax of a citation for a wide range of resources | Identify and evaluate the source, context and credibility of information |
10. If you are searching for a book or article your library does not own, you can get a free copy through: | Determine local availability of cited item and use Link+ and Interlibrary Loan services as needed | Determine the availability of needed information and make decisions on broadening the information seeking process beyond local resources | Locate and choose relevant sources from appropriate media |
11. How would you locate the hard-copy material for this citation? Erzen, J. N. (2007). Islamic aesthetics: An alternative way to knowledge. Aesthetics and Art Criticism, 65(1), 69-75. | Search library catalog and locate relevant items | Use various search systems to retrieve information in a variety of formats | Locate and use relevant databases…and interpret results of research studies |
As outlined in the literature, best practices were followed in developing the survey questions. Hansen and Dexter provide a valuable set of guidelines on how to write quality multiple-choice questions, noting they 'can be used to measure a range of learning outcomes and can provide a reliable assessment of a student's progress' (Hansen and Dexter 1997: 1). Many of their guidelines align with those that Allen and Babbie (2008) recommend, including creating short, clear questions that are culturally sensitive, avoiding negative words such as not in question statements and steering clear of biased language that may lead participants to answer correctly or in a way that controls response outcomes.
The creation of multiple-choice questions was undertaken with an understanding of their limitations. Much has been written about the disadvantages of fixed-choice questions, including their testing of memory recall rather than higher order thinking skills (Oakleaf 2008). They are further criticised for rewarding guessing (Oakleaf 2008). Additionally, Carter (2002) discusses the benefits of Barclay's (1993) method of using open-ended questions because they encourage students' natural thought process, a closer reflection of the real-world research process as opposed to the artificial test environment posed by multiple-choice questions (Oakleaf 2008).
We decided on multiple-choice questions for this study to allow for immediate computation of results. Past experience at the San Jose State University Library has shown that student commentary is time-consuming to code and standardise. Given the recent California State University budget cuts resulting from state-wide deficits, fixed-choice questions offer a practical and inexpensive alternative to more resource-intensive approaches to data analysis for organisation-wide assessment.
To address some of the drawbacks of quantitative data collection, each multiple-choice question had four distracters (i.e., incorrect responses). This increased the level of thought necessary to eliminate perceived wrong answers (Jensen et al. 2006). Survey questions also had an option labelled not sure to minimise guessing (Radcliff et al. 2007). This wording was used to lessen the stigma associated with a don't know response. The option also ensured that every question had an answer students could honestly select, particularly if none of the other choices seemed correct (Radcliff et al. 2007). To further encourage students to choose the not sure option if they did not know the correct answer, scripted instructions before the pre-test emphasised that scores were not being graded or shared with others and that answering truthfully would help librarians improve instructional services.
Many students chose the not sure option in this study, particularly on the pre-test, suggesting this was an effective strategy for reducing student guessing. Future studies in which students are surveyed on their guessing behaviour would need to be conducted to explore this in more depth. See Appendix D for more details.
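As a purely illustrative aid, and not part of the authors' tool, the short Python sketch below shows how the question-writing guidelines above and the four-distracter-plus-not sure structure could be checked automatically before an item enters an assessment bank. The word list, length limit and function name are assumptions.

```python
# Illustrative checks loosely based on the guidelines cited above: short, clear
# stems, no negative words such as 'not', and four distracters plus a 'Not sure'
# option. This sketch is not taken from the authors' assessment tool.
NEGATIVE_WORDS = {"not", "never", "except"}   # assumed list for illustration
MAX_STEM_WORDS = 40                           # assumed length limit


def check_question(stem, choices, correct):
    """Return a list of guideline issues found in one multiple-choice item."""
    issues = []
    words = stem.lower().split()
    if len(words) > MAX_STEM_WORDS:
        issues.append("stem is long; consider shortening")
    if NEGATIVE_WORDS & set(words):
        issues.append("stem contains a negative word")
    if "not sure" not in (c.lower() for c in choices):
        issues.append("missing a 'Not sure' option")
    distracters = [c for c in choices if c != correct and c.lower() != "not sure"]
    if len(distracters) != 4:
        issues.append(f"expected 4 distracters, found {len(distracters)}")
    return issues


# Example usage with an invented item (returns an empty list, i.e. no issues):
print(check_question("What is an empirical study?",
                     ["A study based on observation", "A literature review",
                      "An editorial", "A book review", "An opinion essay", "Not sure"],
                     "A study based on observation"))
```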
All survey questions were reviewed to ensure they met current standards of quality assurance. Creating an effective survey instrument involved the evaluation of multiple-choice questions by individual undergraduate students, faculty members from the Psychology Department and experts at the San Jose State University Office of Institutional Research.
Questions and learning outcomes were first emailed to approximately fifty-five psychology faculty members for feedback. A few responded requesting that additional questions be added to the tool. These were incorporated into the survey instrument. Subsequently, questions were reviewed by experts at the Center for Assessment under the Office of Institutional Research. They provided valuable guidance on how to clarify wording and avoid standard pitfalls in developing survey questions. The authors found that the survey questions and their connection to various educational standards encouraged effective collaboration with campus offices and psychology faculty members to optimise student learning, a process that has been an ongoing challenge at the University.
After the initial review process by psychology faculty members and assessment experts, the principal author conducted cognitive interviews with students to further address the clarity of survey questions. Five undergraduate students representing different ethnic backgrounds participated in the one-on-one cognitive interviews for this study. While the principal author would have liked to interview more students, the interview sessions were time-consuming and it was difficult to solicit volunteers for a one-hour time commitment. Nonetheless, Allen and Babbie note that 'the pretest sample can be small—10 people or less' (Allen and Babbie 2008: 211). During each interview session, a set of scripted instructions adapted from a tool developed by Willis (2005) was read to the interviewee. Each student was asked to read the survey questions, thinking aloud about their clarity. Additionally, two scripted probing questions were asked of students once they finished commenting on each survey question:
See Appendix B for more details regarding the script.
This study involved a hybrid method of cognitive interviewing in pre-testing survey questions (Beatty and Willis 2007). Rather than interviewing students with strictly scripted questions and no oral intervention (or on the other extreme, conducting inconsistent, probing interviews across interviewees) the hybrid method of cognitive interviewing allowed participants to think aloud about each survey question while the interviewer followed up with scripted questions asked consistently of all interviewees. This model had the advantage of gathering uninfluenced responses from participants with a systematic means of clarifying them.
Much has been written on the value of cognitive interviews in improving the validity and reliability of survey questionnaires. Desimone and Le Floch note '[t]oo often we create inquiry tools without validating our measures against how respondents interpret our questions and therefore collect data of questionable quality' (Desimone and Le Floch 2004: 18). In one study, a combination of expert advice and qualitative methods, including cognitive interviews, was used to improve a nationally disseminated student survey (Ouimet et al. 2004). In another study, Hughes (2004) found that cognitive interviews assisted in revealing comprehension problems of survey questions.
Based on student feedback in this study, survey questions were further refined. Student interviewees had trouble with the term currency in one of the survey questions. This word was used to indicate a current, or up-to-date source, whereas students interpreted it to mean one that would cost money. Hence the question was re-worded to clarify its intended meaning. See Table 1 for more details.
Once the survey questions were evaluated for quality, a pilot study was conducted during the 2008 Winter Session with students in an upper division psychology course.
The purpose of the pilot study was to determine the average length of time needed to complete the surveys, the overall impact of the surveys on the delivery of content in the instruction session, whether there were any technical problems with the survey interface (such as login and usability problems) and whether survey questions continued to be clear and comprehensible to students.
The pilot study revealed that survey administration took a total of ten minutes on average (five minutes for the pre-test and five minutes for the post-test) and did not negatively impact the coverage of content during the information literacy session. One problem arose when students taking the post-test entered their login incorrectly or with typing errors. However, this was easily remedied by having them re-enter the login with deliberate care.
None of the students raised questions about the wording or clarity of the survey instrument during the pilot test. However, the principal author did not solicit this input from them, mainly because the short time-frame of the instruction session was a concern. Nevertheless, it was encouraging that no confusion was raised, despite the possibility of remaining ambiguities.
Approval was obtained from the campus Institutional Review Board and student volunteers were solicited from five upper division psychology courses to participate in this assessment study. The upper division psychology course was targeted in particular because it fulfils a strong writing and research component of the curriculum. Psychology faculty members teaching five of the nine sections offered during the Spring 2009 semester requested library instruction sessions and consented to student participation. Psychology courses were specifically identified for this study because the principal author served as the Psychology Librarian at that time, which made coordination of the library instruction sessions straightforward.
Since the core assignments, learning outcomes, survey questions and library instruction treatments were the same, the authors combined student results for all five sections. Students who participated in the pre- and post-tests during a regularly scheduled library instruction session comprised the entire survey population (n = 83). Most were junior-level undergraduates (69.9%) with a smaller percentage of senior-level undergraduates (30.1%).
The junior-level psychology courses are typically geared towards those majoring in the department, which accounts for the high percentage of psychology majors (92.8%). Other majors included Social Work (2.4%), Communicative Disorders and Sciences (2.4%), Economics (1.2%) and Justice Studies (1.2%).
Further demographic information is included in Appendix A and will be used for more in-depth analysis in future studies.
The study was conducted in the Spring 2009 semester. The principal author conducted all psychology instruction sessions because she was the only librarian dedicated to the subject of psychology at the time of the study.
As with the pilot study, the locally-designed, assessment application was used to create a survey containing expertly-evaluated multiple-choice questions matching social sciences learning outcomes. The principal author coordinated with psychology faculty members in selecting questions relevant to their course goals and stressed that students be prompt to ensure adequate time for assessment and instruction.
Once the appropriate survey questions were chosen, the assessment tool automated the development of the customised survey instrument. The newly developed survey was then linked from the Library's psychology research guide, an online resource used as a starting point for the instruction sessions. After each session, the assessment tool provided immediate access to student scores in HTML, Excel, or Word format for review.
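The article does not describe how the tool produces these exports. As a rough illustration only, the snippet below shows how per-session scores could be written to a spreadsheet using pandas (an assumed dependency, along with openpyxl); the column names and anonymous logins are invented.

```python
# Hypothetical export of per-student scores, analogous to the HTML/Excel/Word
# output described above. pandas and openpyxl are assumed dependencies; the
# column names and anonymous logins are invented for illustration.
import pandas as pd

scores = [
    {"login": "anon001", "test": "pre",  "q1": 1, "q2": 0, "q3": 1},
    {"login": "anon001", "test": "post", "q1": 1, "q2": 1, "q3": 1},
]

df = pd.DataFrame(scores)
df.to_excel("session_scores.xlsx", index=False)   # or df.to_html() / df.to_csv()
```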
Before each instruction session, the principal author set up all computer stations in the classroom to show the direct link to the pre-test survey. This was done in an effort to save time once the students arrived. All five psychology library instruction sessions took place in the same computer laboratory to ensure environmental consistency. Course outlines and handouts were disseminated to students after they completed the post-test at the end of the instruction session. This was to encourage reliance upon their own incoming knowledge and subsequent understanding of information literacy concepts.
Before instruction, the principal author read a script (see Appendix C) explaining the purpose of the study, emphasizing anonymity, confidentiality, voluntary participation and the benefits to the programme of a better understanding of student learning needs. Voluntary participation was an intentional part of the research design and is supported as an effective means of gathering accurate data. Portmann and Roush (2004) conducted pre- and post-test surveys during a library instruction intervention and found that students do not provide thoughtful responses when motivated by other means, such as receiving extra credit.
The principal author was careful to cover all learning objectives during the instruction session without referring to content in the direct wording of the multiple-choice survey questions. This would have encouraged memory recall rather than critical thinking on the post-test. Each instruction session lasted seventy-five minutes. This allowed for about an hour of research orientation and active learning exercises before the post-test.
Participants accessed the pre-test survey through their student identity number. They answered six background questions, then eleven multiple-choice questions covering eight learning objectives related to their course. After the instructional intervention, participants used the same student number to access the post-test. Using the student number ensured a unique login for each participant. This login also served to link individual student pre-test scores with corresponding post-test scores.
The post-test contained the same multiple-choice survey questions as the pre-test to provide a direct comparison of scores in assessing the impact of library instruction on student learning. However, the post-test did not repeat the background questions contained on the pre-test as these data had already been collected and would not change.
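The pairing logic implied here can be summarised in a brief Python sketch, shown below with an assumed record layout (the tool's actual data schema is not documented): pre- and post-test records are joined on the anonymous login, unpaired students are dropped and per-question percentages correct are computed as in Table 2.

```python
# Sketch of linking pre- and post-test records on the anonymous login and
# computing per-question percent correct (cf. Table 2). The record layout
# below is assumed for illustration, not taken from the actual tool.
def pair_records(pre_rows, post_rows):
    """Keep only students who completed both tests (n = 83 in this study)."""
    post_by_login = {row["login"]: row for row in post_rows}
    return [(row, post_by_login[row["login"]])
            for row in pre_rows if row["login"] in post_by_login]


def percent_correct(pairs, question, which):
    """Percent of paired students answering `question` correctly on 'pre' or 'post'."""
    idx = 0 if which == "pre" else 1
    correct = sum(pair[idx]["answers"][question] for pair in pairs)  # answers are 1/0
    return 100.0 * correct / len(pairs)


# Example with two invented students:
pre = [{"login": "a1", "answers": {"q5": 0}}, {"login": "a2", "answers": {"q5": 1}}]
post = [{"login": "a1", "answers": {"q5": 1}}, {"login": "a2", "answers": {"q5": 1}}]
pairs = pair_records(pre, post)
print(percent_correct(pairs, "q5", "post") - percent_correct(pairs, "q5", "pre"))  # 50.0
```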
While a post-test given directly after a library instruction session may lead to reliance on memory recall, every effort was made to avoid this problem as well as other limitations associated with multiple-choice tests (see Question development above for more details). Nevertheless, in a future study, the authors plan to investigate whether students retain information provided in library instruction sessions over a longer period of time. Such a longitudinal study would involve administering the pre-test to a class at the beginning of a semester and administering the post-test to the same class at the end of the semester. Collecting data from the same set of students would ensure a direct comparison of the before and after data.
Ninety-four students initially participated in the assessment study. However, students who completed the pre-test survey but failed to take the post-test survey were automatically eliminated from the data pool, which resulted in eighty-three valid student participant data sets. Unfortunately, we were unable to ascertain why some students did not complete the post-test. The Institutional Review Board mandates student anonymity and, as a result, individual identities were not available to the authors. The Board guidelines also stipulate that student participation be voluntary. Consequently, even if the authors had been privy to personal profiles, it would have been unethical to solicit information about students' lack of participation.
The authors provide a bird's-eye view of pre- and post-test scores, laying the groundwork for future research projects and general directions in library instruction.
One overall finding was that student scores improved on every survey question after the students received an instructional intervention. Of particular note, the entire student population (n = 83) scored highly on questions about scholarly and popular sources on the post-test survey. Every student answered Question 1 of the post-test correctly, and most (91.6%) answered Question 2 of the post-test correctly (see Table 2).
However, pre-test scores for Question 1 and Question 2 showed a gap in student score ranges. While 92.8% of participants responded correctly to Question 1 in the pre-test, only 69.9% provided correct answers for Question 2 on this same topic. This suggests that multiple survey questions covering different aspects of an information literacy concept are needed to test student comprehension thoroughly.
While students scored well on questions related to scholarly and popular literature, they had the most difficulty identifying the Web page containing core databases for different student majors (Question 8). On the pre-test, only 50.6% answered this question correctly. A possible reason for this may be that the Web page, at the time, was labelled SJSU Research Topics. Such terminology may have caused confusion, as research topics does not adequately translate to core databases for different student majors. This interpretation is also supported by the post-test results for Question 8: only 59.0% of participants answered it correctly, the lowest post-test score, despite repeated exposure to the SJSU Research Topics Web page during the library instruction session. Given the emphasis placed on this Web page for identifying subject-specific resources, the small difference between pre- and post-test scores (8.4%) indicates that, at the least, students answered Question 8 without much reliance on memory recall.
Before the library instruction session, students did fairly well in identifying parts of a journal citation (84.3% answered Question 4 correctly). However, there was confusion over what type of source a particular citation represented (67.5% answered Question 9 correctly). Additional pre-test difficulties arose in understanding Boolean logic (39.8% answered Question 5 correctly), the value of article bibliographies in discovering other related sources for research assignments (47.0% answered Question 6 correctly), how to obtain material the library does not own from other services (49.4% answered Question 10 correctly) and, of particular note, how to locate a hard-copy version of an article (33.7% answered Question 11 correctly). See Table 2 for more details.
In the authors' experience from classroom discussions, office hour consultations and reference desk interactions, the most common research request is for immediate access to full-text articles. Given this attention to instantaneous document retrieval, a logical shortfall is that students are probably less skilled at negotiating the print journal collection. As the survey pre-test scores show, many students are unaware of how to look up a journal title in the library catalogue, get a call number and check the physical volumes to obtain an article that is unavailable online (Question 11). Similarly, students are unfamiliar with how to acquire materials unavailable at the San Jose State University Library, perhaps, again, because this requires additional time. This may account for why only 49.4% of respondents answered Question 10 correctly on the pre-test.
However, students showed the greatest improvement in post-test scores for Question 10 and Question 11. Their awareness of interlibrary lending services increased by 36.1% on the post-test (85.5% answered correctly). Similarly, students' understanding of how to locate print journals improved by 36.2% on the post-test (69.9% answered correctly), although the post-test scores on this topic still seem poor.
Student post-test scores for Question 5 on Boolean logic and Question 6 on the importance of bibliographies also raise questions about the significance of results. On Question 5, student scores improved by 25.2% on the post-test (65.0% answered correctly) and on Question 6, there was a 25.3% increase in post-test scores (72.3% answered correctly). However, is a correct-response rate of roughly seventy percent an acceptable benchmark? Even at that level, roughly twenty-three of the eighty-three respondents would still have trouble understanding a basic research concept.
Determining an acceptable score range for Question 7 (75.9% answered correctly on the post-test), which tests student comprehension of different research methodologies, is even more problematic. One could argue that an acceptable score for this survey question should be higher than the norm since junior-level psychology students are often required to find empirical studies in supporting their research. They may be able to find relevant articles without understanding and conducting Boolean searches (Question 5), but they would not be able to fulfil their assignments without understanding the definition of an empirical study.
While this exploratory study provides general points for discussion of student learning needs, future research and an analysis of students' incoming experience with library instruction will allow the authors to comment more thoroughly on the significance of student test scores.
Question | Pre-test Score* | Post-test Score* | Difference |
---|---|---|---|
1. Imagine you have an assignment to write a paper based on scholarly information. Which would be the most appropriate source to use? | 92.8% (77) | 100.0% (83) | +7.2% |
2. How can you tell you are reading a popular magazine? | 69.9% (58) | 91.6% (76) | +21.7% |
3. What is the name of the linking tool found in SJSU databases that may lead you to the full text of an article? | 73.4% (61) | 88.0% (73) | +14.6% |
4. In considering the following article citation, what does 64(20) represent? Kors, A. C. (1998). Morality on today's college campuses: The assault upon liberty and dignity. Vital Speeches of the Day, 64(20), 633-637. | 84.3% (70) | 91.6% (76) | +7.3% |
5. In an online database which combination of keywords below would retrieve the greatest number of records? | 39.8% (33) | 65.0% (54) | +25.2% |
6. If you find a very good article on your topic, what is the most efficient source for finding related articles? | 47.0% (39) | 72.3% (60) | +25.3% |
7. What is an empirical study? | 57.8% (48) | 75.9% (63) | +18.1% |
8. Which area of the SJLibrary.org web site provides a list of core databases for different student majors? | 50.6% (42) | 59.0% (49) | +8.4% |
9. What does the following citation represent: Erzen, J. N. (2007). Islamic aesthetics: An alternative way to knowledge. Aesthetics and Art Criticism, 65(1), 69-75. | 67.5% (56) | 83.1% (69) | +15.6% |
10. If you are searching for a book or article your library does not own, you can get a free copy through: | 49.4% (41) | 85.5% (71) | +36.1% |
11. How would you locate the hard-copy material for this citation? Erzen, J. N. (2007). Islamic aesthetics: An alternative way to knowledge. Aesthetics and Art Criticism, 65 (1), 69-75. | 33.7% (28) | 69.9% (58) | +36.2% |
In addition to exploring an institution-specific model for information literacy assessment, another purpose of this study was to determine the feasibility of administering pre- and post-test surveys during the short period of time in which library instruction is provided. This is critical to optimising student learning opportunities and convincing other librarians to use the tool during their instruction sessions. Findings from this study demonstrate a feasible means of assessing student information literacy competence in the classroom setting, given close coordination with campus faculty.
Nevertheless, the condensed time-frame of information literacy instruction sessions merits further attention, particularly since embedded assessment requires vigilant organisation and limits what can be measured.
This study explored the assessment of eight student learning outcomes, permitting only a short period of time to address each one during the hour allotted for instruction. Such broad content coverage can overwhelm students and reduce the reliability of assessment efforts (Baume 2001). Consequently, the large number of learning outcomes tested in the study may account for why students performed better on some questions than on others. Moreover, eleven multiple-choice questions seem inadequate to test students sufficiently on eight different research skills. Ideally, two to four questions should be developed for each learning outcome (Persky and Pollack 2008); by that guideline, eight outcomes would call for roughly sixteen to thirty-two questions. Unfortunately, the authors were unable to lengthen the survey because of time constraints. However, recent attention to electronic teaching aids offers promising support for maximising student learning through more focused library instruction content.
For the past several years, the California State University has assembled a core set of digital learning objects to support system-wide information literacy instruction programmes (California State University 2009). Digital learning objects, by definition, are a derivative of the object-oriented programming concept in that they are succinct, self-contained pieces of instructional media (Hunsaker et al. 2009) covering individual research skills. They are easily reusable (Clyde 2004) and do not require students to take an hour-long, comprehensive tutorial to gain proficiency in one specific research activity.
To date, San Jose State University librarians have developed a number of digital learning objects covering content such as scholarly and popular research material, the use of call numbers and the use of a popular database (Academic Search Premier). The authors recommend the continued development of digital learning objects that educate students on basic skills such as the identification of particular sources, learning the classification system, using Boolean search operators, refining a research topic and understanding plagiarism. The California State University digital learning objects core team has identified similar digital content addressing many of these information literacy concepts; these interactive tools should be referenced from the library Web site rather than reinvented locally (California State University 2009).
At San Jose State University, a growing number of courses and programmes are available only in an online format. In this electronic learning environment, students are increasingly expected to investigate and familiarise themselves with web-based research tools on their own to achieve academic success. A thoughtful approach to building a digital learning object library that supports and tests student learning of particular research skills would greatly enhance the information literacy programme. Currently, students may or may not stumble across digital learning objects available on the library Web site; there is no infrastructure that ties digital learning objects to specific programmes on campus. Ideally, librarians would select digital learning objects appropriate for particular disciplines and courses that could be embedded within campus curricula and required for students to complete as part of their academic programme. This would allow librarians to focus on a small set of course-specific learning outcomes during instruction sessions rather than trying to cover every aspect of the research process in a short timeframe.
In addressing fewer learning outcomes, librarian instructors gain the added benefit of assigning more questions for each research concept being tested. For example, instead of selecting eleven survey questions to test eight learning outcomes, a library instructor could use the same number of questions to test three learning outcomes, thus providing a more reliable assessment of student learning.
Based on findings from this study and characteristics of quality digital learning objects, the authors have considered how to modify future assessment surveys for a more efficient and comprehensive examination of student learning.
A desirable feature of digital learning objects is re-usability: standalone training modules that are not tied to a particular institution through branding and can easily be adapted, or re-purposed, by others (Watson 2010). This reduces the amount of time and money spent creating training materials that support library instruction programmes. Digital learning object re-usability can be taken a step further to mean the use of material that is unchanging, content that can be re-used indefinitely without having to be updated. For example, time-tested content such as Boolean logic, citation analysis and the difference between scholarly and popular literature lends itself better to the digital learning object format than learning how to use a popular database, the interface of which will change from time to time.
In reviewing the information literacy concepts covered by the current assessment survey, the authors would probably retain Question 3 on the linking tool found in San Jose State University databases and Question 8 on the area of the library Web site that lists databases by major, relying on digital learning objects to cover the remaining topics. More questions (two to four for each learning objective) testing students' knowledge of subject-specific resources could be added to the survey and form the basis of the library instruction session. Such an approach would permit librarians to focus on content that is more complex and evolving, while topics of a static nature could be addressed by digital learning objects.
In analysing student pre- and post-test scores, the authors have targeted preliminary ways of improving instructional efficacy and information literacy skills. Additionally, they recommend analysing a wider range of data in future assessment studies to reflect a solid, scientific approach.
To this end, the authors continue to coordinate with colleagues at San Jose State University's Office of Institutional Research in reviewing new survey questions for future studies. Through this dialogue, assessment specialists have proposed that a stronger basis for analysis would enrich assessment findings. Consequently, the principal author has applied for and been granted funding to hire a statistician. The statistician will build a mathematical template incorporating student background variables, their relationships with each other and their collective impact on assessment results. As data are collected through online surveys, the template will automate statistical computations and reports, yielding a more complete picture of student research skills and instructional efficacy.
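The statistical template itself is not specified in the article. As one possible, illustrative approach (not necessarily the statistician's), paired binary pre-/post-test responses on a single question could be compared with McNemar's test; the sketch below assumes the statsmodels library and uses invented counts.

```python
# One possible analysis of paired pre-/post-test responses on a single question:
# McNemar's test on the 2x2 table of (pre correct?, post correct?). Illustrative
# only; the counts below are invented and do not come from the study's data.
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

# Rows: pre-test correct / incorrect; columns: post-test correct / incorrect.
table = np.array([[30,  3],    # correct on both / correct on pre only
                  [40, 10]])   # correct on post only / incorrect on both

result = mcnemar(table, exact=True)  # exact binomial test on the discordant cells
print(f"statistic = {result.statistic}, p-value = {result.pvalue:.4f}")
```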
Additionally, for future studies, the authors plan to:
A discipline-based assessment tool containing standardised, multiple-choice questions gave the authors a means of administering pre- and post-test surveys to evaluate student learning during library instruction sessions. The practicality of the tool and its impact on time management were tested with five junior-level psychology courses during the Spring 2009 semester. Preliminary data about student comprehension of information literacy concepts were also gathered.
While scores improved for every survey question, some concepts were more difficult than others for student participants to understand. Student comprehension was likely to have been affected by the condensed time-frame of instruction sessions and the limited ability of the library instructor to provide in-depth coverage of individual learning objectives. Given the inconsistency among students' post-test scores across survey questions, the authors recommend that library instructors focus on a few key concepts. This will maximise instruction time and students' ability to reliably comprehend what is taught. The strategic development of digital learning objects that address basic learning goals will make it possible for instructors to concentrate on a few course-specific skills such as navigating core subject databases. Future studies, however, will provide additional data analysis and reveal a more complete picture of student learning needs.
The authors hope the use of this survey application and a more thorough approach to statistical analysis will encourage a unified, organisation-wide approach to information literacy assessment at the San Jose State University Library. The ability to illustrate effective modes of teaching and programmatic improvement will not only enhance student learning, but strengthen librarian ties with campus faculty, bolster documentation during the accreditation process, provide publication opportunities for individual library faculty and increase possibilities for campus funding support of library services.
We wish to thank the following colleagues for their willingness to proofread our document and/or offer critical commentary: Dr. Berkeley Miller, Dr. Sutee Sujitparapitaya, Dr. Lili Luo, Dr. Mary Somerville, Christina Peterson (MLIS), Bridget Kowalczyk (MLIS), Diana Wu (MLIS), Francis Howard (MLIS) and Robert Bruce (MLIS).
Thanks to Rebecca Feind (MLIS) and Lydia Collins (MLIS) for providing feedback during the design of the online assessment tool. Also thanks to Lyna Nguyen and Jessie Cai for developing the computer programming for the tool.
Shannon M. Staley, the corresponding author, is a library faculty member at San Jose State University. Formerly the Psychology Librarian, she now serves as the Education Librarian. She provides instruction, collection development and research services for departments in the College of Education. She can be contacted at [email protected].
Nicole A. Branch and Tom L. Hewitt are graduate students at the School of Library and Information Science at San Jose State University. Nicole can be contacted at [email protected]. Tom can be contacted at [email protected].
Please answer every question below. All information you provide is confidential. This survey instrument and research project have been approved by Graduate Studies and will take about 5 minutes to complete. You can read more about your rights as a participant and who to contact with any questions.
1. Please indicate your Academic Level in School: Freshman - undergraduate / Sophomore - undergraduate / Junior - undergraduate / Senior - undergraduate / Graduate Student in Library & Information Science / Graduate Student in another program / Unclassified / Other
2. Please indicate your Gender: Female / Male
3. Please indicate your age: Less than 18 / 18 to 29 / 30 to 39 / 40 to 49 / 50 or older
4. Please specify your major:
If you have a double major or your major is not in the dropdown above, please indicate other(s) below: 1. 2. 3.
5. Did you begin college at San Jose State University or elsewhere: Started at SJSU / Started elsewhere
6. Have you received library instruction before at SJSU Library: Yes / No / Not Sure
7. In a typical 7-day week, about how many hours do you spend (in person or electronically) conducting research at King Library: None / 1-4 hours / 5-10 hours / 11-20 hours / More than 20 hours
1. Imagine you have an assignment to write a paper based on scholarly information. Which would be the most appropriate source to use?
2. How can you tell you are reading a popular magazine?
3. What is the name of the linking tool found in SJSU databases that may lead you to the full text of an article?
4. In considering the following article citation, what does 64(20) represent? Kors, A. C. (1998). Morality on today's college campuses: The assault upon liberty and dignity. Vital Speeches of the Day, 64(20), 633-637.
5. In an online database which combination of keywords below would retrieve the greatest number of records?
6. If you find a very good article on your topic, what is the most efficient source for finding related articles?
7. What is an empirical study?
8. Which area of the SJLibrary.org web site provides a list of core databases for different student majors?
9. What does the following citation represent: Erzen, J. N. (2007). Islamic aesthetics: An alternative way to knowledge. Aesthetics and Art Criticism, 65(1), 69-75.
10. If you are searching for a book or article your library does not own, you can get a free copy through:
11. How would you locate the hard-copy material for this citation? Erzen, J. N. (2007). Islamic aesthetics: An alternative way to knowledge. Aesthetics and Art Criticism, 65(1), 69-75.
Let me tell you a little bit about what we're doing. We're testing a new questionnaire with the help of students like you. Our goal is to get a better idea of how the questionnaire and corresponding multiple choice answers are working as far as clarity and understanding. So I'd like you to think aloud as you consider them. Tell me everything you are thinking about as we go over each one.
At times I'll stop to ask you more questions about terms or phrases in the questions and what you think the question is asking about. I'll also take notes.
Please keep in mind that I really want to hear all of your opinions and reactions. Don't hesitate to speak up whenever something seems unclear. It does not matter whether you know the correct answers to the questions or not. This is not a test.
Do you have any questions before we start?
As a student recipient of library instruction, you are being asked to complete an online survey prior to and after today's scheduled library instruction session. Your survey responses are anonymous and will not be tied in any way to your personal identity. Also, you are not being graded so please be honest in answering the questions. This will ensure that the results will assist librarians in developing more useful instructional strategies to support a diverse range of learning styles.
Your consent to participate is voluntary. No services of any kind, to which you are otherwise entitled, will be lost or jeopardized if you choose not to participate.
The survey should take about 5 – 10 minutes to complete. When you are done, minimize the browser window and open another browser window for the instruction session. For those who wish to participate, please pull up the survey from the open browser window on your task bar.
We greatly appreciate your help in making our instructional services better!
Question | No. of pre-test incorrect answers | No. of post-test incorrect answers |
---|---|---|
1. Imagine you have an assignment to write a paper based on scholarly information. Which would be the most appropriate source to use? | 6 (includes 3 not sures) | 0 |
2. How can you tell you are reading a popular magazine? | 25 (includes 12 not sures) | 7 (includes 0 not sures) |
3. What is the name of the linking tool found in SJSU databases that may lead you to the full text of an article? | 22 (includes 18 not sures) | 10 (includes 1 not sure) |
4. In considering the following article citation, what does 64(20) represent? Kors, A. C. (1998). Morality on today's college campuses: The assault upon liberty and dignity. Vital Speeches of the Day, 64(20), 633-637. | 13 (includes 4 not sures) | 7 (includes 0 not sures) |
5. In an online database which combination of keywords below would retrieve the greatest number of records? | 50 (includes 6 not sures) | 29 (includes 0 not sures) |
6. If you find a very good article on your topic, what is the most efficient source for finding related articles? | 44 (includes 10 not sures) | 23 (includes 1 not sure) |
7. What is an empirical study? | 35 (includes 7 not sures) | 20 (includes 1 not sure) |
8. Which area of the SJLibrary.org web site provides a list of core databases for different student majors? | 41 (includes 21 not sures) | 35 (includes 1 not sure) |
9. What does the following citation represent: Erzen, J. N. (2007). Islamic aesthetics: An alternative way to knowledge. Aesthetics and Art Criticism, 65(1), 69-75. | 26 (includes 14 not sures) | 14 (includes 4 not sures) |
10. If you are searching for a book or article your library does not own, you can get a free copy through: | 42 (includes 34 not sures) | 12 (includes 3 not sures) |
11. How would you locate the hard-copy material for this citation? Erzen, J. N. (2007). Islamic aesthetics: An alternative way to knowledge. Aesthetics and Art Criticism, 65(1), 69-75. | 55 (includes 27 not sures) | 25 (includes 4 not sures) |
© the authors, 2010. Last updated: 20 August, 2010