Improving the Clarity of Journal Abstracts in Psychology: The Case for Structure

James Hartley
An example abstract, shown in both formats:

Traditional version: Incidental and informal methods of learning to spell should replace more traditional and direct instructional procedures, according to advocates of the natural learning approach. This proposition is based on 2 assumptions: (a) spelling competence can be acquired without instruction, and (b) reading and writing are the primary vehicles for learning to spell. There is only partial support for these assumptions. First, very young children who receive little or no spelling instruction do as well as their counterparts in more traditional spelling programs, but the continued effects of no instruction beyond first grade are unknown. Second, reading and writing contribute to spelling development, but their overall impact is relatively modest. Consequently, there is little support for replacing traditional spelling instruction with the natural learning approach.

Structured version (opening): Background. Advocates of the 'natural learning' approach propose that incidental and informal methods of learning to spell should replace more traditional and direct instructional procedures.
Two sets of objective computer-based measures and two different subjective reader-based measures were then made using these two sets of abstracts. The two sets of computer-based measures were derived from (i) Microsoft's Office 97 package and (ii) Pennebaker's Linguistic Inquiry and Word Count (LIWC) (Pennebaker, Francis and Booth, 2001). Office 97 provides a number of statistics on various aspects of written text. LIWC counts the percentage of words in 71 different categories (e.g., cognitive, social, personal, etc.). (Note: when making these computer-based measures the sub-headings were removed from the structured versions of the abstracts.)
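The kinds of counts involved here are straightforward. The following Python sketch is illustrative only: the function name is invented, and the more-than-six-letters cut-off for 'longer' words is an assumption in the spirit of LIWC's long-word category, not a reproduction of either tool.

```python
import re

def text_statistics(abstract: str) -> dict:
    """Rough analogues of some measures used in this study: abstract
    length in words, average sentence length, and the percentage of
    'longer' words (here, words of more than six letters)."""
    words = re.findall(r"[A-Za-z']+", abstract)
    # A crude sentence splitter; adequate for a sketch, not for research use.
    sentences = [s for s in re.split(r"[.!?]+", abstract) if s.strip()]
    return {
        "length_in_words": len(words),
        "average_sentence_length": len(words) / max(len(sentences), 1),
        "percent_longer_words": 100 * sum(len(w) > 6 for w in words)
                                    / max(len(words), 1),
    }
```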
The two reader-based measures were (i) the average scores on ratings of the presence or absence of information in the abstracts, and (ii) the average scores on ratings of the clarity of the abstracts given by authors of other articles in the Journal of Educational Psychology (JEP). The items used for rating the information content are shown in Appendix 1. Respondents recorded a 'Yes' response (or not) to each of 14 questions, and each abstract was awarded a total score based on the number of 'Yes' decisions recorded. In this study two raters independently made these ratings for the traditional abstracts and then met to agree their scores. The ratings for the structured abstracts were then made by adding points for the extra information included in their creation.
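The scoring rule itself is simple enough to state directly (a minimal sketch, assuming each rater's decisions are recorded as one boolean per checklist item):

```python
def checklist_score(answers: list[bool]) -> int:
    """Information score for one abstract: the number of 'Yes'
    decisions across the 14 checklist items listed in Appendix 1."""
    if len(answers) != 14:
        raise ValueError("expected one answer per checklist item")
    return sum(answers)
```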
The ratings of abstract clarity were made independently by 46 authors of articles in the JEP from the year 2000 (and by two more authors of articles in other educational journals). Each author was asked (by letter or e-mail) to rate one traditional and one structured abstract for clarity (on a scale of 0-10, where 10 was the highest score possible). To avoid bias, none of these authors was personally known to the investigator, and none had written any of the abstracts used in this enquiry.
Forty-eight separate pairs of abstracts were created, each with a traditional version of one abstract and a structured version of a different one. Twenty-four of these pairs had the traditional abstract first, and twenty-four the structured one. The fact that the abstracts in each pair were on different topics was deliberate: it ensured that no order effects could arise from reading different versions of the same abstract (as has been reported in previous studies, e.g., Hartley and Ganier, 2000). The 48 pairs were created by pairing each abstract in turn with the next one in the list, with the exception of the abstracts for the two research reviews, which were paired together.
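One way to express this pairing and counterbalancing procedure is the following sketch; the dict representation and function name are illustrative assumptions, and the special case of the two research reviews is ignored:

```python
from typing import Dict, List, Tuple

# Each abstract is assumed to be available in both formats.
Abstract = Dict[str, str]  # {"traditional": "...", "structured": "..."}

def make_pairs(abstracts: List[Abstract]) -> List[Tuple[str, str]]:
    """Pair each abstract with the next one in the list (wrapping at
    the end), so the two members of a pair are always on different
    topics, and alternate which format comes first so that half the
    pairs show the traditional version first."""
    pairs = []
    for i, current in enumerate(abstracts):
        following = abstracts[(i + 1) % len(abstracts)]
        if i % 2 == 0:
            pairs.append((current["traditional"], following["structured"]))
        else:
            pairs.append((current["structured"], following["traditional"]))
    return pairs
```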
Table 1 shows the main results of this enquiry. It can be seen that, except for the percentage of passives used, the structured abstracts differed significantly from the traditional ones on all of the measures reported here.
Table 1. Mean scores (with standard deviations) for the traditional and structured abstracts

| Measure | Traditional format (N = 24) M (SD) | Structured format (N = 24) M (SD) | Paired t | p value (two-tailed) |
|---|---|---|---|---|
| *Data from Microsoft's Office 97* | | | | |
| Abstract length (in words) | 133 (22) | 186 (15) | 17.10 | <.001 |
| Average sentence length | 24.6 (8.3) | 20.8 (3.0) | 2.48 | <.02 |
| Percentage of passives | 32.7 (22.8) | 23.7 (17.3) | 1.58 | n.s.d. |
| Flesch Reading Ease score | 21.1 (13.7) | 31.1 (12.1) | 5.23 | <.001 |
| *Data from Pennebaker's Linguistic Inquiry and Word Count (LIWC)* | | | | |
| Use of longer words (%) | 40.0 (5.3) | 35.8 (4.6) | 4.69 | <.001 |
| Use of common words (%) | 57.7 (8.6) | 61.1 (6.3) | 3.43 | <.01 |
| Use of present tense (%) | 2.7 (2.8) | 4.1 (1.9) | 2.90 | <.01 |
| *Reader-based measures* | | | | |
| Information checklist score | 5.5 (1.0) | 9.7 (1.4) | 13.72 | <.001 |
| Clarity ratings | 6.2 (2.0) | 7.4 (2.0) | 3.22 | <.01 |
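The comparisons in Table 1 are paired (related-samples) t-tests. As an illustration of the kind of test involved, such a comparison can be run with SciPy's ttest_rel; the numbers below are invented placeholders, not the study's data:

```python
from scipy import stats

# Hypothetical clarity ratings for eight abstract pairs (placeholders only).
traditional = [6, 5, 7, 6, 8, 5, 7, 6]
structured = [7, 8, 7, 9, 8, 6, 8, 7]

# Related-samples t-test: each traditional rating is paired with the
# structured rating from the same rater/pair.
t, p = stats.ttest_rel(structured, traditional)
print(f"paired t = {t:.2f}, two-tailed p = {p:.3f}")
```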
To some extent these results speak for themselves and, in terms of this paper, provide strong support for structured abstracts. But there are some qualifications to consider.
The structured abstracts were, as expected, longer than the traditional ones. Indeed, they were approximately 30% longer: 10 percentage points more than the average 20% increase in length reported by Hartley (2002) across nine studies. It is interesting to note, however, that the average length of the traditional abstracts already exceeded the 120 words specified by the APA: eighteen (i.e., 75%) of the 24 authors of the traditional abstracts exceeded the stipulated length.
Hartley (2002) argued that the extra space required by introducing structured abstracts was a trivial amount for most journals, amounting at the most to three or four lines of text. In many journals new articles begin on right-hand pages, and few articles finish exactly at the bottom of the previous left-hand one. In other journals, such as Science Communication, new articles begin on the first left- or right-hand page available, but even here articles rarely finish at the bottom of the previous page. (Indeed, inspecting the pages in this issue of this journal will probably show that the few extra lines required by structured abstracts can be easily accommodated). Such concerns, of course, do not arise for electronic journals and databases.
More importantly, in this section, we need to consider cost-effectiveness rather than just cost. With the extra lines comes extra information, and more informative abstracts may encourage wider readership, greater citation rates and higher journal impact factors - all of which authors and editors might think desirable. Interestingly enough, McIntosh et al. (1999) suggest that both the information content and the clarity of structured abstracts can still be higher than those of traditional abstracts even if the structured abstracts are restricted to the length of traditional ones.
Table 1 shows the Flesch Reading Ease scores for the traditional and the structured abstracts obtained in this enquiry. Readers unfamiliar with Flesch scores might like to note that they range from 0-100 and are sub-divided as follows: 0-29, college graduate level; 30-49, 13th-16th grade (i.e., 18+ years); 50-59, 10th-12th grade (i.e., 15-17 years); and so on. The scores are based on a formula that combines, with a constant, measures of sentence length and of the number of syllables per word (Flesch, 1948; Klare, 1963). Of course it is possible that the significant difference in favour of the Flesch scores for the structured abstracts in this study reflects the fact that the present author wrote all of the structured abstracts. However, since this finding has also occurred in other studies where the abstracts were written by different authors (e.g., see Hartley and Sydes, 1997; Hartley and Benjamin, 1998), it appears to be a relatively stable one.
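For reference, the standard formula (Flesch, 1948) combines the average sentence length in words (ASL) and the average number of syllables per word (ASW) as follows, with higher scores indicating easier text:

$$\text{Reading Ease} = 206.835 - (1.015 \times \text{ASL}) - (84.6 \times \text{ASW})$$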
The Flesch Reading Ease score is of course a crude - as well as dated - measure, and it ignores factors affecting readability such as type-size, type-face, line-length, and the effects of sub-headings and paragraphs, as well as readers' prior knowledge. Nonetheless, it is a useful measure for comparing different versions of the same texts, and Flesch scores have been quite widely used - along with other measures - for assessing the readability of journal abstracts (e.g., see Dronberger and Kowitz, 1975; Hartley, 1994; Hartley and Benjamin, 1998; Roberts, Fletcher and Fletcher, 1994; Tenopir and Jacso, 1993).
The gain in readability scores found for the structured abstracts in this study came, no doubt, from the fact that the abstracts had significantly shorter sentences and, as the LIWC data showed, made a greater use of shorter words. The LIWC data also showed that the structured abstracts contained significantly more common words and made a significantly greater use of the present tense. These findings seem to suggest that it is easier to provide information when writing under sub-headings than it is when writing in a continuous paragraph. Such gains in readability should not be dismissed lightly, for a number of studies have shown that traditional abstracts are difficult to read. Tenopir and Jacso (1993) for instance reported a mean Flesch score of 19 for over 300 abstracts published in APA journals. (The abstract to this article has a Flesch score of 26 when the sub-headings are excluded.)
Interestingly enough, there were no significant differences in the percentage of passives used in the two forms of abstracts studied in this paper. This finding is similar to one that we found when looking at the readability of well-known and less well-known articles in psychology (Hartley, Sotto and Pennebaker, 2002). The view that scientific writing involves a greater use of passives, the third person and the past tense is perhaps more of a myth than many people suspect (see, e.g., Kirkman, 2001; Riggle, 1998; Swales and Feak, 1994). Indeed the APA Publication Manual (2001) states, "Verbs are vigorous, direct communicators. Use the active rather than the passive voice, and select tense or mood carefully" (5th edition, p. 41).
The scores on the information checklist showed that the structured abstracts contained significantly more information than did the traditional ones. This is hardly surprising, given the nature of structured abstracts, but it is important. Analyses of the information gains showed that most of the increases occurred on questions 1 (50%), 3 (83%), 5 (63%) and 12 (63%). Thus it appears that in these abstracts more information was given on the reasons for making the study, where the participants came from, the sex distributions of these participants, and on the final conclusions drawn.
These findings reflect the fact that few authors in American journals seem to realise that not all of their readers will be American, and that all readers need to know the general context in which a study takes place in order to assess its relevance for their needs. Stating the actual age group of participants is also helpful because different countries use different conventions for describing people of different ages. The word 'student', for instance, usually refers to someone studying in tertiary education in the UK, whereas the same word is used for very young children in the USA. Although the checklist is a simple measure (it gives equal weight to each item and is inappropriate for review papers), it is nonetheless clear from the results that the structured abstracts contained significantly more information than the original ones, and this can be regarded as an advantage for such abstracts. Advances in 'text mining', 'research profiling' and computer-based document retrieval will be assisted by the use of such more informative abstracts (Blair and Kimbrough, 2002; Pinto and Lancaster, 1999; Porter, Kongthon and Lu, 2002; Wilczynski, Walker, McKibbon and Haynes, 1995).
In previous studies of the clarity of abstracts (e.g., Hartley, 1999a; Hartley and Ganier, 2000) the word 'clarity' was not defined, and respondents were allowed to respond as they thought fit. In the present study the participants were asked to 'rate each of these abstracts out of 10 for clarity (with a higher score meaning greater clarity)'. This was followed by the explanation: 'If you have difficulty with what I mean by "clarity", the kinds of words I have in mind are: "readable", "well-organized", "clear", and "informative".' (This phraseology was based on wording used by a respondent in a previous study who had explained what she meant by 'clarity' in her ratings.) Also in the present study - as noted above - the participants were asked to rate different abstracts rather than the same abstract in the different formats. Nevertheless, the mean ratings obtained here of 6.2 and 7.4 for the traditional and structured abstracts respectively closely match the results of 6.0 and 8.0 obtained in the previous studies. Indeed, because the current results are based on abstracts in general rather than on different versions of the same abstract, these findings offer more convincing evidence for the superiority of structured abstracts in this respect.
Finally, in this section, we should note that several of the respondents took the opportunity to comment on the abstracts that they were asked to judge. Table 2 contains a selection from these remarks.
Table 2. A selection of respondents' comments on the abstracts

**Preferences for the traditional abstracts**

- My ratings are 2 for the structured abstract and 1 for the traditional one. Very poor abstracts.
- I have read the two abstracts that you sent for my judgement. I found the first one (traditional) clearer than the second (structured) one. I would give the first about 9 and the second about 8. Please note, however, that I believe that my response is affected more by the writing style and content of the abstracts than by their organization. I would have felt more comfortable comparing the two abstracts if they were on the same topic.
- The first (structured) one was well organized, and the reader can go to the section of interest, but the meaning of the abstract is broken up (I give it 8). The second (traditional) abstract flowed more clearly and was more conceptual (I give it 10).
- I rate the first (structured) abstract as a 7 and the second (traditional) one as an 8. I prefer the second as it flows better and entices the reader to read the article more than the first, although I understand the purpose of the first to 'mimic' the structure of an article, and hence this should add to clarity.

**No clear preference for either format**

- Both abstracts were clear and well organized. The format was different but both told me the information I wanted to know. I gave them both 8.
- I found each of the abstracts in this pair to be very clear and without ambiguity. The structured abstract gives the explicit purposes and conclusions, whereas the traditional one does not, but I believe that those are unrelated to 'clarity' as you are defining and intending it - for me they represent a different dimension. I would give both abstracts a rating of 9.
- I did what you wanted me to do, and I did not come up with a clear preference. My rating for the structured abstract was 9 compared to a rating of 8 for the traditional one.

**Preferences for the structured abstracts**

- Overall I thought that the structured abstract was more explicit and clearer than the traditional one. I would give 7 to the structured one and 5 to the traditional one.
- I would rate the second (structured) abstract with a higher clarity (perhaps 9) and the first (traditional) one with a lower score (perhaps 4), but not necessarily due to the structured/unstructured nature of the two paragraphs. The structured abstract was longer, and more detailed (with information on sample size, etc.). If the unstructured abstract were of equal length and had sample information to the same degree as the structured abstract, they may have been equally clear.
- My preference for the structured abstract (10) is strongly influenced by the fact that I could easily reproduce the content of the abstract with a high degree of accuracy, compared to the traditional abstract (which I give 6).
- I was actually quite impressed by the different 'feel' of the two formats. I would give the traditional one 4 and the structured one 8. You inspired me to look up my own recent JEP article's abstract. I would give it 5 - of course an unbiased opinion!
- I rated the traditional abstract 3 for clarity, and the structured abstract 7. In general the traditional abstract sacrificed clarity for brevity and the structured one was a touch verbose. Both abstracts were too general.
- In general I prefer the structured layout. I have read many articles in health journals that use this type of format and I find the insertion of the organizer words a very simple, yet powerful way to organize the information. The bold-faced headings for the structured abstract do serve an organizational function, and would probably be appreciated by students. Overall I think that the structured format is good and I hope that the JEP will seriously consider adopting it.
Abstracts in journal articles are an intriguing genre. They encapsulate, in a brief text, the essence of the article that follows. And, according to the APA Publication Manual (2001), "A well-prepared abstract can be the most important paragraph in your article… The abstract needs to be dense with information but also readable, well organized, brief and self-contained" (p. 12).
In point of fact the nature of abstracts in scientific journals has been changing over the years as more and more research articles compete for their readers' attention. Berkenkotter and Huckin (1995) have described how the physical format of journal papers has altered in order to facilitate searching and reading, and how abstracts in scientific journal articles have been getting both longer and more informative (pp. 34-35).
The current move towards adopting structured abstracts might thus be seen as part of a more general move towards the use of more clearly defined structures in academic writing. Indeed, whilst preparing this paper, I have come across references to structured content pages (as in Contemporary Psychology and the Journal of Social Psychology and Personality), structured literature reviews (Ottenbacher, 1983; Sugarman, McCrory, and Hubal, 1998), structured articles (Goldmann, 1997; Hartley, 1999b; Kircz, 1998) and even structured book reviews (in the Medical Education Review).
These wider issues, however, are beyond the scope of this particular paper. Here I have merely reported the findings from comparing traditional abstracts with their equivalent structured versions in one particular context. My aim, however, has been to illustrate in general how structured abstracts might make a positive contribution to scientific communication.
James Hartley is Research Professor in the Department of Psychology at the University of Keele in Staffordshire, England. His main interests lie in written communication and in teaching and learning in higher education. He is the author of Designing Instructional Text (3rd ed., 1994) and Learning and Studying: A Research Perspective (1998).
Originally published in Science Communication, 2003, Vol 24, 3, 366-379, copyright: Sage Publications.
I am grateful to Geoff Luck for scoring the abstract checklist, James Pennebaker for the LIWC data, and colleagues from the Journal of Educational Psychology who either gave permission for me to use their abstracts, or took part in this enquiry.
Professor James Hartley. Department of Psychology, Keele University, Staffordshire, ST5 5BG, UK; phone: 011 44 1782 583383; fax: 011 44 1782 583387; e-mail: [email protected]; Web site: http://www.keele.ac.uk/depts/ps/jhabiog.htm
Appendix 1: The abstract evaluation checklist used in the present study
Abstract No. ________
1. _____ Is anything said about previous research or research findings on the topic?
2. _____ Is there an indication of what the aims/purposes of this study were?
3. _____ Is there information on where the participants came from?
4. _____ Is there information on the numbers of participants?
5. _____ Is there information on the sex distribution of the participants?
6. _____ Is there information on the ages of the participants?
7. _____ Is there information on how the participants were placed in different groups (if appropriate)?
8. _____ Is there information on the measures used in the study?
9. _____ Are the main results presented in prose in the abstract?
10. _____ Are the results said to be (or not to be) statistically significant, or is a p value given?
11. _____ Are actual numbers (e.g., means/correlation coefficients/t values) given in the abstract?
12. _____ Are any conclusions/implications drawn?
13. _____ Are any limitations of the study mentioned?
14. _____ Are suggestions for further research mentioned?
Note: this checklist is not suitable for theoretical or review papers but can be adapted to make it so. It would also be interesting to ask for an overall evaluation score (say out of 10) which could be related to the individual items.