LANNA tests and the prediction of Year 10 English and Mathematics results
Brian Hemmings and Russell Kay
Charles Sturt University
There is a dearth of published information about the Literacy and Numeracy National Assessment (LANNA). This study was designed to explore the relationships among LANNA test scores at Year 7 and, in addition, to examine the predictive capacity of these scores in relation to Year 10 School Certificate English and Mathematics results. As a way of meeting these aims, data were used from one New South Wales secondary school across the period 2001-2007. An examination of these data, drawing on correlation and multiple regression analyses, showed that Reading results are closely associated with Spelling and Writing scores and that there is a strong relationship between Reading and Numeracy scores, as evidenced by the fact that the correlations across the various cohorts between Reading and Numeracy averaged .62. Additionally, very strong relationships were found between specific LANNA tests and Year 10 results. In general, Year 7 Reading scores were the best predictor of Year 10 English and Year 7 Numeracy scores were the best predictor of Year 10 Mathematics. Across the different student cohorts, the average amount of variance in Year 10 English explained by the LANNA tests was 59 percent, whereas the average amount of variance for Year 10 Mathematics was 72 percent when using the same set of predictors. The authors of the study conclude their paper by discussing the implications of the findings for school administrators who have access to LANNA scores or who use other nationally-benchmarked Literacy and Numeracy assessments.
LANNA was designed by the Australian Council for Educational Research (ACER) to meet national reporting guidelines through a benchmarking approach. To illustrate, in 2004 the maximum achievable Year 7 LANNA score for Numeracy was 175, and scores of 110 and less fell below the Numeracy benchmark set by the Benchmark Equating Steering Committee. From the first use of LANNA for students enrolled in Years 3, 5, and 7 until its final administration in 2007, participating schools in all Australian states and territories received, through their administrators, feedback on their school's performance as well as the results for each student sitting the various tests. For the greater part of this period, students in non-government Australian schools could sit four specific tests, namely, Reading, Spelling, Writing, and Numeracy, with approximately 15 000 primary and secondary school students involved in 2005. Although specific population figures for 2007 are not known, over 280 primary and secondary schools were expected to participate in the testing during its final year.
As a way of offering feedback about LANNA to a range of stakeholders, the ACER furnishes four different reports for the administrators of participating schools: a school report, a year level report, a student report, and a diagnostic report for Reading and Numeracy. What is surprising, however, given this degree of reporting and the extensive use of LANNA tests in the Australian non-government school sector, is that not one scholarly publication discussing the impact and relevance of the test results, including their explanatory and predictive power for later school performance, appears in the research literature. Interestingly, the only material freely available for consideration was information of a more general nature supplied by the ACER at one of its websites; this website has since been removed.
There is a very large corpus of literature pertaining to the factors that influence school achievement (see, for example, Ali & McInerney, 2005; Hemmings, 1996). Despite some evidence that gender (Peng, Lee, & Ingersoll, 2002; Rothman & McMillan, 2004), ethnicity (Dickie, 2000; Walker & Chamberlain, 1999), socio-economic status of students (Wilkins & Ma, 2003), motivation (Reynolds & Walberg, 1992; Spinath, Spinath, Harlaar, & Plomin, 2006), attitude towards specific school subjects and school more generally (Ai, 2002; Grootenboer & Hemmings, 2007; Singh, Granville, & Dika, 2002), and academic engagement (Akey, 2006) are linked to achievement, often the chief determinant of current school achievement is previous school achievement (see, for example, Reynolds, 1991; Yates, 2000). This finding has been reported in studies of both primary and secondary school students in a range of contexts. To exemplify, Hemmings (1996) showed that the performance of a sample of Year 12 students in New South Wales (NSW), Australia was significantly affected by their results in English, Mathematics, and Science at Year 10. In fact, over 62 percent of the variance in the Year 12 Tertiary Education Rank was explained by a composite measure of the three Year 10 subjects. Yates (2000), drawing on a younger school-age sample in another state in Australia, found that achievement in Mathematics in Years 3 to 7 was highly predictive of achievement in Mathematics for the same cohorts two years later. Reynolds (1991), researching in a North American context, reported similar relationships for Year 8 Mathematics and Science. Once again, the largest predictor of achievement was prior achievement.
Despite the extensive use of literacy (e.g., reading and spelling) and numeracy (e.g., number and measurement) tests, little is reported in the literature about the relationship between these tests and their respective components. What is reported tends to be either North American studies or work conducted many decades earlier. For example, Mehta, Foorman, Branum-Martin, and Taylor (2005), using a predominantly African American and high poverty sample, showed that the Grade 3 correlations were .52 for word reading and writing, .68 for word reading and spelling, and .52 for spelling and writing. Similar findings were also reported for the same correlation measures taken at Grade 4. Aiken (1972), in his review of language factors involved in learning mathematics at the intermediate grades in the United States of America (USA), noted that correlations between reading ability and mathematics achievement ranged between .40 and .86. One of the studies comprising his review was based on results obtained in 1963 from California State Achievement Tests at Grade 5. This study reported that the correlations between spelling and arithmetic reasoning and between spelling and arithmetic fundamentals were .61 and .59 respectively. More recently, Thomas (2002), who drew on Grade 8 data collected for the 1988 USA National Education Longitudinal Study, found that the association between mathematics and reading proficiency was .46. An extensive analysis of Australian literacy and numeracy data that spanned a 20-year period was carried out by Marks and Ainley (1997). They revealed that the correlation coefficient between reading comprehension and numeracy was .60 for a sample of 13 to 15 year olds. This was the only correlation reported.
Given both the paucity of information published about LANNA and the small number of Australian studies that have reported correlations between literacy and numeracy measures at the secondary school level, it is timely that LANNA is interrogated on several levels. As a consequence, this study sought to achieve two aims: first, to explore the relationships among LANNA test scores at Year 7; and second, to examine the predictive capacity of these scores in relation to Year 10 School Certificate (SC) English and Mathematics results. This form of interrogation will help to supply valuable information that can further support developments around the National Assessment Program - Literacy and Numeracy (NAPLAN) and other secondary school assessment matters.
Even though all the measures needed for the study were supplied by the participating school, they were based on data initially computed by either the ACER or the NSW Board of Studies. Both the LANNA test scores and the SC English and Mathematics examination results are scaled scores derived from raw scores. These scaled scores were accessed through the school's student records, entered into an Excel spreadsheet for transfer to SPSS (Version 16.0), and then checked for accuracy.
[Table 1 not reproduced here. Note: * p < .01 (2-tailed)]
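The bivariate relationships reported above are ordinary Pearson product-moment correlations. As a minimal illustration of the statistic itself (the scores below are made up for demonstration and are not the study's data), the coefficient can be computed directly from its definition:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length score lists."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

# Illustrative (hypothetical) Year 7 scores for six students
reading = [92, 71, 84, 60, 77, 88]
numeracy = [90, 65, 80, 58, 70, 85]
print(round(pearson_r(reading, numeracy), 2))
```

In practice a statistical package such as SPSS (as used in the study) reports the same coefficient together with its two-tailed significance.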
In order to determine the predictive capacity of the LANNA tests and the results in English and Mathematics at the SC level, a multiple regression analysis was performed using the Regression program in SPSS. Separate regression analyses, using the hierarchical entry method, were run for each cohort under investigation with either SC English or Mathematics as the dependent measure. Generally, variables were entered on the basis of their anticipated influence, for example, Numeracy was entered first for models predicting SC Mathematics.
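The hierarchical entry method described above amounts to fitting a sequence of ordinary least squares models and noting the increment in explained variance (R²) as each predictor is added. The following is a sketch of that logic using synthetic data, not the study's records; variable names and the data-generating parameters are illustrative assumptions only:

```python
import numpy as np

def r_squared(X, y):
    """R-squared of an OLS fit of y on the columns of X (intercept added)."""
    A = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

# Synthetic cohort: Mathematics driven mainly by Numeracy, partly by Reading
rng = np.random.default_rng(0)
numeracy = rng.normal(100, 15, 60)
reading = 0.5 * numeracy + rng.normal(0, 10, 60)
mathematics = 0.8 * numeracy + 0.3 * reading + rng.normal(0, 8, 60)

# Step 1: enter the anticipated strongest predictor first
r2_step1 = r_squared(numeracy[:, None], mathematics)
# Step 2: add the next predictor and note the increment in R-squared
r2_step2 = r_squared(np.column_stack([numeracy, reading]), mathematics)
print(f"Numeracy alone: R2 = {r2_step1:.3f}")
print(f"Increment from adding Reading: {r2_step2 - r2_step1:.3f}")
```

The increment at each step corresponds to the "added significantly to the total explained variance" judgements reported for the SC English and Mathematics models (SPSS additionally supplies an F-test for each increment).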
In relation to SC English, Reading was generally the best predictor, although the overall result varied across the four student cohorts (refer to Table 2). This variation is probably linked to the small cohort numbers, particularly in 2004 and 2005. For some cohorts, the Spelling and Writing test results added significantly to the total explained variance in SC English; however, Numeracy made no significant contribution.
As a way of making the findings with regard to Reading more understandable, it is useful to consider the performance of individual students in Year 7 and compare their performance at Year 10. To illustrate, of the 15 students in the bottom quartile of Reading results in 2004, nine (i.e., 60 percent) remained in the bottom quartile in SC English four years later. Of the others in this cohort, five (i.e., 33.3 percent) moved to the second bottom quartile and one jumped to the next quartile (or one below the top quartile). Similarly, of the top eight students in the Reading test in Year 7, five (i.e., 62.5 percent) remained in the top quartile, whereas the others slipped one quartile.
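The quartile tracking used above can be expressed as a simple rank-based assignment followed by a count of Year 7 to Year 10 transitions. The sketch below uses made-up scores, not the school's records, and the two helper functions are illustrative assumptions rather than the study's actual procedure:

```python
from collections import Counter

def quartiles(scores):
    """Assign each student (by list index) to a quartile: 1 = bottom, 4 = top."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    return {i: 4 * rank // len(scores) + 1 for rank, i in enumerate(order)}

def transitions(year7_scores, year10_scores):
    """Count students by (Year 7 quartile, Year 10 quartile) pair."""
    q7, q10 = quartiles(year7_scores), quartiles(year10_scores)
    return Counter((q7[i], q10[i]) for i in range(len(year7_scores)))

# Illustrative (hypothetical) scores for eight students in both years
year7 = [55, 82, 61, 90, 47, 73, 68, 77]
year10 = [50, 85, 58, 88, 52, 70, 75, 66]
print(transitions(year7, year10))
```

Each (row, column) count in the resulting tally corresponds to a positional gain, loss, or hold of the kind reported for the 2004 Reading cohort.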
[Table 2 (body not reproduced here). Columns: SC cohort | N | Reading | Numeracy | Spelling | Writing | Total R². Notes: - Tests not administered; ns Not significant; * If entered first]
The Year 7 Numeracy results were by far the strongest predictor of SC Mathematics (refer to Table 3). Across the three years (2004-2006), about 70 percent of the explained variance in SC Mathematics was predicted by the Numeracy result. Using the 2007 cohort and their Numeracy results as an example, 66.7 percent of the students in the bottom quartile in Year 7 were still in the bottom quartile for SC Mathematics. The remainder were distributed across the second and third quartiles. For the top students, 58.3 percent of those performing highly in Year 7 Numeracy were in the top quartile for SC Mathematics. Most of the others were in the second top quartile.
One other finding worth noting is that, for most of the cohorts, Reading was a significant predictor, behind Numeracy, of the variance in SC Mathematics. Except for the 2002-2005 cohort, the other two predictors, namely Spelling and Writing, did not contribute to the total explained variance in Mathematics.
[Table 3 (body not reproduced here). Columns: SC cohort | N | Numeracy | Reading | Spelling | Writing | Total R². Notes: - Tests not administered; ns Not significant; * If entered first]
The relationships found among the LANNA Reading, Spelling, Writing, and Numeracy scores are similar to those cited by Mehta et al. (2005) in their North American study. However, making comparisons across different tests and educational jurisdictions is fraught with difficulty. One way of further validating the findings of the present study is for other Australian schools and, perhaps, clusters of schools to make available their LANNA results for comparison purposes.
The other aim of the study was to examine the predictive capacity of the LANNA scores in relation to SC English and Mathematics results at Year 10. Two prominent patterns emerged from the data analysis. Firstly, Reading was found to be the best predictor of SC English and for some cohorts both Spelling and Writing results added significantly to the explained variance in English. Secondly, Numeracy was clearly the main predictor of SC Mathematics. In fact, the Numeracy measure in 2004-2006 explained more than two-thirds of the variance in Mathematics. A subsidiary finding, in relation to the explained variance in Mathematics, was that Reading was generally a significant predictor behind Numeracy. This result supports a view that Mathematics has a strong language component and, in particular, a key reading component. Aiken (1972), for example, has argued that "linguistic abilities affect performance in mathematics ... [and that] mathematics itself is a specialised language" (p. 359).
The findings of the current study imply a need to explore further the relationships between and among the four Year 7 standardised tests. It could be argued, for example, that a common factor underpins these test measures, and that traditional factor analysis or structural equation modelling would be ideally suited to test such an argument. Carroll's (1993) study, including the work he reviewed, would suggest that a factor termed crystallised intelligence (Gc) might be involved here. This speculative view is based on the fact that Carroll (1993) showed how verbal ability, reading comprehension, spelling, language development, and numerical facility frequently loaded on Gc.
Taken together, the findings pertaining to Year 10 English and Mathematics demonstrate that previous achievement in related areas, namely, reading, spelling, writing, and numeracy, has high predictive capacity. Although some other Australian studies (see, for example, Hemmings, 1996; Yates, 2000) point out that prior achievement is an excellent predictor of current achievement, these studies focus on a two-year time span. The study reported here is based on a four-year span, and the average explained variance in English and Mathematics is 65 percent. An obvious question that arises from this finding is 'What can explain the unaccounted variance?' Known at this juncture is that motivation, academic engagement, and attitudes towards English, Mathematics, and school in general are potential contributors to the unexplained variance. Unquestionably, further study is required to fill this gap. Such future research could start by examining the previously-mentioned variables through a survey of students who sat LANNA tests in 2006 and 2007, as these students would be sitting SC examinations in 2009 and 2010. Another worthwhile study could involve the use of Year 7 test results from the NAPLAN to ascertain how well these tests might predict SC English and Mathematics. However, the earliest these calculations could occur would be 2011, as the first administration took place in 2008.
In the meantime, other worthy research could take advantage of the stores of data currently held by school administrators. These school administrators, with the help of researchers, could draw on their existing school records, including their past LANNA scores, and explore the relationships described earlier. Given that many non-government school administrators, across the nation, have access to such results from 1999, and even more complete data sets from 2000, there is a rich source to be mined. No doubt, individual school administrators have used the LANNA test scores in a range of ways. For example, at the student level, individuals might have been monitored from Year 7 to the SC at Year 10 or even earlier through their various achievement tests in Years 8 and 9. An illustration of this tracking of student progress was presented above in the results section where students were positioned in quartiles and matched against their peers from Years 7 and 10 as a way of measuring positional gain or loss. This is just one example of what is possible with the LANNA data. However, what is very apparent is that the literature is silent on the topic and, as a consequence, little or no information has been exchanged on a broader scale between school administrators, teachers, parents, and other stakeholders on this topic. In other words, a potentially important educational topic has been virtually left untreated; a topic which would greatly support any new developments in secondary school assessment and learning such as the NAPLAN or the Basic Skills Testing instigated about a decade earlier. Clearly, one way of validating a new testing regime is to examine its predecessor schemes and to find out if the new regime is standing on a solid foundation.
Despite the present study contributing worthwhile information and adding to the existing literature relating to school achievement, there are nevertheless limitations inherent in the study. Firstly, only one school was involved and the numbers of students, especially in the early cohorts, were small. This fact would bring into question the legitimacy of generalising the findings to other settings and educational jurisdictions. And secondly, no qualitative data were collected to supplement the quantitative data. In hindsight, interviews with a group of Year 10 students, before and after the time of sitting the SC examinations, would have potentially revealed some insights about their views towards English and Mathematics and what, in their view, contributed to their achievements in these two subjects. Although they might not have focused their thinking on the notion of prior achievement, they would have offered some ideas as to what other factors contribute to their successes and failures at the subject level. This is a critical point since 35 percent of the variance, on average, in both English and Mathematics at Year 10 was left unexplained.
This proposed study and the others identified earlier need to be given serious consideration if inroads are to be made regarding the study of literacy and numeracy at the secondary school level. This call is supported by Rothman and McMillan (2003), who argued that, if literacy and numeracy are to stay at the top of the agenda for Australian education, research concentrating on the literacy and numeracy skills of middle school students is essential. With new developments in literacy and numeracy testing and benchmarking emerging, the time is now opportune to investigate the impact of these tests and their forerunner tests, including LANNA.
Aiken, L.R. (1972). Language factors in learning mathematics. Review of Educational Research, 42(3), 359-385.
Akey, T.M. (2006). School context, student attitudes and behavior, and academic achievement: An exploratory analysis (Research Monograph). New York: MDRC. http://www.eric.ed.gov:80/ERICWebPortal/contentdelivery/servlet/ERICServlet?accno=ED489760
Ali, J. & McInerney, D.M. (2005, November). An analysis of the predictive validity of the Inventory of School Motivation (ISM). Paper presented at the Australian Association for Educational Research (AARE) Conference, Sydney, Australia. http://www.aare.edu.au/05pap/ali05403.pdf
Carroll, J.B. (1993). Human cognitive abilities: A survey of factor-analytic studies. New York: Cambridge University Press.
Dickie, J. (2000). Pacific nations students in primary teacher training: Investigating their learning needs. SET, 2, 11.
Grootenboer, P., & Hemmings, B. (2007). Mathematics performance and the role played by affective and background factors. Mathematics Education Research Journal, 19(3), 3-20.
Hemmings, B.C. (1996). A longitudinal study of Australian secondary school achievement. Issues in Educational Research, 6(1), 13-37. http://www.iier.org.au/iier6/hemmings.html
Marks, G., & Ainley, J. (1997). Reading comprehension and numeracy among junior secondary school students in Australia. LSAY Research Report 3. Melbourne, Vic: Australian Council for Educational Research.
Mehta, P.D., Foorman, B.R., Branum-Martin, L., & Taylor, W.P. (2005). Literacy as a unidimensional multilevel construct: Validation, sources of influence, and implications in a longitudinal study in Grades 1 to 4. Scientific Studies of Reading, 9(2), 85-116.
Ministerial Council on Education, Employment, Training and Youth Affairs (2008). Assessing Student Achievement in Australia 2008. Melbourne, Vic: MCEETYA.
Peng, C.J., Lee, K.L., & Ingersoll, G.M. (2002). An introduction to logistic regression analysis and reporting. The Journal of Educational Research, 96(1), 3-14.
Reynolds, A.J. (1991). The middle schooling process: Influences on science and mathematics achievement from the longitudinal study of American youth. Adolescence, 16(101), 132-157.
Reynolds, A.J., & Walberg, H.J. (1992). A structural model of science achievement and attitude: An extension to high school. Journal of Educational Psychology, 84(3), 371-382.
Rothman, S., & McMillan, J. (2003). Influences on achievement in literacy and numeracy. ACER Research Report 36. Camberwell, Vic: Australian Council for Educational Research.
Rothman, S., & McMillan, J. (2004). Signposts to improved test scores in literacy and numeracy. EQ Australia, 1, 24-26. http://www.eqa.edu.au/site/signpoststoimproved.html
Singh, K., Granville, M., & Dika, S. (2002). Mathematics and science achievement: Effects of motivation, interest, and academic engagement. The Journal of Educational Research, 95(6), 323-332.
Spinath, B., Spinath, F.M., Harlaar, N., & Plomin, R. (2006). Predicting school achievement from general cognitive ability, self-perceived ability, and intrinsic value. Intelligence, 34(4), 363-374.
Thomas, J.P. (2002). An examination of the relationship between learning variables and academic achievement outcomes (Document No. TM 034 700). Grayslake, Illinois: College of Lake County (ERIC Document Reproduction Service No. ED 471 988). http://www.eric.ed.gov:80/ERICWebPortal/contentdelivery/servlet/ERICServlet?accno=ED471988
Walker, M., & Chamberlain, M. (1999). A brief overview of the Third International Mathematics and Science Study (TIMSS) including the conceptual framework for the study, sampling procedures, and a summary of key results for New Zealand. The Research Bulletin, 10(October), 41-55.
Wilkins, J.L.M., & Ma, X. (2003). Modeling change in student attitude toward and beliefs about mathematics. The Journal of Educational Research, 97(1), 52-63.
Yates, S. (2000). Task involvement and ego orientation in mathematics achievement: A three-year follow-up. Issues in Educational Research, 10(1), 77-91. http://www.iier.org.au/iier10/yates.html
Authors: Dr Brian Hemmings is a Senior Lecturer in the School of Education, Charles Sturt University, Wagga Wagga. His research focus is on academic achievement at all education levels. Email: firstname.lastname@example.org
Russell Kay is an Adjunct Senior Lecturer in Education at Charles Sturt University. His research interests concentrate on schooling performance and he draws on the use of multivariate statistics.
Please cite as: Hemmings, B. & Kay, R. (2009). LANNA tests and the prediction of Year 10 English and Mathematics results. Issues In Educational Research, 19(1), 25-33. http://www.iier.org.au/iier19/hemmings2.html