The impact of mobile learning on student performance as gauged by standardised test (NAPLAN) scores
Steven Males, Frank Bate and Jean Macnish
University of Notre Dame Australia
This paper discusses the National Assessment Program for Literacy and Numeracy (NAPLAN) performance of Years Five, Seven and Nine students in standardised tests before and after the implementation of a mobile learning initiative in a Western Australian school for boys. The School sees the use of ICT as important in enhancing its potential to deliver optimal educational outcomes. However, the School is also cognisant of a concern shared by teachers and parents in the school community about an over-reliance on mobile devices for learning, to the detriment of students' accomplishments in literacy and numeracy. The paper examines NAPLAN results from standardised test scores before and after the mobile learning initiative at the School, and in comparison to national data. Literacy and numeracy results were analysed across two periods: prior to 2010, when the mobile learning initiative had not been implemented at the School, and 2010 to 2012, when the mobile learning initiative was implemented in Years Five, Seven and Nine. It is argued that the implementation of mobile learning has had a minimal effect on student performance as gauged by standardised testing.
Discussion in Australia about the use of mobile devices in class and their impact on standardised high-stakes testing continues, particularly in light of concerns shared by parents and teachers (Bate, Macnish & Males, 2012). Despite decades of technology penetration in U.S. schools, reading and mathematics test scores at the high school level are no higher than they were 30 years ago (Grimes & Warschauer, 2008). According to Cuban (2006), for example, there is no link between students having access to a 1:1 laptop program and improved test scores. However, in studies with similar goals but different participants, such as that by Lei and Zhao (2007), links were found between access to a 1:1 laptop program and improved test scores. Clearly, more research is required.
In Australia, national standardised testing was introduced in 2008 with the National Assessment Program for Literacy and Numeracy (NAPLAN). Testing under this program takes place each year at the same time across the country. The data provide policy makers, school communities and parents with information about student performance, thereby ensuring greater accountability of schools to improve teaching and learning (Belcastro & Boon, 2012; Wildy & Clarke, 2012). NAPLAN is a test of literacy and numeracy skills over time, linked to a national curriculum, and consists of tests in the domains of reading, writing, language conventions (spelling, and grammar and punctuation) and numeracy. Annually, NAPLAN testing is carried out in Years Three, Five, Seven and Nine across all schools in Australia (ACARA, 2016). NAPLAN is similar to standardised testing programs in the UK, Singapore, Germany, China and the U.S. (Rotberg, 2006; Mattei, 2012).
Lei and Zhao (2008) reported that a number of studies have sought to gauge the impact of ICT using approaches based on pre- and post-testing (Rockman, Walker & Chessler, 2000; Gulek & Demirtas, 2005; Silvernail & Gritter, 2005; Shapley et al., 2009). The purpose of this paper is to discuss the School's performance in NAPLAN testing before and after the implementation of the mobile learning initiative in 2010.
The description of NAPLAN results draws on five scales: one for each of the domains of Reading, Writing and Numeracy, and two for Language Conventions (Grammar and punctuation, and Spelling). These scales span all year levels from Year Three to Year Nine and describe the development of student achievement across ten bands. The scales are designed so that any given score by any student in Australia can be interpreted in the same way over time and represents the same level of achievement. For example, a score of 700 in Numeracy has the same meaning in 2012 as in 2010, enabling improvements to be gauged over time. The bands are used for reporting student performance at each year level: the Year Three report comprises bands one to six, Year Five bands three to eight, Year Seven bands four to nine, and Year Nine bands five to ten (ACARA, 2016); for further information about NAPLAN, see http://nap.edu.au.
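The band-reporting structure described above can be captured in a small mapping. This is a sketch for illustration only; the band ranges follow ACARA (2016), but the names and layout are the authors' of this edit, not ACARA's.

```python
# Reporting bands for each NAPLAN year level (ACARA, 2016).
# Python's range() excludes the stop value, so range(1, 7) covers bands 1-6.
REPORTED_BANDS = {
    3: range(1, 7),   # Year Three: bands one to six
    5: range(3, 9),   # Year Five: bands three to eight
    7: range(4, 10),  # Year Seven: bands four to nine
    9: range(5, 11),  # Year Nine: bands five to ten
}

for year, bands in sorted(REPORTED_BANDS.items()):
    print(f"Year {year}: bands {bands.start} to {bands.stop - 1}")
```

Note that successive year levels overlap by several bands, which is what allows the same scale to track a student's growth from one testing year to the next.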
Analysing the NAPLAN results for cohorts pre- and post- the 1:1 mobile learning initiative helps to gauge the impact of mobile learning in the learning areas of literacy and numeracy. As NAPLAN testing was introduced in 2008, only three years of data are available prior to the mobile learning implementation at the School in 2010. For this analysis, 2008, 2009 and 2010 results for Years Five, Seven and Nine were compared to 2011 and 2012 results for the same year levels. Writing was not included in the analysis due to changes to the Writing section of the NAPLAN test from 2011: a move from narrative writing to persuasive writing was approved by Australian State Ministers in 2010 following extensive piloting (Turvey, 2012).
In addition to analysing mean scores, NAPLAN data also provided an opportunity to compare the learning gains of cohorts to previous years where students did not have access to a mobile learning device. For this analysis, learning gains for Years Five and Seven from 2008 to 2010 were compared to learning gains for the same years from 2010 to 2012. There were no control groups due to the scale of the mobile learning implementation at the School.
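The learning-gain comparison described above reduces to a simple calculation, sketched below with hypothetical mean scores (the School's actual means are not reproduced here):

```python
def relative_gain(mean_start: float, mean_end: float) -> float:
    """Cohort gain between two NAPLAN testing points, expressed as a
    percentage of the starting mean (a 'relative percentage' gain)."""
    return (mean_end - mean_start) / mean_start * 100

# Hypothetical example: a cohort averaging 520 marks in Year Five (2010)
# and 560 marks in Year Seven (2012) records an absolute gain of 40 marks.
print(round(relative_gain(520.0, 560.0), 2))  # 7.69 (per cent, relative)
```

Expressing gains relative to the starting mean makes cohorts at different year levels, and hence different points on the NAPLAN scale, roughly comparable.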
Ideally, when comparing schools it is useful to select 'like schools' based on the Index of Community Socio-Educational Advantage (ICSEA). However, as there were a limited number of male-only schools similar in ICSEA to the School in this study, the School is compared with males nationally. Mean scores for students at the School are presented alongside corresponding data for males nationally. For every mean NAPLAN test score, a corresponding uncertainty of the mean is provided, in terms of a 95% confidence interval (CI). A one-way ANOVA (analysis of variance) for summary data is undertaken to determine whether a difference between a mean for the students at the School and the corresponding mean for males nationally is statistically significant, at both the p < 0.01 and p < 0.05 levels. The one-way ANOVA is also used to identify statistically significant differences in learning gains (at both p < 0.01 and p < 0.05 levels) between students at the School and males nationally, and between students pre-mobile learning implementation (Years Five to Seven and Seven to Nine, 2008 to 2010) and post-mobile learning implementation (Years Five to Seven and Seven to Nine, 2010 to 2012).
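A one-way ANOVA for summary data of the kind described can be computed directly from group sizes, means and standard deviations, without access to raw scores. The sketch below illustrates the idea; the cohort sizes, means and standard deviations are hypothetical assumptions, not the paper's data.

```python
def anova_from_summary(ns, means, sds):
    """One-way ANOVA F statistic computed from per-group summary
    statistics (sample sizes, means, standard deviations)."""
    k = len(ns)
    n_total = sum(ns)
    grand_mean = sum(n * m for n, m in zip(ns, means)) / n_total
    # Between-groups and within-groups sums of squares.
    ss_between = sum(n * (m - grand_mean) ** 2 for n, m in zip(ns, means))
    ss_within = sum((n - 1) * s ** 2 for n, s in zip(ns, sds))
    df_between, df_within = k - 1, n_total - k
    f = (ss_between / df_between) / (ss_within / df_within)
    return f, df_between, df_within

# Hypothetical comparison: a School cohort (n=120, mean 565, SD 70)
# vs. males nationally (n=30000, mean 540, SD 75).
f, df1, df2 = anova_from_summary([120, 30000], [565.0, 540.0], [70.0, 75.0])
print(f"F({df1}, {df2}) = {f:.2f}")  # F(1, 30118) = 13.29

# With df1 = 1 and a very large df2, the critical F value at p < 0.01 is
# about 6.63, so this hypothetical difference would be significant.
```

The F statistic is then compared against critical values of the F distribution (or converted to a p value with a statistics library such as `scipy.stats`) to assess significance at the p < 0.05 and p < 0.01 levels.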
Over the five-year period 2008 to 2012 (inclusive), the mean score recorded by the School was roughly +20 marks higher than that recorded by Year Five males nationally for Reading, Spelling, and Grammar and punctuation. For Numeracy, the margin was larger, at about +40 marks in 2008, 2009 and 2011. For Persuasive Writing, a roughly +20 mark gap was observed in 2011 and 2012. In 2010 (Cohort A), the Year Five cohort scores for Spelling, Grammar and punctuation, and Numeracy were approximately 10 marks above the mean recorded nationally by males. The Year Five School results are consistent across the period 2008 to 2012, with 2010 the only year recording lower scores than the other years.
Figure 2 displays the School mean score compared to the national mean score for males in Year Seven. Cohort B is shown as Year Seven students in 2010 (first year of study) and Cohort A is shown as Year Seven students in 2012 (third year of study).
Figure 1: Year Five NAPLAN test results 2008-2012: School vs. males nationally
Figure 2: Year Seven NAPLAN test results 2008-2012: School vs. males nationally
Over the five-year period 2008 to 2012 (inclusive), the mean score recorded by the School was roughly +30 marks higher than that recorded by Year Seven males nationally (in the four areas excluding Persuasive Writing). For Persuasive Writing, a +30 mark gap was observed in 2011 and 2012. In relative terms, these margins represent a difference of about +5.5% for all domains. The 2009 Year Seven pre-mobile learning cohort at the School scored particularly highly in the four domains; in that year the margin between the School and Year Seven males nationally averaged about +9.5% in relative terms. Since 2009, the School's results have tended to return to a position closer to, but still above, the Year Seven male national results.
Figure 3 displays the School's mean score compared to the national mean score for males in Year Nine. Cohort B is shown as Year Nine students in 2012 (third year of the study).
Figure 3: Year Nine NAPLAN test results 2008-2012: School vs. males nationally
Over the five-year period 2008 to 2012 (inclusive), the mean score recorded by the School was roughly +30 marks higher than that recorded by Year Nine males nationally in Reading, Spelling, and Grammar and punctuation. For Numeracy, the margin was slightly larger, at about +50 marks. For Persuasive Writing, a +40 mark gap was recorded in 2011 and 2012. In relative terms, margins of +30, +40 and +50 marks represent differences of about +5%, +7.5% and +8.5%, respectively. The Year Nine School results are consistent across the period 2008 to 2012, with no one year being particularly strong or weak.
Overall, the data indicates that the School has performed consistently better than the national average in all domains between 2008 and 2012. The implementation of the mobile learning initiative appears to have had no discernible impact.
Figure 4: Cohort gains Years Five to Seven 2008-2010 (pre-mobile learning initiative) (relative percentage)
Figure 5: Cohort gain Years Five to Seven 2010-2012 (Cohort A) (relative percentage)
Between 2008 and 2010, the four domains recorded differences in cohort gain between the School and males nationally of about one per cent (relative). None of the differences in cohort gains between the males at the School and males nationally were statistically significant at the p < 0.05 level. Similarly, between 2010 and 2012, differences in cohort gains between the School and males nationally were not statistically significant at the p < 0.05 level, although it is interesting that School gains were higher than national gains in three of the four domains.
Figure 6 shows learning gains from Year Seven to Year Nine in all domains for the School and males nationally between 2008 and 2010 (pre-mobile learning initiative). Figure 7 presents the same data between 2010 and 2012 (i.e. for Cohort B).
Figure 6: Cohort gain Years Seven to Nine 2008-2010: School vs. males nationally (pre-mobile learning initiative) (relative percentage)
Between 2008 and 2010, differences in cohort gains between the School and males nationally were negligible and not statistically significant at the p < 0.05 level. For Numeracy, the result was unusual in that the cohort gain for males nationally was greater than that achieved by the School. Between 2010 and 2012, differences in cohort gains between the School and males nationally were also negligible and not statistically significant at the p < 0.05 level. However, cohort gains achieved by males nationally were greater than those achieved by the School in three of the four domains.
Figure 8 plots the cohort gains for Year Five to Year Seven as a percentage for the 2008 to 2010 pre-mobile learning cohort compared to the post-mobile learning cohort 2010 to 2012 (Cohort A). Figure 9 plots the cohort gains for Year Seven to Year Nine as a percentage for the 2008 to 2010 pre-mobile learning cohort compared to the post-mobile learning cohort 2010 to 2012 (Cohort B).
Figure 7: Cohort gain Years Seven to Nine 2010-2012 (Cohort B) (relative percentage)
Figure 8: School's cohort (Cohort A) gains Years Five to Seven: 2008-2010 vs. 2010-2012 (relative percentage)
For Years Five to Seven, Reading and Numeracy 2008 to 2010 gains were marginally greater than the 2010 to 2012 gains; for Spelling, and Grammar and punctuation the 2010 to 2012 gains were greater than the 2008 to 2010 gains. None of the differences in gains were statistically significant at the p < 0.05 level.
Figure 9: School's cohort (Cohort B) gains Years Seven to Nine: 2008-2010 vs. 2010-2012 (relative percentage)
For Years Seven to Nine, differences between 2008-2010 and 2010-2012 cohorts in Reading, Spelling and Numeracy were negligible and not statistically significant at the p < 0.05 level. However, the difference in gains for Grammar and punctuation was 26 marks (absolute) and 5.2% (relative). A One-Way ANOVA test (Table 1) revealed that the difference between the 2008 to 2010 cohort gain and the 2010 to 2012 cohort gain was statistically significant at the p < 0.01 level.
Table 1: One-way ANOVA results for differences in cohort gain (Δ, Year Seven to Year Nine) by NAPLAN area: School 2008-10 vs. School 2010-12, Δ [(2010-12) - (2008-10)]
Note. ** Significance at p < 0.01 level; * significance at p < 0.05 level
Overall, Years Five to Seven cohort gains were mixed, with no discernible patterns evident. This result contrasted somewhat with the Years Seven to Nine cohort gains: for three of the four domains, the 2010 to 2012 post-mobile learning cohort gain was less than the 2008 to 2010 pre-mobile learning cohort gain. For Grammar and punctuation, the difference in cohort gain was statistically significant.
Research into the application of ICT for student learning suggests that a move towards student ownership of mobile devices may yield significant benefits, particularly in terms of student engagement (Mouza, 2008; Bate, 2010; Bebell & Kay, 2010; Keengwe & Bhargava, 2014). The findings reveal a school that performed consistently higher than the national average in all areas of NAPLAN, indicating that good teaching and support structures are in place. Whether these preconditions are necessary for the implementation of mobile learning initiatives may be a worthy question for future research. Mean scores in all domains remained stable from pre- to post-mobile learning implementation, suggesting that the mobile learning initiative had no significant impact. This finding is consistent with other studies (Grimes & Warschauer, 2008; Bebell & O'Dwyer, 2010; Won Hur & Oh, 2012; Fabian, Topping & Barron, 2016) which also reported negligible impacts on student learning arising from mobile learning implementations, in this case involving laptop computers. In Cohort B, however, cohort gains in three of the four domains were less than the 2008 to 2010 pre-mobile learning cohort gains, and in Grammar and punctuation a statistically significant decline in learning gains was recorded between 2008-2010 (pre-mobile learning) and 2010-2012 (post-mobile learning) in Years Seven to Nine. It could well be that the commencement of the 1:1 initiative, at a time when students are making a transition from primary to middle school, is not ideal. The impact of ICT implementations in transition years could well be a fruitful area for further research.
Teaching and learning interactions are influenced by a complex mix of psychological and sociocultural factors, in addition to physical and temporal considerations (Hattie & Yates, 2014). It is, therefore, difficult to untangle the impact of a mobile learning initiative from the range of other influences on student learning, particularly when approaches to integrating mobile learning vary from teacher to teacher. For example, it could be that teaching strategies in Grammar and punctuation Years Seven to Nine did not emphasise the use of mobile learning in the classroom. Therefore, inferences about the impact of the 1:1 implementation should be made cautiously. The way in which mobile devices are implemented is an important question for future research. This insight was noted by Weston and Bain (2010) in their review of realising the advantages of 1:1 computing. More research, drilling down into the detail, is clearly required to give enhanced texture to studies that seek to link student performance with ICT initiatives.
The mobile learning initiative at the School was implemented to provide students with access to digital learning experiences that mirrored societal trends in the use of ICT. NAPLAN testing tends to ignore key generic and digital literacy skills such as the ability to access and synthesise online information, problem-solving skills, communication skills and ICT capabilities (Grimes & Warschauer, 2008). According to Holcomb (2009, p. 54), ICT skills "do not necessarily align with today's standardised assessments." Therefore, the use of standardised tests in isolation to monitor improvement in schools with mobile learning programs is problematic. High-stakes testing is often conducted in environments where test scores are tied to funding models. While standardised testing in OECD countries is commonplace, Morris (2011, p. 21) discussed some of its associated concerns:
... there are a number of limitations to standardized tests which weaken the capacity to achieve their purpose. Primarily, standardized tests are limited in scope both in terms of the breadth of their reach and in terms of their depth of assessment.

The literature suggests that the evaluation of mobile learning initiatives with a sole focus on standardised testing results is limited, and so researchers continue to search for accurate measures of student achievement in schools with such programs. While standardised testing might be a crude instrument, it nevertheless plays an important role in measuring student improvement in key learning areas because of the associated reliability of the assessment (Suhr, Hernandez, Grimes & Warschauer, 2010; Perso, 2011).
There may well have been benefits of the mobile learning initiative that are currently not measured through standardised testing. Certainly, Newhouse (2014) found that the integration of ICT resulted in benefits that transcended those that might be captured in the formal curriculum. The benefits of mobile learning as a mechanism for enhancing key generic and digital literacies could be the subject of future research into the broader impact of mobile technologies on student learning.
Some limitations of the research should be acknowledged. The research was conducted in one single-gender school in one state in Australia, and was therefore based on a relatively small-scale single case. Further, the implementation of the mobile learning initiative was embryonic, and the way in which students and teachers used the devices evolved rapidly over the period of the research. It should also be noted that the purpose of the NAPLAN program is to identify regions and schools that have the greatest need for additional resources, rather than to evaluate small-scale improvement initiatives such as the one that was the subject of this research. That said, the use of NAPLAN data to inform debate around a school improvement initiative was certainly worthwhile. Recently, the Australian Curriculum, Assessment and Reporting Authority (ACARA) announced that NAPLAN testing will move from the conventional pen and paper method to an online assessment in 2019 (ACARA, 2016b). If, as Biggs and Tang (2011) suggest, assessment drives learning, then online NAPLAN testing may well be a catalyst for more extensive use of ICT for learning.
Analysis of literacy and numeracy outcomes for Cohorts A and B was multi-faceted and considered how each cohort performed over time.
As mentioned above, these results should be treated with some caution, as the level of use of mobile devices varied between subjects and the overall aim of NAPLAN is to inform macro- rather than micro-level change initiatives. Linking the data to gains in learning is therefore challenging, and underlines the need to understand prudently the contexts and nuances of specific cohorts in terms of performance. Given the limited timeframe of the study, a longer longitudinal timeframe of a decade might offer more conclusive findings. The study was set in one school, in one socio-cultural setting, in one state, and the implementation was focused on one type of device; generalisations to other contexts should be made with caution. The implementation of mobile learning approaches in the two cohorts was variable across NAPLAN areas, and therefore it is difficult to draw inferences from the data without complementary, in-depth qualitative data. Mixed methods approaches have proven fruitful in providing a breadth and depth of data which have led to more sophisticated understandings of ICT interventions. Finally, the concern or blame expressed by some members of the education community about declining NAPLAN scores allegedly due to the use of mobile devices for learning appears to be more speculative than proven.
ANAO (Australian National Audit Office) (2009). Digital education revolution - national secondary schools computer fund. Australian National Audit Office. https://www.anao.gov.au/work/performance-audit/digital-education-revolution-program-national-secondary-schools-computer-fund
Bate, F. (2010). A bridge too far? Explaining beginning teachers' use of ICT in Australian schools. Australasian Journal of Educational Technology, 26(7), 1042-1061. http://dx.doi.org/10.14742/ajet.1033
Bate, F., Macnish, J. & Males, S. (2012). Understanding parent perceptions of a 1:1 laptop program in Western Australia. Australian Educational Computing, 27(2), 18-21. http://acce.edu.au/journal/27/2/understanding-parent-perceptions-11-laptop-program-western-australia
Bebell, D. (2005). Technology promoting student excellence: An investigation of the first year of 1:1 computing in New Hampshire middle schools. http://www.bc.edu/research/intasc/PDF/NH1to1_2004.pdf
Bebell, D. & Kay, R. (2010). One to one computing: A summary of the quantitative results from the Berkshire wireless learning initiative. The Journal of Technology, Learning, and Assessment, 9(2), 1-60. http://ejournals.bc.edu/ojs/index.php/jtla/article/view/1607
Bebell, D. & O'Dwyer, L. (2010). Educational outcomes and research from 1:1 computing settings. The Journal of Technology, Learning, and Assessment, 9(1), 1-16. http://ejournals.bc.edu/ojs/index.php/jtla/issue/view/145
Belcastro, L. & Boon, H. (2012). Student motivation for NAPLAN tests. Australian and International Journal of Rural Education, 22(2), 1-19. http://search.informit.com.au.ipacez.nd.edu.au/fullText;dn=664273667559840;res=IELIND
Biggs, J. B. & Tang, C. (2011). Teaching for quality learning at university (4th ed.). Maidenhead, England: Open University Press.
Carmichael, C., MacDonald, A. & McFarland-Piazza, L. (2014). Predictors of numeracy performance in national testing programs: Insights from the longitudinal study of Australian children. British Educational Research Journal, 40(4), 637-659. http://dx.doi.org/10.1002/berj.3104
Crompton, H. (2013). A historical overview of mobile learning: Toward learner-centered education. In Z. L. Berge & L. Y. Muilenburg (Eds), Handbook of mobile learning, 3-14. Routledge.
Cuban, L. (2006). The laptop revolution has no clothes. Education Week, 26(8), 29. http://www.edweek.org/ew/articles/2006/10/18/08cuban.h26.html
Dreher, K. (2012). Tests, testing times and literacy teaching. Australian Journal of Language & Literacy, 35(3), 334-352. http://search.informit.com.au/documentSummary;dn=793996412993767;res=IELHSS
Fabian, K., Topping, K. J. & Barron, I. G. (2016). Mobile technology and mathematics: Effects on students' attitudes, engagement, and achievement. Journal of Computers in Education, 3(1), 77-104. http://dx.doi.org/10.1007/s40692-015-0048-8
Grimes, D. & Warschauer, M. (2008). Learning with laptops: A multi-method case study. Journal of Educational Computing Research, 38(3), 305-332. http://dx.doi.org/10.2190/EC.38.3.d
Gulek, J. C. & Demirtas, H. (2005). Learning with technology: The impact of laptop use on student achievement. Journal of Technology, Learning, and Assessment, 3(2), 1-39. http://ejournals.bc.edu/ojs/index.php/jtla/article/view/1655/1501
Hardy, I. (2014). A logic of appropriation: Enacting national testing (NAPLAN) in Australia. Journal of Education Policy, 29(1), 1-18. http://dx.doi.org/10.1080/02680939.2013.782425
Hattie, J. & Yates, G. C. (2014). Visible learning and the science of how we learn. Abingdon, England/New York, NY: Routledge.
Holcomb, L. B. (2009). Results & lessons learned from 1:1 laptop initiatives: A collective review. TechTrends, 53(6), 49-55. http://dx.doi.org/10.1007/s11528-009-0343-1
Keengwe, J. & Bhargava, M. (2014). Mobile learning and integration of mobile technologies in education. Education and Information Technologies, 19(4), 737-746. http://dx.doi.org/10.1007/s10639-013-9250-3
Lei, J. & Zhao, Y. (2007). Technology uses and student achievement: A longitudinal study. Computers & Education, 49(2), 284-296. http://dx.doi.org/10.1016/j.compedu.2005.06.013
Lei, J. & Zhao, Y. (2008). One-to-one computing: What does it bring to schools? Journal of Educational Computing Research, 39(2), 97-122. http://dx.doi.org/10.2190/EC.39.2.a
Mattei, P. (2012). Raising educational standards: National testing of pupils in the United Kingdom, 1988-2009. Policy Studies, 33(3), 231-247. http://dx.doi.org/10.1080/01442872.2012.658260
Morris, A. (2011). Student standardised testing: Current practices in OECD countries and a literature review. OECD. http://dx.doi.org/10.1787/5kg3rp9qbnr6-en
Mouza, C. (2008). Learning with laptops: Implementation and outcomes in an urban, under privileged school. Journal of Research on Technology in Education, 40(4), 447-472. http://dx.doi.org/10.1080/15391523.2008.10782516
Newhouse, C. P. (2014). Learning with portable digital devices in Australian schools: 20 years on! The Australian Educational Researcher, 41(4), 471-483. http://dx.doi.org/10.1007/s13384-013-0139-3
Perso, T. (2011). Assessing numeracy and NAPLAN. Australian Mathematics Teacher, 67(4), 32-35. http://www.aamt.edu.au/content/download/18075/240416/file/amt67_4_perso.pdf
Rockman, S., Walker, L. & Chessler, M. (2000). A more complex picture: Laptop use and impact in the context of changing home and school access. http://rockman.com/docs/downloads/microsoft-anytime-anywhere-evaluation.pdf
Rotberg, I. C. (2006). Assessment around the world. Educational Leadership, 64(3), 58-63. http://www.ascd.org/publications/educational-leadership/nov06/vol64/num03/Assessment-Around-the-World.aspx
Selwyn, N. (2011). Schools and schooling in the digital age: A critical analysis. Abingdon, England: Routledge.
Shapley, K., Sheehan, D., Sturges, K., Carinkas-Walker, F., Huntsberger, B. & Maloney, C. (2009). Evaluation of the Texas technology immersion pilot: Final outcomes for a four-year study (2004-05 to 2007-08). Austin, TX: Texas Center for Educational Research. http://etcjournal.files.wordpress.com/2010/07/etxtip_final.pdf
Silvernail, D. & Gritter, A. (2005). Maine's middle school laptop program: Creating better writers. http://maine.gov/mlti/resources/Impact_on_Student_Writing_Brief.pdf
Suhr, K. A., Hernandez, D. A., Grimes, D. & Warschauer, M. (2010). Laptops and fourth grade literacy: Assisting the jump over the fourth grade slump. Journal of Technology, Learning, and Assessment, 9(5), 1-46. http://ejournals.bc.edu/ojs/index.php/jtla/article/view/1610/
Turvey, A. (2012). Researching the complexity of classrooms. Changing English: Studies in Culture Education, 19(1), 57-65. http://dx.doi.org/10.1080/1358684X.2012.649144
Weston, M. E. & Bain, A. (2010). The end of techno-critique: The naked truth about 1:1 laptop initiatives and educational change. Journal of Technology, Learning, and Assessment, 9(6), 1-26. http://ejournals.bc.edu/ojs/index.php/jtla/article/view/1611
Wildy, H. & Clarke, S. (2012). Making local meaning from national assessment data: A Western Australian view. Journal of Management Development, 31(1), 48-57. http://dx.doi.org/10.1108/02621711211190998
Won Hur, J. & Oh, J. (2012). Learning, engagement, and technology: Middle school students' three-year experience in pervasive technology environments in South Korea. Journal of Educational Computing Research, 46(3), 295-312. http://dx.doi.org/10.2190/EC.46.3.e
Authors: Dr Steven Males BEd (ACU, 1996), MEd (ACU, 2001), PhD (UNDA, 2015) is currently a primary school principal in Perth, Western Australia. Steven has a deep interest in the use of ICT in K-12 environments, focusing on what makes a difference for student learning. He completed his PhD thesis One-to-one laptop program: Effect on boys' education at the University of Notre Dame Australia.
Associate Professor Frank Bate PhD (Murd, 2010), MEd (UNDA, 2006), BA (Curtin, 1988) is the Director of Medical Education at the University of Notre Dame Australia. Frank is passionate about supporting staff in their teaching and developing curricula through rich, innovative pedagogical approaches. His research interests centre on curriculum, educational change, and the use of ICT to enhance the learning experience.
Associate Professor Jean Macnish Dip Teach (Curtin), BEd Secondary (Curtin), MEd (Deakin), PhD (Curtin) coordinates and teaches undergraduate and postgraduate Information & Communications Technologies units at The University of Notre Dame Australia's School of Education, Fremantle campus. Jean currently supervises PhD studies in mobile learning in early childhood education, and ICT enriched problem-based learning in high school science classrooms. She is passionate about sound ICT integration.
Please cite as: Males, S., Bate, F. & Macnish, J. (2017). The impact of mobile learning on student performance as gauged by standardised test (NAPLAN) scores. Issues in Educational Research, 27(1), 99-114. http://www.iier.org.au/iier27/males.html