Issues in Educational Research, 2016, Vol 26(1), 10-28

Conducting psychological assessments in schools: Adapting for converging skills and expanding knowledge

Terry Bowles
The University of Melbourne, Australia

Janet Scull
Monash University, Australia

John Hattie and Janet Clinton
The University of Melbourne, Australia

Geraldine Larkins
Australian Catholic University, Australia

Vincent Cicconi, Doreen Kumar and Jessica L. Arnup
The University of Melbourne, Australia

In this paper we argue for a revision of the knowledge, skills and competencies of school psychologists and specialist teachers. Evidence-based practice of psychologists and teachers, the accountability movement, and calls for improved service delivery have led to changes in the practice of both professions. Changes in technology and the growing complexity of service delivery have converged, leading to changes in the practice of testing, assessment, and evaluation in schools, and to calls for increased competencies of teachers. Testing, assessment and evaluation have previously been a central practice of educational psychologists, but they are now expected to be key competencies of all teachers and other professionals working in schools. This inevitably changes the balance of practices of both teachers and educational psychologists in schools as roles converge, making collaboration and joint consultation best practice within a response to intervention framework. In view of the growing demands, changes and possibilities, we propose a three tier model of assessment functions that includes educational psychologists, special educators and teachers. The proposal is inclusive and offers possibilities for a more collaborative and participatory relationship between these professions in school settings, and for a stronger emphasis in pre-service curriculum on testing and interpretation and their utility in effective intervention.


We argue that the understanding of and approach to assessment and intervention in schools is changing in the 21st century, and that this requires a broader view of assessors and assessments. Assessment, intervention, consulting and counselling are key functions of educational psychologists and school psychologists (ESPs). In the past decade, special education teachers (SETs) and classroom teachers have become more involved in testing and assessment of students to identify how and what students can achieve, the impact of the curriculum and specific interventions, and the quality and impact of teaching. The ongoing introduction of an audit approach relying more heavily on testing, such as the National Assessment Program, Literacy and Numeracy (NAPLAN, 2013), has demanded a new set of skills from many in education and new ways to consider assessment, outcomes and, subsequently, intervention. Therefore, ESPs will need to redefine their role in assessment and reporting to inform teacher judgments at each tier of intervention (Hughes & Dexter, 2011), and teacher educators will need to focus on building competence in test administration, interpretation and statistics, and assessment more broadly. This will allow ESPs to develop a broader, more consultation-focused practice. Finally, elements of the roles of ESPs, SETs and classroom teachers will be discussed through the lens of response to intervention (RtI; Hughes & Dexter, 2011).

Assessment and educational psychology

Educational psychology assessment generally involves the administration and interpretation of tests or measures of intellectual abilities, academic skills, and other attributes associated with educational performance and the psychological and mental status of the individual. The purpose of assessment is to gather information in order to provide informed advice or recommendations concerning aspects of the student's educational and/or psychosocial functioning and attainment. The types of tests commonly used, which we consider appropriate for SETs and classroom teachers to be trained in, usually require specialist training in administration and scoring. These tests are exemplified by the Peabody Picture Vocabulary Test (PPVT-4; Dunn & Dunn, 2007), the Clinical Evaluation of Language Fundamentals Screening Test (CELF-5; Wiig, Semel & Secord, 2013), the Woodcock-Johnson Tests of Cognitive Abilities IV (WJ-IV; Woodcock, McGrew & Mather, 2014), and the Wechsler Individual Achievement Test II (WIAT-II; Wechsler, 2005). Evaluation is the process of establishing whether the assessment and the constituent elements of subsequent consultation (e.g., the justification, purpose, process, context and outcomes) were effective. Evaluating and writing the report may be a joint or supervised activity, combining information from the test administration with interview data; this, together with careful planning of intervention processes, may be part of the new roles for ESPs and SETs.

ESPs spend a large proportion of their time assessing students. Although ESPs are comprehensively trained to engage in a wide variety of activities in schools, psychoeducational assessment has been their most time-consuming activity, occupying 40% to 75% of their work time (Canning & Strong, 1996; Harris & Joy, 2010). Psychoeducational assessment has remained a core activity, although respondents indicated they wanted to do less of it. Research from the United States (Hutton, Dubes & Muir, 1992; Reschly, 2000) and Canada (Corkum, French & Dorey, 2007; Jordan, Hindes & Saklofske, 2009) also indicated that North American ESPs reported spending the largest proportion of their time (50% to 75%) engaged in intellectual assessment. ESPs in some Canadian provinces report conducting a significant amount of psychoeducational assessment, and their commitment to assessment corresponds with the experience of North American and Australian ESPs. Despite this, ESPs would prefer to increase the time spent on other activities, such as prevention, individual and group counselling, and research (Harris & Joy, 2010).

This broader set of work roles is in line with the Canadian Psychological Association's (2007) school psychology best practice policy, which recommends a refocus to include prevention, counselling and research. Specifically, the problem for ESPs is how they, schools and education systems can work to utilise all the skills of ESPs, allowing them to contribute effectively to these other areas (as well as assessment), given the limits on their time and the increasing demand for their services in schools. Finally, while psychological assessment has previously been associated with deficit models of performance, in which the individual student is excluded from mainstream activities due to low ability, we advocate a needs-based model in combination with RtI. A needs-based approach to intervention assessment can be used to determine the range of competencies and capabilities of the student, followed by instructional consultation in which staff with appropriate competencies and resources are assigned to assist the student to remain in the mainstream setting and flourish.

The Australian context

As new models of accreditation and training for Australian psychologists working in school settings are being considered (Psychology Board of Australia, 2015), the performance of teachers is also undergoing review (AITSL, 2011). In Australia, registered psychologists can practice assessment in a school if they have training in the specific assessment and have been supervised in administering and interpreting the results of testing. This occurs after six years of pre-service training followed by supervision at placement and in work settings. Sometimes this training occurs during postgraduate study, in supervised and specialist training, or is provided by the test publisher/owner. The traditional role of ESPs includes psychological assessment, intervention and consultation (Fagan & Wise, 2007). For decades ESPs have correspondingly spent large amounts of their time assessing students, with approximately 54% of their time spent undertaking assessments, 24% implementing interventions, 20% completing professional development and consultations with parents, teachers and other professionals, and 2% undertaking research and/or program evaluation (Fagan & Wise, 1994). The term ‘assessment' was the third most frequent title and keyword in one of Australia's premier journals of educational and developmental psychology (following ‘children' and ‘adolescents'), indicating it has held prominence within the profession, practice and research of Australian psychologists (Bowles, 2009).

Similarly, Evans, Grahamslaw, Henson and Prince (2012) investigated the competencies and practice of ESPs in the United Kingdom. They found that practice involved (in order from greatest to least): consultation skills, standard assessment, non-standard assessment, therapeutic approaches, presentation and interpersonal skills, problem-solving, pupil intervention, research skills, group work, systematic approaches, staff support, and parent support.
Evans et al.'s (2012) research described the complexity of the role of ESPs. Participants in their study felt that ESPs supported school staff and parents through the consultation, interpretation, and translation of complex test results; however, implementing actions was an area in which ESPs required more support and training. Although ESPs' work was valued, the researchers noted mismatches between training and professional work due to a poorly defined professional identity for ESPs (Evans et al., 2012).

There is also a need for a critical reappraisal of the aims and methods of psychological assessment in relation to students' educational functioning. Rather than attaching diagnostic labels and prescribing blanket treatments, more emphasis should be given to individualised assessment aimed towards effective, long-term intervention programs. Such advice needs to include concrete recommendations and thorough evaluations of effectiveness, accompanied by instructional consultation emphasising strategic, programmatic intervention, leadership and evaluation (Hatzichristou & Lampropoulou, 2004; Rosenfield, 2008). Recommendations need to take into account the educational milieu as well as the person, evaluation and further prognostic intervention. The utility of ESPs' work will then depend on the translation of planned interventions into effective classroom practice, therapeutic intervention or other consultation practices. Specialists responsible for designing and operating such assessment programs should possess a unique combination of skills and competencies not generally found in graduates of traditional masters and doctoral programs in education or psychology (Astin & Antonio, 2012; Rosenfield, 2008).

ESPs are also leaders in consultancy and agents of change through knowledge transfer, combining research, policy and practice for positive change in the lives of students. This can be reflected in recommendations and interventions designed for specific students (Dunsmuir & Kratochwill, 2013; Rosenfield, 2008). Such work is high-stakes and taxing, however, and while ESPs are trained in psychological consultancy, an over-commitment to assessment and report writing reduces the possibility for effective leadership and strategic consultancy (Ingram, 2013; Rosenfield, 2008). Given the demand for specialist services and assessment at the direct intervention level, it is argued that the roles and functions of ESPs in regard to assessment are changing and that a broader view of assessors and assessments within educational settings should be considered. This is especially relevant given the growing need for teachers to become more conversant with and competent at understanding tests, assessment, evaluation, and planning interventions. The central issue is which of the other school professionals, in particular those trained in clinical practice in education (McLean Davies et al., 2013), trained teachers, special education teachers, or new, emerging professions, may be better placed to provide some of the services previously associated with the testing and assessment work of ESPs.

Assessment and teaching

Assessment from the perspective of teachers has also changed. While classroom assessment continues as a practice that involves noticing and responding to children's presenting behaviours relevant to learning (Johnston & Costello, 2005), some teachers now have postgraduate training equivalent in time and specificity to that of psychologists. This leads to the question of why teachers should not have the right to test and assess, as ESPs have done previously. Many previously secure assessments and tests are now widely and openly available via the web, along with videos and aids to help interpret them, and online communities that help teachers use and interpret tests and assessment techniques appropriately. The convergence of the clinical and assessment (diagnostic and treatment) roles of many education professionals into what was once the singular domain of educational psychology has grown, and a number of professions now possess comparable skills and competencies, including graduates of paediatrics, developmental psychology, population health, science, sociology, social work, education (Gilman & Gabriel, 2004) and occupational and speech therapy. However, previous attempts to define what teachers should know and be able to do in relation to assessment have not addressed assessment skills in a systematic way (Brookhart, 2011). Nevertheless, researchers have long recognised assessment skills as a crucial element of effective teaching practice (Gullickson, 1986; Kyriakides, 2014; Schafer, 1991).

There are now teacher education programs that have developed a shared language, almost a meta-language, for understanding and talking about teaching as clinical practice. This language includes an approach to assessment that is at odds with deficit models and is firmly based on asset and strength-based models (Jimerson, Sharkey, Nyborg & Furlong, 2004). These training programs incorporate all aspects of the clinical process for assessment and evaluation. While it is "important to note that words such as ‘clinical' and ‘intervention' are not terms traditionally associated with learning and teaching in Australian schools and early childhood/early years settings" (McLean Davies et al., 2013, p.98), this meta-language has increasingly been incorporated into the lexicon of schools and school staff as the framework for professional teacher development programs. The articulation of this meta-language is an attribute of evidence-based clinical practice. In short, teachers are no longer seen simply as dispensers of knowledge (Reynolds, 1999); they are also taught assessment skills so that they are aware of both developmental factors and the personal context of their students, in order to improve their students' learning. Thus, teachers have dual roles: to manage the class, and to plan and intervene at the individual and group level; for example, the primary teacher trained in specialised practice (e.g., literacy, assessment), or the secondary history teacher trained as a clinical, interventionist practitioner.

Assessment lies at the heart of clinical teaching. For example, teachers make, choose, administer, score and interpret assessments and implement interventions to assist students' progress. They practice designing interventions for all students: the advanced, the on-track, and those requiring differentiated curriculums (Wiliam, Lee, Harrison & Black, 2004). Like psychologists, these teachers are trained to identify and use behavioural observations to determine which students are not engaged or achieving, and why; some teachers may even perform applied behavioural analysis for specific, evidence-based interventions (Allen & Bowles, 2014). They write reports about students' development through the curriculum and participate in teacher-teacher and parent-teacher meetings from this standpoint (McLean Davies et al., 2013). Thus, clinical teachers need to be skilled in diagnostic assessment. These findings also support the use of teacher ratings in general, because ratings incorporate performance over a period of time and can address constructs such as academic enablers when screening for learning problems. They extend into Australia a line of research from the United States that has characterised teachers as accurate assessors of students' levels of academic achievement (Gerber & Semmel, 1984; Gresham et al., 1997; Kettler, Elliott, Davies & Griffin, 2012).

For the past decade teachers have been directed to personalise learning (Keamy, Nicholas, Mahar & Herrick, 2007). A commonly endorsed approach taken by schools and teachers has been to implement individual learning contracts and individual learning programs. The focus, however, has tended to be on classroom behaviours and practices rather than directly on learning and academic achievement. The introduction of clinical approaches equips the teacher to make a more effective individual assessment of student learning and to understand where students have been, where their learning is at present, and how they are going to reach the intended level of learning (Dinham, 2013, 2016). Clinical teachers use evidence from student assessment diagnostically while assessing their own effectiveness as teachers and evaluating the process of learning and teaching. For ESPs and classroom teachers, however, the convergence and clarity of their respective roles, both during training and in practice in school settings, require redefinition and clarification. Teachers, more than ESPs, are now at the front line of observation, diagnosis, choosing interventions, and evaluating and interpreting progress. This depends upon all parties sharing a common purpose and methods of practice, which we argue is both possible and the appropriate direction for education and schools.

Assessment and response to intervention

Recent commentary has noted a paradigm shift in current perspectives in cognition theory, particularly in regard to RtI (Fox, Carta, Strain, Dunlap & Hemmeter, 2010), which emphasises the importance of assessment in identifying the nature and severity of a student's difficulties and in treatment intervention. Further, testing should not mask or prevent practitioners and teachers from seeing other deficits or abilities present in the learning processes of students (Australian Psychological Society, 2014). One of the fundamental principles of RtI, underlying all of its core components, is the importance of matching the severity of student problems with appropriate intervention intensities (Gresham, 2004). All levels of prevention outcomes in the social, emotional, academic and behavioural RtI framework (Hughes & Dexter, 2011) call for effective interventions utilising evidence-based strategies that prevent problems, rather than reacting to problems by employing aversive consequences. This requires early identification and strategic, evaluated (possibly long-term) intervention that can prevent the escalation of problems into more debilitating forms of social-emotional, academic and behavioural functioning manifest in diagnoses. In relation to RtI, the current system of testing and policies on student assessment are not as focused on collaborative, strategic and inclusive practices as they could be (Gresham, 2004).

In order to understand assessment as a common practice of educational professionals, we examine the evolution of the definition and practice of assessment. According to Tayler, "assessment is the term used to describe the process of gathering information and evidence about what a child knows and can do" (2000, p. 204). Similarly, Fagan and Wise (2007) defined psychological assessment as a complex problem-solving or information-gathering task. Cohen, Swerdlik and Phillips (1996) stated that psychological assessment can be seen as "the gathering and integration of psychology-related data for the purpose of making a psychological evaluation, accomplished through the use of tools such as tests, interviews, case studies, behavioural observation, and specially designed apparatuses and measurement procedures" (p. 6). In education, by contrast, "assessment in special education is a shared practice not owned exclusively by or limited to any single profession or group of professionals" (National Clearinghouse for Professions in Special Education, 2000; North Carolina School Psychology Association, 2005; Sutton & Letendre, 2000; Zweback & Mortenson, 2002). Importantly, educational assessment is carried out by different professional groups, each bringing their own perspective and orientation (Salvia & Ysseldyke, 2004; Sutton, Frye & Frawley, 2009; Zweback & Mortenson, 2002). We view this more inclusive standpoint as relevant to our current context and consider it the overarching approach defining appropriate assessment.

The argument for allowing only psychologists to choose, administer and interpret assessments revolves around competency in administration, interpretation and assessment functions, which are intertwined with diagnosis (if necessary), planning and intervention, follow-up and evaluation. The Australian Psychological Society (2011) considered that "it is inappropriate for school counsellors and those who are not registered psychologists to administer psychological tests (e.g., WISC-IV), which school counsellors may be asked to do, as they are not trained in the underlying principles of administration, interpretation or reporting requirements of these tests." This concern may be warranted, and certainly psychological assessment and testing are central to the activities and functions of ESPs, but as changes continue in the educational environment, a broader understanding and application of the expertise of educational professionals following evidence-based principles should be reflected in assessment policy and practice. This is particularly the case given that teachers are already doing these tasks, more pressure is being placed on teachers to do them, and many recent models of teaching assume they are at the core of teachers' daily practice. The roles of the special educator and specialist teacher overlap with that of the ESP in the areas of planning and intervention. Correspondingly, Ashton and Roberts (2006) found that it is already difficult for ESPs to develop a professional identity separate from the role of the special education teacher.

A report by the NSW Department of Education and Communities (2011) compared ESPs' roles in England, Wales and Scotland and found that ESPs were too heavily involved in assessment, which prevented them from undertaking work in other areas. The Scottish review (Scottish Executive, 2006) found that more inclusive involvement of school staff, not just ESPs, was necessary in coordinating and supporting assessment and intervention. The experience in Australia is not dissimilar. Teachers and ESPs acquire similar knowledge in their training and are in a strong position to collaborate on essential processes of intervention. How they provide these services requires a more collaborative view of best practice policies, driven by needs and serviced by competent, well supervised practitioners, rather than constrained by traditional roles. The future benefit for ESPs will be the expansion and diversification of their functions and roles towards greater leadership and consultation with the students of greatest need; the benefit for teachers and special educators is that RtI becomes more of a team-based consultative effort, increasing the knowledge base of all professionals in schools and providing more focused, relevant assessment and consultation.

A proposed process

Given the convergence of the clinical teacher's and the educational psychologist's roles, we view assessment as a central part of RtI (Fox, Carta, Strain, Dunlap & Hemmeter, 2010). RtI comprises three tiers: Tier 1 consists of universal preventative practices; Tier 2 is targeted to attend to at-risk students' needs, where specific system adaptation may be required; and Tier 3 is an intensive level of individualised and specifically tailored interventions and assistance (Fox et al., 2010; Hughes & Dexter, 2011). As assessment and evaluation occur regularly, when a student is identified as reaching Tier 2 or 3 we suggest a process of assessment and evaluation that follows a systematic procedure, in which there is:
  1. an identification of key capabilities to be measured
  2. selection of key performance indicators to be measured
  3. selection of tests and/or processes (pre-designed; standardised; purpose developed depending on need) to be utilised
  4. administration under appropriate conditions
  5. scoring and describing of data
  6. evaluation of results and other information
  7. reporting and concluding
  8. designing of intervention / accommodation / treatment plan
  9. defining of achievable outcomes
  10. allocation of resources
  11. implementation
  12. post-test and overall evaluation
  13. evaluation of outcomes for students
  14. evaluation of procedures.
A return to point 8 may be necessary; alternatively, the process may cease if successful, or the intervention may be redesigned for maintenance and the stemming of a decline. A return to point 3 may be necessary if the intervention is not successful.

Tier 1 intervention would involve those in the care of and contact with students, typically teachers and teaching support staff. Tier 2 interventions would be the domain of special education teachers, counsellors and therapists, with some involvement from school psychologists. Tier 3 would relate to the school specialists and ESPs with expertise and experience in dealing with complex problems (e.g., enuresis, truancy, eating disorders, behaviour management). The role of the ESP thus becomes less about testing for all, and more about inclusive and intensive intervention at the right moment.

This new role presents a more appropriate approach for educational professionals to take action collectively. It is important, however, to acknowledge that similar approaches have experienced difficulty following their implementation in schools. In the United States of America, the introduction of RtI as a requirement for identifying learning difficulties has blurred the roles of professionals in schools. There can be a lack of clarity in many RtI models as to who is responsible for instructional delivery, intervention selection and implementation, progress monitoring, protocols for movement of students among tiers, and diagnosis (Mastropieri & Scruggs, 2005). Professional development, internships and placements in teaching and ESP training programs should ideally include an RtI focus (Jimerson, Burns & VanDerHeyden, 2007), with specialist teachers trained in case management and ESPs trained in leadership, consultation practices, strategic intervention and evaluation of effectiveness.

It is proposed that a three-tier professional assessment RtI model can address the needs of all students and will provide capabilities for implementing class-wide to individualised interventions to address problems. Psychologists can be specifically trained to implement and evaluate evidence-based interventions, particularly for students moving into, through and out of Tiers 2 and 3. Similarly, as schools become more focused on measuring student outcomes, teachers are encouraged to base instruction on evidence, thereby reducing the gap between research and practice (Burns & Ysseldyke, 2009). General and special education interventions have for some time followed the practice of medicine and psychology in adopting scientific evidence as a basis for selecting teaching practices (Oakley, 2002). However, there is more work to be done. For example, Emmer and Stough (2001) highlighted the importance of special and general education teachers receiving training in educational psychology, specifically classroom and behaviour management (McIntosh, Brown & Borgmeier, 2008), to deal effectively with Tier 1 and 2 problems.

Assessment, diagnosis, intervention and limits of practice

The International Test Commission (ITC; 2001) guidelines outlined the necessary competencies (knowledge, skills, abilities and other personal characteristics) needed by test users; however, they did not specify the professional background or discipline of the test user. The competencies are specified in terms of assessable performance criteria. These criteria provide the basis for developing specifications of the evidence of competence expected from someone seeking qualification as a test administrator. Such competencies cover important issues including professional and ethical standards in testing, the rights of the test taker and other parties involved in the testing process, choice and evaluation of alternative tests, test administration, scoring and interpretation, and report writing and feedback. Both ESPs and SETs need to adhere to the scope and limitations of their practice as determined by their previous training and current supervision.

Further, new approaches in the form of dynamic assessment have been proposed as models incorporating a collaborative approach to testing and intervention. Internships and placements in teaching and school psychology training programs should ideally include an RtI focus with a range of assessment approaches (Saeki, Jimerson, Earhart, Hart, Renshaw, Singh & Stewart, 2011; Rosenfield, 2008). Dynamic assessment is gaining impetus, as it is amenable to qualitative analysis and is inclusive of the views of all concerned in the case management approach, and it has been considered a worthwhile and valuable part of ESP practice (Lawrence & Cahill, 2014).

On testing, the International Test Commission (ITC) guidelines do not contain detailed descriptions of knowledge and skills, although they refer to declarative knowledge as necessary: basic knowledge of psychometric properties and procedures, technical knowledge of a test, knowledge of test measurement and test results, and understanding of relevant theories (International Test Commission, 2001). Thus, we argue that psychological and educational theories and constructs should also be included in preparation programs for ESPs and teachers. The argument could be made for psychologists to do more psychological interpretation/assessment and for special education teachers to do more intervention with their students, although teachers adopting a clinical praxis model are doing more assessment, testing and interpretation as part of their regular duties (Ashton & Roberts, 2006).

To relate this back to the discourse surrounding RtI, the changing roles of ESPs and teachers in RtI systems must be clarified to support effective decision-making (Fletcher, Francis, Morris & Lyon, 2005). The lack of clarity is the main challenge (Mastropieri & Scruggs, 2005), and it could be countered by referring to the ITC guidelines coupled with an RtI model that outlines responsibilities for instructional delivery, intervention selection and implementation, progress monitoring, protocols for movement of students among tiers, and diagnosis (Cavendish, 2013). This could be tied to a collaborative professional philosophy of inclusive instructional consultation (Hatzichristou & Lampropoulou, 2004; Rosenfield, 2008). In line with this approach and the ITC guidelines, pre-service training of both educational psychologists and teachers should emphasise collaborative practices of assessment, diagnosis and intervention in school settings. Training for clinical specialists and special education teachers in test administration, interpretation, evaluation, intervention planning, treatment and reporting is necessary to ensure implementation and treatment integrity. Further training for ESPs in school-wide and regional leadership and consultancy is also necessary.

Inclusion and consultation as guiding principles

Teachers perform an important role in the early identification, referral and management of children and young people with learning disorders, ADHD, and autism spectrum disorders. In a recent study, Moldavsky, Pass and Sayal (2014) found that teachers felt competent to, and preferred to, assess and manage students' difficulties with support from educational colleagues. They further noted that teachers contribute information about behaviour and functioning to the diagnostic assessment of students by specialist healthcare professionals, and that teachers' views are essential to establish whether a student's difficulties are pervasive and cause impairment (e.g. as required by diagnostic criteria for ADHD; American Psychiatric Association, 2013).

Further, regarding the potential of standardised diagnostic interviewing for the diagnosis of selected DSM-5 categories, research has shown that professional and non-professional assessments and judgements do not differ markedly. Non-professional users of the Diagnostic Interview Schedule (DIS) are able to administer the instrument reliably, and their judgements tend to agree with those of professionals (Wittchen, Semler & von Zerssen, 1985; Helzer et al., 1985).

In regard to students with disabilities, ESPs and teachers who "share an ability to assess and diagnose the learning problems of students" can better "link assessment results with instruction" (Smith, 2007) – that is, by sharing a philosophy and operating together. School professionals with assessment and diagnostic skills will be the "key player(s) in education's team approach to assessment and diagnosis" (Hatzichristou & Lampropoulou, 2004; National Clearinghouse for Professions in Special Education, 2000; Rosenfield, 1992). The viability and distinctiveness of the educational diagnostician should be evident (Sutton, Frye & Frawley, 2009). However, the philosophy and history of diagnostics, and its association with deficit models of assessment and intervention, make it questionable to adherents of the strength-based approach that has become a foundation of practice in education. If policy makers and school administrators recognise this convergence of skills and activities, and support the need for collaboration within an RtI framework while respecting the individual expertise of practitioners from different fields, specialists in schools will be better placed to meet the predicted increase in the number of learners likely to require individualised assessment and intervention to remain in mainstream settings.

Multidisciplinary teams

For the practice of assessment in school settings, multidisciplinary teams within an instructional consultation framework should become the norm (Hatzichristou & Lampropoulou, 2004; Rosenfield, 2008). Multidisciplinary teams may function more effectively if dedicated case-managers or leaders coordinate outcomes for students with special needs, drawing on the expertise of relevant professionals and meeting as required with parents and school staff. ESPs contribute to multidisciplinary teams, both with and through teachers, in order to support and improve student outcomes (Maliphant, Cline & Frederickson, 2013). Beyond the provision of evidence-supported approaches to treatment, determined through the use of testing, ESPs must also consider the practical applicability of their assessments and recommendations (Maliphant et al., 2013). Similarly, teachers need to consider the fidelity of planned interventions articulated by ESPs. Collaboration between teachers and other school professionals, particularly ESPs, has been viewed as difficult because of the assumption that each holds a differing standpoint, and because of the often difficult question of who makes the final decisions. Instead, we argue for respectful dialogue and for shared, negotiated roles within schools for the purpose of joint assessment, planning, intervention, implementation of treatment and follow-up evaluation. Experienced and trained leaders with expertise in instructional consultation should lead such teams.

As multidisciplinary groups organise interventions, evaluating the effectiveness of the group and its interventions becomes important: that effectiveness needs to be evidenced in the progress of the student being assisted. While the future work of teachers, ESPs and other education professionals is likely to incorporate technological advances, ultimately it should be fit for purpose "in the service of their clients and profession" (Cavanagh & Grant, 2006). This will affect educational philosophy in terms of the way we think about humanity, teaching and learning, student engagement and achievement. Broadening the means of student observation, diagnosis and teaching will invariably have positive implications for stakeholders. Assessment is now at the forefront of processes for educational improvement, both in policy and in practice. Yet research shows a lack of deep understanding and expertise, among educational professionals, of research-based principles for effective assessment policy and practice (Astin & Antonio, 2012).

Concluding comments

We propose that resolving the problem of demand for assessment and intervention in school settings requires broadening the competencies and skills of teachers to accommodate some assessments previously performed by specialists. Further, ESPs and specialist educators need to redistribute their roles and responsibilities in line with RtI and instructional consultation. The aim is to provide high quality training for specialist teachers to allow for more direct assessment and intervention at Tiers 1 and 2. Correspondingly, this would release ESPs to engage in more consultation and training of school staff, and to focus on assessment and intervention with students presenting with more complex problems, engaging in a variety of intervention activities – particularly moving between Tiers 2 and 3. We have argued that, given the increasing need for varying levels of assessment of students, and because the demands on ESPs in schools outweigh supply and time, a range of professional services needs to be shared depending on training, expertise and supervision. At the direct intervention level, the roles and functions of educational psychological assessment are changing, and a broader view of assessors and assessments needs to be considered.

Various clinical and psychological problems (for example, mood problems such as depression and anxiety; social problems such as aggression and social anxiety; and adjustment and developmental issues) should continue to be differentiated from learning and educational problems. However, clinical and specialist teachers need to be encouraged to assume a greater role in the full range of activities associated with clinical intervention at Tiers 1 and 2. All educators individualising learning for students need to be able to assess the progress of learning within the context of curriculum areas and find solutions to enhance the learning trajectory of each student, ideally based on a strength-based approach. Psychologists practising in schools may need to engage more with psychologically and clinically complex cases through multidisciplinary teams, and lead such teams when necessary. Notwithstanding, ESPs and clinical teachers in schools essentially approach student assessment with the same purpose and skills, albeit from different backgrounds, perspectives and theoretical orientations. We have argued for a more inclusive approach, with its foundations set in the pre-service training of both ESPs and teachers and supported in schools and professional training – particularly incorporating the key notion of teaching as a "clinical practice". It will also require extensive professional development, especially in psychometric testing procedures, for teachers who have not had clinical pre-service training and want to practise clinically within school settings. Correspondingly, current teacher education courses may need to re-orient their curriculum and training focus, and new specialist Masters courses may be necessary.

The professions of educational and developmental psychology and specialist teaching are experiencing new demands as the testing and assessment of students become more frequent and are used to gauge students' competence and capability. These demands present challenges to provide better information on students, collectively and individually, with more meaningful and effective, evidence-based interpretation and intervention to meet the needs of students, their parents and school communities. By providing better pre-service and in-service training on specific elements of assessment, testing, intervention and evaluation, as well as on consultation practices, case management and leadership, a more responsive and informed workforce will operate in schools. Without such training, practices in schools will be limited and service provision will diminish as demand increases. While the focus on basic skills and the need for disciplinary knowledge cannot be ignored, there is a need to work consultatively towards increasing the capacity of school staff to manage assessment and intervention. Further, following the International Test Commission's (2001) encouragement, we acknowledge the value of including a wider range of professionals in assessment. Such inclusive practices should be supported by State and Federal Government policy initiatives, and by funding for training and resources, to ensure the effectiveness of such changes.

References
AITSL (2011). Australian Professional Standards for Teachers. Australian Institute for Teaching and School Leadership. http://www.aitsl.edu.au/australian-professional-standards-for-teachers

Allen, K. A. & Bowles, T. V. (2014). Examining the effects of brief training on the attitudes and future use of behavioral methods by teachers. Behavioral Interventions, 29(1), 62-76. http://dx.doi.org/10.1002/bin.1376

American Psychiatric Association (2013). Diagnostic and statistical manual of mental disorders (5th ed.). Arlington, VA: American Psychiatric Publishing.

Ashton, R. & Roberts, E. (2006). What is valuable and unique about the educational psychologist? Educational Psychology in Practice, 22(2), 111-123. http://dx.doi.org/10.1080/02667360600668204

Astin, A. W. & Antonio, A. L. (2012). Assessment for excellence: The philosophy and practice of assessment and evaluation in higher education. (2nd ed.). Lanham, Maryland: Rowman & Littlefield Publishers & American Council on Education.

Australian Psychological Society (2014). College of Educational and Developmental Psychologists Newsletter, October. APS. http://www.psychology.org.au/Assets/Files/CEDP-newsletter-October2014.pdf

Bowles, T. (2009). Writing about Australian educational and developmental psychology: A 25 year retrospective summary. Australian Educational and Developmental Psychologist, 26(1), 4-19. https://search.informit.com.au/documentSummary;dn=127711366617723;res=IELHSS

Brookhart, S. M. (2011). Educational assessment knowledge and skills for teachers. Educational Measurement: Issues and Practice, 30(1), 3-12. http://dx.doi.org/10.1111/j.1745-3992.2010.00195.x

Burns, K. M. & Ysseldyke, E. J. (2009). Reported prevalence of evidence-based instructional practices in special education. The Journal of Special Education, 43(1), 3-11. http://dx.doi.org/10.1177/0022466908315563

Canadian Psychological Association (2007). Professional practice guidelines for school psychologists in Canada. http://www.cpa.ca/cpasite/UserFiles/Documents/publications/CPA%20Practice%20Guide.pdf

Canning, P. & Strong, C. (1996). Special matters: The report of the review of special education. St. John's, Newfoundland, Canada: Government of Newfoundland and Labrador.

Cavanagh, M. J. & Grant, A. M. (2006). Coaching psychology and the scientist-practitioner model. In D.A. Lane & S. Corrie (Eds.), The modern scientist-practitioner: A guide to practice in psychology. London: Routledge.

Cavendish, W. (2013). Identification of learning disabilities implications of proposed DSM-5 criteria for school-based assessment. Journal of Learning Disabilities, 46(1), 52-57. http://dx.doi.org/10.1177/0022219412464352

Cohen, R. J., Swerdlik, M. E. & Phillips, S. M. (1996). Psychological testing and assessment: An introduction to tests and measurement (3rd ed.). Mountain View, CA: Mayfield Publishing Co.

Corkum, P., French, F. & Dorey, H. (2007). School psychology in Nova Scotia: A survey of current practices and preferred future roles. Canadian Journal of School Psychology, 22(1), 108-120. http://dx.doi.org/10.1177/0829573507301121

Dinham, S. (2013). Connecting clinical teaching practice with instructional leadership. Australian Journal of Education, 57(3), 225-236. http://dx.doi.org/10.1177/0004944113495503

Dinham, S. (2016). Leading learning and teaching. Melbourne: ACER Press.

Dunn, L. M. & Dunn, L. M. (2007). PPVT-4: Peabody picture vocabulary test manual. Circle Pines, MN: American Guidance Service.

Dunsmuir, S. & Kratochwill, T. R. (2013). From research to policy and practice: Perspectives from the UK and the US on psychologists as agents of change. Educational & Child Psychology, 30(3), 60-71. http://shop.bps.org.uk/publications/publication-by-series/educational-and-child-psychology/educational-child-psychology-vol-30-no-3-september-2013-perspectives-on-sir-cyril-burt-and-100-years-of-professional-educational-psychology.html

Emmer, E. T. & Stough, L. M. (2001). Classroom management: A critical part of educational psychology, with implications for teacher education. Educational Psychologist, 36(2), 103-112. http://dx.doi.org/10.1207/S15326985EP3602_5

Evans, S. P., Grahamslaw, L., Henson, L. & Prince, E. (2012). Is the restructured initial professional training in educational psychology fit for purpose? Educational Psychology In Practice, 28(4), 373-393. http://dx.doi.org/10.1080/02667363.2012.725976

Fagan, T. K. & Wise, P. S. (1994). School psychology: Past, present, and future. White Plains, NY: Longman.

Fagan, T. K. & Wise, P. S. (2007). School psychology: Past, present and future (3rd Ed.). Bethesda, MD: National Association of School Psychologists.

Fletcher, J. M., Francis, D. J., Morris, R. D. & Lyon, G. R. (2005). Evidence-based assessment of learning disabilities in children and adolescents. Journal of Clinical Child & Adolescent Psychology, 34(3), 506-522. http://dx.doi.org/10.1207/s15374424jccp3403_7

Fox, L., Carta, J., Strain, P. S., Dunlap, G. & Hemmeter, M. L. (2010). Response to intervention and the pyramid model. Infants and Young Children, 23(1), 3-13. http://dx.doi.org/10.1097/IYC.0b013e3181c816e2

Gilman, R. & Gabriel, S. (2004). Perceptions of school psychological services by education professionals: Results from a multi-state survey pilot study. School Psychology Review, 33(2), 271-286. http://www.nasponline.org/publications/periodicals/spr/volume-33/volume-33-issue-2/perceptions-of-school-psychological-services-by-education-professionals-results-from-a-multi-state-survey-pilot-study

Gresham, F. M. (2004). Current status and future directions of school-based behavioral interventions. School Psychology Review, 33(3), 326-343. http://www.nasponline.org/publications/periodicals/spr/volume-33/volume-33-issue-3/current-status-and-future-directions-of-school-based-behavioral-interventions

Harris, G. E. & Joy, R. M. (2010). Educational psychologists' perspectives on their professional practice in Newfoundland and Labrador. Canadian Journal of School Psychology, 25(2), 205-220. http://dx.doi.org/10.1177/0829573510366726

Hatzichristou, C. & Lampropoulou, A. (2004). The future of school psychology conference: A cross-national approach to service delivery. Journal of Educational and Psychological Consultation, 15(3-4), 313-333. http://dx.doi.org/10.1080/10474412.2004.9669520

Helzer, J. E., Robins, L. N., McEvoy, M. A., Spitznagle, E. L., Stoltzman, R. K., Farmer, A. & Brockington, I. F. (1985). A comparison of clinical and diagnostic interview schedule diagnoses: Physician reexamination of lay-interviewed cases in the general population. Archives of General Psychiatry, 42(7), 657-666. http://dx.doi.org/10.1001/archpsyc.1985.01790300019003

Hughes, C. A. & Dexter, D. D. (2011). Response to intervention: A research-based summary. Theory into Practice, 50(1), 4-11. http://dx.doi.org/10.1080/00405841.2011.534909

Hutton, J. B., Dubes, R. M. & Muir, S. (1992). Assessment practices of school psychologists: Ten years later. School Psychology Review, 21, 271-284. http://www.nasponline.org/publications/periodicals/spr/volume-21/volume-21-issue-2/assessment-practices-of-school-psychologists-ten-years-later

Ingram, R. (2013). Interpretation of children's views by educational psychologists: Dilemmas and solutions. Educational Psychology in Practice, 29(4), 335-346. http://dx.doi.org/10.1080/02667363.2013.841127

International Test Commission (2001). International guidelines for test use. International Journal of Testing, 1(2), 93-114. http://dx.doi.org/10.1207/S15327574IJT0102_1

Jimerson, S. R., Burns, M. K. & VanDerHeyden, A. M. (Eds) (2007). Handbook of response to intervention: The science and practice of assessment and intervention. New York: Springer.

Jimerson, S. R., Sharkey, J. D., Nyborg, V. & Furlong, M. J. (2004). Strength-based assessment and school psychology: A summary and synthesis. The California School Psychologist, 9(1), 9-19. http://dx.doi.org/10.1007/BF03340903

Johnston, P. & Costello, P. (2005). Principles for literacy assessment. Reading Research Quarterly, 40(2), 256-267. http://dx.doi.org/10.1598/RRQ.40.2.6

Jordan, J. J., Hindes, Y. L. & Saklofske, D. H. (2009). School psychology in Canada: A survey of roles and functions, challenges and aspirations. Canadian Journal of School Psychology, 24(3), 245-264. http://dx.doi.org/10.1177/0829573509338614

Keamy, R. K., Nicholas, H., Mahar, S. & Herrick, C. (2007). Personalising education: From research to policy and practice. Victorian Department of Education and Early Childhood Development, Office for Education Policy and Innovation, Paper No. 11. http://www.eduweb.vic.gov.au/edulibrary/public/publ/research/publ/personalising-education-report.pdf

Kettler, R. J., Elliott, S. N., Davies, M. & Griffin, P. (2012). Testing a multi-stage screening system: Predicting performance on Australia's national achievement test using teachers' ratings of academic and social behaviours. School Psychology International, 33(1), 93-111. http://dx.doi.org/10.1177/0143034311403036

Lawrence, N. & Cahill, S. (2014). The impact of dynamic assessment: An exploration of the views of children, parents and teachers. British Journal of Special Education, 41(2), 191-211. http://dx.doi.org/10.1111/1467-8578.12060

Maliphant, R., Cline, T. & Frederickson, N. (2013). Educational psychology practice and training: The legacy of Burt's appointment with the London County Council. Educational and Child Psychology, 30(3), 46-59. http://shop.bps.org.uk/publications/educational-child-psychology-vol-30-no-3-september-2013-perspectives-on-sir-cyril-burt-and-100-years-of-professional-educational-psychology.html

Mastropieri, M. A. & Scruggs, T. E. (2005). Feasibility and consequences of response to intervention: Examination of the issues and scientific evidence as a model for the identification of individuals with learning disabilities. Journal of Learning Disabilities, 38(6), 525-531. http://dx.doi.org/10.1177/00222194050380060801

McIntosh, K., Brown, J. A. & Borgmeier, C. J. (2008). Validity of functional behavior assessment within a response to intervention framework: Evidence, recommended practice, and future directions. Assessment for Effective Intervention, 34(1), 6-14. http://dx.doi.org/10.1177/1534508408314096

McLean Davies, L., Anderson, M., Deans, J., Dinham, S., Griffin, P., Kameniar, B., Page, J., Reid, C., Rickards, F., Tayler, C. & Tyler, D. (2013). Masterly preparation: Embedding clinical practice in a graduate pre-service teacher education programme. Journal of Education for Teaching, 39(1), 93-106. http://www.tandfonline.com/doi/abs/10.1080/02607476.2012.733193?journalCode=cjet20

Moldavsky, M., Pass, S. & Sayal, K. (2014). Primary school teachers' attitudes about children with attention deficit/hyperactivity disorder and the role of pharmacological treatment. Clinical Child Psychology and Psychiatry, 19(2), 202-216. http://dx.doi.org/10.1177/1359104513485083

NAPLAN (2013). National Assessment Program, Literacy and Numeracy. http://www.nap.edu.au/naplan/naplan.html

National Clearinghouse for Professions in Special Education (NCPSE) (2000, Spring). Educational diagnostician: Making a difference in the lives of students with special needs. [not found 6 Mar 2016] http://www.cec.sped.org/AM/Template.cfm?Section=Job_

New South Wales Department of Education and Communities (2011). The psychological and wellbeing needs of children and young people: Models of effective practice in educational settings: Final report. Urbis Pty Ltd. http://www.det.nsw.edu.au/media/downloads/about-us/statistics-and-research/public-reviews-and-enquiries/school-counselling-services-review/models-of-effective-practice.pdf

North Carolina School Psychology Association (2005). Credentials for administration, scoring, and interpretation of standardization achievement tests for eligibility determination. [not found 6 Mar 2016] http://ncschoolpsy.org/NCSPAPositionPaperonCredentialingrevised22Feb06.doc

Oakley, A. (2002). Social science and evidence-based everything: The case of education. Educational Review, 54(3), 277-286. http://dx.doi.org/10.1080/0013191022000016329

Psychology Board of Australia (2015). Public consultation on proposed amendments to the Provisional registration standard and the Guidelines for the 4+2 internship program. Current Consultations of the Psychology Board of Australia. http://www.psychologyboard.gov.au/News/Past-Consultations.aspx

Reschly, D. J. (2000). The present and future status of school psychology in the United States. School Psychology Review, 29(4), 507-522. http://www.nasponline.org/publications/periodicals/spr/volume-29/volume-29-issue-4/the-present-and-future-status-of-school-psychology-in-the-united-states

Reschly, J. D. (1997). Utility of individual ability measures and public policy choices for the 21st century. School Psychology Review, 26(2), 234-241. http://www.nasponline.org/publications/periodicals/spr/volume-26/volume-26-issue-2/utility-of-individual-ability-measures-and-public-policy-choices-for-the-21st-century

Reynolds, M. (1999). Standards and professional practice: The TTA and initial teacher training. British Journal of Educational Studies, 47(3), 247-260. http://dx.doi.org/10.1111/1467-8527.00117

Rosenfield, S. (2008). Best practices in instructional consultation and instructional consultation teams. In A. Thomas & P. Harrison (Eds), Best Practices in School Psychology VI, 1645-1660. Bethesda, MD: National Association of School Psychologists.

Rosenfield, S. (1992). Developing school-based consultation teams: A design for organizational change. School Psychology Quarterly, 7(1), 27-46. http://psycnet.apa.org/doi/10.1037/h0088248

Saeki, E., Jimerson, S. R., Earhart, J., Hart, S. R., Renshaw, T., Singh, R. D. & Stewart, K. (2011). Response to intervention (RtI) in the social, emotional, and behavioral domains: Current challenges and emerging possibilities. Contemporary School Psychology, 15, 43-52. http://files.eric.ed.gov/fulltext/EJ934705.pdf

Salvia, J. & Ysseldyke, J. E. (2004). Assessment in special and inclusive education (9th ed.). Boston, MA: Houghton Mifflin.

Scottish Executive (2006). Review of provision of educational psychology services in Scotland. The Currie Report, Edinburgh: Scottish Executive HMIE. http://www.gov.scot/Publications/2002/02/10701/File-1

Smith, D. (2007). Academic achievement: Assessment in Tiers 2 and 3. Presentation at the annual conference of the Council for Educational Diagnostic Services, New Orleans, LA.

Sutton, J. P., Frye, E. M. & Frawley, P. A. (2009). Nationally Certified Educational Diagnostician (NCED): The professional credential for assessment specialists. Assessment for Effective Intervention, 35(1), 17-23. http://dx.doi.org/10.1177/1534508408325579

Sutton, J. P. & Letendre, L. (2000). Preserving the testing rights of diagnosticians: FACT to the rescue! Presentation at the annual conference of the Council for Educational Diagnostic Services, November, San Diego, CA.

Tayler, C. (2000). Monitoring young children's literacy learning. In C. Barratt-Pugh & M. Rohl (Eds.), Literacy in the early years (pp. 197-222). Crows Nest: Allen & Unwin.

Wechsler, D. (2005). Wechsler Individual Achievement Test 2nd Edition (WIAT II). London: The Psychological Corporation.

Wiig, E. H., Semel, E. & Secord, W. A. (2013). Clinical evaluation of language fundamentals, 5th edition - Screening test (CELF-5 screening test). Toronto, Canada: The Psychological Corporation.

Wiliam, D., Lee, C., Harrison, C. & Black, P. (2004). Teachers developing assessment for learning: Impact on student achievement. Assessment in Education, 11(1), 49-65. http://dx.doi.org/10.1080/0969594042000208994

Wittchen, H. U., Semler, G. & von Zerssen, D. (1985). A comparison of two diagnostic methods: Clinical ICD diagnoses vs DSM-III and research diagnostic criteria using the Diagnostic Interview Schedule (version 2). Archives of General Psychiatry, 42(7), 677-684. http://dx.doi.org/10.1001/archpsyc.1985.01790300045005

Woodcock, R. W., McGrew, K. S. & Mather, N. (2014). Woodcock-Johnson IV Test. Itasca, IL: Riverside Publishing Company.

Zweback, S. & Mortenson, B. P. (2002). State certification/licensure standards for educational diagnosticians: A national review. Education, 123(2), 370-374, 379.

Authors: Dr Terry Bowles is a Senior Lecturer in Educational Psychology, Melbourne Graduate School of Education, The University of Melbourne.
Email: tbowles@unimelb.edu.au

Associate Professor Janet Scull, Faculty of Education, Monash University.
Email: janet.scull@monash.edu

Professor John Hattie, Melbourne Graduate School of Education, The University of Melbourne.
Email: jhattie@unimelb.edu.au

Associate Professor Janet Clinton, Melbourne Graduate School of Education, The University of Melbourne.
Email: jclinton@unimelb.edu.au

Sr Dr Geraldine Larkins is a senior lecturer in Education Studies/Professional Experience in the Faculty of Education and Arts at the Australian Catholic University's Melbourne Campus.
Email: geraldine.larkins@acu.edu.au

Vincent Cicconi, Melbourne Graduate School of Education, The University of Melbourne.
Email: vincent.cicconi@unimelb.edu.au

Doreen Kumar, Melbourne Graduate School of Education, The University of Melbourne.
Email: doreen.kumar@unimelb.edu.au

Jessica L. Arnup, Melbourne Graduate School of Education, The University of Melbourne.

Please cite as: Bowles, T., Scull, J., Hattie, J., Clinton, J., Larkins, G., Cicconi, V., Kumar, D. & Arnup, J. L. (2016). Conducting psychological assessments in schools: Adapting for converging skills and expanding knowledge. Issues in Educational Research, 26(1), 10-28. http://www.iier.org.au/iier26/bowles.html
