Building resiliency: Introducing the pre-service Special Educator Efficacy Scale
Mary M. Lombardo-Graves
University of Evansville, USA
The goal of this study was to examine existing teaching self-efficacy instruments for an appropriate measure for pre-service special education candidates. As the review of literature for this study revealed, there were very few self-efficacy instruments specific to special education, and these focused on specific populations and settings. At the time this study was prepared, no teaching self-efficacy scale had been designed specifically to measure initial special education teaching self-efficacy during teacher preparation. The Pre-Service Special Educator Efficacy Scale (SEES-I) instrument was created using research-based guidelines and Council for Exceptional Children (CEC) standards for the initial skill set of novice special educators. This study makes a contribution to the field of special education and teaching self-efficacy research by developing and analysing a new instrument to measure pre-service special education teaching self-efficacy.
These statistics and persistent gaps in student achievement nationwide, particularly among students with disabilities, have prompted further investigation into the preparation and retention of special education teachers (U.S. Department of Education, National Center for Education Statistics, 2016). To meet the above-mentioned challenges, several theories have been explored to improve teacher retention and effectiveness. Teacher self-efficacy, grounded in Bandura's (1997) social cognitive theory, has been researched extensively (Coladarci & Brenton, 2012; Lee, Patterson & Vega, 2011; Pendergast, Garvis & Keogh, 2011; Shippen, Flores, Crites, Patterson, Ramsey, Houchins & Jolivette, 2011; Skaalvik & Skaalvik, 2007; Woolfolk Hoy, 2007). High teacher self-efficacy has been considered a predictor of teachers who may be better able to deal with the challenges of the first years of teaching. Teacher self-efficacy is also considered an indicator of teacher motivation, resiliency, and effectiveness in the classroom (Lee, Patterson & Vega, 2011; Pendergast, Garvis & Keogh, 2011). High levels of teacher self-efficacy are associated with confidence in meeting student needs, improving student motivation, and higher levels of student achievement (Woolfolk Hoy, 2007). The ability of individuals to influence the world around them is strongly linked to the belief in their ability to bring about change. A teacher's sense of self-efficacy has also been associated with personal goal setting and the persistence to meet these goals.
The significance of this study has implications for all stakeholders, including pre-service teacher educators, special educators, administrators, parents, and students with disabilities. At the time of this study, the research on teacher self-efficacy had shown limited application to special educators, as most scales addressed general education and content-specific teaching self-efficacy. Researchers addressing special education teaching self-efficacy have focused on specific settings or categories of disabilities (Hartmann, 2012; Ruble, Toland, Birdwhistell, McGrew & Usher, 2013; Sharma, Loreman & Forlin, 2012; Taylor, 2012; Walls, 2007). The existing teacher self-efficacy instrumentation does not address general roles and responsibilities of the special educator (Brownell, Ross, Colon & McCallum, 2005; Duffy & Forgan, 2005; Washburn-Moses, 2010). The researcher sought to address the need for a teacher self-efficacy measurement instrument specific to the initial skill set required by CEC for special educators entering the profession (Council for Exceptional Children, 2013).
The term teacher self-efficacy was originally conceived by Research and Development (RAND) Corporation researchers using two items from Rotter's (1966) locus of control instrument. Researchers conducting studies for the RAND Corporation created a scale for measuring a teaching self-efficacy score. This instrument identified two dimensions related to teacher self-efficacy. Personal teaching efficacy (PTE) referred to teachers' personal beliefs in their ability to produce desired results. General teaching efficacy (GTE) was defined as a teacher's effectiveness and power of teaching to produce results among students in the classroom (Skaalvik & Skaalvik, 2007).
Several versions of the Teacher Efficacy Scale (TES) were developed in an ongoing effort to identify the most effective way to measure teacher self-efficacy (Bandura, 1997; Schmitz & Schwarzer, 2000; Tschannen-Moran & Woolfolk Hoy, 2007); however, Bandura's (1997) work was the foundation for the development of many teacher self-efficacy measurement instruments and continued research. Bandura's Teacher Self-Efficacy Scale is based on the belief that a teacher's efficacy beliefs are not consistent across content areas or teaching tasks. The scale was developed to include six dimensions in the measurement of teaching efficacy, including: Efficacy to Influence Decision Making, Instructional Self-Efficacy, Disciplinary Self-Efficacy, Efficacy to Enlist Parental Involvement, Efficacy to Enlist Community Involvement, and Efficacy to Create a Positive School Climate. The 100-point confidence scale ranged from (0) 'Cannot do at all' to (100) 'Highly certain can do'.
The Teachers' Sense of Efficacy Scale (TSES) was developed at The Ohio State University, and a factor analysis identified three dimensions of teacher efficacy (Tschannen-Moran & Woolfolk Hoy, 2007). These dimensions were student engagement, instructional strategies, and classroom management. The respondents were asked to rate the 24 items on a nine-point scale in terms of how much they believed they could contribute to the situations presented. The responses ranged from (1) 'Nothing' to (9) 'A great deal'. This scale has been used internationally by researchers, with translations in Turkish, Chinese, Arabic, Greek, and Portuguese. Tschannen-Moran and Woolfolk Hoy (2007) provided information on construct validity and reliabilities, and the analysis of the instrument showed correlations among the variable mean scores, standard deviations, and Cronbach's alphas (Cronbach, 1982; Field, 2013).
Schmitz and Schwarzer (2000) developed another instrument to measure teacher self-efficacy. The researchers identified four specific areas within the teaching profession they believed to be of great importance to effective teaching. These areas were defined as professional development, accomplishments, interactions, and the ability to cope with stress. The response format required respondents to rate efficacy beliefs ranging from (1) 'Not true at all' to (4) 'Exactly true.' The ten items were constructed using Bandura's (1997) guidelines based on social cognitive theory. The researchers tested the instrument for validity and test-retest reliability over two trial years. The results indicated the more specific instrument was a reliable measure and yielded higher associations with personal attitudes toward teaching than general efficacy instruments.
The Norwegian Teacher Self-Efficacy Scale (Skaalvik & Skaalvik, 2007) was adapted from the TSES (Tschannen-Moran & Woolfolk Hoy, 2007) to study the effects of self-efficacy on teacher burn-out. This multi-dimensional scale consisted of 24 items and also followed Bandura's (1997) guidelines for survey item creation. The dimensions measured teachers' self-efficacy across instruction, differentiating for individual student needs, motivating students, maintaining discipline, collaborating with colleagues and parents, and coping with change. Each dimension contained four items with responses based on a seven-point scale. Analysis of the instrument showed correlations among the variable mean scores, standard deviations, and Cronbach's alphas.
Bandura's (1997) guidelines for item construction were a common theme across the existing scales. The items included in the scales were specific and addressed the domains of functioning being studied. The suggested "I can" phrasing of the items was used. The majority of the scale construction was based on the suggested format of a single interval ranging from zero to ten. Bandura (1997) also suggested pre-testing all of the items in the instrument. Data from pilot survey administration were analysed, and the studies provided evidence of validity and reliability. The above-mentioned measures of general teaching self-efficacy led to the examination of teaching self-efficacy for specific populations and content areas and informed the creation of a teaching self-efficacy scale for special educators.
Additional studies addressed teaching self-efficacy for specific categories of disabilities. A preliminary study of the Autism Self-Efficacy Scale for Teachers (ASSET) analysed a pilot administration of the tool (Ruble et al., 2013). The evaluation of the new measure indicated one factor and responses with internal consistency and construct validity. The Teacher Inventory (Paneque & Barbetta, 2006) was developed to measure self-efficacy beliefs of special educators working with English language learners with disabilities. This instrument was again designed using Bandura's (1997) guidelines and contained 20 items based on a nine-point scale as well as open-ended questions. The results indicated higher levels of efficacy were associated with the teachers' proficiency in the students' native language.
Along with these self-efficacy instruments designed to measure teaching efficacy for specific areas of disability, instruments were being developed to measure teaching self-efficacy in specific special education settings and grade levels. Taylor (2012) used case study research including the use of a teaching self-efficacy scale to measure self-efficacy for special education teachers at the secondary level. Two studies focused on special education teacher self-efficacy for implementing inclusive practices (Sharma, Loreman & Forlin, 2012; Walls, 2007). Walls (2007) created the Teacher Efficacy for the Inclusion of Young Children with Disabilities (TEIYD). This research compared self-efficacy for the inclusion setting among pre-service teachers in two teacher preparation programs, early childhood and early childhood special education. The researcher provided a reliability and validity analysis, and the findings indicated no statistically significant differences between groups.
The Teacher Efficacy Scale was modified for use in two studies to measure self-efficacy among special educators in the resource setting and at the elementary and secondary level (Coladarci & Brenton, 2012; Shippen, et al., 2011). A factor analysis was conducted to test the validity of the revised instrument. The items were modified by adding "with disabilities" to the statements regarding students. It was reported that the factor analysis revealed comparable results to the original scale designed for regular educators. The study conducted by Coladarci and Brenton (2012) also examined the effects of teacher supervision on self-efficacy, and the findings revealed a significant positive relationship between the variables.
Lee, Patterson and Vega (2011) also conducted research that measured teacher self-efficacy based on the quality of the content, support, and resources in the preparation of special education teachers. The survey tool measured both personal teaching efficacy (PTE) and general teaching efficacy (GTE). The PTE was defined as the level of personal confidence in the ability to teach, while GTE referred to the individuals' feeling of power within teaching. The researchers investigated the preparation of pre-service teachers participating in an alternative certification program in the state of California to address a special education teacher shortage. The participants (N=154) were all novice special education teachers holding alternative credentials.
Lee, Patterson and Vega (2011) examined the correlation between the components of the special education teacher preparation alternative certification program and perceived teaching efficacy. The results indicated that the PTE and GTE were unrelated factors. The respondents (N=92) indicated higher levels of PTE compared to GTE. They also reported high levels of support during teacher preparation and diminished support when they entered the field due to limited contact with special education mentors. The questions regarding challenges to being an effective teacher revealed three major themes: working conditions, support, and student issues. The working condition issues were related to a lack of resources, planning time, and large caseloads. The respondents also reported a lack of support from administrators and limited access to special education mentors. There were also concerns over dealing with severe student discipline challenges. Teachers dealing with students in need of behaviour interventions and supports had limited access to supplementary personnel and services. The researchers included detailed tables to illustrate the various categories, demographics, and descriptive statistics; however, they did not offer evidence of validity or reliability testing for the instrumentation used. The vast majority of research on special education teaching self-efficacy in the above-mentioned studies was related to in-service teaching, and only one examined self-efficacy among pre-service teachers.
Pendergast, Garvis and Keogh (2011) conducted a study involving pre-service teachers across three Graduate Diploma of Education programs: Early Childhood, Primary and Secondary. The researchers utilised the Teacher Sense of Efficacy Scale (Tschannen-Moran & Woolfolk Hoy, 2007) to measure self-efficacy during the first week of the first semester, prior to any classroom experience, and again at the end of the final semester after completing a seven-week practical experience. The scale consisted of three subscales and measured self-efficacy in instructional strategies, classroom management, and student engagement. This particular study focused solely on the teacher preparation program and its relationship to pre-service teacher perceptions of self-efficacy. The scale consisted of 24 items based on a nine-point continuum, with nine being the highest level of self-efficacy. The findings revealed a decline in mean teacher self-efficacy, from 7.40 (SD=0.77) on the first survey to 6.98 (SD=1.29) on the second. Although the findings were surprising, the discussion indicated the decline may have resulted from the candidates' beliefs changing once they had actually experienced the reality of classroom teaching.
Another example of a quantitative study at the pre-service level focused on a specific mentoring intervention for teachers of primary science (Hudson & Skamp, 2003). This study utilised a two-group post-test only design. There was a group of 60 final-year pre-service teachers (control group) and a second group of 12 final-year pre-service teachers (intervention group). The intervention group was provided with a four-week intensive mentoring intervention on the teaching of primary science. A five-factor self-efficacy survey was then administered to both groups at the end of the semester. The findings suggested evidence of improved teaching practices of the mentees included in the study. The researchers asserted a specific and intensive mentoring intervention may be effective in improving teacher readiness even when administered over a relatively short period of time. Some limitations to the study were a relatively small sample size and a four-week period during one academic semester.
The majority of the research conducted in the development of self-efficacy during teacher preparation utilised qualitative phenomenological case studies, which included interviews, observations, focus groups, artefacts, and reflective journaling. There were relatively few quantitative studies focused specifically on self-efficacy beliefs of pre-service special education teachers. The need for quality program design in special education and specialised training has evolved from the passage of Federal mandates in an age of accountability (U.S. Department of Education, 2016). Teacher education programs are responsible for the development of pre-service teacher identity and self-efficacy. A high level of self-efficacy at the pre-service teacher level translates into better resiliencies among novice teachers and effective teaching skills (Pendergast, et al., 2011).
The study included pre-service special education teacher candidates enrolled in two accredited special education teacher preparation programs, one at each university. The participants were undergraduate candidates seeking initial licensure in special education from one private and one public institution. They were enrolled in at least one of the nine sections of special education coursework with an associated, semester-long clinical internship or student teaching practicum. Participation also required the completion of at least one clinical internship. This criterion ensured that the participants had some experience in the classroom and could provide responses based on practical experience and exposure to realistic roles of special educators. The candidates ranged in age from 19 to 22 years and were in the second to fourth year of a four-year teacher preparation program.
The preliminary work of the instrument construction consisted of pilot questionnaires and open-ended interviews (Bandura, 1997). The documentation and analysis of the responses from pre-service special education candidates provided information on the tasks, domains, and challenges to efficacy. The candidates identified areas they felt they needed to develop to improve teaching self-efficacy and readiness for the first year of teaching. The data and information from research literature were used to develop the survey items. Then the pilot instrument was reviewed by five scholars in various fields of study including teaching self-efficacy measurement, special education, and quantitative research.
The guidelines for item construction included the avoidance of non-specific examples. The items were created to be as specific as possible, to avoid ambiguity, and to be tailored to the particular domain of functioning being studied (Bandura, 1997). Because self-efficacy is perceived as self-reported capabilities, the suggested phrasing of the items included "I can" statements rather than statements of intent such as "I will." Bandura also offered recommendations for a 100-point scale constructed in ten-unit intervals ranging from (0) 'Cannot do' to (100) 'Highly certain can do', or a simpler format developed on a single interval ranging from zero to ten.
Bandura (1997) strongly suggested pre-testing all of the items in the instrument. Details from the pilot survey are included in the next section. Items for this study that were too general were re-written or removed. Items that appeared to test similar dimensions of special educator self-efficacy were combined within the instrument scoring. The items were designed to measure efficacy in specific roles and responsibilities of a special educator's initial teaching skill set (Council for Exceptional Children, 2013). When the pilot analysis revealed items in which the maximum efficacy level was selected by the test respondents, the items were adapted to increase the difficulty level of the task. Cronbach's (1982) alpha was used to assess the internal reliability of the scores.
Another consideration in creating the efficacy scale for this study was the response bias possible with self-assessment instruments. Administration instructions were utilised as a tool to reduce the occurrence of response bias (Bandura, 1997). The instrument was completed privately with identification coding rather than respondent names and was administered anonymously through a computerised data collection system. The researcher included a statement of anonymity and the purpose of the research to encourage frankness in responses. The importance of the participants' contributions to the field of study was emphasised. Bandura (1997) recommended a very general, non-descriptive instrument title to avoid any influence on item responses. The instrument included sample items to familiarise the respondents with the measurement scale being used prior to completing the actual efficacy items being studied.
The survey instrument was created using recommended guidelines and consisted of 23 numerical scale (0-10) response items (Woolfolk Hoy, 2007). Discussion and interviews with pre-service special education teacher candidates were used to identify the domains of special education pre-service teacher efficacy and the challenges that impeded the perceived levels of pre-service teaching efficacy. Candidates revealed areas of professional preparation they believed needed further development prior to the first year of teaching. Input from pre-service candidates was compared to initial teaching standards for special educators in the USA (Council for Exceptional Children, 2013) and used to create survey items for the Pre-Service Special Educators Efficacy Scale (Appendix A) employed in this study.
Validity of the scores resulting from the SEES-I instrument was addressed through a factor analysis. The analysis was conducted on pilot scales to determine how pre-service special educators respond to items and identify consistent factors. A longer scale was developed for pre-service teachers, as previous research indicated less validity in the factor structure among these respondents (Woolfolk Hoy, 2000). The instrument items were aligned with current USA standards (Council for Exceptional Children, 2013) for added validity. The language used to construct survey items was consistent with descriptors provided in recent United States CEC Initial Level Special Educator Preparation Standards.
Table 1: Descriptive statistics for the pilot survey items

| I can ... | Mean | Std. Dev. |
|---|---|---|
| support struggling students | 7.8519 | 1.69670 |
| plan for ELL | 5.4444 | 2.35137 |
| motivate reluctant learners | 7.0000 | 1.92847 |
| promote cooperative learning | 7.8889 | 1.55079 |
| redirect disruptive students | 7.3333 | 1.59026 |
| use a variety of assessments | 7.2963 | 1.78454 |
| keep students engaged | 7.5556 | 1.16775 |
| record frequency data | 6.8519 | 2.19378 |
| facilitate IEP meetings | 5.4074 | 2.87671 |
| use data to create benchmarks and goals | 6.9630 | 2.22371 |
| collaborate with IEP team members | 7.3333 | 2.55841 |
| complete IEP paperwork | 6.3704 | 2.79659 |
| use a variety of strategies | 7.6667 | 1.80907 |
| create transition plans | 6.1111 | 2.79906 |
| use assistive technology | 7.1111 | 1.93489 |
| aware of SPED law | 6.8519 | 1.82348 |
| develop supportive partnerships with families | 7.8148 | 1.78916 |

Note: Survey responses are based on a scale ranging from (0) Strongly disagree to (10) Strongly agree.
Construct validity is an ongoing process and is grounded in theory and hypothesis testing (Bandura, 1997). A principal axis factor analysis was conducted on the 23-item SEES instrument to assess the dimensionality of the scale. The goal of the instrument was to remain true to the intended measure in an effort to represent face validity. The pilot administration of the instrument indicated a mean completion time of 5.4 minutes. Table 1 presents the descriptive statistics. An initial data screening revealed no missing values, a statistically significant Bartlett's measure of sphericity (p < .001), and a determinant of the matrix large enough to suggest there were no multicollinearity problems within the data set (Field, 2013). The Kaiser-Meyer-Olkin statistic (KMO = .702) falls above the minimum criterion of .5, which indicated an adequate sample size for factor analysis with over 10 cases per variable.
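The screening statistics reported above (Bartlett's test of sphericity and the KMO measure of sampling adequacy) follow standard formulas. The sketch below illustrates the computation; the simulated responses, sample size, and seed are illustrative assumptions, not the study's data.

```python
import numpy as np
from scipy import stats

def bartlett_sphericity(data):
    """Bartlett's test that the item correlation matrix is an identity
    matrix; a significant result supports factorability."""
    n, p = data.shape
    corr = np.corrcoef(data, rowvar=False)
    chi_square = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(corr))
    df = p * (p - 1) / 2
    return chi_square, stats.chi2.sf(chi_square, df)

def kmo(data):
    """Kaiser-Meyer-Olkin measure of sampling adequacy (0 to 1);
    values above .5 are conventionally considered adequate."""
    corr = np.corrcoef(data, rowvar=False)
    inv = np.linalg.inv(corr)
    # Partial correlations derived from the inverted (anti-image) matrix
    scale = np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
    partial = -inv / scale
    np.fill_diagonal(corr, 0.0)
    np.fill_diagonal(partial, 0.0)
    return (corr ** 2).sum() / ((corr ** 2).sum() + (partial ** 2).sum())

# Simulated data: 243 respondents, 23 items sharing one common factor
rng = np.random.default_rng(0)
common = rng.normal(size=(243, 1))
responses = common + rng.normal(size=(243, 23))

chi_square, p_value = bartlett_sphericity(responses)
adequacy = kmo(responses)
```

With genuinely correlated items, Bartlett's test is significant and the KMO statistic exceeds the .5 criterion, mirroring the screening outcome reported for the pilot data.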
The item correlation matrix indicated correlation coefficients that were not excessively large, so the researcher did not choose to eliminate any items as a result of the pilot study analysis. Both orthogonal and oblique rotations were employed for a comparison of correlation coefficients between factors (Field, 2013). The rotation results indicated correlations between three extracted factors, and the constructs being measured appeared to be interrelated. The researcher examined the item clusters with variables loading highly (standardised loadings > .4) and identified patterns associated with scale items among three factors that accounted for approximately 70% of the variance. The scree plot revealed a break and levelling off after the third component. A comparison of eigenvalues from the exploratory factor analysis with the criterion values from the parallel analysis supported the researcher's decision to retain only three factors (see Table 2). The three-factor analysis is represented in Table 3 with subscales identified and labelled.
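Parallel analysis, used above to decide how many factors to retain, compares the observed eigenvalues against mean eigenvalues from random data of the same dimensions. A minimal sketch follows; the three-factor simulated data set is an illustrative assumption, not the study's responses.

```python
import numpy as np

def parallel_analysis(data, n_iterations=100, seed=0):
    """Horn's parallel analysis: retain factors whose observed
    eigenvalues exceed the mean eigenvalues of random data with the
    same number of respondents and items."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    observed = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    random_eigs = np.empty((n_iterations, p))
    for i in range(n_iterations):
        random_corr = np.corrcoef(rng.normal(size=(n, p)), rowvar=False)
        random_eigs[i] = np.sort(np.linalg.eigvalsh(random_corr))[::-1]
    criterion = random_eigs.mean(axis=0)  # the "criterion values"
    n_retain = int((observed > criterion).sum())
    return observed, criterion, n_retain

# Simulated responses: 243 respondents, 23 items driven by three factors
rng = np.random.default_rng(1)
factor_scores = rng.normal(size=(243, 3))
loadings = np.zeros((3, 23))
loadings[0, :8] = loadings[1, 8:16] = loadings[2, 16:] = 1.0
responses = factor_scores @ loadings + 0.5 * rng.normal(size=(243, 23))

observed, criterion, n_retain = parallel_analysis(responses)
```

Only eigenvalues larger than what random noise would produce count toward retained factors, which is why the comparison in Table 2 justifies stopping at three.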
Table 2: Comparison of eigenvalues from the exploratory factor analysis with criterion values from the parallel analysis
Table 3: Factor loadings for the three-factor solution

| I can ... | Learner development and learner differences | Instruction and strategies | Curriculum content and planning |
|---|---|---|---|
| complete IEP paperwork | .823 | | |
| facilitate IEP meetings | .790 | | |
| collaborate with IEP team members | .702 | | |
| be aware of SPED (special education) law | .598 | | |
| use data to create benchmarks and goals | .597 | | |
| use a variety of assessments | .507 | | |
| create transition plans | .442 | | |
| develop supportive partnerships with families | .397 | | |
| support struggling students | | .873 | |
| redirect disruptive students | | .830 | |
| motivate reluctant learners | | .820 | |
| promote cooperative learning | | .652 | |
| plan for ELL | | .648 | |
| use a variety of strategies | | | -.860 |
| use assistive technology | | | -.682 |
| keep students engaged | | | -.642 |
| record frequency data | | | -.552 |

Note: Rotation method: Varimax with Kaiser normalisation
The pattern matrix was examined to identify themes and label subscales to align with these standards. Table 4 includes a summary of each subscale with corresponding scale items.
A reliability analysis was conducted to assess the reliability of the SEES items. The reliability analysis revealed the value of Cronbach's alpha (Subscale 1: α = .954; Subscale 2: α = .895; Subscale 3: α = .923), which indicated the reliability of the scores obtained from the SEES instrument was good (Cronbach, 1982). The values of Cronbach's alpha when specific items were deleted did not substantially increase the overall alpha value. The researcher determined that it was not necessary to remove items to improve reliability.
Table 4: Subscales with corresponding scale items

| Subscale | Item | Statement |
|---|---|---|
| Learner development and learner differences | 7 | I can create a behaviour intervention plan (BIP). |
| | 8 | I can facilitate the inclusion of my students in general education settings by collaborating with general education teachers. |
| | 11 | I can use a variety of assessments to determine the academic needs of my students. |
| | 14 | I can facilitate an individualised education program (IEP) annual review meeting. |
| | 15 | I can use assessment data to create short term behavioural objectives/benchmarks. |
| | 16 | I can collaborate with all members of the IEP team to develop appropriate individualised annual goals. |
| | 18 | I can complete the required IEP paperwork. |
| | 20 | I can create a transition plan for students with disabilities as they prepare for secondary education. |
| | 22 | I am aware of special education mandates, policies, and procedures. |
| | 23 | I can develop supportive partnerships with families. |
| Instruction and strategies | 1 | I can support struggling students. |
| | 2 | I can plan instruction to address the linguistic and cultural characteristics of English language learners (ELL) with disabilities. |
| | 3 | I can motivate reluctant learners. |
| | 4 | I can promote cooperative learning. |
| | 5 | I can overcome adverse situations that impede student learning. |
| | 9 | I can redirect disruptive behaviours. |
| Curriculum content and planning | 6 | I can use functional behavioral assessment (FBA) procedures to determine the reasons for inappropriate behaviours displayed by students with severe cognitive and communicative disabilities. |
| | 10 | I can make accommodations and modify curriculum based on students' needs. |
| | 12 | I can keep students engaged and on task. |
| | 13 | I can record frequency data for behaviour intervention plans (BIP). |
| | 17 | I can differentiate instruction to meet the diverse needs of my students. |
| | 19 | I can use a variety of strategies to reach students with disabilities. |
| | 21 | I can use assistive technology devices to support communication, learning, and improved functional capabilities of individuals with disabilities. |
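The subscale reliability analysis reported above uses Cronbach's alpha, which follows a standard formula based on item and total-score variances. A minimal sketch of the computation; the simulated subscale data are an illustrative assumption, not the study's responses.

```python
import numpy as np

def cronbachs_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Simulated subscale: 243 respondents, 8 items sharing a common factor
rng = np.random.default_rng(2)
common = rng.normal(size=(243, 1))
subscale = common + 0.5 * rng.normal(size=(243, 8))

alpha = cronbachs_alpha(subscale)
```

Strongly intercorrelated items drive the total-score variance well above the sum of item variances, producing the high alpha values (above .89) reported for all three SEES-I subscales.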
The pilot administration of the SEES-I instrument contained 23 Likert-type scale (0-10) items and was completed by 243 pre-service special education candidates. The responses were analysed for validity through a factor analysis and a parallel analysis. The findings from these analyses were consistent and identified three subscales: Learner development and learner differences, Instruction and strategies, and Curriculum content and planning. These three factors accounted for approximately 70% of the explained variance, exceeding the common benchmark that retained factors should explain more than 50% of the overall variance (Field, 2013).
The correlations among the scale items were examined and found to be very high. The internal consistency values indicated high reliability coefficients for the scores across all three subscales, so the scale can be considered reliable (Cronbach, 1982). Taken together, these results suggest the scale is suitable for use in social sciences research (Mertens, 2010).
This study made contributions to the field of special education and teaching self-efficacy research by developing and assessing a new instrument to measure pre-service special education teaching self-efficacy. The researcher believes it was beneficial to the field of special education to develop an appropriate, valid, and reliable instrument for measuring special education teaching self-efficacy during teacher preparation. This instrument may prove beneficial in measuring special education teacher self-efficacy throughout teacher preparation.
Pre-service teachers with a passion and commitment to work with this special population of students should be afforded every opportunity to enjoy longevity in their calling. It is also the belief of this researcher that students with disabilities deserve every opportunity for success. Specifically, this population of learners deserves highly qualified, self-efficacious teachers.
Bandura, A. (1977). Self-efficacy: Toward a unifying theory of behavioral change. Psychological Review, 84(2), 191-215. http://psycnet.apa.org/doi/10.1037/0033-295X.84.2.191
Bandura, A. (1997). Self-efficacy: The exercise of control. New York, NY: Freeman.
Billingsley, B. (2003). Research summary: Special education teacher retention and attrition: A critical analysis of the literature. Gainesville: University of Florida, Center on Personnel Studies in Special Education.
Brownell, M. T., Ross, D. D., Colon, E. P. & McCallum, C. L. (2005). Critical features of special education teacher preparation: A comparison with general teacher education. The Journal of Special Education, 38(4), 242-252. http://files.eric.ed.gov/fulltext/EJ693855.pdf
Coladarci, T. & Brenton, W. A. (2012). Teacher efficacy, supervision, and the special education resource-room teacher. The Journal of Educational Research, 90(4), 230-239. http://dx.doi.org/10.1080/00220671.1997.10544577
Council for Exceptional Children (CEC) (2013). What every special educator must know: Ethics, standards and guidelines (3rd ed.). Arlington, VA: Council for Exceptional Children.
Cronbach, L. (1982). Designing evaluations of educational and social programs. San Francisco: Jossey-Bass.
Duffy, M. & Forgan, J. (2005). Mentoring new special education teachers: A guide for mentors and program directors. Thousand Oaks, CA: Corwin Press.
Erdem, E. & Demirel, O. (2007). Teacher self-efficacy belief. Social Behavior and Personality, 35(5), 573-586. https://doi.org/10.2224/sbp.2007.35.5.573
Field, A. (2013). Discovering statistics using IBM SPSS statistics (4th ed.). Los Angeles, CA: SAGE.
Hartmann, E. (2012). A scale to measure teachers' self-efficacy in deaf-blindness education. Journal of Visual Impairment & Blindness, 106(11), 728-738. https://www.afb.org/jvib/Newjvibabstract.asp?articleid=jvib061103
Hudson, P. & Skamp, K. (2003, July). An evaluation of a mentoring intervention for developing pre-service teachers' primary science teaching. Paper presented at the annual conference of the Australasian Science Education Research Association, Melbourne, Victoria. [see https://eprints.qut.edu.au/13249/ and http://epubs.scu.edu.au/educ_pubs/1677/]
Lee, Y., Patterson, P. P. & Vega, L. A. (2011). Perils to self-efficacy perceptions and teacher-preparation quality among special education intern teachers. Teacher Education Quarterly, 38(2), 61-76. https://eric.ed.gov/?id=EJ926860
Mertens, D. (2010). Research and evaluation in education and psychology: Integrating diversity with quantitative, qualitative, and mixed methods (3rd ed.). Thousand Oaks, CA: SAGE.
Paneque, O. M. & Barbetta, P. M. (2006). A study of teacher efficacy of special education teachers of English language learners with disabilities. Bilingual Research Journal, 30(1), 171-193. http://dx.doi.org/10.1080/15235882.2006.10162871
Pendergast, D., Garvis, S. & Keogh, J. (2011). Pre-service student teacher self-efficacy beliefs: An insight into the making of teachers. Australian Journal of Teacher Education, 36(12), 46-57. http://dx.doi.org/10.14221/ajte.2011v36n12.6
Rotter, J. B. (1954). Social learning and clinical psychology. New York: Prentice-Hall.
Rotter, J. B. (1966). Generalized expectancies for internal versus external control of reinforcement. Psychological Monographs: General and Applied, 80(1), 1-28. http://psycnet.apa.org/doi/10.1037/h0092976
Ruble, L., Toland, M., Birdwhistell, J., McGrew, J. & Usher, E. (2013). Preliminary study of the autism self-efficacy scale for teachers (ASSET). Research in Autism Spectrum Disorders, 7(9), 1151-1159. https://doi.org/10.1016/j.rasd.2013.06.006
Schmitz, G. S. & Schwarzer, R. (2000). Selbstwirksamkeitserwartung von Lehrern: Längsschnittbefunde mit einem neuen Instrument [Perceived self-efficacy of teachers: Longitudinal findings with a new instrument]. Zeitschrift für Pädagogische Psychologie / German Journal of Educational Psychology, 14(1), 12-25. https://doi.org/10.1024//1010-0652.14.1.12
Sharma, U., Loreman, T. & Forlin, C. (2012). Measuring teacher efficacy to implement inclusive practices. Journal of Research in Special Educational Needs, 12(1), 12-21. https://doi.org/10.1111/j.1471-3802.2011.01200.x
Shippen, M. E., Flores, M. M., Crites, S. A., Patterson, D., Ramsey, M. L., Houchins, D. E. & Jolivette, K. (2011). Classroom structure and teacher efficacy in serving students with disabilities: Differences in elementary and secondary teachers. International Journal of Special Education, 26(3), 36-44. https://www.uv.uio.no/isp/forskning/aktuelt/aktuelle-saker/2011/dokumenter/journal_spec.ed.26%203.pdf
Skaalvik, E. M. & Skaalvik, S. (2007). Dimensions of teacher self-efficacy and relations with strain factors, perceived collective teacher efficacy, and teacher burnout. Journal of Educational Psychology, 99(3), 611-625. http://psycnet.apa.org/doi/10.1037/0022-0663.99.3.611
Taylor, S. (2012). An examination of secondary special education teachers' self-reported efficacy and performance through the use of case study methodology. PhD thesis, Auburn University, USA. https://etd.auburn.edu/bitstream/handle/10415/3414/Taylor_SL_Final.pdf?sequence=2
Tschannen-Moran, M. & Woolfolk Hoy, A. (2007). The differential antecedents of self-efficacy beliefs of novice and experienced teachers. Teaching and Teacher Education, 23(6), 944-956. https://doi.org/10.1016/j.tate.2006.05.003
United States Census Bureau (2016). Quickfacts data. http://www.census.gov
U. S. Department of Education (2016). Individuals with Disabilities Education Act (IDEA) data. http://www.ideadata.org
Walls, S. (2007). Early childhood preservice training and perceived teacher efficacy beliefs concerning the inclusion of young children with disabilities. Unpublished dissertation, Auburn University, Alabama, USA. http://hdl.handle.net/10415/1354
Ware, H. & Kitsantas, A. (2007). Teacher and collective efficacy beliefs as predictors of professional commitment. The Journal of Educational Research, 100(5), 303-310. http://dx.doi.org/10.3200/JOER.100.5.303-310
Washburn-Moses, L. (2010, December). Rethinking mentoring: Comparing policy and practice in special and general education. Educational Policy Analysis Archives, 18(32). http://epaa.asu.edu/ojs/article/view/716
Woolfolk Hoy, A. (2000). Educational psychology in teacher education. Educational Psychologist, 35(4), 257-270. http://dx.doi.org/10.1207/S15326985EP3504_04
Woolfolk Hoy, A. (2007). Educational psychology (10th ed.). Boston, MA: Allyn & Bacon.
Directions: The following statements represent a proposed skill set for beginning special educators. Please indicate your level of confidence for each of the statements by choosing a response from (0) Strongly disagree to (10) Strongly agree. Please circle a response for each statement.
The purpose of this information is research related and may be used to assess and design program requirements. Your frank responses are appreciated and will remain anonymous.
|I can lift 200 pounds.||0||1||2||3||4||5||6||7||8||9||10|
|I can run three miles.||0||1||2||3||4||5||6||7||8||9||10|
|1.||I can support struggling students.||0||1||2||3||4||5||6||7||8||9||10|
|2.||I can plan instruction to address the linguistic and cultural characteristics of English language learners with disabilities.||0||1||2||3||4||5||6||7||8||9||10|
|3.||I can motivate reluctant learners.||0||1||2||3||4||5||6||7||8||9||10|
|4.||I can promote cooperative learning.||0||1||2||3||4||5||6||7||8||9||10|
|5.||I can overcome adverse situations that impede||0||1||2||3||4||5||6||7||8||9||10|
|6.||I can use functional behavioural assessment (FBA) procedures to determine the reasons for inappropriate behaviours displayed by students with severe cognitive and communicative disabilities.||0||1||2||3||4||5||6||7||8||9||10|
|7.||I can create a behaviour intervention plan (BIP).||0||1||2||3||4||5||6||7||8||9||10|
|8.||I can facilitate the inclusion of my students in general education settings by collaborating with general education teachers.||0||1||2||3||4||5||6||7||8||9||10|
|9.||I can redirect disruptive behaviours.||0||1||2||3||4||5||6||7||8||9||10|
|10.||I can make accommodations and modify curriculum based on students' needs.||0||1||2||3||4||5||6||7||8||9||10|
|11.||I can use a variety of assessments to determine the academic needs of my students.||0||1||2||3||4||5||6||7||8||9||10|
|12.||I can keep students engaged and on task.||0||1||2||3||4||5||6||7||8||9||10|
|13.||I can record frequency data for behaviour intervention plans (BIP).||0||1||2||3||4||5||6||7||8||9||10|
|14.||I can facilitate an individualised education program (IEP) annual review meeting.||0||1||2||3||4||5||6||7||8||9||10|
|15.||I can use assessment data to create short term behavioural objectives/benchmarks.||0||1||2||3||4||5||6||7||8||9||10|
|16.||I can collaborate with all members of the IEP team to develop appropriate individualised annual goals.||0||1||2||3||4||5||6||7||8||9||10|
|17.||I can differentiate instruction to meet the diverse needs of my students.||0||1||2||3||4||5||6||7||8||9||10|
|18.||I can complete the required IEP paperwork.||0||1||2||3||4||5||6||7||8||9||10|
|19.||I can use a variety of strategies to reach students with disabilities.||0||1||2||3||4||5||6||7||8||9||10|
|20.||I can create a transition plan for students with disabilities as they prepare for secondary education.||0||1||2||3||4||5||6||7||8||9||10|
|21.||I can use assistive technology devices to support communication, learning, and improved functional capabilities of individuals with disabilities.||0||1||2||3||4||5||6||7||8||9||10|
|22.||I am aware of special education mandates, policies, and procedures.||0||1||2||3||4||5||6||7||8||9||10|
|23.||I can develop supportive partnerships with families.||0||1||2||3||4||5||6||7||8||9||10|
Freshman/1st year ______; Sophomore/2nd year____;
Junior/3rd year_________; Senior/4th year_________.
Gender: Female_____; Male_______.
Experience (Level of preparation completed):
First clinical experience_________; Second clinical experience ______
Third clinical experience________; Student teaching______________
Institution type: Public_______; Private______
Please feel free to provide additional explanations or questions about any of the above responses.
Thank you for taking the time to complete the survey.
|Author: Dr Mary M. Lombardo-Graves holds an EdD in Curriculum and Instruction from Northern Illinois University, an MA in Educational Leadership and Administration from Concordia University in River Forest, and a BA in Special Education and Teaching from Northeastern Illinois University. Currently Mary is Assistant Professor of Special Education at the University of Evansville, Indiana, USA.|
Please cite as: Lombardo-Graves, M. M. (2017). Building resiliency: Introducing the pre-service Special Educator Efficacy Scale. Issues in Educational Research, 27(4), 803-821. http://www.iier.org.au/iier27/lombardo-graves.html