Education Research International
Volume 2011, Article ID 149530, 7 pages
http://dx.doi.org/10.1155/2011/149530
Research Article

Teacher Efficacy as a Multigroup Model Using Latent Class Analysis

1Department of Teacher Education and Administration, University of North Texas, Denton, TX 76203-5017, USA
2School of Education, Hamline University, Saint Paul, MN 55104-1284, USA

Received 24 August 2010; Accepted 16 November 2010

Academic Editor: Gwo-Jen Hwang

Copyright © 2011 Colleen Eddy and Donald Easton-Brooks. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Research measuring teacher efficacy often assumes that participants are representative of a single efficacy group. In the few studies that measure efficacy as a multidimensional occurrence, teachers are presented as having either low or high efficacy. These studies often use mean or median splits to determine low- and high-efficacy groups. What is of concern is whether there is a significant probability that those in the low and high groups are actually representative of the data. Further, a question exists of whether teacher efficacy is statistically representative of one efficacy group or of more than two efficacy groups. Using Latent Class Analysis (LCA), this study found that the mathematics efficacy groups of preservice teachers vary based on where the teachers were in their academic program.

1. Introduction

Various studies have shown that teacher efficacy is an important component in demonstrating the ability of teachers to teach [1–8]. The findings suggest that if teachers have a high belief in their ability to teach, students benefit from these teachers. While the results of teacher efficacy research are consistent, the way in which teacher efficacy is measured is not. One school of thought is to view teacher efficacy as a homogeneous phenomenon in which teachers are viewed as having a common belief about their ability to teach, measured on a continuum from low to high efficacy [3, 9–16]. This approach is very common in measuring attitudes and beliefs and suggests that those with low or high attitudes or beliefs will have some effect on the academic outcomes of students. However, the approach is limiting in that it does not allow researchers to determine at what point teacher efficacy starts to have a positive or negative effect on student outcomes.

The approach taken by the researchers of this study is that attitudes, behaviors, and professional approaches are seldom homogeneous. In examining teacher efficacy, researchers [2, 17–22] have used various techniques to demonstrate that teacher efficacy is a two-group phenomenon in which high- and low-efficacy groups are determined through techniques such as mean and median splits. These approaches help in understanding that those in a low-scoring clustered group will perform differently than those in a high-scoring clustered group. However, techniques such as mean and median splits are biased by nature. Given that groups are divided by a mean or median cutoff value, those with extremely low efficacy scores are measured against those with extremely high efficacy scores; a significant difference between the two groups should therefore be expected. Researchers taking this approach assume that the data represent two groups (a low- and a high-score group). The limitation of this approach is that it does not establish whether the data actually support a two-group model, that is, what the probability is that the data truly represent a significant difference between groups. Based on the two assumptions mentioned (homogeneous group beliefs and split-group beliefs), the research question of the present study is whether teacher efficacy is statistically representative of one efficacy group or of multiple efficacy groups when examined with a more robust statistical analysis.

The robust statistical analysis chosen to address the research question was Latent Class Analysis (LCA). Generally, LCA is used to determine the conditional probability that outcome scores reflect subgroups of cases in multivariate data [23–25]. In the current study, LCA was used to determine the probability, or likelihood, that the mathematics efficacy of preservice teachers represents a single clustered belief or multiple subclustered groups. An efficacy group is defined in the present study as participants quantitatively falling into a particular group (i.e., high, middle, or low) based on their personal mathematics teaching efficacy (PMTE) and mathematics teaching outcome expectancy (MTOE) scores. The purpose of the present study was to analyze the Mathematics Teaching Efficacy Belief Instrument (MTEBI) scores of entering and midpoint preservice elementary teachers (PSETs), based on their PMTE and MTOE scores, using LCA to determine whether teacher efficacy presents a one-group or a multiple-group model.

2. Literature Review

This study examines efficacy in association with mathematics teacher efficacy. Mathematics teacher efficacy is traced to Bandura’s [26] social cognitive theory and Rotter’s [27] locus of control theory. More specifically, teacher efficacy is the teacher’s belief that he or she has the knowledge and skills to influence academic outcomes. Building on the work of Bandura and Rotter, Gibson and Dembo [19] found that efficacy represents a two-subscale model and developed the Teacher Efficacy Scale (TES) to assess the relationship between teacher efficacy and outcome expectancy.

Based on Gibson and Dembo [19] and Bandura’s [28] observation that efficacy is dependent on context, Enochs and Riggs [29] developed a reliable preservice science teaching efficacy instrument, the Science Teaching Efficacy Beliefs Instrument (STEBI-B), which was modified from Riggs’ [7] in-service science teaching efficacy instrument (STEBI-A). This scale contains two subscales that measure personal teacher efficacy and outcome expectancy. Formally, the subscales of the STEBI-B are the Personal Science Teaching Efficacy Belief Scale (PSTE) and the Science Teaching Outcome Expectancy Scale (STOE). Enochs et al. [2] later adapted the STEBI-B, creating the Mathematics Teaching Efficacy Beliefs Instrument (MTEBI). Like the STEBI-B, the MTEBI is used with preservice teachers. The researchers found the instrument, with its two subscales, the Personal Mathematics Teaching Efficacy Belief Scale (PMTE) and the Mathematics Teaching Outcome Expectancy Scale (MTOE), to be reliable and valid for measuring the mathematics teaching efficacy of preservice elementary teachers (PSETs). PMTE is the preservice teacher’s belief in his or her ability to be an effective mathematics teacher, and MTOE is the preservice teacher’s belief that effective teaching of mathematics can bring about student learning regardless of external factors [29].

2.1. Teacher Efficacy as Multiple Level Models

As discussed previously, two main approaches have been taken in the study of teacher efficacy. One approach categorizes participants as a homogeneous group based on their efficacy scores. The second approach assumes that there are subpopulations (high and low efficacy) within the study population. This second categorization is important because it does not assume that all participants represent a single efficacy group. However, previous teacher efficacy research has not typically used sound statistical methods for determining the composition of the reported high- and low-efficacy groups.

The concept of teacher efficacy as a multidimensional model consisting of general and personal efficacy is well established in the literature. However, the concept of efficacy as a multilevel model with more than one nonhomogeneous group [3, 9–13, 15, 16] is not as well established. In his work, Bandura [30] describes various levels of teacher efficacy. His findings suggest groups of low and highly efficacious teachers, with highly efficacious teachers described as having a strong ability to teach difficult students. However, few researchers have evaluated teacher efficacy as a nonhomogeneous model.

Researchers who have viewed teacher efficacy as a multigroup model have shown teacher efficacy to reflect a two-group model representing teachers with high and low efficacy. The findings in Ashton’s [31] study of preservice and high school teachers described teachers as having “low-efficacy” (p. 305) or “high-efficacy” (p. 318). The findings were based on assessing teachers’ responses to student learning and students’ ability. Similarly, Gibson and Dembo [19] administered a 30-item survey to elementary school teachers and found teachers with low and high efficacy. These findings were based on teacher efficacy in response to whole-class versus small-group instruction.

Using the Gibson and Dembo [19] 30-item instrument, Soodak and Podell [20] described 620 elementary and secondary preservice and practicing teachers as having low and high efficacy. The findings showed a significant interaction between experience level and school level on the personal efficacy of elementary teachers. For these teachers, personal efficacy was high during the preservice period, but efficacy fell dramatically during the first years of teaching. The findings showed that teachers in their first two years of practice had low efficacy in comparison to preservice teachers and in-service teachers with six or more years of experience. This often reflects a reality shock, suggesting a loss of efficacy based on the realities of real-life teaching [31–33].

Swars’s [21] qualitative study viewed teacher efficacy as low and high on a continuous scale. The four participants were selected based on having the two lowest and the two highest efficacy scores. Unlike researchers who view teacher efficacy as a multidimensional tool (i.e., personal and general efficacy), Swars assessed participants using the MTEBI as a one-dimensional tool.

The studies reviewed above view efficacy as a nonhomogeneous two-efficacy-group model, with teachers having low and high efficacy. However, these studies present little scientific evidence that the two-group model is accurate. The authors propose that those who score lower on teacher efficacy scales are to some extent different from teachers who score higher. However, statistical analyses are needed to confirm that there are two distinct efficacy groups.

Woolfolk and Hoy [22] proposed a two-group efficacy model based on a median-split approach. The authors’ finding of a significant difference between low- and high-efficacy groups should be expected, given that the extremely low scores were placed in the low-efficacy group and the extremely high scores in the high-efficacy group. The median-split technique is also limiting in that it does not ensure that a two-efficacy-group model is the best fit for the data. Howell [34] proposed that, in order to use median splits, inferential statistics are needed to determine whether the difference between the groups is significant. Cohen [35] further suggested that using median splits without inferential methods to establish the difference between groups leads to a loss of 20%–65% of the variance. More importantly, these techniques do not determine whether a two-class model is the best fit for the data.
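
To make Cohen’s point concrete, the short Python sketch below (not part of the original study; all data are simulated and the 0.5 effect size is an arbitrary assumption) shows how dichotomizing a continuous efficacy score at the median attenuates its observed relationship with an outcome:

import numpy as np

rng = np.random.default_rng(0)
n = 500
efficacy = rng.normal(size=n)                  # simulated continuous efficacy scores
outcome = 0.5 * efficacy + rng.normal(size=n)  # outcome moderately related to efficacy

r_continuous = np.corrcoef(efficacy, outcome)[0, 1]

# Median split: recode efficacy as a 0/1 low/high group indicator
high_group = (efficacy > np.median(efficacy)).astype(float)
r_split = np.corrcoef(high_group, outcome)[0, 1]

print(f"r with continuous score: {r_continuous:.2f}")
print(f"r after median split:    {r_split:.2f}")   # noticeably smaller

The drop in the correlation after the split reflects the information discarded by dichotomization, which is the loss Cohen quantified.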

Further, others [19–21, 31] showed that a two-group efficacy model existed based on extremely low and extremely high efficacy scores. This type of two-group model leaves open the question of whether a third group (i.e., a middle-efficacy group) exists.

To determine whether efficacy is a one-group, two-group, or multigroup model, a statistical analysis such as Latent Class Analysis is needed. This analysis categorizes individuals into classes based on an outcome variable [36]. The analysis has two basic functions. First, it is used to determine the optimal number of classes or groups that best fits the data. Second, it is used to predict the probability that an individual belongs to a particular group or class. Unlike the median-split approach, this analysis does not assume that two groups are the best description of the data. Further, unlike a median split, LCA does not assign subjects to a group based solely on high or low scores. The analysis assesses the probability that an individual is associated with a particular class based on “a set of mutually exclusive latent classes that account for the distribution of cases that occur within a cross tabulation of observed discrete variables” [37, page 8].
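
As an illustration of these two functions, the Python sketch below approximates the latent-class idea for continuous subscale scores with a finite (Gaussian) mixture model. It is a minimal sketch using scikit-learn on simulated data, not the Mplus analysis reported later in this article, and the score values are hypothetical:

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Hypothetical efficacy totals drawn from two overlapping groups
scores = np.concatenate([rng.normal(45, 4, 90),
                         rng.normal(55, 4, 156)]).reshape(-1, 1)

# Function 1: compare 1-, 2-, and 3-class models on information criteria
for k in (1, 2, 3):
    model = GaussianMixture(n_components=k, random_state=0).fit(scores)
    print(f"{k} classes: AIC={model.aic(scores):.1f}  BIC={model.bic(scores):.1f}")

# Function 2: for the retained model, the posterior probability that each
# participant belongs to each class
best = GaussianMixture(n_components=2, random_state=0).fit(scores)
posteriors = best.predict_proba(scores)   # one row per participant, one column per class
print(posteriors[:3].round(2))

Unlike a median split, class membership here follows from the estimated probabilities rather than from a fixed cutoff score.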

3. Method

The present study analyzed the MTEBI scores of entering and midpoint preservice elementary teachers (PSETs) based on their PMTE and MTOE scores. To address the research question of whether teacher efficacy is statistically representative of a one-group or a multiple-group efficacy model, the study used LCA on the PMTE and MTOE scores of preservice students.

3.1. Participants

The 246 participants in this study were enrolled in courses offered in an elementary teacher education program at a major university (more than 35,000 students) in the south-central part of the United States, beginning in spring of 2008. Participants are considered candidates of the teacher preparation program once they have completed their core course requirements. This usually means that participants were accepted into the program at the beginning of their junior year if the four-year program had been followed. None of the participants had been enrolled in a course that incorporated field experience prior to this semester. The two groups of PSETs are defined as follows.

Entering PSETs
These participants are in one of their first courses in the teacher preparation course sequence.

Midpoint PSETs
These participants are in the final sequence of courses in the teacher preparation program, which includes mathematics methods for grades K-8. This is the only mathematics pedagogy course in the sequence.

Ninety of the participants were entering PSETs. The other 156 participants were midpoint PSETs. All participants were asked to complete the MTEBI.

3.2. Data Collection

The MTEBI measured PMTE with 13 items and MTOE with 8 items. Items on both scales were rated from 1 (strongly disagree) to 5 (strongly agree). Items 3, 6, 8, 15, 17, 18, 19, and 21 are negatively worded and were reverse coded for analysis. More information about the development of the instrument and its scoring can be found in [2].
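
A minimal sketch of the reverse coding described above, assuming responses are stored as a participants-by-21-item array on the 1-5 scale (the array layout is an assumption for illustration; the study used the published MTEBI scoring [2]):

import numpy as np

NEGATIVE_ITEMS = [3, 6, 8, 15, 17, 18, 19, 21]   # 1-based item numbers from the text

def reverse_code(responses: np.ndarray) -> np.ndarray:
    """Reverse negatively worded items on a 1-5 Likert scale (x -> 6 - x)."""
    coded = responses.astype(float).copy()
    idx = [i - 1 for i in NEGATIVE_ITEMS]        # convert to 0-based column indices
    coded[:, idx] = 6 - coded[:, idx]
    return coded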

The MTEBI was administered to all participants at the beginning of the spring 2008 semester. The instrument was administered early in the first three weeks of the semester in order to reduce the effects the course might have on students’ efficacy.

4. Results

4.1. Item Analysis

Enochs et al. [2] found the PMTE and MTOE subscales to be statistically reliable. Given that the population in the present study differed from that of Enochs et al., Cronbach’s alphas were computed on the subscale scores for each group, entering and midpoint PSETs. Consistent with Enochs et al., the findings show the PMTE and MTOE to be reliable for studying this population. The PMTE alphas were 0.82 and 0.90, and the MTOE alphas were 0.74 and 0.83, for entering and midpoint PSETs, respectively.
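
For reference, the reliability check reported above can be reproduced with the standard Cronbach’s alpha formula. The sketch below assumes an items matrix (participants by items) that has already been reverse coded; it is illustrative rather than the study’s own code:

import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)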

4.2. Latent Class Analysis

Latent Class Analysis (LCA), using Mplus 4.1, was used to determine whether mathematics teacher efficacy exists as distinct efficacy groups. Rather than conceptualizing mathematics teaching efficacy as a continuous outcome, the researchers conceptualized participants as having different levels of mathematics teaching efficacy on both the PMTE and the MTOE. The hypothesis was that participants would fall into one of two efficacy groups: low or high mathematics teaching efficacy. Because mathematics teaching efficacy is an unobserved trait, efficacy group membership was treated as a latent variable. Four separate LCAs were conducted on the PMTE and MTOE efficacy scores for entering and midpoint preservice elementary teachers (PSETs).

LCA was used to identify the number of efficacy classes that best fit the data for each subscale. One difficulty in determining the number of classes is that no single indicator is commonly accepted for determining the appropriate number of classes in a surveyed population [38]. Instead, several model fit criteria are considered together to determine which class model best fits the data. In the present study, the Akaike Information Criterion (AIC) and Bayesian Information Criterion (BIC) [39, 40], the Vuong-Lo-Mendell-Rubin likelihood ratio test (LMR LRT) [41], the bootstrapped likelihood ratio test (LRT), and entropy [42, 43] were used to examine the hypothesis of a two-efficacy-group model.

Each of these criteria is interpreted in a different way. Information criterion values (AIC and BIC) are used to choose between competing statistical models; in general, lower AIC and BIC values indicate a better model. The likelihood ratio tests (LMR LRT and bootstrapped LRT) use p values to determine model fit. Finally, entropy ranges from 0 to 1, with values near 1 indicating greater precision in membership classification.
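
The entropy statistic referred to here is the normalized (relative) entropy of Ramaswamy et al. [43]. A minimal sketch of its computation from a matrix of posterior class-membership probabilities (such as an LCA or mixture model produces) is shown below; the function name and input layout are assumptions for illustration:

import numpy as np

def relative_entropy(posteriors: np.ndarray) -> float:
    """Normalized entropy: 1 - sum(-p*ln p) / (n * ln K); values near 1 = clear classification."""
    n, k = posteriors.shape
    p = np.clip(posteriors, 1e-12, 1.0)          # guard against log(0)
    uncertainty = -(p * np.log(p)).sum()         # total classification uncertainty
    return 1.0 - uncertainty / (n * np.log(k))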

For example, when comparing a one-efficacy-group model with a two-efficacy-group model, a p value at or below the significance criterion indicates that the two-efficacy-group model is better suited to the data, whereas a p value above the criterion indicates that the one-efficacy-group model is sufficient. If there is a discrepancy between the likelihood ratio test values, the p value of the bootstrapped LRT is a more reliable measure than the p value of the LMR LRT [24, 44]. A comparison of two versus one efficacy groups therefore indicates whether two efficacy groups or one efficacy group better suits the data, and a comparison of three versus two efficacy groups indicates whether three efficacy groups or two efficacy groups better suit the data.

Table 1 shows the fit indices for the PMTE subscale for the two-efficacy-group model versus the one-efficacy-group model and for the three-efficacy-group model versus the two-efficacy-group model. Based on the significant LMR LRT and bootstrapped LRT p values (Table 1), the findings suggest that a two-efficacy-group model is a better fit for the data than a one-efficacy-group model. Since the LCA showed that the two-efficacy-group model was a better fit, it was important to determine whether the data were more reflective of a three-efficacy-group model than a two-efficacy-group model. The LMR LRT and bootstrapped LRT p values for this comparison were higher than the criterion, indicating that the two-efficacy-group model was the better fit. As further confirmation, the lower AIC (575.65 versus 577.60) and BIC (585.61 versus 592.53) indicated that the two-class model fits the data better than the one-class model. In both comparisons, the entropy values indicate that the two-efficacy-group model is the best fit for the data. The entropy of 0.91 implies that 91% of the participants are accurately categorized in a group. These findings support the hypothesis that there are indeed low- and high-efficacy groups of PMTE for both entering and midpoint PSETs.

Table 1: Fit indices for PMTE.

Table 2 shows the fit indices for the MTOE subscale for the two-efficacy-group model versus the one-efficacy-group model and for the three-efficacy-group model versus the two-efficacy-group model. The MTOE of midpoint PSETs demonstrates that a two-efficacy-group model best fits the data, based on the lower AIC and BIC values and the significant LMR LRT and bootstrapped LRT p values. However, the nonsignificant LMR LRT and bootstrapped LRT imply that a one-efficacy-group model was a better fit for entering PSETs. These findings demonstrate that as participants continue through their teacher preparation experience, their confidence in their ability to influence the mathematics outcomes of elementary students becomes differentiated. Entering PSETs, on the other hand, are more homogeneous in their beliefs about the teacher’s ability to affect the mathematical outcomes of elementary students.

Table 2: Fit indices for MTOE.

Table 3 shows the mean estimates and variances of the participants by efficacy group. The findings present significant mean estimates and significant variances, indicating that there was a statistically significant difference in efficacy scores between each of the efficacy groups. Group one (low teacher efficacy) represented those who scored lower on the PMTE, while group two (high teacher efficacy) represented those who scored higher on the PMTE. In addition, the statistically significant variances demonstrate that participants’ scores vary significantly from one another. For entering PSETs, the LCA findings represented a one-group model with students scoring high on the MTOE. Unpaired t-tests show that all mean estimates were significantly different from one another, with the exception of the MTOE mean estimates for entering PSETs and midpoint PSETs with high MTOE.

Table 3: Mean estimates and variances of latent classes.
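
The pairwise comparisons of mean estimates mentioned above can be illustrated with an unpaired t-test. The sketch below uses hypothetical score vectors for two efficacy classes and SciPy’s ttest_ind as a stand-in rather than the exact procedure used in the original analysis:

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
low_group = rng.normal(40, 5, 60)     # hypothetical scores for the low-efficacy class
high_group = rng.normal(55, 5, 96)    # hypothetical scores for the high-efficacy class

t_stat, p_value = stats.ttest_ind(low_group, high_group, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")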

These findings imply that, regardless of efficacy group membership, entering PSETs have significantly greater confidence in their personal ability to teach mathematics than midpoint PSETs. Further, entering PSETs had significantly greater confidence in their ability to affect mathematics outcomes than midpoint PSETs with low MTOE. In contrast, the MTOE of entering PSETs was not significantly different from that of midpoint PSETs with high MTOE.

5. Discussion

Researchers [3, 9–13, 15, 16] often present findings assuming that teacher efficacy is representative of a homogeneous group. The results, as driven by the research question of the present study, indicate that the mathematics teacher efficacy of entering and midpoint PSETs cannot be assumed to be completely homogeneous or representative of a one-group model. The groups were not homogeneous with respect to personal mathematics teacher efficacy (PMTE) or mathematics teacher outcome expectancy (MTOE), instead representing a two-group model. Only the mathematics outcome expectancy (MTOE) of entering PSETs was found to reflect a one-group model.

Researchers [2, 14, 17–22] were correct to assume that teacher efficacy is generally a two-efficacy-group phenomenon. However, the use of advanced inferential methods such as LCA helps determine more accurately where the split between the two groups occurs. Further, the use of LCA helps determine group membership and develop an understanding of the significance of the efficacy classes [34, 35]. The two-efficacy-group model described in the analysis section above indicates that participants are either high or low in personal mathematics teacher efficacy (PMTE). This is true for entering PSETs even though their mean PMTE score is higher than the mean PMTE score of midpoint PSETs. LCA provides a statistically sound basis for grouping participants into efficacy groups rather than using an arbitrary cutoff score to describe a participant as having high or low efficacy. The findings of the present study should prove beneficial to educators because entering preservice elementary teachers typically have higher efficacy beliefs than PSETs further into their training.

The MTOE (the belief that effective teaching of mathematics can bring about student learning regardless of external factors) of entering PSETs can only be described as one group. As evidenced by their estimated mean (see Table 3), they have high beliefs in teachers’ ability to affect the outcomes of students learning mathematics despite the outside factors that influence student learning. Their novice understanding of teaching and lack of experience teaching students mathematics are the most likely explanations for their initially high MTOE. In contrast, the MTOE of midpoint PSETs was described by two efficacy groups. This could be attributed to their growing understanding of the diverse students they will be teaching. However, these students also have had little or no experience in teaching students mathematics.

Further, Table 3 shows that entering PSETs had higher estimated means on the PMTE and MTOE than midpoint PSETs. These findings held even when participants were categorized into low and high mathematics teacher efficacy groups.

Previously, entering PSETs with high reported PMTE scores would have been characterized as a homogeneous group that lacked knowledge about what it means to teach. However, the findings of the present study suggest that there is a group of entering PSETs who have low PMTE. This could be related to prior experiences not associated with the program, such as substitute teaching or volunteering with students, or to personal difficulties with or dislike of mathematics.

6. Conclusions

The researchers of this study found that personal mathematics teaching efficacy is conceptually a two-efficacy group phenomenon for entering and midpoint PSETs, with participants being grouped into low- and high-efficacy classes based on their beliefs about their ability to teach mathematics. A two-efficacy group model also exists for the mathematics outcome expectancies for midpoint PSETs.

The relevance of these findings is that the mathematics teacher efficacy of all participants does not necessarily occur at the same level across a teacher education program or within a group. Relevant to the argument presented in this study, the findings show that efficacy should not be viewed as a homogeneous construct. With regard to methods of splitting groups based on means or medians, these findings show that researchers should not assume that their data are statistically reflective of this type of two-group model. As shown, only three of the four analyses produced a two-efficacy-group model. The findings for the MTOE subscale (teachers’ belief that effective teaching of mathematics can bring about student learning regardless of external factors) among preservice teachers early in the program were reflective of a one-group model. The findings show that their efficacy was greater than that of preservice teachers at the midpoint of the program. To better understand these preservice teachers, the more advanced Latent Transition Analysis (LTA) can give researchers and educators a better view of how preservice students’ efficacy class membership changes as they progress through the program. While LCA helps in understanding whether the data represent various class models, LTA allows researchers to understand whether class membership changes as subjects move from one stage of a program to another, for instance, whether the mathematics efficacy of preservice teachers changes as they move from entrance into the program to later stages of the program.

Although the small sample size and central location of the participants limit the findings of this study to similar populations, we suggest that future analyses use LCA for determining efficacy groups, in contrast to the mean or median splits used in past research to determine the teacher efficacy of preservice teachers. The broader relevance of this study is that LCA should be considered when analyzing mathematics teacher efficacy or other types of belief measures. In conclusion, though not part of the main focus of this study, the Cronbach internal consistency analysis shows that the PMTE and the MTOE yield statistically reliable scores for assessing the mathematics efficacy of preservice teachers. While the reliability of scores associated with the PMTE and MTOE has previously been assessed for students during student teaching, the present study assessed the reliability of scores for preservice teachers who had just entered an initial teacher certification program and for students who were at the end of their teacher preparation program.

References

1. R. Anderson, M. Greene, and P. Loewen, “Relationships among teachers' and students' thinking skills, sense of efficacy, and student achievement,” Alberta Journal of Educational Research, vol. 34, no. 2, pp. 148–165, 1988.
2. L. G. Enochs, P. L. Smith, and D. Huinker, “Establishing factorial validity of the mathematics teaching efficacy beliefs instrument,” School Science and Mathematics, vol. 100, no. 4, pp. 194–202, 2000.
3. B. Housego, “Monitoring student teachers’ feelings of preparedness to teach and teacher efficacy in a new elementary teacher education program,” Journal of Education for Teaching, vol. 18, no. 3, pp. 259–272, 1992.
4. W. K. Hoy and A. E. Woolfolk, “Socialization of student teachers,” American Educational Research Journal, vol. 27, pp. 279–300, 1990.
5. D. Huinker and S. K. Madison, “Preparing efficacious elementary teachers in science and mathematics: the influence of methods courses,” Journal of Science Teacher Education, vol. 8, no. 2, pp. 107–126, 1997.
6. H. Ohmart, The effects of an efficacy intervention on teachers’ efficacy feelings, Unpublished doctoral dissertation, University of Kansas, Lawrence, Kan, USA, 1992, University Microfilms No. UMI 9313150.
7. I. Riggs, The development of an elementary teachers' science teaching efficacy belief instrument, Unpublished dissertation, Kansas State University, 1988.
8. M. Tschannen-Moran and A. W. Hoy, “Teacher efficacy: capturing an elusive construct,” Teaching and Teacher Education, vol. 17, no. 7, pp. 783–805, 2001.
9. C. Benz, L. Bradley, M. Alderman, and M. Flowers, “Personal teaching efficacy: developmental relationships in education,” Journal of Educational Research, vol. 85, no. 5, pp. 274–283, 1992.
10. H. Ebmeier, “How supervision influences teacher efficacy and commitment: an investigation of a path model,” Journal of Curriculum and Supervision, vol. 18, no. 2, pp. 110–141, 2003.
11. J. Gorrell and Y. S. Hwang, “A study of efficacy beliefs among preservice teachers in Korea,” Journal of Research and Development in Education, vol. 28, no. 2, pp. 101–105, 1995.
12. T. R. Guskey and P. D. Passaro, “Teacher efficacy: a study of construct dimensions,” American Educational Research Journal, vol. 31, pp. 627–643, 1994.
13. E. Guyton, M. Fox, and K. Sisk, “Comparison of teaching attitudes, teacher efficacy, and teacher performance of first year teachers prepared by alternative and traditional teacher education programs,” Action in Teacher Education, vol. 13, no. 2, pp. 1–9, 1991.
14. S. P. Rushton, “Student teacher efficacy in inner-city schools,” The Urban Review, vol. 32, no. 4, pp. 365–383, 2000.
15. S. Swars, L. C. Hart, S. Z. Smith, M. E. Smith, and T. Tolar, “A longitudinal study of elementary pre-service teachers' mathematics beliefs and content knowledge,” School Science and Mathematics, vol. 107, no. 8, pp. 325–335, 2007.
16. J. D. Wilson, “An evaluation of the field experiences of the innovative model for the preparation of elementary teachers for science, mathematics, and technology,” Journal of Teacher Education, vol. 47, no. 1, pp. 53–59, 1996.
17. P. T. Ashton and R. Webb, Making a Difference: Teachers’ Sense of Efficacy and Student Achievement, Longman, New York, NY, USA, 1986.
18. G. Gerges, “Factors influencing preservice teachers’ variation in use of instructional methods: why is teacher efficacy not a significant contributor?” Teacher Education Quarterly, pp. 71–88, 2001.
19. S. Gibson and M. H. Dembo, “Teacher efficacy: a construct validation,” Journal of Educational Psychology, vol. 76, no. 4, pp. 569–582, 1984.
20. L. C. Soodak and D. M. Podell, “Efficacy and experience: perceptions of efficacy among preservice and practicing teachers,” Journal of Research and Development in Education, vol. 30, no. 4, pp. 214–221, 1997.
21. S. L. Swars, “Examining perceptions of mathematics teaching effectiveness among elementary preservice teachers with differing levels of mathematics teacher efficacy,” Journal of Instructional Psychology, vol. 32, no. 2, pp. 139–147, 2005.
22. A. E. Woolfolk and W. K. Hoy, “Prospective teachers’ sense of efficacy and beliefs about control,” Journal of Educational Psychology, vol. 82, no. 1, pp. 81–91, 1990.
23. T. Meiser, M. Hein-Eggers, P. Rompe, and G. Rudinger, “Analyzing homogeneity and heterogeneity of change using Rasch and latent class models: a comparative and integrative approach,” Applied Psychological Measurement, vol. 19, no. 4, pp. 377–391, 1995.
24. G. J. McLachlan, “On bootstrapping the likelihood ratio test statistic for the number of components in a normal mixture,” Applied Statistics, vol. 36, no. 3, pp. 318–324, 1987.
25. J. S. Uebersax, “Probit latent class analysis with dichotomous or ordered category measures: conditional independence/dependence models,” Applied Psychological Measurement, vol. 23, no. 4, pp. 283–297, 1999.
26. A. Bandura, “Self-efficacy: toward a unifying theory of behavioral change,” Psychological Review, vol. 84, no. 2, pp. 191–215, 1977.
27. J. B. Rotter, “Generalized expectancies for internal versus external control of reinforcement,” Psychological Monographs, vol. 80, no. 1, pp. 1–28, 1966.
28. A. Bandura, Social Foundations of Thought and Action: A Social Cognitive Theory, Prentice-Hall, Englewood Cliffs, NJ, USA, 1986.
29. L. G. Enochs and I. M. Riggs, “Further development of an elementary science teaching efficacy belief instrument: a preservice elementary scale,” School Science and Mathematics, vol. 90, pp. 695–706, 1990.
30. A. Bandura, Self-Efficacy: The Exercise of Control, W. H. Freeman and Company, New York, NY, USA, 1997.
31. P. Ashton, A Study of Teachers' Sense of Efficacy. Final Report, Volume I, Document Reproduction Service No. ED231834, ERIC, 1982.
32. C. S. Weinstein, “Preservice teachers' expectations about the first year of teaching,” Teaching and Teacher Education, vol. 4, no. 1, pp. 31–40, 1988.
33. A. W. Hoy and R. B. Spero, “Changes in teacher efficacy during the early years of teaching: a comparison of four measures,” Teaching and Teacher Education, vol. 21, no. 4, pp. 343–356, 2005.
34. D. C. Howell, Statistical Methods for Psychology, Thomson Wadsworth, Belmont, Calif, USA, 6th edition, 2007.
35. J. Cohen, “The cost of dichotomization,” Applied Psychological Measurement, vol. 7, pp. 249–253, 1983.
36. P. Lazarsfeld and N. Henry, Latent Structure Analysis, Houghton Mifflin, New York, NY, USA, 1968.
37. A. L. McCutcheon, Latent Class Analysis, Sage University Paper, Sage Publications, Newbury Park, Calif, USA, 1987.
38. T. Jung and K. A. S. Wickrama, “An introduction to latent class growth analysis and growth mixture modeling,” Social and Personality Psychology Compass, vol. 2, pp. 302–317, 2008.
39. S. L. Sclove, “Application of model-selection criteria to some problems in multivariate analysis,” Psychometrika, vol. 52, no. 3, pp. 333–343, 1987.
40. C. C. Yang, Finite mixture model selection with psychometrics applications, Ph.D. dissertation, University of California, Los Angeles, Calif, USA, 1998.
41. Y. Lo, N. R. Mendell, and D. B. Rubin, “Testing the number of components in a normal mixture,” Biometrika, vol. 88, no. 3, pp. 767–778, 2001.
42. B. Muthén, “Latent variable analysis: growth mixture modeling and related techniques for longitudinal data,” in Handbook of Quantitative Methodology for the Social Sciences, D. Kaplan, Ed., pp. 345–368, Sage Publications, Newbury Park, Calif, USA, 2004.
43. V. Ramaswamy, W. de Sarbo, D. Reibstein, and W. Robinson, “An empirical pooling approach for estimating marketing mix elasticities with PIMS data,” Marketing Science, vol. 12, pp. 103–124, 1993.
44. G. J. McLachlan and D. Peel, Finite Mixture Models, John Wiley & Sons, New York, NY, USA, 2000.