Abstract

We examine the impact on student achievement of a face-to-face teacher workshop that also provides economics instructors with access to an electronic library of instructional and reference material for their economics classroom—Virtual Economics v. 3 (VE3), offered by the Council for Economic Education. Using student- and teacher-level administrative data from the Georgia Department of Education and controlling for students’ prior achievement in mathematics, we find evidence that the VE3 workshop experience increases student achievement in high school economics. Our difference-in-differences estimates suggest that teacher participation in the VE3 workshop increases student achievement by 0.061 standard deviations on Georgia’s high-stakes economics end-of-course test. Future research should seek to estimate the effects of educational treatments such as the VE3 workshop using randomized controlled trials (RCTs).

1. Introduction

Education policy analysts and professional educators have long called for more and better professional learning opportunities for in-service teachers (NCTAF, 1996). The problem appears to be relatively acute in high school economics. As early as 1977, researchers called for more content training for high school economics teachers [1]. More recently, Walstad [2] found that the typical teacher of economics has completed no more than one college course in economics. Mackey et al. suggested that teachers should complete at least a minor in economics to be qualified to teach the subject at the high school level; few high school economics teachers, however, do so.

State councils on economic education in 45 states and the 275 university-based centers for economic education provide a variety of in-service workshops for teachers that offer both economic content and lesson materials. Some of the more popular offerings include the Stock Market Game workshops (where teachers learn how to administer the Stock Market Game in their classrooms and pick up strategies for incorporating economic lessons into the game as their students play), the Teaching High School Economics workshops (where teachers get lesson plans that correlate with state economics standards), and the Virtual Economics v. 3 workshops. In this study we examine the impact of the Virtual Economics v. 3 (VE3) workshop for teachers on their students’ scores on Georgia’s state-mandated and high-stakes end-of-course test in economics.

During the time period and place we analyze, the VE3 workshop provided teachers with one full day of face-to-face training. The morning session instructed teachers on how to use the VE3 CD titled “Virtual Economics v. 3: An Interactive Center for Economic Education.” This CD contains over 1,200 economics lessons and accompanying student assessments. It also gives teachers access to a web portal containing the library of the Council for Economic Education’s (CEE’s) materials for all of its workshops. The materials on this web portal are meant to augment or substitute for many of the face-to-face workshops offered by state Councils and college- and university-based Centers throughout the nation. The portal is also intended to extend the CEE’s reach to more teachers of k12 economics in a cost-effective manner.

After an introduction to the VE3 CD, an afternoon session has the teachers demonstrate their ability to find material on the disk and on the web portal. They are also asked to construct a lesson for their classroom using the materials now available to them. At the end of the day’s training, teachers may keep the disk and the access to the CEE web portal that the disk allows. Only teachers who attended the workshop are able to receive the CD and access the materials on the web portal.

The primary motivation behind examining individual workshops such as the CEE’s VE3 is to help teachers and school administrators know whether these workshops are ultimately beneficial for students. In theory, having access to lessons correlated to state economics standards, tailored assessments for specific lessons, and a large number of workshop materials designed to improve the teaching of economics would give teachers better and more focused content and better pedagogical approaches to use in their classrooms. We analyze whether the VE3 experience—the workshop and the materials it makes available—improves student learning.

In this study we provide an estimate of the effect of teachers taking the VE3 workshop on student test scores. We model the outcome of interest, student performance on Georgia’s standardized end-of-course test (EOCT) in economics, as a function of student characteristics and teacher in-service training from this specific workshop. Our intent is to isolate the impact of the VE3 workshop and accompanying CD and web-based material while accounting for other factors that influence student performance. We pay particular attention to the issue of selection into workshop attendance. In particular, teachers may self-select into workshops because they have more interest in teaching economics, or principals may select teachers into workshops because they are less effective at teaching economics relative to other economics teachers. Thus, OLS estimates of the effect of VE3 workshop attendance could be biased upwards or downwards.

To address the issue of selection into workshops, we use a difference-in-differences (DD) approach. This approach seeks to isolate the effect of taking the VE3 workshop from confounding factors that are unobserved by the researchers. The DD approach compares the effectiveness of teachers who attended the VE3 workshop to teachers who have never attended any workshop offered by the Georgia Council on Economic Education (GCEE), and compares the effectiveness of teachers before and after they attended the VE3 workshop.

The environment in Georgia is uniquely suited for studying the impact of in-service workshops because a course in economics is required for high school graduation and each economics student must take a statewide high-stakes economics test that counts for 15 percent of their course grade.

Our DD estimates suggest that the VE3 workshop leads to a statistically significant increase of 0.061 standard deviations in student performance on Georgia’s economics end-of-course test, all else equal. Since the VE3 workshop represents only one day of training with inexpensive materials provided to participants, these gains in student achievement come at a very low cost when compared to other treatments typically used in k12 education.

We note two important caveats: DD is, of course, a quasi-experimental research design. Future research should seek to use experimental approaches with randomization to obtain better estimates of the effects of in-service training programs for teachers. Even if further research confirms our finding, we do not know what aspect or aspects of the VE3 experience led to the gains in student learning. That is, we cannot disentangle the effects of the face-to-face workshop, the access to the lessons and assessments on the VE3 CD, and the access to the workshop materials available at the web portal. Thus, our estimate is of the impact of the treatment as a whole.

In the next section we review past studies concerning the effectiveness of in-service teacher training and how other researchers have addressed the selection bias issue. In the third section we offer some background information about the in-service workshops offered through GCEE and other state councils and explain why the Georgia testing environment is uniquely suited for addressing this issue. We describe the data in the fourth section and present our empirical model and results in the fifth section. We provide concluding remarks in the sixth section.

2. Prior Studies of In-Service Teacher Training

Education policy makers, analysts, and groups that represent professional educators routinely call for more in-service professional learning opportunities for k12 teachers (NCTAF, 1996). Almost all of the empirical work in the economics literature that seeks to evaluate in-service professional learning concerns the evaluation of workshops for high school economics teachers. Below we discuss the research from economic education and the two notable exceptions that study other types of in-service professional learning for k12 teachers. We also discuss the research literature on this topic from the field of education.

Much of the early literature in economic education investigated individual workshops with relatively small groups of teachers. These earlier studies typically contain 300 to 600 student observations. The more recent studies tend to have larger sample sizes with 1,500 student observations being about average. The benefit of the earlier studies is that the researchers were often able to pre- and posttest students (and in some cases teachers) to directly measure changes in understanding and attitudes toward economics. While the studies differ in approaches, most find that workshops for teachers are associated with gains in student learning.

Highsmith [3] compared results on the Test of Economic Understanding between students of teachers who had attended training workshops and students of teachers who had not. Thornton and Vredeveld [4] used a similar experimental design. Both studies matched teachers who had attended a workshop to teachers who had not, based on demographic characteristics, experience, and amount of economics coursework in college, in an attempt to control for the nonrandom selection of teachers into the treatments under study. Schober [5] refined the previous approaches by adding a pretest to his experimental design, which allowed him to control for preexisting knowledge—something earlier studies had not been able to do. Cargill et al. [6] followed the same approach as Schober. Cargill et al.’s study differs from all other studies reviewed here in that the teachers they studied received over 30 hours of economics training—roughly the equivalent of a stand-alone college economics course. All of the above-mentioned studies found positive impacts of in-service teacher training.

Walstad [7] introduced the notion that workshops and other in-service training programs may do more than provide information for teachers. They may change teachers’ attitudes toward economics. Walstad endeavored to address the issue of both workshops and teacher attitudes affecting student achievement. Walstad also acknowledged teachers may self-select into workshops because they have more interest in teaching economics.

None of the research that examines the impact of in-service training for teachers offers a truly random selection of teachers into the treatment group. Consequently, controlling for selection issues is an important focus in this literature. Following Walstad’s work, researchers have attempted to separate the effect of workshops on student achievement from the effect of having a more interested teacher. Walstad allowed that the selection may run in the other direction as well: perhaps less qualified teachers are selected into workshops (or are compelled to attend). Bosshardt and Watts [8, 9] and Watts and Bosshardt [10] introduced the use of fixed-effects models to address the problem of unobserved time-invariant characteristics of either schools or teachers. This allows them to control for some of the characteristics that may compel teachers to participate in in-service training. Bosshardt and Watts’ research also ushered in the use of larger data sets, as they took advantage of standardized tests administered to a national sample of students.

Another strand of research examines the impact that specific workshops built around material published by the Council for Economic Education have on various measures of student performance. Grimes [11] found that use of the Choices and Changes material led to statistically significant increases in student test performance when compared to a control group of students. Notable in the study was the large sample of teachers and students from nine cities nationwide. C. L. Harter and J. H. Harter [12] and Swinton et al. [13] look at the Financial Fitness for Life workshop and accompanying material. Both studies find that students benefit from having teachers who attended this workshop. Swinton et al. [14] use information pertaining to over 160,000 Georgia students to examine the impact of the Georgia Council on Economic Education’s full complement of in-service training workshops. None of the workshops examined lasts more than two days, and the economic content of the different workshops varies tremendously. They found that the impact of taking one or two workshops was not statistically different from zero, but the impact was positive and significant for teachers who attended three or more workshops. Swinton et al. [14] use teacher fixed effects to help control for the selection of teachers into workshops. Swinton et al. [15] find a positive and relatively large impact on student achievement from the Teaching High School Economics workshop.

Walstad and Buckles [16] cast some doubt on the efficacy of additional economic education for teachers. In an examination of National Assessment of Educational Progress (NAEP) data, they find that students who had a teacher with certification, college course work, or even a degree in economics do not score significantly higher than other students on the economics portion of the NAEP test. Similarly, they find that having teachers who have participated in in-service economics workshops seems to have little discernible impact on student test scores. The authors suggest that this result may reflect the lack of an accurate linkage between individual teachers and their students’ test scores.

While most of the research by economists has focused on economic education, there are two noteworthy studies in the economics literature of the effectiveness of in-service teacher training outside of economic education. Angrist and Lavy [17] show that in-service training coupled with other school-wide reforms can aid teachers in their efforts to teach both math and language skills in Jerusalem public schools. The treatment they analyze is not, however, limited just to the impact of in-service training for teachers. The program in the Jerusalem schools that they study was a comprehensive attempt to improve educational outcomes. Given that Jerusalem schools were not randomly given the treatment, they use difference-in-differences (DD) and matching approaches to analyze the effects of this comprehensive, expensive treatment. Their matching approaches involve matching similar students and schools in the pretreatment period and comparing their achievement in the posttreatment period. This matching allows them to analyze whether students and schools that had similar achievement before the treatment had systematically different levels of achievement posttreatment, depending on whether they experienced the treatment.

Angrist and Lavy find that some students in their study experienced a 0.25 standard deviation increase in test scores after their teachers and schools participated in the training and other programs. The treatment that produced these gains cost about $12,000 per class of students—in 1994.

Jacob and Lefgren [18] also studied programs of in-service teacher training that cut across academic disciplines and target general measures of learning such as reading and math scores. All of the schools in their study were initially low-performing schools in Chicago. Schools received the additional in-service teacher training if they fell below an achievement threshold chosen by the central office of Chicago Public Schools. The programs they analyzed cost up to $90,000 per school, which was paid by the school district’s central office; some individual schools provided additional funds to supplement the program. Jacob and Lefgren use a regression discontinuity (RD) design to analyze the academic performance of students in schools just below and just above the threshold used to determine which schools received the treatment—the extra in-service teacher training. Their RD approach exploits the fact that schools with almost identical initial student achievement results were exogenously given different resources—schools just below the threshold received extra in-service training for their teachers, while schools just above the threshold did not. If other factors that influence student achievement vary in a continuous manner for schools just above and just below the threshold, and if the choice of the threshold was exogenous, then this RD approach should be akin to an analysis with data from random assignment of the treatment to schools around the threshold.

In contrast to Angrist and Lavy, Jacob and Lefgren find no evidence that in-service programs helped Chicago teachers improve the performance of their students on standardized tests.

There are a large number of studies in the education literature that seek to evaluate in-service professional learning for teachers. Bressoux [19] observes that most of the early work in this area is “process evaluation” that does not evaluate the effect of training on outcomes or involves comparisons of different training programs with no control group. Nevertheless, Kennedy [20] provides an early meta-analysis of this literature. She finds that only 12 of 93 studies are able to detect a positive and statistically significant effect of in-service teacher training on student achievement. A more recent meta-analysis, conducted by Blank et al. [21], reports that in-service training programs for math and science teachers tend to improve student achievement. Blank et al., however, make no mention of the issue of the selection of teachers into in-service training programs and how it can bias estimates of the effects of these programs on student achievement. We have reviewed several of the studies used by Blank et al. and note that they do not attempt to deal with the issue of selection bias—schools or teachers selecting or being selected into in-service training.

Following Angrist and Lavy, our study uses a difference-in-differences approach in an attempt to control for teacher selection into workshops. We cannot use a regression discontinuity design like Jacob and Lefgren because there is no exogenous threshold that determines the selection of teachers into VE3 workshops.

3. In-Service Workshops for Economics Teachers and the Testing Environment in Georgia

3.1. In-Service Workshops for Economics Teachers

The Georgia Council on Economic Education (GCEE) offers in-service workshops for teachers of k12 students in Georgia. The Virtual Economics v. 3 (VE3) workshop is but one of almost 40 different workshops available through GCEE (see Endnote 1). In 2008, GCEE budgeted $243,500 to provide 122 in-service workshop opportunities to teachers throughout Georgia.

Since 2006 over 2,000 k12 Georgia teachers have attended VE3 workshops that provided them a copy of the VE3 CD and trained them in its use. There is no registration fee for teachers to take the workshop, and GCEE reimburses school systems for substitute teachers. The VE3 CD makes the entire library of CEE’s proprietary materials available to teachers. The VE3 features an online link to CEE materials that gives teachers access to 1,200 reproducible economics and personal finance lessons; search tools that allow teachers to find lessons by concept, keyword, grade level, or state or national economics standards; detailed demonstrations of 51 key economic concepts; and teaching tips for classrooms (CEE, 2010). It is possible that both the workshop and the materials on the VE3 CD and the CEE’s web portal could have an impact on a teacher’s effectiveness in the classroom: the CD and web portal offer many of the materials an in-service teacher might want in order to prepare lesson plans and assess student learning. Because we cannot disentangle the two effects, we examine the overall effect.

3.2. Testing Environment in Georgia

Since 2004 Georgia has required every student who is enrolled in or receives credit for a course covered by an End of Course Test (EOCT) to take the EOCT once the student completes the course. Currently there are standardized EOCTs in eight different courses, including economics, algebra, and geometry (see Endnote 2). These tests counted for 15 percent of a student’s final grade during the time period we study, and students must achieve at least a 70 in each class to graduate. While it is possible to pass the courses without passing the EOCT, the testing environment is high-stakes. Consequently, students have an incentive to perform well on the test. This high-stakes testing in an economics course required for high school graduation makes Georgia uniquely suited for studying the impact of in-service workshops. The EOCTs provide a standardized measure of student performance across the entire cohort of students.

The Economics EOCT covers five domain areas: (1) Fundamentals of Economics, (2) Microeconomic Concepts, (3) Macroeconomic Concepts, (4) International Economics, and (5) Personal Finance Economics. The test is a standardized, 90-question test with normed scores ranging from 200 to 750 (see Endnote 3). A score of 400 is the minimum score required to pass the test.

During the time period covered by our data, all public school students had to take algebra and geometry and the standardized End of Course Tests (EOCTs) in both math courses. Like economics and the other courses subject to an EOCT, the Algebra and Geometry EOCTs, at that time, each counted as 15 percent of each student’s course grade, so students had an incentive to perform well on the exams (see Endnote 4). Since prior achievement in mathematics has been shown to be a strong predictor of success in high school economics (see, e.g., [13, 22]), performance on the Algebra and Geometry EOCTs serves as a strong control for aptitude and achievement prior to taking an economics course. Therefore, we have a useful measure of each student’s preparedness for economics.

4. Data

The student-level administrative data for our research come from the Georgia Department of Education (GaDOE). Our database contains each student’s economics EOCT score matched to information such as gender, economic status, and ethnicity. The data also contain information about students’ scores on two mathematics EOCTs (high school algebra and geometry). These mathematics scores are important because they allow us to control for a measure of student achievement prior to taking an economics class (see Endnote 5). We restrict our attention to observations of students who completed both math courses before they took their economics course in order to control for preexisting student knowledge. We have information on all Georgia public high school students who took Georgia’s mandatory economics course in the academic years 2005-2006, 2006-2007, and 2007-2008.

Our measure of teacher attendance at a VE3 workshop comes from GCEE. We observe whether each teacher who taught the high school economics course had taken the VE3 workshop or any other GCEE workshop.

The outcome of interest is the EOCT score in economics. The annual trend of scores on the EOCTs has been upward since their first implementation in 2004, which indicates, in part, that teachers and students may be getting more familiar with the testing environment over time. Therefore, we norm the test results so that they are comparable from year to year. We create a Z-score for each test score: Z_it = (X_it − μ_t)/σ_t, where X_it is the individual student’s EOCT score in year t, μ_t is the mean of the test scores in year t, and σ_t is the standard deviation of the test scores in year t.
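For concreteness, this year-by-year norming can be written in a few lines of code. The following is a minimal Python sketch using pandas; the column names (year, econ_eoct) are hypothetical placeholders rather than the actual GaDOE variable names.

    import pandas as pd

    def norm_by_year(df: pd.DataFrame, score_col: str, year_col: str = "year") -> pd.Series:
        # Convert raw EOCT scores to Z-scores within each test year so that
        # scores are comparable across years despite the upward secular trend.
        grouped = df.groupby(year_col)[score_col]
        return (df[score_col] - grouped.transform("mean")) / grouped.transform("std")

    # Toy example with hypothetical scores:
    df = pd.DataFrame({
        "year": [2006, 2006, 2007, 2007, 2008, 2008],
        "econ_eoct": [410, 450, 430, 470, 440, 500],
    })
    df["econ_z"] = norm_by_year(df, "econ_eoct")
    print(df)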

In order to isolate the impact of the VE3 workshops and materials, we first model other factors that will impact student test scores. The data allow us to include demographic information for each student as control variables—including gender, ethnicity, economic status, and disability status, as defined by the GaDOE. Gender is represented by an indicator variable that equals one for “male” and zero for “female.” Ethnicity is a vector of four indicator variables that represent “African American,” “Asian,” “Hispanic,” and “Other,” respectively; “White, non-Hispanic” is the omitted comparison group. Economic status is an indicator variable that equals one if the student is categorized by the GaDOE as “Economically Disadvantaged” (defined as eligible for a free or reduced-price lunch). Similarly, disability status is an indicator variable that equals one if the GaDOE defines the student as a “Student with Disabilities”—a designation that covers a broad range of disabilities. Combined with the information on each student’s prior achievement in two mathematics courses, these variables give us a picture of the characteristics that explain some of each student’s preexisting abilities and control for other factors that are often linked with academic performance.

In order to help control for teacher selection into workshops and to better isolate the impact of the VE3 workshop on student achievement, we restrict the treatment group in our primary results to teachers who have attended the VE3 workshop but no other GCEE workshop, and we restrict the control group to teachers who have attended no GCEE workshops whatsoever. We thereby eliminate teachers who have already demonstrated the motivation to attend other GCEE offerings, which helps ensure that any effect we uncover is due to the VE3 workshop and not to workshops in general. Of the 1,472 unique teacher observations across the three years, 335 teachers attended the VE3 workshop.

Tables 1(a) and 1(b) present the summary statistics of our data. Table 1(a) presents the statistics for students whose teachers attended no GCEE workshops, including the VE3 workshop (the control group). About 23 percent of the students in our sample had a teacher who had attended a VE3 workshop and no other GCEE workshops; a summary of their data appears in Table 1(b).

We restrict the sample from the population of students who took the economics EOCT from 2006 to 2008 in a number of ways. First, we eliminate students who do not have a recorded EOCT in both geometry and algebra. This can occur for two reasons: either the student took one or both of the math courses before the testing regime was fully implemented in 2006, or the student took the economics course prior to, or concurrently with, one (or both) of the math courses (see Endnote 6). This restriction eliminated 58.7 percent of the student observations. Second, we eliminate observations of students whose teachers attended multiple GCEE workshops. This is an attempt to reduce potential selection bias, as these teachers have already demonstrated a predilection toward in-service training. This eliminates another 27,500 student observations. Third, we eliminate AP Economics students from the sample because GCEE provides the training for AP Economics teachers in Georgia; including these observations would potentially confound the impact of the VE3 workshop with other workshops and with any effects of AP coursework on student achievement. This restriction eliminates 3,467 student observations. Finally, in the difference-in-differences model we eliminate 1,245 students from our sample because their teachers had already attended the VE3 workshop before the first time they appeared in our data. As discussed below, our difference-in-differences approach requires that the treatment sample of teachers be observed teaching before they attended the workshop. A sketch of this filtering pipeline appears below.
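These four restrictions amount to a simple filtering pipeline. The following minimal Python sketch shows one way to express them; all of the flag columns are hypothetical placeholders, since we do not know how such indicators are coded in the administrative data.

    import pandas as pd

    def restrict_sample(students: pd.DataFrame) -> pd.DataFrame:
        # Apply the paper's four sample restrictions (hypothetical column names).
        out = students[students["alg_z"].notna() & students["geo_z"].notna()]  # 1: both math EOCTs on record
        out = out[out["teacher_other_gcee_workshops"] == 0]       # 2: drop multi-workshop teachers
        out = out[out["ap_econ"] == 0]                            # 3: drop AP Economics students
        out = out[out["teacher_treated_before_first_obs"] == 0]   # 4: DD needs a pretreatment period
        return out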

Comparing students whose teachers did not attend a VE3 workshop (the control group, Table 1(a)) with students in the treatment group (Table 1(b)), students in the treatment group have higher test scores, are more likely to be white, and are less likely to be economically disadvantaged. However, none of the differences in means between the treatment and control groups are statistically significant.

The GCEE began offering the VE3 workshop and CD to teachers in mid-2006. The first year of our student-level data comes from the 2005-2006 academic year. Thus, we have one year of student test results before any teacher had access to VE3. This feature of our data, as discussed in the next section, helps us to identify any effects of VE3 on student achievement.

5. Empirical Models of Student Achievement in High School Economics and Results

The typical approach in studies that analyze student achievement is to construct an education production function that hypothesizes that a measure of student achievement is a function of student characteristics, a set of inputs, and a measurable treatment. Our model builds on the work of Bosshardt and Watts [8, 9], Watts and Bosshardt [10], and Swinton et al. [14]. We specify an educational production function where the observed student outcome (normed score on the Economics EOCT) is a function of the student’s demographic characteristics (gender, ethnicity, economic status, and disability status), a measure of the student’s human capital before the course (Algebra and Geometry EOCT scores), and the observable teacher characteristic (attendance or nonattendance at a VE3 workshop). Our first empirical model takes the form:

Econ_it = β0 + β1′X_i + β2 Geometry_i + β3 Algebra_i + β4 VE3_it + ε_it. (1)

In both (1) and (2) below, Econ_it represents the normed value of student i’s end-of-course economics test score in year t. The vector X_i contains the seven student demographic variables available in the Georgia administrative database: four ethnicity indicators (African American, Hispanic, Asian, and other, with white as the omitted category), disability status, gender, and free or reduced-price lunch status. Geometry_i and Algebra_i represent the student’s end-of-course test scores in geometry and algebra, respectively. Finally, VE3_it is a dummy variable indicating whether or not the student’s teacher attended a VE3 workshop prior to teaching the student at time t. If a teacher took the VE3 workshop just prior to the 2007-2008 academic year, VE3_it would equal “1” for the 2007-2008 academic year and “0” for 2005-2006 and 2006-2007.
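A specification like (1) can be estimated with any standard statistical package. The sketch below uses Python’s statsmodels on a synthetic stand-in data set to show the structure of the estimation, with standard errors clustered by teacher as in our results; the variable names are illustrative, and the ethnicity indicators are omitted for brevity.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Synthetic stand-in for the student-level data (illustrative names only).
    rng = np.random.default_rng(0)
    n = 600
    df = pd.DataFrame({
        "econ_z": rng.normal(size=n),
        "male": rng.integers(0, 2, n),
        "econ_disadv": rng.integers(0, 2, n),
        "disability": rng.integers(0, 2, n),
        "geo_z": rng.normal(size=n),
        "alg_z": rng.normal(size=n),
        "ve3": rng.integers(0, 2, n),
        "teacher_id": rng.integers(0, 40, n),
    })

    # Equation (1): OLS of the normed economics score on student demographics,
    # prior math achievement, and the VE3 indicator, clustering by teacher.
    fit = smf.ols(
        "econ_z ~ male + econ_disadv + disability + geo_z + alg_z + ve3",
        data=df,
    ).fit(cov_type="cluster", cov_kwds={"groups": df["teacher_id"]})
    print(fit.params["ve3"], fit.bse["ve3"])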

Because unobservable teacher characteristics may influence student outcomes, we also use a difference-in-differences (DD) model to account for systematic but unobservable characteristics that differ from teacher to teacher. This second empirical model takes the form:

Econ_it = γ0 + γ1 Treated_Teachers_i + γ2 Year_Dummy_t + γ3 VE3_it + γ4′X_i + γ5 Geometry_i + γ6 Algebra_i + ε_it, (2)

where Treated_Teachers equals “1” for students who have teachers who eventually took the VE3 workshop during the sample period. For example, if a given teacher took the VE3 workshop just prior to the 2007-2008 school year, Treated_Teachers would equal “1” for her students in the 2005-2006, 2006-2007, and 2007-2008 school years. Thus, the Treated_Teachers dummy variable allows us to estimate any time-invariant average differences in teacher effectiveness or other unobservables for teachers who took the VE3 workshop relative to teachers who did not. The variable Year_Dummy equals “1” for the last two years of our sample period, 2006-2007 and 2007-2008. This variable captures any secular change in student performance between the pretreatment and posttreatment time periods not captured by the norming of test scores. For example, if some factor other than the VE3 workshop led to a secular increase or decrease in student learning for all students in the posttreatment time period, the coefficient on Year_Dummy would pick it up. Bertrand et al. [23] and Angrist and Pischke [24] recommend structuring the time dummy variable in this manner to avoid overestimating the treatment effect. We also cluster the error terms in recognition of the likelihood that the error terms for individual students who have the same teacher may be correlated.
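The following self-contained Python sketch constructs a synthetic three-year panel in which treated teachers attend VE3 before the 2007-2008 year and then estimates (2) with teacher-clustered standard errors. Every magnitude in the sketch is fabricated for illustration; only the structure of the specification mirrors the model above.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Synthetic student-level panel (illustrative only). Treated teachers
    # attend VE3 before 2007-2008, so ve3 switches on only in 2008 for them.
    rng = np.random.default_rng(1)
    n = 900
    df = pd.DataFrame({
        "teacher_id": rng.integers(0, 60, n),
        "year": rng.choice([2006, 2007, 2008], n),
        "geo_z": rng.normal(size=n),
        "alg_z": rng.normal(size=n),
    })
    treated = rng.integers(0, 2, 60)  # which teachers eventually take VE3
    df["treated_teacher"] = treated[df["teacher_id"]]
    df["year_dummy"] = (df["year"] >= 2007).astype(int)
    df["ve3"] = ((df["treated_teacher"] == 1) & (df["year"] == 2008)).astype(int)
    df["econ_z"] = (0.4 * df["geo_z"] + 0.4 * df["alg_z"]
                    + 0.06 * df["ve3"] + rng.normal(size=n))

    # Equation (2): difference-in-differences with teacher-clustered errors.
    dd = smf.ols(
        "econ_z ~ treated_teacher + year_dummy + ve3 + geo_z + alg_z",
        data=df,
    ).fit(cov_type="cluster", cov_kwds={"groups": df["teacher_id"]})
    print(dd.params["ve3"])  # DD estimate of the VE3 effect (gamma_3)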

The DD approach requires that we observe student test scores for teachers before and after they attended the VE3 workshop and student test scores for a control group of teachers who did not take VE3. The DD estimate of γ3 in (2) measures the difference in student test scores for the treated teachers before and after they attended VE3 relative to the difference in student test scores for the control group of teachers in the pre- and posttreatment periods—hence the term difference-in-differences.
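The name reflects simple arithmetic on four group means. With hypothetical group-mean Z-scores, the raw (covariate-free) DD computation looks like this:

    # Hypothetical group-mean Z-scores, chosen only to illustrate the arithmetic.
    pre_treated, post_treated = 0.05, 0.14
    pre_control, post_control = 0.00, 0.03
    dd = (post_treated - pre_treated) - (post_control - pre_control)
    print(dd)  # 0.06: the treated teachers' gain net of the control group's gain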

This DD approach is an attempt, albeit an imperfect one, to address the issue of teacher selection into workshops. While we are agnostic about the direction of any such bias—conscientious teachers may be more likely to attend VE3 workshops, or principals may require underperforming teachers to attend them—we find it important to allow that teachers differ in systematic ways not otherwise captured in the data available to us.

One assumption of the DD approach is that the trend in outcomes for the treated teachers relative to the trend in outcomes for the untreated teachers would have been the same if the treated teachers had, in fact, never received the treatment—in our case, attended the VE3 workshop. This counterfactual is, of course, unknowable. We can gain a bit of evidence on this point by comparing the trend in students’ economics test scores from 2005-2006 to 2006-2007 for teachers who went to the VE3 workshop just prior to the 2007-2008 academic year and for teachers who never went to the VE3 workshop during our sample period (see Endnote 7). Thus, we are comparing trends for both groups of teachers in the pretreatment period. Teachers who eventually attended the VE3 workshop experienced a gain in student test scores over those two years that was 0.0186 standard deviations larger than the gain for teachers who never attended a VE3 workshop during our sample period, and this difference in gains was not statistically significant. While this difference in trends is very small in magnitude and statistically insignificant, one could argue that it suggests that our DD estimate of the effect of the VE3 workshop, reported in Table 3 below, is slightly overstated. Alternatively, one could argue that a “regression to the mean” effect would suggest that our DD estimate is slightly understated. We leave it to the reader to make his or her own inference.
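This pretreatment comparison is itself a small difference-in-differences on the two pre-period years. A self-contained sketch with hypothetical mean scores:

    import pandas as pd

    # Hypothetical mean Z-scores for the two pretreatment years, comparing
    # teachers who attended VE3 just before 2007-2008 with never-attenders.
    means = pd.DataFrame(
        {"2005-2006": [0.02, 0.00], "2006-2007": [0.06, 0.02]},
        index=["eventual_ve3", "never_ve3"],
    )
    gains = means["2006-2007"] - means["2005-2006"]
    print(gains["eventual_ve3"] - gains["never_ve3"])  # 0.02 here; 0.0186 in the actual data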

The DD approach is in the family of fixed-effects estimators. We could have employed a teacher fixed-effects approach that included a teacher fixed effect in place of the Treated_Teachers and Year_Dummy variables mentioned above. The DD approach is superior because it allows more of the variation in the data to identify the treatment effect—the effect of VE3 attendance on Economics EOCT scores. The DD estimator identifies the treatment effect of VE3 attendance by comparing teachers before and after they took VE3 and the control group of teachers before and after the treatment time period. The fixed-effects approach used in earlier versions of this paper used only the before-and-after results for teachers who took VE3 to identify the effect of VE3. Angrist and Pischke [24] provide a good explanation of DD and fixed-effects estimation (see Endnote 8).

Table 2 presents OLS estimates from the linear regression specified in (1). Table 3 presents results from our DD regression specified in (2). Both regressions are estimated with standard errors clustered by teacher to allow for the error terms for individual students who have the same teacher to be correlated.

The estimated coefficients for the control variables do not vary in any meaningful way between Tables 2 and 3. Students with higher prior achievement in mathematics score higher on the Economics EOCT. Low-income students, female students, and minority students score lower on the Economics EOCT, controlling for prior achievement in mathematics. Each of these estimated coefficients is statistically different from zero at conventional significance levels, with the exception of the coefficient for students from “other” races/ethnicities, which is statistically different from zero only at a less stringent level.

The variable of interest is VE3, which equals “1” if the student’s teacher attended a VE3 workshop and zero otherwise. In the OLS results (Table 2), the estimated coefficient of the effect of VE3 workshop attendance on student achievement on Georgia’s Economics EOCT is large, positive, and statistically significant: 0.117. This point estimate suggests that a student whose economics teacher took the VE3 workshop would score 0.117 standard deviations higher on the Economics EOCT than an otherwise identical student whose economics teacher did not. As discussed previously, this estimated coefficient could be biased upwards or downwards because of unobserved teacher selection into workshop attendance. If more effective or more diligent teachers select into VE3 workshop attendance, then the OLS estimate of the effect of the VE3 workshop in Table 2 is biased upwards (positive selection of teachers into VE3). If less effective or less diligent teachers are selected into VE3 workshop attendance by their principals, then the OLS estimate is biased downwards (negative selection of teachers into VE3).

To help control for this selection issue, we use a difference-in-differences (DD) approach and estimate the empirical model specified in (2). The results in Table 3 were obtained using this DD approach.

Using the DD model presented in (2), the estimated coefficient on VE3 is 0.061, as reported in Table 3. Given the conversion of EOCT scores into Z-scores, this coefficient estimate implies that a teacher who attended a VE3 workshop would have students who scored 0.061 standard deviations higher on the Economics EOCT relative to a teacher who did not attend the workshop, all else equal. The large and statistically significant decrease in the estimated VE3 coefficient between the OLS results in Table 2 and the DD results in Table 3 indicates positive selection of teachers into VE3 workshop attendance. Further, the estimated coefficient on Treated_Teachers (0.055) is positive, indicating that, on average, teachers who eventually attended the VE3 workshop were more effective before attending the workshop than teachers who did not attend; however, this coefficient estimate is not quite statistically significant. This finding of positive selection is consistent with the economic education literature reviewed in Section 2 above.

Our DD results suggest that, all else equal, the VE3 in-service workshops for high school economics teachers improve student achievement by about 6 percent of a standard deviation in test scores. As suggested by an anonymous referee, we also estimated our model at the teacher level, regressing average student Economics EOCT scores on average student characteristics, with the teacher as the unit of observation. Estimating (2) with these teacher-level data produced an estimate of the effect of VE3 on economics test scores of 0.0986 standard deviations.

The VE3 workshops cost about $100 per teacher per workshop. It seems that getting an increase of about 6 percent of a standard deviation in test scores for such a low cost compares extremely favorably with the cost effectiveness of other treatments such as reducing class sizes. Krueger [25] performs a cost-benefit analysis of the class size reductions that were part of the Tennessee Student/Teacher Achievement Ratio (STAR) experiment. He reports that, depending on how one views the results of this experiment, reducing class sizes from 22 to 15 students in grades K-3 increases student achievement by 0.2 or 0.1 standard deviations. Krueger also reports that reducing class sizes by seven students costs approximately $3,500 per student per year in 1997-1998. Given these large costs to reduce class sizes for a benefit of 0.2 or 0.1 standard deviations, it seems that our results suggest that the in-service economics workshops considered in this paper are cost effective relative to reducing class sizes by this magnitude.
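A back-of-envelope comparison makes the point concrete. The sketch below uses the paper’s figures plus one added assumption (a trained teacher reaching 30 economics students per year, which is purely illustrative); note also that the two cost figures are denominated in different years’ dollars.

    # Rough cost-per-benefit comparison (illustrative only).
    ve3_cost_per_teacher = 100.0    # dollars per teacher (from the paper)
    ve3_effect_sd = 0.061           # DD estimate (Table 3)
    students_per_teacher = 30       # assumption, not from the paper
    ve3_dollars_per_sd = ve3_cost_per_teacher / students_per_teacher / ve3_effect_sd

    star_cost_per_student = 3500.0  # dollars per student per year (Krueger, 1997-1998)
    star_effect_sd = 0.15           # midpoint of the 0.1-0.2 range reported
    star_dollars_per_sd = star_cost_per_student / star_effect_sd

    # Roughly $55 vs. $23,333 per student per standard deviation of achievement.
    print(round(ve3_dollars_per_sd), round(star_dollars_per_sd))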

6. Concluding Remarks

The purpose of our paper is to analyze whether the Virtual Economics v. 3 (VE3) in-service workshop for k12 teachers appears to increase student learning in a high school economics course. Since 2006 over 2,000 k12 Georgia teachers have attended VE3 workshops that provided them a copy of the VE3 CD and trained them in its use. The VE3 CD makes available the entire library of CEE’s proprietary materials to teachers [26].

We use data for all Georgia public high school students who took Georgia’s mandatory course in economics in the academic years 2005-2006, 2006-2007, and 2007-2008. The environment in Georgia is uniquely suited for studying the impact of in-service workshops because a course in economics is required for high school graduation and each economics student must take a statewide high-stakes economics test that counts for 15 percent of the Economics course grade.

Our difference-in-differences (DD) estimates suggest that the VE3 workshop leads to a statistically significant increase of 6.1 percent of a standard deviation in student performance on Georgia’s economics end-of-course test, all else equal. To put this estimate in a meaningful context, coming from an economically disadvantaged household (as defined by receiving a free or reduced-price lunch) decreases the expected economics EOCT score by 7.4 percent of a standard deviation (Table 3), after conditioning on prior mathematics achievement (see Endnote 9). The estimated impact of one day of in-service education is thus about 82 percent of the magnitude of the poverty effect (0.061/0.074 ≈ 0.82).

The VE3 workshop represents only one day of training with inexpensive materials provided to participants. These gains in student achievement come at a very low cost ($100 per teacher) when compared to other treatments typically used in k12 education, such as across-the-board reductions in class size, which require the hiring of more teachers and the construction of more classroom space.

Our DD approach is an attempt to control for the nonrandom selection of teachers into VE3 workshop attendance. Admittedly, our approach is quasi-experimental. Future research should seek to use true experimental approaches with randomization to obtain better estimates of the effects of in-service training programs for teachers. Even if further research confirms our finding, we do not know what aspect or aspects of the VE3 experience led to the gains in student learning. That is, we cannot disentangle the effects of the face-to-face workshop, the access to the lessons and assessments on the VE3 CD, and the access to the workshop materials available at the web portal.

Sosin et al. [22] warn that not all attempts to infuse technology into the classroom are equally beneficial. But we provide some evidence from Georgia that the VE3 in-service workshop and materials can augment CEE’s efforts to help a broad audience of economics teachers improve their effectiveness. We hope that the results of this paper will be helpful to public school administrators and teachers, to the GCEE and other state councils, to college- and university-based centers for economic education, and to the Council for Economic Education as they make resource decisions about professional learning opportunities for high school economics teachers.

Endnotes

  1. For a list of workshops offered see Swinton et al. [14].
  2. For the time period we study, the eight courses that had accompanying EOCTs were Algebra, Geometry, United States History, Economics/Business/Free Enterprise, Biology, Physical Science, Ninth Grade Literature and Composition, and American Literature and Composition [27].
  3. Of the 90 questions, 75 count toward the student’s score; the remaining 15 are field-test questions that do not count toward the student’s score.
  4. Currently, Georgia has a new math curriculum; algebra and geometry are no longer offered as stand-alone courses, and EOCTs count for 20 percent of each student’s course grade.
  5. Georgia students typically take economics in the senior year of high school, while taking algebra and geometry prior to that. In our empirical work, we use only students who had measures of prior achievement in algebra and geometry.
  6. For example, if a student took one of the math courses in 9th grade in 2003 and economics in 12th grade in 2006, we would not have a math test on record, and that student’s record would be incomplete. We also estimated the model using only the geometry test score as a measure of prior achievement in mathematics. This increased our sample size; the results regarding the effect of VE3 workshop attendance were slightly higher than, but not statistically different from, the results reported below.
  7. We thank an anonymous referee for suggesting that we compare these pretreatment trends in student test scores.
  8. Again, we thank an anonymous referee for suggesting that we use the DD approach.
  9. We are not suggesting that the entire effect of poverty on student achievement in economics is −7.4 percent of a standard deviation. Surely, poverty also impacts prior achievement in mathematics—thus, the effect of poverty on student achievement overall is much larger. However, poverty is estimated to decrease achievement in economics by 7.4 percent of one standard deviation, conditional on prior mathematics achievement. Our preferred estimate of the effect of VE3 (6.1 percent of a standard deviation, as reported in Table 3) suggests that the VE3 program can offset about 82 percent of the ongoing negative effect of poverty on student achievement in economics, conditional on prior mathematics achievement.