Abstract

This study examines participating tutors’ four-year feedback on a faculty-embedded tutor training program using structural equation modeling. Data from 333 tutors across all four departments in the Faculty of Business and Economics were used. Results indicate that the quality of the training session directly influences tutors’ perceived need for further skills development training and that the partial effect of facilitator effectiveness on the latter is not significant. This indicates that tutors’ indication of their future need for relevant skills in teaching and use of technology is highly influenced by the quality of the training session and much less by the facilitator’s effectiveness in delivering it. This has implications both for the way the tutor training program is perceived and for the redevelopment of the questionnaire in the future.

1. Introduction

Undergraduate teaching in Australian universities is predominantly the responsibility of lecturers and tutors. Lecturers deliver whole-class lectures while tutors typically work with small groups. Although lecturers play a key role, tutors have increasingly been recognised as important members of the faculty because of their role in providing the appropriate environment for a more intimate learning experience. Tutors can provide the opportunity for more intense discussion and interaction, giving undergraduate students the chance to apply during tutorials what they learn in lectures and on their own. Specifically, tutoring offers direct instruction to a more manageable group than lectures allow, models the thinking processes used in problem-solving and practical exercises, and provides immediate feedback. Tutors are usually students’ first point of contact regarding subject content and administration.

The increase in international student enrolment in Australia in recent years has contributed to universities’ growing reliance on tutors and sessional staff to manage the teaching load [1]. Statistics from the Department of Education, Employment, and Workplace Relations also indicate universities’ continuing recruitment of sessional staff to manage this load [2]. This puts emphasis on tutors’ effective transition to teaching in higher education. It is therefore important to assist them during this transition phase, for example by providing the opportunity to participate in tutor training programs.

Development programs for tutors and new lecturers have been around for many years, whether administered centrally or within faculties. Faculty-embedded programs, such as the one in the University of Melbourne’s Faculty of Business and Economics, provide tutors with a program designed to assist them with business- and economics-specific information about teaching and learning. However, there is a need to develop a greater understanding of tutors’ perceptions of such programs. It would be very useful to learn from their experiences, particularly for program review and development.

There are few studies that evaluate tutor training programs in general. Those that exist are sporadic, with one or two in a few disciplines and no consistent approach to or type of evaluation. The literature surrounding tutor training program evaluation addresses the causes of nonparticipation in tutor training programs (e.g., [3]) and the assessment of program effectiveness (e.g., [4, 5]). However, these studies provide only an atomistic view of how tutor training programs elsewhere are run and evaluated. Where tutor training models are offered, these have not been extensively adopted, adapted, or even reviewed. At least some programs across Australia that prepare academic staff in particular have received attention recently [6].

In addition, the use of structural equation modeling to examine the perceptions of tutors generally, and in the business disciplines specifically, has not been recently explored. Van Berkel and Dolmans [7] utilised path analysis to examine how tutor competencies affect learning outcomes in a medical tutoring program, but since their study does not model latent variables, their approach is closer to multivariate regression analysis than to a structural equation model. A few studies using a latent modeling approach have been found in medical education, such as one offering suggestions on aiming tutor training programs towards improving assessment practice (e.g., [8], although that study lacks a clear path analysis and a well-defined structural model), while others investigated the impact of tutors on student learning (e.g., [9, 10]). Davis and Wong [11] used a structural equation model quite similar to the approach in this study, although theirs focused on identifying factors affecting learners’ experience and their use of technology in an online learning environment. Another study focused on the design of the assessment of teacher training programs using structural equations [12].

To date, specific studies within the business discipline include explorations of tutors’ conceptions of excellent tutoring (e.g., [13]); explorations of tutor training models (e.g., [14, 15]); suggestions for designing a tutor training program (e.g., [16]); and the dynamics of how evaluation indicators of a training program transfer to trainee needs [17]. Little attention has been given to using participant feedback to provide useful information to facilitators about other aspects of their program. This study fills this gap and makes an important contribution to existing research on tutor training program evaluation by using structural equation modeling to analyse the factors to which tutors attribute their need for further training following successful completion of a faculty-embedded tutor training program.

The use of structural equation modeling in this study is a new and alternative approach to analysing these data, and the authors recognise that other methods are available. By using structural equation modeling, new insights can be gained into how strongly factors load onto one another. Specifically, this study answers the following questions. (1) By participating in the tutor training program, what relationships can be established between facilitator effectiveness, the quality of the training session, and new tutors’ need for further training? (2) What information can be used to validate the tutor training survey questionnaire?

1.1. Background: The TLU Tutor Training Program

The Teaching and Learning Unit (TLU) at the Faculty of Business and Economics, University of Melbourne, is the first embedded unit of its kind within a business and economics faculty in Australia. It provides, among other services, undergraduate and graduate student learning programs and resources, transition support, academic development for new staff, and tutor training. It has serviced all staff and students of the faculty since 1998.

The TLU has been running the tutor training program for several years now, and it is designed for new tutors from across the four departments in the Faculty: Accounting and Business Information Systems, Economics, Finance, and Marketing and Management. It introduces new tutors to excellent teaching practice, offers practical ways to improve their teaching and enhance students’ learning, and provides them an opportunity to learn from other, more experienced tutors in their department. The program comprises a three-hour initial training session, an observation of and feedback on their teaching practice, and a follow-up session around week 6 or 7. At the end of the program each semester, each tutor is asked to fill in a survey form reporting their perceptions of facilitator effectiveness, the quality of the training session, and their future training needs. Over the past years, the training sessions have been facilitated by experienced educators in higher education. Experienced tutors from across the four departments are also invited to share their experiences during these sessions, including teaching strategies and the nature of working within the department. In each session, tutors are provided with tutor training “guides” that cover the following:
(i) the tutor role and responsibilities,
(ii) how to plan, structure, and facilitate a tutorial,
(iii) how to start your first tutorial,
(iv) tutorial questioning techniques,
(v) encouraging student participation in tutorials,
(vi) teaching international students in tutorials,
(vii) assessment and marking,
(viii) evaluating tutorials.

These guides or resources, together with the knowledge of fundamental concepts of teaching and learning, are discussed with tutors, allowing for meaningful discussions around the opportunities, issues, and challenges in tutoring. New tutors learn from the experiences of both the TLU facilitator and the experienced tutors by answering questions they might have about transitioning themselves from students to teachers in classrooms.

Tutors come from a variety of fields and stages of study. In past years, senior undergraduate students, postgraduate students (including international and/or exchange students), and even former lecturers and practitioners have participated in the program. Tutors have been involved in a number of undergraduate subjects across all four departments, mostly within the Bachelor of Commerce degree.

2. Methods

In summary, this study analysed feedback from tutors collected over four years, in which they rated their satisfaction with facilitator effectiveness, the quality of the training session, and their future training needs using a questionnaire with a 5-point rating scale.

2.1. Participants

This study gathered data from 333 tutors who participated in the TLU tutor training program over seven semesters (2003–2006) across all four departments. However, only 326 valid cases out of the initial 333 were included in the analysis. These tutors taught undergraduate subjects during these periods and were generally senior undergraduate, masters, and Ph.D. students within the Faculty.

2.2. Instrument

The instrument used in this study is the TLU Tutor Training Program Evaluation Questionnaire, administered by the facilitator(s) at the end of the program each semester. It has not changed during these years, providing a consistent data pattern. It was developed by the TLU primarily to solicit feedback from participants in order to continuously improve the program. It consists of 16 questions, grouped into two parts, which examine tutor satisfaction and perceived need for future training based on their experience. The questionnaire is included in the appendix.

2.2.1. Missing Values

As is common with survey research, a number of missing responses were found in the questionnaire data. In this study, the missing data were minimal and all at the item level, with a maximum missing-response rate of less than 7% among the 16 questions (most questions have 1–4% missing responses). Listwise or pairwise deletion would have been acceptable and is the most common method of dealing with this amount of missing data [18, 19]. Nevertheless, the authors consider other types of data imputation more effective and thus conducted regression imputation through SPSS 18 [20]. As an overview, the procedure as implemented in SPSS uses an iterative Markov Chain Monte Carlo (MCMC) method to fit a regression equation using the nonmissing values as predictors, iterating until missing values in all specified variables have been imputed [20]. Missing data imputation is discussed more comprehensively elsewhere (see [18, 19, 21]). Table 1 presents the descriptive statistics after the imputation was conducted.
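For readers who wish to reproduce this step without SPSS, the sketch below is a minimal open-source analogue of iterative regression imputation using scikit-learn’s IterativeImputer. It is not the SPSS 18 MCMC procedure itself, and the file name and column labels are hypothetical.

```python
# A minimal sketch of iterative regression imputation, analogous in spirit
# (not identical) to the SPSS 18 procedure described above. The file name
# "tutor_survey.csv" and its columns are hypothetical stand-ins for the
# 16 questionnaire items rated on a 1-5 scale.
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

# Rows are tutors, columns are items; missing responses are NaN.
responses = pd.read_csv("tutor_survey.csv")

# Each incomplete item is regressed on the remaining items, and the
# cycle repeats until the imputed values stabilise (or max_iter is hit).
imputer = IterativeImputer(max_iter=10, random_state=0)
imputed = imputer.fit_transform(responses)

# Round and clip so imputed values stay on the original 1-5 rating scale.
imputed = np.clip(np.round(imputed), 1, 5)
complete = pd.DataFrame(imputed, columns=responses.columns)
print(complete.describe())
```

Rounding and clipping keep the imputed values on the original rating scale; this single-imputation sketch is a simplification compared with multiple imputation, which would generate and pool several completed datasets.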

2.3. Design

The main design for testing the hypotheses of this study involves structural equation modeling (SEM) analysis using the AMOS 18 [22] statistical program. The maximum likelihood estimation method was used throughout the analysis, and modification indices were requested in the analysis of preliminary models to facilitate respecification. The first part of this SEM analysis is a confirmatory factor analysis of the two main components of the measurement model. The second part combines these two components into a generalised main model. Both parts are reported separately in the following sections.

2.3.1. Measurement Model

The instrument used in this study can be grouped into two parts. Part 1 is a satisfaction measure on the facilitator and the content of the workshop (Questions 1–8), while Part 2 is a rating scale that measures the need for future training in specific areas (Questions 9–16). The instrument asked participants to rate their agreement on a scale of 1 to 5 (“Strongly Disagree” to “Strongly Agree”) across all 16 questions. Part 1 consists of four questions that relate to tutors’ perceptions of the facilitator’s personal attributes and four that relate to the use, coverage, and duration of the workshop. Part 2 focuses on two aspects: use of technology resources to aid teaching, and teaching strategies. These two parts constitute the measurement part of the model. The initial or hypothesised path diagrams are shown in Figure 1 for measurement model A and Figure 2 for measurement model B.
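To make the hypothesised measurement structure concrete, the sketch below expresses the two-factor version of model A in lavaan-style syntax fitted with the open-source semopy package. This is an illustrative analogue of the AMOS analysis rather than the original: the item labels q1–q8, the file name, and the reliance on semopy’s default maximum likelihood estimator are all assumptions.

```python
# Illustrative two-factor CFA for measurement model A using semopy
# (lavaan-style model syntax). Item names q1-q8 are hypothetical
# stand-ins for Questions 1-8 of the evaluation questionnaire.
import pandas as pd
import semopy

MODEL_A = """
FacilitatorEffectiveness =~ q1 + q2 + q3 + q4
SessionQuality           =~ q5 + q6 + q7 + q8
FacilitatorEffectiveness ~~ SessionQuality
"""

data = pd.read_csv("tutor_survey_imputed.csv")  # hypothetical file
model = semopy.Model(MODEL_A)
model.fit(data)                  # maximum likelihood estimation by default
print(model.inspect())           # loadings, covariances, standard errors
print(semopy.calc_stats(model))  # chi-square, RMSEA, and other fit indices
```

The single-factor initial model of the study can be obtained by collapsing the two `=~` lines into one factor; comparing the fit of the two specifications mirrors the confirmatory analysis described next.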

To investigate whether the manifest variables in each model load on a single latent variable, a confirmatory factor analysis was conducted. Model fit was assessed and, where a model failed to achieve adequate fit, it was respecified using both the theoretical grounds of the measurement instrument and modification indices as guides [23, 24]. The main purpose of this analysis is to determine whether the questions in each model load onto a single construct or whether each model is actually a measurement of two related constructs.

2.3.2. Structural Model

The structural model consists of the latent variables from the measurement model and a path diagram that hypothesises partial mediation by facilitator effectiveness of the effect of the quality of the training session on the perceived need for future training (see Figure 3). The main hypothesis of this model is that the quality of the training session has a direct effect on participants’ perceived future training needs and that this effect is only partially mediated by facilitator effects, if at all.

In putting together the respecified measurement models A and B, a higher-order latent variable was additionally specified for the latter to reflect the common theme between the two constructs that emerged from the confirmatory factor analysis for model B. This latent variable, labelled “perceived future skills training needs,” represents what tutors have come to recognise as areas of interest in which they currently lack skills or seek further development.
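A hedged sketch of this structural model, continuing the semopy-based illustration above, is shown below: the higher-order factor sits over the two Part 2 constructs, and session quality predicts future needs both directly and through facilitator effectiveness (partial mediation). The groupings of Questions 9–16 into the two first-order factors are illustrative assumptions, and Question 12 is omitted, anticipating the respecification reported in Section 3.

```python
# Illustrative full structural model in semopy syntax: a higher-order
# "perceived future skills training needs" factor over the two Part 2
# constructs, with session quality predicting future needs directly and
# via facilitator effectiveness (partial mediation). Item-to-factor
# groupings for Questions 9-16 are assumptions; q12 is left out.
import pandas as pd
import semopy

FULL_MODEL = """
FacilitatorEffectiveness =~ q1 + q2 + q3 + q4
SessionQuality           =~ q5 + q6 + q7 + q8
TechnologySkills         =~ q9 + q10 + q11
TeachingStrategies       =~ q13 + q14 + q15 + q16
FutureSkillsNeeds        =~ TechnologySkills + TeachingStrategies
FacilitatorEffectiveness ~ SessionQuality
FutureSkillsNeeds        ~ SessionQuality + FacilitatorEffectiveness
"""

data = pd.read_csv("tutor_survey_imputed.csv")  # hypothetical file
model = semopy.Model(FULL_MODEL)
model.fit(data)
print(model.inspect())  # regression weights for the mediation paths
```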

3. Results

The analysis of the tutor training data resulted in the identification of a full generalised model, stemming from the respecification of two measurement models—models A and B—as discussed below.

3.1. Measurement Models
3.1.1. Model A

The initial measurement model does not fit, with normed chi-square χ²/df = 23.85, RMSEA = .27, P < .01. This suggests that the hypothesis of a single latent variable may not be supported and that the questions are actually measuring more than one construct. Following the natural grouping of the questions into two subgroups, the model was respecified using these subgroups as the new latent variables. Covariances among the error terms were also incorporated based on suggestions from the modification indices. The path diagram for the respecified model is shown in Figure 4. This new model still does not have exact fit, but reporting other fit indices can provide a more complete picture than exact-fit statistics alone [25, 26]. The normed chi-square, χ²/df = 2.16, approaches an acceptable level, and other measures indicate acceptable fit [27]: RMSEA = .06, P = .27, standardised RMR = .02. Even with only marginal fit, theoretical grounds and the substantial improvement in fit compared to the initial model (normed chi-square 23.85 versus 2.16) support the hypothesis that model A measures two constructs instead of one.
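For reference, the two fit measures reported throughout take the following standard forms, where N is the sample size and df the model degrees of freedom (in AMOS output, the P value accompanying RMSEA is typically the close-fit probability, PCLOSE):

```latex
\frac{\chi^2}{df} \quad \text{(normed chi-square)},
\qquad
\mathrm{RMSEA} = \sqrt{\max\!\left(0,\; \frac{\chi^2 - df}{df\,(N - 1)}\right)}
```

On this scale, the respecified model’s normed chi-square of 2.16 sits at the boundary of the commonly used 2–3 guideline for acceptable fit, whereas the initial model’s 23.85 is far outside it.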

3.1.2. Model B

The single-latent-variable initial model does not fit, with normed chi-square χ²/df = 4.22, RMSEA = .10, P < .01. Similar to the process previously described, modification indices, together with a qualitative examination of the questions themselves, guided both the covariances specified between error terms and the constructs hypothesised. In respecifying model B, the regression weight of each question was examined and the actual questions were then reviewed by the authors. It emerged that Question 12 in particular was too vague in the context of this measurement model, and this is reflected in its low regression weight. The question was therefore dropped from the model in the final respecification.

The resulting respecified model is shown in Figure 5. This new model achieved exact fit, χ²(9) = 12.3, χ²/df = 1.37, P = .20. This shows that measurement model B best describes the measurement of two constructs which, while highly correlated, r = .92, are still distinctly separate. Given this high correlation, a higher-order latent variable was added when model B was incorporated into the generalised full model.

3.1.3. Generalised Full Model

Results from the analysis indicate that the generalised full model has acceptable, albeit not exact, fit, with normed chi-square χ²/df = 1.88, RMSEA = .05, P = .38, standardised RMR = .02. Item reliabilities are indicated by the squared multiple correlations of the indicator variables [23]; all are greater than .80 for the “skills in using technology tools” and “teaching strategies” constructs, suggesting that these are good measures of the underlying constructs. Item reliabilities for the indicator variables of “facilitator effectiveness” (R² = .52–.83) and “quality of the training session” (R² = .69–.71) are less substantial but still acceptable.

A more substantive portion of the results focuses on the paths from the quality of the training session to perceived future skills training needs (see Figure 6 for the path diagram). In the structural model, the regression weight from the quality of the training session to perceived future skills training needs is significant, B = 1.24, SE = .12, P < .01. However, the regression weight from facilitator effectiveness to future needs is not significant, B = −0.14, SE = .23, P = .54. All other regression weights in the full model are significant (Table 2). The standardised direct effects of each latent variable are presented in Table 3. Figure 6 presents the standardised estimates of the generalised full model.
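As a quick check of these conclusions, the critical ratio (reported by AMOS as C.R.) is the unstandardised estimate divided by its standard error and is compared against ±1.96 at the .05 level:

```latex
z = \frac{B}{SE}:
\qquad
\frac{1.24}{0.12} \approx 10.3 > 1.96,
\qquad
\frac{-0.14}{0.23} \approx -0.61, \quad |z| < 1.96
```

The first path is clearly significant, while the second falls well inside the nonsignificance band, consistent with the reported P values.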

Looking at the direct effects of the training session and the facilitator on perceived needs, we find support for the main hypothesis that the quality of the training session has a direct effect on participants’ perceived future skills training needs. Further, while the quality of the training session has a substantial direct effect on facilitator effectiveness, the partial mediation by facilitator effectiveness of perceived skills training needs is not significant. This shows that the quality of the training session, more than facilitator factors, is the main influence on tutor satisfaction in the training program, and that it directly influences tutors’ perception (or realisation) of the skills relevant for future training.

4. Discussion

The results of this study point to some important findings about the tutor training program. First, because the training sessions are facilitated by experienced educators and tutors in a discussion-based format, the value of those discussions is central to determining tutors’ need for further training. The discussions have been critical for new tutors, providing the opportunity to learn teaching strategies for specific situations, which specific skills are required, and important information about working within their departments. On this basis, tutors tend to identify clearly what further skills development training they might require.

Second, the results suggest that a number of items need revision. Question 12 is too vague and should be dropped or changed: the need for training on how to “increase student preparation” was probably seen as somewhat ambiguous or confusing. In addition, the error covariance between Questions 9 and 16 suggests that they could be loading onto a different construct. This possibility was explored, but the initial analysis worsened the model fit. The authors speculate that a revision of these two questions, and possibly the inclusion of additional related questions, could improve the measurement model. Given that Question 16 loads onto its construct almost perfectly, one possible revision would be to split the construct “teaching strategies” and expand Question 16 into more indicators. Additional indicator items for the construct “skills in using technology tools” might also substantially improve the model. This is, of course, a recommendation for future study.

Finally, the hypothesis of partial mediation by facilitator effectiveness of further skills development needs is not supported by the data. This result suggests that facilitator effectiveness does not substantially affect the impact of training session quality on future skills training needs. It can also be interpreted as suggesting that the perceived quality of the training session is independent of the facilitator administering the session, either because the facilitators are equally proficient or because tutors give more emphasis to the usefulness and practicality of the training session. This does not mean, however, that a well-structured, clear, and engaging presentation is less important. In fact, the data show that tutors were highly satisfied with the facilitator’s knowledge of the subject matter (e.g., teaching and learning principles, engaging students, teaching international students, and so on), followed by clear communication of ideas and concepts and being prepared and organised (see Table 1). This result can be compared with that obtained by Van Berkel and Dolmans [7], where tutor ability impacts directly on students’ problem-based learning outcomes. It has to be noted, however, that Van Berkel and Dolmans looked at tutor effects on students while this study looks at facilitator effects on tutor trainees.

Notwithstanding the smaller than expected loading of facilitator effectiveness, it can still be argued that the richness of the discussions during the three-hour training sessions is supported by effective facilitation, particularly through the sharing of practical tips and advice, the opportunity to learn from more experienced tutors, and the chance to meet other tutors in their department. It is interesting to note that, while statistically nonsignificant, B = −0.14, SE = .23, P = .54, the regression weight is negative, implying that as facilitator effectiveness increases, the perceived need for further skills development decreases. Tutors who find their facilitator to be skilled in running the tutor training program will be less likely to find that they require further training. It would be of future interest to measure facilitator effectiveness more comprehensively than the current questionnaire does, since it focuses only on tutors’ perception of their facilitator and is quite possibly biased.

One possible limitation of this study is the modest sample size, totalling only 326 valid cases across four departments. A larger sample might have allowed us to conduct more sophisticated multigroup analyses, although our sample size of 326 is well above the rule of thumb that requires the minimum sample size to be greater than the number of parameters to be estimated, and it satisfies sample size adequacy based on the Hoelter index, critical N(.01) = 245 [24]. Another limitation is that the questionnaire is comparatively short, with only eight items per subset. In addition, the questionnaire was not originally designed for quantitative data analyses such as structural equation modeling. As such, confirmatory factor analysis has revealed weaknesses in the questionnaire design. This limits the usefulness of the data, but it also provides a clearer path towards further improvement of this particular instrument in the future.
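For readers unfamiliar with the Hoelter index, one commonly cited formulation is shown below, where χ²_crit(df, α) is the critical chi-square value at significance level α for the model degrees of freedom; critical N values above 200 are conventionally taken to indicate an adequate sample.

```latex
N_{\mathrm{crit}} = \frac{\chi^2_{\mathrm{crit}}(df,\,\alpha)}{\chi^2}\,(N - 1) + 1
```

Intuitively, the critical N is the largest sample size at which the model’s chi-square would still not be rejected; the reported value of 245 at α = .01 exceeds the conventional 200 threshold.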

5. Implications for Academic Development

The findings suggest a number of implications for academic development. These implications are around examining the nature of tutor training programs, using participant feedback, and better enabling student learning.

5.1. Innovating Training Programs for New Tutors

Academic developers need not limit their willingness to innovate in similar tutor training programs. The quality of a training session depends on the facilitator, content, and delivery, and it influences participants’ identification of the other skills they require. This creates an opportunity for academic developers to continuously examine their tutor training and development programs, or other programs offered to new staff in a similar capacity, by selecting suitable staff and matching appropriate content to tutors’ needs so as to enable effective identification of the specific skills further required.

5.2. Examining the Impact on Teaching and Learning

By being able to identify the further skills required, facilitators should focus on developing tutors to be better at enabling student learning. As tutors’ indication of their future need for relevant skills in teaching (e.g., managing groups) and use of technology (e.g., Blackboard) is highly influenced by the quality of the training session, the emphasis should be on how such skills and technology can be used to support and enhance student learning.

5.3. Exploring Tutors’ Needs in Depth

The results should not be taken to mean that facilitators are ineffective at influencing tutors to identify other skills they might require. The quality of the training session rests on the expertise of the staff and their effective facilitation skills. The implication for academic development is to allow staff to focus specifically on refining content and delivery to better understand tutors’ needs. As some tutors require more in-depth exploration of how specific skills (e.g., managing groups, dealing with difficult situations) can be developed, or of how to use the university’s human resource administration system (i.e., Themis), facilitators should also help tutors assess their current skill levels and the types and level of assistance they require.

5.4. Coordinating with the Departmental Staff

When new tutors point to the specific skills they think they will require, facilitators should direct them to the appropriate units or departments and the services on offer to support them. In the faculty, for instance, the four departments independently offer services and support to tutors, such as orientation programs and opportunities to meet head tutors. There is a strong need to communicate the further skills identified by tutors to the responsible staff in each department and to discuss how these skills can be effectively developed.

5.5. Delivering Consistent Training Sessions

In the Faculty, tutor training programs have been delivered by different individuals over the past years (because of staff leaving or job rotation). The implication of this for academic development lies in maintaining a consistent approach to sustaining the quality of the training sessions.

5.6. Redeveloping Ways of Collecting Feedback

The results of the study point to the importance of examining the ways feedback is collected from participants. Redeveloping the questionnaire as a result of this study would be of clear benefit. There are also other ways academic developers can obtain feedback from participants apart from a questionnaire similar to the one used here. In the case of this program, the initial individual consultations and follow-up sessions provided important information about tutors’ needs prior to, and after six weeks of, tutoring. This has been important in identifying the help they needed.

5.7. Using SEM as an Alternative to Other Methods

Academic developers who wish to explore the use of SEM may find that it provides alternative and useful ways of analysing program effectiveness. It can potentially highlight links or effects between program variables that may not be apparent with other methodologies. For example, using SEM, the pathways between program effectiveness and the factor loadings of components of “future skills need” can become clearer.

6. Conclusion

The study analysed tutors’ feedback from a four-year tutor training program run by the Teaching and Learning Unit. It revealed that the quality of the training session loads significantly onto tutors’ future skills development needs. This has implications both for the way the program is perceived and for the redevelopment of the questionnaire in the future. In regard to the program, the study clearly highlights the critical importance of the training session and its ability to help tutors identify what other skills they might require. It emphasises the immense value and contribution of the initial training session in terms of learning new tutoring skills and identifying other skills tutors feel are necessary to prepare them for their role. Note that new tutors who participate in the program may hold a tutoring commitment for only one semester; even so, there is interest in furthering skills they hope will be useful in the future. There is also a message for current and future facilitators: the materials used, the topics covered, and the pace and duration of the session are critically important in providing the opportunity for tutors to identify other needed skills. Thus, there is a need to continually improve the tutor training guides provided to tutors.

In regard to questionnaire design, the study allows us to compare this questionnaire with the one in use since 2007 and to develop an instrument that examines more closely the facilitator effects and other factors at play. The redevelopment of the questionnaire can also consider factors such as perceptions of the guest experienced tutors, tutors’ perceptions of their roles prior to joining the program and after tutoring for a semester, and the usefulness of the classroom observation feedback reported to each tutor. Other specific measures, such as how helpful the tutor training guides are, can also be included.

Appendix

See Table 4.