Developmental Issues in Chinese Adolescents
Postlecture Evaluation of a Positive Youth Development Subject for University Students in Hong Kong
The purpose of this study was to examine the postlecture evaluation by the students taking a course (Tomorrow's Leaders) that attempted to promote their leadership qualities and intrapersonal competencies at The Hong Kong Polytechnic University in Hong Kong. Except for the last lecture, students were invited to respond to a 12-item postlecture questionnaire after each lecture. Results showed that the students had positive perceptions of the subject, class, and teacher attributes, and they had positive global evaluation of the teacher and the subject. The postlecture evaluation questionnaire was found to possess good psychometric properties. Multiple regression analyses showed that subject, class, and teacher attributes were predictive of global evaluation of the lecture and the teacher. In conjunction with other evaluation findings, the present findings strongly suggest that students had positive perceptions of the attributes and benefits of “Tomorrow's Leaders.”
Client satisfaction or subjective outcome evaluation is a widely used evaluation method in human services. The common form of subjective outcome evaluation is to distribute a client feedback questionnaire to the clients which may have both quantitative rating items and open-ended questions. In the social welfare context, social workers normally invite the program participants to complete a client satisfaction questionnaire at the end of the program. For example, in the Project P.A.T.H.S. in Hong Kong, subjective outcome evaluation is used to capture the views of the program participants as well as the program implementers. For the program participants, a subjective outcome evaluation form (Form A) is used to gauge their perceptions of the program, instructors, and effectiveness of the program. On the other hand, subjective outcome evaluation forms (Form B and Form C) are used to assess the perceptions of the program implementers on the program, implementers, and effectiveness of the program. Previous research findings showed the value of subjective outcome evaluation strategies in assessing program effectiveness [1–4].
Subjective outcome evaluation is also commonly used in the education sector. For example, it is a common practice for universities throughout the world to evaluate the feedback of students using subjective outcome evaluation. A review of the literature shows that many measures have been developed and studies have been conducted to examine their psychometric properties in the Western world. For example, Cohen proposed that six dimensions of teaching (skills, rapport, structure, difficulty, interaction, and feedback) could be used to assess student feedback. With reference to the Students’ Evaluations of Educational Quality (SEEQ), Marsh and Roche identified nine dimensions of student feedback, including learning, teacher enthusiasm, organization, group interaction, individual rapport, breadth of coverage, examinations, assignments/readings, and workload. The SEEQ was translated into Chinese, and there was support for the validity of the assessment tool [7, 8]. Kim et al. identified eight broad dimensions underlying course evaluation, including teacher character traits, management of the class, assignments, course design, testing, grading, feedback, and course materials. Kember et al. used the Student Feedback Questionnaire, which included six dimensions, to evaluate teaching. These dimensions were learning outcomes, interaction, individual help, organization and presentation, motivation, and feedback.
Several observations can be highlighted from the literature review on course evaluation based on the subjective outcome evaluation method. First, while different conceptual frameworks were adopted in different studies, there were similarities across studies. For example, many researchers proposed qualities of the teacher, objectives, teaching techniques, and teacher-student relationship as the basic dimensions of evaluation in their frameworks. Second, exploratory and confirmatory factor analyses were commonly used to examine the underlying dimensions of different course evaluation instruments. Nevertheless, while factor analyses might yield findings that support elegant statistical models, interpretation of the findings is not always simple. For example, with reference to the framework proposed by Kember and Leung, although the proposed model provided an adequate fit to the data, items in the “Challenging Beliefs” and “Motivation” domains are not conceptually pure. Third, in contrast to the vast number of related studies in the West, there are very few studies on Chinese course evaluation questionnaires in different Chinese contexts. Fourth, different course evaluation questionnaires, covering different dimensions, are used by different tertiary institutions in Hong Kong; in fact, institutions differ widely in the design of their course evaluation questionnaires. Besides, psychometric properties of the instruments are rarely reported; published scientific findings on the reliability, validity, and norms of course evaluation assessment tools are almost nonexistent. As such, there is a need to document the psychometric properties of course evaluation tools. Finally, although there are questionnaires on course evaluation, effort to evaluate individual lectures (i.e., postlecture evaluation) is comparatively weak.
While postcourse evaluation can give a global picture of the quality of the course and teacher performance, it is argued that evaluation of individual lectures (i.e., postlecture evaluation) is equally important for several reasons. First, postlecture evaluation can give detailed information about the relevance of the lecture content and the quality of lecture delivery. Such specific information is helpful for lecture improvement. Second, as postcourse evaluation takes place at the end of a course, its feedback is not timely. In contrast, postlecture evaluation can yield immediate information that can be used by the instructor to plan the next lecture. Finally, it can be argued that postcourse evaluation may carry greater bias because students respond according to their general impression only; besides, memory decay and reconstruction may affect the recalled information. On the other hand, postlecture evaluation enjoys the advantage of immediacy: students can evaluate the lecture based on freshly acquired information. Against this background, the present study carried out postlecture evaluation of a course on positive youth development for university students in Hong Kong.
Under the new 4-year curriculum in The Hong Kong Polytechnic University, there are 30 credits in the General University Requirements (GUR) as follows: (a) Language and Communication (9 credits); (b) Freshman Seminar (3 credits); (c) Leadership and Intrapersonal Development (3 credits); (d) Service Learning (3 credits); (e) Broadening Subjects chosen from 4 clusters (12 credits); (f) Healthy Life Style (non-credit bearing). With specific reference to the requirement in Leadership and Intrapersonal Development, a subject entitled “Tomorrow’s Leaders” was developed by the author based on the positive youth development framework. The positive youth development constructs covered in the course included self-understanding, emotional competence, cognitive competence, resilience, spirituality, social competence, moral competence, positive identity, interpersonal communication, conflict resolution, relationship building, and assertiveness. Through lectures, class activities, and assignments, students are helped to understand the attributes of a successful leader, conduct personal reflections, and cultivate their awareness of the importance of intrapersonal and interpersonal attributes of university students (see the appendix). Conceptually speaking, the topics covered in the course are based on the positive youth development framework, which is also adopted in the Project P.A.T.H.S. in Hong Kong, a project funded with HK$400 million for the initial phase and HK$350 million for the extension phase. To date, evaluation findings based on different evaluation strategies have pointed to two observations: (a) different stakeholders generally had positive views of the program, implementers, and effectiveness, and such perceptions were consistent across different stakeholders; (b) compared with control participants, students in the experimental schools had better positive development and displayed lower levels of substance abuse, delinquent behavior, and intention to engage in risk behavior [12–16].
According to the subject syllabus, the objectives of the course are (a) to enable students to learn and integrate theories, research and concepts of the basic personal qualities (particularly intrapersonal and interpersonal qualities) of effective leaders; (b) to train students to develop and reflect on their intrapersonal and interpersonal qualities; (c) to promote the development of an active pursuit of knowledge on personal qualities in leadership amongst students. On successfully completing this subject, it is expected that students will be able to (a) understand and integrate theories, research and concepts on the basic qualities (particularly intrapersonal and interpersonal qualities) of effective leaders in the Chinese context; (b) develop self-awareness and understanding of oneself; (c) acquire interpersonal skills; (d) develop self-reflection skills in their learning; (e) recognize the importance of active pursuit of knowledge on intrapersonal and interpersonal leadership qualities.
The proposed subject was piloted in the second term of the 2010/11 school year. To understand the effectiveness of the course, multiple evaluation strategies were used. First, objective outcome evaluation utilizing a one-group pretest-posttest design was used, where pretest and posttest data were collected from the students taking the course. Second, postcourse subjective outcome evaluation was conducted: at the last lecture, students were invited to respond to a subjective outcome evaluation form including items assessing their perceptions of the course, the instructor, and the perceived effectiveness of the program. Third, process evaluation via systematic observations was carried out by two trained colleagues to understand the program implementation details in 14 lectures as well as program adherence. Fourth, qualitative evaluation via focus groups involving randomly selected students was carried out. Finally, qualitative evaluation using reflection notes was conducted. In this paper, findings based on the quantitative data collected in the postlecture evaluation are reported.
2.1. Participants and Procedures
The subject was offered to four classes of students, with a total of 268 students (65 in Class A, 68 in Class B, 66 in Class C, and 69 in Class D). At the end of Lecture 1 to Lecture 13, students were invited to respond to a subjective outcome evaluation form on their perceptions of the lecture. There are 12 items and one open-ended question in the evaluation form. The items cover various aspects of the lecture, including design, atmosphere, peer interaction, student interest, student participation, opportunities for reflection, degree of helpfulness to personal development, the instructor’s mastery of the lecture, the instructor’s use of teaching methods, helpfulness of the lecture to students, global evaluation of the lecture, and global evaluation of the lecturer. The respondents were required to respond on a six-point scale with “Strongly Disagree,” “Disagree,” “Slightly Disagree,” “Slightly Agree,” “Agree,” and “Strongly Agree” as the response options. The items are as follows.
(i) Item 1: The design of this lecture was very good.
(ii) Item 2: The classroom atmosphere of this lecture was very pleasant.
(iii) Item 3: There was much peer interaction amongst the students in this lecture.
(iv) Item 4: I am interested in the content of this lecture.
(v) Item 5: There was much student participation in this lecture.
(vi) Item 6: There were many opportunities for reflection in this lecture.
(vii) Item 7: This lecture is helpful to my personal development.
(viii) Item 8: The lecturer had a good mastery of the lecture material.
(ix) Item 9: The lecturer used different methods to encourage students to learn.
(x) Item 10: The lecturer in this lecture was able to help students understand the knowledge covered in the lecture.
(xi) Item 11: Overall speaking, I have a very positive evaluation of the lecturer in this lecture.
(xii) Item 12: Overall speaking, I have a very positive evaluation of this lecture.
Conceptually speaking, it was hypothesized that Items 1, 4, 6, and 7 are related to the attributes of the subject (Subject Attributes), Items 2, 3, and 5 are related to the attributes of the class (Class Attributes), and Items 8, 9, and 10 are related to the attributes of the teacher (Teacher Attributes). Items 11 and 12 are designed to assess the global evaluation of the teacher and the lecture, respectively.
A total of 2,039 questionnaires were collected across all lectures throughout the course. On the day of data collection, the purpose of the evaluation was explained, and the confidentiality of the data was repeatedly emphasized to all students. All participants responded to the items and the open-ended question in the evaluation form in a self-administered format. Adequate time was provided for the participants to complete the questionnaire. In the present paper, the focus is on the findings based on the quantitative data.
2.2. Data Analysis
Percentage analyses were used to examine the perceptions of the students of the course and teacher performance. Factor analysis was performed on the Lecture 1 data (i.e., the first batch of data) to examine the structure of Item 1 to Item 10 and to see whether there was support for the three dimensions: subject attributes, class attributes, and teacher attributes. To examine whether subject attributes, class attributes, and teacher attributes predicted the overall evaluation of the teacher (Item 11) and the subject (Item 12), multiple regression analyses were performed. All analyses were performed using the Statistical Package for the Social Sciences (SPSS) Version 16.0.
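The reliability and regression steps described above can be sketched in a few lines of code. The snippet below is a minimal illustration only, not the actual analysis reported here: it simulates hypothetical six-point ratings (all names, sample sizes, and parameter values are invented), computes Cronbach's alpha for one subscale, and regresses a stand-in for the global-evaluation item on the three subscale means via ordinary least squares.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) rating matrix."""
    k = items.shape[1]
    return k / (k - 1) * (
        1 - items.var(axis=0, ddof=1).sum() / items.sum(axis=1).var(ddof=1))

def simulate_subscale(rng, n, n_items):
    """Hypothetical 6-point ratings driven by one shared latent factor."""
    latent = rng.normal(0, 1, (n, 1))
    raw = 4 + latent + rng.normal(0, 0.8, (n, n_items))
    return np.clip(np.round(raw), 1, 6)

rng = np.random.default_rng(0)
n = 200  # hypothetical number of respondents
subject = simulate_subscale(rng, n, 4)  # mimics Items 1, 4, 6, 7
klass = simulate_subscale(rng, n, 3)    # mimics Items 2, 3, 5
teacher = simulate_subscale(rng, n, 3)  # mimics Items 8, 9, 10

alpha = cronbach_alpha(subject)

# Multiple regression of a simulated global-evaluation item (stand-in
# for Item 12) on the three subscale means, via ordinary least squares.
means = np.column_stack([s.mean(axis=1) for s in (subject, klass, teacher)])
y = means @ np.array([0.5, 0.2, 0.3]) + rng.normal(0, 0.3, n)
X = np.column_stack([np.ones(n), means])  # add intercept column
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"alpha={alpha:.2f}, betas={np.round(beta[1:], 2)}")
```

With real data, the three subscale means would be computed from the observed item responses, and the fitted coefficients would indicate the relative contribution of subject, class, and teacher attributes to the global ratings.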
A total of 2,039 postlecture subjective outcome evaluation forms were collected after Lecture 1 to Lecture 13. The quantitative findings based on the closed-ended questions are presented in this paper, with the percentage findings shown in Table 1 and the mean findings shown in Table 2. Several observations can be highlighted from the percentage findings in Table 1. In the first place, most participants generally had positive perceptions of the course, including its design (Item 1), student interest (Item 4), reflection (Item 6), and benefits (Item 7). For example, 91% of the participants regarded the program design as positive, and 85% agreed that the class promoted reflection. Besides, students perceived the class atmosphere to be pleasant (Item 2: 87%), with much peer interaction (Item 3: 88%) and student participation (Item 5: 86%). Finally, teachers were perceived to have good mastery of the course (Item 8), to have used varied teaching methods (Item 9), and to have been able to help students understand knowledge (Item 10). Regarding global evaluation, 93% and 90% of the respondents had positive evaluations of the teacher and the subject, respectively.
Concerning the psychometric properties of the scale, reliability analysis showed that the 12-item scale was internally consistent in different lectures (Table 1). Both alpha and mean interitem correlation coefficients were found to be in the high range. Regarding the factor structure of the 10 specific items, principal factor analysis followed by promax rotation showed that three factors could be meaningfully extracted, accounting for 75% of the variance. Factor I included Items 1, 4, 6, and 7 and was labeled the Subject Attributes factor. The second factor included Items 2, 3, and 5; because these items are basically concerned with lecture delivery in class, it was labeled the Class Attributes factor. The third factor included Items 8, 9, and 10 and was labeled the Teacher Attributes factor. The pattern matrix can be seen in Table 3.
To examine how the subject, class, and teacher attributes contributed to the global evaluation of the teacher and the lecture, multiple regression analyses were carried out on the data collected from each lecture. The analyses showed that subject, class, and teacher attributes predicted global evaluation of the teacher and the lecture (Table 4). Among these aspects, subject and teacher attributes showed greater influence on the global evaluation of the teacher and the lecture.
The present paper examines the postlecture subjective outcome evaluation of a subject entitled “Tomorrow’s Leaders” offered at The Hong Kong Polytechnic University. Several observations can be highlighted from the present study. First, the students generally perceived the subject positively in terms of the subject, class, and teacher attributes. The findings also showed that very high proportions of the students had positive global evaluation of the teacher and the subject. Consistent with other forms of evaluation, the present findings showed that the students had positive evaluation of the subject.
Concerning the psychometric properties of the 12-item postlecture subjective outcome evaluation form, reliability analyses showed that the scale was highly reliable in different lectures. Furthermore, consistent with the original conceptual model, factor analyses identified three dimensions (subject attributes, class attributes, and teacher attributes), and the reliabilities of the related subscales were on the high side. As there are very few published studies on postlecture evaluation in different Chinese contexts, the present findings are interesting additions to the literature. As client satisfaction surveys are commonly criticized as invalid in the field of human services, there is a need to develop validated measures in this field. As pointed out by Royse, using validated measures of client satisfaction would “eliminate many of the problems found in hastily designed questionnaires” (page 265). Hence, the present study is a positive response to this call. Nevertheless, it is noteworthy that there are several limitations of the present study. First, as the present findings were based on small samples, there is a need to replicate the findings in larger samples. Second, future studies should examine the validity of the 12-item postlecture subjective outcome evaluation form. Third, as the sample size was small, the stability of the factors should be examined in future studies.
Finally, regarding predictors of perceived effectiveness of the course based on the data from different lectures, findings showed that subject, class, and teacher attributes predicted global evaluation of the teacher, although subject and teacher attributes appeared to be stronger predictors. Similarly, although there are findings showing that subject, class, and teacher attributes predicted global evaluation of the subject, subject and teacher factors were stronger predictors. These findings concur with previous findings that program and implementer characteristics are important factors leading to program effectiveness. With specific reference to program implementers, Donnermeyer and Wurschmidt asserted that implementers’ “level of enthusiasm and support for a prevention curriculum influences their effectiveness because their attitudes are communicated both explicitly and subtly to students during the time it is taught and throughout the remainder of the school day” (pages 259-260). However, it is noteworthy that there are few studies of predictors of the effectiveness of intervention programs. Berkel et al. remarked that “program evaluations have rarely examined more than one dimension in a single study and thus have not untangled possible relations between them” (page 24). Durlak and DuPre further argued that most intervention studies failed to examine the relative importance of different predictors of program effectiveness. Hence, the present study is a constructive response to these criticisms.
Methodologically speaking, there may be queries about the use of multiple regression to examine the relationships between the specific aspects and the global outcomes, because subject, class, and teacher attributes are expected to be highly correlated. However, it is noteworthy that multiple regression analysis is frequently used to examine predictors of program effectiveness. For example, Byrnes et al. showed that program adherence and quality of program implementation were significant predictors of participants’ satisfaction with the program. Of course, the use of structural equation modeling would give a clearer picture of the factors affecting program effectiveness in future studies.
There are several limitations of this study. First, as only four classes of students were involved, it would be desirable to include more students so that the generalizability of the findings could be enhanced. In addition, it would be helpful to examine the postlecture evaluation findings in different groups of students; for example, it would be interesting to ask whether the subject has a different impact on social science and nonsocial science students. Second, the limitations of using a quantitative approach to examine the subjective experiences of the informants should be noted. The use of qualitative techniques in this context would be very helpful. In the present study, data based on one open-ended question were collected, and the findings will be reported in another study. Third, as there are many threats to the internal validity of a one-group pretest-posttest research design, the addition of a control group could help to examine the impact of the intervention on the program participants. Despite these limitations, the present study is a groundbreaking study in different Chinese contexts, and it is a good response to the appeal that psychosocial competencies should be promoted in university students.
See Table 5.
An earlier version of this paper was presented at the “International Conference on Transitioning to Adulthood in Asia: Courtship, Marriage and Work” held at the Asia Research Institute, National University of Singapore on July 21-22, 2011, which was jointly organized by the Asia Research Institute, Faculty of Arts and Social Sciences of the National University of Singapore and the Ministry of Community Development, Youth and Sports. The author thanks Professor Jean Yeung for her invitation extended to him to give an invited paper at the conference. This work and the course on “Tomorrow’s Leaders” are financially supported by The Hong Kong Polytechnic University. Members of the Curriculum Development Team include Daniel Shek, Yat Hung Chui, Siu Wai Lit, Yida Chung, Sowa Ngai, Yammy Chak, Pik Fong Tsui, Cecilia Ma, Lu Yu, and Moon Law.
D. T. L. Shek and R. C. F. Sun, “Development, implementation and evaluation of a holistic positive youth development program: project P.A.T.H.S. in Hong Kong,” International Journal on Disability and Human Development, vol. 8, no. 2, pp. 107–117, 2009.
P. A. Cohen, “Student ratings of instruction and student achievement: a meta-analysis of multi-section validity studies,” Review of Educational Research, vol. 51, no. 3, pp. 281–309, 1981.
H. W. Marsh and L. A. Roche, “Making students' evaluations of teaching effectiveness effective: the critical issues of validity, bias, and utility,” American Psychologist, vol. 52, no. 11, pp. 1187–1197, 1997.
H. W. Marsh, K. T. Hau, C. M. Chung, and T. L. P. Siu, “Students' evaluations of university teaching: Chinese version of the Students' Evaluations of Educational Quality instrument,” Journal of Educational Psychology, vol. 89, no. 3, pp. 568–572, 1997.
C. Kim, E. Damewood, and N. Hodge, “Professor attitude: its effect on teaching evaluations,” Journal of Management Education, vol. 24, no. 4, pp. 458–473, 2000.
D. T. L. Shek and C. S. M. Ng, “Early identification of adolescents with greater psychosocial needs: an evaluation of the project P.A.T.H.S. in Hong Kong,” International Journal of Disability and Human Development, vol. 9, pp. 291–299, 2010.
D. T. L. Shek, C. S. M. Ng, and P. F. Tsui, “Qualitative evaluation of the project P.A.T.H.S.: findings based on focus groups,” International Journal of Disability and Human Development, vol. 9, pp. 307–313, 2010.
D. Royse, Research Methods in Social Work, Brooks Cole, Pacific Grove, Calif, USA, 2004.
J. F. Donnermeyer and T. N. Wurschmidt, “Educators' perceptions of the D.A.R.E. program,” Journal of Drug Education, vol. 27, no. 3, pp. 259–276, 1997.