Abstract

The project P.A.T.H.S. is an indigenously developed positive youth development program in Hong Kong. In the extension phase (2009/2010 school year), subjective outcome evaluation data were collected from 231 schools involving 89,068 participants after completion of the curricula-based Tier 1 Program. With schools as the units of analysis, results showed that participants generally had positive perceptions of the program content and implementers, with over four-fifths of the participants regarding the program as helpful to them. There were some significant grade differences in the subjective outcome evaluation findings, although the related effect sizes were not strong. Multiple regression analyses revealed that program content and program implementers predicted perceived effectiveness of the program. The present study suggests that, irrespective of cohort, students in the junior secondary years perceived the program to be beneficial to them.

1. Introduction

The increasing popularity of implementing effective adolescent prevention programs in recent decades has been a key initiative to tackle adolescent developmental problems [1–4]. Researchers [5–7] identified eight factors that are essential for the implementation of adolescent prevention programs. These include fidelity (i.e., the extent to which the program is implemented as originally designed), dosage (i.e., the number of sessions offered during implementation), quality of delivery (i.e., the extent to which the program is delivered in an authentic manner), participant responsiveness (i.e., participants’ involvement and satisfaction), program differentiation (i.e., the extent to which a program’s theory and practices can be distinguished from other available programs), monitoring (i.e., documenting the nature and amount of services received by the service recipients), program reach (i.e., the proportion of the intended audience who participated in the intervention), and adaptation (i.e., the extent to which the program differs from the original design during implementation).

Research findings showed that positive attitudes toward the program content and program implementers were associated with program outcomes [8–13]. However, little is known about the relative influence of these factors on program effectiveness, as prior studies mainly focused on one component only [6, 14–16]. For example, Rohrbach et al. [13] noted the interrelationships of these factors and suggested exploring their relative influence on program effectiveness in future evaluation research. Berkel et al. [17, page 24] highlighted that “program evaluations have rarely examined more than one dimension in a single study and thus have not untangled possible relations between them”. To fill this gap, the present study explored the relative influence of two program implementation factors on perceived program outcomes.

Prevention researchers have noted the importance of providing culturally competent interventions for a given population [18–20]. However, as adolescent prevention programs have been predominantly conducted in Western countries, it is not clear whether previous findings would hold across different subgroups of participants, such as adolescents in non-Western contexts. This question is pertinent because the assumption that concepts and behaviors apply universally to every individual in a population is debatable and may lead to problematic results [21]. Catalano et al. [22] argued that more effort is needed “to understand how well they can be implemented in real-world settings and what effects they are likely to have…and examine differences of effects on relevant subgroups (e.g., culture, gender, age, etc.)” (page S93). Findings from non-Western cultural contexts would therefore expand the scope of the program evaluation literature.

The Project “P.A.T.H.S. to Adulthood: A Jockey Club Youth Enhancement Scheme” is a large-scale positive youth development program designed for junior secondary school students (Secondary 1 to 3, i.e., Grades 7 to 9) in Hong Kong [23]. The word “P.A.T.H.S.” denotes Positive Adolescent Training through Holistic Social Programmes. The project consists of two tiers of programs. The Tier 1 Program targets all students joining the program in a particular form (i.e., a universal prevention initiative). Through the use of a structured curriculum, students learn competencies with reference to the 15 positive youth development constructs [23]. The Tier 2 Program is specially designed for students with greater psychosocial needs in different psychosocial domains (i.e., selective prevention). After completion of the Tier 1 Program, program participants were required to complete a subjective outcome evaluation form (Form A).

Qualitative and quantitative data collected in the original phase of the project generally suggested that participants (students and program implementers) perceived the program positively [24–35]. However, little is known about whether the impact of program implementation factors on program effectiveness would be sustained in the extension phase. Also, it is not clear whether these relationships would vary by students’ grade level. In particular, the relative influence of these factors on program outcomes is relatively unexplored. Against this background, the purpose of the study was to examine the effectiveness of the Tier 1 Program of the Project P.A.T.H.S. and to test the relative influence of two aspects of program implementation, namely, perceptions of the program (content as well as implementation) and of the program implementers, on perceived program effectiveness. It also attempted to investigate whether the predictive effects of these factors would differ across grade levels.

2. Methods

2.1. Participants and Procedures

A total of 231 schools with 89,068 students joined the Project P.A.T.H.S. in the extension phase of the Full Implementation Phase in the 2009/2010 school year. (The initial phase of the project ran from the 2005/2006 to the 2008/2009 academic year.) A total of 577 aggregated data sets from the participating schools were collected across three grade levels (Secondary 1 level: 219 schools; Secondary 2 level: 185 schools; Secondary 3 level: 173 schools). The mean number of students per school was 154.36 (range: 6 to 240 students), with an average of 4.50 classes per school (range: 1 to 12 classes). Among them, 32.24% of the respondent schools adopted the full program (i.e., the 20-hour program involving 40 units), whereas 67.76% adopted the core program (i.e., the 10-hour program involving 20 units). The mean number of sessions used to implement the program was 28.54 (range: 2 to 48 sessions). While 47.31% of the participating schools incorporated the program into the formal curriculum (e.g., Liberal Studies, Life Education), 52.69% used other modes (e.g., classes and events outside the normal class schedule) to implement the program. The mean number of social workers and teachers implementing the program per school was 1.71 (range: 0 to 7) and 5.11 (range: 0 to 27), respectively.

After completion of the Tier 1 Program, the participants were invited to respond to a Subjective Outcome Evaluation Form (Form A) developed by the first author [36]. The data collection was carried out at the last session of the program. On the day of data collection, the purpose of the evaluation was explained, and the confidentiality of the data was repeatedly emphasized to all students. The students were asked to indicate if they did not wish to participate in the study (i.e., passive informed consent was obtained from the students). All participants responded to all scales in the evaluation form in a self-administered format. Adequate time was provided for the participants to complete the questionnaire.

2.2. Instruments

The Subjective Outcome Evaluation Form (Form A) was used. Broadly speaking, the evaluation form consists of the following parts:
(i) participants’ perceptions of the program, such as program objectives, design, classroom atmosphere, interaction among the students, and the respondents’ participation during class (10 items);
(ii) participants’ perceptions of the program implementers, such as the preparation of the instructor, professional attitude, involvement, and interaction with the students (10 items);
(iii) participants’ perceptions of the effectiveness of the program, such as promotion of different psychosocial competencies, resilience, and overall personal development (16 items);
(iv) the extent to which the participants would recommend the program to other people with similar needs (1 item);
(v) the extent to which the participants would join similar programs in the future (1 item);
(vi) overall satisfaction with the program (1 item);
(vii) things that the participants learned from the program (open-ended question);
(viii) things that the participants appreciated most (open-ended question);
(ix) opinion about the instructor(s) (open-ended question);
(x) areas that require improvement (open-ended question).

For the quantitative data, the implementers collecting the data in each school were requested to input the data into an Excel file developed by the research team, which automatically computed the frequencies and percentages associated with the different ratings for each item. When the schools submitted their reports, they were also requested to submit a soft copy of the consolidated data sheets. In the reports prepared by the schools, the workers were also required to estimate the degree of adherence to the program manuals (i.e., the extent to which the program was implemented in accordance with the program manuals). To facilitate the program evaluation, the research team developed an evaluation manual with standardized instructions for collecting the subjective outcome evaluation data [36]. In addition, adequate training was provided to the implementers during the 20-hour training workshops on how to collect and analyze the data gathered by Form A. After the funding body received the consolidated data, the research team aggregated the data to reconstruct the overall profile based on the subjective outcome evaluation data.
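As an illustration of this consolidation step, the following is a minimal sketch in Python of how per-item rating frequencies could be tallied per school and then aggregated into an overall profile. The file name, column names, and long-format layout are hypothetical assumptions for illustration only; the project itself used school-submitted Excel templates.

```python
# Sketch: tallying item ratings per school and building an aggregate profile.
# The file name, column names, and layout are illustrative assumptions.
import pandas as pd

# Hypothetical long-format data: one row per student response to one item.
df = pd.read_csv("form_a_responses.csv")  # columns: school_id, item_id, rating

# Percentage of responses at each rating point, per school and item,
# mirroring the frequencies the Excel template computed automatically.
school_profile = (
    df.groupby(["school_id", "item_id"])["rating"]
      .value_counts(normalize=True)
      .mul(100)
      .rename("percent")
      .reset_index()
)

# Overall profile reconstructed by aggregating across participating schools.
overall_profile = (
    school_profile.groupby(["item_id", "rating"])["percent"]
                  .mean()
                  .reset_index()
)
print(overall_profile.head())
```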

2.3. Data Analyses

Percentage findings were examined using descriptive statistics. A composite measure of each domain (i.e., perceived qualities of the program, perceived qualities of the program implementers, and perceived program effectiveness) was created by dividing the total score of each domain by the number of items in that domain. Pearson correlation analysis was used to examine whether program content and program implementers were related to program effectiveness. One-way analysis of variance (ANOVA) was used to assess differences in the mean of each factor across grade levels. Multiple regression analysis was performed to examine which factors predicted program effectiveness. All analyses were performed using the Statistical Package for the Social Sciences (SPSS) Version 19.0.
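To make these analytic steps concrete, the following is a minimal Python sketch of the pipeline described above (composite scores, Pearson correlations, one-way ANOVA across grade levels, and multiple regression). The file name, column names, and data layout are illustrative assumptions; the original analyses were conducted in SPSS 19.0.

```python
# Sketch of the analysis pipeline; all file and column names are hypothetical.
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

schools = pd.read_csv("school_level_scores.csv")  # hypothetical school-level file

# Composite score per domain: total score divided by the number of items.
content_items = [f"content_{i}" for i in range(1, 11)]          # 10 items
implementer_items = [f"implementer_{i}" for i in range(1, 11)]  # 10 items
benefit_items = [f"benefit_{i}" for i in range(1, 17)]          # 16 items
schools["content"] = schools[content_items].sum(axis=1) / len(content_items)
schools["implementers"] = schools[implementer_items].sum(axis=1) / len(implementer_items)
schools["effectiveness"] = schools[benefit_items].sum(axis=1) / len(benefit_items)

# Pearson correlations between implementation factors and effectiveness.
r_content, p_content = stats.pearsonr(schools["content"], schools["effectiveness"])
r_impl, p_impl = stats.pearsonr(schools["implementers"], schools["effectiveness"])

# One-way ANOVA across grade levels (Secondary 1 to 3) for one composite.
groups = [g["effectiveness"].values for _, g in schools.groupby("grade")]
f_stat, p_anova = stats.f_oneway(*groups)

# Multiple regression: both factors predicting perceived effectiveness.
model = smf.ols("effectiveness ~ content + implementers", data=schools).fit()
print(model.summary())
```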

3. Results

Quantitative findings based on the closed-ended questions are presented in this paper. Several observations can be highlighted from the findings. In the first place, roughly four-fifths of the participants generally had positive perceptions of the program (Table 1), including clear objectives of the curriculum (85.32%), well-planned teaching activities (83.59%), and adequate peer interaction among the students (82.90%). In addition, a high proportion of the students had positive evaluations of the instructors (Table 2). For example, 89.44% of the participants perceived that the program implementers were very involved; 89% agreed that the implementers encouraged them to participate in the activities; 88.86% perceived that the implementers were ready to offer help when they were in need.

As shown in Table 3, more than four-fifths of the respondents perceived that the program promoted their development, including the ability to distinguish between the good and the bad (86.04%), competence in making sensible and wise choices (85.15%), ability to resist harmful influences (85.04%), and overall development (85.33%). Interestingly, while roughly three-quarters (78.55%) of the participants would recommend the program to friends with similar needs, only 67.79% of them would join similar programs in the future. Finally, more than four-fifths (85.65%) of the participants indicated that they were satisfied with the program (Table 4). Regarding the degree of program adherence estimated by the program implementers, the mean level of adherence was 83.50%, with a range from 14.5% to 100%.

Results of reliability analysis showed that Form A was internally consistent (Table 5): 10 items related to the program content (α = .98), 10 items related to the program implementers (α = .99), 16 items related to the benefits (α = 1.00), and the overall 36 items measuring program effectiveness (α = .99). Results of correlation analysis showed that both program content (r = .84, P < .01) and program implementers (r = .76, P < .01) were strongly associated with program effectiveness. These positive relationships were consistent across all grade levels (Table 6).
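For readers who wish to reproduce the internal consistency check, a minimal sketch of Cronbach's alpha is given below, assuming an items-by-respondents matrix with hypothetical column names; the figures reported above come from the study's own analyses.

```python
# Minimal sketch of Cronbach's alpha for one subscale; the column names in
# the commented usage line are hypothetical, not taken from the study's files.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a DataFrame with one column per scale item."""
    k = items.shape[1]
    sum_item_variances = items.var(axis=0, ddof=1).sum()
    total_score_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - sum_item_variances / total_score_variance)

# e.g., the 10 program-content items (hypothetical column names):
# alpha_content = cronbach_alpha(scores[[f"content_{i}" for i in range(1, 11)]])
```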

To examine differences in the subjective outcome measures (i.e., program content, program implementers, and program effectiveness) across grade levels, a series of one-way ANOVAs was performed with the different subjective outcome indicators as dependent variables and grade level (i.e., Secondary 1 to 3) as the independent variable. Significant results were found for program content (F(2, 574) = 6.07, P < .01), program implementers (F(2, 574) = 8.62, P < .01), program effectiveness (F(2, 574) = 11.51, P < .01), and the total scale (F(2, 574) = 9.85, P < .01) (Table 5).

Post hoc analysis using the Bonferroni adjustment (P = .02) revealed significant differences between Secondary 1 (M = 4.37) and Secondary 2 (M = 4.26) students in their perceptions of the program content (P < .01) and of the program implementers (Secondary 1: M = 4.68, Secondary 2: M = 4.55, P < .01). Significant grade differences were also found in students' perceptions of program effectiveness (Secondary 1: M = 3.50, Secondary 2: M = 3.37, Secondary 3: M = 3.41, P < .01). Similar results were found for overall program effectiveness (Secondary 1: M = 4.07, Secondary 2: M = 3.95, P < .01; Secondary 3: M = 4.00, P < .05). It is noteworthy that these differences were not significant between Secondary 2 and 3 students (P > .05). Overall, junior students perceived the program as more effective than their senior counterparts, although the effect size of the grade differences was not strong.
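A minimal sketch of such Bonferroni-adjusted pairwise grade comparisons is given below (three comparisons, so the per-test significance level is roughly .05/3 ≈ .017); the data file, column names, and grade labels are hypothetical assumptions, and the actual post hoc tests were run in SPSS.

```python
# Sketch of Bonferroni-adjusted pairwise comparisons of one composite score
# across the three grade levels; file and column names are illustrative.
from itertools import combinations

import pandas as pd
from scipy import stats

schools = pd.read_csv("school_level_scores.csv")  # hypothetical school-level file
alpha_adjusted = 0.05 / 3  # three pairwise comparisons among Secondary 1-3

for g1, g2 in combinations(["S1", "S2", "S3"], 2):
    a = schools.loc[schools["grade"] == g1, "effectiveness"]
    b = schools.loc[schools["grade"] == g2, "effectiveness"]
    t, p = stats.ttest_ind(a, b)
    print(f"{g1} vs {g2}: t = {t:.2f}, significant at adjusted alpha: {p < alpha_adjusted}")
```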

Table 7 presents the multiple regression analysis results. Program content was positively associated with perceived program effectiveness (P < .01). In contrast, program implementers were not associated with program effectiveness (P > .05). However, the results based on the Secondary 2 students showed that perceptions of the program implementers were negatively associated with perceived program effectiveness (β = −.28, P < .01). Further analyses showed that program content (β = .85, P < .01) had a significant predictive effect on program effectiveness, whereas the effect of program implementers was not significant (β = .01, P > .05). This model explained 71% of the variance in perceived program effectiveness.

4. Discussion

The present findings revealed that the program participants generally rated their participation in the program positively. In line with previous findings based on various methods and data sources [24–30], the majority of the participants reported that they were satisfied with the program content, had an enjoyable experience, and perceived the program as beneficial to the development of personal and social competencies. The present study provided support for the hypothesis that perceptions of the program content and program implementers were positively associated with program effectiveness. The findings suggest that participants’ needs and interests were satisfied, as they reported that they would join similar programs again or recommend them to their peers. Taken as a whole, the evaluation findings in the extension phase are highly similar to those reported in the original phase. From a triangulation point of view, data collected from different sources based on different methods generally suggest that the program is well received by different stakeholders.

The second aim of the study was to examine the relative influence of two program implementation factors (i.e., perceived program attributes and program implementers) on program effectiveness. Results of the regression analyses indicated that program content, but not program implementers, had a significant predictive effect on program effectiveness. In line with previous studies [8, 9, 37–39], clear objectives of the curriculum, provision of well-designed teaching activities, participants’ active participation, and perception of a motivating learning environment were associated with program outcomes. The findings indicate the importance of a well-planned program and of successfully eliciting participants’ engagement for program effectiveness.

Another purpose of the study was to examine whether the relationships between program implementation factors and program evaluation outcomes would vary by students’ grade level. Consistent with a previous study [34], Secondary 1 students perceived the program more favorably than their higher grade level counterparts (i.e., Secondary 2 and 3 students). This observation might be related to the characteristics of the students. Compared with Secondary 3 students, Secondary 1 students were new to the project and were more interested and motivated to learn and participate in the program activities. In addition, senior students may be more critical and more likely to engage in rebellious behavior during this stressful period. Nevertheless, the differences observed were small, and further studies examining the related phenomena are needed.

It is interesting to note the negative predictive effect of program implementers on perceived program effectiveness. Some might question whether this result was related to the program implementers’ teaching background. It is noteworthy that all program implementers of the Tier 1 Program were experienced teachers and frontline social workers who had at least 3 years of experience in working with youth and had received more than 20 hours of relevant formalized training workshops. Second, a previous study [34] showed that program participants generally perceived program implementers positively (e.g., using effective and interactive teaching methods and skills, eliciting participants’ learning motivation in the learning process, displaying enthusiasm in teaching). One possible explanation for this unexpected result might be the unit of analysis of the data. In the current study, data were aggregated at the school level, and the school means for each scale were computed and used for analysis. Clearly, it is important to examine this issue again using individual data rather than aggregate data. In addition, it would be helpful to test the associations between various dimensions of program implementation and program outcomes using advanced statistical techniques. To increase the precision of measuring the effects of different program implementation factors at each level (e.g., students, classrooms, and schools), future research should use multilevel statistical modeling to analyze the nested data (i.e., students nested within classrooms/schools), as sketched below.
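The following is a minimal sketch of that multilevel approach, assuming hypothetical student-level data nested within schools; this is not an analysis conducted in the present study, and the file and column names are illustrative.

```python
# Sketch of a two-level model: students nested within schools, with a random
# intercept per school; data file and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

students = pd.read_csv("student_level_scores.csv")  # hypothetical student-level file

mlm = smf.mixedlm(
    "effectiveness ~ content + implementers",  # two implementation factors
    data=students,
    groups=students["school_id"],              # schools as grouping units
).fit()
print(mlm.summary())
```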

While the present study focused on the influence of two program implementation factors, it is possible that other facilitators (e.g., fidelity, adaptation, dosage, reach) also influence program effectiveness. For example, high levels of fidelity and increased cultural relevance of the program have been associated with program outcomes [6, 40, 41]. Program evaluation researchers have noted the need to develop a theoretical model that identifies how different implementation factors exert their influence on program outcomes and thus to untangle their joint effects on program effectiveness. Future research should include other factors in order to depict a comprehensive picture of the complex process of effective program implementation.

Providing a positive developmental experience in early adolescence would promote individuals’ different competencies and reduce negative outcomes [42]. Consistent with the Western literature, a positive youth development program appears to be a promising approach to promote individuals’ personal, emotional, social, and spiritual competencies and to deter a range of problem and risky behaviors among Hong Kong adolescents [43, 44]. Understanding the factors underlying the complex program implementation process is critical to achieving the intended outcomes. The findings of the present study underscore the impact of program content and program implementers on perceived program effectiveness.

There are several limitations of this study. First, the use of self-report measures from a single perspective cannot give a full picture of the subjective outcome evaluation. However, this approach is commonly used in program evaluation research [6, 7, 9, 12], and the reliability of the scales is very promising. Therefore, we could argue that the findings of the study are reliable and valid. Another limitation is the cross-sectional nature of the data. Future research should collect data at several points in time and also include predictors from various contexts, such as school and community. In particular, in seeking to monitor the rate of change of perceived program effectiveness over time, growth curve modeling could be used to examine whether the predictive effects of the program implementation components on the shape of growth and the variability in individual trajectories would vary by the number of waves; a sketch is given below. However, in doing this, anonymous personal identifiers would have to be collected from the students. Third, aggregated data with schools as the units of analysis, rather than individual data, were used in this study. Theoretically speaking, it would be interesting to look at the differences in the findings based on these two approaches. Finally, ordinary least squares analyses were used. As structural equation modeling may give a better estimation of suppression effects among the predictors, such techniques could be considered in future studies.
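As an illustration of the growth curve suggestion above, a minimal sketch of a linear growth model over repeated waves, fitted as a mixed model with random slopes for time, is given below; the data layout and variable names are hypothetical, as no longitudinal subjective outcome data were analyzed in this study.

```python
# Sketch of a simple linear growth curve model for repeated waves of Form A
# data (long format: one row per student per wave); all names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

waves = pd.read_csv("form_a_waves.csv")  # columns: student_id, wave, effectiveness, content

growth = smf.mixedlm(
    "effectiveness ~ wave + content",  # linear change plus a time-invariant predictor
    data=waves,
    groups=waves["student_id"],        # random effects per student
    re_formula="~wave",                # random intercept and slope for time
).fit()
print(growth.summary())
```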

Despite the aforementioned limitations, the current study contributes to the positive youth development literature. It sheds light on which program components are associated with perceived program effectiveness. Shek et al. [45] argued that more research work is needed on subjective outcome evaluation, especially in social work education. To promote the dissemination of efficacious programs, it is important to consider the characteristics of the participants. As noted by Catalano et al. [22], “if we are to discern why these (positive youth development) programs are effective, it is clear that it will be important in the future for programs to define and assess implementation methods and change strategies, and that they also evaluate the impact on youth development constructs…. and how these effects varied by subgroups” (page S94). The findings of the study attempt to address this gap in the program evaluation research and provide insights for practitioners when designing and implementing effective positive youth development programs for Chinese adolescents. Most importantly, in conjunction with the previous findings, the present findings show that the influence of program attributes and program implementers on program effectiveness is relatively stable across different cohorts of students in Hong Kong [46–50].