The Scientific World Journal
Volume 2012 | Article ID 493957 | https://doi.org/10.1100/2012/493957
Research Article | Open Access
Special Issue: Developmental Issues in Chinese Adolescents

Subjective Outcome Evaluation of the Project P.A.T.H.S. in Different Cohorts of Students

Daniel T. L. Shek and Cecilia M. S. Ma

Academic Editor: Joav Merrick
Received: 03 Sep 2011 | Accepted: 02 Nov 2011 | Published: 01 Aug 2012

Abstract

The project P.A.T.H.S. is an indigenously developed positive youth development program in Hong Kong. In the extension phase (2009/2010 school year), subjective outcome evaluation data were collected from 231 schools involving 89,068 participants after completion of the curriculum-based Tier 1 Program. With schools as the units of analysis, results showed that participants generally had positive perceptions of the program content and implementers, with over four-fifths of the participants regarding the program as helpful to them. There were some significant grade differences in the subjective outcome evaluation findings, although the related effect sizes were not strong. Multiple regression analyses revealed that program content and program implementers predicted perceived effectiveness of the program. The present study suggests that, irrespective of cohort, students in the junior secondary years perceived the program to be beneficial to them.

1. Introduction

The increasing popularity of implementing effective adolescent prevention programs in recent decades has been a key initiative to tackle adolescent developmental problems [1–4]. Researchers [5–7] identified eight factors that are essential for the implementation of adolescent prevention programs: fidelity (i.e., the extent to which the program is implemented as originally designed), dosage (i.e., the number of sessions offered during implementation), quality of delivery (i.e., the extent to which the program is delivered in an authentic manner), participant responsiveness (i.e., participants’ involvement and satisfaction), program differentiation (i.e., the extent to which a program’s theory and practices can be distinguished from other available programs), monitoring (i.e., documenting the nature and amount of services received by the service recipients), program reach (i.e., the proportion of the intended audience who participated in the intervention), and adaptation (i.e., the extent to which the program departs from the original design during implementation).

Research findings showed that positive attitudes toward the program content and program implementers were associated with program outcomes [8–13]. However, little is known about the relative influence of these factors on program effectiveness, as prior studies mainly focused on one component only [6, 14–16]. For example, Rohrbach et al. [13] noted the interrelationships of these factors and suggested exploring their relative influence on program effectiveness in future evaluation research. Berkel et al. [17, page 24] highlighted that “program evaluations have rarely examined more than one dimension in a single study and thus have not untangled possible relations between them”. To fill this gap, the present study explored the relative influence of two program implementation factors on perceived program outcomes.

Prevention researchers have noted the importance of providing culturally competent interventions for a given population [18–20]. However, as adolescent prevention programs have been predominantly conducted in Western countries, it is not clear whether previous findings would hold across different subgroups of participants, such as adolescents in non-Western contexts. This question is important because the assumption that concepts and behaviors apply universally to every individual in a population is debatable and may lead to problematic results [21]. Catalano et al. [22] argued that more effort is needed “to understand how well they can be implemented in real-world settings and what effects they are likely to have…and examine differences of effects on relevant subgroups (e.g., culture, gender, age, etc.)” (page S93). Findings from non-Western cultural contexts would therefore expand the scope of the program evaluation literature.

The Project “P.A.T.H.S. to Adulthood: A Jockey Club Youth Enhancement Scheme” is a large-scale positive youth development program designed for junior secondary school students (Secondary 1 to 3, i.e., Grades 7 to 9) in Hong Kong [23]. The word “P.A.T.H.S.” denotes Positive Adolescent Training through Holistic Social Programmes. The project consists of two tiers of programs. The Tier 1 Program targets all students joining the program in a particular form (i.e., a universal prevention initiative). Through a structured curriculum, students learn competencies with reference to the 15 positive youth development constructs [23]. The Tier 2 Program is specially designed for students with greater psychosocial needs in different psychosocial domains (i.e., selective prevention). After completion of the Tier 1 Program, program participants were required to complete a subjective outcome evaluation form (Form A).

Qualitative and quantitative data collected in the original phase of the project generally suggested that participants (students and program implementers) perceived the program positively [24–35]. However, little is known about whether the impact of program implementation factors on program effectiveness would be sustained in the extension phase, or whether these relationships would vary by students’ grade level. In particular, the relative influence of these factors on program outcomes is relatively unexplored. Against this background, the purpose of the study was to examine the effectiveness of the Tier 1 Program of the Project P.A.T.H.S. and to test the relative influence of two aspects of program implementation, namely, perceptions of the program (content as well as implementation) and of the program implementers, on perceived program effectiveness. It also attempted to investigate whether the predictive effects of these factors would differ across grade levels.

2. Methods

2.1. Participants and Procedures

A total of 231 schools with 89,068 students joined the Project P.A.T.H.S. in the extension phase of the Full Implementation Phase in the 2009/2010 school year. (The initial phase of the project ran from the 2005/2006 to the 2008/2009 academic year.) A total of 577 aggregated data sets from the participating schools were collected across three grade levels (Secondary 1 level: 219 schools; Secondary 2 level: 185 schools; Secondary 3 level: 173 schools). The mean number of students per school was 154.36 (range: 6 to 240), with an average of 4.50 classes per school (range: 1 to 12). Among the respondent schools, 32.24% adopted the full program (i.e., the 20-hour program involving 40 units), whereas 67.76% adopted the core program (i.e., the 10-hour program involving 20 units). The mean number of sessions used to implement the program was 28.54 (range: 2 to 48). While 47.31% of the participating schools incorporated the program into the formal curriculum (e.g., Liberal Studies, Life Education), 52.69% used other modes (e.g., classes and events outside the normal class schedule) to implement the program. The mean numbers of social workers and teachers implementing the program per school were 1.71 (range: 0 to 7) and 5.11 (range: 0 to 27), respectively.

After completion of the Tier 1 Program, the participants were invited to respond to a Subjective Outcome Evaluation Form (Form A) developed by the first author [36]. Data collection was carried out at the last session of the program. On the day of data collection, the purpose of the evaluation was explained, and the confidentiality of the data was repeatedly emphasized to all students. The students were asked to indicate if they did not wish to take part in the study (i.e., passive informed consent was obtained from the students). All participants responded to all scales in the evaluation form in a self-administered format, and adequate time was provided for them to complete the questionnaire.

2.2. Instruments

The Subjective Outcome Evaluation Form (Form A) was used. Broadly speaking, the evaluation form has several parts, as follows:
(i) participants’ perceptions of the program, such as program objectives, design, classroom atmosphere, interaction among the students, and the respondents’ participation during class (10 items);
(ii) participants’ perceptions of the program implementers, such as the preparation of the instructor, professional attitude, involvement, and interaction with the students (10 items);
(iii) participants’ perceptions of the effectiveness of the program, such as promotion of different psychosocial competencies, resilience, and overall personal development (16 items);
(iv) the extent to which the participants would recommend the program to other people with similar needs (1 item);
(v) the extent to which the participants would join similar programs in the future (1 item);
(vi) overall satisfaction with the program (1 item);
(vii) things that the participants learned from the program (open-ended question);
(viii) things that the participants appreciated most (open-ended question);
(ix) opinion about the instructor(s) (open-ended question);
(x) areas that require improvement (open-ended question).

For the quantitative data, the implementers collecting the data in each school were requested to input the data into an Excel file developed by the research team, which automatically computed the frequencies and percentages associated with the different ratings for each item. When the schools submitted their reports, they were also requested to submit a soft copy of the consolidated data sheets. In the reports prepared by the schools, the workers were also required to estimate the degree of adherence to the program manuals (i.e., the extent to which the program was implemented in accordance with the manuals). To facilitate the program evaluation, the research team developed an evaluation manual with standardized instructions for collecting the subjective outcome evaluation data [36]. In addition, adequate training on how to collect and analyze the Form A data was provided to the implementers during the 20-hour training workshops. After the funding body received the consolidated data, the research team aggregated them to reconstruct the overall profile of the subjective outcome evaluation findings.
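For illustration, the per-item consolidation step can be mimicked in a few lines of code. The sketch below is not the actual Excel template; the column name, rating scale, and data layout are assumptions introduced here for demonstration.

```python
# Illustrative stand-in for the Excel consolidation template described above
# (hypothetical column names and fabricated ratings; not the real Form A file).
import pandas as pd

def rating_profile(df: pd.DataFrame, item: str, scale_max: int = 6) -> pd.DataFrame:
    """Frequency and percentage of each rating option for a single item."""
    counts = df[item].value_counts().reindex(range(1, scale_max + 1), fill_value=0)
    return pd.DataFrame({"n": counts, "%": (counts / counts.sum() * 100).round(2)})

def percent_positive(df: pd.DataFrame, item: str, positive=(4, 5, 6)) -> float:
    """Share of respondents choosing a positive option (4-6 on the 6-point items)."""
    return round(df[item].isin(positive).mean() * 100, 2)

# Example with fabricated ratings for one hypothetical item:
ratings = pd.DataFrame({"content_q1": [6, 5, 4, 4, 3, 5, 6, 2, 5, 4]})
print(rating_profile(ratings, "content_q1"))
print(percent_positive(ratings, "content_q1"))  # 80.0
```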

2.3. Data Analyses

Percentage findings were examined using descriptive statistics. A composite measure for each domain (i.e., perceived qualities of the program, perceived qualities of the program implementers, and perceived program effectiveness) was created by dividing the total score for that domain by its number of items. Pearson correlation analysis was used to examine whether program content and program implementers were related to program effectiveness. One-way analysis of variance (ANOVA) was used to assess differences in the mean of each factor across grade levels. Multiple regression analysis was performed to examine which factors predicted program effectiveness. All analyses were performed using the Statistical Package for the Social Sciences (SPSS), Version 19.0.
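The study itself used SPSS 19.0. As a rough illustration of the same pipeline, the sketch below reproduces the composite scoring, correlation, ANOVA, and standardized-regression steps on fabricated school-level data; all column names and values here are hypothetical, not the project's data.

```python
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
# Fabricated stand-in for the 577 school-level aggregated data sets.
# Each composite = total of item scores / number of items in the domain.
df = pd.DataFrame({
    "grade": rng.choice(["S1", "S2", "S3"], size=577),
    "content": rng.normal(4.32, 0.31, 577),
    "implementers": rng.normal(4.61, 0.31, 577),
})
df["effectiveness"] = 0.8 * df["content"] + rng.normal(0, 0.15, 577)

# Pearson correlation between an implementation factor and effectiveness
r, p = stats.pearsonr(df["content"], df["effectiveness"])

# One-way ANOVA: does the composite differ across the three grade levels?
F, p_anova = stats.f_oneway(*(g["effectiveness"].to_numpy()
                              for _, g in df.groupby("grade")))

# Multiple regression on z-scored variables, so coefficients are standardized betas
z = df[["content", "implementers", "effectiveness"]].apply(stats.zscore)
fit = smf.ols("effectiveness ~ content + implementers", data=z).fit()
print(round(r, 2), round(F, 2), fit.params.round(2), round(fit.rsquared, 2))
```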

3. Results

Quantitative findings based on the closed-ended questions are presented in this paper. Several observations can be highlighted. First, roughly four-fifths of the participants had positive perceptions of the program (Table 1), including clear objectives of the curriculum (85.32%), well-planned teaching activities (83.59%), and much peer interaction among the students (82.90%). In addition, a high proportion of the students had positive evaluations of the instructors (Table 2). For example, 89.44% of the participants perceived that the program implementers were very involved, 89.00% agreed that the implementers encouraged them to participate in the activities, and 88.86% perceived that the implementers were ready to offer help when they were in need.


Table 1: Respondents with positive responses (options 4–6).

Item | S1 n (%) | S2 n (%) | S3 n (%) | Overall N (%)

(1) The objectives of the curriculum are very clear | 26,181 (85.98) | 22,387 (84.33) | 21,933 (85.66) | 70,501 (85.32)
(2) The design of the curriculum is very good | 25,224 (82.88) | 21,351 (80.49) | 21,176 (82.72) | 67,751 (82.03)
(3) The activities were carefully planned | 25,664 (84.47) | 21,779 (82.21) | 21,504 (84.10) | 68,947 (83.59)
(4) The classroom atmosphere was very pleasant | 24,977 (82.29) | 21,568 (81.56) | 21,410 (83.81) | 67,955 (82.55)
(5) There was much peer interaction amongst the students | 25,078 (82.86) | 21,612 (81.95) | 21,380 (83.90) | 68,070 (82.90)
(6) I participated actively during lessons (including discussions, sharing, games, etc.) | 25,110 (82.60) | 21,246 (80.26) | 21,048 (82.36) | 67,404 (81.74)
(7) I was encouraged to do my best | 24,085 (79.24) | 20,425 (77.08) | 20,358 (79.63) | 64,868 (78.65)
(8) The learning experience I encountered enhanced my interest toward the lessons | 24,168 (79.70) | 20,425 (77.24) | 20,398 (79.96) | 64,991 (78.97)
(9) Overall speaking, I have very positive evaluation of the program | 24,092 (79.33) | 20,592 (77.76) | 20,489 (80.16) | 65,173 (79.08)
(10) On the whole, I like this curriculum very much | 24,305 (80.25) | 20,672 (78.22) | 20,530 (80.49) | 65,507 (79.65)

All items are on a 6-point Likert scale with 1: strongly disagree; 2: disagree; 3: slightly disagree; 4: slightly agree; 5: agree; 6: strongly agree. Only respondents with positive responses (options 4–6) are shown in the table. S1: Secondary 1 level; S2: Secondary 2 level; S3: Secondary 3 level.

Table 2: Respondents with positive responses (options 4–6).

Item | S1 n (%) | S2 n (%) | S3 n (%) | Overall N (%)

(1) The instructor(s) had a good mastery of the curriculum | 26,689 (87.72) | 22,957 (86.62) | 22,661 (88.66) | 72,307 (87.67)
(2) The instructor(s) was well prepared for the lessons | 27,119 (89.15) | 23,263 (87.75) | 22,822 (89.29) | 73,204 (88.73)
(3) The instructor(s)’ teaching skills were good | 26,688 (87.87) | 22,676 (85.62) | 22,421 (87.83) | 71,785 (87.11)
(4) The instructor(s) showed good professional attitudes | 27,077 (89.10) | 23,204 (87.56) | 22,805 (89.31) | 73,086 (88.66)
(5) The instructor(s) was very involved | 27,283 (89.76) | 23,400 (88.38) | 23,007 (90.17) | 73,690 (89.44)
(6) The instructor(s) encouraged students to participate in the activities | 27,255 (89.73) | 23,265 (87.80) | 22,842 (89.46) | 73,362 (89.00)
(7) The instructor(s) cared for the students | 26,602 (87.60) | 22,726 (85.79) | 22,384 (87.66) | 71,712 (87.02)
(8) The instructor(s) was ready to offer help to students when needed | 27,101 (89.24) | 23,235 (87.74) | 22,882 (89.61) | 73,218 (88.86)
(9) The instructor(s) had much interaction with the students | 26,127 (85.99) | 22,429 (84.68) | 22,205 (86.99) | 70,761 (85.89)
(10) Overall speaking, I have very positive evaluation of the instructors | 27,016 (88.83) | 23,326 (87.97) | 22,915 (89.67) | 73,257 (88.82)

All items are on a 6-point Likert scale with 1: strongly disagree; 2: disagree; 3: slightly disagree; 4: slightly agree; 5: agree; 6: strongly agree. Only respondents with positive responses (options 4–6) are shown in the table. S1: Secondary 1 level; S2: Secondary 2 level; S3: Secondary 3 level.

As shown in Table 3, more than four-fifths of the respondents perceived that the program promoted their development, including the ability to distinguish between the good and the bad (86.04%), competence in making sensible and wise choices (85.15%), ability to resist harmful influences (85.04%), and overall development (85.33%). Interestingly, while more than three-quarters (78.55%) of the participants would recommend the program to friends with similar needs, only 67.79% would join similar programs in the future. Finally, more than four-fifths (85.65%) of the participants indicated that they were satisfied with the program (Table 4). Regarding the degree of program adherence estimated by the program implementers, the mean level of adherence was 83.50%, with a range from 14.5% to 100%.


Table 3: The extent to which the Tier 1 Program (i.e., the program in which all students have joined) has helped your students. Respondents with positive responses (options 3–5).

Item | S1 n (%) | S2 n (%) | S3 n (%) | Overall N (%)

(1) It has strengthened my bonding with teachers, classmates, and my family | 24,664 (81.03) | 20,825 (78.78) | 20,605 (80.71) | 66,094 (80.17)
(2) It has strengthened my resilience in adverse conditions | 25,200 (82.87) | 21,329 (80.69) | 20,963 (82.13) | 67,492 (81.90)
(3) It has enhanced my social competence | 25,833 (85.00) | 21,681 (82.15) | 21,390 (83.86) | 68,904 (83.67)
(4) It has improved my ability in handling and expressing my emotions | 25,569 (84.14) | 21,567 (81.69) | 21,260 (83.37) | 68,396 (83.07)
(5) It has enhanced my cognitive competence | 25,592 (84.28) | 21,538 (81.69) | 21,139 (82.87) | 68,269 (82.95)
(6) My ability to resist harmful influences has been improved | 26,187 (86.22) | 22,190 (84.03) | 21,647 (84.88) | 70,024 (85.04)
(7) It has strengthened my ability to distinguish between the good and the bad | 26,472 (87.15) | 22,442 (85.02) | 21,909 (85.94) | 70,823 (86.04)
(8) It has increased my competence in making sensible and wise choices | 26,289 (86.54) | 22,137 (83.86) | 21,685 (85.04) | 70,111 (85.15)
(9) It has helped me to have life reflections | 25,328 (83.42) | 21,678 (82.14) | 21,442 (84.13) | 68,448 (83.23)
(10) It has reinforced my self-confidence | 25,057 (82.50) | 20,920 (79.25) | 20,552 (80.64) | 66,529 (80.80)
(11) It has increased students’ self-awareness | 25,465 (83.87) | 21,382 (81.08) | 21,051 (82.60) | 67,898 (82.52)
(12) It has helped students to face the future with a positive attitude | 25,749 (84.82) | 21,675 (82.16) | 21,423 (84.14) | 68,847 (83.71)
(13) It has helped students to cultivate compassion and care about others | 25,591 (84.29) | 21,775 (82.56) | 21,420 (84.01) | 68,786 (83.62)
(14) It has encouraged students to care about the community | 24,984 (82.28) | 21,128 (80.10) | 20,830 (81.70) | 66,942 (81.36)
(15) It has promoted students’ sense of responsibility in serving the society | 25,309 (83.34) | 21,336 (80.85) | 20,962 (82.19) | 67,607 (82.13)
(16) It has enriched the overall development of the students | 26,189 (86.21) | 22,189 (84.10) | 21,829 (85.67) | 70,207 (85.33)

All items are on a 5-point Likert scale with 1: unhelpful; 2: not very helpful; 3: slightly helpful; 4: helpful; 5: very helpful. Only respondents with positive responses (options 3–5) are shown in the table. S1: Secondary 1 level; S2: Secondary 2 level; S3: Secondary 3 level.
Table 4
(a) If your friends have needs and conditions similar to yours, will you suggest him/her to join this course?

Respondents with positive responses (options 3-4)
S1 n (%) | S2 n (%) | S3 n (%) | Overall N (%)
24,324 (81.92) | 19,886 (76.41) | 19,326 (77.32) | 63,536 (78.55)

The item is on a 4-point Likert scale with 1: definitely will not suggest; 2: will not suggest; 3: will suggest; 4: definitely will suggest. Only respondents with positive responses (options 3-4) are shown in the table. S1: Secondary 1 level; S2: Secondary 2 level; S3: Secondary 3 level.
(b) Will you participate in similar courses again in the future?

Respondents with positive responses (options 3-4)
S1 n (%) | S2 n (%) | S3 n (%) | Overall N (%)
21,242 (71.50) | 17,003 (65.30) | 16,647 (66.57) | 54,892 (67.79)

The item is on a 4-point Likert scale with 1: definitely will not participate; 2: will not participate; 3: will participate; 4: definitely will participate. Only respondents with positive responses (options 3-4) are shown in the table. S1: Secondary 1 level; S2: Secondary 2 level; S3: Secondary 3 level.
(c) On the whole, are you satisfied with this course?

Respondents with positive responses (options 4–6)
S1 n (%) | S2 n (%) | S3 n (%) | Overall N (%)
25,978 (87.05) | 22,005 (84.23) | 21,452 (85.66) | 69,435 (85.65)

The item is on a 6-point satisfaction scale. Only respondents with positive responses (options 4–6) are shown in the table. S1: Secondary 1 level; S2: Secondary 2 level; S3: Secondary 3 level.

Results of reliability analysis showed that Form A was internally consistent (Table 5): 10 items related to the program content (α = .98), 10 items related to the program implementers (α = .99), 16 items related to the benefits (α = 1.00), and the overall 36 items measuring program effectiveness (α = .99). Results of correlation analysis showed that both program content (r = .84, p < .01) and program implementers (r = .76, p < .01) were strongly associated with program effectiveness. These positive relationships were consistent across all grade levels (Table 6).


Table 5

Domain | S1 M (SD), α (mean#) | S2 M (SD), α (mean#) | S3 M (SD), α (mean#) | Overall M (SD), α (mean#)
Program content (10 items) | 4.37** (.29), .98 (.84) | 4.26** (.31), .99 (.89) | 4.33 (.32), .99 (.90) | 4.32 (.31), .98 (.87)
Program implementers (10 items) | 4.68** (.30), .99 (.89) | 4.55** (.31), 1.00 (.95) | 4.61 (.31), 1.00 (.95) | 4.61 (.31), .99 (.93)
Program effectiveness (16 items) | 3.50** (.25), .99 (.91) | 3.37** (.29), 1.00 (.95) | 3.41** (.30), 1.00 (.96) | 3.43 (.28), 1.00 (.94)
Total effectiveness (36 items) | 4.07** (.25), .99 (.76) | 3.95** (.29), 1.00 (.85) | 4.00* (.29), .99 (.84) | 4.01 (.28), .99 (.82)

#Mean inter-item correlations.
*p < .05; **p < .01; Bonferroni adjustment (p = .02). S1: Secondary 1 level; S2: Secondary 2 level; S3: Secondary 3 level.

Table 6

Variable | S1 | S2 | S3 | Overall
Program content (10 items) | .78** | .88** | .87** | .84**
Program implementers (10 items) | .73** | .78** | .76** | .76**

**p < .01. S1: Secondary 1 level; S2: Secondary 2 level; S3: Secondary 3 level.
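For reference, the reliability statistics reported in Table 5 are standard computations; a minimal sketch follows. This is not the authors' SPSS syntax, and the near-1.00 alphas in Table 5 partly reflect the use of school means, rather than individual responses, as cases.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_cases, k_items) matrix of item scores."""
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of the scale total
    return k / (k - 1) * (1 - item_var / total_var)

def mean_interitem_r(scores: np.ndarray) -> float:
    """Mean of the off-diagonal inter-item correlations (the '#' values in Table 5)."""
    r = np.corrcoef(scores, rowvar=False)
    k = r.shape[0]
    return (r.sum() - k) / (k * (k - 1))

# Example with fabricated scores for a 3-item scale (four cases):
scores = np.array([[4, 5, 4], [5, 5, 6], [3, 4, 3], [6, 6, 5]], dtype=float)
print(round(cronbach_alpha(scores), 2), round(mean_interitem_r(scores), 2))
```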

To examine differences in the subjective outcome measures (i.e., program content, program implementers, and program effectiveness) across grade levels, a series of one-way ANOVAs was performed with the subjective outcome indicators as dependent variables and grade level (i.e., Secondary 1 to 3) as the independent variable. Significant results were found for program content (F(2, 574) = 6.07, p < .01), program implementers (F(2, 574) = 8.62, p < .01), program effectiveness (F(2, 574) = 11.51, p < .01), and the total scale (F(2, 574) = 9.85, p < .01) (Table 5).

Post hoc analyses using the Bonferroni adjustment (p = .02) revealed significant differences between Secondary 1 (M = 4.37) and Secondary 2 (M = 4.26) students in their perceptions of the program content (p < .01) and of the program implementers (Secondary 1: M = 4.68; Secondary 2: M = 4.55, p < .01). Significant grade differences were also found in students’ perceptions of program effectiveness (Secondary 1: M = 3.50; Secondary 2: M = 3.37; Secondary 3: M = 3.41, p < .01). Similar results emerged for overall program effectiveness (Secondary 1: M = 4.07; Secondary 2: M = 3.95, p < .01; Secondary 3: M = 4.00, p < .05). It is noteworthy that these differences were not significant between Secondary 2 and 3 classes (p > .05). Overall, junior students perceived the program as more effective than did their senior counterparts. However, the effect size of the grade differences was not strong.
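A sketch of such Bonferroni-adjusted pairwise comparisons is shown below. It assumes a data frame like the fabricated one in the earlier Data Analyses sketch and uses independent-samples t-tests, which may differ in detail from the SPSS post hoc procedure the authors ran.

```python
from itertools import combinations
from scipy import stats

def pairwise_bonferroni(df, dv, group="grade", alpha=0.05):
    """Pairwise comparisons across grade levels with a Bonferroni-adjusted criterion."""
    levels = sorted(df[group].unique())
    n_tests = len(levels) * (len(levels) - 1) // 2
    threshold = alpha / n_tests  # 0.05 / 3 ~= .017, i.e., the p = .02 criterion above
    results = []
    for a, b in combinations(levels, 2):
        t, p = stats.ttest_ind(df.loc[df[group] == a, dv],
                               df.loc[df[group] == b, dv])
        results.append((a, b, round(t, 2), round(p, 4), p < threshold))
    return results

# e.g., pairwise_bonferroni(df, "effectiveness") with the earlier fabricated df
```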

Table 7 presents the multiple regression analysis results. Program content was positively associated with perceived program effectiveness (p < .01). On the other hand, program implementers were not associated with program effectiveness (p > .05). However, the result based on the Secondary 2 students showed that perception of the program implementers was negatively associated with perceived program effectiveness (β = −.28, p < .01). Further analyses showed that program content (β = .85, p < .01) had a significant predictive effect on program effectiveness, while this relationship was not significant for program implementers (β = −.01, p > .05). This model explained 71% of the variance in the prediction of program effectiveness.


Table 7

Group | Program content β^a | Program implementers β^a | R | R²
S1 | .62** | .18 | .78 | .61
S2 | 1.14** | −.28** | .89 | .78
S3 | .94** | −.08 | .87 | .76
Overall | .85** | −.01 | .84 | .71

^a Standardized coefficients.
**p < .01. S1: Secondary 1 level; S2: Secondary 2 level; S3: Secondary 3 level.

4. Discussion

The present findings revealed that the program participants generally rated their participation in the program positively. In line with previous findings based on different methods and data sources [24–30], the majority of the participants reported that they were satisfied with the program content, had an enjoyable experience, and perceived the program as beneficial to the development of personal and social competencies. The present study provided support for the hypothesis that perceptions of the program content and program implementers were positively associated with program effectiveness. Findings also suggested that participants’ needs and interests were satisfied, as they would join similar programs again or recommend them to their peers in the future. Taken as a whole, evaluation findings in the extension phase are highly similar to those reported in the original phase. From a triangulation point of view, data collected from different sources based on different methods generally suggest that the program is well received by different stakeholders.

The second aim of the study was to examine the relative influence of two program implementation factors (i.e., perceived program attributes and program implementers) on program effectiveness. Results of the regression analyses indicated that program content, but not program implementers, had a significant predictive effect on program effectiveness. In line with previous studies [8, 9, 37–39], clear objectives of the curriculum, provision of well-designed teaching activities, participants’ active participation, and perception of a motivating learning environment were associated with program outcomes. The findings indicate the importance of a well-planned program and of eliciting participants’ engagement for program effectiveness.

Another purpose of the study was to examine whether the relationships between program implementation factors and program evaluation outcomes would vary by students’ grade level. Consistent with a previous study [34], Secondary 1 students perceived the program more favorably than their higher grade level counterparts (i.e., Secondary 2 and 3 students). This observation might be related to the characteristics of the students. Compared to Secondary 3 students, Secondary 1 students were new to the project and may have been more interested in and motivated to learn and participate in the program activities. Also, senior students may have been more inclined to respond critically and engage in rebellious behaviors during this stressful period. Nevertheless, the differences observed were not large, and further studies examining the related phenomena are needed.

It is interesting to note the negative predictive effect of program implementers on perceived program effectiveness. Some might question whether this result was related to the program implementers’ teaching background. It is noteworthy that the program implementers of the Tier 1 Program were all experienced teachers and frontline social workers who had at least three years of experience in working with youths and had received more than 20 hours of relevant formal training. Second, a previous study [34] showed that program participants generally perceived program implementers positively (e.g., using effective and interactive teaching methods and skills, eliciting participants’ learning motivation, and displaying enthusiasm in teaching). One possible explanation of this unexpected result might be the unit of analysis: in the current study, data were aggregated at the school level, and the school means for each scale were computed and used for analysis. Clearly, it is important to examine this issue again using individual rather than aggregated data. In addition, it would be helpful to test the associations between various dimensions of program implementation and program outcomes using advanced statistical techniques. To increase the precision of measuring the effects of different program implementation factors at each level (e.g., student, classroom, and school), future research should use multilevel statistical modeling to analyze the nested data (i.e., students nested within classrooms/schools).
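As an illustration of the multilevel approach suggested above, the sketch below fits a random-intercept model to fabricated nested data; the school sizes, variable names, and effect sizes are all assumptions made for the example, not properties of the project's data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
# Fabricated nested data: 30 schools x 50 students (illustrative sizes only).
school = np.repeat(np.arange(30), 50)
school_effect = rng.normal(0, 0.2, 30)[school]           # level-2 (school) variation
content = rng.normal(4.3, 0.6, school.size)
implementers = rng.normal(4.6, 0.6, school.size)
effectiveness = (3.4 + 0.5 * (content - 4.3) + school_effect
                 + rng.normal(0, 0.4, school.size))       # level-1 (student) residual
df = pd.DataFrame({"school": school, "content": content,
                   "implementers": implementers, "effectiveness": effectiveness})

# Random-intercept model: students (level 1) nested within schools (level 2)
fit = smf.mixedlm("effectiveness ~ content + implementers",
                  df, groups=df["school"]).fit()
print(fit.summary())
```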

While the present study focused on the influence of two program implementation factors, it is possible that other factors (e.g., fidelity, adaptation, dosage, reach) also influence program effectiveness. For example, high levels of fidelity and increased cultural relevance of a program have been associated with program outcomes [6, 40, 41]. Program evaluation researchers have noted the need to develop a theoretical model that identifies how different implementation factors exert their influence on program outcomes, thus untangling their joint effects on program effectiveness. Future research should include other factors in order to depict a comprehensive picture of the complex process of effective program implementation.

Providing a positive developmental experience in early adolescence can promote individuals’ competencies and reduce negative outcomes [42]. Consistent with the Western literature, a positive youth development program appears to be a promising approach to promote personal, emotional, social, and spiritual competencies and to deter a range of problem and risky behaviors among Hong Kong adolescents [43, 44]. Understanding the factors underlying the complex program implementation process is critical if programs are to achieve their intended outcomes. The findings of the present study underscore the impact of program content and program implementers on perceived program effectiveness.

There are several limitations of this study. First, the use of self-report measures from a single perspective cannot give a full picture of subjective outcome evaluation. However, this approach is commonly used in program evaluation research [6, 7, 9, 12], and the reliability of the scales was very good. Therefore, we could argue that the findings of the study are reliable and valid. Another limitation is the cross-sectional nature of the data. Future research should collect data at several points in time and also include predictors from various contexts, such as the school and the community. In particular, in seeking to monitor the rate of change of perceived program effectiveness over time, growth curve modeling could be used to examine whether the predictive effects of the program implementation components on the shape of growth and the variability in individual trajectories would vary by the number of waves. However, doing so would require collecting anonymous personal identifiers from the students to match responses across waves. Third, aggregated data with schools as the units of analysis, rather than individual data, were used in this study. Theoretically speaking, it would be interesting to look at the differences in the findings based on these two approaches. Finally, ordinary least squares analyses were used. As structural equation modeling may give a better estimation of suppressors among the predictors, such techniques could be considered in future studies.
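The growth-curve idea mentioned in this limitation could be approximated with a random-slope mixed model over waves, as in the toy example below; the student identifiers, wave coding, and values are fabricated for illustration and do not represent the project's data or the authors' intended specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
# Fabricated long-format data: 200 students x 3 waves, matched by an anonymous id.
n, waves = 200, 3
student = np.repeat(np.arange(n), waves)
wave = np.tile(np.arange(waves), n)
intercepts = rng.normal(3.4, 0.3, n)[student]
slopes = rng.normal(0.1, 0.05, n)[student]               # individual growth rates
effectiveness = intercepts + slopes * wave + rng.normal(0, 0.2, n * waves)
long_df = pd.DataFrame({"student": student, "wave": wave,
                        "effectiveness": effectiveness})

# Random intercept and random slope for wave: a simple linear growth-curve model
fit = smf.mixedlm("effectiveness ~ wave", long_df,
                  groups=long_df["student"], re_formula="~wave").fit()
print(fit.summary())
```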

Despite the aforementioned limitations, the current study contributes to the positive youth development literature. It sheds light on which program components are associated with perceived program effectiveness. Shek et al. [45] argued that more research is needed on subjective outcome evaluation, especially in social work education. To promote the dissemination of efficacious programs, it is important to consider the characteristics of the participants. As Catalano et al. [22] put it, “if we are to discern why these (positive youth development) programs are effective, it is clear that it will be important in the future for programs to define and assess implementation methods and change strategies, and that they also evaluate the impact on youth development constructs…. and how these effects varied by subgroups” (page S94). The findings of the study attempt to address this gap in program evaluation research and provide insights for practitioners designing and implementing effective positive youth development programs for Chinese adolescents. Most importantly, in conjunction with previous findings, the present findings show that the influence of program attributes and program implementers on program effectiveness is relatively stable across different cohorts of students in Hong Kong [46–50].

References

  1. R. F. Catalano, M. W. Arthur, J. D. Hawkins, M. L. Berglund, and J. J. Olson, “Comprehensive community and school-based interventions to prevent antisocial behavior,” in Serious and Violent Juvenile Offenders, R. Loeber and D. P. Farrington, Eds., pp. 248–283, Sage, Thousand Oaks, Calif, USA, 1998.
  2. D. C. Gottfredson, Delinquency and Schools, Cambridge University Press, New York, NY, USA, 2001.
  3. A. A. Payne and R. Eckert, “The relative importance of provider, program, school, and community predictors of the implementation quality of school-based prevention programs,” Prevention Science, vol. 11, no. 2, pp. 126–141, 2010.
  4. S. J. Wilson, M. W. Lipsey, and J. H. Derzon, “The effects of school-based intervention programs on aggressive behavior: a meta-analysis,” Journal of Consulting and Clinical Psychology, vol. 71, no. 1, pp. 136–149, 2003.
  5. A. V. Dane and B. H. Schneider, “Program integrity in primary and early secondary prevention: are implementation effects out of control?” Clinical Psychology Review, vol. 18, no. 1, pp. 23–45, 1998.
  6. J. A. Durlak and E. P. DuPre, “Implementation matters: a review of research on the influence of implementation on program outcomes and the factors affecting implementation,” American Journal of Community Psychology, vol. 41, no. 3-4, pp. 327–350, 2008.
  7. L. Dusenbury, R. Brannigan, M. Falco, and W. B. Hansen, “A review of research on fidelity of implementation: implications for drug abuse prevention in school settings,” Health Education Research, vol. 18, no. 2, pp. 237–256, 2003.
  8. S. M. Blake, L. Simkin, R. Ledsky, C. Perkins, and J. M. Calabrese, “Effects of a parent-child communications intervention on young adolescents' risk for early onset of sexual intercourse,” Family Planning Perspectives, vol. 33, no. 2, pp. 52–61, 2001.
  9. C. Garvey, W. Julion, L. Fogg, A. Kratovil, and D. Gross, “Measuring participation in a prevention trial with parents of young children,” Research in Nursing and Health, vol. 29, no. 3, pp. 212–222, 2006.
  10. C. Eames, D. Daley, J. Hutchings et al., “Treatment fidelity as a predictor of behaviour change in parents attending group-based parent training,” Child: Care, Health and Development, vol. 35, no. 5, pp. 603–612, 2009.
  11. M. S. Forgatch, G. R. Patterson, and D. S. DeGarmo, “Evaluating fidelity: predictive validity for a measure of competent adherence to the Oregon model of parent management training,” Behavior Therapy, vol. 36, no. 1, pp. 3–13, 2005.
  12. G. Prado, H. Pantin, S. J. Schwartz, N. S. Lupei, and J. Szapocznik, “Predictors of engagement and retention into a parent-centered, ecodevelopmental HIV preventive intervention for Hispanic adolescents and their families,” Journal of Pediatric Psychology, vol. 31, no. 9, pp. 874–890, 2006.
  13. L. A. Rohrbach, M. Gunning, P. Sun, and S. Sussman, “The Project Towards No Drug Abuse (TND) dissemination trial: implementation fidelity and immediate outcomes,” Prevention Science, vol. 11, no. 1, pp. 77–88, 2010.
  14. D. S. Elliott and S. Mihalic, “Issues in disseminating and replicating effective prevention programs,” Prevention Science, vol. 5, no. 1, pp. 47–53, 2004.
  15. C. J. Lillehoj, K. W. Griffin, and R. Spoth, “Program provider and observer ratings of school-based preventive intervention implementation: agreement and relation to youth outcomes,” Health Education & Behavior, vol. 31, no. 2, pp. 242–257, 2004.
  16. K. Shelef, G. M. Diamond, G. S. Diamond, and H. A. Liddle, “Adolescent and parent alliance and treatment outcome in multidimensional family therapy,” Journal of Consulting and Clinical Psychology, vol. 73, no. 4, pp. 689–698, 2005.
  17. C. Berkel, A. M. Mauricio, E. Schoenfelder, and I. N. Sandler, “Putting the pieces together: an integrated model of program implementation,” Prevention Science, vol. 12, no. 1, pp. 23–33, 2011.
  18. G. J. Botvin, “Advancing prevention science and practice: challenges, critical issues, and future directions,” Prevention Science, vol. 5, no. 1, pp. 69–72, 2004.
  19. F. G. Castro, M. Barrera, and C. R. Martinez, “The cultural adaptation of prevention interventions: resolving tensions between fidelity and fit,” Prevention Science, vol. 5, no. 1, pp. 41–45, 2004.
  20. L. A. Dusenbury, R. Brannigan, W. B. Hansen, J. Walsh, and M. Falco, “Quality of implementation: developing measures crucial to understanding the diffusion of preventive interventions,” Health Education Research, vol. 20, no. 3, pp. 308–313, 2005.
  21. A. von Eye, “Developing the person-oriented approach: theory and methods of analysis,” Development and Psychopathology, vol. 22, no. 2, pp. 277–285, 2010.
  22. R. F. Catalano, L. E. Gavin, and C. M. Markham, “Future directions for positive youth development as a strategy to promote adolescent sexual and reproductive health,” Journal of Adolescent Health, vol. 46, no. 3, pp. S92–S96, 2010.
  23. D. T. L. Shek, “Construction of a positive youth development program in Hong Kong,” International Journal of Adolescent Medicine and Health, vol. 18, no. 3, pp. 299–302, 2006.
  24. D. T. L. Shek, “Using students' weekly diaries to evaluate positive youth development programs: are findings based on multiple studies consistent?” Social Indicators Research, vol. 95, no. 3, pp. 475–487, 2010.
  25. D. T. L. Shek and R. C. F. Sun, “Interim evaluation of the secondary 3 program of project P.A.T.H.S.: insights based on the experimental implementation phase,” International Journal of Public Health, vol. 1, pp. 289–300, 2009.
  26. D. T. L. Shek, C. M. S. Ma, and R. C. F. Sun, “Evaluation of a positive youth development program for adolescents with greater psychosocial needs: integrated views of program implementers,” The Scientific World Journal, vol. 10, pp. 1890–1900, 2010.
  27. D. T. L. Shek and C. S. M. Ng, “Subjective outcome evaluation of the Project P.A.T.H.S. (secondary 2 program): views of the program participants,” The Scientific World Journal, vol. 9, pp. 1012–1022, 2009.
  28. D. T. L. Shek and R. C. F. Sun, “Evaluation of Project P.A.T.H.S. (secondary 1 program) by the program participants: findings based on the full implementation phase,” Adolescence, vol. 43, no. 172, pp. 807–822, 2008.
  29. D. T. L. Shek, R. C. F. Sun, and C. W. Chan, “Evaluation of project P.A.T.H.S. (secondary 2 program) by the program participants: findings based on the experimental implementation phase,” The Scientific World Journal, vol. 8, pp. 526–535, 2008.
  30. D. T. L. Shek, C. S. M. Ng, and P. F. Tsui, “Qualitative evaluation of the project P.A.T.H.S.: findings based on focus groups,” International Journal on Disability and Human Development, vol. 9, no. 4, pp. 307–313, 2010.
  31. D. T. L. Shek and R. C. F. Sun, “Subjective outcome evaluation based on secondary data analyses: the project P.A.T.H.S. in Hong Kong,” The Scientific World Journal, vol. 10, pp. 224–237, 2010.
  32. D. T. L. Shek, “Quantitative evaluation of the training program of the project P.A.T.H.S. in Hong Kong,” International Journal of Adolescent Medicine and Health, vol. 22, no. 3, pp. 425–435, 2010.
  33. D. T. L. Shek and C. M. S. Ma, “Subjective outcome evaluation findings: factors related to the perceived effectiveness of the tier 2 program of the project P.A.T.H.S.,” The Scientific World Journal, vol. 10, pp. 250–260, 2010.
  34. D. T. L. Shek, C. M. S. Ma, and C. Y. P. Tang, “Subjective outcome evaluation of the project P.A.T.H.S.: findings based on different datasets,” International Journal on Disability and Human Development, vol. 10, pp. 249–255, 2011.
  35. D. T. L. Shek, R. C. F. Sun, and C. Y. P. Tang, “Experimental implementation of the secondary 3 program of Project P.A.T.H.S.: observations based on the co-walker scheme,” The Scientific World Journal, vol. 9, pp. 1003–1011, 2009.
  36. D. T. L. Shek, A. M. H. Siu, J. H. Y. Lui et al., “P.A.T.H.S. to adulthood: a jockey club youth enhancement scheme (evaluation manual),” Social Welfare Practice and Research Centre, The Chinese University of Hong Kong, Hong Kong, 2006.
  37. R. M. Rapee, A. Wignall, J. Sheffield et al., “Adolescents' reactions to universal and indicated prevention programs for depression: perceived stigma and consumer satisfaction,” Prevention Science, vol. 7, no. 2, pp. 167–177, 2006.
  38. P. H. Tolan, L. D. Hanish, M. M. McKay, and M. H. Dickey, “Evaluating process in child and family interventions: aggression prevention as an example,” Journal of Family Psychology, vol. 16, no. 2, pp. 220–236, 2002.
  39. N. Baydar, M. J. Reid, and C. Webster-Stratton, “The role of mental health factors and program engagement in the effectiveness of a preventive parenting program for Head Start mothers,” Child Development, vol. 74, no. 5, pp. 1433–1453, 2003.
  40. S. A. McGraw, D. E. Sellers, E. J. Stone et al., “Using process data to explain outcomes: an illustration from the Child and Adolescent Trial for Cardiovascular Health (CATCH),” Evaluation Review, vol. 20, no. 3, pp. 291–312, 1996.
  41. N. S. Ialongo, L. Werthamer, S. G. Kellam, C. H. Brown, S. Wang, and Y. Lin, “Proximal impact of two first-grade preventive interventions on the early risk behaviors for later substance abuse, depression, and antisocial behavior,” American Journal of Community Psychology, vol. 27, no. 5, pp. 599–641, 1999.
  42. J. A. Durlak, R. D. Taylor, K. Kawashima et al., “Effects of positive youth development programs on school, family, and community systems,” American Journal of Community Psychology, vol. 39, no. 3-4, pp. 269–286, 2007.
  43. D. T. L. Shek and L. Yu, “Prevention of adolescent problem behavior: longitudinal impact of the project P.A.T.H.S. in Hong Kong,” The Scientific World Journal, vol. 11, pp. 546–567, 2011.
  44. D. T. L. Shek and C. M. S. Ma, “Impact of the project P.A.T.H.S. on adolescent developmental outcomes in Hong Kong: findings based on seven waves of data,” International Journal of Adolescent Medicine and Health. In press.
  45. D. T. L. Shek, H. K. Ma, and J. Merrick, Positive Youth Development: Development of a Pioneering Program in a Chinese Context, Freund Publishing House, London, UK, 2007.
  46. D. T. L. Shek and R. C. F. Sun, “Effectiveness of the tier 1 program of project P.A.T.H.S.: findings based on three years of program implementation,” The Scientific World Journal, vol. 10, pp. 1509–1519, 2010.
  47. D. T. L. Shek, C. S. M. Ng, and P. F. Tsui, “Qualitative evaluation of the project P.A.T.H.S.: findings based on focus groups,” International Journal on Disability and Human Development, vol. 9, no. 4, pp. 307–313, 2010.
  48. D. T. L. Shek, “Using students' weekly diaries to evaluate positive youth development programs: are findings based on multiple studies consistent?” Social Indicators Research, vol. 95, no. 3, pp. 475–487, 2010.
  49. D. T. L. Shek, “Quantitative evaluation of the training program of the project P.A.T.H.S. in Hong Kong,” International Journal of Adolescent Medicine and Health, vol. 22, no. 3, pp. 425–435, 2010.
  50. D. T. L. Shek and R. C. F. Sun, “Development, implementation and evaluation of a holistic positive youth development program: project P.A.T.H.S. in Hong Kong,” International Journal on Disability and Human Development, vol. 8, no. 2, pp. 107–117, 2009.

Copyright © 2012 Daniel T. L. Shek and Cecilia M. S. Ma. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

