The Scientific World Journal, vol. 2012, Article ID 589257, 8 pages. https://doi.org/10.1100/2012/589257
Research Article | Open Access
Special Issue: Developmental Issues in Chinese Adolescents

Subjective Outcome Evaluation of the Project P.A.T.H.S. (Extension Phase) Based on the Perspective of Program Implementer

Daniel T. L. Shek, Lu Yu

Academic Editor: Joav Merrick
Received: 01 Sep 2011; Accepted: 02 Nov 2011; Published: 02 Aug 2012

Abstract

A total of 231 schools participated in the Project P.A.T.H.S. in the 2009/2010 school year. After completion of the Tier 1 Program, subjective outcome evaluation data were collected from 3,259 program implementers. Based on the consolidated data with schools as units, the results showed that participants had positive perceptions of the program, the implementers, and the benefits of the program. More than four-fifths of the implementers regarded the program as helpful to the program participants. Multiple regression analyses revealed that perceived qualities of the program and of the program implementers predicted perceived effectiveness of the program. Consistent with previous studies, perceived program content appeared to be a stronger predictor of program success than implementers' perceptions of their own performance. The present study provides additional support for the effectiveness of the Tier 1 Program of the Project P.A.T.H.S. in Hong Kong.

1. Introduction

How to prevent adolescent risk behavior, such as delinquency, drug abuse, unprotected sexual behavior, and school failure, has been a challenging issue for psychologists, educators, policy makers, and other helping professionals [1–3]. In recent years, the research paradigm in which different adolescent risk behaviors are treated as separate and independent problems has been changing. Instead, emphasis has been placed on the interconnections among various risk behaviors and their shared risk, protective, and facilitative factors. Both theoretical models and empirical studies have supported one common predictor of a wide range of risk behaviors in youth: positive youth development, or youth developmental assets [4]. Accordingly, numerous youth programs have been developed with a focus on promoting the development of core competences and adaptive features of adolescents, which can generally be subsumed under the positive youth development approach [5–7].

The positive youth development approach has been widely adopted in designing programs for adolescents in the West [8]. However, such programs are rarely developed and carried out in Asian countries, especially in different Chinese communities [9]. In view of this situation, Shek and researchers from five universities in Hong Kong developed a large-scale positive youth development program entitled the Project P.A.T.H.S. (Positive Adolescent Training through Holistic Social Programmes) to promote healthy development in Hong Kong adolescents and to prevent various youth risk behaviors [10, 11]. Funded by The Hong Kong Jockey Club Charities Trust since 2005 (with funding of HK$400 million in the initial phase and HK$350 million in the extension phase), the project has been implemented in about half of the secondary schools in Hong Kong for seven consecutive years. The project comprises two tiers of programs. The Tier 1 Program is a universal positive youth development program provided for Secondary 1 to 3 students in Hong Kong. The Tier 2 Program takes a selective approach, targeting roughly one-fifth of the Tier 1 Program participants who have greater psychosocial needs.

As the Project P.A.T.H.S. has been implemented on a large scale among Hong Kong adolescents, one important question that must be asked is how effective the project is. To answer this question, systematic evaluation of the program is necessary. Since the launch of the program, numerous evaluation studies have been carried out using a variety of evaluative strategies, including objective outcome evaluation, subjective outcome evaluation, focus group interviews, case studies, direct observation, and a longitudinal randomized controlled trial [12, 13]. Findings from these evaluation studies over the past seven years have generally shown positive effects of the Project P.A.T.H.S. in promoting different competences and developmental assets and in preventing various risk behaviors in the program participants [14–17]. For example, based on eight waves of data collected over five consecutive years, Shek and colleagues reported that students who had participated in the Project P.A.T.H.S. showed better developmental outcomes than did students in a randomized control group, in terms of both positive youth development indicators (e.g., resilience, moral competence, and prosocial involvement) and different risk behaviors such as substance abuse and delinquent behaviors [18, 19].

While objective outcome evaluation, particularly the randomized controlled trial, is considered the "gold standard" for assessing program effectiveness, subjective outcome evaluation has several unique advantages in program evaluation [20–22]. First, compared to objective outcome evaluation, subjective outcome evaluation provides a way to discover different stakeholders' opinions and subjective experiences of the program. Second, subjective outcome evaluation offers immediate and important information about the implementation of a program before its effects on objective indicators can be observed. Third, subjective outcome evaluation is a more cost-effective evaluative method than objective outcome evaluation. Fourth, subjective outcome evaluation by program implementers contains valuable messages about problems and difficulties encountered in program implementation, which can contribute to the improvement of the program in the future. In evaluating the Project P.A.T.H.S., subjective outcome evaluation was conducted among both program participants and program implementers to obtain a comprehensive picture of different stakeholders' views towards the project [23].

Although very encouraging evaluation findings have been reported for the initial phase of the project, it is important to know whether similarly positive findings could be found for the extension phase. Against this background, this paper reports subjective outcome evaluation findings based on the perspectives of program workers who implemented the Tier 1 Program in the 2009/2010 school year. In addition, instructors' perceptions of the program, their own performance, and the effectiveness of the project were contrasted across grade levels to examine whether program workers at different grades held different views about the program. Previous findings suggested that instructors who taught the curriculum in the lower forms had more positive perceptions than did instructors teaching the program in the higher forms. As such, it was hypothesized that a similar grade effect on program implementers' subjective evaluation would be observed in the present sample. Furthermore, the relationships among program implementers' views towards the program, perceptions of the instructor, and the overall effectiveness of the program were examined to gain a further understanding of the critical factors that influence perceived program effectiveness. Based on prior findings, it was hypothesized that program implementers' perceived program quality and their perceptions of their own performance would significantly predict their subjective evaluation of program effectiveness.

2. Methods

2.1. Participants and Procedures

A total of 231 schools joined the Project P.A.T.H.S. in the fourth year of the full implementation phase in the 2009/2010 school year (i.e., the first year of the extension phase), with 219, 185, and 173 schools at the Secondary 1, Secondary 2, and Secondary 3 levels, respectively. The mean number of students per school was 154.36 (range: 6 to 240), with an average of 4.50 classes per school (range: 1 to 12). Among them, 32.24% of the respondent schools adopted the full program (i.e., the 20-hour program involving 40 units), whereas 67.76% adopted the core program (i.e., the 10-hour program involving 20 units). The mean number of sessions used to implement the program was 28.54 (range: 2 to 48). While 47.31% of the respondent schools incorporated the program into the formal curriculum (e.g., Liberal Studies, Life Education), 52.69% used other modes (e.g., form teachers' periods and other combinations) to implement the program. The mean numbers of social workers and teachers implementing the program per school per form were 1.71 (range: 0 to 7) and 5.11 (range: 0 to 27), respectively.

After the Tier 1 Program was completed, the implementers were invited to respond to a Subjective Outcome Evaluation Form (Form B) developed by the first author [24]. In the 2009/2010 school year, a total of 3,259 questionnaires were completed. To facilitate the program evaluation, the Research Team developed an evaluation manual with standardized instructions for collecting the subjective outcome evaluation data [24]. In addition, adequate training was provided to the implementers during the 20-hour training workshops on how to collect and analyze the data gathered via Form B.

2.2. Instruments

The Subjective Outcome Evaluation Form (Form B) was used in the present study, including the following parts:
(i) Program implementers' perceptions of the program, such as program objectives, design, classroom atmosphere, interaction among the students, and the respondents' participation during class (10 items).
(ii) Program implementers' perceptions of their own practice, including their understanding of the course, teaching skills, professional attitude, involvement, and interaction with the students (10 items).
(iii) Implementers' perceptions of the effectiveness of the program on students, such as promotion of different psychosocial competencies, resilience, and overall personal development (16 items).
(iv) The extent to which the implementers would recommend the program to other students with similar needs (1 item).
(v) The extent to which the implementers would teach similar programs in the future (1 item).
(vi) The extent to which the program implementation has helped the implementers' professional growth (1 item).
(vii) Things that the implementers obtained from the program (open-ended question).
(viii) Things that the implementers appreciated most (open-ended question).
(ix) Difficulties encountered (open-ended question).
(x) Areas that require improvement (open-ended question).

For the quantitative data, the program workers who collected the data were requested to enter them into an Excel file developed by the Research Team, which automatically computed the frequencies and percentages associated with the different ratings for each item. When the schools submitted the hard copies of the reports, they were also requested to submit soft copies of the consolidated data sheets. After the funding body received the consolidated data, the research team aggregated them to "reconstruct" the overall profile based on the subjective outcome evaluation data. It should be noted that although both qualitative and quantitative data were collected, the present paper focuses only on the quantitative reports. Qualitative findings will be reported elsewhere.

2.3. Data Analysis

Percentage data were examined using descriptive statistics. A composite measure for each factor (i.e., perceived qualities of the program content, perceived qualities of the program implementers, and perceived program effectiveness) was created by dividing the total score of each scale by the number of items. Pearson correlation analysis was used to examine whether the perceived program content and program implementers were related to perceived program effectiveness. To compare program implementers' evaluations across grades, several one-way ANOVAs were conducted with the three subscale scores as the dependent variables and grade as the independent variable. Hierarchical linear regression analyses were further performed to examine the relationships between the different aspects of implementers' evaluations of the project and program effectiveness. All analyses were performed using the Statistical Package for the Social Sciences (SPSS) Version 17.0.
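As an illustrative sketch (not the authors' SPSS workflow), the composite scoring and correlation steps can be expressed in NumPy; the school-level item data below are simulated, and the coupling between the two scales is an arbitrary assumption for demonstration:

```python
import numpy as np

rng = np.random.default_rng(42)
n_schools = 231

# Simulated per-school mean ratings: 10 program-content items and
# 16 effectiveness items, loosely coupled so the composites correlate
content_items = rng.uniform(3.5, 5.5, size=(n_schools, 10))
effect_items = (0.4 * content_items.mean(axis=1, keepdims=True)
                + rng.uniform(0.5, 2.0, size=(n_schools, 16)))

# Composite measure: total score of each scale divided by its number of items
content = content_items.sum(axis=1) / content_items.shape[1]
effectiveness = effect_items.sum(axis=1) / effect_items.shape[1]

# Pearson correlation between the two composites
r = np.corrcoef(content, effectiveness)[0, 1]
```

With real data, `content_items` would hold each school's consolidated ratings rather than draws from a random generator.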

3. Results

The quantitative findings based on the closed-ended questions are presented in this paper. Several observations can be highlighted. First, the participants generally had positive perceptions of the program (Table 1), including clear objectives of the curriculum (94.80%), well-planned teaching activities (90.51%), and a very pleasant classroom atmosphere (87.88%). Second, a high proportion of the implementers evaluated their own performance positively (Table 2). For example, 98.18% of the implementers perceived that they were ready to help their students; 97.43% expressed that they cared for the students; 96.35% believed that they had good professional attitudes. Third, as shown in Table 3, many implementers perceived that the program promoted the development of students, including their resilience (91.84%), social competence (93.73%), life reflection (91.61%), and overall development (93.16%). Fourth, 89.73% of the implementers would recommend the program to students with similar needs. Fifth, 83.36% expressed that they would teach similar courses again in the future. Finally, 84.37% of the respondents indicated that the program had contributed to their professional development.


Respondents with positive responses (options 4–6)

Item | S1 n (%) | S2 n (%) | S3 n (%) | Overall N (%)
(1) The objectives of the curriculum are very clear. | 1214 (95.37) | 979 (94.86) | 887 (93.96) | 3080 (94.80)
(2) The design of the curriculum is very good. | 1125 (88.37) | 873 (84.68) | 803 (85.06) | 2801 (86.24)
(3) The activities were carefully planned. | 1160 (91.19) | 924 (89.71) | 854 (90.47) | 2938 (90.51)
(4) The classroom atmosphere was very pleasant. | 1156 (90.95) | 897 (87.17) | 796 (84.50) | 2849 (87.88)
(5) There was much peer interaction amongst the students. | 1127 (88.74) | 875 (84.87) | 776 (82.73) | 2778 (85.77)
(6) Students participated actively during lessons (including discussions, sharing, games, etc.). | 1121 (88.06) | 866 (84.00) | 755 (80.15) | 2742 (84.47)
(7) The program has a strong and sound theoretical support. | 1090 (85.69) | 889 (86.06) | 812 (86.11) | 2791 (85.93)
(8) The teaching experience I encountered enhanced my interest in the course. | 1051 (82.76) | 826 (80.19) | 751 (79.64) | 2628 (81.04)
(9) Overall speaking, I have very positive evaluation of the program. | 1071 (84.26) | 839 (81.30) | 756 (80.08) | 2666 (82.11)
(10) On the whole, students like this curriculum very much. | 1086 (85.38) | 816 (79.15) | 729 (77.31) | 2631 (81.05)

Note: all items are on a 6-point Likert scale with 1 = strongly disagree, 2 = disagree, 3 = slightly disagree, 4 = slightly agree, 5 = agree, and 6 = strongly agree. Only respondents with positive responses (options 4–6) are shown in the table. S1: Secondary 1 level; S2: Secondary 2 level; S3: Secondary 3 level.

Respondents with positive responses (options 4–6)

Item | S1 n (%) | S2 n (%) | S3 n (%) | Overall N (%)
(1) I have a good mastery of the curriculum. | 1142 (90.21) | 897 (87.34) | 825 (87.77) | 2864 (88.59)
(2) I prepared well for the lessons. | 1143 (90.71) | 919 (89.57) | 835 (89.11) | 2897 (89.89)
(3) My teaching skills were good. | 1153 (91.51) | 921 (89.59) | 832 (88.79) | 2906 (90.11)
(4) I have good professional attitudes. | 1220 (96.60) | 993 (96.69) | 898 (95.63) | 3111 (96.35)
(5) I was very involved. | 1194 (94.46) | 965 (93.87) | 863 (91.81) | 3022 (93.50)
(6) I gained a lot during the course of instruction. | 1083 (86.02) | 859 (83.72) | 786 (83.71) | 2728 (84.62)
(7) I cared for the students. | 1237 (97.63) | 1002 (97.47) | 913 (97.13) | 3152 (97.43)
(8) I was ready to offer help to students when needed. | 1247 (98.42) | 1010 (98.25) | 919 (97.77) | 3176 (98.18)
(9) I had much interaction with the students. | 1200 (94.79) | 964 (93.87) | 875 (93.09) | 3039 (94.00)
(10) Overall speaking, I have very positive evaluation of myself as an instructor. | 1228 (96.92) | 981 (95.43) | 899 (95.64) | 3108 (96.07)

Note: all items are on a 6-point Likert scale with 1 = strongly disagree, 2 = disagree, 3 = slightly disagree, 4 = slightly agree, 5 = agree, and 6 = strongly agree. Only respondents with positive responses (Options 4–6) are shown in the table. S1: Secondary 1 level; S2: Secondary 2 level; S3: Secondary 3 level.

The extent to which the Tier 1 Program (i.e., the program in which all students have joined) has helped your students: respondents with positive responses (options 3–5)

Item | S1 n (%) | S2 n (%) | S3 n (%) | Overall N (%)
(1) It has strengthened students’ bonding with teachers, classmates, and their families. | 1163 (91.79) | 932 (90.57) | 846 (90.00) | 2941 (90.88)
(2) It has strengthened students’ resilience in adverse conditions. | 1127 (89.02) | 905 (87.78) | 829 (88.19) | 2936 (91.84)
(3) It has enhanced students’ social competence. | 1185 (93.53) | 941 (91.63) | 854 (90.95) | 2541 (93.73)
(4) It has improved students’ ability in handling. | 1164 (91.94) | 934 (90.77) | 842 (89.77) | 2108 (88.83)
(5) It has enhanced students’ cognitive competence. | 1118 (88.38) | 887 (86.03) | 831 (88.50) | 1964 (72.74)
(6) Students’ ability to resist harmful influences has been improved. | 1126 (89.08) | 899 (87.37) | 814 (86.69) | 2004 (64.79)
(7) It has strengthened students’ ability to distinguish between the good and the bad. | 1181 (93.21) | 948 (91.95) | 856 (91.36) | 2463 (76.37)
(8) It has increased students’ competence in making sensible and wise choices. | 1152 (91.07) | 915 (88.92) | 838 (89.15) | 2938 (90.79)
(9) It has helped students to have life reflections. | 1104 (87.48) | 909 (88.34) | 839 (89.26) | 2917 (91.61)
(10) It has reinforced students’ self-confidence. | 1063 (83.97) | 835 (81.07) | 769 (81.81) | 2389 (86.31)
(11) It has increased students’ self-awareness. | 1179 (93.87) | 937 (90.97) | 864 (91.91) | 2243 (90.52)
(12) It has helped students to face the future with a positive attitude. | 1091 (86.72) | 884 (85.91) | 818 (87.11) | 1916 (74.35)
(13) It has helped students to cultivate compassion and care about others. | 1109 (88.23) | 903 (87.84) | 823 (87.55) | 1995 (63.58)
(14) It has encouraged students to care about the community. | 1032 (82.23) | 846 (82.14) | 765 (81.30) | 2205 (68.46)
(15) It has promoted students’ sense of responsibility in serving the society. | 1030 (81.94) | 843 (81.77) | 779 (82.78) | 2712 (84.07)
(16) It has enriched the overall development of the students. | 1189 (94.67) | 953 (92.43) | 877 (93.20) | 3044 (95.27)

Note: all items are on a 5-point Likert scale with 1 = unhelpful, 2 = not very helpful, 3 = slightly helpful, 4 = helpful, and 5 = very helpful. Only respondents with positive responses (Options 3–5) are shown in the table. S1: Secondary 1 level; S2: Secondary 2 level; S3: Secondary 3 level.

Reliability analysis with the schools as the unit of analysis showed that Form B was internally consistent (Table 4): 10 items related to the program (α = .95), 10 items related to the implementers (α = .94), 16 items related to the benefits (α = .98), and the overall 36 items measuring program effectiveness (α = .98). Results of correlation analyses showed that both program content (r = .79, P < .01) and program implementers (r = .65, P < .01) were strongly associated with program effectiveness.


Scale | S1: M (SD), α (mean#) | S2: M (SD), α (mean#) | S3: M (SD), α (mean#) | Overall: M (SD), α (mean#)
Program content (10 items) | 4.47 (.42), .95 (.65) | 4.39 (.47), .95 (.66) | 4.36 (.53), .95 (.67) | 4.41 (.48), .95 (.66)
Program implementers (10 items) | 4.68 (.31), .93 (.59) | 4.62 (.36), .94 (.63) | 4.63 (.40), .96 (.69) | 4.65 (.36), .94 (.64)
Program effectiveness (16 items) | 3.44 (.40), .97 (.71) | 3.41 (.43), .98 (.71) | 3.41 (.44), .98 (.73) | 3.42 (.42), .98 (.71)
Total effectiveness (36 items) | 4.07 (.34), .98 (.52) | 4.02 (.39), .98 (.57) | 4.01 (.42), .98 (.60) | 4.04 (.38), .98 (.56)

# Mean inter-item correlations.
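The internal consistencies reported above can in principle be computed with Cronbach's alpha. A minimal sketch (the item responses here are simulated around one common factor, not the study data):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

rng = np.random.default_rng(0)
# Simulated ratings for 10 program-content items from 300 respondents,
# driven by a common factor so the scale is internally consistent
common = rng.normal(4.5, 0.5, size=(300, 1))
items = common + rng.normal(0, 0.5, size=(300, 10))
alpha = cronbach_alpha(items)
```

The mean inter-item correlation reported in the table is a complementary statistic: alpha rises with both the number of items and their average intercorrelation.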

To examine differences in the perceived variables (i.e., program content, program implementers, and program effectiveness) across grade levels, several one-way ANOVAs were performed with the perceived variables as dependent variables and grade level (i.e., Secondary 1 to 3) as the independent variable. A significant result was found only for program content, F(2, 574) = 3.77, P = .02. Post hoc analysis using Tukey's procedure with Bonferroni adjustment (i.e., P = .02) revealed a significant difference between Secondary 1 (M = 4.47) and Secondary 3 (M = 4.36) participants (P = .03), with the Secondary 1 program perceived more favorably than the Secondary 3 program.
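For illustration, the one-way ANOVA comparing grade levels can be sketched with a minimal NumPy implementation (simulated school-level scores using the reported group sizes and Table 4 means, not the authors' SPSS analysis; the Tukey post hoc step is omitted):

```python
import numpy as np

def one_way_anova(groups):
    """Return the F statistic and degrees of freedom for a one-way ANOVA."""
    all_scores = np.concatenate(groups)
    grand_mean = all_scores.mean()
    # Between-group and within-group sums of squares
    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    df_between = len(groups) - 1
    df_within = len(all_scores) - len(groups)
    f_stat = (ss_between / df_between) / (ss_within / df_within)
    return f_stat, df_between, df_within

rng = np.random.default_rng(1)
# Simulated mean program-content ratings per school for S1-S3
# (219, 185, and 173 schools, echoing the reported grade pattern)
s1 = rng.normal(4.47, 0.42, 219)
s2 = rng.normal(4.39, 0.47, 185)
s3 = rng.normal(4.36, 0.53, 173)
f_stat, df_b, df_w = one_way_anova([s1, s2, s3])
```

Note that with 219 + 185 + 173 = 577 schools in three groups, the within-group degrees of freedom are 574, matching the reported F(2, 574).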

Multiple regression analyses were performed both on the whole sample and on the responses for each grade separately. Table 5 presents the findings. Overall, more positive views towards the program and the program implementers predicted higher perceived program effectiveness (P < .01). The prediction of program effectiveness was stronger for perceptions of the program (β = .70) than for views towards the implementers (β = .13). The model explained 63% of the variance in program effectiveness. Across grades, the pattern of relationships and the amount of variance in program effectiveness explained by the two predictors were very similar. While views towards the program content consistently predicted program effectiveness across grades, the relationship between views towards the implementers and program effectiveness was significant only for the analyses based on the Secondary 2 respondents.


Grade | Program content β^a | Program implementers β^a | R | R²
S1 | .71** | .09 | .77 | .59
S2 | .65** | .20** | .80 | .64
S3 | .74** | .09 | .82 | .67
Overall | .70** | .13** | .79 | .63

^a Standardized coefficients.
*P < .05; **P < .01.
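The standardized coefficients in such a table come from regressing perceived effectiveness on the two composites. A hedged sketch of how standardized betas can be computed (the data below are simulated with an assumed dependence structure, not the study's data):

```python
import numpy as np

def standardized_betas(X, y):
    """OLS on z-scored variables; returns standardized coefficients and R^2."""
    z = lambda a: (a - a.mean(axis=0)) / a.std(axis=0, ddof=1)
    Xz, yz = z(X), z(y)
    beta, *_ = np.linalg.lstsq(Xz, yz, rcond=None)
    residual = yz - Xz @ beta
    r_squared = 1 - (residual ** 2).sum() / (yz ** 2).sum()
    return beta, r_squared

rng = np.random.default_rng(7)
n = 231  # schools as the unit of analysis
content = rng.normal(4.41, 0.48, n)
# Assumed coupling: implementer ratings partly track content ratings
implementers = 0.5 * content + rng.normal(2.4, 0.30, n)
# Assumed: content contributes more to effectiveness than implementers
effectiveness = 0.7 * content + 0.1 * implementers + rng.normal(0, 0.25, n)

X = np.column_stack([content, implementers])
beta, r2 = standardized_betas(X, effectiveness)
```

Because the predictors are correlated, the beta for implementers shrinks once content is in the model, mirroring the pattern discussed in the paper.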

4. Discussion

The present study investigated the subjective outcome evaluation by program workers who implemented the Tier 1 Program of the Project P.A.T.H.S. in the 2009/2010 academic year. The findings showed that program implementers generally held positive views towards the program and the instructors and perceived the program as effective in promoting the healthy development of the participants. Program implementers' perceptions of the program and the instructor significantly predicted their subjective evaluation of program effectiveness, with views towards the program content being a stronger predictor than views towards the instructors. Moreover, these findings held true for participants from different grade levels. There are three unique features of this study. First, the sample size was quite large; it is very rare in the literature to see such a large number of program implementers participate in an outcome evaluation. Second, a validated measure of subjective outcome evaluation was used. Third, as there are few studies on the evaluation of positive youth development programs in general, particularly among Chinese populations, the present study is an important addition to the literature.

Overall, more than 80% of the participating program implementers evaluated different aspects of the program content positively, including the curriculum design, strong theoretical support, pleasant classroom atmosphere, and active participation of the students. In particular, more than 90% of the instructors agreed that the objectives of the curriculum were very clear and the activities were carefully planned. Explicit learning objectives with respect to the required skills and a variety of instructional activities to facilitate learning are two critical components of outcome-based education, which embraces the notion that learners are accountable for their own achievements and represents a current mainstream approach in education [25–28]. The finding that these two items received the highest subjective evaluations from teachers suggests that the outcome-based approach has been well incorporated into the implementation of the Project P.A.T.H.S. Other opinions from teachers, such as "there was much peer interaction amongst the students" and "on the whole, students like this curriculum very much," provide further support for the success of using this approach to deliver the Project P.A.T.H.S. to Hong Kong students.

Program implementers also viewed their own performance in teaching the program favorably, in terms of mastery of the curriculum, preparedness, teaching skills, attitudes towards the course and students, personal gains, interaction with students, and general evaluation of themselves as instructors of the program. While a self-fulfilling prophecy may explain these findings, it is noteworthy that this observation is consistent with previous findings that the students also perceived the instructors in a favorable light [12, 29], supporting the validity of the present finding.

With respect to the perceived effectiveness of the program, program implementers regarded the program as having promoted positive development in the participating students in multiple areas. For example, more than 90% of the instructors agreed that the project had enhanced students' bonding with others, resilience in adverse conditions, social competence, ability to make sensible and wise choices, and overall development. Students who attended the program were evaluated as having more life reflection and self-awareness. These findings are consistent with previous results based on other evaluation methods regarding the effectiveness of the Project P.A.T.H.S., such as the objective outcome evaluation and the subjective evaluation by students [29, 30].

Program implementers' subjective outcome evaluations were also compared across grades. No significant grade differences were detected in program implementers' views about their own performance or the perceived effectiveness of the program, which suggests that program implementers at different grade levels had similarly favorable views towards the instructor and the program effectiveness. However, the program content was evaluated more positively by Secondary 1 implementers than by Secondary 3 implementers. Similar findings were noted in previous studies. While the curriculum designed for each grade has different content, including various activities and topics for discussion, the basic framework of the course, which consists of eight core positive youth development constructs, is the same across grades. Therefore, the Secondary 1 program may be perceived as fresher and more attractive to teachers than the Secondary 3 program. In addition, junior-grade students may also show more interest and better involvement in the course than senior students, who have attended the program since they entered secondary school. This finding provides some insights for future curriculum design: more novel units and topics especially suitable for senior secondary students could be developed and incorporated into the curriculum. Despite the grade difference, program implementers at the Secondary 3 level still reported favorable views towards the curriculum, with more than three-fourths of the participants evaluating different aspects of the program content positively, which suggests that the curriculum is generally well received by the instructors.

Results of the regression analyses suggest that, for the whole sample, both the perceived program and the perceived instructors significantly predicted the perceived effectiveness of the program, supporting the critical roles of program quality and implementers in program success. However, when the data for each grade were analyzed separately, program workers' subjective evaluation of the program quality consistently predicted perceived program effectiveness across grades, whereas the effect of views about the instructors' performance was significant only for the Secondary 2 respondents. Apparently, program workers' evaluation of the program content was a stronger predictor than their evaluation of the instructors' performance. Similar findings were reported in Shek et al.'s paper [31]. While a variety of factors at different ecological levels have been found to affect the implementation of a program, high program quality has always been considered the first prerequisite for program success [32, 33]. Without a well-designed curriculum from the very beginning, a program is unlikely to produce desirable outcomes in its participants, even with excellent program staff, highly motivated students, and a supportive administrative environment. In fact, it is likely that the quality of the program and the quality of the implementers interactively affect the effectiveness of a program. For example, good curriculum content often increases instructors' interest and motivation to teach the course [34], so the instructors may spend more time in preparation, show more passion in their teaching, and deliver the content more effectively. Therefore, when the effects of program content were controlled, the predictive power of program instructors' performance on program effectiveness decreased. Another possibility is that this finding is a statistical artifact, as the range of scores for the evaluation of instructors was not wide.
Future studies may focus on examining the interactive effects between program content and program implementers to identify more fundamental factors that determine program success. In addition, it is unclear why, in the present study, the evaluation of instructors predicted program effectiveness only for the Secondary 2 respondents but not for those at the Secondary 1 and 3 levels. This finding is inconsistent with a previous report [31] and with the literature, in which the critical role of program implementers in program success is consistently highlighted [13, 21]. Obviously, replication studies are needed. In particular, grade differences in the effect of program implementers' performance on program effectiveness should be further explored.

There are several limitations of the present study that should be acknowledged. First, the data were collected via self-report, which may be biased by the implementers' personal attitudes and perceptions towards the program. Several measures were taken to reduce this potential bias. First, program implementers responded to the questionnaire anonymously, and confidentiality was repeatedly assured. Second, the questionnaire contained no threatening questions that might elicit respondents' feelings of role conflict or social desirability. Third, participants were encouraged to report their negative views or feelings candidly, and open-ended questions allowed the teachers to record suggestions on how to improve the program. Despite these measures, the present findings should be interpreted with caution, and evaluative studies using other approaches, such as objective outcome evaluation based on developmental indicators, program participants' subjective evaluation, and process evaluation, must be conducted for the purpose of triangulation. The second limitation is that only two general indicators, program quality and program implementers' performance, were used to predict the overall effectiveness of the project, which makes it impossible to identify the specific aspects that are particularly important for program success. Moreover, different factors may increase or decrease the program effects in different areas. For example, good performance by the teacher may have particular effects in strengthening students' bonding with teachers. Future studies may include different indicators of program content, implementers, and program effectiveness in the prediction model. Thirdly, previous studies have revealed that school and organizational characteristics influence program effectiveness and implementation quality [34–36]. These contextual factors should be considered in further research.
Finally, as the present findings were "reconstructed" from the evaluation reports submitted by the agencies, the unit of analysis was the school rather than the individual. Individual variation was therefore lost in the process, which may lower the power of the statistical analyses. Despite these limitations, the present study constitutes an important addition to the current literature on the effectiveness of the Project P.A.T.H.S. in promoting positive youth development in Hong Kong adolescents.

Acknowledgment

The preparation for this paper and the Project P.A.T.H.S. were financially supported by The Hong Kong Jockey Club Charities Trust.

References

  1. Substance Abuse and Mental Health Services Administration, Results from the 2009 National Survey on Drug Use and Health: Volume I. Summary of National Findings, Office of Applied Studies, Rockville, Md, USA, 2010.
  2. I. Arteaga, C. C. Chen, and A. J. Reynolds, “Childhood predictors of adult substance abuse,” Children and Youth Services Review, vol. 32, no. 8, pp. 1108–1120, 2010.
  3. A. Adesola, “Time to act,” Drug Salvation Force, vol. 2, p. 49, 1998.
  4. K. J. Pittman, M. Irby, J. Tolman, N. Yohalem, and T. Ferber, “Preventing problems, promoting development, encouraging engagement: competing priorities or inseparable goals?” Forum for Youth Investment, Impact Strategies, Washington, DC, USA, 2003, http://www.forumfyi.org/.
  5. S. J. Wilson and M. Lipsey, “The effects of school-based social information processing interventions on aggressive behavior, part I: universal programs,” Campbell Collaboration Systematic Review, vol. 5, 2006.
  6. R. E. Tremblay, L. Pagani-Kurtz, L. C. Masse, F. Vitaro, and R. O. Pihl, “A bi-modal preventive intervention for disruptive kindergarten boys: its impact through mid-adolescence,” Journal of Consulting and Clinical Psychology, vol. 63, no. 4, pp. 560–568, 1995.
  7. J. D. Hawkins, E. C. Brown, S. Oesterle, M. W. Arthur, R. D. Abbott, and R. F. Catalano, “Early effects of communities that care on targeted risks and initiation of delinquent behavior and substance use,” Journal of Adolescent Health, vol. 43, no. 1, pp. 15–22, 2008.
  8. R. F. Catalano, M. L. Berglund, J. A. M. Ryan, H. S. Lonczak, and J. D. Hawkins, “Positive youth development in the United States: research findings on evaluations of positive youth development programs,” Prevention and Treatment, vol. 25, no. 1, pp. 1–111, 2002.
  9. D. T. L. Shek and L. Yu, “A review of validated youth prevention and positive youth development programmes in Asia,” International Journal of Adolescent Medicine and Health, vol. 23, no. 4, pp. 317–324, 2011.
  10. D. T. L. Shek and H. K. Ma, Positive Youth Development: Development of a Pioneering Program in a Chinese Context, Freund Publishing Company, London, UK, 2002.
  11. D. T. L. Shek, “Conceptual framework underlying the development of a positive youth development program in Hong Kong,” International Journal of Adolescent Medicine and Health, vol. 18, no. 3, pp. 303–314, 2006.
  12. D. T. L. Shek, “Objective and subjective outcome evaluation of project P.A.T.H.S.: first year evaluation findings,” International Public Health Journal, vol. 1, pp. 245–254, 2009.
  13. D. T. L. Shek, L. Yu, and V. Y. T. Ho, “Implementation of the secondary 2 program of the project P.A.T.H.S.: observations based on the co-walker scheme,” International Journal of Adolescent Medicine and Health. In press.
  14. D. T. L. Shek, “Effectiveness of the tier 1 program of project P.A.T.H.S.: findings based on the first 2 years of program implementation,” TheScientificWorldJOURNAL, vol. 9, pp. 539–547, 2009.
  15. D. T. L. Shek and R. C. F. Sun, “Effectiveness of the tier 1 program of project P.A.T.H.S.: findings based on three years of program implementation,” TheScientificWorldJOURNAL, vol. 10, pp. 1509–1519, 2010.
  16. D. T. L. Shek and C. M. S. Ma, “Impact of the project P.A.T.H.S. in the junior secondary school years: individual growth curve analyses,” TheScientificWorldJOURNAL, vol. 11, pp. 253–266, 2011.
  17. D. T. L. Shek and L. Yu, “Prevention of adolescent problem behavior: longitudinal impact of the project P.A.T.H.S. in Hong Kong,” TheScientificWorldJOURNAL, vol. 11, pp. 546–567, 2011.
  18. D. T. L. Shek and L. Yu, “Longitudinal impact of the project P.A.T.H.S. on adolescent risk behavior: what happened after five years,” TheScientificWorldJOURNAL. In press.
  19. D. T. L. Shek and C. M. S. Ma, “Impact of the project P.A.T.H.S. in the junior secondary school years: objective outcome evaluation based on eight waves of longitudinal data,” TheScientificWorldJOURNAL. In press.
  20. H. R. Winefield and J. A. Barlow, “Client and worker satisfaction in a child protection agency,” Child Abuse & Neglect, vol. 19, no. 8, pp. 897–905, 1995.
  21. D. Peterson and F. A. Esbensen, “The outlook is G.R.E.A.T.: what educators say about school-based prevention and the gang resistance education and training (G.R.E.A.T.) program,” Evaluation Review, vol. 28, no. 3, pp. 218–245, 2004.
  22. L. M. Najavits, F. Ghinassi, A. van Horn et al., “Therapist satisfaction with four manual-based treatments on a national multisite trial: an exploratory study,” Psychotherapy, vol. 41, no. 1, pp. 26–37, 2004.
  23. D. T. L. Shek, A. M. H. Siu, and Y. L. Tak, “Subjective outcome evaluation of the project P.A.T.H.S.: findings based on the perspective of the program implementers,” TheScientificWorldJOURNAL, vol. 7, pp. 195–203, 2007.
  24. D. T. L. Shek, A. M. H. Siu, J. Lui, and W. M. D. Lung, “P.A.T.H.S. to adulthood: a jockey club youth enhancement scheme (evaluation manual),” The Social Welfare Practice and Research Centre, Chinese University of Hong Kong, Hong Kong, China, 2006.
  25. H. Van der Horst and R. McDonald, OBE, Outcomes-Based Education: A Teacher's Manual, Kagiso Publishers, Pretoria, South Africa, 1997.
  26. C. A. Capper and M. T. J. Jamison, “Outcome-based education re-examined: from structural functionalism to post structuralism,” Educational Policy, vol. 7, no. 4, pp. 427–446, 1993.
  27. D. Conradie, “Outcome-based education (OBE). What is it?” Environmental Protection Bulletin, vol. 13, pp. 8–11, 1997.
  28. C. Olivier, “Outcome-based education and training programmes: processes, knowledge, skills,” OBET Production, Ifafi, South Africa, 1997.
  29. D. T. L. Shek and C. S. M. Ng, “Subjective outcome evaluation of the project P.A.T.H.S. (secondary 2 program): views of the program participants,” TheScientificWorldJOURNAL, vol. 9, pp. 1012–1022, 2009.
  30. D. T. L. Shek, “Objective outcome evaluation of the project P.A.T.H.S. in Hong Kong: findings based on individual growth curve models,” TheScientificWorldJOURNAL, vol. 10, pp. 182–191, 2010.
  31. D. T. L. Shek, C. M. S. Ma, and C. Y. P. Tang, “Subjective outcome evaluation of the project P.A.T.H.S.: findings based on different datasets,” International Journal of Adolescent Medicine and Health, vol. 23, no. 3, pp. 237–243, 2011.
  32. B. L. Riley, S. M. Taylor, and S. J. Elliott, “Determinants of implementing heart healthy promotion activities in Ontario public health units: a social ecological perspective,” Health Education Research, vol. 16, no. 4, pp. 425–441, 2001.
  33. M. C. Shediac-Rizkallah and L. R. Bone, “Planning for the sustainability of community-based health programs: conceptual frameworks and future directions for research, practice and policy,” Health Education Research, vol. 13, no. 1, pp. 87–108, 1998.
  34. R. P. Weissberg and M. U. O'Brien, “What works in school-based social and emotional learning programs for positive youth development,” The Annals of the American Academy of Political and Social Science, vol. 591, pp. 86–97, 2004.
  35. A. A. Payne, D. C. Gottfredson, and G. D. Gottfredson, “School predictors of the intensity of implementation of school-based prevention programs: results from a national study,” Prevention Science, vol. 7, no. 2, pp. 225–237, 2006.
  36. D. S. Elliott and S. Mihalic, “Issues in disseminating and replicating effective prevention programs,” Prevention Science, vol. 5, no. 1, pp. 47–53, 2004.

Copyright © 2012 Daniel T. L. Shek and Lu Yu. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
