Retracted

This article has been retracted because it was found to contain a substantial amount of unreferenced material from the published article titled “Process Evaluation of a Positive Youth Development Program: Project P.A.T.H.S.,” by Ben M. F. Law and Daniel T. L. Shek, Research on Social Work Practice, vol. 21, no. 5, pp. 539–548, 2011.


References

  1. B. M. F. Law and D. T. L. Shek, “Process evaluation of a positive youth development program in Hong Kong based on different cohorts,” The Scientific World Journal, vol. 2012, Article ID 736730, 9 pages, 2012.
The Scientific World Journal
Volume 2012, Article ID 736730, 9 pages
http://dx.doi.org/10.1100/2012/736730
Research Article

Process Evaluation of a Positive Youth Development Program in Hong Kong Based on Different Cohorts

1Department of Social Work and Social Administration, The University of Hong Kong, Hong Kong
2Department of Applied Social Sciences, The Hong Kong Polytechnic University, Hong Kong
3Public Policy Research Institute, The Hong Kong Polytechnic University, Hong Kong
4Department of Social Work, East China Normal University, Shanghai, China
5Kiang Wu Nursing College of Macau, Macau, China
6Division of Adolescent Medicine, Department of Pediatrics, Kentucky Children's Hospital, University of Kentucky College of Medicine, Lexington, KY, USA

Received 6 November 2011; Accepted 25 December 2011

Academic Editor: Joav Merrick

Copyright © 2012 Ben M. F. Law and Daniel T. L. Shek. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

There are only a few process evaluation studies on positive youth development programs, particularly in the Chinese context. This study aims to examine the quality of implementation of a positive youth development program (the Project P.A.T.H.S.: Positive Adolescent Training through Holistic Social Programmes) and investigate the relationships among program adherence, process factors, implementation quality, and perceived program success. Process evaluation of 97 classroom-based teaching units was conducted in 62 schools from 2005 to 2009. Findings based on different cohorts generally showed that there were high overall program adherence and implementation quality. Program adherence and implementation process were highly correlated with quality and success of the program. Multiple regression analyses further showed that both implementation process and program adherence are significant predictors of program quality and success. Theoretical and practical implications of the findings are discussed.

1. Introduction

Program evaluation is the systematic assessment of the process and outcomes of a program, with the aim of contributing to program improvement, such as deciding whether to continue the program, enhancing existing intervention protocols, and checking compliance with a set of explicit or implicit standards [1]. This paper documents the process evaluation of the Project P.A.T.H.S. (Positive Adolescent Training through Holistic Social Programmes), a large-scale positive youth development program in Hong Kong.

Outcome evaluation focuses mainly on the results of the program, whereas process evaluation is concerned with how the program is actually delivered [2, 3]. Process evaluation is widely adopted in prevention science, such as nursing care [4, 5], chronic illness prevention programs [6–9], smoking cessation programs [10–12], dietary programs [13–17], and AIDS rehabilitation programs [18–22]. In social work practice, process evaluation has been used in family programs [23, 24], but is not commonly used in youth programs [25–27].

Process evaluation consists of five components, namely, program adherence, implementation process, intended dosage, macrolevel implication, and process-outcome linkage [28]. Program adherence deals with whether the program is delivered as intended according to the original program design. It is an important factor affecting the quality of program implementation [3, 29]. True program fidelity is not easily achieved because program implementers often change or adapt the program content during actual implementation, whether intentionally or otherwise. Studies have shown that a number of preventive programs do not follow the prescribed program content entirely, and adaptations are often made for specific target groups [30, 31]. One study found tension between the program implementer's desire to adhere to the manualized plan and the desire to make adaptations in accordance with the needs of clients [32]. Although the issue is not easily resolved, program fidelity is generally encouraged, especially for programs designed with rigorous trial runs and demonstrated repeated success [27, 33, 34].

Process factors are elements that are contingent on implementation quality or success and can be observed during the implementation process. Process factors vary with program characteristics and program developers' needs; some programs are even designed with their own process measurements [35]. Common process factors in prevention programs include program receivers' engagement, program implementers' use of feedback, goal attainment, and program implementers' familiarity with the program receivers. Program dosage, which refers to the effort by the program implementer to follow the time prescribed for a program, is considered another process factor because inadequate time affects the quality of program implementation [27, 36]. Dosage also refers to the group size of program receivers: a discrepancy between the intended and actual ratio of program receivers to program implementers affects the program delivery process as well [26].

Process evaluation can provide important findings with macrolevel program implications, such as the importance of the engagement of different community stakeholders [37, 38], client needs [11], assessment of the environment [39, 40], and challenges of the programs for a particular context [41]. Finally, process evaluation and outcome evaluation are strongly linked. Process evaluation sheds light on which types of intervention strategies or processes are related to the program success [5, 11]. These factors can be amplified during program reimplementation.

The components of process evaluation point towards its importance. First, outcome evaluation provides inadequate hints about the quality of program implementation. Process evaluation demystifies the “black box” of intervention and aids in understanding the elements of program success or failure [42]. It helps program developers fully understand the strengths and weaknesses of the developed programs, and program implementers can follow its suggestions in further program delivery. This is an essence of evidence-based practice and a foundation for bridging the gap between research and practice [43, 44]. Second, process evaluation can inform program developers about whether the program is delivered according to the standardized manual. The presence of activities different from those intended by the program developers means the evaluation will not truly reflect the effectiveness of the prescribed program. Third, different human organizations and communities arrange the program in various settings, with different levels of stakeholder involvement and support. Perceptions of the program also vary across program implementers and program receivers. Process evaluation can document the variety of implementations in real human service settings based on the same manualized plans. Finally, process evaluation provides program developers and implementers with insights into the linkage between process and outcome. These insights allow both groups to delineate areas that are successful or require improvement during the process and to connect them with the program outcomes.

Many primary prevention programs and positive youth development programs have been developed in the West to address growing adolescent developmental problems, such as substance abuse, mental health problems, and school violence [45]. However, in Hong Kong, there are very few systematic and multiyear positive youth development programs. To promote holistic development among adolescents in Hong Kong, the Hong Kong Jockey Club Charities Trust approved the release of HK$750 million (HK$400 million for the first phase and HK$350 million for the second phase) to launch a project entitled “P.A.T.H.S. to Adulthood: A Jockey Club Youth Enhancement Scheme.” The trust invited academics from five universities in Hong Kong to form a research team in order to develop a multiyear universal positive youth program [46].

The project commenced in 2004 and is targeted to end by 2012. There are two tiers of programs in this project. The Tier 1 Program is a universal positive youth development program where students from Secondary 1 (Grade 7) to Secondary 3 (Grade 9) participate in a classroom-based program, normally with 20 h of training in the school year in each grade. Around one-fifth of adolescents with more psychosocial needs join the Tier 2 Program, which consists of intensive training on volunteer service, adventure-based counseling camp, and other experiential learning activities.

The project was designed according to 15 constructs conducive to adolescent healthy development [47]: promotion of bonding, cultivation of resilience, promotion of social competence, promotion of emotional competence, promotion of cognitive competence, promotion of behavioral competence, promotion of moral competence, cultivation of self-determination, promotion of spirituality, development of self-efficacy, development of a clear and positive identity, promotion of beliefs in the future, provision of recognition for positive behavior, provision of opportunities for prosocial involvement, and promotion of prosocial norms. The overall objective of the Tier 1 Program is to promote holistic development among junior secondary school students in Hong Kong.

There are two implementation phases: the Experimental Implementation Phase and the Full Implementation Phase. The Experimental Implementation Phase aims at accumulating experience from trial teaching and administrative arrangement. Program materials are revised and refined during this phase. The Full Implementation Phase aims at executing the programs in full force. Several lines of evidence support the effectiveness of the Tier 1 Program, including the evaluation findings based on randomized group trials [48, 49], subjective outcome evaluation [50], qualitative findings based on focus group interviews with program implementers and students [51], interim evaluation [52], analyses of the weekly diaries of students [53], and case studies [54]. The evaluation findings based on different evaluation strategies indicate that the Project P.A.T.H.S. promotes the development of its participants.

Process evaluation has also been carried out in both the Experimental and Full Implementation Phases for different grades [55, 56]. The evaluation results generally indicate that the quality of implementation and program adherence are high. Program adherence is an objective estimate of the percentage of the manualized plan that is adopted in actual service delivery. A variety of process factors exist. A review of the literature indicates that the following program attributes can affect the quality and success of positive youth development program implementation [31, 42, 57–59].

(1) Student Interest. A successful program usually elicits the interest of the students.
(2) Active Involvement of Students. The more involved the students are, the higher the possibility that the program can achieve its outcomes.
(3) Classroom Management. The program implementer can manage student discipline during student activities. Students obey the requirements set by the program implementer and are attentive.
(4) Interactive Delivery Method. Interactive delivery is better than didactic delivery for positive youth development programs.
(5) Strategies to Enhance the Motivation of Students. The use of various learning strategies can enhance the engagement of the students and result in positive learning outcomes.
(6) Positive Feedback. The use of praise and encouragement throughout the lessons by the program implementers can promote the engagement of the students.
(7) Familiarity of Implementers with the Students. All other things being equal, a high degree of familiarity with the students is positively related to student learning outcomes.
(8) Reflective Learning. The program implementer should engage the students in reflection and deeper learning. This can lead to growth and meaningful changes among the students.
(9) Program Goal Attainment. The achievement of program goals constitutes program success.
(10) Time Management. Efficient time management ensures that the majority of the program materials are carried out with high program adherence.
(11) Familiarity of Program Implementers with the Implementation Materials. Familiarity with the material ensures that the messages are conveyed effectively to the students.

These eleven attributes can form a checklist for evaluating the implementation process.

On the other hand, program quality is the subjective appraisal of the program implementation process by the observer. It can be reflected from the implementation atmosphere and the interaction between program implementers and students. Program success refers to the extent of unit objective attainment and the subjective evaluation of the response of the students to the program.

By conducting secondary data analyses on all datasets collected over the past 4 years regarding process evaluation, the current study has two purposes: (1) to understand the program implementation quality across different cohorts in terms of program adherence, process factors, program quality, and success and (2) to explore factors contributing to the overall quality and success of the Tier 1 Program. Correspondingly, there are two research questions: (1) what is the overall implementation quality of the Tier 1 Program of the Project P.A.T.H.S. in Hong Kong? and (2) how are program adherence and other indicators of process evaluation related to the implementation quality and success of the Tier 1 Program?

2. Method

2.1. Participants and Procedure

From 2005 to 2009, the total number of schools that participated in the Project P.A.T.H.S. was 244. Among them, 46.27% of the respondent schools adopted the full program (i.e., 20 h program involving 40 units), whereas 53.73% of the respondent schools adopted the core program (i.e., 10 h program involving 20 units).

A total of 62 schools were randomly selected to participate in the process evaluation study (23 schools for Secondary 1, 21 for Secondary 2, and 18 for Secondary 3); 25.80% of the participating schools adopted the core program, while the remaining 74.20% adopted the full program. Around 65% of the schools incorporated the program into their formal curriculum (e.g., Liberal Studies, Life Education, and Religious Studies), 27.42% used the class teacher's period to implement the program, and less than 8% used other modes. The average number of students in the participating schools was 33.91. The characteristics of the schools that joined this process evaluation study can be seen in Table 1.

Table 1: Descriptive profile of participating schools from 2005 to 2009.

Process evaluation was carried out in each participating school through systematic observation of actual classroom program delivery. For each school that joined the process evaluation, one to two program units were evaluated by two independent observers, who were project colleagues with master's degrees. A total of 97 units were observed for this study. During the observation, the observers sat at the back of the classroom and evaluated how the units were actually implemented by completing several evaluation forms.

2.2. Instruments

Program Adherence
Observers were requested to rate program adherence as a percentage (i.e., the correspondence between actual program delivery and the stipulated program materials). Aggregating all data across studies, Pearson correlation analyses showed that the ratings of program adherence were highly reliable across raters, suggesting that the assessment of program adherence was consistent.
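As an illustration, the adherence rating can be thought of as the share of stipulated activities actually delivered in a unit. The helper and activity names below are hypothetical, a minimal sketch rather than the project's actual coding scheme:

```python
def adherence_percentage(planned, delivered):
    """Percent of the stipulated activities that were actually delivered."""
    covered = sum(1 for activity in planned if activity in delivered)
    return 100.0 * covered / len(planned)

# Hypothetical activity lists for one teaching unit
planned = ["warm-up", "video", "group discussion", "role play", "debriefing"]
delivered = ["warm-up", "group discussion", "role play", "debriefing"]

adherence_percentage(planned, delivered)  # 4 of 5 activities -> 80.0
```

Each observer would produce such a percentage independently, and the Pearson correlation between the two observers' percentages indexes rater consistency.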

Implementation Process Checklist (IPC)
Observers were requested to report their observations on the items using a 7-point Likert scale ranging from 1 (extremely negative) to 7 (extremely positive). Aggregating the data across studies, the internal consistency of the scale, as shown by Cronbach's alpha, was 0.93. The interrater reliability of the IPC, as shown by Pearson correlation, was 0.72.
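For reference, Cronbach's alpha for a set of Likert items is computed from the item variances and the variance of the total score. The ratings below are hypothetical, not the study's data; the sketch only illustrates the calculation:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (observations x items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)          # per-item sample variances
    total_var = scores.sum(axis=1).var(ddof=1)      # variance of the total score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical 7-point ratings: 5 observed units x 4 checklist items
ratings = [[5, 6, 5, 6],
           [4, 5, 4, 5],
           [6, 6, 5, 6],
           [5, 5, 5, 5],
           [6, 7, 6, 6]]
alpha = cronbach_alpha(ratings)
```

Values near 1 indicate that the checklist items move together, i.e., high internal consistency.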

Process Outcomes
Two items were used to evaluate the process outcome: implementation quality and implementation success. Observers were requested to indicate their observations using a 7-point Likert scale ranging from 1 (poor) to 7 (excellent); a higher score represents better implementation quality or success. The interrater reliability, as shown by Pearson correlation, was 0.74 for implementation quality and 0.64 for implementation success. In short, the aggregated data across studies suggest that the observations on process outcome were consistent across raters.
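Interrater reliability of this kind is simply the Pearson correlation between the two observers' ratings across units. The scores below are hypothetical, for illustration only:

```python
import numpy as np

# Hypothetical implementation-quality ratings from two independent observers
rater1 = np.array([5, 6, 4, 5, 6, 5, 7, 4], dtype=float)
rater2 = np.array([5, 6, 5, 5, 6, 4, 7, 4], dtype=float)

r = np.corrcoef(rater1, rater2)[0, 1]  # Pearson r between the two raters
```

A correlation in the 0.6–0.8 range, as reported in the study, indicates substantial (though not perfect) agreement between raters.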

3. Results

As the interrater reliabilities of the scores across all units were high, the ratings of each item by the two observers were averaged to form a combined indicator. Table 2 shows the descriptive profile of the evaluative indicators for process evaluation. The overall program adherence to the established manual ranged from 13.00% to 100.00%, with an average overall adherence of 85.14%. For the 7-point items, a stringent score of 5.0 or more was used as the cut-off point to indicate high ratings. The mean scores for implementation quality and success were 5.32 (SD = 0.86) and 5.34 (SD = 0.77), respectively, suggesting a high level of implementation. The mean scores of the 11 process evaluation items ranged from 4.96 to 5.62. Classroom management (M = 5.62) and familiarity with students (M = 5.43) had the highest scores, whereas reflective learning (M = 4.96) had the lowest. Apart from reflective learning, all scores were on the high side.
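The averaging and cut-off steps are simple to reproduce. The ratings below are hypothetical, sketching how a combined indicator and the 5.0 threshold would be applied per unit:

```python
import numpy as np

# Hypothetical ratings of one item by the two observers across five units
obs1 = np.array([5.0, 6.0, 4.0, 5.0, 6.0])
obs2 = np.array([5.0, 5.0, 5.0, 6.0, 6.0])

combined = (obs1 + obs2) / 2          # averaged indicator per unit
mean = combined.mean()                # descriptive mean across units
sd = combined.std(ddof=1)             # sample standard deviation
high = (combined >= 5.0).sum()        # units at or above the 5.0 cut-off
```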

Table 2: Descriptive statistics of evaluation items.

Table 3 shows the intercorrelations among program adherence, implementation process, implementation quality, and implementation success. All variables were significantly correlated with each other. The correlations between quality and success and between process and quality were high, whereas the correlations between process and adherence and between quality and adherence were relatively low. Multiple regression analyses were further performed, in which program adherence and implementation process were entered as predictors, and implementation quality and implementation success as two separate dependent variables. Table 4 shows the results for the prediction of implementation quality. Both implementation process and program adherence significantly predicted program quality, with a large proportion of variance in the dependent variable being explained. The effect size of the variance explained, Cohen's f², as calculated by f² = R²/(1 − R²), is 2.13, which is large. Table 5 shows the results for the prediction of implementation success. Similarly, implementation process and program adherence significantly predicted program success. The effect size is 2.03, which is also large.
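The effect size used here, Cohen's f² = R²/(1 − R²), follows directly from the regression R². The sketch below fits the same two-predictor regression on simulated data (not the study's dataset; coefficients and noise level are illustrative) and derives f²:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 97  # matches the number of observed units

# Simulated standardized predictors and outcome (illustrative only)
adherence = rng.normal(size=n)
process = rng.normal(size=n)
quality = 0.3 * adherence + 0.7 * process + rng.normal(scale=0.5, size=n)

# Ordinary least squares with an intercept column
X = np.column_stack([np.ones(n), adherence, process])
beta, *_ = np.linalg.lstsq(X, quality, rcond=None)

pred = X @ beta
ss_res = ((quality - pred) ** 2).sum()
ss_tot = ((quality - quality.mean()) ** 2).sum()
r2 = 1 - ss_res / ss_tot
f2 = r2 / (1 - r2)  # Cohen's f^2; 0.35 or above is conventionally "large"
```

An f² above 0.35 is conventionally interpreted as a large effect, so the reported values of 2.13 and 2.03 are far above that threshold.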

Table 3: Intercorrelations among program adherence, implementation process, implementation quality, and implementation success.
Table 4: Regression table of implementation quality.
Table 5: Regression table of implementation success.

4. Discussion

This study attempts to integrate the process evaluation findings from multiple studies via secondary data analyses. There are several unique features of this study. First, a large number of teaching units and schools were evaluated. Second, two independent observers conducted the assessment. Third, a structured and reliable measure of program implementation was used. Fourth, interrater reliability analyses showed that the observations were basically reliable. Finally, this is the first large-scale process evaluation of positive youth development programs in the Chinese context.

Despite the discrepancy in the ratings of program adherence on different units, the overall degree of adherence to the program is on the high side. This observation is generally consistent with previous findings generated from separate process evaluation studies conducted by observers [55, 56] and subjective outcome evaluations reported by the program implementers [54]. Most program content is well designed for implementation. This can be attributed to the fact that all program materials have gone through trial teaching. They have already been revised and refined according to prior teaching experience. Thus, program implementers may not have great difficulty in following the teaching plans. These findings dispute the common myth that curriculum-based positive youth development programs cannot be used easily and require major adaptations or modifications.

The findings on program adherence are very encouraging because program adherence is generally low in the international context. In a meta-analysis of evaluation studies of primary and early secondary prevention programs published between 1980 and 1994, Dane and Schneider [2] showed that only 39 out of 162 evaluation studies documented fidelity procedures. Domitrovich and Greenberg [60] also reported that among the 34 effective prevention programs under review, only 21% examined whether the effective intervention was related to outcomes. O'Connor et al. [61] suggested that certain adaptations are risky for program adherence, such as reducing the number or length of sessions, lowering the level of participant engagement, eliminating key messages, removing topics, and changing the theoretical approach. Obviously, program adherence, coupled with the effective use of self and good interaction between implementers and students, can be more difficult than expected. This requires intensive training and personal reflexivity on the part of the social worker.

The present study found that different aspects of the program delivery were perceived to be positive, highlighting the fact that the Tier 1 Program of the Project P.A.T.H.S. was well received by both program implementers and students. Moreover, the implementation was regarded as successful by the observers, although relatively low average ratings were reported on time management and reflective learning. These findings are similar to those based on the Experimental Implementation Phase [55, 56]. There are two possible explanations for the low ratings. First, due to the usual didactic teaching style in Hong Kong, students are not accustomed to reflecting on their everyday life practice in classroom settings. Hence, the students cannot easily shift their learning modes from one-way knowledge dissemination to reflective learning. Second, the overpacking of the curriculum may have prevented the students from carrying out reflections on their learning. Overpacking could have also contributed to the unsatisfactory rating of time management.

The current study also found that program adherence and implementation process are closely associated with implementation quality and success. For positive youth programs, interactive program delivery is key to program quality and success [57]. This explains the high correlations among these factors. Furthermore, implementation process and program adherence were found to predict implementation quality and success. Theoretically, both process and content are critical to program quality and success. All these findings suggest that the need to modify the units during implementation was not high. Again, these findings challenge the common myth that curriculum-based positive youth development programs cannot be easily used in reality and that major modifications must be made for different adolescent populations. This demystification provides an evidence-based justification for following the manuals in an authentic manner.

Findings of the present study have two implications. The first implication is on the conceptual level. When we are concerned about program implementation regardless of external environment (i.e., macrolevel implication and dosage), we can focus on three variables: program adherence, implementation process, and context. These variables are all related to implementation quality or success. The present findings provide conceptual insights for understanding program quality or success. The second implication is on a practical level. The process variables covered in the study can actually be used in other social work or health science contexts, especially in educational and developmental groups. All these measures are important for positive youth programs and should be brought to the fore in the training. Youth workers, social workers, and teachers should be aware that implementation process is critical for classroom-based psychosocial intervention programs.

This study has several limitations. First, only 62 randomly selected schools participated. Although this number can be regarded as respectable, inclusion of more schools with diverse backgrounds would be helpful. Second, process evaluation with reference to macrolevel implications, dosage issues [62], and school characteristics can help program developers understand the quality of the program implementation process further. Third, the observation itself may have a confounding effect: students may be more cooperative when there are visitors or outside observers because they do not want to ruin the reputation of their schools. As Chinese students, they may also want to “give face” to the program implementers [63] and intentionally perform better in front of the raters. Fourth, consistent with the intrinsic problem of all observation studies involving time sampling, one needs to be conscious of the degree to which the present findings generalize to other temporal and spatial contexts. Despite these limitations, the current process evaluation findings suggest that the quality of implementation of the Tier 1 Program of the Project P.A.T.H.S. is generally high.

Acknowledgments

The preparation for this paper and the Project P.A.T.H.S. were financially supported by the Hong Kong Jockey Club Charities Trust.

References

  1. S. Zakrzewski, C. Steven, and C. Ricketts, “Evaluating computer-based assessment in a risk-based model,” Assessment and Evaluation in Higher Education, vol. 34, no. 4, pp. 439–454, 2009.
  2. A. V. Dane and B. H. Schneider, “Program integrity in primary and early secondary prevention: are implementation effects out of control?” Clinical Psychology Review, vol. 18, no. 1, pp. 23–45, 1998.
  3. C. E. Domitrovich and M. T. Greenberg, “The study of implementation: current findings from effective programs that prevent mental disorders in school-aged children,” Journal of Educational and Psychological Consultation, vol. 11, no. 2, pp. 193–221, 2000.
  4. L. A. Huryk, “Factors influencing nurses' attitudes towards healthcare information technology,” Journal of Nursing Management, vol. 18, no. 5, pp. 606–612, 2010.
  5. J. E. Painter, J. M. Sales, K. Pazol, T. Grimes, G. Wingood, and R. J. DiClemente, “Development, theoretical framework, and lessons learned from implementation of a school-based influenza vaccination intervention,” Health Promotion Practice, vol. 11, supplement 3, pp. 42S–52S, 2010.
  6. S. M. Braun, J. C. van Haastregt, A. J. Beurskens, A. I. Gielen, D. T. Wade, and J. M. Schols, “Feasibility of a mental practice intervention in stroke patients in nursing homes: a process evaluation,” BMC Neurology, vol. 10, article 74, 2010.
  7. T. Karwalajtys, B. McDonough, H. Hall et al., “Development of the volunteer peer educator role in a community cardiovascular health awareness program (CHAP): a process evaluation in two communities,” Journal of Community Health, vol. 34, no. 4, pp. 336–345, 2009.
  8. F. S. Mair, J. Hiscock, and S. C. Beaton, “Understanding factors that inhibit or promote the utilization of telecare in chronic lung disease,” Chronic Illness, vol. 4, no. 2, pp. 110–117, 2008.
  9. E. Shevil and M. Finlayson, “Process evaluation of a self-management cognitive program for persons with multiple sclerosis,” Patient Education and Counseling, vol. 76, no. 1, pp. 77–83, 2009.
  10. W. Gnich, C. Sheehy, A. Amos, M. Bitel, and S. Platt, “A Scotland-wide pilot programme of smoking cessation services for young people: process and outcome evaluation,” British Journal of Addiction, vol. 103, no. 11, pp. 1866–1874, 2008.
  11. K. Kwong, A. K. Ferketich, M. E. Wewers, A. Shek, T. Tsang, and A. Tso, “Development and evaluation of a physician-led smoking cessation intervention for low-income Chinese Americans,” Journal of Smoking Cessation, vol. 4, no. 2, pp. 92–98, 2009.
  12. L. Quintiliani, M. Yang, and G. Sorensen, “A process evaluation of tobacco-related outcomes from a telephone and print-delivered intervention for motor freight workers,” Addictive Behaviors, vol. 35, no. 11, pp. 1036–1039, 2010.
  13. M. Allicock, M. K. Campbell, C. G. Valle et al., “Evaluating the implementation of peer counseling in a church-based dietary intervention for African Americans,” Patient Education and Counseling, vol. 81, no. 1, pp. 37–42, 2010.
  14. D. Bowes, M. Marquis, W. Young, P. Holowaty, and W. Isaac, “Process evaluation of a school-based intervention to increase physical activity and reduce bullying,” Health Promotion Practice, vol. 10, no. 3, pp. 394–401, 2009.
  15. A. Hart Jr., D. J. Bowen, C. L. Christensen et al., “Process evaluation results from the eating for a healthy life study,” American Journal of Health Promotion, vol. 23, no. 5, pp. 324–327, 2009.
  16. R. Muckelbauer, L. Libuda, K. Clausen, and M. Kersting, “Long-term process evaluation of a school-based programme for overweight prevention,” Child: Care, Health and Development, vol. 35, no. 6, pp. 851–857, 2009.
  17. S. Salmela, M. Poskiparta, K. Kasila, K. Vahasarja, and M. Vanhala, “Transtheoretical model-based dietary interventions in primary care: a review of the evidence in diabetes,” Health Education Research, vol. 24, no. 2, pp. 237–252, 2009.
  18. M. G. B. C. Bertens, E. M. Eiling, B. van den Borne, and H. P. Schaalma, “Uma Tori! Evaluation of an STI/HIV-prevention intervention for Afro-Caribbean women in the Netherlands,” Patient Education and Counseling, vol. 75, no. 1, pp. 77–83, 2009.
  19. J. L. Fraze, J. D. Uhrig, K. C. Davis et al., “Applying core principles to the design and evaluation of the “Take Charge. Take the Test” campaign: what worked and lessons learned,” Public Health, vol. 123, supplement 1, pp. e23–e30, 2009.
  20. J. Hargreaves, A. Hatcher, V. Strange et al., “Process evaluation of the Intervention with Microfinance for AIDS and Gender Equity (IMAGE) in rural South Africa,” Health Education Research, vol. 25, no. 1, pp. 27–40, 2010.
  21. D. J. Konkle-Parker, J. A. Erlen, and P. M. Dubbert, “Lessons learned from an HIV adherence pilot study in the Deep South,” Patient Education and Counseling, vol. 78, no. 1, pp. 91–96, 2010.
  22. W. Mukoma, A. Flisher, N. Ahmed et al., “Process evaluation of a school-based HIV/AIDS intervention in South Africa,” Scandinavian Journal of Public Health, vol. 37, supplement 2, pp. 37–47, 2009.
  23. A. N. Cohen, S. M. Glynn, A. B. Hamilton, and A. S. Young, “Implementation of a family intervention for individuals with schizophrenia,” Journal of General Internal Medicine, vol. 25, supplement 1, pp. S32–S37, 2010.
  24. K. L. Kumpfer, M. Pinyuchon, A. T. de Melo, and H. O. Whiteside, “Cultural adaptation process for international dissemination of the strengthening families program,” Evaluation and the Health Professions, vol. 31, no. 2, pp. 226–239, 2008.
  25. M. W. Beets, B. R. Flay, S. Vuchinich, A. C. Acock, K. K. Li, and C. Allred, “School climate and teachers' beliefs and attitudes associated with implementation of the positive action program: a diffusion of innovations model,” Prevention Science, vol. 9, no. 4, pp. 264–275, 2008.
  26. S. Franzen, S. Morrel-Samuels, T. M. Reischl, and M. A. Zimmerman, “Using process evaluation to strengthen intergenerational partnerships in the youth empowerment solutions program,” Journal of Prevention and Intervention in the Community, vol. 37, no. 4, pp. 289–301, 2009. View at Publisher · View at Google Scholar · View at Scopus
  27. C. C. Johnson, Y. L. Lai, J. Rice, D. Rose, and L. S. Webber, “ACTION live: using process evaluation to describe implementation of a worksite wellness program,” Journal of Occupational and Environmental Medicine, vol. 52, supplement 1, pp. S14–S21, 2010. View at Publisher · View at Google Scholar · View at Scopus
  28. M. A. Scheirer, “Designing and using process evaluation,” in Handbook of Practical Program Evaluation, J. S. Wholey, H. P. Hatry, and K. E. Newcomer, Eds., Jossey-Bass, San Francisco, Calif, USA, 1994. View at Google Scholar
  29. A. A. Fagan, K. Hanson, J. D. Hawkins, and M. W. Arthur, “Bridging science to practice: achieving prevention program implementation fidelity in the community youth development study,” American Journal of Community Psychology, vol. 41, no. 3-4, pp. 235–249, 2008. View at Publisher · View at Google Scholar · View at Scopus
  30. D. S. Elliott and S. Mihalic, “Issues in disseminating and replicating effective prevention programs,” Prevention Science, vol. 5, no. 1, pp. 47–53, 2004. View at Publisher · View at Google Scholar · View at Scopus
  31. M. Nation, C. Crusto, A. Wandersman et al., “What works in prevention: principles of effective prevention programs,” American Psychologist, vol. 58, no. 6-7, pp. 449–456, 2003. View at Publisher · View at Google Scholar · View at Scopus
  32. L. Wegner, A. J. Flisher, L. L. Caldwell, T. Vergnani, and E. A. Smith, “Healthwise South Africa: cultural adaptation of a school-based risk prevention programme,” Health Education Research, vol. 23, no. 6, pp. 1085–1096, 2008. View at Publisher · View at Google Scholar · View at Scopus
  33. S. F. Griffin, S. Wilcox, M. G. Ory et al., “Results from the active for Life process evaluation: program delivery fidelity and adaptations,” Health Education Research, vol. 25, no. 2, pp. 325–342, 2010. View at Publisher · View at Google Scholar · View at Scopus
  34. D. K. Wilson, S. Griffin, R. P. Saunders, H. Kitzman-Ulrich, D. C. Meyers, and L. Mansard, “Using process evaluation for program improvement in dose, fidelity and reach: the ACT trial experience,” International Journal of Behavioral Nutrition and Physical Activity, vol. 6, article 79, 2009. View at Publisher · View at Google Scholar · View at Scopus
  35. J. Yamada, B. Stevens, S. Sidani, J. Watt-Watson, and N. de Silva, “Content validity of a process evaluation checklist to measure intervention implementation fidelity of the EPIC intervention,” Worldviews on Evidence-Based Nursing, vol. 7, no. 3, pp. 158–164, 2010. View at Publisher · View at Google Scholar · View at Scopus
  36. Y. O. Ferguson, E. Eng, M. Bentley et al., “Evaluating nurses' implementation of an infant-feeding counseling protocol for HIV-infected mothers: the ban study in Lilongwe, Malawi,” AIDS Education and Prevention, vol. 21, no. 2, pp. 141–155, 2009. View at Publisher · View at Google Scholar · View at Scopus
  37. B. Zani and E. Cicognani, “Evaluating the participatory process in a community-based health promotion project,” Journal of Prevention and Intervention in the Community, vol. 38, no. 1, pp. 55–69, 2010. View at Publisher · View at Google Scholar · View at Scopus
  38. S. B. Carswell, T. E. Hanlon, K. E. O'Grady, A. M. Watts, and P. Pothong, “A preventive intervention program for urban African American youth attending an alternative education program: background, implementation, and feasibility,” Education and Treatment of Children, vol. 32, no. 3, pp. 445–469, 2009. View at Google Scholar · View at Scopus
  39. M. Eisenberg, “Integrating a school-based health intervention in times of high-stakes testing: lessons learned from full court press,” Health Promotion Practice, vol. 10, no. 2, pp. 284–292, 2009. View at Publisher · View at Google Scholar · View at Scopus
  40. D. Stewart, “Implementing mental health promotion in schools: a process evaluation,” The International Journal of Mental Health Promotion, vol. 10, no. 1, pp. 32–41, 2008. View at Google Scholar
  41. R. M. S. Louis, J. E. Parow, D. W. Eby, C. R. Bingham, H. M. Hockanson, and A. I. Greenspan, “Evaluation of community-based programs to increase booster seat use,” Accident Analysis and Prevention, vol. 40, no. 1, pp. 295–302, 2008. View at Publisher · View at Google Scholar · View at Scopus
  42. T. W. Harachi, R. D. Abbott, R. F. Catalano, K. P. Haggerty, and C. B. Fleming, “Opening the black box: using process evaluation measures to assess implementation and theory building,” American Journal of Community Psychology, vol. 27, no. 5, pp. 711–731, 1999. View at Publisher · View at Google Scholar · View at Scopus
  43. J. Saul, A. Wandersman, P. Flaspohler, J. Duffy, K. Lubell, and R. Noonan, “Research and action for bridging science and practice in prevention,” American Journal of Community Psychology, vol. 41, no. 3-4, pp. 165–170, 2008. View at Publisher · View at Google Scholar · View at Scopus
  44. A. Wandersman, J. Duffy, P. Flaspohler et al., “Bridging the gap between prevention research and practice: the interactive systems framework for dissemination and implementation,” American Journal of Community Psychology, vol. 41, no. 3-4, pp. 171–181, 2008. View at Publisher · View at Google Scholar · View at Scopus
  45. D. T. L. Shek and J. Merrick, “Promoting positive development in Chinese adolescents: the project P.A.T.H.S. in Hong Kong,” International Journal of Public Health, vol. 1, no. 3, pp. 237–241, 2009. View at Google Scholar
  46. D. T. L. Shek, “Adolescent developmental issues in Hong Kong: relevance to positive youth development programs in Hong Kong,” International Journal of Adolescent Medicine and Health, vol. 18, no. 3, pp. 341–354, 2006. View at Google Scholar · View at Scopus
  47. D. T. L. Shek, “Construction of a positive youth development program in Hong Kong,” International Journal of Adolescent Medicine and Health, vol. 18, no. 3, pp. 299–302, 2006. View at Google Scholar · View at Scopus
  48. D. T. L. Shek and C. M. S. Ma, “Impact of the project P.A.T.H.S. in the junior secondary school years: individual growth curve analyses,” The Scientific World Journal, vol. 11, pp. 253–266, 2011. View at Publisher · View at Google Scholar
  49. D. T. L. Shek and L. Yu, “Prevention of adolescent problem behavior: longitudinal impact of the project P.A.T.H.S. in Hong Kong,” The Scientific World Journal, vol. 11, pp. 546–567, 2011. View at Publisher · View at Google Scholar
  50. D. T. L. Shek and R. C. F. Sun, “Subjective outcome evaluation of the project P.A.T.H.S.: qualitative findings based on the experiences of program implementers,” The Scientific World Journal, vol. 7, supplement 1, pp. 1024–1035, 2007. View at Publisher · View at Google Scholar
  51. D. T. L. Shek and T. Y. Lee, “Qualitative evaluation of the project P.A.T.H.S.: findings based on focus groups with student participants,” International Journal of Adolescent Medicine and Health, vol. 20, no. 4, pp. 449–462, 2008. View at Google Scholar · View at Scopus
  52. D. T. L. Shek, R. C. F. Sun, and A. M. H. Siu, “Interim evaluation of the secondary 2 program of project P.A.T.H.S.: insights based on the experimental implementation phase,” The Scientific World Journal, vol. 8, pp. 61–72, 2008. View at Publisher · View at Google Scholar
  53. D. T. L. Shek, R. C. F. Sun, M. L. Ching, D. W. M. Lung, and C. L. Sui, “Evaluation of project P.A.T.H.S. in Hong Kong: utilization of student weekly diary,” The Scientific World Journal, vol. 8, pp. 13–21, 2008. View at Publisher · View at Google Scholar
  54. D. T. L. Shek and R. C. F. Sun, “Implementation of a positive youth development program in a Chinese context: the role of policy, program, people, process, and place,” The Scientific World Journal, vol. 8, pp. 980–996, 2008. View at Publisher · View at Google Scholar
  55. D. T. L. Shek, H. K. Ma, J. H. Y. Lui, and D. W. M. Lung, “Process evaluation of the Tier 1 program of the project P.A.T.H.S,” The Scientific World Journal, vol. 6, pp. 2264–2273, 2006. View at Publisher · View at Google Scholar
  56. D. T. L. Shek, Y. L. Tak, and R. C.F. Sun, “Process evaluation of the implementation of the secondary 2 program of project P.A.T.H.S. in the experimental implementation phase,” The Scientific World Journal, vol. 8, pp. 83–94, 2008. View at Publisher · View at Google Scholar
  57. Collaborative for Academic, Social, and Emotional Learning, “Social and emotional learning implementation guidance,” 2010, http://casel.org/why-it-matters/benefits-of-sel/.
  58. C. L. Ringwalt, S. Ennett, R. Johnson et al., “Factors associated with fidelity to substance use prevention curriculum guides in the nation's middle schools,” Health Education and Behavior, vol. 30, no. 3, pp. 375–391, 2003.
  59. N. S. Tobler, T. Lessard, D. Marshall, P. Ochshorn, and M. Roona, “Effectiveness of school-based drug prevention programs for marijuana use,” School Psychology International, vol. 20, no. 1, pp. 105–137, 1999.
  60. C. E. Domitrovich and M. T. Greenberg, “The study of implementation: current findings from effective programs that prevent mental disorders in school-aged children,” Journal of Educational and Psychological Consultation, vol. 11, no. 2, pp. 193–221, 2000.
  61. C. O’Connor, S. A. Small, and S. M. Cooney, “Program fidelity and adaptation: meeting local needs without compromising program effectiveness,” What Works, Wisconsin: Research to Practice Series, vol. 4, pp. 1–5, 2007.
  62. R. P. Saunders, M. H. Evans, and P. Joshi, “Developing a process-evaluation plan for assessing health promotion program implementation: a how-to guide,” Health Promotion Practice, vol. 6, no. 2, pp. 134–147, 2005.
  63. T. K. P. Leung and R. Y. K. Chan, “Face, favour and positioning—a Chinese power game,” European Journal of Marketing, vol. 37, no. 11-12, pp. 1575–1598, 2003.