Journal of Biomedical Education
Volume 2015, Article ID 615169, 8 pages
http://dx.doi.org/10.1155/2015/615169
Research Article

Workplace Based Assessment in an Asian Context: Trainees’ and Trainers’ Perception of Validity, Reliability, Feasibility, Acceptability, and Educational Impact

1Rimba Dialysis Centre, Bandar Seri Begawan BE3119, Brunei Darussalam
2Department of Internal Medicine, RIPAS Hospital, Bandar Seri Begawan BE1518, Brunei Darussalam
3Department of Renal Medicine, Tan Tock Seng Hospital, 11 Jalan Tan Tock Seng, Singapore 308433
4Universiti Brunei Darussalam, Tungku Link Road, Bandar Seri Begawan BE1410, Brunei Darussalam

Received 27 October 2015; Revised 2 December 2015; Accepted 6 December 2015

Academic Editor: Terrence M. Shaneyfelt

Copyright © 2015 Jackson Tan et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Workplace based assessment (WPBA) is commonplace in postgraduate training in many countries but is not widely practised or established in Asia. The WPBA tools used by the local programme are the Mini-Clinical Examination (Mini-CEX), Directly Observed Practical Skills (DOPS), Multisource Feedback (MSF), and Case Based Discussion (CBD). This cross-sectional study used a questionnaire to obtain feedback from both assessors and trainees. Participants rated the tools on a 5-point scale for validity, reliability, feasibility, educational impact, and acceptability. 30 assessors and 23 trainees participated in the study. The percentages of adequate ratings given by trainees for validity, reliability, feasibility, educational impact, and acceptability were 100%, 99%, 91%, 100%, and 100%, respectively, across all tools. There was no difference in perceptions between trainees and assessors for any WPBA tool except MSF. The common themes that arose suggested that the applicability of WPBA could be affected by faculty development, endorsement from governing bodies, pervading cultural mindsets, and the complex relationships between doctors and patients; teachers and students; and educators and clinicians. The high level of satisfaction among our respondents indicates that WPBA can be successfully integrated into an Asian postgraduate training programme despite systemic, cultural, and language barriers.

1. Background

Workplace based assessment (WPBA) involves direct observation of trainees’ performance in the workplace, followed by feedback based on that performance, and now forms an essential part of trainee evaluation in many countries. It conforms to the highest tier of Miller’s triangle [1], whereby trainees can be observed and assessed in real-life situations. This differs from assessments traditionally done in postgraduate and undergraduate settings, where performance assessments are usually knowledge-based (written or oral examinations), summative (educational supervisor reports), or conducted under simulated clinical conditions (the Objective Structured Clinical Examination, or OSCE). The range of available assessment tools has expanded over the past few years and can assess a wide range of real-life competencies relevant to medical practice.

We have adopted WPBA in our national postgraduate curriculum for doctors in foundation training since 2009 [2]. The system was adapted from the established foundation training programme of the United Kingdom, where trainees have to fulfil a series of WPBA requirements before being allowed to progress in their training. The WPBA tools used by the local programme are the Mini-Clinical Examination (Mini-CEX), Directly Observed Practical Skills (DOPS), Multisource Feedback (MSF), and Case Based Discussion (CBD). These are summatively assessed during the Annual Review of Competence Progression (ARCP) meetings, where trainees are expected to produce documents pertaining to all the assessments done during the one-year period. Failure to demonstrate competence progression, that is, failure to achieve the standard for completion (including unsatisfactory or insufficient assessments), will usually result in trainees having to repeat the assessments or undergo remedial training.

Whilst WPBA is commonplace in training in countries such as the United Kingdom and the United States, it is not universally utilised. However, judging by the burgeoning number of international publications on WPBA, there are signs that this is a growing trend worldwide. This is consistent with the approach by the World Federation for Medical Education (WFME) to develop global assessment programmes to ensure that students achieve the intended clinical competence [13]. The utility or usefulness of an assessment tool has been defined as a product of its validity, reliability, feasibility, acceptability, and educational impact [14]. These principles have been adapted and reported in various publications on WPBA [15–17]. Therefore, for the basis of this research, judgements on WPBA were made in relation to these five principles, which are broadly defined below:

(1) Validity is concerned with the interpretation of assessment results rather than an inherent quality of the tool. It is generally inferred from the evidence presented to assess a particular attribute [18].

(2) Reliability refers to the consistency or reproducibility of assessment results over time and instances. Like validity, it is a characteristic of the result or outcome of the assessment, not of the tool itself [19].

(3) Feasibility (cost-effectiveness): the time, effort, and expenses involved in implementing a tool (developing, administering, reporting, and interpreting) should justify its use. This must also be weighed against the delivery of clinical services [20].

(4) Educational impact refers to the educational message conveyed to the trainees that can lead to learning and training opportunities [15].

(5) Acceptability is the belief that the tool is acceptable for practice. If the beliefs, attitudes, and opinions of both assessors and trainees are not taken into account, the sustainability and survival of the tool may be compromised [14].

Based on our personal communications regionally and our interpretations of published data, we deduced that WPBA is not commonly or systematically performed and integrated in postgraduate settings in many Asian countries. We have used WPBA in our two-year postgraduate foundation programme since its inception in 2009. We felt our experience with implementing WPBA as a novel method of trainee assessment in our postgraduate setting has provided some insight into the difficulties that may be experienced by other Asian countries. Therefore, we aimed to further evaluate users’ perceptions of the different qualities of WPBA tools in an Asian context, through a literature review of publications on WPBA usage in Asian countries and through our own users’ feedback.

2. Methods

This was a cross-sectional study utilising a questionnaire to obtain feedback from both assessors and trainees on the use of WPBA tools. Assessors and trainees were asked to rate the tools on the characteristics described previously (validity, reliability, feasibility, educational impact, and acceptability) on a 5-point scale (very poor, poor, average, good, and excellent) in a novel, purpose-made questionnaire. They could also select “unsure” or “unable to assess.” For the purpose of this study, a rating is considered “adequate” if it is average or better. The questionnaire additionally provided an opportunity for unstructured free-text feedback. For the comparison of perceptions between assessors and trainees, each WPBA tool was given an overall score, derived as the average of the scores across the 5 domains (validity, reliability, feasibility, educational impact, and acceptability). Ratings of “very poor” to “excellent” were given incremental scores of 0 to 4, so a higher overall score indicates a higher level of satisfaction.
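As an illustration of the scoring scheme just described, a minimal sketch in Python follows; the function names, and the choice to exclude “unsure”/“unable to assess” responses from scoring, are our own assumptions rather than details taken from the study’s actual analysis.

```python
# Map the 5-point ratings to incremental scores of 0-4, as described
# in the Methods. "unsure" / "unable to assess" responses are assumed
# to be excluded from scoring (an assumption, not stated in the study).
RATING_SCORES = {
    "very poor": 0, "poor": 1, "average": 2, "good": 3, "excellent": 4,
}

def overall_score(domain_ratings):
    """Overall score for one WPBA tool: the average of the scores
    across the 5 domains (validity, reliability, feasibility,
    educational impact, acceptability)."""
    scores = [RATING_SCORES[r] for r in domain_ratings
              if r in RATING_SCORES]
    return sum(scores) / len(scores) if scores else None

def is_adequate(rating):
    """'Adequate' is defined in the study as a rating of 'average'
    or better."""
    return RATING_SCORES.get(rating, -1) >= RATING_SCORES["average"]

# Example: one respondent's ratings for a single tool across 5 domains
ratings = ["good", "excellent", "average", "good", "good"]
print(overall_score(ratings))                 # 3.0
print(all(is_adequate(r) for r in ratings))   # True
```

A higher overall score indicates greater satisfaction, which is how the assessor and trainee scores are compared in the next paragraph.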

All assessors who were involved in WPBA, and trainees who had exited the foundation training programme between 2010 and 2014, were invited to participate. Respondents were allowed to provide anonymous feedback to the researchers. Key members of the assessment team were interviewed on the basis of the feedback given. Qualitative analysis of the feedback used a deductive grounded theory approach to define a structure of codes relevant to the subject. These codes were used to generate themes through iterative readings by the authors. Intercoder reliability was addressed through independent coding of transcripts and comparison of agreement on the codes used.

IBM SPSS for Windows version 21.0 was used to enter and analyse the data. Descriptive and inferential statistical analyses were applied. We compared the overall scores of WPBA tools between assessors and trainees using the independent t-test for normally distributed scores (i.e., DOPS and Mini-CEX), whereas the Mann-Whitney U test was used to compare skewed overall scores (i.e., MSF and CBD). All hypothesis tests were two-sided, and a p value of less than 0.05 was considered to indicate a significant difference.
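The study used SPSS, but the same two comparisons can be sketched with SciPy; the score arrays below are randomly generated for illustration and are not the study’s data.

```python
# Sketch of the two hypothesis tests described above, using SciPy.
# The "scores" here are simulated, not the study's actual ratings.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical overall scores (0-4 scale) for one WPBA tool,
# sized to match the study's 30 assessors and 23 trainees
assessor_scores = rng.normal(3.0, 0.5, 30).clip(0, 4)
trainee_scores = rng.normal(3.2, 0.5, 23).clip(0, 4)

# Independent t-test for normally distributed overall scores
# (used for DOPS and Mini-CEX in the study)
t_stat, p_t = stats.ttest_ind(assessor_scores, trainee_scores)

# Mann-Whitney U test for skewed overall scores
# (used for MSF and CBD in the study)
u_stat, p_u = stats.mannwhitneyu(assessor_scores, trainee_scores,
                                 alternative="two-sided")

# Both tests are two-sided; p < 0.05 is taken as significant
print(f"t-test p = {p_t:.3f}, Mann-Whitney p = {p_u:.3f}")
```

The choice of test follows the distribution of the scores: the t-test assumes approximate normality, while the Mann-Whitney U test is rank-based and robust to skew.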

3. Results

A total of 30 assessors and 23 trainees participated in the study. All the trainees were doctors who had completed two years of foundation year training in Brunei since the inception of the programme in 2009. On average, each trainee will have been exposed to 12 Mini-CEXs, 12 DOPS, 4 CBPDs, and 2 MSFs per year. Table 1 summarises the ratings given by the trainees. All the tools were rated well, although there were a few “poor” ratings for the feasibility of Mini-CEX and DOPS, relating especially to the difficulty of finding appropriate opportunistic cases and willing, experienced assessors. The percentages of “adequate” scores from trainees for validity, reliability, feasibility, educational impact, and acceptability were 100%, 99%, 91%, 100%, and 100%, respectively, across all tools. The percentages of “adequate” scores for CBPD, DOPS, Mini-CEX, and MSF were 100%, 96%, 97%, and 100%, respectively.

Table 1: Trainees’ perception of validity, reliability, feasibility, educational impact, and acceptability of WPBA.

The numbers of assessors who responded differed between tools because some assessors were not involved with the use of certain assessment tools. Assessors rated all the tools highly but were not as enthusiastic about MSF as about the other tools. Some felt that MSF had a limited role in assessing competencies due to cultural and systemic restrictions. The percentages of “adequate” scores from assessors for validity, reliability, feasibility, educational impact, and acceptability were 97%, 93%, 95%, 94%, and 95%, respectively, across all tools. The percentages of “adequate” scores for CBPD, DOPS, Mini-CEX, and MSF were 98%, 100%, 100%, and 84%, respectively. These are summarised in Table 2.

Table 2: Assessors’ perception of validity, reliability, feasibility, educational impact, and acceptability of WPBA (CBPD: Case Based Presentation and Discussion, DOPS: Directly Observed Practical Skill, Mini-CEX: Mini-Clinical Examination, and MSF: Multisource Feedback).

A comparison of the perceived-quality overall scores for each WPBA tool between assessors and trainees is presented in Table 3. Trainees had significantly higher scores than assessors for MSF, whereas assessors had significantly higher scores than trainees for Mini-CEX. The other overall scores did not differ significantly between assessors and trainees.

Table 3: Comparison of perceived quality overall scores for each WPBA tool between assessors and trainees.

The common themes that arose from the qualitative feedback from assessors and trainees suggested that the applicability of WPBA could be affected by faculty development, endorsement from governing bodies, pervading cultural mindsets, and the complex relationships between doctors and patients; teachers and students; and educators and clinicians. These are summarised in Table 4.

Table 4: Themes from qualitative feedback.

4. Discussion

We believe that we are the first Asian country to use a wide spectrum of WPBA tools in a national training curriculum for postgraduate doctors. Judging by the limited publications on WPBA from Asian countries, we made the assumption that WPBA has not been widely used in postgraduate training programmes in Asia. Table 5 summarises the available literature on postgraduate usage of WPBA from different Asian countries. Most of the literature reported satisfaction and benefits to its users, but experience with whole-scale usage of WPBA was still lacking. There is common agreement that WPBA should be implemented to improve postgraduate clinical training systems and that the tools could be applicable in local clinical settings [21–23]. There were some reports on undergraduate usage of WPBA, but these were not included in Table 5 because they did not involve assessment of qualified doctors in real-life settings.

Table 5: Literature from Asian countries on postgraduate WPBA.

This study is primarily concerned with users’ perception of the validity, reliability, feasibility, educational impact, and acceptability of WPBA. We feel that, by incorporating views from both assessors and trainees, we are able to form a reasonable judgement on the applicability of WPBA within our setup. Trainees were enthusiastic about all types of WPBA, particularly CBPD. CBPD is a modified version of CBD that we use in our foundation year programme [2]. Our trainees present their cases with PowerPoint presentations to two specialists and other trainees. As with CBD, the presentation focuses on encounters with patients based on outpatient records, inpatient records, or discharge summaries. In addition to the usual competencies that can be assessed by CBD, CBPD allows the assessment of presentation skills, literature review, and group interaction. Our trainees scored CBPD highly for educational impact, with more than 90% giving ratings of “good” or “excellent.” All the other WPBA tools also scored well, with few ratings below “average.” The main gripe from a minority of trainees was the difficulty of finding experienced clinicians to assess them, or suitable cases to use, for Mini-CEX and DOPS (as evidenced by the relatively poorer scores for feasibility). They conceded that, with proper planning and scheduling, this could have been more feasible had they spread their assessments throughout the year rather than clustering them around the time of appraisal meetings.

Trainers were also generally positive in their reviews of WPBA. Interestingly, MSF was less well received by trainers because it was felt that there was poor evidence for its usage within our cultural and educational setup. Most untrained assessors (especially allied healthcare professionals) were uncomfortable with providing feedback or criticism, even when responses were anonymous. Some experienced assessors felt overburdened by the workload created by WPBA; as would be expected in a fledgling programme, there were not enough experienced assessors to share the assessment duties. Others felt that the validity and reliability of the tools may have been compromised by the use of languages and dialects other than English. However, it was also observed that experienced assessors were more generous with their ratings, indicating that experience and familiarity with WPBA may have influenced overall perceptions.

We were able to theme the problems that we faced through the qualitative feedback from this research and personal communication with experienced faculty assessors (Table 4) and to correlate these findings with similar evidence drawn from published Asian literature. The themes are described in the following paragraphs.

4.1. Faculty Development

A lack of faculty training and experience is often seen as an impediment to the adoption and integration of WPBA in most training programmes. We have been fortunate to have faculty members with a strong interest in the subject and educational links to the United Kingdom. Training and awareness have been systematically rolled out to all specialties and hospitals in the country through lectures (as part of continuing professional development), workshops, and local publications. We have enlisted the help of numerous UK-based educationalists to run workshops on an annual basis to update and upgrade knowledge of WPBA. Based on our interpretation of the Asian literature, many centres share the same concerns about the quality of faculty training and the expansion of the faculty pool [4–6]. Faculty training is needed to clarify the exact competencies to be assessed and to establish a shared understanding of the expected norms. The art of giving feedback must be addressed to ensure that it stimulates and enhances the improvement of skills. Standardisation of assessment can be achieved through regular training, to ensure that suboptimal performances and essential skills are not missed by inexperienced assessors. Conflict between the dual roles of assessor and teacher should also be addressed and minimised, as it may lead to an unwillingness to give poor evaluations and therefore to a failure to identify trainees in difficulty. This can be minimised in part by appropriate sensitisation and regular training.

4.2. Endorsement from Higher Authorities

We were required to provide a competency-based training programme for our foundation year doctors in 2009. Many institutions in Asia have not embraced WPBA because no national training bodies or similar authorities require, endorse, or promote its usage. WPBA was incorporated into specialty and foundation year training in the UK in 2002 following changes in training and work patterns within the National Health Service (NHS) [24]. The introduction of the European Working Time Directive led to a deterioration in the quality of learning opportunities, which necessitated a change to maximise new opportunities for learning [25]. It is a compulsory requirement for junior doctors to comply with assessment procedures if they want to progress through credentialled training. As a result, all institutions and trusts were given guidelines and procedures for setting up assessment processes within their setup. A decision by a governmental or equivalent regulatory body is essential to push forward this ethos of competency-based training and ensure patient safety, especially in non-university-affiliated and independent medical institutions where the priority is to strengthen service provision rather than improve the training experience of doctors.

4.3. Changing Asian Teaching Perspectives

Competency-based training may not be as deeply rooted in Asian medical institutions. Asian methods of training tend to focus on imparting knowledge to students and trainees rather than on improving clinical abilities. A study comparing a Thai and a Canadian residency programme showed that Thai teachers were more likely to emphasise knowledge as the most important attribute of a good teacher, while Canadian teachers ranked clinical competence as the most highly valued characteristic [26]. Comparing Chinese and Western perceptions of teaching, Pratt et al. [27] found that the former defined teachers as knowledge experts, the teacher-student relationship as hierarchical but personal, and feedback as designed for identifying weaknesses, whereas the latter defined teachers as knowledge applicators, the teacher-student relationship as less hierarchical but also less personal, and feedback as useful for identifying both weaknesses and strengths. Clearly, for successful integration and adaptation of competency-based training, such teaching perceptions among Asian assessors need to be addressed and altered.

4.4. Teacher-Student Relationship

The culture in most Asian countries could be classified as high in power distance and low in individualism [28]. This implies that hierarchy shapes teacher-student relationship patterns and that students in clinical teaching rarely receive individual feedback from senior clinicians. The implementation of WPBA required a shift in the usual teacher-student interaction, from group supervision towards feedback for individual students [29]. Typically, students have been found to perceive feedback from specialists as more instructive and educationally beneficial than feedback from other health professionals [30]. This means that assessors should ideally be at specialist level; in practice, this may not be possible given the large volume of students and the low number of willing, experienced clinical educators. This is especially true in our setup (and in most Asian institutions), as we do not have a “top-heavy” hierarchical system, unlike our Western counterparts, who arguably have a greater number of senior academicians and super-specialists in their ranks. Because of this dearth of experienced specialist educators, we have deliberately involved senior clinicians (at nonspecialist levels) in all our workshops, with a special emphasis on feedback delivery.

4.5. Doctor-Patient Relationship

The doctor-patient relationship in Asia may differ from that in the West. Some studies have shown that the prevailing style of communication is often paternalistic or one-way, with a dominant role for the doctor [31, 32]. Efforts to implement a more desirable partnership-style consultation are usually hampered by cultural and systemic factors. Communication patterns are determined by accepted social differences and aimed at avoiding conflict and maintaining a pleasant atmosphere. The relationship between doctors and patients is expected to follow unspoken rules of behaviour, in which value is placed on politeness and maintaining positive etiquette [32]. A lack of systemic structure in many Asian hospitals means that the approach to medical consultations differs from that in the West. Many outpatient clinics are driven by a high volume of unscheduled patients, and doctors do not know in advance how many patients they will see or which patients they are seeing. Furthermore, most Asian countries have much lower doctor-to-patient ratios than European and American countries [33], meaning that service provision may take precedence over training needs. These organisational and cultural differences may affect the way assessments are carried out, as planning is difficult and patients may be too subservient and cooperative to allow meaningful assessments to take place.

4.6. Educationalist-Clinician Relationship

Integration of the role of medical educators into the planning of the clinical curriculum is essential for the proliferation of WPBA. Improvement in the relationship between educators and clinicians is fundamentally important to allow for novel changes in medical education [34]. In developed countries, these roles are often intertwined, and individuals can hold dual responsibilities through double affiliations with hospitals and universities. This is frequently not the case in Asian countries, where medical educators and clinicians have separate roles and separate obligations to their employing bodies. Many hospital staff and clinicians, for whom teaching is not a core interest, may view WPBA as an added burden on top of their heavy clinical obligations. We have worked closely with our medical school to ensure that clinicians are recognised and accredited as educators and can contribute to the undergraduate and postgraduate curricula.

The strengths of our study lie in the evaluation of a comprehensive panel of previously validated WPBA tools aligned within a detailed curriculum that includes specific aims and objectives, varied learning opportunities (both didactic and experiential), and outcomes. We do, however, acknowledge the limitation that this is largely a subjective study, although characteristics like feasibility and acceptability are subjective in nature. Whilst our numbers are small, many training institutions may have similar numbers, given that training capacity is likely to be related to bed numbers.

5. Conclusion

Despite the challenges described for the implementation of WPBA, we report a high level of satisfaction among our trainers and trainees. This indicates that WPBA can be successfully integrated into an Asian training programme despite the language, cultural, and systemic barriers. While perception of satisfaction and adequacy by our users may not give absolute evidence on validity and reliability, it is an encouraging step towards the acceptance of these tools into our day-to-day clinical practice. Despite our limitations, we have shown that it is feasible to implement this in our system and our trainees have derived educational benefits throughout this process. From our experience, we feel that faculty development and administrative support are important factors that may influence the adaptation and delivery of WPBA. More research is needed to investigate the relevance and suitability of WPBA to account for the systemic and cultural nuances that are present in the way that Asian countries practise medicine.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

References

  1. G. E. Miller, “The assessment of clinical skills/competence/performance,” Academic Medicine, vol. 65, supplement, pp. S63–S67, 1990.
  2. J. Tan, S. N. A. Pengiran Tengah, K. K. Tan, A. M. L. Yong, and E. S. F. Chong, “Postgraduate assessments in Brunei Darussalam,” Brunei International Medical Journal, vol. 7, no. 4, p. 229, 2011.
  3. Y.-Y. Yang, F.-Y. Lee, H.-C. Hsu et al., “Validation of the behavior and concept based assessment of professionalism competence in postgraduate first-year residents,” Journal of the Chinese Medical Association, vol. 76, no. 4, pp. 186–194, 2013.
  4. K.-C. Liao, S.-J. Pu, M.-S. Liu, C.-W. Yang, and H.-P. Kuo, “Development and implementation of a mini-clinical evaluation exercise (mini-CEX) program to assess the clinical competencies of internal medicine residents: from faculty development to curriculum evaluation,” BMC Medical Education, vol. 13, no. 1, article 31, 2013.
  5. W. Chen, M.-M. Lai, T.-C. Li, P. J. Chen, C.-Y. Chan, and C.-C. Lin, “Professional development is enhanced by serving as a mini-CEX preceptor,” Journal of Continuing Education in the Health Professions, vol. 31, no. 4, pp. 225–230, 2011.
  6. F.-Y. Lee, Y.-Y. Yang, H.-C. Hsu et al., “Clinical instructors' perception of a faculty development programme promoting postgraduate year-1 (PGY 1) residents' ACGME six core competencies: a 2-year study,” BMJ Open, vol. 1, no. 2, Article ID e000200, 2011.
  7. S. Rauf, W. Aurangzeb, S. Abbas, and N. Sadiq, “Work place based assessment in foundation year: Foundation University Medical College experience,” Journal of Ayub Medical College, Abbottabad, vol. 23, no. 4, pp. 76–79, 2011.
  8. Y. Zhao, X. Zhang, Q. Chang, and B. Sun, “Psychometric characteristics of the 360 feedback scales in professionalism and interpersonal and communication skills assessment of surgery residents in China,” Journal of Surgical Education, vol. 70, no. 5, pp. 628–635, 2013.
  9. B. Qu, Y.-H. Zhao, and B.-Z. Sun, “Assessment of resident physicians in professionalism, interpersonal and communication skills: a multisource feedback,” International Journal of Medical Sciences, vol. 9, no. 3, pp. 228–236, 2012.
  10. N. Naeem, “Validity, reliability, feasibility, acceptability and educational impact of direct observation of procedural skills (DOPS),” Journal of the College of Physicians and Surgeons Pakistan, vol. 23, no. 1, pp. 77–82, 2013.
  11. K. Al Khalifa, A. Al Ansari, C. Violato, and T. Donnon, “Multisource feedback to assess surgical practice: a systematic review,” Journal of Surgical Education, vol. 70, no. 4, pp. 475–486, 2013.
  12. A. Al Ansari, T. Donnon, K. Al Khalifa, A. Darwish, and C. Violato, “The construct and criterion validity of the multi-source feedback process to assess physician performance: a meta-analysis,” Advances in Medical Education and Practice, vol. 5, pp. 39–51, 2014.
  13. P. M. Lilley and R. M. Harden, “Standards and medical education,” Medical Teacher, vol. 25, no. 4, pp. 349–351, 2003.
  14. C. P. M. Van Der Vleuten, “The assessment of professional competence: developments, research and practical implications,” Advances in Health Sciences Education, vol. 1, no. 1, pp. 41–67, 1996.
  15. A. Wragg, W. Wade, G. Fuller, G. Cowan, and P. Mills, “Assessing the performance of specialist registrars,” Clinical Medicine, vol. 3, no. 2, pp. 131–134, 2003.
  16. M. Chandratilake, M. Davis, and G. Ponnamperuma, “Evaluating and designing assessments for medical education: the utility formula,” The Internet Journal of Medical Education, vol. 1, no. 1, pp. 1–17, 2010.
  17. Z. Setna, V. Jha, K. A. M. Boursicot, and T. E. Roberts, “Evaluating the utility of workplace-based assessment tools for speciality training,” Best Practice and Research: Clinical Obstetrics and Gynaecology, vol. 24, no. 6, pp. 767–782, 2010.
  18. S. M. Downing, “Validity: on the meaningful interpretation of assessment data,” Medical Education, vol. 37, no. 9, pp. 830–837, 2003.
  19. S. M. Downing, “Reliability: on the reproducibility of assessment data,” Medical Education, vol. 38, no. 9, pp. 1006–1012, 2004.
  20. P. Ram, R. Grol, J. J. Rethans, B. Schouten, C. van der Vleuten, and A. Kester, “Assessment of general practitioners by video observation of communicative and medical performance in daily practice: issues of validity, reliability and feasibility,” Medical Education, vol. 33, no. 6, pp. 447–454, 1999.
  21. M. Tanabe, “An update on postgraduate clinical training,” Rinsho Shinkeigaku, vol. 53, no. 11, pp. 1139–1141, 2013.
  22. T. Singh and J. N. Modi, “Workplace-based assessment: a step to promote competency based postgraduate training,” Indian Pediatrics, vol. 50, no. 6, pp. 553–559, 2013.
  23. T. Singh and R. Sood, “Workplace-based assessment: measuring and shaping clinical learning,” National Medical Journal of India, vol. 26, no. 1, pp. 42–46, 2013.
  24. S. Carr, “The Foundation Programme assessment tools: an opportunity to enhance feedback to trainees?” Postgraduate Medical Journal, vol. 82, no. 971, pp. 576–579, 2006.
  25. S. Scallan, “Education and the working patterns of junior doctors in the UK: a review of the literature,” Medical Education, vol. 37, no. 10, pp. 907–912, 2003.
  26. A. K. Wong, “Culture in medical education: comparing a Thai and a Canadian residency programme,” Medical Education, vol. 45, no. 12, pp. 1209–1219, 2011.
  27. D. Pratt, M. Kelly, and W. S. S. Wong, “Chinese conceptions of ‘effective teaching’ in Hong Kong: towards culturally sensitive evaluation of teaching,” International Journal of Lifelong Education, vol. 18, no. 4, pp. 241–258, 1999.
  28. G. Hofstede, “Cultural differences in teaching and learning,” International Journal of Intercultural Relations, vol. 10, no. 3, pp. 301–320, 1986.
  29. Y. Suhoyo, J. Schönrock-Adema, G. R. Rahayu, J. B. M. Kuks, and J. Cohen-Schotanus, “Meeting international standards: a cultural approach in implementing the mini-CEX effectively in Indonesian clerkships,” Medical Teacher, vol. 36, no. 10, pp. 894–902, 2014.
  30. Y. Suhoyo, E. A. Van Hell, T. S. Prihatiningsih, J. B. M. Kuks, and J. Cohen-Schotanus, “Exploring cultural differences in feedback processes and perceived instructiveness during clerkships: replicating a Dutch study in Indonesia,” Medical Teacher, vol. 36, no. 3, pp. 223–229, 2014.
  31. M. Claramita, J. V. Dalen, and C. P. M. Van Der Vleuten, “Doctors in a Southeast Asian country communicate sub-optimally regardless of patients' educational background,” Patient Education and Counseling, vol. 85, no. 3, pp. e169–e174, 2011.
  32. M. Claramita, M. D. F. Nugraheni, J. van Dalen, and C. van der Vleuten, “Doctor-patient communication in Southeast Asia: a different culture?” Advances in Health Sciences Education, vol. 18, no. 1, pp. 15–31, 2013.
  33. World Health Organisation, http://www.who.int/gho/health_workforce/physicians_density/en/.
  34. E. Sabel and J. Archer, “‘Medical education is the ugly duckling of the medical world’ and other challenges to medical educators' identity construction: a qualitative study,” Academic Medicine, vol. 89, no. 11, pp. 1474–1480, 2014.