Journal of Biomedical Education

Volume 2014 (2014), Article ID 161204, 6 pages
Research Article

Self-Assessment of Problem Solving Disposition in Medical Students

Tecnológico de Monterrey School of Medicine and Health Sciences, Avenida Morones Prieto 3000 Pte., Colonia Los Doctores, 64710 Monterrey, NL, Mexico

Received 30 April 2014; Revised 14 July 2014; Accepted 24 July 2014; Published 18 August 2014

Academic Editor: James J. Brokaw

Copyright © 2014 Silvia Lizett Olivares-Olivares and Mildred Vanessa López-Cabrera. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Medical schools are committed to both students and society to develop the capabilities required to succeed in health care environments. Current diagnosis and treatment methods become obsolete ever faster, demanding that medical schools incorporate competency-based education to keep pace with future demands. This study was conducted to assess the problem solving disposition of medical students. A three-subcategory model of the skill is proposed. The instrument was validated for content by a group of 17 experts in medical education and administered to 135 students registered in the sixth year of the M.D. Physician Surgeon program at a private medical school. Cronbach’s alpha indicated an internal consistency of 0.751. The findings suggest that the selected items have both homogeneity and validity. The factor analysis yielded components associated with the three problem solving subcategories. The students’ self-perceptions are higher in the pattern recognition and application of general strategies for problem solving subcategories of the problem solving disposition model.

1. Introduction

Training medical students is a long and complex process: it requires the assimilation of knowledge, the development of attitudes, and the acquisition of values and skills. These skills have become an important topic of discussion in medical education because they guide curriculum development. The private university where this study was conducted recently declared the competencies that a physician must have at graduation in order to meet the highest standards in the national and international context. Those competencies are as follows:
(i) applying the clinical skills learned to perform diagnosis, promote health, and prevent disease;
(ii) applying knowledge of basic, clinical, and social sciences in medical practice;
(iii) developing critical thinking and clinical reasoning;
(iv) generating a positive change in the patient’s health and treating patients with respect and empathy;
(v) educating society in a culture of prevention;
(vi) performing in health systems around the globe as globalization rises.

The private university is interested in the assessment of the level of acquisition of competencies in their students, including their attitudes towards competencies, to fulfill the mission statement and to certify the competence of future physicians in their accreditation requirements.

Epstein and Hundert [1] define competence as the habitual and judicious use of communication, knowledge, technical skills, clinical reasoning, emotion, values, and reflection in daily practice for the benefit of the individual and the community being served. Villa and Poblete classify competences as being instrumental, interpersonal, or systemic [2]. Instrumental refers to those skills that are associated with the individual and involve a combination of methodological and cognitive abilities. Interpersonal skills consider the relationship with others, and systemic skills are those required to manage complex and multiple variables.

Although several authors agree on the importance of developing generic skills in medical students, they propose different approaches to assess competence [1, 3–5]. These evaluation methods include multiple-choice exams, workplace-based assessments, and objective structured clinical examinations (OSCEs) [1].

Epstein and Hundert explain that although instruments are available for the assessment of basic skills, new formats are needed to assess clinical reasoning, expert judgment, management of ambiguity, professionalism, time management, learning strategies, and teamwork [1]. The Individual Generic Skills Test was developed by faculty members of a private medical school in Mexico to assess the disposition of higher education students toward diverse instrumental competencies. The complete instrument measures self-perception of information literacy, problem solving, time management, self-direction, decision making, and critical thinking. The purpose of this paper is to assess the validity and reliability of the problem solving section of this test and to present a descriptive analysis of the results.

2. Materials and Methods

The study design is quantitative, descriptive, and nonexperimental [6]. The method consisted of several phases: (1) design a competency disposition model, (2) determine the items to assess students’ performance based on the model, (3) validate and update items with experts, and (4) apply the test and analyze results. The following description of this method is regarding the problem solving disposition section of the Individual Generic Skills Test.

2.1. Designing of the Problem Solving Disposition Model

According to Sullivan and Chumbley [7] medical professionals require critical thinking, problem solving, and decision-making in their clinical practice. Epstein and Hundert [1] classify competencies considering diverse traits and skills required for physicians (cognitive, technical, integrative, contextual, interpersonal, and affective). Cognitive competences include core knowledge, oral and written communication, information management, abstract problem solving, and self-learning. Similarly, the American Medical Association evaluates professional clinical competence considering recalling facts, principles and theories; solving problems and describing procedures; demonstrating skills in a controlled environment; and subsequent application to a real case [1].

In general, the ability to solve problems includes an inquiring process that incorporates the details of an undesired situation to select the most suitable solution. Williams and Reid [8] state that the mental process required to solve a problem includes understanding the problem, describing its context, and identifying the decisions to be analyzed. Norman and Schmidt propose a three-category model: (1) acquisition of factual knowledge, (2) mastery of general principles that can be transferred to solve new similar problems, and (3) pattern recognition [9].

Based on this framework, we evaluated a three-category model for problem solving (Figure 1).

Figure 1: Problem solving disposition model.

The proposed model includes the basic stages of cognitive processing: (a) reactivation of prior knowledge, (b) pattern recognition and method selection, and (c) application of strategies to solve the problem.

The proposed categories for problem solving disposition are as follows.

2.1.1. Knowledge of Discipline

Although problem solving is a generic competence, it is always developed in a specific context. The way in which a given problem is approached requires extensive knowledge about one or more related topics.

2.1.2. Pattern Recognition

Once the information about the problem has been understood, including missing and given facts, the person must recognize patterns and possible alternatives to achieve a valid solution. This requires reflection on the context and identification of advantages and constraints to arrive at a conclusion. Oblinger and Verville [10] consider that the way in which information is represented in solving problems has a substantial effect on the final solution.

2.1.3. Application of General Strategies for Problem Solving

The competence reaches the highest level when the person is able not only to achieve a valid solution, but also to select the best among the available choices. This process should also be efficient in resources, time, and budget [10].

2.2. Instrument Design

An instrument of 74 items was developed to assess students’ self-perception of their information literacy, problem solving, time management, self-direction, decision-making, and critical thinking skills. A 5-level Likert scale was selected, ranging from 1 (strongly agree) to 5 (strongly disagree).

The problem solving section was developed with a total of 12 statements according to the proposed Problem solving disposition model (Figure 1). Questions 1, 2, 3, 7, 10, 11, and 12 have favorable responses closer to strongly agree, and questions 4, 5, 6, 8, and 9 are reversed. These items, as shown in Table 1, were oriented to measure attitudes and self-perception of students about knowledge of the discipline (items 2, 3, and 4), pattern recognition (items 1, 5, 8, and 11), and application of general strategies for problem solving (items 6, 7, 9, 10, and 12).
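Before any analysis, the reverse-keyed items must be recoded so that a score of 1 is always the favorable end of the scale. A minimal sketch of that recoding (the function and variable names are ours, not part of the original instrument):

```python
import numpy as np

def reverse_score(responses, reversed_items, scale_min=1, scale_max=5):
    """Recode negatively keyed Likert items so low scores are always
    favorable: a response r becomes (scale_min + scale_max) - r."""
    r = np.asarray(responses, dtype=float).copy()
    idx = [i - 1 for i in reversed_items]          # 1-based item numbers
    r[:, idx] = (scale_min + scale_max) - r[:, idx]
    return r

# One respondent's answers to the 12 items; items 4, 5, 6, 8, 9 are reversed
raw = [[1, 2, 1, 5, 4, 5, 2, 4, 5, 1, 2, 1]]
print(reverse_score(raw, [4, 5, 6, 8, 9]))
```

On a 1–5 scale this maps 5 to 1, 4 to 2, and so on, leaving the neutral midpoint 3 unchanged.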

Table 1: Comparison of the original design of the test and the result after the analysis.
2.3. Expert Validation

The instrument was validated with the methodology proposed by Fisher et al. [11]. They propose the application of a two-stage procedure. The first stage is the use of a Reactive Delphi technique to develop and determine content validity. Content validation is a multimethod, quantitative, and qualitative process that is applicable to all elements of an assessment instrument [12]. The second stage is to administer the test to a sample of students to determine scale construct validity and internal consistency.

For the first stage, the use of the Delphi technique, a group of 17 experts were asked to review the section. Haynes et al. consider that every element of an assessment instrument should be judged by multiple experts to validate relevance, representativeness, specificity, and clarity [12].

The criteria for selecting experts for this study were the following: a position as faculty member in a higher education institution, at least ten years of experience in education, experience in curriculum design, and willingness to join the study freely. As part of the validation process, every expert rated each item on a 5-level Likert scale to indicate the degree to which the item measured the competence of problem solving; a score of 1 implied strongly agree and a score of 5 strongly disagree. These data were coded and analyzed using Minitab 17. An item was considered valid only if at least 80% of the experts selected values of three or less on the Likert scale.
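The 80% agreement rule can be expressed directly. A sketch under the scale conventions above (the panel ratings shown are hypothetical, not data from this study):

```python
import numpy as np

def item_is_valid(expert_ratings, cutoff=3, min_agreement=0.80):
    """An item passes content validation when at least `min_agreement`
    of the experts rate it `cutoff` or lower on the 1-5 Likert scale
    (1 = strongly agree that the item measures the competence)."""
    ratings = np.asarray(expert_ratings)
    return bool(np.mean(ratings <= cutoff) >= min_agreement)

# Hypothetical ratings from a panel of 17 experts for a single item
panel = [1, 2, 1, 3, 2, 1, 2, 3, 1, 2, 4, 1, 2, 3, 1, 2, 5]
print(item_is_valid(panel))   # 15 of the 17 ratings are <= 3
```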

2.4. Pilot Study and Statistical Analysis

The test was administered in January 2013 to the total student population (135) registered in the sixth year of the M.D. Physician Surgeon program at a private medical school. Three methods were selected for statistical analysis: Cronbach’s alpha, factor analysis, and descriptive statistics.

To determine the internal consistency of both Individual Generic Skills Test and the problem solving section, Cronbach’s alpha coefficient was computed. According to Vogt [13] values greater than 0.70 are considered to have an acceptable level of internal consistency.
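Cronbach’s alpha is computed from the item variances and the variance of the total score. A self-contained sketch of the standard formula (not the authors’ code):

```python
import numpy as np

def cronbach_alpha(scores):
    """scores: (n_respondents, n_items) matrix of Likert responses.
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)       # sample variance per item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of the sum score
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)
```

Perfectly correlated items yield an alpha of 1; values above 0.70, as reported here (0.751 for the section, 0.779 overall), are conventionally read as acceptable internal consistency [13].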

Item total correlation was calculated to evaluate if any item in the particular section was consistent with the responses of the others in the same section. The higher the values of the coefficient are, the more clearly the item belongs in the scale.
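Item-total correlation is typically computed in its corrected form, correlating each item with the sum of the remaining items so that the item does not inflate its own correlation. A sketch of that computation (our illustration, not the authors’ procedure):

```python
import numpy as np

def corrected_item_total(scores):
    """Correlate each item with the sum of the *other* items in the scale.
    scores: (n_respondents, n_items) matrix of Likert responses."""
    scores = np.asarray(scores, dtype=float)
    rest = scores.sum(axis=1, keepdims=True) - scores   # total minus the item
    return np.array([np.corrcoef(scores[:, j], rest[:, j])[0, 1]
                     for j in range(scores.shape[1])])
```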

A factor analysis was also performed to assign model categories to the instrument items, applying the criterion recommended by Morales Vallejo [14]. The factor analysis applied was a principal components analysis; the number of factors to extract was determined using scree plots. Finally, a Varimax rotation was applied to achieve a simpler structure. Although the author states that a loading of 0.3 is sufficient, we used a cutoff of 0.4 to obtain a cleaner factor structure. Descriptive statistics were then used to analyze the results, considering values closer to 1 (strongly agree) as more favorable responses. The negatively keyed items were reverse-scored.
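The rotation and assignment steps can be sketched as follows. The `varimax` function below is the standard Kaiser algorithm via SVD, not the authors’ Minitab procedure, and the cutoff logic mirrors the 0.4 criterion described above:

```python
import numpy as np

def varimax(loadings, n_iter=100, tol=1e-6):
    """Orthogonal Varimax rotation of a (p items x k components) loading
    matrix, using the SVD form of Kaiser's algorithm."""
    L = np.asarray(loadings, dtype=float)
    p, k = L.shape
    R = np.eye(k)
    crit_old = 0.0
    for _ in range(n_iter):
        Lr = L @ R
        u, s, vt = np.linalg.svd(
            L.T @ (Lr ** 3 - Lr @ np.diag((Lr ** 2).sum(axis=0)) / p))
        R = u @ vt                      # R stays orthogonal at every step
        crit = s.sum()
        if crit - crit_old < tol:
            break
        crit_old = crit
    return L @ R

def assign_items(rotated, min_loading=0.4):
    """Assign each item to the component with its largest absolute loading,
    keeping only loadings at or above the cutoff (0.4 in this study)."""
    best = np.abs(rotated).argmax(axis=1)
    kept = np.abs(rotated).max(axis=1) >= min_loading
    return [(int(c) if ok else None) for c, ok in zip(best, kept)]
```

Because the rotation is orthogonal, each item’s communality (its row sum of squared loadings) is unchanged; only how that variance is distributed across components is simplified.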

3. Results and Discussion

According to the evaluation by the 17 experts, 7 of the 12 original items of the problem solving section were kept. The eliminated items, with their intended categories and standard deviations, were as follows:
(4) “I have trouble asking for help” (knowledge of discipline; SD = 1.495);
(5) “I can talk about my problems for hours without resolving anything” (pattern recognition; SD = 1.538);
(6) “To solve a problem, it does not matter how much money I spend on it” (application of general strategies for problem solving; SD = 1.178);
(8) “I get easily distracted while trying to solve a problem” (pattern recognition; SD = 1.497);
(9) “The best solution to a problem is the first one I think about” (application of general strategies for problem solving; SD = 1.263).
These were the five items with the highest variance of the original 12.

Of the twelve original items, five were eliminated, two were reclassified to other categories, and five remained unaffected (Table 1). The items removed by the experts corresponded closely to those with the highest calculated variance. However, knowledge of discipline was left with only one item and pattern recognition with only two.

The 7 items selected by the experts had a Cronbach’s alpha coefficient of 0.751, and the reliability of the complete test was 0.779, indicating that the Individual Generic Skills Test is reliable both overall and for the problem solving section.

The scree plot suggested extracting three components. An orthogonal Varimax rotation was applied to achieve a simpler structure. The loadings are displayed in Table 2. The criterion was that each item’s loading should be retained on only one of the three components; where an item cross-loaded, the higher loading determined the component with which the item was associated.

Table 2: Factor analysis.

The resulting components from the factor analysis are associated with each of the categories of the Problem solving disposition model. The first component corresponds to item 3 associated with knowledge of discipline. The next component was associated with items 1 and 2 which are represented by pattern recognition. The last component integrates items 7, 10, 11, and 12, which relate to the application of general strategies for problem solving category.

Descriptive analysis includes the results for each item. Table 3 shows the mean and standard deviation of each item. Items with means closer to strongly agree are 7, 10, 11, and 12, which correspond to application of general strategies for problem solving category.

Table 3: Summary of 135 students’ self-assessment of their problem solving skills.

The descriptive statistics show a mean of 2.570 for knowledge of discipline, 1.496 for pattern recognition, and 1.427 for application of general strategies for problem solving, which means that students perceive themselves more favorably in the pattern recognition and application of general strategies for problem solving categories of the problem solving model. Although students appear to believe that they know the general strategies to address problems, they are less confident of their ability to use that knowledge. Norman and Schmidt argue that this acquisition of factual knowledge is memory bound [9]: it relies on the activation of prior knowledge, which facilitates the subsequent processing of new information; on the elaboration of knowledge at the time of learning, which enhances subsequent retrieval; and on matching the context to facilitate recall.

The present study followed the methodology that Fisher et al. [11] proposed in 2001 for validating a self-directed learning readiness scale in nursing education. Similar results were achieved, as both studies perceive the need to retest the scale and improve the instrument by rewriting some items. According to the experts’ feedback, the following items need to be rewritten: (4) I have trouble asking for help, (5) I can talk about my problems for hours without resolving anything, and (8) I get easily distracted while trying to solve a problem. Those items were included to provide a contrast with the information given by the student for similar items, but they may be reconsidered in the next application.

4. Conclusions

The final instrument is valid and reliable considering the information given by academic papers, experts, and statistical analysis. However, it should be improved in further applications.

In addition to the reverse-scored items, it will be necessary to design new items to enhance each category of the Problem solving disposition model. A longitudinal study is also considered for future research on the medical academic program.

This experience will continue to be applied to other sections of the Individual Generic Skills Test to develop a more solid instrument of self-perception in a competency based education model. These types of instruments are recommended to complement traditional exams and other clinical training evaluations. Many other medical schools in Canada, England, Australia, Spain, and the United States are also improving methods for assessing the professional skills of medical students [1].

In order to improve students’ disposition toward the problem solving skill, Problem Based Learning has been incorporated into medical training. Norman and Schmidt explain the need to enhance the acquisition, retention, and use of knowledge in medical and health sciences students [9]. The literature suggests that it is difficult to demonstrate changes in problem-solving ability as students progress from medical school to clinical practice. However, not only is cognitive knowledge important, but so are the development and assessment of students’ competencies and attitudes.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.


References

  1. R. M. Epstein and E. M. Hundert, “Defining and assessing professional competence,” The Journal of the American Medical Association, vol. 287, no. 2, pp. 226–235, 2002.
  2. A. Villa and M. Poblete, Aprendizaje Basado en Competencias: Una Propuesta para la Evaluación de Competencias Genéricas, Ediciones Mensajero S.A.U., Bilbao, España, 2007.
  3. D. R. Garrison, T. Anderson, and W. Archer, “Critical thinking, cognitive presence, and computer conferencing in distance education,” The American Journal of Distance Education, vol. 15, no. 1, pp. 7–23, 2001.
  4. M. Gupta and R. Upshur, “Critical thinking in clinical medicine: what is it?” Journal of Evaluation in Clinical Practice, vol. 18, no. 5, pp. 938–944, 2012.
  5. L. E. Hardin, “Research in medical problem solving: a review,” Journal of Veterinary Medical Education, vol. 30, no. 3, pp. 230–235, 2003.
  6. S. Hernández, C. Fernández, and P. Baptista, Metodología de la Investigación, McGraw-Hill, México, México, 3rd edition, 2003.
  7. D. L. Sullivan and C. Chumbley, “Critical thinking a new approach to patient care,” Journal of Emergency Medicine Services, vol. 35, no. 4, pp. 48–53, 2010.
  8. J. A. Williams and R. C. Reid, “Developing problem solving and communication skills through memo assignments in a management science course,” Journal of Education for Business, vol. 85, no. 6, pp. 323–329, 2010.
  9. G. R. Norman and H. G. Schmidt, “The psychological basis of problem-based learning: a review of the evidence,” Academic Medicine, vol. 67, no. 9, pp. 557–565, 1992.
  10. D. G. Oblinger and A.-L. Verville, What Business Wants from Higher Education, The Oryx Press, Phoenix, Ariz, USA, 1998.
  11. M. Fisher, J. King, and G. Tague, “Development of a self-directed learning readiness scale for nursing education,” Nurse Education Today, vol. 21, no. 7, pp. 516–525, 2001.
  12. S. N. Haynes, D. C. S. Richard, and E. S. Kubany, “Content validity in psychological assessment: a functional approach to concepts and methods,” Psychological Assessment, vol. 7, no. 3, pp. 238–247, 1995.
  13. P. Vogt, Quantitative Research Methods for Professionals, Pearson/Allyn and Bacon, Boston, Mass, USA, 2007.
  14. P. Morales Vallejo, El Análisis Factorial en la Construcción e Interpretación de Tests, Escalas y Cuestionarios, Universidad Pontificia Comillas, Madrid, Spain, 2013.