Abstract

Interpreting basic chest radiographs is an important skill that helps internal medicine residents diagnose and manage respiratory diseases. Educators need tools to ensure that they take a systematic approach when creating a curriculum to teach this skill, as well as other skills, knowledge, or attitudes. Using an instructional design model helps educators accomplish this task by providing a guide they can follow to ensure that the curriculum meets the needs of the learners. Using the creation of a curriculum to teach chest radiograph interpretation as an example, this paper illustrates how educators can use the ADDIE model of instructional design to develop their own curricula.

1. Introduction

Although internal medicine residents are expected to acquire competence in the interpretation of basic chest radiograph patterns and incorporate this information into their overall clinical assessment [1], there are challenges to teaching this skill to a large cohort of learners. Ideally, all learners should be provided with similar learning experiences so that everyone has an equal opportunity to attain these competencies. Also, residents should be shown chest radiographs illustrating a variety of findings and of varying degrees of difficulty. To enhance learning through deliberate practice, residents should demonstrate their interpretation skills to a content expert who can provide immediate corrective feedback [2, 3]. An instructional design model can help plan and enact a curriculum that meets these teaching needs.

The goal of this paper is to show educators how to use an instructional design model known as ADDIE [4], an acronym for Analysis, Design, Development, Implementation, and Evaluation, to create a curriculum in medical education. After providing an overview of the ADDIE instructional design model, we will further describe each phase of the model, using the curriculum we crafted to teach chest radiograph interpretation to our internal medicine residents as an example.

2. Background

In our internal medicine residency program, residents undergo a 4-week pulmonary medicine rotation in each of their first and second years of residency. During any given 4-week rotation, there are approximately 4 residents doing the pulmonary rotation, typically two first-year and two second-year residents. Among other learning objectives, they are expected to learn basic chest radiograph interpretation. After analysing the residents’ comments and quantitative course feedback, we discovered several areas where we could improve our delivery of instruction to satisfy the needs of our learners. To structure the development of this improved curriculum, we used the ADDIE model of instructional design.

3. Overview of the ADDIE Model

The purpose of an instructional design model is to help educators ensure that they are teaching the appropriate material in an optimal manner, or to “… provide both an appropriate destination, and the right road to get you there …” [5]. The ADDIE model is one such instructional design model. It has been used to develop curricula in fields as diverse as library instruction [6] and online continuing education [7]. The phases of this model are analysis, design, development, implementation, and evaluation.

In the analysis phase, educators ascertain the needs of the learners. This involves crafting educational objectives and determining what needs to be taught to accomplish the educational goals. In the design phase, educators create a broad overview, or blueprint, describing how to deliver the instruction to meet the objectives identified during the analysis phase. In the development phase, each component of instruction is planned in as much practical detail as possible to meet the blueprint created during the design phase. In the implementation phase, educators deliver the instruction, with or without first implementing a smaller beta or pilot project. Finally, in the evaluation phase, educators obtain feedback about the program and make the appropriate adjustments to the program of instruction.

While this paper describes the phases sequentially, in practice decisions are continually made and revised throughout the process, with movement back and forth between all phases. For example, learning objectives identified in the analysis phase might be deemed too difficult to deliver during the development phase, necessitating slight revision of the objectives. Or, practical difficulties during initial implementation of the learning program might require immediate changes to elements established during the design or development phases. Thus, in the descriptions of each phase that follow, it is important to note that movement from one phase to the next will not be exclusively linear.

3.1. Analysis

During the analysis phase, educators gather more information about the knowledge, skills, or attitudes the learner needs to attain and what needs to be taught to accomplish this learning. It is also important to thoughtfully weed out extraneous information that does not need to be taught to attain the educational goal, thus better focusing time and resources on essential learning needs. This, in turn, enhances the learners’ engagement as they are learning truly applicable information.

Several methods can be used to gather the information during the analysis phase, such as focus groups [8–10], one-on-one interviews, anonymous questionnaires or surveys [11, 12], mixed qualitative-quantitative studies [13], expert consensus [14, 15] or Delphi studies with content experts [16], audits or tests of current performance [17, 18], opinions of graduates of the program [19], or a combination of these techniques [13, 20]. Using these information-gathering tools, the analysis phase can be subdivided into a needs analysis, task analysis, learner analysis, and performance analysis.

First, a needs analysis is conducted to determine whether the particular skill or knowledge we want to teach is truly needed for the learners to function in the workplace and whether they currently lack this skill or knowledge.

In our case, we conducted exit interviews with each resident who went through the pulmonary rotation and asked whether basic chest radiograph interpretation and integration of these findings into the overall clinical assessment were necessary to enhance patient care. We also asked for their feedback in anonymous surveys to capture information they might feel uncomfortable divulging in person. We polled faculty to ascertain their impression of the residents’ chest radiograph interpretation skills. From all of these sources, we ascertained that it was important for residents to attain competence at identifying basic chest radiographic abnormalities so that they could then integrate the findings with the clinical context of the patient. As others have found for their learners [21–23], we identified this as a skill in which our learners needed to improve.

Educators must then perform a job or task analysis. Here, the tasks the learners must perform, as well as the knowledge, skills, and attitudes they require, are defined and broken down into components. This, in turn, will later inform the learning objectives.

We asked our faculty (i.e., content experts in respirology) to list the basic competencies in chest radiograph interpretation they felt internal medicine residents would require. These competencies included correct identification of normal structures on the chest radiograph, identification of how differences in radiographic technique can influence the appearance of, and comparisons between, chest radiographs, identification of abnormal findings, and distinguishing between abnormal findings which have a similar appearance but are due to different pathologies. We also decided that the residents should use critical thinking when interpreting chest radiographs; that is, they should identify the abnormal finding(s), identify other pertinent negative or positive findings to fine-tune the differential diagnosis, and systematically review the chest radiograph to ensure nothing was missed. This would be similar to their approach to a patient, which includes eliciting the chief complaint, ruling other findings in or out on history and physical examination to narrow the differential diagnosis, and conducting a review of systems.

The task analysis is also important for deciding (and eliminating) what does not need to be taught to accomplish the task or what could be better taught in a different venue. An example of the latter included instruction on the appropriate indications for performing a chest radiograph; while knowledge of this is important, we decided it was better to teach this during the residents’ clinical rotations, in the proper clinical context.

A learner analysis occurs next; here, educators must establish the learners’ current knowledge and skills, their motivation for learning the subject, and their learning preferences. Using focus groups, we ascertained that our internal medicine residents needed a brief overview of the approach to chest radiograph interpretation, that they wanted to demonstrate their interpretation to a content expert who could provide immediate feedback, and that they wanted to see as many chest radiographs as time permitted.

Lastly, the role of the performance analysis is to select measures that will provide information on whether the program of instruction is achieving its goal. As the information-gathering tools previously mentioned have varying degrees of validity, reliability, and cost (in time, personnel, and resources), the tools actually selected must balance these factors against the resources available to the educators. We chose to evaluate our program’s effectiveness through faculty members’ observations of the residents’ chest radiograph interpretation and the residents’ anonymized course feedback.

3.2. Design

After the analysis phase comes the design phase, where educators create an overall blueprint of how the instruction will be delivered. This includes choosing the optimal method(s) of instruction and creating useful, action-oriented learning objectives to guide the learning.

Potential methods of instruction include (but are not limited to) large classroom lectures [24–26], case-based teaching [27, 28], small group teaching [29–31], ward-based teaching using chest radiographs of current hospital in-patients [32], self-instruction using a textbook of chest radiographs [33], e-learning modules [34–36], a flipped classroom [37], or a library of interesting chest radiographs [38–42]. But what happens during the instruction (that is, how the instruction is actually delivered), along with the skill and enthusiasm of the teacher, is arguably more significant than how the instructional method is labelled [43]. Thus, it is important to decide what is needed from the instruction and to select or design the method of delivery around that.

Our instructional delivery had to meet several criteria. We decided that residents should demonstrate their chest radiograph interpretation to a content expert who could provide immediate feedback [3] and who could incorporate some flexibility into their teaching and feedback. This is because it would have been difficult to create a script or answer key covering all possible interpretations given by the residents, who might not only fail to recognize abnormal findings but also misinterpret normal findings as abnormal. Thus, we did not use large classroom lectures, a textbook, or a self-teaching library file. The chest radiographs had to reliably cover a breadth of cases and have a consistent range of complexity [44]. Thus, we did not rely on ward-based teaching alone, as the variety and complexity of chest radiographs would likely vary between cohorts of learners doing rotations at different times. And, to enhance learning retention, we wanted to facilitate peer-to-peer learning using small groups [45–47].

With this in mind, we chose to deliver instruction in small groups of about 4 to 5 learners. Each resident would interpret a chest radiograph and the instructor would provide feedback on the interpretation. The group could then discuss the interpretation in more detail and ask questions. The instructors would be given a short clinical stem to introduce each chest radiograph, as well as a list of model answers or pertinent teaching points to cover. Each cohort of learners would be given the same chest radiographs to interpret, and the chest radiographs would be selected to illustrate a range of findings of varying complexity.

Next, educators must create learning objectives to guide the learning. Learning objectives should be based on observable behaviours [48–50], stating both the behaviour the learner will be able to perform and the conditions under which the performance is to be shown [51]. Objectives must use action-oriented verbs such as “identify,” “demonstrate,” “list,” “compare,” and “contrast” [52], and learners should be able to realistically achieve these learning objectives within the practical constraints and time limits dictated by the method of instructional delivery.

We created learning objectives that fell into one of three main categories: identification of normal structures (such as “identify normal lung markings and structures”), identification of various abnormal findings (such as “identify bilateral hilar enlargement and list a radiographic differential diagnosis”), and distinguishing between different pathologies which have a similar appearance (such as “identify and contrast the features of a cavity, cyst, and emphysematous bulla”). We incorporated a pattern-based approach to identifying abnormalities, which would aid in generating a radiographic differential diagnosis [53].

3.3. Development

After choosing the method(s) of instructional delivery and creating the learning objectives in the design phase, the development phase consists of creating and organizing the actual learning material that will be used during instruction. Here, educators take the map or overview created in the design phase and think through, step by step, how to practically deliver each feature of this instruction.

We decided to deliver instruction over 8 teaching sessions, each lasting one hour. We included 6 digital chest radiographs per session (totalling 48 chest radiographs) and, for each radiograph, created a short clinical vignette, a list of model answers, and a list of additional teaching points. Each JPEG image of a chest radiograph had a file size of approximately 600 to 750 kB, with a height of 1290 pixels and a width of 1180 pixels, because image quality becomes suboptimal at lower file sizes [54]. All patient identifiers were removed from the images to anonymize them.
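
For educators who prepare such images programmatically, the sketch below shows one way to standardize and anonymize a teaching JPEG. It is a minimal illustration, not our actual workflow: it assumes the Pillow imaging library is available, and the file paths and the prepare_radiograph helper are hypothetical. Note that identifiers burned into the image pixels themselves would still need to be cropped or redacted separately.

```python
# Hypothetical helper for preparing anonymized teaching JPEGs.
# Assumes the Pillow library (pip install pillow); paths are illustrative.
from pathlib import Path

from PIL import Image

TARGET_SIZE = (1180, 1290)               # (width, height) in pixels
MIN_BYTES, MAX_BYTES = 600_000, 750_000  # approximate target file-size range

def prepare_radiograph(src: Path, dst: Path) -> None:
    """Resize to the target dimensions and re-save without metadata."""
    with Image.open(src) as img:
        img = img.convert("L")           # chest radiographs are greyscale
        img = img.resize(TARGET_SIZE)
        # Re-saving as a fresh image drops embedded metadata; identifiers
        # burned into the pixels must be redacted separately.
        img.save(dst, format="JPEG", quality=95)
    size = dst.stat().st_size
    if not MIN_BYTES <= size <= MAX_BYTES:
        print(f"warning: {dst.name} is {size} bytes, outside the target range")

if __name__ == "__main__":
    prepare_radiograph(Path("raw/case01.jpg"), Path("teaching/case01.jpg"))
```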

A checklist was created to ensure that each learning objective would be illustrated by at least one of the chest radiographs and that a variety of abnormalities, such as diffuse lung disease, lobar collapse, multifocal opacities, lung lucencies, and hilar and mediastinal abnormalities, were included. We also developed a difficulty scale, modelled on what others have created [44], to ensure we gathered chest radiographs with varying degrees of difficulty.
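
Such a checklist is easy to express in code; the minimal sketch below flags any objective not yet covered by at least one radiograph. The case IDs, difficulty ratings, and abbreviated objective names are hypothetical examples, not our actual teaching set.

```python
# Minimal sketch of the objective-coverage checklist; all data below are
# hypothetical examples used purely for illustration.
OBJECTIVES = {
    "identify normal lung markings and structures",
    "identify bilateral hilar enlargement",
    "contrast the features of a cavity, cyst, and emphysematous bulla",
}

# Each teaching radiograph is tagged with the objectives it illustrates
# and a difficulty rating (e.g., 1 = easy, 3 = hard).
CASES = {
    "case01": {"objectives": {"identify normal lung markings and structures"},
               "difficulty": 1},
    "case02": {"objectives": {"identify bilateral hilar enlargement"},
               "difficulty": 2},
}

def uncovered(objectives: set[str], cases: dict) -> set[str]:
    """Return the objectives not illustrated by at least one radiograph."""
    covered = set().union(*(case["objectives"] for case in cases.values()))
    return objectives - covered

print(uncovered(OBJECTIVES, CASES))
# -> {'contrast the features of a cavity, cyst, and emphysematous bulla'}
```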

Our department purchased a computer monitor which could rotate from landscape into portrait orientation; this enabled optimal presentation of the JPEG images of the chest radiographs, which usually had greater height than width. We scheduled the sessions in a room with the appropriate size and location. For some of the sessions, we also gathered chest computed tomography images, pathology slides, or lung ultrasound images to further illustrate the radiographic appearance of some lung diseases. Our three instructors were told to follow a similar sequence for each session which usually included presenting the clinical vignette, showing the chest radiograph, selecting one of the learners to interpret it, giving feedback on the interpretation, allowing discussion among the group members, and summarizing with pertinent teaching points.

3.4. Implementation

After thoughtful analysis, design, and development, the instruction must then be implemented or delivered. Educators who wish to implement long, complex courses using a large group of instructors might want to run a beta test first [55]. Here, a few participants (learners and instructors) run through the course slowly before full implementation, providing feedback after each step in the process and working out unforeseen practical difficulties. This process, while thorough, can consume much time. Alternatively, a pilot test can be administered, where part of the instruction is given to a smaller group in real time, just as it would be given to the whole group [56]. Problems in implementation, especially time limitations, can then be discovered and corrected.

As our course of instruction was relatively short and our group of instructors relatively small, we implemented the program of instruction, in its entirety, from the outset.

3.5. Evaluation

Earlier, during the performance analysis, tools to evaluate the effectiveness of the instruction should have already been considered and selected. Either during or after implementing the instruction, these tools should now be utilized to determine if the program of instruction is achieving its intended goal and what, if any, changes are required to improve the program. In addition to the summative feedback, formative feedback can and should be collected throughout the program of instruction to enable incremental improvements.

We surveyed our faculty, who responded that the residents’ interpretation of chest radiographs had significantly improved, particularly in identifying abnormal patterns and findings and in generating a differential diagnosis from patterns of abnormalities.

In the 2 years preceding the new curriculum, we asked residents to rate, on a 4-point scale (i.e., (1) not useful, (2) somewhat useful, (3) useful, and (4) very useful), whether they felt our teaching of chest radiograph interpretation was effective. Of 40 respondents, 2.5% gave a rating of “not useful,” 5% gave a rating of “somewhat useful,” 70% gave a rating of “useful,” and 22.5% gave a rating of “very useful.” In the 2 years after implementing the new curriculum, the evaluation was repeated: of 46 respondents, 41% gave a rating of “useful” and 59% gave a rating of “very useful,” with no respondents giving a rating of “not useful” or “somewhat useful.” Many of the residents’ comments also stated that our teaching of chest radiograph interpretation had become “the best part” of the pulmonary rotation.
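
For readers who wish to verify the arithmetic, the short sketch below reproduces the reported percentages from the underlying respondent counts. The pre-curriculum counts (1, 2, 28, and 9 of 40) follow exactly from the percentages; the post-curriculum counts (19 and 27 of 46) are inferred from the rounded percentages and should be treated as an assumption.

```python
# Minimal sketch reproducing the survey percentages above from raw counts.
# Post-curriculum counts (19 and 27 of 46) are inferred from the rounded
# percentages reported in the text, so they are an assumption.
LABELS = ["not useful", "somewhat useful", "useful", "very useful"]
before = {"not useful": 1, "somewhat useful": 2, "useful": 28, "very useful": 9}
after = {"not useful": 0, "somewhat useful": 0, "useful": 19, "very useful": 27}

def report(counts: dict[str, int]) -> None:
    total = sum(counts.values())
    for label in LABELS:
        pct = 100 * counts[label] / total
        print(f"{label:>15}: {pct:.1f}% ({counts[label]}/{total})")

report(before)  # 2.5%, 5.0%, 70.0%, 22.5% of 40 respondents
report(after)   # 0.0%, 0.0%, 41.3%, 58.7% of 46 respondents
```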

In addition to the formal feedback, instructors and residents provided formative feedback throughout the year. For example, learners pointed out that the viewing angle of our monitor was limited, such that image quality seemed suboptimal unless the viewer was seated directly (or almost directly) in front of the monitor. Thus, we eventually switched to using a high-fidelity data projector to cast the radiographic images onto a screen. We also made minor adjustments to the guides we gave the instructors, emphasizing, deemphasizing, or rephrasing certain material to enhance learning.

An important method of evaluating a program’s educational effectiveness is to objectively assess how well the learners have acquired the knowledge, skills, or attitudes being taught. To this end, our residency program plans to review the residents’ performance on objective structured clinical exams (OSCEs) before and after implementing the new curriculum.

4. Conclusion

This paper illustrates how the ADDIE model can help design a curriculum to teach chest radiograph interpretation. Other instructional design models, such as those by Gagne et al. [57] and Thomas and Kern [58], could also help educators systematically construct a curriculum. The advantage of the ADDIE model is that it is simple to use and can be applied to curricula that teach knowledge, skills, or attitudes. But regardless of the model used, a structured, comprehensive approach to curriculum development will aid educators in meeting the needs of their learners. Future studies could track whether the educational intervention leads to improvements in patient outcomes.

Competing Interests

The author declares that they have no competing interests.