Research Article | Open Access

Jean-Louis Bourges, Isabelle Boutron, Dominique Monnet, Antoine P. Brézin, "Consensus on Severity for Ocular Emergency: The BAsic SEverity Score for Common OculaR Emergencies [BaSe SCOrE]", Journal of Ophthalmology, vol. 2015, Article ID 576983, 9 pages, 2015.

Consensus on Severity for Ocular Emergency: The BAsic SEverity Score for Common OculaR Emergencies [BaSe SCOrE]

Academic Editor: Paolo Lanzetta
Received: 06 May 2015
Accepted: 07 Jul 2015
Published: 30 Jul 2015


Purpose. To weigh ocular emergency events according to their severity. Methods. A group of ophthalmologists and researchers rated the severity of 86 common ocular emergencies using a Delphi consensus method. Ratings were given on a 7-point scale in a first-round survey. The experts were then provided with the median and quartiles of the ratings of each item and reevaluated the severity levels in light of the group’s first-round responses. The final severity rating for each item was the median rating provided by the last Delphi round. Results. We invited 398 experts, and 80 (20%) of them, from 18 different countries, agreed to participate. A consensus was reached in the second round, completed by 24 experts (43%). Severity ranged from subconjunctival hemorrhage (median = 1, Q1 = 0; Q3 = 1) to penetrating eye injuries collapsing the eyeball with intraocular foreign body, and panophthalmitis with infection following surgery (median = 5, Q1 = 5; Q3 = 6). The ratings did not differ according to the practice of the experts. Conclusion. These ratings could be used to assess the severity of ocular emergency events, to serve in composite algorithms for emergency triage, and to standardize research in ocular emergencies.

1. Introduction

The modern management of ocular emergencies requires growing fields of expertise to implement appropriate triage of patients [1–6]. A correlation between the final visual outcome and prompt access to specialized care has been reported [7–10]. One of the most popular tools in Emergency Eye Clinics (EEC) relies on the first level of decision made by an Ophthalmic Nurse Practitioner (ONP) [5, 11, 12]. The patient referred to an EEC may eventually be oriented to an ophthalmologist within an arbitrarily defined time frame. New South Wales (NSW) Health and the NSW Statewide Ophthalmology Service of Sydney have released recommendations for managing ocular emergencies, aiming at implementing consensual procedures and care [13]. The first formal eye-dedicated triaging system, the RESCUE system, was proposed in 2007 by Rossi et al. [4]. It consists of a color-coding scheme that discriminates cases through three grades of severity and promptly identifies those who might eventually require hospitalization. However, patients are also commonly referred to EEC by a first-line health care practitioner for further specialized eye care and appropriate treatment. At that point, an identification of the ocular condition is usually tentatively proposed, or at least suspected. Triage efficiency would therefore benefit dramatically from a basic but consensual tool scoring the severity of each ocular emergency presenting at the emergency room. In addition, such a tool would help manage the downstream flow of eye emergencies. Finally, a consensus tool scoring the severity of eye emergency items could eventually serve research purposes by weighing the severity of rather heterogeneous items in clinical trials.

We were therefore driven to reach a consensus based on the severity of the most common ocular conditions seen at EEC, aiming at attributing to each of them a simple clinical score for severity.

2. Methods

We built the BAsic SEverity Score for Common OculaR Emergencies [BaSe SCOrE] on the basis of a comprehensive survey conducted with a panel of experts in the field of ophthalmology and/or emergency eye care.

2.1. Study Design

The Delphi method was used to synthesize expert opinions and achieve consensus [14, 15]. Briefly, a panel of experts answered the BaSe SCOrE survey and subsequently received feedback on the “group response,” after which the process was repeated to reach a majority agreement. The Delphi consensus relies on an iterative, anonymous method with controlled feedback and statistical aggregation of group responses [16, 17].

2.2. Steering Committee

A steering committee was formed, including three ophthalmologists (M.D., Ph.D.) (Jean-Louis Bourges, Dominique Monnet, and Antoine P. Brézin), one epidemiologist (M.D., Ph.D.) (Isabelle Boutron), and two nurses (Catherine Issad and Michel Alvarez), all participating in the ocular emergency health care system.

2.3. Emergency Item Identification

Ocular emergency items to be proposed in the survey were listed on the basis of the relevant literature. The first author reviewed the literature published before August 2012 and available through PubMed, the Cochrane Library, and Google Scholar. These sources were searched by combining the keyword “eye” with emergency-related terms, restricted to French- or English-language publications; the asterisk was used as a wildcard. The relevant literature was selected on the basis of paper titles and abstracts. Every generic term pointing to an ocular condition related to emergency care was considered a possible item, captured and sorted into four main categories: anterior segment items, posterior segment items, traumatic items, and items linked to complications following ocular surgery. The list of items was completed, amended, and eventually validated by the steering committee. Finally, the steering committee approved a list of 86 items, classified as 21 anterior segment items, 26 posterior segment items, 25 traumatic items, and 14 items related to complications after ocular surgery. The list was then proposed in the survey, where the formulation of items was open to free comments from participating experts to ensure the relevance and reliability of the final scoring and to strengthen the consensus.

2.4. Selection of the Panel of Experts

We systematically selected all the email addresses of the corresponding authors of the original articles available in full text and obtained as described above. Further, we systematically selected all corresponding authors with an email address available from an original article published between 2011 and 2012 in six scientific journals (Ophthalmology, Investigative Ophthalmology and Visual Science, Archives of Ophthalmology, British Journal of Ophthalmology, American Journal of Ophthalmology, and Journal of Cataract and Refractive Surgery), rated within the top ten journals for their impact on the field of ophthalmology. We included Ph.D., O.D., or nonclinician health care practitioners in the panel of experts if they were involved in the field of ocular emergency, referring patients to EEC or participating in dedicated research.

2.5. Delphi Consensus Method

We planned to conduct at least two iterative rounds by email, using the SurveyMonkey tool. Additional rounds could be performed until a consensus was reached. Email addresses for the selected experts were extracted from their publications, and they were invited to participate online in the Delphi process. They were sent a standardized information package, including a synopsis of the study and a description of the Delphi process. Experts who did not respond were solicited again by an email reminder sent four days, two weeks, and four weeks after the initial invitation, before being considered permanently unwilling to participate. Experts who declined our invitation were asked to recommend the best expert in the field of ocular emergency. Recommended persons were invited following an identical procedure. The purpose and the methodology of the survey were further detailed for the experts who agreed to participate by providing remote access to a dedicated website. The opportunity to decline to participate permanently was offered to the experts.

In our first Delphi round, each member of the panel evaluated the severity of the 86 items on a 7-point scale (see Supplementary Material available online). For each event, experts were asked the following question: “According to your personal expertise, please rate the severity of the following ocular conditions seen in the emergency room, using the following seven-point ordinal scale.” The scale was anchored at “not severe at all” (0) and “maximal (currently untreatable)” (6). Severity was left to each expert’s own understanding. Within the survey, a dedicated email address and free writing space were offered to harvest the experts’ comments and requests. Thus, the experts could freely comment on or suggest items, and items could be modified in the next round with the approval of the steering committee. At the end of the first round, the steering committee prepared a synthesis of the collected data.

In the second round, the experts were informed of each median rating (Q1–Q3) collected from the first round. Considering the same event, the same question, and the same rating scale, the experts were asked to rate each event again by confirming their first rating or modifying it in the light of the first-round results.
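The feedback step between rounds can be sketched as follows. The ratings below are hypothetical, and the use of Python's statistics module is an illustrative choice, not the study's actual tooling:

```python
from statistics import median, quantiles

# Hypothetical round-1 ratings on the 0-6 scale, keyed by item.
# Neither the scores nor the panel size reflect the real survey data.
round1_ratings = {
    "Subconjunctival hemorrhage": [0, 1, 1, 0, 1, 2, 1, 1],
    "Acute angle-closure glaucoma": [4, 5, 4, 4, 5, 3, 4, 5],
}

def round_summary(ratings):
    """Median and [Q1; Q3] fed back to the experts before the next round."""
    summary = {}
    for item, scores in ratings.items():
        q1, _, q3 = quantiles(scores, n=4, method="inclusive")  # quartiles
        summary[item] = (median(scores), q1, q3)
    return summary
```

Each expert would then see the group's median and quartiles next to each item before confirming or revising his or her rating.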

2.6. Data Analysis

The answers were collected, reviewed, and analyzed by the steering committee (Jean-Louis Bourges, Isabelle Boutron, Dominique Monnet, Antoine P. Brézin, Catherine Issad, and Michel Alvarez). We applied a last observation carried forward (LOCF) strategy for missing data after the first round: if an expert did not answer the second round, his or her first-round answer was retained. Consensus within the group was defined as homogeneity or consistency of opinion among the experts. Assuming that each item was characterized by a constant but unknown severity, the experts’ ratings could be considered multiple measures of this characteristic. The median rating and the first and third quartiles [Q1–Q3] were established for each individual item across the whole group.

The final weight for each event was the median rating obtained during the last Delphi round. All analyses were performed with the R statistical system (version 2.10.0).
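A minimal sketch of the LOCF rule and the final weighting, assuming hypothetical expert identifiers and ratings for a single item:

```python
from statistics import median

# Hypothetical ratings for one item; expert names are illustrative.
round1 = {"expert_a": 5, "expert_b": 4, "expert_c": 5, "expert_d": 4}
round2 = {"expert_a": 5, "expert_c": 4}  # expert_b and expert_d skipped round 2

# LOCF: a non-responder keeps his or her round-1 rating.
carried = {expert: round2.get(expert, r1) for expert, r1 in round1.items()}

# The final weight for the item is the median of the last Delphi round.
final_weight = median(carried.values())  # → 4.0
```

The dictionary lookup with a default makes the carry-forward rule explicit: round-2 answers override round-1 answers only where they exist.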

3. Results

3.1. Participants

We invited 398 experts to participate (Figure 1). A fifth of them took the first round of the survey (n = 80), answering from 18 different countries. Table 1 displays the preferred practice of the participating experts. The experts completing the first round reported an M.D. (67%), M.D. and Ph.D. (25%), or Ph.D. (4%) professional degree; the other reported degrees were optometrist (O.D.) and various other scientific degrees. Fifty-six experts scored all 86 items in the first round, and all experts scored all items in round 2. Twenty-four experts (43%) completed the second round.

Table 1: Characteristics of the participating experts.

Professional degree: M.D. 67%, M.D. and Ph.D. 25%, Ph.D. 4%; other reported degrees included O.D. and various scientific degrees.

Country of the professional institution: 18 countries.

Preferred practice (79 answers, 99%)
 Cornea and external disease: 19 (24%)
 Vitreoretinal diseases: 19 (24%)
 Cataract and refractive surgery: 9 (11%)
 Pediatric ophthalmology: 6 (8%)
 Ophthalmic plastic surgery: 5 (6%)

Professional experience (80 answers, 100%)
 Less than a year: 4 (5%)
 1 to 5 years: 24 (30%)
 6 to 10 years: 12 (15%)
 11 to 20 years: 21 (26%)
 More than 20 years: 16 (20%)
 Not answered: 3 (4%)

3.2. BaSe SCOrE Consensus

Table 2 reports the consensus reached by the experts, who ranked the global severity of each item within the four proposed categories. For anterior segment items, the experts considered noninfectious conjunctivitis the least severe event (median = 1 [Q1 = 1; Q3 = 1]), whereas perforating ulcer of the cornea was ranked as the most concerning item (median = 5 [Q1 = 5; Q3 = 5]). Subconjunctival hemorrhage, classified as a posttraumatic event, was considered the least severe item overall (median = 1 [Q1 = 0; Q3 = 1]). Among posterior segment items, ophthalmic migraine and vitreous floaters were both ranked as the least severe (median = 1 [Q1 = 1; Q3 = 2]), although slightly more severe than noninfectious conjunctivitis. Retinal artery occlusion was considered the most severe item (median = 5 [Q1 = 4; Q3 = 6]), ahead of retinal detachment and acute binocular diplopia with neurological symptoms (median = 5 [Q1 = 4; Q3 = 5]). Among traumatic items, penetrating eye injuries either with an intraocular foreign body (median = 5 [Q1 = 5; Q3 = 6]) or without one (median = 5 [Q1 = 5; Q3 = 5]) were ranked highest, as was panophthalmitis among the items following intraocular surgery. The second round modified only 3 median scores, with the quartiles remaining unaltered: subconjunctival hemorrhage (0.0 to 1.0), retinal detachment (macula off) (5.0 to 4.0), and ophthalmic migraine (2.0 to 1.0). Figure 2 illustrates how the severity ranking was distributed among the four main categories of items. No median score of 0 or 6 was obtained. The distribution of median scores appeared Gaussian-shaped.

Category | Item | Opinion submitted: yes (n) | Opinion submitted: no (n) | Median weight | Quartiles [1st; 3rd]

Anterior segment items
Conjunctivitis | Noninfectious | 74 | 6 | 1 | [1.0; 1.0]
Keratitis | Superficial punctate keratitis | 73 | 7 | 1 | [1.0; 1.0]
Contact lens | Mechanical complication (dislocation, vacuum, etc.) | 74 | 6 | 1 | [1.0; 2.0]
Pterygia | Inflamed pterygia | 74 | 6 | 1 | [1.0; 2.0]
Conjunctivitis | Viral conjunctivitis | 74 | 6 | 2 | [1.0; 2.0]
Scleral and episcleral inflammation | Episcleritis (regardless of the cause) | 74 | 6 | 2 | [1.0; 2.0]
Corneal ulcer (not infected) | Unperforated—isolated epithelial defect | 73 | 7 | 2 | [1.0; 3.0]
Conjunctivitis | Bacterial conjunctivitis | 74 | 6 | 2 | [2.0; 2.0]
Keratitis | Noninfectious interstitial keratitis | 72 | 8 | 2 | [2.0; 3.0]
Anterior acute uveitis | First episode (regardless of the cause) | 72 | 8 | 2 | [2.0; 3.0]
Contact lens | Infectious keratitis without severity factor | 74 | 6 | 3 | [2.0; 3.0]
Anterior acute uveitis | Iterative (regardless of the cause) | 73 | 7 | 3 | [2.0; 3.0]
Scleral and episcleral inflammation | Scleritis (regardless of the cause) | 74 | 6 | 3 | [2.0; 3.0]
Lacrimal ducts | Acute dacryocystitis | 73 | 7 | 3 | [2.0; 3.0]
Corneal ulcer (not infected) | Unperforated, with stromal involvement | 74 | 6 | 3 | [2.0; 4.0]
Keratitis | Infectious keratitis | 73 | 7 | 3 | [3.0; 4.0]
Ocular surface burn | <9 clock hours of limbus and <75% of conjunctiva | 73 | 7 | 3 | [3.0; 4.0]
Contact lens | Infectious keratitis with severity factor(s) | 74 | 6 | 4 | [4.0; 5.0]
Glaucoma | Acute angle-closure glaucoma | 73 | 7 | 4 | [4.0; 5.0]
Glaucoma | Neovascular glaucoma | 73 | 7 | 4 | [4.0; 5.0]
Corneal ulcer (not infected) | Perforating ulcer | 74 | 6 | 5 | [5.0; 5.0]

Posterior segment items
Nonspecific visual symptoms | Ophthalmic migraine | 70 | 10 | 1 | [1.0; 2.0]
Nonspecific visual symptoms | Vitreous floaters | 70 | 10 | 1 | [1.0; 2.0]
Choroidal new vessels (CNV) or direct complication | Peripheral (exclusively) | 70 | 10 | 2 | [2.0; 3.0]
Macula | Macular edema | 71 | 9 | 3 | [2.0; 3.0]
Chorioretinal toxoplasmosis (active phase) | Peripheral | 71 | 9 | 3 | [2.0; 3.0]
Pupil disorders | No oculomotor disturbance | 71 | 9 | 3 | [2.0; 3.0]
Retinal peripheral tear | No detached edges, no RD associated | 70 | 10 | 3 | [2.0; 3.0]
Vitreous hemorrhage | Isolated/unidentified cause | 71 | 9 | 3 | [2.0; 4.0]
Acute binocular diplopia | Without neurological symptoms | 71 | 9 | 3 | [2.0; 4.0]
Macula | Macular hole | 70 | 10 | 3 | [2.0; 4.0]
Optic neuritis (regardless of the cause) | Without neurological symptoms | 71 | 9 | 3 | [3.0; 4.0]
Retinal vein occlusion | Branch (BRVO) | 70 | 10 | 3 | [3.0; 4.0]
Retinal peripheral tear | Detached edges, no RD associated | 70 | 10 | 3 | [3.0; 4.0]
Posterior segment inflammation | Vitritis (unidentified etiology) | 70 | 10 | 3 | [3.0; 4.0]
Posterior segment inflammation | Retinal vasculitis | 71 | 9 | 4 | [3.0; 4.0]
Retinal artery occlusion | Branch (BRAO) | 70 | 10 | 4 | [3.0; 4.0]
Choroidal new vessels (CNV) or direct complication | Subfoveal | 70 | 10 | 4 | [3.0; 4.0]
Optic neuritis (regardless of the cause) | With associated neurological symptoms | 71 | 9 | 4 | [3.0; 5.0]
Pupil disorders | Associated with oculomotor disturbance | 71 | 9 | 4 | [3.0; 5.0]
Retinal vein occlusion | Central (CRVO) | 70 | 10 | 4 | [3.0; 5.0]
Ischemic optic neuropathy | Isolated | 69 | 11 | 4 | [3.0; 5.0]
Retinal detachment (RD) | Macula off | 71 | 9 | 4 | [4.0; 5.0]
Chorioretinal toxoplasmosis (active phase) | Foveal or peripapillary | 71 | 9 | 4 | [4.0; 5.0]
Retinal detachment (RD) | Macula on | 71 | 9 | 5 | [4.0; 5.0]
Acute binocular diplopia | With associated neurological symptoms | 71 | 9 | 5 | [4.0; 5.0]
Retinal artery occlusion | Central (CRAO) | 69 | 11 | 5 | [4.0; 6.0]

Traumatic items
Conjunctiva | Subconjunctival hemorrhage | 71 | 9 | 1 | [0.0; 1.0]
Conjunctiva | Conjunctival wound without scleral exposure | 71 | 9 | 1 | [1.0; 2.0]
Corneal injury | Corneal foreign body away from the axis | 71 | 9 | 2 | [1.0; 2.0]
Other injuries | Eyelid skin injury | 71 | 9 | 2 | [1.0; 2.0]
Conjunctiva | Conjunctival wound with scleral exposure | 71 | 9 | 2 | [2.0; 3.0]
Corneal injury | Nonperforating laceration away from the axis | 71 | 9 | 2 | [2.0; 3.0]
Commotio retinae | Peripheral posttraumatic retinopathy | 70 | 10 | 2 | [2.0; 3.0]
Corneal injury | Corneal foreign body in the visual axis | 71 | 9 | 3 | [2.0; 3.0]
Blunt injuries with impaired vision | Hyphema (isolated) | 71 | 9 | 3 | [2.0; 3.0]
Other injuries | Eyelid-margin injury | 71 | 9 | 3 | [2.0; 3.0]
Other injuries | Eyelid levator injury | 71 | 9 | 3 | [3.0; 3.0]
Corneal injury | Nonperforating laceration in the visual axis | 71 | 9 | 3 | [3.0; 4.0]
Commotio retinae | Posttraumatic maculopathy | 70 | 10 | 3 | [3.0; 4.0]
Blunt injuries with impaired vision | Iridodialysis (avulsion of the iris root) | 71 | 9 | 3 | [3.0; 4.0]
Other injuries | Tear ducts injury | 71 | 9 | 3 | [3.0; 4.0]
Corneal injury | Perforating but self-sealing laceration | 70 | 10 | 4 | [3.0; 4.0]
Blunt injuries with impaired vision | Choroidal rupture | 71 | 9 | 4 | [3.0; 4.0]
Blunt injuries with impaired vision | Lens dislocation | 71 | 9 | 4 | [3.0; 5.0]
Penetrating eye injury | Without intraocular foreign body, eye not collapsed | 71 | 9 | 4 | [4.0; 5.0]
Corneal injury | Perforating and leaking laceration | 71 | 9 | 5 | [4.0; 5.0]
Penetrating eye injury | With intraocular foreign body, eye not collapsed | 71 | 9 | 5 | [4.0; 5.0]
Blunt injuries with impaired vision | Scleral rupture | 71 | 9 | 5 | [4.0; 5.0]
Other injuries | Optic nerve injury | 71 | 9 | 5 | [4.0; 5.0]
Penetrating eye injury | Without intraocular foreign body, eye collapsed | 71 | 9 | 5 | [5.0; 5.0]
Penetrating eye injury | With intraocular foreign body, eye collapsed | 71 | 9 | 5 | [5.0; 6.0]

Items related to complications following ocular surgery
Acute pain | With normal postoperative course | 69 | 11 | 2 | [2.0; 3.0]
High intraocular pressure (except AACG) | 25 < IOP < 35 mmHg | 69 | 11 | 2 | [2.0; 3.0]
Suture-related complication | Noninfectious suture-related complication | 70 | 10 | 2 | [2.0; 3.0]
High intraocular pressure (except AACG) | IOP ≥ 35 mmHg | 67 | 13 | 3 | [3.0; 4.0]
Laser in situ keratomileusis (LASIK) | Complication related to the flap | 70 | 10 | 3 | [3.0; 4.0]
Keratoplasty (corneal grafting procedure) | Suture-related complication (no infection) | 70 | 10 | 3 | [3.0; 4.0]
Intraocular hemorrhage | Within 3 days after the surgery | 70 | 10 | 4 | [3.0; 4.0]
Keratoplasty (corneal grafting procedure) | Acute rejection | 70 | 10 | 4 | [3.0; 5.0]
Mechanical complication following surgery | Reopening of the surgical wound | 68 | 12 | 4 | [3.0; 5.0]
Suture-related complication | Infectious suture-related complication | 69 | 11 | 4 | [3.0; 5.0]
Mechanical complication following surgery | Collapsed eyeball (athalamia) | 61 | 19 | 4 | [4.0; 5.0]
Keratoplasty (corneal grafting procedure) | Graft dislocation | 70 | 10 | 5 | [4.0; 5.0]
Infection following surgery | Blebitis | 70 | 10 | 5 | [4.0; 5.0]
Infection following surgery | Panophthalmitis | 70 | 10 | 5 | [5.0; 6.0]

4. Discussion

The BAsic SEverity Score for Common OculaR Emergencies [BaSe SCOrE] is, to our knowledge, the first attempt to rate ocular emergency items for severity by expert consensus. No unexpected or surprising rating emerged from the BaSe SCOrE. Indeed, the consensual scores obtained from the Delphi method faithfully reflect day-to-day ophthalmological practice, which includes eye emergencies. It seems obvious to every ophthalmologist that, regarding severity, conjunctivitis should be rated lower than any acute anterior uveitis. Yet there had so far been no tool to quantify the difference between these two items. The present score offers ophthalmological staff members, health care practitioners, and researchers the opportunity to quantify the ocular severity of frequent emergency conditions reproducibly, reliably, and most probably accurately. We foresee the core utility of the BaSe SCOrE both for emergency room staff triaging patients suspected of a specific ocular condition and for standardizing research in ocular emergencies. How this score could be integrated into the triage process remains to be defined. Although essential, the BaSe SCOrE remains a preliminary step.

It may also be extremely difficult to discriminate severity among miscellaneous items, such as branch retinal vein occlusion (BRVO) and a perforating but self-sealing laceration of the cornea. Not only are such items medically unrelated, but the actual severity of ocular conditions could at best be approached at the subspecialty level. According to the BaSe SCOrE consensus, the BRVO and the corneal perforation both fall into the same quartile range, but the former is given a median score of 3, while the latter scores 4. Thus, the global approach of the consensus demonstrates its ability to weigh the severity of items in situations where physicians would otherwise encounter difficulties assessing it.

Throughout the survey, some clinical items were deliberately described in vague or imprecise terms. This choice was made to best approximate each situation encountered in the emergency room.

Similarly, we chose not to define the word “severity” precisely during the consensus process. Severity is a broad term that encompasses several concepts, including the decisive role of immediate care, every foreseeable outcome, the invasiveness of treatments, the room for worsening, contagiousness, pain, and associated injuries. By taking the survey, each expert had the opportunity to consider every relevant notion when selecting his or her own score. By scoring ocular emergency items in relation to each other and by using severity as a term with a wide meaning, we applied a heuristic method. We believe this generates optimal results for clinical classification, making the BaSe SCOrE more relevant to real-life practice and more realistic to implement at EEC.

Managing an ocular emergency, like managing other emergencies, usually requires three main steps: immediate triage, patient examination, and downstream orientation. First-line triage mainly relies on the patient’s medical history and presenting symptoms. The anamnesis is widely collected by nonmedical staff, who optionally refer patients to the specialist, usually following a formalized protocol. In 2006 in Florida, less than half of the ophthalmologists answered emergency department calls [18]. Consequently, patients are oriented within a preset deadline by various health practitioners, who may have a medical degree or belong to the paramedic staff, with various levels of experience [19]. We postulate that the different health care providers would interact in a more rationalized, secure, and coherent fashion by using a standard severity score. Communication is now greatly enhanced by the use of smartphones, tablets, and laptops running medical applications [20]. On mobile devices, the BaSe SCOrE could improve the efficiency of triage by giving health care practitioners a consensual tool to standardize their decision-making process.
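As an illustration only: a mobile triage aid could map a suspected condition to its BaSe SCOrE median and a disposition bucket. The median values below come from Table 2, but the choice of conditions, the bucket names, and the thresholds are hypothetical and not part of the study:

```python
# BaSe SCOrE medians for a few items (from Table 2); the triage
# thresholds and bucket labels below are purely illustrative.
BASE_SCORE = {
    "Subconjunctival hemorrhage": 1,
    "Acute angle-closure glaucoma": 4,
    "Panophthalmitis": 5,
}

def triage(condition: str) -> str:
    """Map a suspected condition to a hypothetical disposition bucket."""
    score = BASE_SCORE[condition]
    if score >= 5:
        return "immediate ophthalmologist"
    if score >= 3:
        return "urgent referral"
    return "routine pathway"
```

Any real deployment would need validated thresholds and integration with an existing triage protocol, as the authors note.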

The eye-dedicated RESCUE triage [4] used color coding to identify the acceptable delay for patients to have access to eye care. The triaging system does not integrate a quantified tool to evaluate the severity of ocular items. To be implemented reproducibly worldwide, this system as well as other ocular emergency triage systems could be fine-tuned by integrating the quantitative rating for severity of common emergency items generated by the BaSe SCOrE.

The present consensus displays strengths and limitations. The large number of experts willing to participate in the process makes the scores fairly reliable. The extended range of the experts’ affiliations and participating countries adds to the robustness of the BaSe SCOrE, which might fit any EEC worldwide, independently of the specificities of local health care systems. The consensus form gave experts the opportunity to provide open comments. However, only two comments were gathered throughout the survey process. This could indicate that the BaSe SCOrE items and groups of items were accurate and meaningful to a wide range of eye care professionals from various countries. On the other hand, participation in the second round was limited. This could have resulted from the experts’ agreement with the first-round results, provided at the time of the second survey: the experts might have felt no need to contribute further, judging that they would not change their minds on the basis of the first-round scores. It may also have resulted from the experts’ weariness of taking time from their busy schedules to complete surveys. Finally, the BaSe SCOrE still needs to be validated in actual EEC practice with a test sample of patients, correlating the scores with real clinical outcomes to assess its clinical accuracy.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Authors’ Contribution

Conception and design were the responsibility of Jean-Louis Bourges and Isabelle Boutron. Jean-Louis Bourges and Isabelle Boutron were responsible for analysis and interpretation. Data collection was carried out by Jean-Louis Bourges. The paper was written by Jean-Louis Bourges and Isabelle Boutron. Critical revision of the paper was done by Jean-Louis Bourges. Final approval of the paper was by Jean-Louis Bourges, Isabelle Boutron, Dominique Monnet, and Antoine P. Brézin. Statistical analysis was carried out by Isabelle Boutron. Funding was obtained by Antoine P. Brézin. Jean-Louis Bourges had overall responsibility.


Acknowledgments
The authors would like to thank warmly the panel of experts who agreed to participate in the Delphi process and made the consensus possible: Penelope J. Allen, Sofia Androudi, Florent Aptel, Itay Benzion, Sanjoy Bhattacharya, Brian A. Bonanni, Christophe Chicquet, Antony Clark, Sophie Deng, P. Dickerson, Murat Dogru, Renée du Toit, K. David Epley, Céline Faure, Eric Gabison, Prashant Garg, Birgitte Haargaard, Julia A. Haller, Pedram Hamrah, Mingguang He, Amy L. Hennessy, Lawrence W. Hirst, Fung Rong Hu, Mark Juzych, Justin Kanoff, Alan Kreiger, Hsi-Kung Kuo, Davide Lazzeri, Ralph Levinson, Lance Liu, Tushar M. Ranchod, Heloisa Maestrini, Ezra Maguen, Gerald McGwin, Rachna Meel, Naik Milind, Kevin Miller, Danny Mitry, Inger Montfoort, Yvonne Ou, Rey Pangilinan, Noortje I. Regensburg, Gilles Renard, Tommaso Rossi, Yamanashi Sakurada, Arnaud Sauer, Farideh Sharifipour, Carol Shields, Scott Smith, Yining Strube, Andrew Symons, Irena Tsui, Christophe Wiaux, Kirk Wilhelmus, and Robert P. I. Wisse. They also strongly acknowledge nurses Catherine Issad and Michel Alvarez for their active participation in the steering committee.

Supplementary Materials

The BaSe SCOrE survey consisted of an online form sent to the experts. Participating experts completed the first round of the Delphi process, provided in Appendix 1. A second online form (Appendix 2) was sent to the experts, based on the first form augmented with the median scores and quartiles resulting from the first round.



References

  1. D. I. Flitcroft, M. Westcott, R. Wormald, and R. Touquet, “Who should see eye casualties?: a comparison of eye care in an accident and emergency department with a dedicated eye casualty,” Journal of Accident and Emergency Medicine, vol. 12, no. 1, pp. 23–27, 1995.
  2. J. Marsden, “An evaluation of the safety and effectiveness of telephone triage as a method of patient prioritization in an ophthalmic accident and emergency service,” Journal of Advanced Nursing, vol. 31, no. 2, pp. 401–409, 2000.
  3. S. Fenton, E. Jackson, and M. Fenton, “An audit of the ophthalmic division of the accident and emergency department of the Royal Victoria Eye and Ear Hospital, Dublin,” Irish Medical Journal, vol. 94, no. 9, pp. 265–266, 2001.
  4. T. Rossi, B. Boccassini, M. Iossa, M. G. Mutolo, G. Lesnoni, and P. A. Mutolo, “Triaging and coding ophthalmic emergency: the Rome Eye Scoring System for Urgency and Emergency (RESCUE): a pilot study of 1,000 eye-dedicated emergency room patients,” European Journal of Ophthalmology, vol. 17, no. 3, pp. 413–417, 2007.
  5. J. C. Buchan, A. Ashiq, N. Kitson, J. Dixon, A. Cassels-Brown, and J. A. Bradbury, “Nurse specialist treatment of eye emergencies: five year follow up study of quality and effectiveness,” International Emergency Nursing, vol. 17, no. 3, pp. 149–154, 2009.
  6. D. Lindfield and R. Das-Bhaumik, “Emergency department management of penetrating eye injuries,” International Emergency Nursing, vol. 17, no. 3, pp. 155–160, 2009.
  7. N. J. Collignon, “Emergencies in glaucoma: a review,” Le Bulletin de la Societe Belge d’Ophtalmologie, no. 296, pp. 71–81, 2005.
  8. M. L. Durand, “Bacterial endophthalmitis,” Current Infectious Disease Reports, vol. 11, pp. 283–288, 2009.
  9. A. K. Rudkin, A. W. Lee, and C. S. Chen, “Central retinal artery occlusion: timing and mode of presentation,” European Journal of Neurology, vol. 16, no. 6, pp. 674–677, 2009.
  10. K. Mercieca and N. P. Jones, “Treatment of acute anterior uveitis in the community, as seen in an emergency eye centre. A lesson for the general practitioner?” European Journal of General Practice, vol. 18, no. 1, pp. 26–29, 2012.
  11. J. Marsden, “The nurse practitioner role in a United Kingdom ophthalmic accident and emergency department—10 years of progress,” Insight—The Journal of the American Society of Ophthalmic Registered Nurses, vol. 24, no. 2, pp. 45–50, 1999.
  12. B. J. Kirkwood, K. Pesudovs, R. S. Loh, and D. J. Coster, “Implementation and evaluation of an ophthalmic nurse practitioner emergency eye clinic,” Clinical and Experimental Ophthalmology, vol. 33, no. 6, pp. 593–597, 2005.
  13. W. Sehu, The Eye Emergency Manual: An Illustrated Guide, Department of Health, 2009.
  14. N. Dalkey and O. Helmer, “An experimental application of the Delphi method to the use of experts,” Management Science, vol. 9, no. 3, pp. 458–467, 1963.
  15. A. L. Delbecq, A. H. van de Ven, and D. H. Gustafson, Group Techniques for Program Planning: A Guide to Nominal Group and Delphi Processes, Scott, Foresman, Glenview, Ill, USA, 1975.
  16. N. C. Dalkey, The Delphi Method: An Experimental Study of Group Opinion, RAND Corporation, 1969.
  17. A. Fink, J. Kosecoff, M. Chassin, and R. H. Brook, “Consensus methods: characteristics and guidelines for use,” The American Journal of Public Health, vol. 74, no. 9, pp. 979–983, 1984.
  18. M. T. Witmer and C. E. Margo, “Analysis of ophthalmology workforce and delivery of emergency department eye care in Florida,” Archives of Ophthalmology, vol. 127, no. 11, pp. 1522–1527, 2009.
  19. M.-J. Gallagher, J. Mathison, and M. McKnight, “Autonomous emergency ophthalmic nursing: a pilot educational module,” British Journal of Nursing, vol. 17, no. 17, pp. 1120–1123, 2008.
  20. R. K. Lord, V. A. Shah, A. N. San Filippo, and R. Krishna, “Novel uses of smartphones in ophthalmology,” Ophthalmology, vol. 117, no. 6, p. 1274, 2010.

Copyright © 2015 Jean-Louis Bourges et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
