Research Article | Open Access
Evaluation of Safety Performance in a Construction Organization in India: A Study
In India, the construction industry is the second largest employer after agriculture, with about 31 million people employed in the construction sector. The Indian construction industry is labour intensive, comprising semi-skilled and unskilled workers. The measurement and evaluation of an organization's health and safety (H&S) performance at work mainly aims at providing information about the current situation and the progress of the strategies, processes, and activities that an organization adopts to keep H&S hazards under control. The construction industry needs a new paradigm for measuring safety performance on construction sites: a proactive approach rather than one that depends only on reactive data. A proactive approach can provide essential feedback on performance before incidents occur. This paper presents proactive safety measures to eliminate the unsafe acts and conditions that contribute to accidents and injuries, using a safety sampling survey; overall safety performance was evaluated through the interobserver reliability of internal and external safety auditors. The study was conducted in a large construction organization, certified under OHSAS 18001 and involved in the construction of high-rise buildings in India.
The construction industry has often been criticised for its poor performance in health and safety. Brown commented that the manner in which safety is managed in the construction industry has not radically changed over the years [1]. In the manufacturing sector, the working environment and the work methods remain essentially unchanged from day to day. On a construction site, by contrast, the working environment, the work to be done, and the composition of the workforce change continuously. This continuous change generates greater risk in construction processes, potentially exposing workers to unforeseen and unaccustomed hazards. Anderson argued that several factors in the construction industry seem to conspire to create “barriers” to significant widespread safety improvement [2–4]. These include shortcomings in the present general level of health and safety education; general apathy and complacency towards health and safety issues; lack of quality and commitment of site management to give site safety issues the priority they need and deserve; lack of sufficient resources allocated to health and safety; overemphasis at site level on production objectives to the obvious detriment of good safe working practices; failure of government to put sufficient resources into safety enforcement; and the lack of focus of some construction professionals on health and safety issues. Tarrants noted that injurious accidents are only one consequence of worker behaviour within specified working conditions; as such, they reveal very little about the antecedent behaviour and machine-environment malfunctions that are important contributors to current and future accident problems [5]. Cariel added that only with proper management commitment, planning, and establishment is it possible to achieve a safer working environment that is also cost effective [6].
The realization of the costs of accidents and the human suffering that follows has brought changes in the attitude of management and employees with regard to safety. Both Tarrants [5] and Laufer and Ledbetter [7] agree that measurement of safety performance is necessary for reasons such as locating and identifying problem areas, providing a basis for trend comparison, describing the current safety state of an organization, evaluating accident prevention programme effectiveness, assessing accident costs, establishing long-term accident control, and quantifying the probable risk of injury or other loss. Another notable characteristic of today’s construction sector is the fact that the work is mostly carried out in projects. A project can be seen as a temporary organization; it is started up in order to reach a certain outcome (see Turner and Müller [8]). The employees who participate in a project therefore have much less shared history than those who work in a regular organization, and accordingly this study was conducted at the project level. According to Heinrich, the process of accident causation is a chain of events. The first link in the chain is the faulty social environment. It causes human failure, which, in turn, produces unsafe acts and unsafe conditions. These create accidents, which end in injury or property damage. Unsafe acts are errors made by workers, and unsafe conditions are the existence of error-provocation situations. Roughly 88% of accidents are due to unsafe acts, 10% to unsafe conditions, and the remaining 2% are unavoidable. To ascertain unsafe acts, which contribute the major share of accidents, the safety behaviour sampling technique was adopted, and control charts provide the information needed to take immediate action. Interobserver reliability is a useful tool for determining the degree of agreement between internal and external auditors when assessing the safety management systems of an organization.
2.1. Safety Behaviour Sampling
Safety behaviour sampling is a technique for measuring unsafe acts. There can be two causes of errors:
(1) error-committing characteristics of people,
(2) error-provocation situations.
By providing the necessary feedback to people concerning their errors, they can be enabled to reduce their error-committing characteristics. Unsafe acts are errors made by workers, and unsafe conditions are the existence of error-provocation situations. According to Heinrich, roughly 88% of all accidents are caused by unsafe acts of people, 10% by unsafe conditions, and 2% by acts of nature. Safety behaviour sampling is based on the laws of probability. In a process that can be in only two states, safe and unsafe, the total probability is 1 (100%). In a multiactivity study, each observation is in a binary state for each activity considered. In terms of probabilities, if p is the probability that a single observation finds one state, say a safe act, then q = 1 − p is the probability of the unsafe state, and for a sample of n independent observations the number of unsafe observations follows the binomial distribution with parameters n and q.
This sampling technique has demonstrated usefulness in evaluating unsafe behaviour. Here, it is assumed that the percentage of time a worker behaves safely can be determined. In order to obtain a complete and accurate picture of the safe and unsafe acts performed by the worker, it is necessary to observe the worker continuously and record data related to unsafe acts. Note that a sufficiently large sample must be obtained for representative results. For a large number of observations, the resulting distribution approaches the shape of a normal curve.
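To make the “sufficiently large sample” requirement concrete, the number of observations needed can be estimated from the binomial standard error. The sketch below is illustrative only; the unsafe-act proportion and error tolerance are assumed values, not figures from this study. It uses z ≈ 2, matching the 2-standard-deviation limits adopted later for the control chart (roughly 95% confidence):

```python
import math

def required_sample_size(p_est, abs_error, z=2.0):
    """Observations needed so the estimated unsafe-act proportion
    lies within +/- abs_error of the true value at ~95% confidence."""
    return math.ceil(z ** 2 * p_est * (1 - p_est) / abs_error ** 2)

# Hypothetical pilot estimate: ~20% unsafe acts, tolerance of +/- 3
# percentage points.
n = required_sample_size(0.20, 0.03)  # 712 observations
```

Because the required n scales with 1/(abs_error)², tightening the tolerance quickly inflates the sample, which is why a pilot study is used to fix these parameters before full sampling begins.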
2.1.1. Procedure for Safety Behaviour Sampling
Define Work Stations
This includes departments/units in an organization where safety behavior sampling is to be conducted.
Prepare a List of Unsafe Acts
This list can be developed from plant accidents recorded initially and modified later as appropriate. Plant accidents include all accidents, such as disabling injuries, recordable injuries, and first-aid cases.
Conduct a Pilot Study
Prior to conducting a pilot study, one must carefully select the times at which worker behaviour will be observed. These times must be selected randomly. The number of trial observation periods required depends upon the number of persons observed. As a guideline, a sufficient number of trial observation periods should be selected so that the total sample size is at least 100.
The observer must be instructed to categorize worker behaviour as either safe or unsafe, as defined by the behaviours included in the unsafe-acts list. The observer should make a trial run and practice deciding instantaneously whether an observed behaviour is safe or unsafe. In addition, the observer should be trained to determine whether the behaviour of each worker is safe or unsafe at the time of each observation. While observing a department, the observer should walk through it and note whether workers are behaving unsafely.
2.2. p-Control Chart
Control charts, also known as Shewhart charts or process-behavior charts, in statistical process control are tools used to determine whether or not a manufacturing or business process is in a state of statistical control.
2.2.1. Method of Calculation
Step 1. Calculate the total number of observations and the total number of unsafe observations.
Step 2. Calculate the p value for each sample: p = (number of unsafe observations in the sample)/(total observations in the sample on that route).
Step 3. Calculate the sum and the average (p̄) of the p values.
Step 4. Calculate the control limits: UCL = p̄ + 2√(p̄(1 − p̄)/n) and LCL = p̄ − 2√(p̄(1 − p̄)/n), where n is the average sample size.
Step 5. Plot all p values for one route and check whether every point falls within the limits.
Generally, a confidence level within 2 standard deviations is considered adequate for most safety behaviour sampling studies; the factor “2” in the control limits corresponds to a 95.44% confidence level.
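The five steps above can be sketched as follows. The weekly counts used in the demonstration are hypothetical stand-ins for the kind of data recorded during sampling; the study’s actual figures are in Tables 2 and 3.

```python
def p_chart_limits(unsafe_counts, sample_sizes, z=2.0):
    """p-chart centre line and 2-sigma control limits.

    unsafe_counts[i]: unsafe acts observed in sample i
    sample_sizes[i]:  total observations in sample i
    """
    total_unsafe = sum(unsafe_counts)
    total_obs = sum(sample_sizes)
    p_bar = total_unsafe / total_obs            # centre line (average p)
    n_bar = total_obs / len(sample_sizes)       # average sample size
    sigma = (p_bar * (1 - p_bar) / n_bar) ** 0.5
    ucl = p_bar + z * sigma
    lcl = max(0.0, p_bar - z * sigma)           # a proportion cannot be < 0
    points = [u / n for u, n in zip(unsafe_counts, sample_sizes)]
    out_of_control = [i for i, p in enumerate(points)
                      if p > ucl or p < lcl]
    return p_bar, ucl, lcl, out_of_control

# Hypothetical week of data: six routes, 200 observations each.
p_bar, ucl, lcl, ooc = p_chart_limits([30, 25, 40, 20, 35, 28], [200] * 6)
# Route index 2 (40/200 = 0.20 unsafe) falls above the UCL here.
```

A route flagged as out of control signals unsafe behaviour beyond random variation, prompting immediate corrective action as described in Section 2.1.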
2.3. Safety Audit
Safety auditing is a typical organizational safety assessment activity. According to Cooper, any management system audit should be able to identify, assess, and evaluate organizational problems so that recommendations for improvement can be made [9]. Glendon categorized safety audits; a management safety audit covers safety matters and involves staff and perhaps specialist auditing staff as well [10]. A safety audit can be performed either internally, where the company’s own personnel review the performance, or externally, where the assessment is done by a trained expert from outside the organization. Clerinx et al. pointed out that the danger of internal safety audits is that the effort may be made only to show an increase in the level of health and safety [11]. Audits conducted by different auditors should reach similar results when the same operation is audited under the same conditions.
2.3.1. Interobserver Reliability
The results of an assessment should be reproducible under different conditions. In many cases, different observers, or even the same observer at a different time, may reach different conclusions. The concept of reliability provides an estimate of how consistently the studied behaviour is observed and scored. Intraobserver reliability measures the variation that occurs when a single observer performs multiple judgments at different times. Interobserver reliability measures the variation that occurs when two or more persons make judgments independently. Cohen presented kappa (κ) as a coefficient of agreement for nominal scales [12]. The proportion of agreement corrected for chance is κ = (P_o − P_e)/(1 − P_e), where P_o is the observed proportion of agreement and P_e is the proportion of agreement expected by chance.
According to Fleiss [14], both kappa and weighted kappa can be employed as measures of reliability, with weighted kappa being appropriate for ordered scales.
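As a minimal illustration of the κ formula, Cohen’s kappa can be computed directly from the observed and chance-expected agreement. The two rating vectors below are made up for demonstration; they are not the auditors’ ratings reported in Table 4.

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa: chance-corrected agreement between two raters
    judging the same items on a nominal scale."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    # P_o: observed proportion of agreement
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # P_e: agreement expected by chance, from each rater's marginals
    freq_a = Counter(ratings_a)
    freq_b = Counter(ratings_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Two raters, four items: P_o = 0.75, P_e = 0.5, so kappa = 0.5.
k = cohens_kappa([1, 1, 2, 2], [1, 1, 2, 1])
```

κ = 1 indicates perfect agreement, κ = 0 agreement no better than chance; weighted kappa would additionally credit near-misses on ordered scales such as the 1–5 Likert ratings used in this study.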
2.3.2. Lawshe’s Content Validity Ratio
In this approach, a panel of subject-matter experts (SMEs) is asked to indicate whether or not each measurement item in a set of measurement items is “essential” to the operationalization of a theoretical construct. The SME input is then used to compute the CVR for each ith item in a measurement instrument: CVR_i = (n_e − N/2)/(N/2), where CVR_i is the value for the ith measurement item, n_e is the number of SMEs indicating the measurement item is “essential,” and N is the total number of SMEs in the panel. One can infer from this equation that the CVR ranges from −1.00 to +1.00, where a CVR of 0 means that 50% of the SMEs in the panel believe that a measurement item is “essential.” A CVR greater than 0 would, therefore, indicate that more than half of the SMEs believe that a particular measurement item is “essential,” and thereby face valid. Lawshe [15, page 568] has further established minimum CVRs for different panel sizes based on a one-tailed test at the 0.05 significance level; for example, with a panel of 25 SMEs, measurement items for a specific construct whose CVR values are less than 0.37 would be deemed not “essential” and would be deleted from subsequent consideration.
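Lawshe’s ratio is simple to compute once the “essential” votes are tallied. The sketch below uses an assumed tally for a 54-member panel like the one described later in this paper; the vote counts are hypothetical.

```python
def content_validity_ratio(n_essential, n_panel):
    """Lawshe's CVR = (n_e - N/2) / (N/2).

    Ranges from -1.00 to +1.00; equals 0 when exactly half the
    SMEs rate the item 'essential'."""
    return (n_essential - n_panel / 2) / (n_panel / 2)

# Hypothetical item: 40 of 54 SMEs rate it 'essential'.
cvr = content_validity_ratio(40, 54)   # about 0.48
# Against the study's threshold of 0.39, this item would be retained.
keep = cvr >= 0.39
```

In practice the CVR is computed per item and compared with the minimum value for the panel size; items below the threshold are dropped from the questionnaire.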
3. Results and Discussion
3.1. Safety Sampling
The study was conducted in a large construction organization certified under OHSAS 18001 and employing 1200 employees per day. The study was conducted at the project level, the project being treated as a temporary organization. The management was keen to know the safety status on a weekly basis. The entire project was divided into six segments, or routes, in such a way that a minimum of 200 employees was covered on each. In the first stage, safety behaviour sampling was conducted. Safety sampling is a method of measuring defects or unsafe acts while touring a specified location on a prescribed 30-minute tour; it is an unscheduled examination of the work area by a team of observers, and the employees are not aware of the purpose for which the information is collected. The observers engaged in safety sampling were experienced supervisors, engineers, and safety staff. Each observer was allocated one route and observed the safety behaviour there. The study was conducted for eight weeks at the same time each week, and the results are shown in Table 2.
Based on the safety behaviour sampling data, control chart (p-chart) limits were calculated for weeks 1 to 8. The upper and lower control limits for weeks 1 to 8 are shown in Table 3.
The safety behaviour from weeks 1 to 7 was out of control, and it was under control during the eighth week. The control chart for week 8 is shown in Figure 1.
3.2. Content Validity Ratio
From an extensive literature review, a total of 31 success variables were identified. Before being included in the final draft of the questionnaire, they were statistically validated using the content validity ratio (CVR). This internal validation was carried out by asking 54 experts (i.e., corporate safety heads, safety managers, safety engineers, and senior safety officers who have been involved in managing safety in construction projects for at least 10 years) whether each of the 31 defined variables was “1 = essential”, “2 = useful but not essential”, or “3 = not necessary”.
These degrees of necessity were used to rate the success variables for safety programme implementation. The data gathered were then used to compute the CVR based on Lawshe’s formula [15]. According to Lawshe, with a panel of 54 respondents, the minimum value of CVR needs to be at least 0.39 in order to be acceptable. As a result, variables with CVR values less than 0.39 were not included in the final questionnaire. This preliminary study showed that all 31 variables had CVR values greater than 0.39, ranging from 0.80 to 0.95. Thus, it was inferred that all 31 variables were strongly valid for this research and could be included in the final form of the questionnaire.
3.3. Interobserver Reliability
To obtain an overall view of unsafe actions/conditions and the safety management systems, the management was interested in testing interobserver reliability between internal and external auditors. The internal auditors were senior safety engineers and safety managers from other sites of the company. The external auditors were qualified OHSAS 18001 lead auditors from a leading certification body. Both internal (I) and external (E) auditors were asked to rate the activities listed in the questionnaire on a 1-to-5 Likert scale (poor to excellent). The ratings of internal and external auditors for routes 1 to 6 are shown in Table 4.
Based on the ratings of the auditors, interobserver reliability was calculated using the MedCalc software, and the results are shown in Table 5.
In order to improve the safety behaviour of workers, a major programme must be introduced. This could consist of safety training programmes, lecture series, and so forth. The safety behaviour sampling study may be conducted on a weekly basis during and after the completion of the programme. The safety behaviour control chart for each period following the start of the programme will show whether a significant improvement in unsafe behaviour has been achieved. Modification of the programme or its components may be carried out as long as unsafe behaviour is being reduced. Once the minimum level of unsafe behaviour has been achieved, the behaviour sampling study may be repeated and the data plotted on the control chart to ensure that unsafe behaviour remains at the desired minimum level. The strength of agreement between the internal and external auditors was generally at a lower level. The explanation for this may be the different professional and cultural backgrounds of the auditors. The internal auditors had experience of their own safety activities and were also capable of identifying the deficiencies in those activities. Internal auditors can be utilized to audit the safety management systems at least once a month, which will help the management reinforce the safety management system by mitigating defects.
[1] Brown, “Total integration of safety professional into project management,” in Proceedings of the 1st International Conference of CIB Working Commission W99, pp. 137–144, Lisbon, Portugal, 1996.
[2] J. M. Anderson, “Managing safety in construction,” Proceedings of the Institution of Civil Engineers, vol. 92, no. 3, pp. 127–132, 1992.
[3] J. M. Anderson, “Can construction learn from the safety culture of others?” Construction Manager, vol. 13, no. 9, pp. 15–16, 1997.
[4] J. M. Anderson, “Addressing barriers to improve safety performance,” Construction Manager, vol. 4, no. 9, pp. 13–15, 1998.
[5] W. E. Tarrants, The Measurement of Safety Performance, Garland STPM, New York, NY, USA, 1980.
[6] J. C. R. Cariel, “Safety management in operations,” in Proceedings of the 1st International Conference on Health, Safety & Environment, pp. 421–428, 1991.
[7] A. Laufer and W. B. Ledbetter, “Assessment of safety performance measurement at construction sites,” Journal of Construction Engineering and Management, vol. 112, no. 4, pp. 530–542, 1986.
[8] J. R. Turner and R. Müller, “On the nature of the project as a temporary organization,” International Journal of Project Management, vol. 21, no. 1, pp. 1–8, 2003.
[9] D. Cooper, “Safety management system auditing,” in Improving Safety Culture—A Practical Guide, John Wiley & Sons, Chichester, UK, 1998.
[10] I. Glendon, “Safety auditing,” Journal of Occupational Health and Safety, vol. 11, no. 6, pp. 569–575, 1995.
[11] Clerinx, Langerbergh, and G. Vanden, “Audit in safety management systems in the process industry,” in Proceedings of the Commission for Environmental Cooperation Seminar (CEC ’93), Ravello, Italy, October 1993.
[12] J. Cohen, “A coefficient of agreement for nominal scales,” Educational and Psychological Measurement, vol. 20, no. 1, pp. 37–46, 1960.
[13] J. R. Landis and G. G. Koch, “The measurement of observer agreement for categorical data,” Biometrics, vol. 33, no. 1, pp. 159–174, 1977.
[14] J. L. Fleiss, “The measurement and control of misclassification error,” in Statistical Methods for Rates and Proportions, chapter 12, pp. 140–154, Wiley, New York, NY, USA, 1973.
[15] C. H. Lawshe, “A quantitative approach to content validity,” Personnel Psychology, vol. 28, no. 4, pp. 563–575, 1975.
Copyright © 2011 S. V. S. Raja Prasad and K. P. Reghunath. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.