Special Issue: Novel Technologies and Applications for Construction Materials 2016
Research Article | Open Access
Tanvir Manzur, Khaled Mahmood Ehsan, Sinha Lamia Sultana, Samira Mahmud, "Measurement of Surface Damage through Boundary Detection: An Approach to Assess Durability of Cementitious Composites under Tannery Wastewater", Advances in Materials Science and Engineering, vol. 2016, Article ID 5368635, 13 pages, 2016. https://doi.org/10.1155/2016/5368635
Measurement of Surface Damage through Boundary Detection: An Approach to Assess Durability of Cementitious Composites under Tannery Wastewater
Abstract
Concrete structures are often subjected to aggressive aqueous environments which consist of several chemical agents that can react with concrete to produce adverse effects. A Central Effluent Treatment Plant consisting of reinforced concrete structures, which is being constructed at Savar, Bangladesh, is an example of such a case. The purpose of this treatment facility is to reduce the environmental pollution created by tannery wastewater. However, tannery wastewater contains several chemicals, such as sulfates, chlorides, and ammonium, which are known from the literature to have detrimental effects on concrete. Evaluation of the durability of concrete structures in such environments is therefore imperative. This paper highlights a boundary detection technique developed through image processing performed using MATLAB. Cement mortar cubes were submerged in simulated tannery wastewater and images of the cube surfaces were taken at several time intervals. In addition, readings of compressive strength and weight were taken on the same days. In this paper, an attempt is made to correlate the results from image processing with those of strength and weight loss. It was found, within the scope of this study, that the specimens which suffered greater strength and weight loss also underwent greater loss of surface area.
1. Introduction
Durability of concrete structures is a matter of concern when the structure is exposed to harsh environmental conditions. Assessing the durability of an existing concrete structure in such conditions is neither a straightforward nor a simple task. It is rather complex, as several factors have a controlling effect on the behavior of concrete and because no direct measurement of durability can be acquired. This paper therefore aims to provide a directive regarding the durability of concrete in tannery wastewater. Simulated water having different proportions of eight primary constituents of tannery waste that have a significant effect on cement composites has been used in this study. The proportions of the constituents were determined from the analysis of collected field samples of tannery wastewater, taken from the largest tannery industrial site of the country. The findings of this study therefore depict the behavior of concrete when exposed to tannery wastewater composed of these constituents. The practical value of the analysis described in this paper is that it enables an engineer to perform a study using images of the damaged portion of a structure without being present at the location of the structure itself, which may be beneficial particularly for remote or less accessible areas. The technique could also be useful when a prompt analysis is required to obtain a preliminary idea of the extent of degradation, before pursuing a more detailed study on which mitigation measures may be based.
Tannery effluent contains several chemicals, such as high amounts of sulfides, lime, ammonium salts, chlorides, sulfates, and protein, which are discharged into the effluent from the beam house operations of soaking, liming, and deliming, as discussed by Ramasami et al. [1]. It is well known that cement composites undergo deterioration in sulfate-rich environments. The extent and kinetics of this deterioration depend on factors such as the sulfate content of the water, wet-dry cycles, and the pH of the solution, as discussed by Skalny et al. [2] and Brown [3]. Three chemical reactions occur between sulfate ions and hardened cement paste: recrystallization of ettringite, formation of gypsum, and decalcification of the main cementitious phase (C-S-H). The effects of sulfate attack can be summarized as expansion leading to spalling and disintegration, loss of strength, and loss of mass. Ammonium salts are the most aggressive among the agents that can degrade concrete, as discussed by Lea [4] and Biczok [5]. A very soluble calcium nitrate, a slightly soluble calcium nitroaluminate, and ammonia gas are produced during chemical attack on concrete by ammonium nitrate, inducing total leaching of calcium hydroxide and rapid decalcification of C-S-H. The effects of ammonium nitrate attack can be summarized as a very large increase in porosity and notable swelling, with the occurrence of cracks due to the formation of expansive crystals. It is therefore evident that concrete in contact with tannery waste is susceptible to a high degree of deterioration, and a study is necessary to assess the durability of concrete structures constructed in waste-prone facilities like the Central Effluent Treatment Plant at Savar, Bangladesh.
In this paper, an image processing technique used to calculate the percentage loss of surface area of cement mortar cubes due to being immersed in simulated tannery wastewater has been discussed. This percentage loss of surface area cannot alone give an adequate assessment of the total damage that the specimens underwent. For this study to have a broader scope, more parameters needed to be considered. Correlating the results of image processing with those of strength and weight loss thus enabled a more comprehensive analysis to be performed on the extent of damage. Several difficulties were faced during image processing. For instance, grey level range of the images was very small, which made it difficult to detect the actual boundary line from the binary image. Contrast stretching was applied as a preprocessing technique as a result of which the boundary line detection from the image became easier and precise. Also, during the retrieval of the images, it was noticed that a certain region around the image formed shadow of the mortar cube. As a result, only thresholding would have included shadow when trying to locate the boundary from binary image, which would have been a source of error. The use of standard deviation filter for boundary detection after contrast stretching enabled avoiding the shadow of low contrast and detecting the main boundary. Finally, an efficient boundary detection technique has been developed to evaluate the surface loss of cement composites subjected to tannery wastewater.
2. Materials and Methods
The materials used, the preparation of test specimens, and the tests conducted are all discussed in detail in this section.
2.1. Materials
The materials used are as follows:
(i) Graded standard sand: natural silica sand conforming to the requirements for graded standard sand in ASTM Specification C778 [6].
(ii) Ordinary Portland Cement (OPC).
The composition of the cement was determined by X-ray fluorescence (XRF) analysis conducted using a LAB CENTER XRF-1800. The cement composition is provided in Table 1.
2.2. Simulated Tannery Wastewater
Tannery wastewater samples were collected from the Hazaribagh area in Dhaka, Bangladesh. Table 2 shows the chemical parameters of this field sample [7]. Wastewater samples resembling the composition of tannery wastewater were then simulated in the laboratory. The dominant constituents of tannery wastewater that can make concrete susceptible to degradation were identified from previous studies [8, 9]. The following ions were selected for this study:
(i) Anions: chloride and sulfate.
(ii) Cation: ammonium.
Total concentrations of these anions and this cation in tannery wastewater were determined by laboratory testing and, eventually, a stock solution was prepared. The concentrations of the different constituents of the stock solution are shown in Table 3. The first simulated sample contained all of the selected primary tannery wastewater constituents. The other three simulated samples were prepared such that chloride was excluded from one, sulfate from another, and ammonium from the third. Table 4 gives the description of the different types of simulated water samples used in the study.
2.3. Compressive Strength Test and Weight Loss Measurement
The strength test was conducted in accordance with ASTM C109 [10]. Three samples were tested for each type of water condition on each day of testing, that is, at 90 days and 180 days. The cement mortar cubes were prepared as per ASTM C109 [10]. The mortar consisted of 1 part of cement and 2.75 parts of sand proportioned by mass. Fifty-millimeter (2 in.) test cubes were compacted by tamping in two layers. The cubes were cured for one day in the molds, then stripped and immersed in lime water. A water-cement ratio of 0.485 was used for all specimens. For measuring weight loss, cubes were first cured in lime water for 28 days. After curing, the cubes were oven-dried for 3 hours at 110°C and the initial weight was measured. A similar process was followed to determine the weight at 90 and 180 days: the mortar cubes were removed from the trays, dried in the oven for 3-4 hours at 110°C, and then the final weight was determined. Weight loss (%) was calculated from the initial and final weights.
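The weight-loss bookkeeping above is a one-line calculation; a minimal sketch follows (the gram values are invented for illustration, not measurements from the study):

```python
def percent_weight_loss(initial_g: float, final_g: float) -> float:
    """Weight loss (%) from oven-dry weights before and after exposure."""
    return (initial_g - final_g) / initial_g * 100.0

# hypothetical example: a cube weighing 265.0 g initially and
# 238.5 g after submergence and oven drying
loss = percent_weight_loss(265.0, 238.5)
```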
2.4. Image Capturing
Large trays were filled with the water samples, each tray containing a single type of water as described in Table 4. Samples were submerged in simulated water of each type as shown in Figure 1. Before submergence, an image of the surface of each cube was taken with a DSLR camera using an image-capturing setup. The setup consisted of a black cylindrical enclosure made of nylon with an LED source fitted to its upper end, which illuminated the specimen when connected to a power supply. The specimen was placed at the bottom end of the cylinder and images were captured from the light-source end. The camera was placed in the same position for every image, so a fixed focal distance was maintained throughout. After the submergence periods of 90 days and 180 days, the cubes were dried in the oven and imaged again. The initial and final pictures were taken of the same exposed surface to ensure an appropriate comparison between the initial and final conditions.
3. Image Processing
This section describes the methodology employed to obtain the percentage of surface area loss through image processing. First, the RGB (true color) image was acquired; it consists of three independent planes in which every pixel color is a combination of red, green, and blue intensity values, the primary color components. Contrast stretching was then applied as an enhancement technique for boundary detection, as discussed by Casado [11]. Using a filter, the local standard deviation at every pixel was calculated. The image was then converted into a binary image, which, however, showed noise along the boundary line. A median filter was used to remove the noise, followed by "bwareaopen," a MATLAB function [12], to remove small objects. In order to obtain the final boundary line by connecting the segmented pixels, dilation followed by erosion was performed. Figure 2 shows the flow diagram of the image processing technique used in the study. Details of every step are described in the following sections.
3.1. Contrast Stretching (CS)
Contrast is the difference in pixel intensity, which can provide details about the boundary of an object. The visibility of an object against its surroundings depends mostly on its boundary features. Adjustment of contrast is needed before extracting features from images having areas of low contrast or homogeneous intensity across the same boundary line. Homogeneity here means statistical consistency, where pixel intensities have close values. Weak homogeneity leads to local spatial dissimilarity and breaks the continuity of arrangements, which results in segmented pixel output along the boundary. Such homogeneity creates a very narrow range of grey levels. Low-contrast images may also result from poor illumination or wrong lens-aperture settings during image acquisition [11]. Surface texture conditions of the object itself may likewise produce low-contrast images. To overcome such problems, a contrast stretching technique can be applied. Contrast stretching is a simple image enhancement technique which increases the dynamic range of the grey levels. In the current study, the accuracy of boundary detection depended mostly on the surface conditions of the mortar cubes; that is, the intensity variations of the image depend largely on the extent of precipitation on the surface of the cubes under the different water conditions.
The general form of contrast stretching, as discussed by Casado [11], is

s = 1 / (1 + (m/r)^E)  (1)

where s is the output image value, r is the input image value, m is the thresholding value, and E is the slope.
Initially, a histogram equalization technique was used to detect the exact boundary. Histogram equalization redistributes the frequency of greyscale values over a wider range, which effectively adjusts the global contrast of an image. However, as is apparent from Figure 3, histogram equalization failed to retrieve the exact boundary. This is because the technique tends to enhance black and white areas rather than middle grey levels [11]. Contrast stretching was therefore used to identify the boundary of the damaged surface of the samples. It is evident from Figure 4 that, before contrast stretching, pixels of the concrete block occupied a narrow grey-level range, approximately 150-250. After contrast stretching, the grey levels spread over a significantly wider range of 50-220 through enhancement of the middle grey levels. It was clearly visible that, after contrast stretching, the image was darker in the dark areas and lighter in the light areas, as shown in Figure 4(a), indicating the increased difference between pixel intensities.
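The stretching transform described above can be sketched in a few lines. The following is a hedged Python approximation, where s = 1/(1 + (m/r)^E) maps input value r through threshold m and slope E; the m and E values below are illustrative, not the ones used in the study:

```python
import numpy as np

def contrast_stretch(img, m=0.5, E=4.0):
    """Contrast stretching s = 1 / (1 + (m/r)^E) on an image
    scaled to [0, 1]. m is the grey level mapped to 0.5 and E
    controls the slope of the transition."""
    r = np.clip(img.astype(float), 1e-6, None)  # avoid division by zero at r = 0
    return 1.0 / (1.0 + (m / r) ** E)

# a narrow grey-level band (like the ~150-250 range noted above, rescaled to [0, 1])
narrow = np.array([0.60, 0.70, 0.80, 0.90, 1.00])
stretched = contrast_stretch(narrow, m=0.8, E=8.0)
```

The transform is monotonic, so the ordering of grey levels is preserved while the dynamic range of the band is widened.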
3.2. Boundary Detection Using Local Standard Deviation Filter
Standard deviation is a statistical measure used to express variability in a data set. In the case of image processing, as discussed by Kumar and Gupta [13], the standard deviation means the deviation of a pixel's intensity from the mean in a specified neighborhood. A low standard deviation indicates that the values are close to the mean; a higher standard deviation means significant dispersion of the values from the mean. Because the intensity level changes by a significant amount at the boundary of an object, measuring this dispersion makes boundary detection possible. Details of the calculation of the local standard deviation using a filter are described in the following sections.
3.3. Filter for Local Standard Deviation
The local standard deviation of the input image was calculated using "stdfilt," a built-in MATLAB function [12]. The images were converted from uint8 to the double-precision floating-point data type, which ranges from 0 to 1. Each output pixel represents the standard deviation of the neighborhood around the corresponding input pixel [12]. Texture boundaries can be detected effectively by calculating the standard deviation, as discussed by Hidayat and Green [14]. If only one texture is present in the local neighborhood, the output represents only intraclass variations; when both textures across the boundary line are present, the output represents interclass as well as intraclass variations [14]. Standard deviation filters also work well for pattern recognition in noisy images [13, 15]. After filtering, two types of output images were obtained in this study:
Type 1. Output showing the undamaged portion.
Type 2. Output showing the damaged portion.
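A local standard deviation filter of this kind can be approximated without MATLAB using the identity var = E[x²] − (E[x])², computed with moving-average filters. This sketch uses population (n) normalization, whereas "stdfilt" normalizes by n − 1, so the values differ by a constant factor that does not affect boundary localization:

```python
import numpy as np
from scipy import ndimage

def local_std(img, size=3):
    """Local standard deviation in a size x size neighbourhood,
    a stdfilt-like filter built from uniform (moving-average) filters
    via var = E[x^2] - (E[x])^2."""
    x = img.astype(float)
    mean = ndimage.uniform_filter(x, size=size)
    mean_sq = ndimage.uniform_filter(x * x, size=size)
    var = np.clip(mean_sq - mean * mean, 0.0, None)  # clip round-off negatives
    return np.sqrt(var)

# a step edge: the response is high only near the intensity jump
step = np.zeros((7, 7))
step[:, 4:] = 1.0
sd = local_std(step, size=3)
```

On the step image, the filter output is zero in the flat regions and peaks along the column where the intensity changes, which is exactly the property exploited for boundary detection.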
Accuracy in detecting the boundary line depends on the intensity variations and the uniformity of a region. Nonuniform intensity variations, in other words weak homogeneity, occurred due to variations of depth, which eventually resulted in segmented and noisy boundary lines after transformation into the binary image. Similar phenomena were also observed by Ito et al. [16].
3.4. Conversion to Binary Image
Resizing of the image was needed because the output image after filtering contained segmented pixels caused by nonuniform variations in intensity. The output image showed only the pixels with higher standard deviation, which included both the boundary and the noise around it. Resizing was important for designing the filters and the structuring element for noise removal, and was done using the MATLAB function "imresize" [12]. Conversion to a binary image was performed with the MATLAB built-in function "im2bw" [12], varying the threshold level within certain ranges. It was found that the conversion worked well for levels in the range from 0.01 to 0.03 in class double.
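The two operations in this step have simple open-source analogues; the following is a sketch, not the paper's code, with the target shape left as a parameter since the resized dimensions are not given:

```python
import numpy as np
from scipy import ndimage

def to_binary(img, level=0.02):
    """im2bw-style fixed threshold on a double image in [0, 1];
    the 0.01-0.03 band quoted above is the level range that
    worked for the filtered images in the study."""
    return img > level

def resize_image(img, shape):
    """A minimal imresize stand-in using bilinear interpolation."""
    zoom = (shape[0] / img.shape[0], shape[1] / img.shape[1])
    return ndimage.zoom(img.astype(float), zoom, order=1)
```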
3.5. Shadow Removal Technique
Because image contrast was increased by contrast stretching (CS), the dark zones became darker, as highlighted in Figure 5(a). As a result, when the image was converted into a binary image (Figure 5(b)), the shadow was also detected as a boundary line at the higher level value. Different points within the shadow zone, however, had closely similar intensities, so applying the standard deviation filter largely removed the shadow. As a result, only the pixels within the specified level appeared in the binary image, as shown in Figure 6.
3.6. Filtering for Noise Reduction
Noise is described as random fluctuation of pixel intensity values. "Salt-and-pepper" noise is impulse-valued noise; for an 8-bit image, the typical value for pepper noise is 0 and that for salt noise is 255. In this study, nonuniform precipitation created maximum-value impulse noise on the surface, that is, "salt" noise. Such noisy pixels interfere with the small number of pixels surrounding them [17]. For these pixels the mean differed significantly, as a result of which the standard deviation filter showed them as output. Conventional low-pass filtering is not effective against impulse noise [17]. A median filter is often used instead, since it is capable of preserving the edges and small details in the image [15]. Hence, a median filter was used for effective removal of the "salt-and-pepper" noise in this study. The MATLAB function "bwareaopen" [12] was then used to remove small objects that the median filter could not. After noise removal, the output images were found to consist of segmented pixels. Therefore, in order to obtain a continuous boundary line, dilation followed by erosion was performed. As discussed by Gonzalez and Woods, dilation is the morphological operation that expresses vector addition between an image (or point set) and the structuring element; after dilation, object areas expand. The subsequent reduction of the object area is achieved by erosion, the operation whose output points result from vector subtraction between a set of points and the structuring element. For Type 1 images, the size of the disk-shaped structuring element varied within the range 3-15, and for Type 2 images it varied between 1 and 9. Once the segmented pixels were connected, the "imfill" function of MATLAB [12] was used to fill the foreground pixels within the closed boundary with white. Figures 7 and 8 show the images after noise removal and after connecting the segmented pixels, respectively.
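The cleanup sequence described above (median filter, bwareaopen-style small-object removal, dilation, erosion, hole filling) can be sketched with scipy analogues of the MATLAB functions; the parameter values here are illustrative, since the study tuned the structuring-element size per image:

```python
import numpy as np
from scipy import ndimage

def clean_boundary(bw, min_size=5, disk=1):
    """Noise removal and gap closing for a segmented binary image."""
    # median filter suppresses isolated salt-and-pepper pixels
    bw = ndimage.median_filter(bw.astype(np.uint8), size=3).astype(bool)
    # remove connected components smaller than min_size (cf. bwareaopen)
    labels, _ = ndimage.label(bw)
    sizes = np.bincount(labels.ravel())
    keep = sizes >= min_size
    keep[0] = False                     # background is never kept
    bw = keep[labels]
    # dilation bridges gaps between segmented boundary pixels ...
    st = ndimage.generate_binary_structure(2, 2)
    bw = ndimage.binary_dilation(bw, structure=st, iterations=disk)
    # ... erosion restores the object to roughly its original extent
    bw = ndimage.binary_erosion(bw, structure=st, iterations=disk)
    # fill the interior of any closed boundary (cf. imfill)
    return ndimage.binary_fill_holes(bw)

img = np.zeros((15, 15), dtype=bool)
img[5:10, 5:10] = True    # a solid region that should survive cleaning
img[1, 1] = True          # an isolated salt-noise pixel
cleaned = clean_boundary(img, min_size=5, disk=1)
```

The toy example keeps the solid region while discarding the isolated noise pixel, mirroring the behaviour required of the boundary images here.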
3.7. Calculation of Percent Surface Damage
In the case of images having low standard deviations, the segmented pixels were not connected. As a result, the structuring element for dilation was much larger than that for erosion. This introduced error, since dilation enlarged the object boundary relative to the actual one. To minimize this error, the same structuring element was used for dilation and erosion when retrieving the initial images. In calculating the initial pixel quantity, the same image was used with a changed threshold level value. This approach was possible because the damage occurred at different depths of the mortar block, which preserved almost the original square shape of the boundary. When the level value is adjusted above a certain range, shadow may also be included, depending on its presence; in that case, a slope value and a threshold value were adjusted to avoid the shadow when obtaining the initial pixels. Finally, the following equation was used for calculating the percentage of damage:

Damage (%) = ((initial pixel count − final pixel count) / initial pixel count) × 100  (2)
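The percent-damage computation reduces to a ratio of white-pixel counts; a minimal sketch follows (a reconstruction of the idea, since the paper's exact pixel bookkeeping is not reproduced here, with an invented 100 x 100 example mask):

```python
import numpy as np

def percent_surface_damage(initial_bw, final_bw):
    """Damage (%) from white-pixel counts of the filled initial
    and final surface masks."""
    p0 = np.count_nonzero(initial_bw)
    p1 = np.count_nonzero(final_bw)
    return (p0 - p1) / p0 * 100.0

initial = np.ones((100, 100), dtype=bool)   # full 10,000-pixel face
final = initial.copy()
final[:20, :] = False                       # top strip lost to attack
damage = percent_surface_damage(initial, final)
```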
4. Results and Discussion
In this section, the surface area loss determined by the proposed image processing technique is presented, and its relation to strength and weight loss is discussed. Figures 9 and 10 show the images of the cube surfaces taken after submergence in the different water conditions for 90 days and 180 days, respectively. Both figures show the initial images and the final images after image processing. The corresponding threshold and slope values for contrast stretching and the level for the binary image transformation are also given, along with the percent damage calculated using (2). It is found that cubes under the T1 and T2 tannery water conditions lost about 12% and 3% of surface area, respectively, at 90 days, as shown in Figure 9, whereas cubes under the T3 and T4 conditions show no damage. These values indicate that, after 90 days of submergence, cement composites under the T1 and T2 conditions suffer significant area loss while cubes under the T3 and T4 conditions experience no damage at all. At 180 days, the surface area losses of cubes under the T1 and T2 conditions were found to be about 31% and 17.5%, respectively. As at 90 days, cubes under the T3 and T4 conditions show no surface damage. Hence, it is evident that cement composites under the T1 and T2 tannery water conditions lost significant surface area at 180 days, whereas the T3 and T4 water conditions appear to have no detrimental effect on the cement cubes. However, the usefulness of the proposed image processing technique depends on the existence of a good correlation between the calculated surface area loss and the actual strength and weight loss of the cement cubes. Figures 11 and 12 compare the strength loss, weight loss, and surface area loss of samples subjected to the various combinations (T1, T2, T3, and T4) of tannery wastewater considered in the study.
The aforementioned image processing has been developed to trace two types of conditions: one was damaged cubes with a certain percentage of surface area lost and the other was cubes without any loss of surface area. It is, therefore, not required to calculate the percentage gain in surface area for cubes which had expanded. Also, these expanded samples do not fall within the damaged category. This is why gains in strength or weight are ignored for this particular study and any percentage gain in strength or weight is instead regarded as “zero loss.” Moreover, percent gains in weight of cubes in all cases are found to be insignificant (in the range between 0.5% and 1.5%).
As can be observed from Figures 11 and 12, both T1 and T2 cubes underwent significant strength and weight loss, whereas T3 and T4 cubes show little or none. Notably, the test results for strength and weight are consistent with those of image processing: where surface area loss occurred, strength and weight loss also occurred, and where surface area loss was zero, strength and weight losses were also zero. It also appears that the relations between surface area loss determined by the image processing method and weight and strength loss are not linear, so a range of surface loss values will be required to identify a particular durability criterion, that is, actual strength loss. Therefore, in order to develop a statistically significant relationship between the proposed image processing outcome and actual durability measurements, more experiments with larger sample sizes will need to be conducted. Nevertheless, the proposed image processing technique is clearly in harmony with the experimental results and demonstrates its potential to provide reliable durability information on cement composites.
5. Comparison between Conventional Edge Detection Techniques and the Proposed Method
In this section, the effectiveness of the proposed method is discussed in comparison with some commonly used boundary detection functions of MATLAB. The Sobel, Prewitt, LoG, and Canny operators were used to detect the edges of a T1 sample after 180 days of submergence, and their outcomes are compared with those of the proposed technique.
The Sobel operator applies convolution masks horizontally and vertically, in the x and y directions, and combines the magnitude values to find the absolute gradient magnitude of the whole image [20–22]. Sobel produces good results for high-frequency variations, as discussed by Rana and Dalai. The Sobel method returns edges at those points where the gradient magnitude of the image is maximum, as described by Juneja and Sandhu. Figure 13(a), displaying the output of the Sobel operator, shows the maximum-gradient response along with missing edges. The Prewitt operator is a first-order derivative operator and uses eight possible orientations to estimate the gradient of the image intensity function in the neighbourhood [24, 25]; the entire set of eight masks is computed and the largest module is selected. Both the Sobel and Prewitt operators therefore produce similar edge detection output, as is evident from Figures 13(a) and 13(b). The Laplacian of Gaussian (LoG) operator combines Gaussian filtering with the Laplacian for edge detection. The Gaussian filter smooths the image, since the second derivative is sensitive to noise, and detection is done by finding the zero crossings, which produces double edges. According to Sharma and Kaur, LoG does not properly detect the boundary at corners and curves where the intensity changes abruptly. It can be seen from Figure 13(c) that this operator fails to locate the actual boundary and instead detects false boundary lines. Smoothing for noise reduction in such a case might also reduce the magnitude of the image gradients, which are already low because of the narrow grey-level range. The Canny operator calculates the gradient using the derivative of a Gaussian filter. Thanks to double thresholding, Canny is less sensitive to noise and can detect true weak edges. For noise reduction, a Gaussian 2D smoothing kernel with a specified standard deviation is used as a blurring operation.
For Canny, edge detection results become erroneous for values of σ greater than 0.35, as discussed by Kumar and Singh. For determining the boundary line of surface-damaged cement composites, it is necessary to locate weak lines, that is, fine edges, since the actual areas appear homogeneous on both sides of the line. Figure 13(d) shows that the Canny operator detects boundaries with false boundary lines as well as missing portions, as described by Dhankhar and Sahu. This operator does, however, perform better under noisy conditions.
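The gradient-combination step that Sobel and Prewitt share can be illustrated with a scipy analogue of the MATLAB operators (a sketch of the gradient stage only; the thresholding that selects maximum-gradient points is omitted):

```python
import numpy as np
from scipy import ndimage

def sobel_magnitude(img):
    """Absolute gradient magnitude from horizontal and vertical
    Sobel masks, combined as sqrt(gx^2 + gy^2)."""
    x = img.astype(float)
    gx = ndimage.sobel(x, axis=1)   # x-direction mask
    gy = ndimage.sobel(x, axis=0)   # y-direction mask
    return np.hypot(gx, gy)

# the magnitude peaks on a step edge and vanishes in flat regions
step = np.zeros((5, 8))
step[:, 4:] = 1.0
mag = sobel_magnitude(step)
```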
(a) Boundary detection by Sobel
(b) Boundary detection by Prewitt
(c) Boundary detection by LoG
(d) Boundary detection by Canny
(e) Boundary detection by the proposed method
In the proposed method, as described earlier, the grey-level ranges of the images were found to be small. Contrast stretching was therefore applied as an enhancement technique to widen the grey-level ranges, which also darkened the shadow zones. A standard deviation filter was then used to locate intensity variations and hence boundary lines, and noise reduction was performed by filtering, eventually yielding segmented pixels as output. It is evident from Figure 13(e) that, compared with the conventional edge detection techniques, the proposed method produced better results with far fewer false boundary lines. As the output images contained intensity irregularities, as shown in Figure 13(e), general morphological operations were adopted for image restoration, as discussed by Raid et al. To overcome the intensity discontinuities, dilation followed by erosion was performed, giving images with well-defined boundary lines. The final pixel percentage was then calculated using (2).
Figure 14 shows the edge detection of T1 samples after 90 days by various conventional boundary detection techniques and the proposed method. The superior efficiency of the proposed method as compared to traditional boundary detection techniques is also evident from Figure 14.
(a) Boundary detection by Sobel
(b) Boundary detection by Prewitt
(c) Boundary detection by LoG
(d) Boundary detection by Canny
(e) Boundary detection by the proposed method
6. Conclusions
This paper has described a method for evaluating concrete degradation under tannery wastewater by employing an image processing technique. Some key findings of this study are as follows:
(a) An edge detection technique has been developed in MATLAB which can effectively detect boundaries of mortar cubes where different areas show homogeneity or weak homogeneity along the same boundary line.
(b) The effectiveness of using the standard deviation for boundary detection has been presented, along with its ability to avoid low-contrast shadow through contrast adjustment.
(c) The percentage of surface area loss was quantified once the boundary was detected.
(d) An effective correlation between surface area loss and experimentally measured strength and weight loss has been observed.
(e) Cubes which underwent greater loss of surface area also suffered greater loss of strength.
(f) Cubes which experienced greater loss of surface area were also the ones which lost more weight.
(g) A comparison of the proposed method with conventional techniques reveals the superior effectiveness of the proposed technique for boundary detection.
When the results from image processing (surface area loss), strength loss, and weight loss were studied together, it could be inferred qualitatively that the cubes which faced greater loss of surface area were the ones which also suffered greater loss of strength and weight. However, whether quantification of this correlation is possible is a question that can only be answered through further research, preferably with a larger sample size. This study establishes a cornerstone for an extensive study of the patterns of surface area loss, weight loss, and strength loss to explore the correlation more elaborately.
Competing Interests
The authors declare that they have no competing interests.
Acknowledgments
The authors are grateful to the personnel of the Concrete Laboratory, Department of Civil Engineering, Bangladesh University of Engineering and Technology (BUET), for their assistance and cooperation in the successful execution of the experiments.
- T. Ramasami, S. Rajamani, and J. Raghavarao, “Pollution control in leather industry: emerging technological options,” in Proceedings of the International Symposium on Surface and Colloidal Science and Its Relevance to Soil Pollution, Madras, India, March 1994.
- J. Skalny, J. Marchard, and I. Odler, Sulfate Attack on Concrete, Spon Press, Taylor & Francis Group, London, UK, 2002.
- P. W. Brown, “An evaluation of the sulfate resistance of cements in a controlled environment,” Cement and Concrete Research, vol. 11, no. 5-6, pp. 719–727, 1981.
- F. M. Lea, “The action of ammonium salts on concrete,” Magazine of Concrete Research, vol. 17, no. 52, pp. 115–116, 1965.
- I. Biczok, Concrete Corrosion and Concrete Protection, Akademiai Kiado, Budapest, Hungary, 1972.
- “Standard specification for standard sand,” Tech. Rep. ASTM C778-13, ASTM International, West Conshohocken, Pa, USA, 2013.
- S. Saha and S. A. Papry, Assessment of cement mortar degradation when exposed to tannery wastewater [Undergraduate thesis], Bangladesh University of Engineering and Technology, 2016.
- M. Bosnic, J. Buljan, and R. P. Daniels, Pollutants in Tannery Effluents, UNIDO, Vienna, Austria, 2000.
- S. L. Sultana, S. Mahmud, and T. Manzur, “Review on possible detrimental effects of tannery wastewater constituents on concrete,” in Proceedings of the International Conference on Advances in Civil Infrastructure and Construction Materials (CICM '15), MIST, Dhaka, Bangladesh, December 2015.
- “Standard test method for compressive strength of hydraulic cement mortars,” Tech. Rep. ASTM C109/C109M-13e1, ASTM International, West Conshohocken, Pa, USA, 2013.
- C. O. Casado, Image Contrast Enhancement Methods, Research Project, Faculty of Telecommunications, Department of Radio Communications and Video Technologies, Technical University, Sofia, Bulgaria, 2010.
- Mathworks, January 2016, http://www.mathworks.com/help/images.
- V. Kumar and P. Gupta, “Importance of statistical measures in digital image processing,” International Journal of Emerging Technology and Advanced Engineering, vol. 2, no. 8, pp. 56–62, 2012.
- R. Hidayat and R. Green, Real-Time Texture Boundary Detection from Ridges in the Standard Deviation Space, Computer Science and Software Engineering Department, University of Canterbury, Christchurch, New Zealand, 2009.
- M. Mastriani and A. E. Giraldez, “Enhanced directional smoothing algorithm for edge-preserving smoothing of synthetic-aperture radar images,” Measurement Science Review, vol. 4, no. 3, pp. 1–11, 2004.
- A. Ito, Y. Aoki, and S. Hashimoto, “Accurate extraction and measurement of fine cracks from concrete block surface image,” in Proceedings of the 28th Annual Conference of the IEEE Industrial Electronics Society, pp. 2202–2207, Sevilla, Spain, November 2002.
- R. Garg and A. Kumar, “Comparison of various noise removals using Bayesian framework,” International Journal of Modern Engineering Research, vol. 2, no. 1, pp. 265–270, 2012.
- F. A. Jassim, “Semi-optimal edge detector based on simple standard deviation with adjusted thresholding,” International Journal of Computer Applications, vol. 68, no. 2, pp. 43–48, 2013.
- R. C. Gonzalez and R. E. Woods, Digital Image Processing, Prentice Hall, New York, NY, USA, 2002.
- I. Sobel, “An isotropic 3×3 gradient operator,” in Machine Vision for Three-Dimensional Scenes, H. Freeman, Ed., pp. 376–379, Academic Press, New York, NY, USA, 1990.
- D. Rana and S. Dalai, “Review on traditional methods of edge detection to morphological based techniques,” International Journal of Computer Science and Information Technologies, vol. 5, no. 4, pp. 5915–5920, 2014.
- K. Wang and W. Dai, “Approach of image edge detection based on Sobel operators and grey relation,” Computer Applications, vol. 26, no. 5, pp. 1035–1036, 2006.
- M. Juneja and P. S. Sandhu, “Performance evaluation of edge detection techniques for images in spatial domain,” International Journal of Computer Theory and Engineering, vol. 1, no. 5, pp. 614–621, 2009.
- M. S. Sri and M. Narayana, “Edge detection by using lookup table,” International Journal of Research in Engineering and Technology, vol. 2, no. 1, pp. 483–488, 2013.
- N. Senthilkumaran and R. Rajesh, “A study on edge detection methods for image segmentation,” in Proceedings of the International Conference on Mathematics and Computer Science (ICMCS '09), vol. 1, pp. 255–259, 2009.
- K. Sharma and N. Kaur, “Comparative analysis of various edge detection techniques,” International Journal of Advanced Research in Computer Science and Software Engineering, vol. 3, no. 12, pp. 617–621, 2013.
- J. Canny, “A computational approach to edge detection,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 8, no. 6, pp. 679–698, 1986.
- M. Kumar and S. Singh, “Edge detection and denoising medical image using morphology,” International Journal of Engineering Sciences & Emerging Technologies, vol. 2, no. 2, pp. 66–72, 2012.
- P. Dhankhar and N. Sahu, “A review and research of edge detection techniques for image segmentation,” International Journal of Computer Science and Mobile Computing, vol. 2, no. 7, pp. 86–92, 2013.
- A. M. Raid, W. M. Khedr, M. A. El-dosuky, and M. Aoud, “Image restoration based on morphological operations,” International Journal of Computer Science, Engineering and Information Technology, vol. 4, no. 3, pp. 9–21, 2014.
Copyright © 2016 Tanvir Manzur et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.