Abstract

In the shaking table test of a large cassette structure, story drift is an essential set of experimental data. Traditional displacement measurement methods are limited by problems such as the need for full contact between the sensors and the structure model, a large installation workload, and susceptibility to environmental interference. Noncontact displacement measurement methods, such as optical measuring technology, can solve these problems and serve as an effective supplement to traditional displacement measurement in the shaking table test. This paper proposes a vision-based displacement measurement method. Predesigned artificial targets, which act as sensors, are installed on each floor of the cassette structure model. A high-speed industrial camera is used to acquire a series of images of the artificial targets on the structure model during the shaking table test. A structural calculation program based on Python and OpenCV, combining computer vision and machine vision, is developed to extract the artificial targets from the acquired image series and calculate their displacements. The proposed method is applied in a shaking table test of a reduced-scale fifteen-floor reinforced concrete cassette structure model, in which a laser displacement meter and a seismic geophone are also applied for comparison. The experimental results acquired by the proposed method are compared with those of the laser displacement meter and the seismic geophone. The average error of the story drift obtained by the proposed vision-based measurement method is within 5%, in good agreement with the laser displacement meter and the seismic geophone, which confirms the effectiveness of the proposed method.

1. Introduction

The cassette structure is a new type of spatial structure system developed independently in China. There is growing research interest in the composition, characteristics, and performance of reinforced concrete cassette structures in high-rise buildings under earthquake action. The shaking table test is a good method to analyse the seismic behaviour and performance of the cassette structure in high-rise buildings. In the shaking table test, displacement measurement technology is one of the most important research fields of engineering detection. Displacement measurement methods can be roughly divided into two types, contact and noncontact [1]. The contact measurement method is mainly realized by classic traditional sensors, which mainly include the LVDT (linear variable differential transformer), inertial sensor, and wire-type displacement gauge. The noncontact measurement method uses various traditional noncontact displacement sensors or optical displacement measuring methods, which mainly include holographic interferometry [2], speckle interferometry [3], laser distance measurement, and vision-based measurement methods [4, 5]. The measurement results of contact methods are easily affected by the structure model, especially by cracks and other damage to the structure under large earthquake action, which means that the displacement measurement requirements are more stringent and the reliability of the contact point connection is particularly important. In addition, the installation workload of displacement gauges, seismic geophones, or other contact instruments becomes huge when there are many measuring points. In contrast, the vision-based method, as a kind of noncontact measurement method, has no contact with the structure model and does not interfere with the movement of the specimen, which is more reliable [6].
Compared to other optical methods such as holographic and speckle interferometry, the vision-based measurement method has the advantages of simpler required equipment, lower requirements for the measurement environment, and a wider measurement range [7]; it can replace traditional measurement methods in some situations or serve as an effective supplement to them.

Many researchers have tried to apply the vision-based measurement method in the shaking table test. Ji [8], based on the principle of computer vision and using camera parameter calibration, image tracking, and three-dimensional point reconstruction technology, established a structural dynamic displacement test method using a consumer camera. Wang Xiaoguang et al. [9] proposed a robust landmark matching algorithm and developed a three-dimensional full-field displacement measurement system for shaking table experiments based on the VS2010 development environment. Hyungchul Yoon et al. [10] proposed a visual measurement method using a consumer-grade camera, in which the measurement mark points are selected manually. Zhou Ying et al. [11] used a consumer-grade camera as the acquisition device and adopted feature optical flow based on point matching to track the target and obtain its displacement time history. Han Jianping [12] compiled noncontact displacement measurement programs in MATLAB based on computer vision and performed displacement measurement in the shaking table test of a four-story reinforced concrete frame-infill wall structure model. Nevertheless, the vision-based method has seldom been applied in the shaking table test of a large structure model, especially the large cassette structure.

In the shaking table test of the large cassette structure, the complicated background of the artificial target makes its recognition and location exceedingly difficult. Therefore, for the shaking table test of a fifteen-floor reduced-scale structure model, this paper proposes a noncontact measurement method. A designed artificial target is installed on the structure as a “sensor,” and a single high-speed industrial camera is applied to acquire a series of images of the target during the shaking table test. A calculation program is developed to extract the target from the complicated background and calculate its displacement. The measurement program is written in Python using the OpenCV module. The proposed method is applied to measure story drift in the shaking table test, while traditional displacement sensors are also applied for comparison. The results acquired by the proposed method and the traditional displacement sensors are compared to verify the effectiveness and precision of the proposed method.

2. Vision-Based Displacement Measurement Method

The technical route of the proposed vision-based displacement measurement method is shown in Figure 1.

2.1. Image Acquisition

In this paper, the high-speed industrial camera hk-a4000-tc500 is applied, and a fixed-focus lens () is used for image collection. The camera is connected through optical fiber and a control box to a calculation server equipped with a large-capacity solid-state drive, as shown in Figure 2. To ensure the quality and speed of image acquisition, the camera directly outputs grayscale images, which is convenient for later image processing and avoids errors due to grayscale conversion.

2.2. Camera Calibration

To locate the artificial target in real-world coordinates, the corresponding relationship between the real-world coordinate system and the two-dimensional image coordinate system has to be determined. Therefore, a geometric model of the camera imaging must be established, whose model parameters are the camera parameters. Camera calibration is the procedure of determining the camera parameters through experiments and calculations. In this test, considering that a high-resolution industrial camera is applied and only the middle area of the artificial target image is used for calculation, the effect of optical distortion can be ignored [13]. Only the scaling factor needs to be determined in the camera calibration.

The scaling factor (SF) relates the image space to the physical space: it is the ratio of the known physical edge length of the artificial target to its edge length in pixels, given as Formula (1) [14].

The edge length of the artificial target in the image is acquired by performing subpixel corner recognition on the images of the high-precision artificial target. The real size of the artificial target is known, as shown in Figure 3. With the known physical edge length and the measured edge length in pixels, the conversion coefficient is obtained by Formula (1).
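As a minimal sketch, the pixel-to-physical conversion of Formula (1) reduces to a single ratio; the edge lengths below are illustrative values, not the ones measured in this test:

```python
# Scaling factor (SF): ratio of the target's known physical edge length to
# its edge length in pixels (Formula (1)). Values below are illustrative only.
def scaling_factor(physical_edge_mm, pixel_edge_px):
    """Return the mm-per-pixel conversion coefficient."""
    return physical_edge_mm / pixel_edge_px

# Example: a 100 mm target edge spanning 400 px gives 0.25 mm/px.
sf = scaling_factor(100.0, 400.0)
```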

2.3. Computer Vision-Based Artificial Target Recognition

In this paper, the computer vision-based artificial target identification method is applied to extract the image area of the artificial targets, which is fundamental to the subsequent machine vision-based positioning of the artificial target points. The extraction procedure is shown in Figure 1, mainly including image filtering, edge detection, contour detection, and mask generation. The whole procedure is shown in Figure 4.

2.3.1. Image Filtering

Image filtering is used only in the computer vision-based artificial target extraction procedure. Conventional filters, such as Gaussian filtering [15] and median filtering, lose key edge information during the filtering procedure, which introduces errors into the measurement results. To preserve the edge information, the edge-preserving filtering (EPF) method is adopted in this paper.

2.3.2. Morphological Operations

After binarization, there are burrs and interference information at the edges of the image; some of this interference can be removed by repeated morphological dilation and erosion operations, as shown in Figure 4.

2.3.3. Edge Detection

The improved Canny edge detection method is used to detect the edges of the image after the morphological operations, which is the foundation of the contour extraction in the next step. The method calculates the gradient magnitude and direction of each pixel in the image, as shown in equations (2) and (3), and performs threshold filtering on the results to obtain the image edges:

G = sqrt(Gx^2 + Gy^2)  (2)

θ = arctan(Gy / Gx)  (3)

where Gx and Gy are the gradients of one pixel in the x and y directions, and θ is the direction of the gradient. Then, through nonmaximum suppression and the double-threshold method, as shown in Figure 5, the unnecessary edges are filtered out, and a more realistic image edge is obtained.

2.3.4. Contour Detection-Based Target Extraction

A contour is the set of boundary points of a connected region, as shown in Figure 6. The contour-based target extraction first obtains the contours of the picture by performing contour detection on the edge-detected image; the detected contours are then approximated according to the principle of minimum distance, through which some redundant contours are filtered out. Finally, the circumscribed rectangle of each remaining contour is calculated.

The edge of the artificial target is a connected region; by performing contour detection on the edge-detected artificial target image, the contour of the artificial target and other redundant contours are all detected. Because the contour of the artificial target is already an approximate square, its bounding rectangle is also approximately square and satisfies certain conditions, while other irregular bounding rectangles do not. Filter conditions can be set according to this difference to remove the unwanted bounding rectangles and obtain the bounding rectangle that contains only the artificial target.

Finally, the bounding rectangles of the artificial targets are separated from the original image. According to the vertex coordinates and the width of each bounding rectangle, a mask can be made. The size of the mask is the same as the original image, but the inside of the rectangular area is set to 1, while the outside is set to 0. The images that contain only the artificial targets can be extracted by performing the intersection operation of the mask and the original image, as shown in Figure 4.

2.4. Machine Vision-Based Artificial Target Locating

Two methods of artificial target locating are applied in this paper: the corner detection method and the template-based grayscale centroid method.

2.4.1. Pixel Level Corner Detection

After the abovementioned computer vision-based processing, the image of the artificial target is successfully separated from the background. Subpixel corner detection is then used to calculate the center coordinates of the marked points. The main procedure is as follows: first, pixel-level corner detection [16]; then, subpixel-level corner detection near the detected corner.

The pixel-level corner detection uses the Harris corner detection method: a local window centered on each pixel slides over the image, and the eigenvalues of the matrix M corresponding to each pixel are solved, where the matrix M is

M = Σ_w [Ix^2, Ix·Iy; Ix·Iy, Iy^2]

where I(x, y) is the grayscale value at point (x, y), and Ix, Iy are the partial derivatives of I with respect to x and y.

When the eigenvalues λ1 and λ2 satisfy the conditions (1) both λ1 and λ2 are large and (2) λ1 ≈ λ2, the corresponding pixel is a corner point.

2.4.2. Subpixel Level Corner Detection

The subpixel-level corner detection searches for the real corner point around the pixel-level detected corner, as shown in Figure 7.

The subpixel corner detection solves an equation set built from the two conditions shown in Figure 7:(a)The image area near a point p inside the window is uniform, so its gradient is 0(b)The gradient at a point p on an edge is orthogonal to the vector along the edge direction

Assuming that the starting point q is near the actual subpixel corner, every point p around q that satisfies one of the above conditions gives a zero inner product between its gradient and the vector (q − p), as shown in equation (5):

∇I(p) · (q − p) = 0  (5)

where ∇I(p) is the gradient at p and (q − p) is the vector from p to q. Many such gradient-vector pairs can be found around q, so the resulting system of equations can be solved; its solution is the subpixel-accuracy coordinate of the actual corner point q.

2.4.3. Grayscale Centroid Method Based on Template Matching

If the extracted images of the circle-shaped artificial targets are directly processed by the grayscale centroid method, the centroid position of the same artificial target will differ depending on the selected boundary of the target, which introduces errors into the displacement measurement results, as shown in Figure 8.

Moreover, because the artificial target occupies an exceedingly small fraction of the whole picture, the positioning calculation cannot be performed directly. To solve this problem, in this paper, the images of the artificial target area obtained by the computer vision method are further subjected to template matching to achieve a more accurate and repeatable segmentation. Then, the grayscale centroid of each artificial target is calculated by the following equations:

x_c = Σ x·I(x, y) / Σ I(x, y), y_c = Σ y·I(x, y) / Σ I(x, y)

where I(x, y) is the grayscale value at pixel (x, y) and the sums run over the matched window.

2.5. Displacement Calculation

The displacement of the artificial target in the image coordinate system can be calculated from consecutive images. The actual displacement of the artificial target is then obtained by multiplying the displacement in image coordinates by the scaling factor, as shown in the following equation, which gives the displacement of the point where the marker is installed on the structure:

d = SF · (u_t − u_0)

where u_0 and u_t are the image coordinates of the target in the reference frame and the current frame, respectively.
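Putting the pieces together, the displacement of a marker reduces to a pixel-coordinate difference scaled by SF; the function name and values below are illustrative:

```python
# Displacement of a target point: the pixel-coordinate difference between
# the current frame and the reference frame, scaled to physical units by
# the scaling factor SF. All numbers below are illustrative.
def displacement_mm(x_ref_px, x_now_px, sf_mm_per_px):
    return (x_now_px - x_ref_px) * sf_mm_per_px

d = displacement_mm(412.0, 436.0, 0.25)  # 24 px shift at 0.25 mm/px
```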

3. Validation Test

3.1. Experiment Setup

The proposed method is applied in the shaking table test of a reduced-scale fifteen-floor reinforced concrete cassette structure model. The experiment was conducted on the shaking table of the Structural Laboratory in Jiulonghu Campus, Southeast University. The parameters of the shaking table are shown in Table 1. The structural model in this paper is shown in Figure 9.

The height of the structure model is , the plane size of the structure model is , and the ratio of height to width is . To avoid torsion of the model and the need to rotate it due to different stiffnesses along the principal axes, the plan layout of the model is square, and the orthogonal diagonal sandwich plate is used as the floor slab. The material of the structure model is microconcrete, which simulates real concrete. The design elastic modulus of the microconcrete is of the concrete . The reinforcement is simulated by galvanized iron wire, and the design yield strength is . The weight of the whole structure model is about , which meets the capacity requirements of the shaking table. In order to obtain the seismic response of the structure under different ground motions, the El Centro #6 wave is selected in this experiment for analysis, and the working conditions are distinguished by scaling factors of 8 and 10.

3.2. Installation of the Displacement Sensor

The arrangement of the artificial target, laser displacement meter, and seismic geophone is shown in Figure 2. The laser displacement meter applied is Keyence IL-600, and the seismic geophone applied is 941B. The measuring range of the Keyence IL-600 is , the sampling frequency of the Keyence IL-600 is , and the repetitive accuracy of the Keyence IL-600 is . The sensitivity of the 941B seismic geophone is , and the maximum range of the 941B seismic geophone is .

3.3. Displacement Results

In this paper, an industrial camera is used to acquire images of the artificial targets, and a measurement program compiled in the Python language is used to measure the x-axis displacement of the reduced-scale fifteen-floor cassette structure model in the shaking table test. Finally, the measurement results are compared with those of the traditional measurement sensors: the laser displacement meter and the seismic geophone.

3.3.1. Displacement Time-History Curves of Each Floor

Due to space limitations, this article only gives the results of working condition 21 (El Centro #6 wave, scaling factor of 10) and working condition 20 (El Centro #6 wave, scaling factor of 8), as shown in Figures 10 and 11.

4. Discussion

4.1. Error Analysis

It can be seen from Figures 10 and 11 that the X-direction horizontal displacement curves of the measuring points obtained by the geophone, the laser displacement meter, and the visual image measurement method basically coincide. In the early period, when the structure began to vibrate, the agreement was generally good; in the middle and later periods, however, the curve of the geophone gradually deviated from the curves of the vision-based method and the laser displacement meter, while the curve of the vision-based method coincided well with that of the laser displacement meter. Analysis of the results suggests three main reasons:
(1) Because the structural model is very large, the vibration of the shaking table causes the installation support of the laser displacement meter to vibrate, but the installation support is assumed to be a zero-displacement point, which introduces errors
(2) Due to the tall structural model and the whiplash effect, large errors are introduced into the seismic geophone
(3) The displacement from the seismic geophone is obtained by integrating the velocity or acceleration, during which the error is amplified and deviation is caused

4.2. Quantitative Analysis of Error

In order to quantitatively analyze the measurement results of the visual method and the traditional displacement sensor, the correlation coefficient is used to evaluate the correlation between the image measurement and the traditional measurement method. The calculation formula is given as the following equation:

r = Σ(x_i − x̄)(y_i − ȳ) / sqrt(Σ(x_i − x̄)^2 · Σ(y_i − ȳ)^2)

where x_i and y_i are the dynamic displacement values of the traditional displacement sensor and the visual method, respectively, and x̄ and ȳ are the averages of the two sets of data. The value of r ranges from −1 to 1; a value of 1 means a perfect positive match.
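The correlation coefficient above can be computed directly with NumPy; the synthetic displacement series below is illustrative:

```python
import numpy as np

# Correlation coefficient between the vision-based series x and the
# reference-sensor series y:
# r = sum((x_i - x_bar)(y_i - y_bar)) /
#     sqrt(sum((x_i - x_bar)^2) * sum((y_i - y_bar)^2))
def corr_coef(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    dx, dy = x - x.mean(), y - y.mean()
    return (dx * dy).sum() / np.sqrt((dx ** 2).sum() * (dy ** 2).sum())

# Identical records give r = 1; the noisy sine below is illustrative.
t = np.linspace(0, 4 * np.pi, 500)
clean = np.sin(t)
noisy = clean + 0.01 * np.random.default_rng(1).standard_normal(t.size)
r = corr_coef(clean, noisy)
```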

Under working conditions 21 and 20, the horizontal maximum displacement, the error, and the correlation coefficient of the displacement data obtained by the proposed method and the laser displacement meter on each floor are shown in Tables 2 and 3.

It can be seen from Tables 2 and 3 that, under working conditions 21 and 20, the error of each floor is kept within 5%, which can meet the needs of displacement measurement in the field of civil engineering. The correlation coefficients are very close to 1, which proves that the results obtained by the laser displacement meter are highly consistent with the results of the image measurement.

5. Conclusion

In order to measure the story drift of the reduced-scale fifteen-floor reinforced concrete cassette structure model in the shaking table test, this paper proposed a vision-based displacement measurement method combining Python programming, computer vision, and machine vision algorithms. The noncontact vision-based measurement method consists of four parts: artificial target images acquired by an industrial camera, extraction of the artificial targets by computer vision, positioning of the artificial targets by machine vision, and the corresponding measurement and calculation programs compiled in Python. The effectiveness and accuracy of the method were proved by a series of structural model shaking table tests, and the following conclusions were obtained:
(1) Using the Python programming language combined with related computer vision algorithms, the marked points installed on the structural model can be well extracted even under complex backgrounds, which is the foundation for the positioning of the marked points and the noncontact measurement
(2) The average error of the horizontal displacement of each floor obtained by the proposed vision-based measurement method is within 5%, in good agreement with the laser displacement meter, and the correlation coefficient is greater than 0.99
(3) The effectiveness and accuracy of the proposed method were verified, and the method was applied in subsequent shaking table tests on the shaking table of the Jiulonghu Campus of Southeast University

Data Availability

The XLSX data used to support the findings of this study may be accessed by emailing the corresponding author, Wang Yanhua, at [email protected].

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

The authors gratefully acknowledge the financial support for this study by the National Natural Science Foundation of China (51708110 and 11827801).