Mathematical Problems in Engineering
Volume 2015 (2015), Article ID 610750, 7 pages
http://dx.doi.org/10.1155/2015/610750
Research Article

Fusing Multiscale Charts into 3D ENC Systems Based on Underwater Topography and Remote Sensing Image

1Navigation College, Dalian Maritime University, Dalian 116026, China
2Computer and Information Technology College, Nanyang Normal University, Nanyang 473061, China

Received 6 January 2015; Accepted 26 February 2015

Academic Editor: Luciano Mescia

Copyright © 2015 Tao Liu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

The purpose of this study is to propose an approach to fuse multiscale charts into three-dimensional (3D) electronic navigational chart (ENC) systems based on underwater topography and remote sensing image. This is the first time that the fusion of multiscale standard ENCs in a 3D ENC system has been studied. First, a view-dependent visualization technology is presented to determine the display condition of a chart. Second, a map sheet processing method is described for dealing with the map sheet splice problem: a process order called "3D order" is designed to adapt to the characteristics of the chart, a map sheet clipping process is described to deal with the overlap between adjacent map sheets, and our strategy for map sheet splice is proposed. Third, the rendering method for ENC objects in the 3D ENC system is introduced. Fourth, our picking-up method for ENC objects is proposed. Finally, we implement the above methods in our system, the automotive intelligent chart (AIC) 3D electronic chart display and information system (ECDIS), and show that our method handles the fusion problem well.

1. Introduction

Since the 1960s, when the term "geographic information system (GIS)" was introduced, the related theories and technologies, including spatial data processing and management, spatial data services, web GIS, and 3D GIS, have developed greatly [1]. Among them, 3D GIS is attracting more and more attention because cartographic 3D visualization makes it more visually efficient, quicker, and easier to understand, and it can provide rich 3D spatial analysis functions [2]. 3D GIS can also take advantage of more types of spatial data, such as 3D building models and terrains.

With the development of computer science and sensor technology, spatial information can be conveniently obtained in various ways. To date, the most advanced and efficient way is observation of the ground from above (remote sensing). It has many advantages, such as a large perception range, a short repetition period, and high resolution [2]. Since 1957, when the USSR successfully placed the world's first artificial earth satellite into its intended orbit, spatial science research and technology applications have entered a new era. At the beginning of the 1970s, the USA successfully launched the world's first earth observation satellite (Landsat-1). Nowadays, aerospace surveying and mapping programs have gradually matured, and a number of satellite systems have been established, such as EarlyBird, QuickBird, GDE, OrbView, IKONOS, EROS-2, and SPOT-5 (HRG). This promotes the application of remote sensing images in 3D GIS. And with the development of 3D GIS systems such as the digital earth and the digital city, there is an increasing demand for remote sensing images.

Remote sensing images have been widely used in surveying and mapping, navigation, resource survey, meteorology, and so on. Researchers have been committed to combining GIS with remote sensing imagery, especially the use of remote sensing images in 3D GIS systems. Numerous methods have been proposed to process and visualize remote sensing images in GIS systems [3, 4], and many successful software applications have been developed, such as Google Earth, Google Maps, NASA World Wind, ArcGlobe, and Windows Live Local. In order to exploit the full value of remote sensing images, the appropriate information has to be extracted and presented in a standard format so that it can be imported into geoinformation systems and thus support efficient decision processes. Many researchers therefore study how to provide an appropriate link between the remote sensing image and GIS [5]. Many remote sensing pattern recognition approaches, such as supervised, unsupervised, and knowledge-based expert system approaches, have been developed, and these methods have been used in urban planning studies [6–9]. Some researchers use various methods to construct 3D building models from remote sensing images and then integrate the 3D models into spatial databases and GIS systems to support planning and analysis applications [10–12]. Efforts have also been directed toward developing methods capable of producing global elevation data in digital formats [13–16].

There is huge application potential within reach if we can take advantage of 3D technology and remote sensing images in marine GIS systems, especially in ENC systems. Many researchers in this area have studied 3D charts for chart display systems, aiming to reduce the number of marine accidents caused by fatigue, mental overload, and limited awareness of the navigational situation. Porathe and Sivertun [17] presented a research project suggesting the use of real-time 3D visualization techniques normally used in simulation environments as a navigation aid. Arsenault et al. [18] presented a prototype of a 3D visualization system that overlaid a scanned paper navigational chart over a 3D bathymetry. Gold et al. [19] proposed a type of marine GIS system for maritime navigation safety. Ray et al. [20] introduced a 3D chart to facilitate the understanding of maritime behaviors and patterns at sea. Li et al. [21] presented an accurate method to describe underwater terrain.

However, most of the research listed above mainly focuses on the description of overwater information. Few researchers have studied how to fuse multiscale charts into 3D ENC systems based on underwater topography and remote sensing images. The ENC is critical to marine GIS systems. It is an abstraction and generalization of the real world that focuses on sea data, with less emphasis on land information. The remote sensing image contains rich land information, and the 3D underwater topography can provide more direct and intuitive sounding information. So, these data should be fused together to provide a more visually efficient ENC system.

In this paper, we propose an approach to fuse multiscale charts into 3D ENC systems based on underwater topography and remote sensing image. Our fusion approach is mainly divided into three parts: view-dependent visualization, map sheet processing, and object rendering. Then, we introduce the picking-up method for information inquiry. Finally, we implement our method in our system AIC 3D ECDIS. The application shows the effectiveness of our approach.

2. Fusion Approach

In order to provide richer and more intuitive geospatial information for navigation, we use the remote sensing image to describe the overland information and use the 3D underwater topography to directly describe the sounding data. Artificial water surface objects and entity objects are extracted from the ENC and fused with them. Artificial water surface objects, such as depth contours, coastlines, depth areas, anchor zones, and prohibited areas, are organized into point, line, and polygon object groups. Entity objects, such as beacon lights, bridges, and port structures, are represented by 3D models. Other objects (e.g., land areas) in the ENC can be ignored. In this paper, we propose an approach to fuse these data together. Fusion effects are shown in Figures 7, 8, and 9.

However, the ENC is based on a scheme of subdivision and multiple scales. As can be seen from Figure 1, there is an overlap between adjacent map sheets, and there is an association between the different scale map sheets covering the same area. Therefore, in order to fuse multiscale charts into 3D ENC systems based on underwater topography and remote sensing image, four main problems should be solved:

(1) handle the multiscale problem;
(2) solve the map sheet splice problem;
(3) fuse multiscale charts with the remote sensing image;
(4) fuse multiscale charts with the underwater topography.

Based on the above analysis, we propose a method to solve these problems. The method consists of three parts: view-dependent visualization, map sheet processing, and object rendering. Firstly, the view-dependent visualization technology displays the charts according to their scales. Secondly, the map sheet processing method deals with the map sheet splice problem; there are two main steps in this process: map sheet clipping and map sheet splice. Thirdly, the rendering method solves the last two problems.

Figure 1: Electronic navigational chart.
2.1. View-Dependent Visualization

Essentially, the 3D ENC system is a 3D computer graphics system based on the 3D rendering pipeline. In order to display multiscale charts in 3D ENC systems, we must handle the scale problem in the viewpoint observation system. Here, we use view-dependent visualization technology to display multiscale charts; that is, the display of the charts is driven by changes in the viewpoint. The detailed steps of the view-dependent visualization are as follows.

(1) Calculate the height h (the distance between the reference point and the viewpoint) of the current viewpoint in the world-coordinate reference frame.

(2) Calculate the current display scale ds by the following equation:

ds = w / (2 · h · tan(θ/2)). (1)

In this equation, θ is the field-of-view angle measured in radians, w is the width of the viewport measured in pixels, and h is the height of the viewpoint measured in meters.

(3) Display the chart according to the current display scale and its drawing scale. According to the display requirements of the S-52 standard, the ENC should be displayed at its drawing scale, and it can be enlarged or reduced when needed; the zoom factor can be 0.5 to 3 times the drawing scale, although in practice the ENC can be enlarged without limit. Thus, we mainly consider the reduced state. In this step, we design a display condition for each chart. To be specific, if the display scale is larger than 1/2 of the drawing scale of the chart, the chart should be displayed; otherwise, it should not be displayed. The larger scale chart takes priority over the smaller scale chart; that is, the larger scale chart is displayed above the smaller scale chart.
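As an illustration of the display condition, here is a minimal Python sketch (all names are ours). The physical pixel size p used to turn the pixel-per-meter scale of (1) into a dimensionless map scale is an assumed value (0.28 mm, a common cartographic convention), not something the paper specifies.

```python
import math

def display_scale(viewport_width_px, fov_rad, height_m, pixel_size_m=0.00028):
    """Dimensionless display scale at the current viewpoint: the ground width
    visible across the viewport is 2*h*tan(theta/2) meters, shown on
    viewport_width_px pixels of assumed physical size pixel_size_m."""
    ground_width_m = 2.0 * height_m * math.tan(fov_rad / 2.0)
    return viewport_width_px * pixel_size_m / ground_width_m

def should_display(ds, drawing_scale_denom):
    """Display condition of step (3): show the chart when the display scale
    exceeds 1/2 of its drawing scale (denominator, e.g. 25000 for 1:25,000)."""
    return ds > 0.5 / drawing_scale_denom
```

For example, a 1:25,000 chart viewed from 1,000 m passes the condition, while the same chart viewed from 10,000 m does not.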

2.2. Map Sheet Processing

In order to directly and intuitively display underwater topography, artificial water surface objects such as depth contours, coastlines, depth areas, anchor zones, and prohibited areas are displayed in semitransparent mode. However, the multiscale charts overlap and interweave. This will seriously affect the display effect. We propose a method to deal with this problem. Our method of map sheet processing is a dynamic process based on the change in viewpoint. There are three key points in our method: 3D order, map sheet clipping, and map sheet splice. The detailed steps of this process are shown in Figure 2.

Figure 2: The map sheet processing.
2.2.1. 3D Order

In order to get the right result in the map sheet processing, the charts should be processed in order. We design a process order called "3D order" to adapt to the characteristics of the chart. In the third step of the map sheet processing, the "3D order" is shown in Figure 3. The x axis depicts the left-to-right direction, the y axis depicts the down-to-up direction, and the z axis is the scale axis, depicting the small-to-large direction. The charts are processed first from left to right, then from down to up, and finally from small scale to large scale. So, 3D order is the sequence x, y, z. This ensures that no chart is processed repeatedly or missed and that the larger scale chart takes priority over the smaller scale chart.

Figure 3: The schematic diagram for 3D order.

In order to facilitate description, here we give a schematic diagram for multiscale charts in Figure 4. Suppose the scale of chart A, chart B, and chart C is 1 : 25,000, the scale of chart D is 1 : 15,000, and the scale of chart E and chart F is 1 : 4,000. The process order should be the sequence B, A, C, D, F, E.
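The 3D order for this example can be sketched as a sort with the scale as the outermost key. The chart positions below are hypothetical (Figure 4 is not reproduced here); they are chosen only to illustrate how the sort reproduces the sequence B, A, C, D, F, E.

```python
# Hypothetical chart records: (name, scale_denominator, x, y), where (x, y)
# is the sheet position (left-to-right, down-to-up) and a larger denominator
# means a smaller scale (e.g. 25000 for 1:25,000).
charts = [
    ("A", 25000, 1, 0), ("B", 25000, 0, 0), ("C", 25000, 2, 0),
    ("D", 15000, 0, 0), ("E", 4000, 1, 0), ("F", 4000, 0, 0),
]

def three_d_order(charts):
    """Process from small scale to large scale (denominator descending),
    then from down to up (y ascending), then from left to right (x ascending),
    with x varying fastest."""
    return sorted(charts, key=lambda c: (-c[1], c[3], c[2]))
```

With these positions, `three_d_order(charts)` yields the charts in the order B, A, C, D, F, E.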

Figure 4: The schematic diagram for multiscale charts.
2.2.2. Map Sheet Clipping

Map sheet clipping is used to deal with the overlap between adjacent map sheets. When two charts overlap, we clip one map sheet against the other in the 3D order described above and update the geographic scope of each map sheet. Figure 5(a) shows the map sheet clipping result of the charts in Figure 4 at the same scale level. Figure 5(b) shows the final clipping result of the charts in Figure 4 across all scale levels.

Figure 5: The schematic diagram for the map sheet clipping.

This operation can be divided into two parts: map frame clipping and object clipping. The purpose of object clipping is to use the geographic scope of one chart to clip the artificial water surface objects and entity objects in another chart. The geographic scope of the chart (map frame), whether regular or irregular, is a polygon, and artificial water surface objects and entity objects are point, line, or polygon objects in the chart, so map frame clipping can be treated as a special case of object clipping. Here, we describe object clipping in detail. For a point object, we just need to judge whether the object lies inside the geographic scope of the reference chart and then perform the clip. For line and polygon objects, the clipping process is shown in Figure 6.
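For the point-object case, the inside test can be sketched with the standard even-odd ray-casting rule (our choice of algorithm; the paper does not prescribe one):

```python
def point_in_polygon(pt, poly):
    """Even-odd rule: cast a ray from pt toward +x and count edge crossings.
    poly is a list of (x, y) vertices of the chart frame."""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the ray's y level
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                inside = not inside
    return inside
```

A point object is kept or clipped depending on whether it lies inside the reference chart's frame polygon.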

Figure 6: The clipping process of line and polygon objects.
Figure 7: Overwater view effect of the fusion.
Figure 8: Water surface view effect of the fusion.
Figure 9: Underwater view effect of the fusion.

It should be noted that, according to the data transfer standard S-57, line and polygon objects in an ENC are segmented, and each part is marked with an ordinal number, direction, display attribute, and so on. We sort the boundary points of each object into counterclockwise order to simplify the following calculations and then process each segment in counterclockwise order.

Suppose segment AB of the object and segment CD of the frame are being processed. We first transform their coordinates to Mercator coordinates; the Mercator projection plane can be treated as the xy plane. Intersection judgment and intersection calculation can be performed by the following equation:

P = A + t · d1, (2)

where t, d1, and d2 are calculated by (3), (4), and (5), respectively. Consider

t = ((C − A) × d2) · n / ((d1 × d2) · n), (3)
d1 = B − A, (4)
d2 = D − C, (5)

where n is (0, 0, 1) and P is the intersection point. If the intersection point cannot be calculated by (2), that is, (d1 × d2) · n = 0 or the intersection parameter falls outside either segment, segment AB does not intersect segment CD. At last, the coordinates of P should be transformed back to latitude-longitude coordinates.
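The intersection calculation of (2)–(5) can be sketched in the Mercator (xy) plane, with n = (0, 0, 1), as follows (function names are ours):

```python
def cross_z(u, v):
    """z component of the 3D cross product of two vectors lying in the
    Mercator plane, i.e. (u x v) . n with n = (0, 0, 1)."""
    return u[0] * v[1] - u[1] * v[0]

def segment_intersection(a, b, c, d):
    """Intersection P of object segment AB and frame segment CD following
    equations (2)-(5); returns None when the segments do not intersect."""
    d1 = (b[0] - a[0], b[1] - a[1])            # (4): d1 = B - A
    d2 = (d[0] - c[0], d[1] - c[1])            # (5): d2 = D - C
    denom = cross_z(d1, d2)                    # (d1 x d2) . n
    if denom == 0:                             # parallel segments
        return None
    ac = (c[0] - a[0], c[1] - a[1])
    t = cross_z(ac, d2) / denom                # (3)
    s = cross_z(ac, d1) / denom                # parameter along CD
    if 0.0 <= t <= 1.0 and 0.0 <= s <= 1.0:
        return (a[0] + t * d1[0], a[1] + t * d1[1])   # (2): P = A + t*d1
    return None
```

For example, the diagonals of the unit square cross at their midpoint, while two parallel horizontal segments yield no intersection.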

In this clipping process, the intersection calculation and the clip operation are based on the counterclockwise order, and the calculation of each intersection point is followed by a clip operation. Boundary points that lie inside the frame of the reference chart should be removed. Frame points between two intersection points should be added to the boundary point set of the object in counterclockwise order. When all intersection calculations are finished, the clipping of the object is complete.

2.2.3. Map Sheet Splice

Map sheet splice is used to deal with the splice point after the map sheet clipping. Splice points of the same object in two adjacent charts may have different coordinates. To handle this problem, there are three commonly used methods: enforcement method, averaging method, and optimization method. For two splice points, enforcement method directly assigns coordinates of one point to the other point. This method is mainly used in the case of knowing which point is more accurate and reliable. Averaging method uses the average values of the coordinates of the two points to replace their coordinates. Optimization method is relatively complicated and needs lots of intersection calculations.

In order to improve the efficiency of the map sheet splice, we adopt the following strategy. Firstly, compare the scales of the two adjacent charts. Secondly, choose the splice method according to the comparison result: if the two charts have the same scale, we use the averaging method to deal with the splice points; if one chart has a larger scale than the other, we choose the enforcement method, because the larger scale chart is more accurate and reliable.
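The strategy can be sketched in a few lines (a minimal sketch; scales are represented by their denominators, so a smaller denominator means a larger, more reliable scale):

```python
def splice(p_a, scale_a, p_b, scale_b):
    """Merge two splice points from adjacent charts. p_a, p_b are (x, y)
    coordinates; scale_a, scale_b are scale denominators."""
    if scale_a == scale_b:
        # averaging method: same scale, average the coordinates
        return ((p_a[0] + p_b[0]) / 2.0, (p_a[1] + p_b[1]) / 2.0)
    # enforcement method: keep the point from the larger-scale chart
    return p_a if scale_a < scale_b else p_b
```

For example, two 1:25,000 charts average their splice points, while a 1:4,000 chart overrides a 1:25,000 chart.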

2.3. Rendering Method

There are two classes of ENC objects we use: artificial water surface objects and entity objects. Entity objects are represented by 3D models. Artificial water surface objects are organized into point, line, and polygon object groups; essentially, they are vector data. Most methods map the vector data onto the terrain in 3D GIS systems [22–24] and do not consider the rendering of multiscale vector data. However, this does not give an intuitive cognition of the artificial water surface object. We propose a method to intuitively describe the artificial water surface object and the underwater topography. The details of our method are as follows.

Extract entity objects from ENCs. Then, use 3D modeling tools, such as 3DS Max, SketchUp, and building information modeling (BIM) tools, to create 3D models for the representation of these objects.

Extract artificial water surface objects from ENCs. After the map sheet processing, the boundary points of each object are sorted into counterclockwise order. For point and line objects, we get their object attributes and spatial attributes for the subsequent rendering. For polygon objects, we find all the inner boundaries of each object and get their object attributes and spatial attributes. We also have to pay attention to the facing of the polygon: in most 3D graphics systems, the side whose boundary points run counterclockwise is the front side, which is why we use counterclockwise order.
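The counterclockwise ordering can be enforced with the shoelace formula (a standard technique; the paper does not state how the sorting is implemented):

```python
def signed_area(points):
    """Shoelace formula: positive for a counterclockwise boundary,
    negative for a clockwise one."""
    a = 0.0
    n = len(points)
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        a += x1 * y2 - x2 * y1
    return a / 2.0

def ensure_ccw(points):
    """Reverse the boundary when it is clockwise, so the polygon's front
    side (counterclockwise winding) faces the viewer."""
    return list(points) if signed_area(points) > 0 else list(reversed(points))
```

This makes every polygon front-facing before it enters the rendering pipeline.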

Pretreat before rendering. Firstly, use the view frustum to cull the scene [25]; this effectively reduces the amount of data in the scene. Then, calculate the display attribute δ of each object by (6). If δ is less than zero, it indicates that the extent of the object is less than 1.0 pixel in the current scene; otherwise, the object should be displayed. Consider

δ = (max(l_x, l_y, l_z) · w) / (2 · d · tan(θ/2)) − 1. (6)

The parameters of the equation are as follows:

l_x, l_y, l_z: side lengths of the minimum bounding box of the object, measured in meters (m);
θ: field-of-view angle, measured in radians (rad);
w: width of the viewport, measured in pixels (px);
d: distance between the viewpoint and the center of the minimum bounding box of the object, measured in meters (m).

For entity objects, we use the distance between the viewpoint and the center of the minimum bounding box of the entity model to determine whether the model should be displayed. This method is also used in the rendering of the point object.
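A sketch of the culling test, assuming (6) takes the perspective-projection form δ = max(l_x, l_y, l_z) · w / (2 · d · tan(θ/2)) − 1 (our reconstruction; names are ours):

```python
import math

def display_attribute(lx, ly, lz, fov_rad, viewport_w_px, dist_m):
    """Projected extent of the object's minimum bounding box in pixels,
    minus the 1-pixel visibility threshold."""
    extent_m = max(lx, ly, lz)
    pixels = extent_m * viewport_w_px / (2.0 * dist_m * math.tan(fov_rad / 2.0))
    return pixels - 1.0

def should_render(lx, ly, lz, fov_rad, viewport_w_px, dist_m):
    """Display the object only when its extent reaches at least one pixel."""
    return display_attribute(lx, ly, lz, fov_rad, viewport_w_px, dist_m) >= 0.0
```

For example, a 10 m object 1,000 m away projects to several pixels and is rendered, while a 0.1 m object at the same distance is culled.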

Use the color blending method in 3D computer graphics to draw the artificial water surface objects at a certain elevation (the elevation of the chart datum, the tidal datum, the tide, etc.) on top of the underwater topography in semitransparent mode. Hence, artificial water surface objects do not obscure the underwater topography, which gives an intuitive cognition of the underwater depth. It should be noted that all drawing styles follow the display requirements of the S-52 standard. Then, draw the 3D entity models on the surface of the artificial water surface objects according to their positions and adjust their elevations for a better visualization effect.
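The semitransparent drawing relies on standard source-over alpha blending; a one-function sketch (the specific alpha values are our own illustration):

```python
def blend(src_rgb, dst_rgb, alpha):
    """Source-over blend: out = alpha * src + (1 - alpha) * dst, so a
    semitransparent surface object never fully hides the terrain below it."""
    return tuple(alpha * s + (1.0 - alpha) * d for s, d in zip(src_rgb, dst_rgb))
```

For example, a half-transparent red surface over blue terrain yields an even mix of both colors, so the terrain stays visible through the surface object.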

3. Picking Up Method

In order to query the information of ENC objects, we first need to solve a problem: picking up the object in the 3D ENC system. There are two classes of ENC objects that we use: artificial water surface objects and entity objects. For entity objects, we use the ray tracing method [26] to pick them up. For artificial water surface objects, the ray tracing method is relatively complicated and time-consuming, so we introduce a simple and ingenious method. The detailed steps are as follows.

(1) Convert the pick-up point to a pick-up range. The pick-up point is a screen pixel point; we extend it by several pixels in the x-axis and y-axis directions, respectively. This fuzzy picking-up method provides a better user experience, especially when picking up point objects and line objects.

(2) Clear the color buffer of the pick-up range. This operation is executed in the back buffer, so it does not affect the user's visual experience. Then, draw all objects with specified picking-up flag colors instead of their original colors; the same object uses the same flag color. This process is also executed in the back buffer. At last, read the value of the color buffer of the pick-up range to judge whether an object should be picked up. After all objects are judged, we obtain the list of picked-up objects.

(3) It should be noted that objects may overlap, so we have to use the final value of the color buffer of the pick-up range to make the judgment; that is, the judgment must be made after the rendering of all objects is finished.

This method avoids complicated spatial intersection calculations and is very efficient.
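The flag-color picking can be simulated in software (a sketch with hypothetical names; a real implementation would draw into the back buffer with flag colors and read the pixels back from the graphics API):

```python
def pick(objects, width, height, px, py, radius=2):
    """objects: list of (obj_id, draw) pairs, drawn in order so later objects
    overwrite earlier ones, matching the final value of the color buffer.
    Returns the set of object ids found inside the pick-up range."""
    buf = [[None] * width for _ in range(height)]        # cleared color buffer
    for obj_id, draw in objects:                          # flag-color pass
        draw(buf)
    picked = set()
    for y in range(max(0, py - radius), min(height, py + radius + 1)):
        for x in range(max(0, px - radius), min(width, px + radius + 1)):
            if buf[y][x] is not None:                     # read back the range
                picked.add(buf[y][x])
    return picked

def rect(obj_id, x0, y0, x1, y1):
    """Helper: a hypothetical object that fills an axis-aligned rectangle
    of the buffer with its flag value."""
    def draw(buf):
        for y in range(y0, y1):
            for x in range(x0, x1):
                buf[y][x] = obj_id
    return (obj_id, draw)
```

For example, picking near the center of a sea area with a beacon drawn on top returns both object ids, because the fuzzy pick-up range covers pixels of both flag colors.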

4. Application

We implement our methods in our system AIC 3D ECDIS [27]. AIC 3D ECDIS is a future-oriented ECDIS that supports global spatial data and 3D visualization. It uses a multiresolution pyramid model to organize terrain data and remote sensing image data, and it visualizes these data in a unified framework and interface. The system can be published on the web to provide application and data services through the network.

There are two ways of implementing our method: one is to run it on the server side and publish the data to the client; the other is to run it in the client program. In order to reduce the client's workload, we choose to implement the method on the server side. When the client program needs data, it sends a request, which includes state parameters such as viewpoint, viewport, world reference point, geographic scope, and display scale, to the server. On the server side, spatial analysis servlet components and spatial calculation servlet components are responsible for the map sheet processing. These servlet components receive parameters from the data service module and store the processed data in cache memory. Then, the data service module publishes the data to the client. The client program receives the data and uses the view-dependent visualization technology and the rendering method to visualize it. Here, we show the fusion effects in the client program in Figures 7, 8, and 9, where the green portion depicts the deep-water route and the symbol in the top right corner is the directional compass. As can be seen from these figures, entity objects, artificial water surface objects, remote sensing images, and underwater topographies are fused together.

5. Conclusions

This paper provides a solution for the fusion of multiscale charts in 3D ENC systems based on underwater topography and remote sensing image. It is the first time that this subject has been studied.

The remote sensing image, normally used in 3D GIS, is extended to enhance the understanding of the land environment in 3D ENC systems. The underwater topography gives an intuitive cognition of the underwater depth. The ENC data has important applications in the analysis and management of overwater spatial environments. Combining these data helps in understanding the navigation environment. Our method, based on the characteristics of the ENC, fuses these data together in the 3D ENC system, and we also present the picking-up method for ENC objects in the 3D ENC system.

At present, we implement the method on the server side. Further work has to be done in terms of data service, for example, server side concurrency calculation.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work was supported by the Fundamental Research Funds for the Central Universities (Grant no. 3132013303).

References

  1. J. Gong, "Review of the progress in contemporary GIS," Geomatics & Spatial Information Technology, vol. 27, no. 1, pp. 5–11, 2004 (Chinese).
  2. S. Xu, J. Wang, and Y. Sheng, "Review on the status of 3DGIS/4DGIS/TGIS and the development trends," Computer Engineering and Applications, vol. 41, no. 3, pp. 58–62, 2005 (Chinese).
  3. T. Toutin, "Geometric processing of remote sensing images: models, algorithms and methods," International Journal of Remote Sensing, vol. 25, no. 10, pp. 1893–1924, 2004.
  4. C. Yang, D. W. Wong, R. Yang, M. Kafatos, and Q. Li, "Performance-improving techniques in web-based GIS," International Journal of Geographical Information Science, vol. 19, no. 3, pp. 319–342, 2005.
  5. U. C. Benz, P. Hofmann, G. Willhauck, I. Lingenfelder, and M. Heynen, "Multi-resolution, object-oriented fuzzy analysis of remote sensing data for GIS-ready information," ISPRS Journal of Photogrammetry and Remote Sensing, vol. 58, no. 3-4, pp. 239–258, 2004.
  6. J. Epstein, K. Payne, and E. Kramer, "Techniques for mapping suburban sprawl," Photogrammetric Engineering and Remote Sensing, vol. 68, no. 9, pp. 913–918, 2002.
  7. X. Yang and Z. Liu, "Use of satellite-derived landscape imperviousness index to characterize urban spatial growth," Computers, Environment and Urban Systems, vol. 29, no. 5, pp. 524–540, 2005.
  8. B. N. Haack and A. Rafter, "Urban growth analysis and modeling in the Kathmandu Valley, Nepal," Habitat International, vol. 30, no. 4, pp. 1056–1065, 2006.
  9. M. K. Jat, P. K. Garg, and D. Khare, "Monitoring and modelling of urban sprawl using remote sensing and GIS techniques," International Journal of Applied Earth Observation and Geoinformation, vol. 10, no. 1, pp. 26–43, 2008.
  10. J. Hu, S. You, and U. Neumann, "Approaches to large-scale urban modeling," IEEE Computer Graphics and Applications, vol. 23, no. 6, pp. 62–69, 2003.
  11. I. Suveg and G. Vosselman, "Reconstruction of 3D building models from aerial images and maps," ISPRS Journal of Photogrammetry and Remote Sensing, vol. 58, no. 3-4, pp. 202–224, 2004.
  12. L. Cheng, J. Gong, M. Li, and Y. Liu, "3D building model reconstruction from multi-view aerial imagery and lidar data," Photogrammetric Engineering and Remote Sensing, vol. 77, no. 2, pp. 125–139, 2011.
  13. T. Toutin and P. Cheng, "DEM generation with ASTER stereo data," Earth Observation Magazine, vol. 10, no. 6, pp. 10–13, 2001.
  14. R. Zomer, S. Ustin, and J. Ives, "Using satellite remote sensing for DEM extraction in complex mountainous terrain: landscape analysis of the Makalu Barun National Park of Eastern Nepal," International Journal of Remote Sensing, vol. 23, no. 1, pp. 125–143, 2002.
  15. A. Hirano, R. Welch, and H. Lang, "Mapping from ASTER stereo image data: DEM validation and accuracy assessment," ISPRS Journal of Photogrammetry and Remote Sensing, vol. 57, no. 5-6, pp. 356–370, 2003.
  16. X. Liu, "Airborne LiDAR for DEM generation: some critical issues," Progress in Physical Geography, vol. 32, no. 1, pp. 31–49, 2008.
  17. T. Porathe and A. Sivertun, "Information design for a 3D nautical navigational visualization system," in Proceedings of the 8th International Conference on Distributed Multimedia Systems (DMS '02), San Francisco, Calif, USA, September 2002.
  18. R. Arsenault, M. Plumlee, S. Smith, C. Ware, R. Brennan, and L. Mayer, "Fusing information in a 3D chart-of-the-future display," in Proceedings of the U.S. Hydrographic Conference (US HYDRO '03), Biloxi, Miss, USA, March 2003.
  19. C. M. Gold, M. Chau, M. Dzieszko, and R. Goralski, "The marine GIS—dynamic GIS in action," International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. 35, pp. 688–693, 2004.
  20. C. Ray, R. Goralski, C. Claramunt, and C. Gold, "Real-time 3D monitoring of marine navigation," in Proceedings of the 5th International Workshop on Information Fusion and Geographic Information Systems (IF&GIS '11), pp. 161–175, Brest, France, May 2011.
  21. Y. Li, P. Chen, and Z. Dong, "Sensor simulation of underwater terrain matching based on sea chart," in Proceedings of the International Conference on Computer Science, Environment, Ecoinformatics, and Education (CSEE 2011), Wuhan, China, August 2011, Part III, vol. 216 of Communications in Computer and Information Science, pp. 89–94, Springer, Berlin, Germany, 2011.
  22. M. Schneider and R. Klein, "Efficient and accurate rendering of vector data on virtual landscapes," Journal of WSCG, vol. 15, no. 1–3, pp. 59–65, 2007.
  23. C. Dai, Y. Zhang, and J. Yang, "Rendering 3D vector data using the theory of stencil shadow volumes," The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. 37, pp. 643–647, 2008.
  24. H. Chen, X. Tang, Y. Xie, and M. Sun, "Rendering vector data over 3D terrain with view-dependent perspective texture-mapping," Journal of Computer-Aided Design and Computer Graphics, vol. 22, no. 5, pp. 753–761, 2010 (Chinese).
  25. F. Losasso and H. Hoppe, "Geometry clipmaps: terrain rendering using nested regular grids," ACM Transactions on Graphics, vol. 23, no. 3, pp. 769–776, 2004.
  26. M.-Y. Pan, T. Liu, D.-Q. Wang, D.-P. Zhao, and X.-Y. Zhang, "Web multidimensional digital waterway monitoring platform," Journal of Traffic and Transportation Engineering, vol. 14, no. 2, pp. 97–103, 2014 (Chinese).
  27. T. Liu, D. Zhao, and M. Pan, "Generating 3D depiction for a future ECDIS based on digital earth," Journal of Navigation, vol. 67, no. 6, pp. 1049–1068, 2014.