Abstract

With the rapid development of the information society, a large amount of vague or uncertain information appears in everyday English interpretation. Processing uncertain information is an important research topic in the field of artificial intelligence. In this paper, we combine the three-branch concept lattice and linguistic values with the digital elevation model and propose the three-branch fuzzy linguistic concept lattice together with an attribute reduction method. We also propose an improved serial algorithm for sink accumulation. The improved algorithm changes the order in which cells are calculated: after the accumulation of one "sub-basin" is computed, all cells of the next "sub-basin" are computed, until all cells have been processed. The improved algorithm reduces the memory overhead of the calculation, relieves the pressure of cells entering and leaving the queue, and improves computational efficiency. Compared with the commonly used recursive and nonrecursive accumulation algorithms, the improved algorithm is about 17% faster than the nonrecursive algorithm at the $10^6$-cell level, and the computation time of the recursive algorithm is about 3 times that of the improved algorithm. Because the serial sink accumulation algorithm is an important component of the parallel calculation of sink accumulation, and the improved algorithm has a shorter execution time, this paper applies the proposed improved serial accumulation algorithm to the parallel calculation of accumulation.

1. Introduction

The semantic ambiguities in a corpus do not create serious obstacles for the source-language audience in understanding the information, but they make it difficult for the target-language audience to accurately grasp the speaker's intention, and different people may interpret the same word differently. Therefore, in addition to conveying the basic information of the corpus to the target-language audience, the interpreter should also eliminate, as much as possible, the semantic ambiguities that may cause misunderstanding, so that the target-language audience can accurately grasp the intention of the speaker's discourse. In this corpus, semantic ambiguity is mainly reflected in two aspects: fuzzy restriction words and fuzzy words. Fuzzy restriction words modify the degree of fuzziness of other words and are not fuzzy in themselves, while fuzzy words are fuzzy in themselves. To make the target-language audience grasp the source-language information accurately, interpreters adopt different interpretation methods to deal with semantically fuzzy information [1]. Research related to digital elevation models has greatly contributed to the rapid development of English interpretation, and it is the common desire and goal of English researchers and interpreters to use grammatical elevation data to construct English interpretation models effectively, comprehensively, and accurately, and on this basis to better conduct research on the various fields of English. Among these tasks, the calculation of sink accumulation is a key part of constructing standard grammar and extracting spoken idiomatic elements, so it is important to study sink accumulation algorithms based on a digital elevation model.

In recent years, the resolution of DEM (digital elevation model) data has increased from 90 m and 30 m to the now-available submeter level. This increase in resolution has led to an increase in data volume: current DEM datasets are on the order of gigabytes and growing, with billions of cells [2]. Even when only relatively low-resolution data are available, DEMs may cover large regions; for example, the SRTM project has released 30-meter resolution data covering 80% of the world's land, and projects such as TanDEM-X can acquire DEM data with up to 12-meter resolution with global coverage. As DEM resolution and coverage increase, the data volume of digital elevation models becomes larger and larger. Although processor and memory performance have improved significantly, traditional serial algorithms are still not effective for fast information extraction, and sink accumulation algorithms for very large DEMs are still under development. The advent of parallel computing has greatly accelerated scientific computation and offered new solutions to problems that were previously too computationally intensive to solve. Parallel extraction of sink accumulation decomposes very large-scale DEM data into discrete parts that are executed concurrently, with different parts executed at the same moment on multiple computational resources, thus achieving efficient and fast information extraction [3]. Computational resources typically include computers with multiple processors or cores, or an arbitrary number of connected computers. Ideally, the more resources are devoted, the shorter the completion time of a task; executing only serial programs on contemporary computers would waste a lot of computational resources.
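To make the serial baseline concrete, the following is a minimal sketch of the conventional nonrecursive (queue-based) flow accumulation pass that the algorithms discussed in this paper build on. The D8 encoding, the grid layout, and all names are illustrative assumptions rather than the paper's implementation, and the flow-direction grid is assumed to be depression-free (acyclic).

```python
import numpy as np
from collections import deque

# Hypothetical D8 encoding: flow_dir[r, c] is an index 0..7 into OFFSETS,
# or -1 where the cell has no downstream neighbour (outlet / NoData).
OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
           (0, 1), (1, -1), (1, 0), (1, 1)]

def flow_accumulation(flow_dir):
    """Queue-based (nonrecursive) flow accumulation over an acyclic D8 grid."""
    rows, cols = flow_dir.shape
    acc = np.ones((rows, cols))                 # each cell contributes itself
    indeg = np.zeros((rows, cols), dtype=int)   # number of upstream neighbours

    def downstream(r, c):
        d = flow_dir[r, c]
        if d < 0:
            return None
        rr, cc = r + OFFSETS[d][0], c + OFFSETS[d][1]
        return (rr, cc) if 0 <= rr < rows and 0 <= cc < cols else None

    # Count upstream neighbours of every cell.
    for r in range(rows):
        for c in range(cols):
            ds = downstream(r, c)
            if ds:
                indeg[ds] += 1

    # Seed with ridge cells (no upstream inflow), then sweep downstream.
    q = deque((r, c) for r in range(rows) for c in range(cols)
              if indeg[r, c] == 0)
    while q:
        r, c = q.popleft()
        ds = downstream(r, c)
        if ds:
            acc[ds] += acc[r, c]
            indeg[ds] -= 1
            if indeg[ds] == 0:
                q.append(ds)
    return acc
```

The improvement described in the abstract changes only the order in which cells pass through this queue, finishing one "sub-basin" before starting the next, which shortens the queue and reduces the pressure of cells entering and leaving it.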

In the past two decades, the rapid development of networks, distributed systems, and multiprocessor computer architectures, together with the rapid growth of data volume and computational performance, has driven the trend toward parallelizing sink accumulation extraction. Many scholars have studied the parallel extraction of sink accumulation, which can handle ultra-large-scale DEM data, but problems remain, such as frequent communication between nodes, waste of compute-node resources, and frequent swapping between internal and external memory, all of which hurt time efficiency. It is therefore important to develop a scalable parallel processing framework for DEMs and to parallelize the sink accumulation extraction algorithm for ultra-large-scale DEMs.

2. Related Work

In the era of information explosion, many types of data, especially fuzzy data, are increasing year by year, so it is significant to study data with linguistic-value characteristics. Many scholars in China and abroad have engaged in research on linguistic values. The literature [4] first proposed fuzzy set theory and the related notion of linguistic variables, an effective tool for fuzzy information processing that is widely used in areas such as uncertainty reasoning, evaluation, and decision-making. Soon after, the literature [5] proposed the first fuzzy linguistic approach to approximate inference using linguistic variables, a great milestone in the study of linguistic values. Building on this work, the literature [6] proposes a consensus model for group decision-making based on linguistic preference relations to further address the problem of multigranularity decision-making in linguistic environments. The literature [7] discusses the extended linguistic-value representation model and its associated linguistic aggregation operator and constructs an effective interactive procedure that solves the multiattribute decision-making problem by modifying the satisfaction of the solution until the best solution is obtained. These works are breakthroughs in the field of multiattribute decision-making. The literature [8] discusses the association relationships between objects based on fuzzy linguistic terms and theoretical domains and defines a method for determining the semantics of fuzzy linguistic terms; clear fuzzy linguistic values can be widely used in knowledge mining and data analysis. Linguistic values have applications not only in decision-making problems but also in the field of reasoning. The literature [9] proposes linguistic truth-value inference methods that can be used in decision-making with the help of the lattice implication algebra in mathematics, combined with the basic ideas of linguistic truth-value logic.

To study in depth the structure of the linguistic truth-valued intuitionistic fuzzy lattice and linguistic truth-valued intuitionistic fuzzy algebra, a linear rule approach on intuitionistic fuzzy propositional logic has been investigated, combining the truth and falsity degrees of intuitionistic fuzzy propositional logic systems. The literature [10] gives the relevant operations and proofs for hesitant fuzzy linguistic-valued sets and probabilistic inference steps based on hesitant fuzzy linguistic-valued sets and triangular fuzzy numbers, which flexibly reflect the uncertainty of events. The literature [11] improves the G&M algorithm by proposing algorithms that are easier to implement and substantially more efficient and accurate. The literature [12] proposes a method in the EMFlow algorithm that compresses data blocks before writing them to limit the amount of memory written, and achieves locality on reads by evicting the least-used blocks from memory with a least recently used (LRU) caching policy. To reduce the total number of I/O operations, the data, divided into chunks, are managed as a cache in a special data structure; in addition, a new strategy is used to subdivide island terrains, which are treated separately. The literature [13] presents three similar algorithms that employ shared-memory parallel processing. The input DEM is divided into blocks, and each block is associated with its own thread. The threads perform their computations in parallel and periodically synchronize their boundary information. All blocks are managed by a centralized thread that swaps out the least recently used data blocks and tries to prefetch the ones it will need. Although the number of processors required is reduced, the algorithm still requires frequent interprocess communication, and as the computation continues, many nodes may become functionally idle, wasting supercomputer service nodes and monopolizing resources.

The parallel algorithm proposed in the literature [14] has significant advantages, but its current parallel implementation covers only two algorithms, depression filling and single-flow-direction sink accumulation calculation, so the parallel framework is not yet complete for the full set of algorithms for ultra-large-scale DEM hydrological information extraction, which limits its effectiveness in practical applications. The literature [15] proposed a multiattribute decision-making method for uncertain attribute values and attribute weights by combining evidential reasoning methods with stochastic multicriteria acceptability analysis. In the literature [16], to deal with the diversity and uncertainty of knowledge types in complex industries, a hybrid knowledge-base diagnosis system based on evidence fusion is proposed that establishes different types of expert knowledge systems and adaptively assigns reliability weights to them, improving the utilization of information and the correctness of the system. The literature [17] proposes multistate rational decision-making based on evidential reasoning and prospect theory, constructs value functions and weight functions, and combines D-S reasoning to fuse the prospect values of multiple attributes and select the best solution. The literature [18] obtains reliability factors by quantifying the uncertainty of samples and the evidence generated, and combines multiple pieces of evidence generated from quantitative samples or qualitative knowledge of multiple attributes to propose an attribute classifier based on evidential inference rules that provides higher classification accuracy. The literature [19] proposes temporal causality modeling and inference methods that extend dynamic uncertainty causality graphs. The approach generates causal graphs continuously, allowing causality to permeate an arbitrary number of time slices and discarding the restrictive assumptions about the structure of the underlying graph commonly relied on in existing research.

3. Identification of Fuzzy Information in English Interpretation Based on the Digital Elevation Model

3.1. Rationale for the Application of Digital Elevation Models in English Interpretation

The digital elevation model, as a branch of the digital terrain model (DTM), is a solid ground model that portrays the state of surface undulation using an ordered numerical matrix, together with a dynamic system modeling method based on expert knowledge and historical data [20]. In English interpretation, grammar, word frequency, and sentence meaning can be abstracted into such an undulating model, and structurally, English interpretation fuzzy information can be defined as a kind of fuzzy weighted directed graph with feedback loops, whose structure is shown in Figure 1. It describes the basic behavior of a complex dynamic system in terms of concept nodes (i.e., objects, states, variables, or entities in the system), each of which has a precise meaning for the system under analysis, and uses signed weighted arcs to connect the concept nodes, representing the complex causal relationships between concepts.

Suppose there is a fuzzy cognitive graph with $n$ nodes, each node represents a concept, and the state of the concept is described by the state value of the node. The set of these nodes can be represented by the vector $C = \{c_1, c_2, \dots, c_n\}$, where $c_i$ denotes the $i$-th concept node. The state values of all nodes in the model at time $t$ are represented by the vector $A(t) = (a_1(t), a_2(t), \dots, a_n(t))$, where $a_i(t)$ denotes the state value of the $i$-th node at time $t$. The larger the state value of a node, the greater its impact on the system. When node $c_i$ influences node $c_j$, $c_i$ is called the cause node and $c_j$ the result node, and the causal relationship between them is represented by a directed arc from $c_i$ to $c_j$; the weight of this arc is denoted $w_{ij}$, whose magnitude determines the magnitude of $c_i$'s influence on $c_j$. The mutual causal relationships among all nodes form an $n \times n$ matrix, the weight matrix of the fuzzy cognitive graph, denoted by $W$.

The reasoning process of a fuzzy cognitive graph is the process by which nodes change each other's state through causal interactions. Assuming that the state of the fuzzy cognitive graph at time $t$ is $A(t)$, the state value of each node at moment $t+1$ is inferred using the following equation:

$$a_j(t+1) = f\Big(\sum_{i=1,\ i \ne j}^{n} w_{ij}\, a_i(t)\Big), \tag{6}$$

where $a_i(t)$ denotes the state value of the cause node $c_i$ at moment $t$, $a_j(t+1)$ denotes the state value of the result node $c_j$ at moment $t+1$, and $w_{ij}$ denotes the weight of the arc from the cause node $c_i$ to the result node $c_j$. The function $f$ is called a transition function, which serves to restrict the state value of the node at the next moment to a certain state domain; there are several common types of transition functions.
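As a concrete illustration of equation (6), the following is a minimal sketch of one inference step; the sigmoid transition function and all names here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def sigmoid(x, lam=1.0):
    """Sigmoid transition function; lam controls the slope near the origin."""
    return 1.0 / (1.0 + np.exp(-lam * x))

def fcm_step(A, W, f=sigmoid):
    """One inference step of a fuzzy cognitive map (equation (6)).

    A : (n,) state values a_i(t)
    W : (n, n) weight matrix with W[i, j] the weight of arc c_i -> c_j
    """
    # Each node j aggregates the weighted states of its cause nodes (i != j).
    net = A @ W - np.diag(W) * A     # drop the self-influence term w_jj * a_j
    return f(net)
```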

As can be seen from the abovementioned transition functions, the state domains of the nodes of the fuzzy cognitive graph differ according to the transition function used. The two-valued function restricts the range of values of each node to the set {0, 1}, using 0 to denote the stable state of the concept and 1 to denote the increasing state, but it cannot express the decreasing state of the concept. The three-valued function restricts the range of values of each node to the set {-1, 0, 1}, using -1 to denote the decreasing state, 0 the stable state, and 1 the increasing state. In addition, neither the fuzzy cognitive map using the binary function nor the one using the three-valued function can describe the degree of increase or decrease of a concept. The sigmoid function and the hyperbolic tangent function restrict the range of values of each node to [0, 1] and [-1, 1], respectively, so they can represent the increasing, decreasing, and stable states of a concept and describe the degree of increase or decrease. A parameter $\lambda$ determines the slope of the curves of the sigmoid and hyperbolic tangent functions near the origin, as shown in Figure 2. The larger the value of $\lambda$, the steeper the curve near the origin and the more the transition function approximates a step function; the smaller the value of $\lambda$, the flatter the curve near the origin and the more the transition function approximates a linear function. It has been shown that fuzzy cognitive maps have greater inference power when continuous transition functions are used. Therefore, the sigmoid and hyperbolic tangent functions are the most used transition functions. However, it should be noted that the choice of transition function usually depends on the system requirements, i.e., the role each node plays in the actual system.
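A sketch of the four transition function families just described, under the (assumed) convention that a strictly positive net input maps to the increasing state:

```python
import numpy as np

def binary(x):                    # range {0, 1}: stable vs. increasing
    return np.where(x > 0, 1, 0)

def three_valued(x):              # range {-1, 0, 1}: decreasing/stable/increasing
    return np.sign(x).astype(int)

def sigmoid(x, lam=1.0):          # range (0, 1); lam sets the slope at the origin
    return 1.0 / (1.0 + np.exp(-lam * x))

def tanh_transition(x, lam=1.0):  # range (-1, 1); graded, also lam-scaled
    return np.tanh(lam * x)
```

A larger `lam` pushes the two continuous functions toward a step, a smaller `lam` toward a linear response, matching the behavior shown in Figure 2.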

The inference formula (6) of the fuzzy cognitive graph shows that the state value of each node at one moment will, through the weighted directed arcs, affect the states of the nodes associated with it at the next moment, so that the state of the system is updated in every period that passes. After several iterations of updates, the system eventually reaches one of three states.

(1) The system reaches some equilibrium point, meaning that after several iterations the mutual causal interactions between the nodes reach an equilibrium state and the state of the system remains constant (stationary), i.e., $A(t+1) = A(t)$ for all $t \ge T$, where $T$ is a point in time after which the system state remains stationary.

(2) The system enters a finite loop state, meaning that from some point in time the state of the system enters a loop and exhibits cyclical behavior, i.e., $A(t+P) = A(t)$ for all $t \ge T$, where $T$ is some point in time after the system enters the cyclic state and $P$ is the period of the cycle.

(3) The system is in a chaotic state, i.e., the states of the system are not regular and cannot converge to an equilibrium point or limit cycle state.

A fuzzy cognitive graph using either a binary or a three-valued function as the transition function has only a finite number of states: a graph with $n$ nodes has $2^n$ different states if a binary function is used and $3^n$ different states if a three-valued function is used [21]. A fuzzy cognitive graph using a sigmoid or hyperbolic tangent transition function can describe the state of the system with fuzzy values in a continuous interval, so each concept has an infinite number of values and the graph has an infinite number of different states. Thus, a fuzzy cognitive graph with $n$ nodes whose transition function is a binary function will only end up in the equilibrium or limit cycle state, and no chaotic state will occur; since the system has only $2^n$ different states, it will return to a previously visited state after at most $2^n$ iterations of updates. Similarly, if the transition function is three-valued, the system will never enter a chaotic state but will eventually reach an equilibrium point or a limit cycle state with a maximum period of $3^n$. If the transition function is a sigmoid or hyperbolic tangent function, the system has an infinite number of different states, so it may end up not only in an equilibrium or limit cycle state but also in a chaotic state; such a system is also more sensitive, in that a small change in the weights of the arcs or the initial state values of the nodes may cause a dramatic change in its dynamic behavior.
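The three terminal behaviors can be checked numerically. The following illustrative sketch (names and tolerances are assumptions) iterates equation (6) and classifies the trajectory:

```python
import numpy as np

def simulate(A0, W, f, max_iter=1000, tol=1e-6):
    """Classify an FCM trajectory as equilibrium, limit cycle, or unsettled.

    Tolerance-based matching only approximates cycle detection for
    continuous transition functions; for binary/three-valued functions
    the matches are exact.
    """
    A = np.asarray(A0, dtype=float)
    seen = [A.copy()]
    for _ in range(max_iter):
        A = f(A @ W)                              # inference step
        if np.allclose(A, seen[-1], atol=tol):    # state stopped changing
            return "equilibrium", A
        for k, prev in enumerate(seen):           # state revisited earlier
            if np.allclose(A, prev, atol=tol):
                return "limit cycle", len(seen) - k
        seen.append(A.copy())
    return "chaotic or unsettled", None
```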

3.2. Application of Digital Elevation Models to Fuzzy Information Recognition in English Interpretation

We propose a method for computing coarse-grained object similarity based on the basic idea of granular computing, so as to obtain the coarse-grained object fuzzy linguistic formal context by simplifying the traditional fuzzy linguistic formal context. The traditional calculation method can obtain all the concepts in the coarse-grained object fuzzy linguistic formal context and thus analyze the intrinsic relationship between extents and connotations, but this method is relatively complex for computers. In this section, following the relevant ideas of graph theory, we propose the adjacency matrix and association matrix of the coarse-grained object fuzzy linguistic formal context, so as to obtain the weighted graph of the relations between attributes and objects. Finally, the correspondences in the weighted graph quickly and accurately yield the intrinsic relationship between extents and connotations.
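A minimal sketch of these two matrices, assuming the coarse-grained context has already been reduced to a crisp object-attribute incidence matrix (the example matrix is purely illustrative):

```python
import numpy as np

# Hypothetical crisp formal context: rows = objects, columns = attributes,
# I[g, m] = 1 iff object g has attribute m.
I = np.array([[1, 0, 1],
              [1, 1, 0],
              [0, 1, 1]])
n_obj, n_attr = I.shape

# Bipartite adjacency matrix over the combined node set (objects + attributes):
# object g is linked to attribute m exactly when I[g, m] = 1.
adj = np.zeros((n_obj + n_attr, n_obj + n_attr), dtype=int)
adj[:n_obj, n_obj:] = I
adj[n_obj:, :n_obj] = I.T

# Attribute association matrix: assoc[m1, m2] counts the objects shared by
# m1 and m2 -- the edge weights of the attribute-relation graph.
assoc = I.T @ I
```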

To further analyze the intrinsic relationship between objects and attributes, we preprocessed the coarse-grained object fuzzy linguistic formal context according to the definition. This identifies pairs of mutually equivalent attributes, of which one representative attribute is retained according to the definition, thus yielding the preprocessed coarse-grained object fuzzy linguistic formal context.

The three-branch concept lattice can express three parts: acceptance, rejection, and noncommitment. However, the classical three-branch concept lattice can only use "0" and "1" to represent the relationship between connotation and extension. This section combines the three-branch concept lattice with linguistic values and proposes the three-branch fuzzy linguistic concept lattice, which can express real-life fuzzy information and explain the meaning of the information with linguistic values.

Any node of the fuzzy cognitive graph can be used as an input node or an output node. In NHL, the expert is required to designate certain nodes in the model as output nodes for each problem, with the remaining nodes acting as initial stimuli or internal concepts of the system. The output nodes act as the exposed external features of the system and represent its final state. After each iteration of updating the state values of the nodes, NHL updates the weights of the arcs connecting the nodes. The formula for updating the weights is shown below.
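The exact update formula is not reproduced in the text above. As an assumption, the sketch below uses the Oja-type nonlinear Hebbian rule common in the NHL literature on fuzzy cognitive maps, $w_{ij} \leftarrow \gamma\, w_{ij} + \eta\, a_j (a_i - w_{ij}\, a_j)$, with learning rate $\eta$ and decay coefficient $\gamma$; it stands in for, and is not necessarily identical to, the paper's formula.

```python
import numpy as np

def nhl_update(W, A, eta=0.01, gamma=0.98):
    """One NHL weight sweep after a state update (assumed Oja-type rule).

    W : (n, n) with W[i, j] the arc c_i -> c_j;  A : (n,) current states.
    Strengthens arcs whose cause and effect activations co-vary.
    """
    A = np.asarray(A, dtype=float)
    # hebb[i, j] = a_i * a_j - w_ij * a_j**2
    hebb = np.outer(A, A) - W * (A ** 2)[None, :]
    W_new = gamma * W + eta * hebb
    np.fill_diagonal(W_new, 0.0)      # no self-loops
    return W_new
```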

Organizing these response sequences in matrix form results in a final set of state response matrices, where $K$ is the number of different initial state vectors, $n$ is the number of concepts in the system to be studied, and $L$ is the length of the response sequence. The $j$-th column of each $L \times n$ state response matrix is the response sequence of the $j$-th concept of the system under the action of that initial state vector, and these $K$ state response matrices describe the dynamic behavior of the system under different initial conditions. To reproduce the dynamic behavior of the system, the nodes of the fuzzy cognitive graph are used to describe the concepts of the system under study, and the ultimate goal of learning is to determine a suitable weight matrix so that the constructed fuzzy cognitive graph can reproduce the state response matrices under the stimulus of the same initial state vectors.
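A minimal sketch of how one such state response matrix can be assembled by running the map from a single initial stimulus (function and variable names are illustrative):

```python
import numpy as np

def response_matrix(A0, W, f, length):
    """Stack the trajectory from initial state A0 into an (length, n) matrix;
    column j is the response sequence of concept j."""
    A = np.asarray(A0, dtype=float)
    rows = [A.copy()]
    for _ in range(length - 1):
        A = f(A @ W)
        rows.append(A.copy())
    return np.vstack(rows)

# One matrix per initial state vector; together the K matrices describe the
# system's dynamic behaviour under different initial conditions:
# responses = [response_matrix(A0, W, sigmoid, L) for A0 in initial_states]
```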

The DEM transforms the learning problem of fuzzy cognitive graph weights into a constrained least-squares problem; solving this problem yields the maximum likelihood estimate of the weight matrix corresponding to the noisy data, that is, the target weight matrix. This reduces the sensitivity of the DEM to noisy data and gives it good noise immunity, while an L1 penalty on the weights ensures the sparsity of the large-scale fuzzy cognitive graph weight matrix. By invoking the method of multipliers to solve the derivation formulas (13)–(15) of the DEM, the weight matrix can be obtained in polynomial time, which greatly improves the learning speed; the learning process is simple and fast.
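A hedged sketch of this learning scheme, recovering each column of the weight matrix by L1-penalized least squares. scikit-learn's Lasso solver is used here purely as a stand-in for the multiplier-method solver of formulas (13)–(15), which are not reproduced, and the sigmoid inversion assumes the transition function of Section 3.1.

```python
import numpy as np
from sklearn.linear_model import Lasso

def learn_weights(responses, lam=1.0, alpha=0.01):
    """Fit a sparse FCM weight matrix from state response matrices.

    Inverting the sigmoid turns each column of W into a linear
    least-squares problem; the L1 penalty (alpha) enforces sparsity
    and damps sensitivity to noise.
    """
    X = np.vstack([R[:-1] for R in responses])    # states at time t
    Y = np.vstack([R[1:] for R in responses])     # states at time t + 1
    Y = np.clip(Y, 1e-6, 1 - 1e-6)
    Z = np.log(Y / (1 - Y)) / lam                 # sigmoid^{-1}: net inputs
    n = X.shape[1]
    W = np.zeros((n, n))
    for j in range(n):                            # one sparse fit per column
        W[:, j] = Lasso(alpha=alpha, fit_intercept=False).fit(X, Z[:, j]).coef_
    return W
```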

For example, according to the discussion criterion in the algorithm, English interpretation should be discussed "specifically," i.e., by sharing concrete cases, rather than "generally," i.e., by merely talking about one's ideas. The basic meanings of "generally" and "specifically" in the original describe the degree of specificity of the content of the conversation. Under normal circumstances, the translator can simply translate the fuzzy restriction words directly, which not only retains the meaning of the original to the greatest extent but also saves most of the translator's effort. In this case, however, if the translator simply adopts direct translation, the rendering becomes "because we do not want to talk in general, we want to talk specifically," and the target-language audience will be confused about the content of the conversation: what is a "general" conversation, and what is a "specific" discussion? To keep the two fuzzy restriction words "generally" and "specifically" from disturbing the target-language audience, the translator draws on the discussion criterion proposed in the algorithm, that is, elaborating one's own specific experience of a viewpoint.

4. Experimental Verification and Conclusions

Because of the way attention is allocated when translating, human translators are very likely to add filler words or to repeat themselves, slowing the pace of the translation and buying time to understand and transform the next sentence. During a speech, the speaker will also include some verbiage and inflections to help sort out the ideas. Since machine translation has no attention-allocation issue and boils down to translating information content from the source language into the target language, it does not actively add filler words or start a new sentence to overturn what has already been translated. Typically, the digital elevation model is used to compare the sentences before and after, identify repetitive information, and integrate the information in the source language into a new sentence in concise language. The challenge that machine translation needs to address in terms of declarative fluency is the judgment of the source language, i.e., whether it can screen certain words in the source language as filler words or as words with logical meaning. The judgment of filler words also involves the problem of logical coherence, so the author splits this problem here, classifying filler words that do not affect the logic of the whole sentence as a problem of declarative fluency, while the part that affects the normal understanding of the interpretation user is classified as a problem of logical coherence. In addition to filler words and slurring, repetition and redundancy are also common problems in spoken language and pose a greater challenge to machine translation. Repetition, redundancy, and slips of the tongue arise from the impromptu nature of spoken language and the fast pace of speech, which leave the speaker no room for repeated thought and condensation. Usually, this requires interpreters to give full play to their subjective initiative, compare the sentences before and after, find the repetitive information, and integrate the information of the source language into new sentences in concise language. Figure 3 shows the comparison of presentation fluency of English interpretation models based on the digital elevation model.

Due to the difference in language habits between Chinese and English, a simple meaning in Chinese may require a subordinate clause to be expressed in English. Here, the speaker is referring to "this picture indicates that," and all four translation systems use a subject clause to express this simple meaning; in fact, the four words "this picture indicates that" can fully reflect the meaning of the source language and also make the audience pay more attention to the main information that follows.

The study of linguistic accuracy can be divided into two main aspects: accurate and authentic vocabulary, and grammatical correctness, as shown in Figure 4. In terms of vocabulary, besides finding the corresponding words in the source language and reducing deviations in meaning, attention should also be paid to the consistency, conciseness, and efficiency of word usage. Grammatically, machine translation must attend to issues such as clauses, singular and plural, tenses, and redundant or missing components. From the perspective of vocabulary, since spoken texts depend heavily on context, speakers sometimes do not use words as precisely as in scientific and technical texts and rarely have fixed words with corresponding standard translations as in political texts, which requires machine translation to choose the most suitable words according to the preceding and following text. To render a Chinese four-character expression, it is often necessary to use a subordinate clause or even a whole long sentence to explain it separately, which costs time and affects the pace of interpretation. Spoken Chinese is fine-grained, with vague and loose information and sometimes great logical defects, all of which pose serious problems for the grammar of machine translation.

An important practice of the digital elevation model for improving the correctness rate is clause splitting; that is, several clauses connected by commas in the source language are split into several independent sentences according to the grammatical structure of English, while the remaining Chinese translation systems use supplementary conjunctions to transform the fragmented small sentences into subordinate clauses dependent on the main sentence. The attribute reduction problem of the digital elevation model is also investigated, and an attribute reduction method for the object (attribute)-derived three-branch fuzzy linguistic concept lattice is proposed based on the discernibility matrix and discernibility function. The problem with the clause-splitting treatment is that although it reduces the possibility of grammatical errors to a certain extent, the lack of logical articulation between sentences affects the expressive effect; conversely, an unreasonable approach may cause more grammatical problems and runs the risk of incorrect logical additions leading to semantic confusion. Overall, there is much room for improvement in both treatments. Part of the reason for redundant or fragmented sentence components may be that spoken Chinese is fragmented: a sentence is likely to be made up of several separate chunks, and the logical relationships between the chunks have to be filled in automatically by the listener or the human translator. Because there is a lot of extralinguistic information in the spoken context, otherwise inarticulate utterances become clear. However, this logical connection step, simple for humans, may be more difficult for machine translation: a digital elevation model translation system that cannot recognize the relationship between the chunks of speech and the main clause cannot make the chunks act as the corresponding constituents, and can only stack them in the sentence, causing grammatical errors.

Spoken Chinese is characterized by ambiguous and loose information and sometimes even logical errors. However, people do not demand high clarity of the "pure" language in speech, because in addition to the content of verbal expression, extraverbal factors such as intonation, facial expressions, and gestures supplement the meaning the speaker wants to express. The specific context, combined with the specific mode of expression, makes utterances that would easily be ambiguous on their own very clear and understandable in spoken language. However, such expressions, which do not rely to a large extent on the language itself, pose greater difficulties for machine translation. The problems of the four machine translation systems based on different algorithms in terms of logical coherence are shown in Figure 5. Setting aside the other grammatical and expressive problems of machine translation and looking only at the logic of the example sentence, the source language emphasizes the height of New York's pillars at both the beginning and the end of the sentence, and the first sentence is connected to the second by the very colloquial "that is." The "that is" here is intended to express a cause-and-effect relationship, as in "The pillars are high in New York because of the high output per square kilometer." Since the conjunction "that is" is not as direct as "because," the speaker discovers the logical relationship only after the second clause, so the third sentence begins with the conjunction "so." This is perfectly acceptable in spoken language, but in written language it results in a mishmash of words and repetition. The logic of the source language is not very clear because of its colloquial character, but the main body of the sentence is very clear once all the modifiers are removed, so this subject-verb-object logic should also be followed in the translation.

Compared with previous phrase-based and statistical machine translation systems, neural machine translation systems represent words continuously, which improves the generalization ability of machine translation but also makes it prone to unfaithful translation. Unfaithful translation in this context means that the model generates words in the target language that guarantee the fluency of the target-language utterance but do not accurately reflect the semantic information of the source-language sentences. The possible causes of translation unfaithfulness in this comparative study are filler-word misjudgment, unclear referents, multiple meanings of words, and incomplete information. The specific performance of the four systems in terms of information fidelity is shown in Figure 6.

The correctness of the algorithm is verified by comparing the serial computation results with the parallel computation results. Exhaustive verification at very large data sizes is impractical, so data of different sizes were chosen for correctness verification, as shown in Figure 7. The correctness of the accumulation calculation in this paper is verified by checking the serial and parallel algorithm results against each other; the results are not verified with software such as ArcGIS or TauDEM, because using those tools would introduce a costly dependency that cannot be included in the source code. Therefore, data of different sizes were chosen and a self-testing approach was used to ensure the correctness of the computed results. In the test, different algorithms are used to calculate sink accumulation based on multiple flow directions, and the accumulation of the same data is calculated using the digital elevation model algorithm proposed in this paper. After the parallel calculation, the data blocks of the dataset were merged using GDAL to facilitate comparison with the serial algorithm. The test results show that there are deviations between the serial and parallel calculation results, but they are very small; the reason for these deviations is the different order in which cells are accumulated during the calculation. It has been verified several times that a different order of multiplication and addition leads to small errors in the result, even when 64-bit double-precision storage is used. Moreover, because accumulation is a continuous summation process, upstream errors propagate to downstream cells, which may accumulate and cause deviations in the results; however, these deviations stay within a bounded range and do not constitute substantive errors. Therefore, the error tolerance for the validation in this paper is set to ±0.0001. All validation data conform to this tolerance, which verifies the correctness of the algorithm.
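The self-test reduces to an element-wise comparison within the stated tolerance; a minimal sketch (array loading omitted, names illustrative):

```python
import numpy as np

def results_match(acc_serial, acc_parallel, tol=1e-4):
    """Serial and parallel accumulation grids must agree within +/-0.0001.

    Exact equality is not expected: the two variants sum upstream
    contributions in different orders, and those rounding differences
    propagate downstream.
    """
    diff = np.abs(acc_serial - acc_parallel)
    return bool(np.all(diff <= tol)), float(diff.max())
```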

DEM elevation accuracy evaluation is an important element of evaluating the accuracy of DEM results, and this paper mainly adopts the commonly used root-mean-square error (RMSE) and mean error (ME) evaluation models as the elevation accuracy models. The RMSE measures the degree of deviation of the DEM elevation values relative to the true elevations of the test points and is evaluated objectively by statistical means. The mean error evaluates the average of the DEM elevation errors relative to the test-point data, which reflects the distribution of errors and evaluates the quality of the English interpretation DEM model. The elevation accuracy check uses cross-validation: the elevation points that did not participate in the modeling after cleaning, i.e., 20% of the total elevation points, are used to check the statistical elevation accuracy of the DEM by the checkpoint method, and an accuracy check report is formed (as shown in Figure 8). The results show that the RMSE of all experimental areas is much less than 0.2, verifying that the classified construction method in this paper achieves good modeling accuracy.
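Both accuracy measures are direct statistics over the held-out checkpoints; a minimal sketch, assuming paired arrays of interpolated and true elevations:

```python
import numpy as np

def elevation_accuracy(dem_values, true_values):
    """RMSE and mean error (ME) of DEM elevations at the checkpoints
    (the 20% of elevation points withheld from modelling)."""
    err = np.asarray(dem_values) - np.asarray(true_values)
    rmse = float(np.sqrt(np.mean(err ** 2)))
    me = float(np.mean(err))     # signed: reveals systematic bias
    return rmse, me
```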

5. Conclusion

In the era of data and information explosion, the amount of ambiguous and uncertain information in English interpreting is increasing day by day, so it becomes very important to deal with fuzzy information more accurately and quickly. In this paper, based on the digital elevation model and the fuzzy linguistic formal context, we propose the three-branch fuzzy linguistic concept lattice and its attribute reduction method, so that useful information can be filtered out during processing, which facilitates further reasoning and decision-making and improves work efficiency. The main research contents of this paper are as follows. To reduce the complexity of processing data, this paper combines the relevant knowledge of granular computing and formal concept analysis, proposes coarse-grained object similarity, and generates a coarse-grained object fuzzy linguistic formal context to simplify the formal context. Combining the digital elevation model and the coarse-grained object fuzzy linguistic formal context, a linguistic threshold is proposed to construct two pairs of three-branch fuzzy linguistic operators. The improved algorithm reduces the overhead space in the computation process, reduces the pressure on the cells entering and leaving the queue, and improves the computational efficiency. A construction model for the object (attribute)-derived three-branch fuzzy linguistic concept lattice and the object (attribute) three-branch fuzzy linguistic concept lattice is further given. The digital elevation model can handle a large amount of fuzzy linguistic-value information and is widely applicable. To better deal with the information redundancy problem, the attribute reduction problem of the digital elevation model is investigated in this paper, and an attribute reduction method for the object (attribute)-derived three-branch fuzzy linguistic concept lattice is proposed based on the discernibility matrix and discernibility function.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgments

The work was supported by the 2019 Major Theoretical and Practical Research Project of Shaanxi Social Science Association: On the Duality of Translator’s Identity and the International Promotion of Shaanxi Culture from the Perspective of Translation Economics (Grant No. 2019C077) and the 2020 School Level Project of Xi’an Fanyi University: The Belt and Road Initiative Language and Culture Research Base (Think Tank) (Grant No. 20KYJD02).