Abstract

This paper presents an in-depth study and analysis of big-data-driven data visualization and visual communication design models. The characteristics of new media and the definition of traditional media are analyzed, the importance of the new media environment is established through comparison, and successful cases of contemporary new media integration are examined. On this basis, the traditional science and technology intelligence service model is optimized: the components that constitute the science and technology intelligence wisdom service are refined so that the optimized model reflects the four characteristics of such a service, and the service is reconstructed using the literature research method. Design grounded in imagery schema theory proves expressive, inclusive, and innovative, while remaining highly consistent, at the cognitive level, with the visual representation of multidomain heterogeneous data and its display purpose. This internal logical relationship is systematically organized and analyzed in depth, and a methodology spanning subpattern extraction, visual interaction design, and the deep integration of visual representation is proposed in combination with specific application scenarios and cases.

1. Introduction

In the twenty-first century, digital information technology has become the dominant medium. In the new media environment of visual communication design, network communication is the principal mode of dissemination: it leads people toward new visual directions while connecting with the orientation of consumer culture, and, as the country continues to progress, it strengthens both the hard and the soft power of culture. Under the continuous change of the market economy, how visual communication transforms its forms of expression to meet the market is therefore particularly important, especially the visual expression forms and innovative design styles of the new media environment. This paper also examines people's mainstream ideologies regarding visual expression, together with the psychological changes of audience groups, popular elements, and other factors [1]. In a market economy, design that meets consumer psychology and activates the desire to buy helps boost gross domestic product [2]. In the international design arena, design works not only follow international fashion trends in their forms of expression but also draw on domestic regional culture, customs, and folklore, distilling Chinese regional elements and presenting them to the world. Attention should be paid not only to the embodiment of a work's value but also to the timeliness of the information it delivers, the time and efficiency of delivery, the communication carriers, and the feelings of the audience [3]. In today's progress of digital information technology, design expression should also be mass-oriented. If, in new media operation, the cost of dissemination far exceeds the economic benefit brought by the audience (the recipients of news media content and of various cultural and artistic works), the effective value of the media cannot be formed.

In recent years, new media have developed rapidly and many new communication companies have emerged, but few survive the baptism of the media storm. Without understanding the core expressive forms of new media communication design, blindly copying and borrowing from other works leads to failure, as does a design concept that goes too far beyond people's basic cognitive aesthetics; such examples abound [4]. While selecting the essence and discarding the dross, it is also important to grasp the timeliness and interactivity of new media and, with the aid of technology and information technology, to reflect the transformation of visual expression in innovative ways. Designers should therefore analyze consumers' ideology and the audience's psychology thoroughly before creating a work, rather than imposing obscure design concepts on it, which leads to design failure, inconsistency with the market, and a disproportion between input and outcome. In the data-driven era, the data resources faced by science and technology intelligence services show multidimensional, heterogeneous, and massive trends, which traditional data analysis methods, techniques, and tools find difficult to handle; this is one of the challenges faced by science and technology intelligence agencies. According to the task division of S&T intelligence services, which defines the time, scope, format, and processing techniques of multisource data, there are five key categories of S&T intelligence tasks: identification, tracking, comparison, evaluation, and prediction. Cartography plays an important role in geographic research, where its main use is the scientific exploration of geographic laws; initially it served as a visualization product of abstract thinking [5]. As cartography has shifted from traditional paper maps to screens and other computer hardware, the information-carrying capacity of maps and the accuracy of their expression have increased, and the ways in which humans interact with maps for data analysis have been enriched. With the continuous development of Internet technology, geographic information services have become increasingly open to the public, and web maps must serve a variety of user needs. The tools and workflows of online mapping platforms, together with the interactive features they support, have been categorized and summarized in a series of case studies of maps from government, academia, and research institutions. The overall conclusion is that the technical difficulty of online mapping is decreasing thanks to open data release, open-source software, and cloud service innovation. These new technologies enable increasingly powerful data exploration and are an important part of big data and open data; web-based online analytics, however, is currently less mature, although it has promising applications in areas such as urban analytics.

Process-based assessment with big data records and analyzes all data about the student learning process; fused with information about learning outcomes, it supports decisions about instructional improvement. Although teaching excellence is outcome-oriented, if the process is ignored it becomes difficult to reproduce the growth that led to that excellence or to replicate it more widely. For example, evaluating the computer technology and information literacy of computer science majors and teacher education majors by testing their computer application skills shows that computer science majors score significantly higher; this is an outcome evaluation. Based on the outcome evaluation alone, it is easy to overlook the fact that computer science majors take more computer-related courses during their training. It cannot simply be assumed that increasing the number of computer courses taken by teacher-training students will improve their information literacy; one must also consider that students enrolled in computer science programs are generally more interested in software, electronic technologies, and products, and that differences in logical thinking exist between teacher-training and non-teaching students.

2. Current Status of Research

Two attributes, the timestamp and the geographic location, distinguish spatiotemporal data from other data. The rapid spread of mobile terminals and sensor devices makes it possible to monitor and track, in real time, the massive behavioral data generated by research subjects on the Internet, and foreign scholars have applied spatiotemporal big data mining and analysis to reveal regular patterns [6]. In urban research, for example, such data can be used to study the spatial structure of cities and residents' behavioral patterns. At the same time, many scholars are interested in how to represent these new types of data and visualize the results of their analysis [7]. Visualization is an important means of expression for big data analysis: it exploits human cognitive ability for visual information and, with the help of human-computer interaction methods and interactive technologies, organically combines the respective strengths of humans and machines, helping people gain more intuitive and efficient insight into the information, knowledge, and wisdom behind big data [8]. Because spatiotemporal data carry spatial geographic information, they are often expressed in combination with geographic cartography, and the key is how to visualize and map the spatial dimension, the temporal dimension, and the corresponding attribute dimensions [9]. The space-time cube visualizes time, space, and events in the form of a 3D coordinate system, and a stacked map can fuse 2D and 3D views to expand the space available for displaying multidimensional attributes within the cube. However, a 3D coordinate system also has its limitations, especially when the spatiotemporal objects have many dimensions, so it is often combined with multidimensional data visualization methods to assist the expression [10]. The rapid development and mature application of artificial intelligence and big data bring both opportunities and challenges to science and technology intelligence work, and a systematic review and improvement of the research system for science and technology intelligence services in the data-driven era can provide development ideas for intelligent service models [11].
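To make the space-time cube concrete, the following minimal Python sketch plots a synthetic trajectory in a 3D coordinate system, with the two spatial coordinates on the horizontal plane and time on the vertical axis; the trajectory, the sampling, and the library choices (NumPy and Matplotlib) are illustrative assumptions rather than the system described in this paper.

```python
# A minimal space-time cube sketch with synthetic trajectory data:
# x/y are spatial coordinates and t (hours) is plotted on the vertical axis.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
t = np.linspace(0, 24, 200)                     # temporal dimension (hours)
x = np.cumsum(rng.normal(0, 0.1, t.size))       # simulated east-west movement
y = np.cumsum(rng.normal(0, 0.1, t.size))       # simulated north-south movement

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.plot(x, y, t, lw=1.5)                        # trajectory through space-time
ax.scatter(x[::40], y[::40], t[::40], c=t[::40], cmap="viridis")  # sampled events
ax.set_xlabel("x (space)")
ax.set_ylabel("y (space)")
ax.set_zlabel("t (time, h)")
ax.set_title("Space-time cube of a simulated trajectory")
plt.show()
```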

Faced with the trend toward intelligent technology and the multisource heterogeneity of massive data, the science and technology intelligence service model faces serious challenges. For example, confronted with massive data resources such as scientific and technical literature and patent documents, the traditional human-driven model cannot achieve semantic extraction from massive multisource heterogeneous resources, and characteristics of literature resources such as being multilingual and multidomain further hinder the accurate extraction of the data corresponding to a given intelligence task; this requires the development of semantic extraction tools with stronger capabilities [12]. Research on and application of schema theory is dominated by the field of foreign language teaching, with subdirections covering reading comprehension, listening comprehension, translation, and corpora, but the mainstream of research and application is still confined to its home discipline, linguistics. Research from a cognitive perspective is beginning to emerge, applying schema theory to gain insight into the processes and methods of language acquisition and to analyze systematically the cognitive patterns, pathways, strategies, and representations involved in foreign language learning [13]. Research on the reading comprehension process still occupies the main position, and cognitive research has not yet separated itself from the field of foreign language teaching; although psychology and philosophy have engaged with schema theory since its introduction, such research remains relatively rare and limited in depth.

The purpose of this paper is to construct a data-driven science and technology intelligence wisdom service model, to improve and enrich the corresponding research system, to help science and technology intelligence service organizations better understand their strengths and weaknesses, to re-engineer the service process according to this model, and to develop a wisdom service plan that suits each organization's characteristics. Existing research on data-driven science and technology intelligence wisdom services has not systematically analyzed its constituent elements and characteristics. Conducting such a systematic analysis on the basis of existing results is a key step toward improving the research system, establishing the correlations among the constituent elements of the science and technology intelligence wisdom service, and achieving a systematic understanding of each element.

3. Analysis of Data-Driven Data Visualization and Visual Communication Design Patterns

3.1. Data-Driven Algorithm Design for Data Visualization

A design concept that goes too far beyond people's basic cognition and aesthetics also leads to design failure, and such examples are everywhere; designers should therefore analyze the ideology of consumers and the psychology of the audience in depth before designing, rather than working with design concepts that are difficult to realize. Multidomain heterogeneous data is a kind of big data: it is an important trend and direction that emerged at a certain stage of big data development, proposed by researchers and users to expand data sources and domains and to handle complex data structures, and big data is an important source and basis for its generation, so a brief overview of big data is needed before discussing it [14]. Compared with the traditional service model, the components of the science and technology intelligence service model in the big data era should reflect the characteristics of intelligent service: the service has shifted from qualitative analysis based on expert experience and judgment to quantitative analysis based on scientific research big data, and the data-driven research paradigm will become its core paradigm. The intelligent service relies more heavily on intelligent devices and technologies, sensing user needs through intelligent data services, obtaining intelligence with intelligent analysis technologies guided by the concept of integration, and finally using big data on user behavior to push scenario-based services. Science and technology intelligence services are user-demand-oriented, data-driven, and highly knowledge-intensive, and the development and maturity of big data bring both opportunities and challenges to them. The science and technology intelligence analysis process that integrates scientific research big data covers seven dimensions: data, users, technology, intelligent intelligence, science and technology intelligence workers, intelligent service platforms, and methods. This paper analyzes these seven dimensions in depth, discusses on that basis the theoretical and practical foundations for realizing data-driven science and technology intelligence services, and then explains the four main characteristics of such services: demand sensitivity, data multisourcing, technical intelligence, and scenario-based service.

Principal component analysis (PCA) is a commonly used method for dimensionality reduction. High-dimensional data are generally very sparse, with most points lying near the boundaries of a high-dimensional hypercube, which makes analyses such as clustering and outlier detection, which depend on the distances between points and their degree of aggregation, largely meaningless. High-dimensional data also suffer from redundancy, which slows later processing and consumes storage resources [15]. It therefore becomes necessary to represent high-dimensional data in a low-dimensional form with as little information loss as possible. PCA is one of the simplest such methods: it transforms random variables whose components are originally correlated into a smaller number of uncorrelated principal components.
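As an illustration of this idea, the following sketch applies PCA with scikit-learn to synthetic correlated data and keeps only the components needed to retain 95% of the variance; the data, the variance threshold, and the preprocessing are assumptions chosen for demonstration.

```python
# A minimal PCA dimensionality-reduction sketch on synthetic correlated data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
latent = rng.normal(size=(500, 3))              # 3 underlying factors
mixing = rng.normal(size=(3, 20))               # spread over 20 correlated features
X = latent @ mixing + rng.normal(scale=0.1, size=(500, 20))

X_std = StandardScaler().fit_transform(X)       # standardize before PCA
pca = PCA(n_components=0.95)                    # keep components covering 95% variance
X_low = pca.fit_transform(X_std)

print("original dimensions:", X.shape[1])
print("retained components:", pca.n_components_)
print("explained variance ratio:", pca.explained_variance_ratio_.round(3))
```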

Data identification rests on the definition of data scope, structure, and attributes. In a data collection service, for example, data are identified, collected, and organized, then classified and stored to build thematic databases for the corresponding fields, such as an industrial technology database, a thematic database, an intelligence reports database, and a research database; after data fusion, the available knowledge is refined for data applications. New data are also generated at the application stage, such as user behavior data (evaluation and feedback), which can be fed back into the scope of data identification for recirculation. The construction of science and technology intelligence resources increasingly exhibits the characteristics of "big data," and intelligent data have become a new orientation for resource construction. In this process, the wisdom data service emphasizes semantic interconnection, which essentially raises the standard of data quality and value; the construction and management of intelligence resources in the science and technology field keep innovating, and specialized databases in various fields are springing up. The question is therefore how to embed the wisdom data service reasonably and, through the fusion, correlation, and semantic analysis of multisource heterogeneous data, realize the science and technology intelligence service. Intelligent data is the logical starting point of intelligent services, and the quality and quantity of data determine the depth and breadth of science and technology intelligence wisdom services, as shown in Figure 1.
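A minimal sketch of the identify-classify-store step described above is given below; the record fields, routing keywords, and database names are hypothetical and serve only to illustrate how identified records could be routed into thematic databases and how recirculated user feedback data would flow through the same path.

```python
# Illustrative identify -> classify -> store routing into thematic databases.
from collections import defaultdict

THEMATIC_ROUTES = {                 # hypothetical routing rules
    "patent": "industrial_technology_db",
    "report": "intelligence_reports_db",
    "paper": "research_db",
}

def classify(record: dict) -> str:
    """Assign an identified record to a thematic database by its declared type."""
    return THEMATIC_ROUTES.get(record.get("type", ""), "thematic_db")

def store(records: list[dict]) -> dict[str, list[dict]]:
    """Group records into their target thematic databases."""
    databases = defaultdict(list)
    for rec in records:
        databases[classify(rec)].append(rec)
    return databases

incoming = [
    {"id": 1, "type": "patent", "title": "Sensor fusion method"},
    {"id": 2, "type": "report", "title": "Annual industry scan"},
    {"id": 3, "type": "feedback", "title": "User evaluation"},   # recirculated behavior data
]
print({db: [r["id"] for r in recs] for db, recs in store(incoming).items()})
```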

Small tree-like structures are often detected as communities despite their sparse structure. Considering the intrinsic meaning of a community, such a result is counterintuitive and raises the question of whether community detection is therefore unusable on mobile call graphs. The results do need to be interpreted with caution, but, as is always the case with community detection methods, this particularity of communities in mobile call graphs appears to be a peculiarity rather than a problem, regardless of the network used [16]. Although they may have a simple shape, the detected communities can provide important information when combined with external information.
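For illustration, the following sketch runs a modularity-based community detection method from NetworkX on a synthetic graph that stands in for a mobile call graph; the graph generator, its parameters, and the choice of algorithm are assumptions, not the method used in the cited study [16].

```python
# A minimal community-detection sketch on a synthetic stand-in for a call graph.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Synthetic "call graph": dense groups of callers with a few cross-group calls.
G = nx.planted_partition_graph(l=4, k=25, p_in=0.2, p_out=0.01, seed=7)

communities = greedy_modularity_communities(G)
print("number of detected communities:", len(communities))
print("community sizes:", sorted(len(c) for c in communities))
# Small, tree-like fringes may still be reported as communities, which is why
# detected groups should be cross-checked against external information.
```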

The human brain processes visual features such as curvature and color much faster than it processes symbolic information. For example, an observer can easily find a single blue dot in a mass of red dots, or even notice it without being prompted. Information visualization exploits this highly developed human perceptual system to process large amounts of data quickly; a good visualization conveys information more efficiently and directly, which is an advantage of the method used in this study. Visual analytics combines the automatic analysis capabilities of computers with human cognition of visual graphics. On this basis, Lei Ren et al. analyze in detail the cognitive theory, information visualization theory, human-computer interaction, and user interface theory that support the analysis process, and discuss the information visualization techniques used in mainstream big data applications and the human-computer interaction techniques that support visual analytics. They use pixels of different colors to represent different data attributes, while the position and order of the pixels encode semantic information such as level of interest or item similarity; this visualization design can represent large data sets well. Many visualizations of this kind are based on hierarchical aggregation techniques and distinguish levels of detail for scalability, so little visual clutter remains after the data are visualized [17]. Much excellent research in the field has been built on such visualization designs. A proper spiral layout, for instance, can make trends in periodic data more visible when the right period is chosen; this widely used method reveals periodic patterns in temporal data along a spiral timeline by parameterizing the visual design.
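The following sketch illustrates the spiral-timeline idea on synthetic daily data with a weekly period, using a polar plot in which one turn corresponds to one period; the data, the chosen period, and the encoding choices are illustrative assumptions.

```python
# A minimal spiral-timeline sketch: one spiral turn corresponds to one period,
# so a correctly chosen period makes the recurring pattern visible.
import numpy as np
import matplotlib.pyplot as plt

days = np.arange(0, 28 * 4)                       # 16 weeks of daily values
period = 7                                        # one spiral turn = one week
values = 10 + 5 * np.sin(2 * np.pi * days / period) \
         + np.random.default_rng(1).normal(0, 1, days.size)

theta = 2 * np.pi * (days % period) / period      # angular position within the period
radius = 1 + days / period                        # one unit of radius per full cycle

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
sc = ax.scatter(theta, radius, c=values, cmap="coolwarm", s=30)
ax.set_yticklabels([])                            # radius only encodes elapsed cycles
fig.colorbar(sc, label="value")
ax.set_title("Spiral timeline (period = 7 days)")
plt.show()
```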

Taking the reaction-force schema as an example: forces occur in pairs, and the schema involves the active encounter of equally powerful physical or metaphorical opposing forces, the reaction force. The effects of reaction forces are common in daily life, such as bouncing a basketball or releasing a compressed spring. In design, the proper use of reaction forces can achieve good results, such as the proportional relationship between how long and how hard the cue is pressed in a billiards game; the game Jump a Jump uses the same principle, and pinball games and Angry Birds are typical applications of reaction forces in design, as shown in Figure 2.

Spatial schemas cover a variety of subschemas that are closely bound up with human activities such as orientation and movement in space, and they have wide application in design and in multidomain heterogeneous data visualization. Their prominent role lies not only in the basic composition of a visualization but also in their close cooperation with Gestalt-based design methods, which play an important part in the visual representation and interactive operation of multidomain heterogeneous data. The original purpose of data visualization is to map a wide variety of multisource heterogeneous data sets into geometric elements through algorithmic design and then output graphical images that show structure and reveal regularities in an intuitive, clear, and concise manner. Data is an asset, and data visualization is an important tool for revitalizing this asset. The set schema, which merges homogeneous data of the same kind into one graphical style; the link schema, which maps relationships between related data sets; and the part-whole schema, which relates points to the whole in complex data sets, are outstanding embodiments of this simplification. The appropriate use of these composite schemas in visual representation design provides intuitive insight into patterns, facilitates knowledge acquisition, and maximizes the value of information.

3.2. Visual Communication Design Model Design

After the data have been cleaned, problems such as sparse information density remain, so the information contained in the data still needs to be mined. Operators want to understand the communication patterns of user groups in the local network, including the distribution of local users under each characteristic and which user characteristics deserve attention, and they want objective ways to distinguish ordinary users from prominent ones; this requires objective and effective methods that can accurately mine user group information [18]. Traditional analyses are designed around line graphs and histograms, but only two or three data dimensions can be selected at a time, which greatly limits the depth of the information mined and leaves the analysis incomplete; if a dimensionality reduction algorithm is adopted instead, the displayed information is still incomplete when the user cares only about certain characteristics. The user data density of call records is sparse, and studying only a filtered subset raises the problem of incomplete analysis samples. In terms of layout, the simplest way to represent a user is as a point, but when the number of users is large, as on a shopping site with hundreds of millions of users, the points become too dense for the display area, causing visual confusion and occlusion. Therefore, this study needs to abstract sparse user records into machine-recognizable user features, with the aim of mining effective information, improving the efficiency with which the system processes and draws the data, and designing clear, attractive visualizations based on the analysis results that communicate the data effectively to users.
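As a minimal sketch of this abstraction step, the following code aggregates a toy table of call records into per-user features with pandas; the column names and the chosen features are hypothetical and stand in for the real call-record schema.

```python
# Abstracting raw call records into per-user features (illustrative schema).
import pandas as pd

calls = pd.DataFrame({
    "caller":     ["u1", "u1", "u2", "u2", "u2", "u3"],
    "callee":     ["u2", "x9", "u1", "x9", "x7", "x9"],
    "duration":   [120, 35, 60, 300, 15, 45],            # call length in seconds
    "in_network": [True, False, True, False, False, False],
})

features = calls.groupby("caller").agg(
    total_calls=("callee", "size"),
    distinct_contacts=("callee", "nunique"),
    intranet_calls=("in_network", "sum"),
    mean_duration=("duration", "mean"),
).reset_index()

features["extranet_calls"] = features["total_calls"] - features["intranet_calls"]
print(features)
```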

Owing to the space limitations of a Cartesian coordinate system, only a few features are selected in this paper to analyze the distribution of users. The numbers of intranet contacts and extranet contacts are compared first, as shown in Figure 3, with the horizontal coordinate representing the number of intranet contacts. From the figure it can be roughly seen that users' extranet contacts are of a larger order of magnitude than their intranet contacts; that is, users generally have more extranet contacts than intranet contacts. The figure also exposes the shortcomings of this depiction: most users are fairly similar to one another, and their communication behavior patterns are similar as well, while the data volume of this experiment is large, so there are many data points, occlusion and overlap are severe, and the information cannot be displayed clearly and intuitively.

In this paper, we want to obtain the specific feature data for each point and want the data points to be displayed without being covered or obscured, so that every point remains visible [19]. To this end, a checkerboard layout was adopted to plot each coordinate point; it met the needs of this analysis task and confirmed the above observations clearly and unambiguously. Even so, the layout still has limitations: the extent of the axes it can depict is finite, so users who fall outside the plotted coordinates cannot be observed. If the plot is navigated by zooming or sliding the coordinates, the user must slide several times to observe all the data points, which means the information cannot be viewed in a short time and conflicts with the visualization principle of conveying information quickly.
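The binned layout below is a minimal sketch of this idea: instead of plotting every user as a point, users are aggregated into grid cells whose color encodes how many users fall in each cell, so no point can occlude another; the synthetic contact counts and bin settings are illustrative assumptions.

```python
# A binned "checkerboard" layout that replaces overlapping points with cell counts.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(3)
intranet = rng.poisson(5, 50_000)                 # simulated intranet contact counts
extranet = rng.poisson(20, 50_000)                # simulated extranet contact counts

fig, ax = plt.subplots()
h = ax.hist2d(intranet, extranet, bins=25, cmap="Blues")   # each cell aggregates users
fig.colorbar(h[3], label="number of users")
ax.set_xlabel("intranet contacts")
ax.set_ylabel("extranet contacts")
ax.set_title("Binned layout avoids point occlusion")
plt.show()
```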

Reverse thinking, which can also be regarded as a kind of divergent thinking, expresses what people take for granted in the opposite mode of thought, so that the resulting design gives people a new visual impact. In visual expression, for example, a work that treats a common theme with conventional thinking struggles to stand out among many designs and tends to cause visual fatigue, whereas reverse thinking gives the audience a sense of visual impact at first sight. Reverse thinking requires considering the universality of simple elements familiar to the audience, such as changes of color, opposite sizes, or the choice of opposites, any of which can become the point of transformation. By using guiding and challenging design concepts to probe the audience's blind spots, breaking conservative design conventions, and boldly adopting new elements and new compositional methods, the designer can exploit the fact that audiences often both recognize and question things presented through reverse thinking, which is itself a breakthrough from the traditional approach. Designers therefore need to notice the novelty in reverse thinking, establish new design concepts, and focus on the sense of innovation, as shown in Figure 4.

The results of multisource data fusion are used to reveal different dimensions of an event or activity, showing facts or patterns side by side; the same facts or patterns may be hidden in different forms within the data network, and data of the same form may express one or more dimensions of the same facts or patterns. Multisource data fusion is the basis of computational intelligence and embodies quantification, automation, and fusion thinking [20]. In decision research, the task is decomposed into subtasks, a multisource data fusion model is constructed for the data sources of each subtask, and the process is oriented to the task context: data collection, processing, and analysis should be oriented to the context of the subtask and cover all the data sources the task requires. Data-driven thinking is gradually moving toward synthesis, multiple sources, and refinement, focusing on the whole rather than on traditional samples, on precision rather than mere efficiency, and on better reflecting cause-and-effect relationships, thereby overturning the previous model of intelligence task interpretation. Multisource data fusion places more emphasis on complementarity between data and on their ability to corroborate one another, favoring breadth and scale, and at the algorithmic level, targeted data fusion algorithms are becoming more sophisticated in various fields.
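The following sketch illustrates task-oriented multisource fusion in its simplest form, joining hypothetical records about the same organizations from three assumed sources so that they can complement and corroborate one another; the source names and fields are illustrative.

```python
# A minimal multisource fusion sketch: records from different sources about the
# same entities are merged into one task-oriented table.
import pandas as pd

papers = pd.DataFrame({"org": ["A", "B", "C"], "publications": [120, 45, 300]})
patents = pd.DataFrame({"org": ["A", "B", "D"], "patents": [30, 5, 12]})
funding = pd.DataFrame({"org": ["A", "C", "D"], "grants_musd": [8.5, 20.0, 1.2]})

# An outer join keeps entities covered by only some sources, so the sources can
# complement and corroborate each other instead of silently dropping records.
fused = papers.merge(patents, on="org", how="outer").merge(funding, on="org", how="outer")
print(fused)
```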

4. Analysis of Results

4.1. Data Visualization Results

An important challenge in visualizing multidomain heterogeneous data is that the designer must control the amount of information in the visualization; too much or too little information causes difficulty for the user receiving it. Visualizing too little data wastes resources, typically data that contain only two opposing attribute values, such as present versus absent, a gender comparison, or a pass-versus-fail ratio, where the values sum to 100% and one attribute can be derived from the other; such data can be displayed in other, more direct ways and have no visualization value. The opposite situation arises when designers want to express too much information, assuming that a visualization containing more information is more valuable; in fact, an overload of information not only increases the complexity of the visualization but also makes the visual schema confusing and difficult to interpret, so that end users struggle to understand it, necessary information is obscured, and comprehension of the data suffers. The combination of attribute and composite schemas can alleviate this problem for designers to a certain extent. One approach is to let users filter the data by size, density, and importance, so that they select the data to be displayed in the current scene. Another is to combine and divide the data in an orderly manner, as proposed above, splitting the display across multiple views, multiple screens, or cross-screen layouts according to user needs and relevance.
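A minimal sketch of the first strategy, filtering by size and density before display, is shown below; the data, the thresholds, and the function name are illustrative assumptions rather than part of any particular visualization system.

```python
# Letting the user filter what is drawn by simple size/density thresholds.
import pandas as pd

items = pd.DataFrame({
    "node": list("ABCDEFGH"),
    "size": [120, 15, 300, 8, 45, 220, 5, 60],
    "density": [0.9, 0.2, 0.7, 0.1, 0.5, 0.8, 0.05, 0.4],
})

def filter_for_display(df: pd.DataFrame, min_size: int = 0, min_density: float = 0.0) -> pd.DataFrame:
    """Return only the records that pass the user's current filter settings."""
    mask = (df["size"] >= min_size) & (df["density"] >= min_density)
    return df[mask].sort_values("size", ascending=False)

# The user tightens the filter to keep only large, dense items in the current view.
print(filter_for_display(items, min_size=50, min_density=0.4))
```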

In multidomain heterogeneous data visualization design, color is second only to graphical layout in expressiveness, yet in some cases color has a communicative effect that layout does not: the influence of color on human emotion in data visualization cannot be achieved through layout alone. The proper use of color can help users understand the data structure more intuitively, detect data anomalies, focus their attention, and deepen their understanding of abstract data. If the graphical layout reflects the control of the designer's reason, then color in visualization puts a warm coat on cold data and brings spiritual comfort to the viewer, its impact on the user centering on emotional experience. In current design practice, however, color is one of the elements of data visualization treated in the most extreme ways, either ignored or abused, as shown in Figure 5.
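As a small illustration of using color to direct attention, the following sketch colors records whose z-score exceeds a threshold with a distinct hue; the synthetic data, the threshold, and the color choices are assumptions made for demonstration.

```python
# Using a distinct hue so anomalous records stand out from normal points.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(5)
values = rng.normal(50, 5, 300)
values[[40, 120, 250]] = [90, 5, 95]              # injected anomalies (illustrative)

z = np.abs((values - values.mean()) / values.std())
colors = np.where(z > 3, "crimson", "lightsteelblue")   # anomalies get a distinct hue

fig, ax = plt.subplots()
ax.scatter(np.arange(values.size), values, c=colors, s=20)
ax.set_xlabel("record index")
ax.set_ylabel("value")
ax.set_title("Color isolates anomalous records")
plt.show()
```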

Users of data visualization systems also indirectly assume the role of evaluators, since evaluation is carried out mainly by users who test and assess the visual graphics and the visualization system. Evaluation is important for finding the shortcomings of a visualization system, improving its user experience, and weighing the advantages and disadvantages of technical solutions. Evaluators generally consist of two types of users: expert users and general users. Expert users are professional users who meet certain conditions for the visualization system (industry experts, technical experts, etc.); they have a thorough understanding of the data or the goals to be achieved, can judge the usability and ease of use of the system clearly, and can offer high-quality suggestions for its iteration. General users typically evaluate visualization systems aimed at the public, where the user base is far more uncertain; the main purpose of such testing is to improve the system's compatibility, friendliness, and ease of use, and it is generally conducted through task execution analysis, eye-movement data analysis, or subjective evaluation, as shown in Figure 6.

At the same time, the difficulty of mining user needs, especially deep-seated needs, is increased by users' tendency to express them only partially, to express them falsely, or to abandon expressing them altogether for various reasons. For this reason, this article attempts to mine users' deepest emotional needs from the perspective of data visualization. Emotional demand is the state and direct expression of human nature, the deepest demand of users, and a cultural trait rooted in the original rhythm of life yet standing above life in the narrow sense. Whether users' emotional demands can be accurately mined and fully satisfied is one of the important criteria for judging whether a multidomain heterogeneous data visualization system embodies an innovative spirit and humanistic care. Based on the effective mining of information about users' inner emotions in multidomain heterogeneous data visualization, we analyze them in depth and obtain insights into users' emotional differences and inner needs.

The extraction of imagery patterns requires a high degree of accuracy in how users phrase their needs. In industry-oriented multidomain heterogeneous data visualization design, the users are mainly professionals and semiprofessionals, and the extraction of imagery patterns in such systems is based on structured interviews, which help designers directly extract highly relevant subpatterns. Public-oriented data visualization platforms rely instead on scene observation, supplemented by semistructured interviews, which can extract relevant subpatterns efficiently and accurately.

4.2. Visual Communication Design Results

After understanding the new expressions of visual communication in the new media environment, and while focusing on dynamic and interactive expression, it is also necessary to innovate on that basis. Inheriting the visual communication achievements of traditional media while creating new forms of expression, so as to expand visual advertising design, is likewise part of the connotation of design.

Dynamic forms of visual expression make the audience more likely to accept the products being marketed. The development of visual design has taken different forms in the new media environment and the traditional media environment, evolving from the earlier simple, easy-to-understand, direct modes of expression into more subtle ones, because the public's general cultivation keeps rising and its thirst for knowledge grows daily, so that overly plain presentations come across as superficial. This is a great breakthrough reflected in the form of advertising design. In design, simple elements around us can be used to convey the complex psychological states and spiritual feelings we wish to express, so that the scene creates emotion and resonates with the viewer's visual psychology. The visual symbolization of products is increasingly the representative way of attracting consumers' eyes, and successful packaging or advertising design does indeed prompt consumers to form the impulse to buy. Whether the visual expression of a product resonates with consumer psychology determines whether the product can seize the development of emotional consumption, as depicted in Figure 7.

The first level is instinctive: when the product's external image is broken down into shape, material, and color, buyers obtain an immediate, instinctive sensation. The second is the behavioral level, the experience of consumer behavior: whether the first impression of the product brings a sense of pleasure. The third concerns the consumer's emotional state: whether the consumer's inner state resonates with the product and promotes the desire to buy it. The fourth concerns the consumer's associative experience, which also involves the product's after-sales service and whether the consumer can buy the product with peace of mind after the experience. By analyzing these characteristics of consumer experience, we can see that the first, sensory experience corresponds mainly to whether the consumer feels an impulse to consume; the second corresponds to the emotional analysis of the purchase state and whether it is positive; the third is whether, through the experience, a psychological and emotional connection is established and the product earns the consumer's trust; and the fourth is whether, after understanding the product, the consumer builds an awareness of the brand culture and is willing to establish a long-term relationship, thereby cultivating consumption awareness, as shown in Figure 8.

Visual communication exists to serve the audience, yet the audience is clearly segmented: different consumption levels have distinct, habitual ways of receiving media, so more attention is paid to the commercial value of visual communication. Visual communication relies on digital technology for its forms of expression, and integrating design expression with commercial benefit is an inevitable trend. As visual communication gradually commercializes, it also becomes more aesthetic and more humanized; in the new media environment it reflects reciprocal value, with forms of expression that sit ever closer to the market and to today's integrated economic development. Because of this constant closeness to market requirements, visual communication has changed qualitatively, and for the audience, examining people's visual psychological state can promote a more complete integration of visual development and the economic market.

5. Conclusion

In the design of multidomain heterogeneous data visual systems, imagery schema theory can inspire designers' creativity, improve design efficiency, and help designers find innovative solutions; compared with other design approaches, it improves efficiency and brings novel user interfaces to design work. The integration of visual communication in the new media environment has also given rise to new visual models. Motion graphic design, for example, can give the audience a more intuitive understanding of the intended meaning, and interaction design allows the audience to interact with the visual work. Design also pays more attention to the audience feedback loop, so its direction increasingly follows the "human-centered" design concept. The development of visual communication is driven by new media and Internet technology, and the concept of multisensory reception is adopted when transforming forms of design expression. In today's era of rapid economic development, the transformation of visual communication expression is increasingly moving in a commercial direction. This paper analyzes consumers' psychology, consumption patterns, and aesthetic trends so that design transformation can more accurately integrate with the consumer market.

Data Availability

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The author does not have any conflicts of interest.

Acknowledgments

This study was supported by (1) Research on Clothing Design Mode of Big Data Reverse Traction (2021bg04247) and (2) Research on Digital Transformation of Zhejiang Garment Industry and Quality Improvement and Efficiency Enhancement under the Internet Plus Background (2021C35038).