Wireless Communications and Mobile Computing / 2021 / Article
Special Issue: Wireless Communications using Embedded Microprocessors

Research Article | Open Access

Volume 2021 | Article ID 3560592 | https://doi.org/10.1155/2021/3560592

Meiting Qu, Lei Li, "3D Modeling Design of Multirole Virtual Character Based on Visual Communication in Wireless Sensor Networks", Wireless Communications and Mobile Computing, vol. 2021, Article ID 3560592, 9 pages, 2021. https://doi.org/10.1155/2021/3560592

3D Modeling Design of Multirole Virtual Character Based on Visual Communication in Wireless Sensor Networks

Academic Editor: Haibin Lv
Received: 08 Sep 2021
Revised: 16 Oct 2021
Accepted: 28 Oct 2021
Published: 10 Nov 2021


In order to solve the problems of poor design effect and high time consumption in traditional virtual character modeling design methods, a three-dimensional (3D) modeling design method for multirole virtual characters based on visual communication is studied. Firstly, the wireless sensor network is used to locate, scan, and collect human body structure information and convert the coordinates to bind the 3D skeleton. Secondly, according to different human postures, we switch among the linear blend, spherical blend, and dual-quaternion blend skinning algorithms; design the geometric surface; and attach it to the 3D skeleton to generate the 3D model. Finally, based on the influence of visual communication on human eye observation and psychological perception, the geometric surface is divided twice, and the virtual character is rendered by showing, covering, and splicing to obtain a complete 3D model of the virtual character. The results show that the positioning coverage of this method is higher, the rendering effect of the hand and head is better, and the design time is significantly shortened, with a maximum of no more than 35 min.

1. Introduction

With the advent of the digital information society and continuous technological innovation, virtual reality technology has emerged and been widely applied in various fields [1, 2]. Furthermore, thanks to the increased transmission speed of wireless sensor networks coupled with the guidance of virtual reality technology [3], visual communication technology has become more applicable and better adapted to the processing of virtual images. With visual communication technology, real and 3D images can be perceived virtually, thus meeting people's demands for interactive images. In recent years, the booming development of digital information technology has brought prosperity to the animation industry, which has become an integral part of China's cultural sector, and the digital animation industry is also developing rapidly. In addition, with their increasing demand for virtual animation, people are no longer satisfied with the simple modeling of virtual characters in animation, which has led to continuously increasing demands on the modeling design of animation characters [4, 5]. In order to make virtual character 3D modeling more realistic and interactive [6], it is necessary to explore different design methods for virtual character 3D modeling and upgrade the tools used for 3D animation production and assisted teaching systems, thus providing more reliable technical support. At present, 3D modeling design has been widely used and has achieved relatively good results in 3D landscape design and the interactive gesture system of the Greenland system [7].

Literature [8] proposed a character virtual modeling design method based on VR technology, in which virtual reality technology is combined with 3D virtual modeling technology for the diagrammatic virtual modeling of characters, so that interactive modeling design of various characters can be realized simply by clicking the mouse. Literature [9] proposed a character virtual modeling design method based on the parametric model SMPL, in which the facial characteristics and contours of various characters are used, and the collected data are input into the SMPL model to complete the 3D projection of facial modeling. Literature [10] proposed a character virtual modeling design method based on a mobile platform, in which virtual integration processing is conducted according to the modeling of various characters, thus constructing the facial modeling objective functions of various characters; restrictions are set for different characters, and the objective function is solved under these restrictions to complete the character modeling design. Literature [11] proposed a character virtual modeling design method based on computer graphics, in which real-time computer graphics technology, 3D modeling technology, and binocular 3D virtual technology are used for the modeling design of various virtual characters in virtual reality. Literature [12] proposed a character virtual modeling design method based on deep and goal-parameterized reinforcement learning, in which the maximum reward function of character virtual modeling design is constructed, the mapping relationship between different kinds of characters is studied, and an agent learning method is used to complete the character virtual modeling design.

Current wireless sensor networks are optimized from the perspective of metaheuristics, and new clustering protocols are used to improve the placement of network nodes. As a design method proposed in recent years, visual communication, with the help of advanced technologies such as cloud computing and the Internet of Things, better applies visual elements to different art designs and strengthens the visual effect. On the whole, wireless sensor networks and visual communication have a wide range of applications at this stage. In this study, with the application of wireless sensor networks, a new multirole virtual character 3D modeling design method is proposed based on visual communication.

The main contributions of this paper are as follows:
(1) Wireless sensor networks are used to locate human structure information, including different structural plane information and contour information.
(2) Based on the collected human body information, a 3D skeleton is constructed and attached to the geometric surface to generate a 3D model that can be split.
(3) Based on visual communication, the overall model structure is rendered to realize local texture and color enhancement.

2. 3D Modeling Design of Multirole Virtual Characters Based on Visual Communication

2.1. Locating Human Structure Information in Wireless Sensor Network

To design the 3D modeling of virtual characters, it is necessary to scan the human body structure and, from the scan, locate the position and activity-state information of each part of the body. It is assumed that the location of the signal transmitting source node in the wireless sensor network is , the signal transmission time is , and there are nodes in the wireless sensor network. According to the above information [13, 14], set the source node's location as ; then, the corresponding transmission time is

where refers to the signal transmission speed and refers to the number of paths. We linearize the above formula by adjusting the order of its parameters and squaring both sides.

The above formula is further transformed to obtain new calculation results:

We regard in formula (3) as an unknown variable , i.e., . At this point, this formula becomes a linear observation equation, in which the values of , , and are all estimated values. Coordinate system transformation is performed on the estimated value to obtain

where and mean the coordinate parameters before transformation, while and refer to the coordinate transformation parameters.

The parameters after coordinate system transformation are substituted into formula (3) to obtain the following calculation formula:

where , , , and are the source node position, transmitting node position, node transmission time, and signal transmission time after coordinate system transformation, respectively.

According to the above calculation, when , . Therefore, it is deduced that

As can be seen from the above formula, the right side of the formula contains the measurement error, so the actual value of the parameter is derived according to the coordinate system transformation. According to this value, the positioning parameters of the wireless sensor network are set to realize the positioning of human structure information and provide structural data for the design of virtual character 3D modeling [15].
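Since the original equations are not preserved in this copy, the localization step described above can be illustrated with a minimal sketch: a linearized least-squares solver that estimates a source position from signal arrival times at known sensor nodes. The function name, symbols, and propagation speed are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def locate_source(anchors, times, v=343.0):
    """Estimate a source position from signal arrival times at known
    sensor nodes by linearizing the range equations (illustrative sketch).

    anchors : (n, 2) array of known node coordinates
    times   : (n,) array of measured transmission times
    v       : assumed signal propagation speed
    """
    anchors = np.asarray(anchors, dtype=float)
    d = v * np.asarray(times, dtype=float)          # ranges from times
    # Subtract the first range equation from the rest to cancel the
    # quadratic term, yielding a linear system A x = b in the source
    # coordinates (the "linear observation equation" step in the text).
    x0, y0 = anchors[0]
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (np.sum(anchors[1:] ** 2, axis=1) - x0**2 - y0**2
         + d[0]**2 - d[1:]**2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos
```

With exact, noise-free times, the least-squares solution coincides with the true source position; with measurement error, it gives the minimum-residual estimate that the coordinate-transformation step then refines.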

2.2. Draw 2D Outline of Characters and Bind the 3D Skeleton

The 3D model of a character is obtained by averaging multiple sets of human body data, and the 2D contour and 3D skeleton are the supports of the 3D model. Therefore, the 2D contour of the character is drawn according to the positioning data of the wireless sensor network, and the 3D skeleton is bound. It is known that the posture of the human body is determined by bone motion; therefore, of the 78 joint points of the human body, 20 are used to design the 2D contour and 3D skeleton, as shown in Figure 1.

According to Figure 1, the human skeleton is obtained by scanning and mapping, and it needs to be normalized. We set the skeleton in a plane coordinate system [16]; the axes and describe the column and row of a specific pixel in the digital image, respectively, and the axis describes the depth information. We convert this coordinate system into the world coordinate system and obtain 3D coordinates according to the following conversion relationship: where refer to the coordinate positions in different directions; , , and refer to the transformation parameters along the three coordinate axes; and refers to the focal length [17]. The world coordinate system is obtained from the coordinate transformation, and the human body structure information acquired by the wireless sensor network through the handheld 3D scanner is transformed. The transformation generates the corresponding relationship according to the following formula:

where and are the internal and external parameters generated by 3D modeling, and refers to the coordinate position parameters after the human body structure information is transformed. The above formula is used to convert the coordinates and realize the binding of the 3D skeleton.
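The image-to-world conversion described above can be sketched as a standard pinhole back-projection. The focal length `f`, rotation `R`, and translation `t` stand in for the internal and external parameters mentioned in the text; since the paper's exact formulas are not preserved here, this parameterization is an assumption.

```python
import numpy as np

def pixel_to_world(u, v, depth, f, R, t):
    """Back-project a pixel (u, v) with known depth into world
    coordinates using an assumed pinhole model: focal length f
    (internal parameter), extrinsic rotation R and translation t
    (external parameters). Illustrative sketch only."""
    # pixel column/row + depth -> camera-frame 3D coordinates
    cam = np.array([u * depth / f, v * depth / f, depth], dtype=float)
    # camera frame -> world frame via the external parameters
    return R.T @ (cam - t)
```

A vertex recovered this way for each scanned joint gives the 3D coordinates onto which the skeleton is bound.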

2.3. Create and Attach Geometric Surfaces for Virtual Character 3D Modeling Design

A virtual character 3D model has many geometric surfaces and irregular edges. The geometric surface is attached to the 3D skeleton by a skinning algorithm to create the virtual character 3D model. When designing a geometric surface in which the bones influence the skin, the skinning algorithm computes the edge of the geometric surface by linear calculation. The formula is

where refers to the weight with which bone affects the skin surface, refers to the coordinate transformation result under formula (10), refers to the original location of the skin vertex, and refers to the final position of the skin vertex. When a part of the human body moves, formula (11) is no longer used to design the geometric surface edge; instead, the skinning algorithm with a spherical blend calculation function is used to obtain the final position of the skin vertex. The formula is

where refers to the interpolation rotation matrix calculated from the weight values when the bone rotates, and refers to the rotation matrix of the skin vertex affected by local actions.
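The linear case above is the classic linear blend skinning scheme: each skin vertex's final position is the weight-blended sum of the bone transforms applied to its rest position. The following minimal sketch illustrates it; the function name and matrix layout are assumptions for illustration.

```python
import numpy as np

def linear_blend_skinning(vertex, bone_transforms, weights):
    """Linear blend skinning: the deformed vertex is the weighted sum
    of each bone's 4x4 homogeneous transform applied to the rest-pose
    vertex (weights sum to 1). Illustrative sketch.

    vertex          : rest-pose position, length-3
    bone_transforms : iterable of 4x4 homogeneous bone matrices
    weights         : per-bone blend weights
    """
    v = np.append(np.asarray(vertex, dtype=float), 1.0)  # homogeneous
    out = np.zeros(4)
    for w, T in zip(weights, bone_transforms):
        out += w * (T @ v)          # blend the transformed copies
    return out[:3]
```

Spherical and dual-quaternion blending, used for the moving-body cases below, replace this direct matrix average with rotation-aware interpolation, which avoids the volume-collapse artifacts of the linear form under large joint rotations.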

When the human body performs complex movements and the designed character 3D model needs to be displayed synchronously, the skinning algorithm with dual-quaternion blend computing capability is used. According to the value of the quaternion , we determine the final position of the skin vertex; the calculation formula of the quaternion is

The above formula uses the dual-quaternion blend skinning algorithm. We convert each bone rotation matrix into a unit quaternion and obtain the blended quaternion by weighting over all bones. According to this parameter, the final spatial coordinates of the skin vertex are obtained, realizing the design of the virtual character 3D modeling geometric surface under complex action postures [18].
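The quaternion-weighting step described above can be sketched as a weighted sum of unit rotation quaternions followed by renormalization, the rotational core of dual-quaternion blending. The function names and the hemisphere-sign fix are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def blend_quaternions(quats, weights):
    """Blend unit rotation quaternions (w, x, y, z) by a weighted sum
    followed by renormalization, with antipodal signs fixed against the
    first quaternion so q and -q blend consistently. Illustrative."""
    quats = np.asarray(quats, dtype=float)
    weights = np.asarray(weights, dtype=float)
    signs = np.sign(quats @ quats[0])   # keep all in one hemisphere
    signs[signs == 0] = 1.0
    q = (weights[:, None] * signs[:, None] * quats).sum(axis=0)
    return q / np.linalg.norm(q)

def rotate(q, v):
    """Rotate vector v by unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    u = np.array([x, y, z])
    v = np.asarray(v, dtype=float)
    return v + 2.0 * np.cross(u, np.cross(u, v) + w * v)
```

Applying the blended quaternion (plus a similarly blended translation part in the full dual-quaternion form) to the rest-pose vertex yields its final spatial coordinates.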

2.4. Render Virtual Characters Based on Visual Communication

After the basic structure of the virtual character 3D model is designed, the different body parts need to be rendered. In order to enhance the realism of the 3D model, this design integrates the basic concept of visual communication and renders virtual characters on the basis of the best view. Visual communication is a design concept which holds that different positional information makes people have different psychological feelings about the information transmitted by different visual planes, as shown in Figure 2.

According to Figure 2, different position matching and color matching have different psychological effects. Therefore, based on the concept of visual communication, a visual association is established to render the appearance of virtual characters through showing, covering, and splicing. Among them, showing covers the skin at different positions according to the light, covering wraps the body structure and hides the lines, and splicing renders human skin or decoration by grain splicing [19]. On the premise of the above means, the geometric surface is divided twice, and the color and texture are rendered according to the local structure, as shown in Figure 3.

We use the split method to avoid visible lines and seams on the 3D model surface, hiding redundant lines and seams under the rendering lines of other body structures so that the whole 3D model has no redundant lines. Thus, the design of multirole virtual character 3D modeling is realized based on visual communication in wireless sensor networks [20].

Through the above process, the 3D modeling design of multirole virtual characters is completed; the overall design process is shown in Figure 4.

3. Experimental Analysis and Results

3.1. Experimental Environment and Data Set

The experimental environment parameters are shown in Table 1.


Operating system:    Windows 7
CPU:                 Intel Core i5-7300HQ
Memory:              32 GB
Running memory:      8 GB
Dominant frequency:  2.1 GHz
Simulation software: MATLAB R2014a

The data sets used in this experiment are the iCartoonFace data set and the Danbooru2018 data set. iCartoonFace contains more than 5,000 cartoon characters and more than 400,000 high-quality animated virtual character images; the Danbooru2018 animation picture library contains 140,000 HD animation face images of 512 × 512 px. From these data sets, 12,000 images were randomly selected and randomly divided into 120 groups of 100 images each for the comparison experiments.
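The random grouping above (12,000 images into 120 groups of 100) can be reproduced with a short sketch; the function name and fixed seed are illustrative assumptions, since the paper does not specify its splitting procedure.

```python
import random

def make_groups(image_ids, group_size=100, seed=0):
    """Randomly shuffle image ids and split them into equal-size groups,
    mirroring the 12,000-image / 120-group setup described above.
    The fixed seed makes the split reproducible (an assumption)."""
    ids = list(image_ids)
    random.Random(seed).shuffle(ids)
    return [ids[i:i + group_size] for i in range(0, len(ids), group_size)]
```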

3.2. Experimental Standards

In order to fully verify the performance of this method, the experimental indexes are selected as follows.

3.2.1. Sensor Coverage

Based on the known coverage and coverage efficiency, a coverage evaluation index is proposed, its basic definition and calculation method are given, and the positioning performance of the different design methods is analyzed using this index. There are two key links in scanning human body information, namely, data acquisition and data positioning. Therefore, the requirements for scanning and positioning are set in two categories: full coverage of one static acquisition target and full coverage of multiple dynamic acquisition targets. We set the coverage rate as . According to the above setting, i.e., and . In order to evaluate the two kinds of coverage, the body coordinates of the scanned person are set. When a node is within the normal communication range of the reference nodes, the node is covered by . When the coverage of the scanned human body coordinate meets , the region is covered by . When all the points in the scanned region are covered by at least nodes, this is referred to as coverage, and the formula used to calculate the variance between the average coverage times and the actual coverage times is

where refers to the average coverage times, refers to the variance of the coverage times, refers to the scanned point , and refers to all the points in the scanned region. According to the average coverage times, the scanning data coverage achieved by different positioning methods can be evaluated, and the uniformity of coverage can be tested according to the variance of the coverage times.
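The two indices above, average coverage times and their variance, can be computed as in the following sketch, where a point counts as covered by a node when it lies within an assumed communication radius; the names and the distance criterion are illustrative assumptions.

```python
import numpy as np

def coverage_stats(points, nodes, radius):
    """For each scanned point, count how many sensor nodes cover it
    (Euclidean distance <= radius); return the mean coverage count and
    its variance, the two evaluation indices described above."""
    points = np.asarray(points, dtype=float)
    nodes = np.asarray(nodes, dtype=float)
    # pairwise distances, shape (num_points, num_nodes)
    dist = np.linalg.norm(points[:, None, :] - nodes[None, :, :], axis=2)
    counts = (dist <= radius).sum(axis=1)   # coverage times per point
    return counts.mean(), counts.var()
```

A high mean with low variance indicates that the scanned region is covered both completely and uniformly.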

3.2.2. Hand Modeling Design Effect

The quality of human body data positioning affects the 3D modeling of characters. We install the same Maya animation software in the design systems of the six groups of methods and open the animation running scene. We open the script editor on the script editing control panel, import the scanning and positioning results of human body structure information obtained by the different methods, and click start to run. We build an experimental test environment, and each test group designs the virtual character 3D model according to its obtained positioning data. The differences among the design methods are compared according to the 3D effect of the character modeling structure and the skin material rendering effect of each test group.

3.2.3. Head Modeling Design Effect

The verification process of the head modeling design effect is basically the same as that of the hand modeling design effect, but the verification parts are different.

3.2.4. Time-Consuming Verification of Modeling Design

Due to the large demand for multirole virtual character modeling, there are strict requirements on design time, so the time consumption of the various methods is compared.

3.2.5. Modeling Design Score

According to the design results of the different methods, ten experts in relevant disciplines are invited to score the design results 10 times, and the average of the 10 scores is taken. Scoring uses a 100-point scale.

The comparison methods include the character virtual modeling design method based on VR technology proposed in Literature [8], the method based on the parametric model SMPL proposed in Literature [9], the method based on a mobile platform proposed in Literature [10], the method based on computer graphics proposed in Literature [11], and the method based on deep and goal-parameterized reinforcement learning proposed in Literature [12].

3.3. Results and Discussion
3.3.1. Positioning Coverage Effect Test

The positioning coverage effect test results of the six methods are shown in Figure 5.

According to Figure 5, the data coverage of the five traditional design methods has gaps of varying degrees, which shows that these methods leave data leakage points when scanning and positioning human structure information, affecting the subsequent generation of the virtual character 3D model. The data coverage of the proposed method has no blank areas, which shows that the scanning data obtained with the support of the wireless sensor network positioning algorithm are complete and cover the whole scanning area, providing more data support for generating the virtual character 3D model.

3.3.2. Comparison of the 3D Hand Structure Effect

The human hand has the most complex structure: each hand is composed of five fingers and one palm, and the fingers have many joints and are very flexible. Therefore, 3D modeling of the hand is difficult. Figure 6 shows the design effect of 3D modeling of human hands.

According to Figure 6, the hand 3D models designed under the five groups of traditional positioning technologies exhibit problems such as adhesion and truncation between finger structures, which corresponds to the positioning coverage of the human body structure in Figure 5. With the help of wireless sensor network positioning, only the method in this paper matches the different position information of the hand to the corresponding locations according to the more comprehensive positioning information, obtaining a 3D design effect similar to a real hand.

3.3.3. Comparison of Head Modeling Rendering Effect

Considering the design concept of visual communication, on the basis of completing the overall 3D modeling design, the skin material rendering effects of the six groups of methods on virtual character modeling are compared, as shown in Figure 7.

According to the test results shown in Figure 7, when the first two methods render the 3D model of the human head, 3D collapse occurs at the eyes and nose. The third, fourth, and fifth groups of methods have problems such as missing or overly thin eyebrows, missing facial lines and spots, and wrong color matching. With the design based on visual communication, the head shape is complete, the color matching is correct, and the spots and wrinkles are distinct. From these results, the 3D modeling rendering effect of the visual communication-based design is the best.

3.3.4. Comparison of Modeling Design Time Consumption

The comparison of the modeling design time consumption of the six methods is shown in Figure 8. It can be seen that, over multiple experiments, the modeling design time of this method is always lower than that of the five literature comparison methods, and the maximum time is no more than 35 min.

3.3.5. Comparison Results of Modeling Design Scores

The scoring results of the six methods are shown in Table 2.

Method         Literature [8]  Literature [9]  Literature [10]  Literature [11]  Literature [12]  The proposed method
Average value  75.8            74.6            70.5             81.7             70.9             99.6

As can be seen from the scoring results in Table 2, the expert scores of this method are higher than those of the other five comparison methods, with an average score of 99.6. This shows that the 3D modeling design effect of this method is more widely recognized.

4. Conclusions

The 3D modeling design method in this study gives full play to the technical characteristics of wireless sensor networks and visual communication and generates 3D models of people in different poses by switching among different skinning algorithms. The performance of the method is verified both theoretically and experimentally. In the modeling design of multirole virtual characters, the coverage of human data acquisition is enhanced through wireless sensor networks, and visual communication technology is used to effectively render the human body model, which improves the design effect, shortens the design time, and better meets the requirements of multirole virtual character modeling design. However, this design involves many calculation steps, so the overall design efficiency may not be high, affecting the speed of modeling design. In the future, we can optimize the positioning of wireless sensor networks, strengthen the adaptability of positioning and scanning, and speed up the overall progress of modeling design.

Data Availability

The data used to support the findings of this study are included within the article. Readers can access the data supporting the conclusions of the study from iCartoonFace and the Danbooru2018 data set.

Conflicts of Interest

The authors declare that there is no conflict of interest with any financial organizations regarding the material reported in this manuscript.


  1. H. Wan, L. Ting, F. Liwen, and Y. Chen, “Freehand gesture interaction method for 3D scene modeling,” Journal of Beijing University of technology, vol. 39, no. 2, pp. 175–180, 2019. View at: Google Scholar
  2. Y. Tao, “Significant image feature weight self matching simulation based on visual communication,” Computer simulation, vol. 37, no. 1, pp. 466–469, 2020. View at: Google Scholar
  3. Y. Yanfu, L. Ke, J. Xue, C. Wang, and W. Gan, “Face animation method based on deep learning and expression Au parameters,” Journal of computer aided design and graphics, vol. 31, no. 11, pp. 1973–1980, 2019. View at: Google Scholar
  4. H. T. Wu and G. A. Li, “Visual communication design elements of Internet of Things based on cloud computing applied in graffiti art schema,” Soft Computing, vol. 4, no. 2, pp. 1–10, 2019. View at: Google Scholar
  5. J. Wu, L. Zhen, L. Tingting, and J. Wang, “Uncertainty model of virtual character behavior in serious games assisted by social training,” Chinese Journal of image and graphics, vol. 24, no. 9, pp. 154–164, 2019. View at: Google Scholar
  6. H. Zhu and G. Yue, “Simulation system of artistic human anatomy painting based on forge cloud,” Computer system application, vol. 29, no. 5, pp. 110–116, 2020. View at: Google Scholar
  7. Y. R. Musunuri and O. Kwon, “State estimation using a randomized unscented Kalman filter for 3D skeleton posture,” Electronics, vol. 10, no. 8, p. 971, 2021. View at: Google Scholar
  8. Z. Wei, “3D design of animation characters based on VR technology,” Modern electronic technology, vol. 41, no. 16, pp. 172–175, 2018. View at: Google Scholar
  9. Z. Zhichao, L. Guiqing, Z. Xinyi, Y. Wang, and Y. Nie, “Pose and shape reconstruction of 3D mannequin,” Journal of computer aided design and graphics, vol. 31, no. 9, pp. 21–29, 2019. View at: Google Scholar
  10. Z. Xiao, C. Zhengming, H. Zhu, and J. Tong, “Implementation and application of 3D virtual hair trial system based on mobile platform,” Journal of graphics, vol. 39, no. 2, pp. 133–140, 2018. View at: Google Scholar
  11. L. Li, W. Zhu, and H. Hu, “Multivisual animation character 3D model design method based on VR technology,” Complexity, vol. 21, no. 4, 2021. View at: Google Scholar
  12. G. F. Gomes and C. A. Vidal, “An autonomous emotional virtual character: an approach with deep and goal-parameterized reinforcement learning,” Journal on Interactive Systems, vol. 11, no. 1, pp. 27–44, 2020. View at: Publisher Site | Google Scholar
  13. T. Treal, P. L. Jackson, J. Jeuvrey, N. Vignais, and A. Meugnot, “Natural human postural oscillations enhance the empathic response to a facial pain expression in a virtual character,” Scientific Reports, vol. 11, no. 1, pp. 124–128, 2021. View at: Publisher Site | Google Scholar
  14. P. A. Bortnikov and A. V. Samsonovich, “A simple virtual actor model supporting believable character reasoning in virtual environments,” in Advances in Intelligent Systems and Computing, A. Samsonovich and V. Klimov, Eds., pp. 17–26, Springer, Cham, 2018. View at: Publisher Site | Google Scholar
  15. G. Liu, X. Li, and J. Wei, “Large-area damage image restoration algorithm based on generative adversarial network,” Neural Computing & Applications, vol. 33, no. 10, pp. 4651–4661, 2021. View at: Publisher Site | Google Scholar
  16. K. Hyejin and H. Jae-In, “Manipulating augmented virtual character using dynamic hierarchical pointing interface,” Computer Animation & Virtual Worlds, vol. 33, no. 1, pp. 109–126, 2018. View at: Google Scholar
  17. Y. M. Lou and J. M. Hu, “Coordinate transformation of three-dimensional image in integral imaging system illuminated by a point light source,” Optik, vol. 22, no. 11, article 165885, 2021. View at: Google Scholar
  18. C. Jiang, P. Kilcullen, X. L. Liu et al., “Real-time high-speed three-dimensional surface imaging using band-limited illumination profilometry with a CoaXPress interface,” Optics Letters, vol. 45, no. 4, pp. 964–967, 2020. View at: Publisher Site | Google Scholar
  19. D. M. Hoffman and G. Lee, “Temporal requirements for VR displays to create a more comfortable and immersive visual experience,” Information Display, vol. 35, no. 2, pp. 9–39, 2019. View at: Publisher Site | Google Scholar
  20. B. A. James and D. Williamobeng, “A note on geometric surfaces,” Asian Journal of Mathematics and Computer Research, vol. 14, no. 5, pp. 23–32, 2019. View at: Google Scholar

Copyright © 2021 Meiting Qu and Lei Li. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
