Research Article  Open Access
Yuxiang Zhu, Yanjun Peng, "Generation of Realistic Virtual Garments on Recovery Human Model", Mathematical Problems in Engineering, vol. 2019, Article ID 5051340, 14 pages, 2019. https://doi.org/10.1155/2019/5051340
Generation of Realistic Virtual Garments on Recovery Human Model
Abstract
Displaying a variety of fabrics on a customized character could help customers judge which fabric suits them best and choose clothing accordingly. However, it is not an easy task to show a realistic garment on a customized virtual character. We therefore propose a finite element method (FEM) cloth model that approximates stretching behaviors stably. First, we measure four kinds of cloth materials with measurement techniques to study elastic deformations in real cloth samples. Then, we apply a parameter optimization method by fitting the model to the measurement data. To promote the display of realistic fabrics, we automatically recover the 3D human shape and pose from a single image. Human body datasets are constructed first; then, CNN-based image retrieval for shape and a skeleton-based template matching method for pose are combined for 3D human model recovery. To enrich human body details, we synthesize the human body with a 3D face using a spatial transformation. We compared our method of recovering a 3D human from a single image with state-of-the-art methods, and the experimental results show that the proposed method allows the recovered virtual human to wear garments of different fabrics and significantly improves the fidelity of the virtual garment.
1. Introduction
Unlike physical try-on, which requires trying on several garments before the shopper makes a decision, a virtual try-on system allows shoppers to visualize what a garment might look like on themselves before purchasing, enhancing the shopping experience. Thus, enabling realistic try-on of clothes on a recovered model of the shopper's own body has great potential. Some virtual try-on systems, however, do not account for the effects of fabric materials. With the right set of parameters, these systems could simulate real cloth to a high degree of fidelity, but some parameters must be tuned based on the animator's intuition about the fabric, and during this tuning process it is difficult to tell which models and which parameters give results closer to the real fabric. In addition, almost all virtual try-on systems assume that users have selected from a predefined set of avatars or have captured accurate measurements of their own bodies through 3D scanning, and these avatars or captured bodies lack generated 3D human face details.
In this study, we consider the problem of exhibiting realistic clothes on a recovered virtual character with minimal input, such as enabling users to virtually try on clothes of different materials from a single photograph of themselves. Instead of previous cloth models, which are inaccurate for stretching materials, we introduce a stable FEM cloth model to approximate the stretching behaviors of various materials. Elastic stretching deformations in real cloth samples are then studied using stretching measurements and a parameter optimization method. A human body recovery method is needed to show realistic fabric: instead of a scanned human model or a shape reconstructed from multi-view images, we recover the human model in shape and pose from a single image. To improve the fit of garment models on animated characters, we enhance the visual realism of digital human models by combining the human body with a 3D face.
The contributions of this paper can be summarized as follows: (1) We propose a stable FEM cloth model that approximates various materials accurately and stably. With stretching measurements and a parameter optimization method, we find the stiffness parameters of the elastic model (Section 4). (2) We recover the shape and pose of a human body from a single-view image, and a human body with a 3D face is synthesized for detailed display (Section 5). (3) We present two applications: exhibiting different fabrics on the recovered human model, and image-based virtual try-on (Section 6).
2. Related Work
Our work builds on previous efforts including cloth simulation, cloth parameter estimation, and 3D human recovery.
2.1. Cloth Simulation
Cloth simulation is a traditional research problem in computer graphics. Early physics-based cloth simulation adopted particle systems. Macklin et al. [1] used particles connected by constraints as their fundamental building block, which allowed them to treat contact and collisions in a unified manner, but accuracy was fairly limited for large deformations and computation times were long. To support real-time performance, cloth simulations adopted simplified mass-spring systems [2, 3], and GPUs were used to accelerate simulation [4]. Unfortunately, such mass-spring systems are inaccurate for nonlinear cloth models.
In addition to discrete models like mass-spring systems, there are also continuous models based on FEM [5, 6]. The advantage of continuous models is independence from mesh resolution, and numerous authors have attempted to improve their stability and accuracy. Thomaszewski et al. [7] used a corotational formulation for their finite element simulation, showing how membrane and bending energies can be modeled consistently for thin, flexible objects. Volino et al. [8] introduced the nonlinear Green–Lagrange tensor into a mechanical model that can capture arbitrary strain-stress relationships, and proposed a membrane model for nonlinear tensile stiffness.
As garment modeling is built upon cloth simulation, some methods start from 2D design patterns and then use physical simulation or iterative optimization of related parameters to stitch the planar patterns and obtain the desired realistic 3D garment. Umetani et al. [9] provided an interface that enables users to immediately customize the parsed patterns via manipulation and editing. To facilitate the reuse of existing designs as starting points for new garments, Bartle et al. [10] proposed a framework that allows designers to directly apply the changes they envision in 3D space.
2.2. Cloth Parameter Estimation
Despite a large body of work on cloth simulation models, little has been done on estimating the parameters of these models so that they match the behavior of real fabrics. Previous cloth parameter estimation works [11, 12] mainly relied on hardware devices, e.g., Kawabata Evaluation System (KES) machines, but there is no mapping between the parameters of a particular cloth model and the device parameters. Several methods attempt to extract physical cloth parameters from video sequences of simple cloth pieces. Bouman et al. [13] estimated simulation parameters using a data-driven regression model based on the marginal statistics of image features computed from video. Yang et al. [14] characterized the motion and visual appearance of the cloth geometry from video.
Another approach is to fit cloth parameters to experimental data. Wang et al. [15] found stiffness parameters fitted to experimental data obtained from planar and bending deformations, but dynamic parameters were not measured. Miguel et al. [16] numerically optimized nonlinear stress-strain curves to estimate parameters by minimizing errors in force and position. In contrast, we are interested in finding stiffness parameters for elastic materials in a continuous FEM-based cloth simulation.
2.3. 3D Human Recovery
Traditional methods that capture full skeletal motion mostly rely on multi-view camera systems. Baak et al. [17] combined local optimization with global retrieval to compute characteristic features and tracked full-body motions from a depth image stream, but fast motions led to unstable postures. Ganapathi et al. [18] derived an efficient filtering algorithm for tracking human pose from a stream of monocular depth images. Such capture methods, with their expensive systems and specially controlled recording conditions, have limited applicability.
There are many studies on estimating the 3D human body from multi-view images or video sequences [19, 20], while static-image methods [21] have emerged more recently. The human body is an articulated object with a number of joint angles, and fixed or deformable crude skeleton-based models have been widely exploited for human body pose estimation. Guan et al. [22] incorporated additional visual cues, such as shading, internal edges, and silhouettes, to facilitate fitting the SCAPE model to images. Bogo et al. [23] used the DeepCut method to predict 2D body joint locations and then fitted SMPL to the 2D joints by minimizing an objective function. These methods have demonstrated detailed shape recovery from several silhouettes. However, optimizing the SCAPE or SMPL parameters requires strong priors, i.e., a manually defined skeleton or body parts.
None of these human body recovery methods accounts for face details; we therefore provide a fully automatic method for estimating a 3D human model in pose and shape from a single image and combining it with a 3D face. Although constructing the 3D human model dataset is somewhat tedious, it only needs to be done once.
3. Methodology
3.1. Process for Exhibiting Realistic Garment on Recovered Human Model
The specific process of our system is shown in Figure 1. The system takes the input image and the kind of cloth material shown in it as input, and produces the recovered 3D human model together with the realistic cloth material as output. To recover the 3D human in shape and pose, we first obtain the segmented human image of the input image. Then, CNN-based image retrieval is used to find the top 20 candidate images most similar in human body shape within the 2D human segmentation dataset. Finally, the human skeleton of the input image is extracted, and skeleton-based template matching is performed against each of the candidate images, recovering the 3D human body in shape and pose for the input image. To enrich the human body, we reconstruct the 3D human face from the input image and then synthesize the human body with the 3D face using a 3D spatial geometric transformation. To generate realistic fabrics, we apply a parameter optimization method to obtain the parameters of four kinds of fabrics based on the stable FEM fabric model. Then, using the cloth semantics of the input image, the corresponding realistic cloth can be obtained and stitched onto the synthesized human body.
3.2. Symbol Definition
A list of symbols used in this study is presented in Table 1.

The 3D human model and 3D human face are triangulated meshes, composed of a vertex collection and triangle topology connection information. The input image and candidate images are all 2D images. The realistic fabric is generated from the estimated cloth parameters. Separate symbols denote coordinates in 2D and in 3D.
4. Generation of Realistic Fabric
We have attempted to improve the stability and accuracy of continuous FEM-based models. While realistic fabric can be simulated with existing techniques, adjusting the many parameters needed to achieve an authentic appearance for a particular fabric is time consuming. In this section, we first propose an FEM cloth model that approximates stretching behaviors stably. Then, we use a parameter optimization method to study real cloth samples of four kinds of materials, and apply the optimized parameters of the four materials to our stable FEM cloth model.
4.1. Stable FEM Cloth Model
We adopt the FEM for the in-plane elastic force in this study and use the nonlinear Green–Lagrange strain tensor as the strain metric. To avoid oscillations caused by the elastic force during the simulation and to accelerate convergence, the model applies a damping force to each particle according to the cloth properties.
It is known from elastic mechanics that the deformation caused by the internal tension of the cloth is described by the strain, and the tension is described by the stress. Assume that the adopted materials satisfy Hooke's law, that is, stress and strain satisfy the linear relationship shown in the following equation:
Among them, Young's modulus describes the ability of the material to resist deformation, and Poisson's ratio describes the proportional relationship between transverse and longitudinal strain. In this study, one set of coordinates represents points in the 2D initial material coordinate system, and another represents coordinates in the 3D world coordinate system during deformation. Within each triangle of the initial material coordinate system, any interior point can be represented linearly by the three vertices of the triangle, as shown in the following equation:
The deformed points are represented with the same weights:
To simplify the calculation, auxiliary edge variables can be introduced. The displacement can then be expressed from equations (2) and (3) as
From equation (4), the deformation gradient can be obtained, and the strain given by the Green–Lagrange tensor is
Expanding and simplifying equation (5) yields
The coefficient matrix here depends only on the rest configuration and can therefore be precalculated to speed up the simulation. From the theory of elastic mechanics, the energy per unit area is related to the stress and strain. Taking the overall energy of a triangular element as its energy density times its area, the force at each vertex of the triangle is the negative derivative of this energy with respect to the vertex position. Substituting equations (1) and (6) into equation (7) yields the vertex force expression.
In garment simulation, the elastic force tends to cause oscillation, and the simulation cannot stabilize quickly. To solve this problem, this study introduces a linear damping force model, in which the force is linearly related to the relative velocity of the particles:
Among them, the damping coefficient scales the relative velocity of the two particles. Introducing the damping force greatly accelerates the convergence of the system; the damping slows the rate of change of particle displacement, making the system more stable.
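The per-triangle force computation described above can be sketched in code. The following is a condensed re-derivation of an in-plane FEM force with the Green–Lagrange strain and a St. Venant–Kirchhoff material, plus the linear damping term; it is our illustrative reconstruction, not the authors' implementation, and the function names, argument layout, and plane-stress Lamé conversion are our own choices:

```python
import numpy as np

def triangle_forces(u, x, young, poisson, area=None):
    """In-plane FEM force of a single cloth triangle (illustrative sketch).

    u: (3, 2) rest (material) coordinates of the triangle vertices
    x: (3, 3) deformed world coordinates of the same vertices
    """
    Dm = np.column_stack((u[1] - u[0], u[2] - u[0]))   # 2x2 rest-edge matrix
    Ds = np.column_stack((x[1] - x[0], x[2] - x[0]))   # 3x2 world-edge matrix
    Dm_inv = np.linalg.inv(Dm)                         # depends only on rest shape: precomputable
    if area is None:
        area = 0.5 * abs(np.linalg.det(Dm))
    F = Ds @ Dm_inv                                    # deformation gradient (3x2)
    E = 0.5 * (F.T @ F - np.eye(2))                    # nonlinear Green-Lagrange strain
    # Lame coefficients from Young's modulus and Poisson's ratio (plane stress)
    mu = young / (2.0 * (1.0 + poisson))
    lam = young * poisson / (1.0 - poisson ** 2)
    S = 2.0 * mu * E + lam * np.trace(E) * np.eye(2)   # linear stress-strain law (eq. (1))
    P = F @ S                                          # first Piola-Kirchhoff stress
    H = -area * P @ Dm_inv.T                           # forces on vertices 1 and 2
    f1, f2 = H[:, 0], H[:, 1]
    f0 = -(f1 + f2)                                    # forces sum to zero (momentum conservation)
    return np.stack((f0, f1, f2))

def damping_force(v_i, v_j, c):
    """Linear damping force on particle i, proportional to the relative
    velocity of the connected particle pair (the stabilizing term above)."""
    return -c * (v_i - v_j)
```

An undeformed triangle produces zero force, and the three vertex forces always sum to zero, which is what makes the model stable under explicit integration together with the damping term.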
We adopt iterative optimization of related parameters to stitch the planar pattern and obtain the desired realistic 3D garment. The initial 2D garment placed around the human body is shown in Figure 2(a). During the stitching process, the garment is subject to internal and external forces; the clothing patterns can then be connected to the 3D mannequin as shown in Figure 2(b). Two fabric models were used for stitching: the plain FEM fabric model and the stable FEM fabric model. A comparison of the energy of the two models during stitching is shown in Figure 2(c): the plain FEM fabric model produced two oscillations before reaching a steady state, while the stable FEM fabric model reached a steady state earlier.
4.2. Cloth Parameter Optimization
A given model describes a particular piece of cloth once it has been fitted to measurement data, adjusting its parameters to minimize the difference between the model's predictions and the measured behavior. In this study, we do this by solving an optimization problem: measuring cloth behavior under controlled conditions and estimating cloth deformation models via an incremental parameter fitting procedure. Like other cloth testing systems, we focus on tensile forces, because compression forces in a sheet are hard to measure repeatably.
To obtain the stiffness parameters, we design experiments for the stretching, shearing, and bending behaviors of cloth, which cover a sufficient set of cloth behaviors. Square fabric samples of normalized size 25 cm × 25 cm are selected for the deformation tests, and four markers at the top corners are considered as points of action. We measure the behavior of the cloth, such as the positions of the markers.
We select four kinds of common fabrics for testing: silk, fleece, denim, and linen. They exhibit distinct elastic behaviors; for example, stretching has little effect on linen, likely due to its rigid structure. We first measured the tensile stretching, shearing, and bending force-elongation curves with our proposed method on the fleece sample, as shown in Figure 3.
Our optimization method finds optimal parameters by minimizing the difference between captured features and simulated features. The optimization formulation is as follows:
Among them, one quantity denotes the number of tests, another the shape features captured from each test, and another the corresponding simulated features, generated by cloth simulation with the three parameters of the given elastic model. The elastic model is discussed in Section 4.1.
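The fitting loop can be sketched as follows. This is a minimal illustration of the objective (sum of squared differences between captured and simulated features over all tests), assuming a hypothetical `simulate` callable that stands in for a full cloth simulation run; a real system would use a continuous optimizer rather than enumerating a candidate list:

```python
import numpy as np

def fit_cloth_parameters(captured, simulate, candidates):
    """Pick the parameter triple (e.g., stretch, shear, bend stiffness)
    whose simulated features best match the captured features.

    captured:   list of feature vectors, one per test
    simulate:   hypothetical callable, params -> list of feature vectors
    candidates: iterable of parameter triples to evaluate
    """
    best, best_err = None, np.inf
    for params in candidates:
        simulated = simulate(params)
        # Sum of squared feature differences over all n tests
        err = sum(np.sum((np.asarray(c) - np.asarray(s)) ** 2)
                  for c, s in zip(captured, simulated))
        if err < best_err:
            best, best_err = params, err
    return best, best_err
```

In the incremental procedure described below, each deformation mode (stretch, shear, bend) would contribute its own tests to `captured`, constraining one parameter at a time.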
The parameter optimization tests were performed with the stable FEM cloth model, and we compared the simulated cloth motion with the measured real fabric motion via incremental parameter fitting. The cloth parameters of the four fabrics were obtained after these tests; for example, the three parameters of silk in the stable FEM cloth simulator are 1.0e − 5, 50, and 50, respectively, and the detailed cloth parameters are listed in Table 2. Figure 4 shows the simulation results for the four kinds of fabrics, which achieve a strong sense of realism.

5. Recovery of 3D Human Model from a Single Image
The recovery of 3D human pose from 2D is critical in many applications. However, most existing techniques for human shape recovery rely on multi-view images and are insufficient to constrain a 3D shape from a single image. Thus, we provide a fully automatic method to recover a 3D human model, in both shape and pose, from a single 2D image. First, we construct a human model dataset using a 3D generative model called SMPL. Then, we combine CNN-based image retrieval and a skeleton-based template matching method to match the shape and pose in the human model dataset. As a final step, we synthesize the recovered human body with a 3D face.
5.1. Human Model Dataset
To reconstruct the 3D human model from a single image, we prepare a human model dataset in advance. In this study, we employ the SMPL model introduced by Loper et al. [24]. SMPL defines a function of a shape parameter and a pose parameter, together with fixed learned model parameters. Given the rest shape determined by the shape parameters, SMPL defines pose-dependent deformations and uses the pose parameters to produce the final output mesh, a body mesh with a fixed number of vertices. SMPL is gender specific, distinguishing the shape spaces of females and males. By adjusting the shape and pose parameters, it generates a male 3D human body set and a female 3D human body set, each containing 1000 models. Figure 5 shows the generation of the human model dataset from the two parameters: the upper row shows human models generated by varying the pose parameter, and the lower row shows human models generated by varying the shape parameter.
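The dataset generation step can be illustrated with the core idea behind SMPL's shape space: each body is the template mesh plus a blend of per-vertex shape displacement directions weighted by sampled shape parameters. The sketch below shows only this linear blend-shape idea and is not the SMPL implementation (which additionally applies pose-dependent deformations and linear blend skinning); all names here are our own:

```python
import numpy as np

def generate_body_set(template, shape_dirs, n_models, n_betas=10, seed=0):
    """Span a body dataset from a linear shape space (schematic).

    template:   (V, 3) rest-mesh vertices
    shape_dirs: (V, 3, n_betas) shape displacement basis
    """
    rng = np.random.default_rng(seed)
    bodies = []
    for _ in range(n_models):
        beta = rng.standard_normal(n_betas)        # sampled shape parameters
        bodies.append(template + shape_dirs @ beta)  # template + shape blend
    return bodies
```

Running this once per gender with 1000 samples would yield body sets of the size used in our dataset.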
5.2. Recovery of 3D Human Model
When recovering the 3D model closest to the 2D human in an image, CNN-based image retrieval alone cannot accurately estimate the 3D human body model because of the particularity of human body posture, while a skeleton-based template matching method alone requires a pose comparison with every 3D body in the human model dataset, which takes a long time. In this study, we therefore combine CNN-based image retrieval with skeleton-based template matching, which recovers the 3D human body in the input image accurately and quickly enough to meet the requirements of the system. The system takes a single image and its semantics (male or female) as input and generates the 3D human body mesh as output. The detailed recovery process is shown in Figure 6, and its steps are as follows:
Preprocess: for each human body in the human body dataset, a 2D image of its front side is obtained. Semantic segmentation is performed on this 2D image to generate a 2D image segmentation dataset. Each image in the dataset is binarized, with the human body marked in white and the non-human parts marked in black.
Step 1: semantic segmentation is performed on the input image to obtain the segmented image. Then, CNN-based image retrieval is used to find the top 20 candidate images.
Step 2: the human skeleton of the input image is obtained, and skeleton-based template matching is performed against each of the candidate images.
5.2.1. CNNBased Image Retrieval
To recover the 3D model closest to the 2D human in the input image, it is essential to find the model closest in shape to the 2D image. CNN-based image retrieval helps us retrieve the candidate images whose content is most similar to the query image. The key concern in CNN-based image retrieval is feature extraction: the features extracted from the database must remain detectable under changes in image scale, noise, and illumination.
We adopt the popular pretrained CNN model VGG16, which provides discriminative representations for object recognition and serves as the base model for image retrieval [25]. Our training dataset is the 2D image segmentation dataset, and we calculate the Euclidean distance between the query image feature and each image feature in the training database; the smaller the distance, the higher the correlation. As shown in Figure 6, the input image yields a set of output candidate images, and the 3D human bodies corresponding to those candidates in the human body dataset are the ones most similar in shape to the human body in the input image.
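The ranking step just described can be sketched as a nearest-neighbor search over feature vectors. Feature extraction itself (e.g., an activation of a VGG16 layer) is assumed to have been done elsewhere; this sketch only shows the Euclidean-distance ranking:

```python
import numpy as np

def retrieve_top_k(query_feat, db_feats, k=20):
    """Rank database images by Euclidean distance between CNN feature
    vectors and return the indices and distances of the top-k matches.

    query_feat: (D,) feature vector of the query image
    db_feats:   (N, D) feature vectors of the database images
    """
    diffs = db_feats - query_feat             # broadcast over database rows
    dists = np.linalg.norm(diffs, axis=1)     # Euclidean distance per image
    order = np.argsort(dists)                 # smaller distance = more similar
    return order[:k], dists[order[:k]]
```

With k = 20 this produces the candidate set handed to the skeleton-based matching stage.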
5.2.2. SkeletonBased Template Matching Method
To recover the 3D human model most similar in pose to the input image, we adopt a skeleton-based template matching method. The human model corresponding to the candidate whose skeleton best matches that of the input image is the final recovered 3D human model. We first detect the mannequin bones and label them. The skeleton-based template matching algorithm is as follows:
Among them, two quantities drive the matching: the distance between pairs of skeleton vertices and the angle formed by pairs of vertices with a coordinate axis. A weight balances the two terms; we set it to 0.5, which enhances matching accuracy. Figure 7(a) shows part of the skeleton displayed in the coordinate system, and Figure 7(b) shows the whole skeleton of a human body. As seen in Figure 6, one candidate is most similar in shape to the human body in the input image, another best matches in both shape and pose, and the 3D human model corresponding to the latter is the final recovery of the 3D human, in shape and pose, from the input image.
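A matching score of this kind can be sketched as follows. Since the original algorithm listing is not reproduced above, the exact scoring formula here is our reconstruction: per bone, it compares bone length and the bone's angle to the vertical axis, blended with the weight 0.5 mentioned in the text; lower scores indicate a better pose match:

```python
import numpy as np

def skeleton_match_score(joints_a, joints_b, bones, w=0.5):
    """Blend of per-bone length and angle differences (illustrative sketch).

    joints_a, joints_b: (J, 2) joint positions of the two skeletons
    bones:              list of (i, j) joint index pairs defining bones
    w:                  weight between distance and angle terms (0.5 here)
    """
    score = 0.0
    for i, j in bones:
        va = joints_a[j] - joints_a[i]
        vb = joints_b[j] - joints_b[i]
        d_len = abs(np.linalg.norm(va) - np.linalg.norm(vb))
        # Angle of each bone against the vertical axis
        ang_a = np.arctan2(va[0], va[1])
        ang_b = np.arctan2(vb[0], vb[1])
        d_ang = abs(ang_a - ang_b)
        score += w * d_len + (1.0 - w) * d_ang
    return score
```

Evaluating this score against each of the 20 candidates and taking the minimum selects the final recovered model.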
5.3. Synthesis of Human Body and 3D Face
From Section 5.2, we obtain the 3D human model for the input image. To enrich the human body details, we synthesize the human body with a 3D face. First, we adopt the 3D face reconstruction method of [26] to obtain the 3D face of the input image; this yields 3D vertices and corresponding colors from a single image. The results are saved as mesh data in obj format, which can be opened with MeshLab or Microsoft 3D Builder. Then, the human body and 3D face are synthesized using a 3D spatial geometric transformation.
In 3D space, when the human body and the 3D face are synthesized, the human body is kept fixed as the reference object and the spatial transformation is applied to the 3D face. The transformation combines a rotation matrix, a scaling of the vertices in 3D space, and a translation.
The synthesis process is shown in Figure 8: the human body recovered as in Section 5.2, the 3D face without texture, and the synthesis result of the human model and the 3D face. The input image for the 3D face reconstruction is the one shown in Figure 6.
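The face-to-body alignment can be sketched as a scale, rotation, and translation applied to the face vertices while the body stays fixed. This is an illustration of the transformation described above, with a uniform scale for simplicity (the paper's scaling matrix may be more general):

```python
import numpy as np

def align_face_to_body(face_verts, scale, rotation, translation):
    """Apply scale, then rotation, then translation to the face mesh.

    face_verts:  (V, 3) face vertices
    scale:       uniform scale factor (simplifying assumption)
    rotation:    (3, 3) rotation matrix
    translation: (3,) translation vector onto the body's head region
    """
    return (scale * face_verts) @ rotation.T + translation
```

For example, scaling by 2, rotating 90 degrees about the z axis, and translating by one unit along z moves the vertex (1, 0, 0) to (0, 2, 1).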
6. Applications
With the stable FEM cloth model proposed in Section 4, the parameters of four cloth materials are optimized and realistic fabric simulations of the different materials can be obtained. Using the human model recovery method of Section 5, a 3D human body with enriched face details can be achieved. These results enable two applications: displaying realistic fabrics of different materials on the recovered 3D human body (Section 6.1), and image-based virtual try-on (Section 6.2).
6.1. Fabrics on Recovered Human Model
Displaying various fabrics on a customized character allows shoppers to visualize the effect of trying on those fabrics themselves, a promising application of virtual try-on. In this section, we show garments made of the four fabrics and try them on the customized human model.
Our initial 3D garments are created from 2D panels. The 2D panels are triangulated using a Delaunay algorithm, since we use triangular meshes to represent garment surfaces, and seam lines are explicitly specified by choosing pairs of panel boundary edges. We apply the cloth parameters obtained in Section 4.2 to our stable FEM cloth model in advance. The designed garments are assembled and linked by seam lines to simulate clothing behavior on the 3D mannequin. By applying an elastic force along the seams, the two clothing pieces of a pattern are attached to each other during the sewing process, and the clothing patterns are thereby connected to the 3D mannequin. After several seconds, the virtual human is dressed in the 3D garment.
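The sewing step can be sketched as zero-rest-length springs pulling matched seam points together. This is an illustrative fragment under our own simplifications (explicit integration, unit masses, no cloth-internal forces or collisions, which the full simulator also applies):

```python
import numpy as np

def stitch_step(panel_a_pts, panel_b_pts, stiffness, dt, mass=1.0):
    """One explicit integration step of the seam-attachment force.

    panel_a_pts, panel_b_pts: (S, 3) matched points on paired seam edges
    """
    # Zero-rest-length spring force pulling each matched pair together
    f = stiffness * (panel_b_pts - panel_a_pts)
    panel_a_pts = panel_a_pts + (dt ** 2 / mass) * f
    panel_b_pts = panel_b_pts - (dt ** 2 / mass) * f
    return panel_a_pts, panel_b_pts
```

Iterating such steps shrinks the gap between the paired boundary edges until the panels meet, at which point the seam constraint can be made permanent.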
Figure 9 shows the garments made of the four fabrics dressed on the custom character. The fabrics in Figures 9(a)–9(d) are silk, fleece, denim, and linen, respectively. The display results show that the four fabrics take on different shapes on the human body, despite some similarities: for example, silk is more stretchy than fleece, and denim does not tend to form small wrinkles because it is stiff in bending.
6.2. ImageBased Virtual TryOn
Given a single image, our goal is to reconstruct the 3D geometry and texture of a clothed human while preserving the detail present in the image. Existing state-of-the-art virtual try-on systems require a depth camera for tracking and overlay the human body with the fitted garment. Saito et al. [27] introduced a pixel-aligned implicit function, which spatially aligns the pixel-level information of the input image with the shape of the 3D object, for deep-learning-based 3D shape and texture inference of clothed humans from a single input image; however, this method must train an encoder to learn individual feature vectors for each pixel of an image, which is time-consuming. Our proposed method fits the human body from a single 2D image by dressing the recovered human body in the designed garments, and it can be applied directly to image-based virtual try-on: we first recover the pose and shape of the human body from the single-view image and then dress the recovered body in the designed garments.
One image-based virtual try-on result is shown in Figure 10. Figure 10(a) shows the input image, and Figure 10(b) shows the recovered human body and garment. The recovered human body has enriched face details based on Figure 10(a), and the recovered garment has the same material as the garment in Figure 10(a).
7. Experimental Results
All of our experiments were run on a machine with a 3.4 GHz AMD Phenom II X4 965 processor, 4 GB of RAM, and an NVIDIA GTX 260 graphics card. For cloth simulation, we use the stable FEM cloth model with continuous collision detection and constraint-based collision response, and we implement parallel virtual garment display on the custom character in CUDA.
7.1. Comparison with Existing Methods
Compared with previous human model recovery methods, our proposed method has several advantages: (1) it requires no visual cues, such as shading, internal edges, or silhouettes, to aid fitting of the template model; (2) it needs no strong priors, such as a manually defined skeleton or body parts, which facilitates accurate recovery of the human model; and (3) it achieves enriched face details. Comparison results on a human image are shown in Figure 11. Figure 11(a) shows the input human image, and Figures 11(b)–11(c) show the 3D human models recovered by the shape inference method of [21] and by our proposed method, respectively. The human body in Figure 11(c) is smoother than that in Figure 11(b) and has richer face details.
Another comparison on a human image is shown in Figure 12. Kanazawa et al. [28] described Human Mesh Recovery for reconstructing a full 3D mesh of a human body from a single RGB image; they use the generative human body model SMPL, which parameterizes the mesh by 3D joint angles and a low-dimensional linear shape space. Figure 12(a) shows the input human image, and Figures 12(b)–12(c) show the 3D human models recovered by the Human Mesh Recovery method [28] and by our proposed method, respectively. The human body in Figure 12(c) is more accurate in pose than that in Figure 12(b) and has richer face details.
Figure 13(a) shows another input human image; Figures 13(b)–13(c) show the 3D human models recovered by the Human Mesh Recovery method [28] and by our proposed method, respectively. The human body in Figure 13(c) is clearly more accurate in pose than that in Figure 13(b) and has richer face details. Although our method outperforms the Human Mesh Recovery method here, its accuracy depends on the size of the human model dataset, and preparing a large human model dataset in advance is somewhat tedious.
We also compare image-based virtual try-on results. Yang et al. [29] achieved detailed garment recovery from a single-view image: to construct an accurate body model, the user indicates 14 joint positions on the image and provides a rough sketch outlining the human body silhouette; the method fits a 3D garment template's surface mesh onto the human body to obtain the initial 3D garment and then jointly optimizes the material parameters, body shape, and pose to obtain the final result. Compared with this image-based method [29], our proposed method has two advantages: (1) it considers the fabric material, which makes the garment look more realistic, and (2) the human model has facial details, which makes the dressed human look natural. Comparison results on garment recovery and repurposing are shown in Figure 14. Figure 14(a) shows the original image; Figures 14(b)–14(c) show the garment recovered on the human body by the image-based modeling method [29] and by our proposed method, respectively.
7.2. Draped Garment in a Wind Environment
To display the dynamic behavior of the fabric, we added wind to the natural environment. When the human body stands in wind, the cloth swings to an extent determined by the strength and direction of the wind. To simulate a natural wind-blowing effect, the direction and magnitude of the wind must be sampled randomly to produce irregular gusts. We selected the garment of linen material for this virtual try-on. Figure 15(a) shows the draping simulation after seaming in a natural environment without wind; Figure 15(b) shows the dressed garment on the human body in the wind environment.
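The irregular wind sampling described in Section 7.2 can be sketched as a per-step force whose direction and magnitude are randomly perturbed. The function names, the gust model, and the drag constant here are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

rng = np.random.default_rng(42)

def irregular_wind(t, base_dir=np.array([1.0, 0.0, 0.0]), base_speed=3.0):
    """Wind velocity at time t: the base direction is jittered and
    renormalized, and the speed is modulated by a sinusoidal gust plus
    random noise, giving an irregular, non-repeating wind."""
    direction = base_dir + rng.normal(scale=0.3, size=3)
    direction /= np.linalg.norm(direction)
    speed = base_speed * (1.0 + 0.5 * np.sin(2.0 * t)) + rng.normal(scale=0.5)
    return speed * direction

def apply_wind(vertex_normals, t, drag=0.05):
    """Per-vertex wind force ~ drag * (wind . normal) * normal, so cloth
    faces oriented toward the wind catch more of it."""
    wind = irregular_wind(t)
    dots = vertex_normals @ wind                     # (V,) projected wind
    return drag * dots[:, None] * vertex_normals     # (V, 3) forces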
8. Conclusion and Future Work
We have presented a procedural method that exhibits realistic garments on a recovered 3D human model. It allows a virtual human model in a general pose to put on a garment for displaying realistic fabrics. A stable FEM cloth model is proposed, and the stiffness parameters of the elastic models are obtained from stretching measurements and a parameter optimization method. We show four simulated fabrics draped on the recovered 3D human model. The method is easy to implement and significantly improves the fidelity of virtual garments, which shows that our system has great potential value in commercial applications such as virtual dressing and in interactive applications such as VR games.
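As a rough illustration of the parameter optimization step, the following sketch fits the stiffness parameters of a simple nonlinear elastic model to stretch measurements. The data points, the two-parameter `model_force` form, and its coefficients are all hypothetical, chosen only to show the fitting procedure:

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical measured stretch data for one fabric sample:
# strain (dimensionless) vs. tensile force (N).
strain = np.array([0.00, 0.02, 0.04, 0.06, 0.08, 0.10])
force  = np.array([0.00, 0.41, 0.85, 1.32, 1.83, 2.38])

def model_force(params, strain):
    # Simple nonlinear elastic model: linear stiffness k plus a cubic
    # term c capturing the strain hardening typical of woven fabrics.
    k, c = params
    return k * strain + c * strain**3

def residuals(params):
    # Misfit between the model prediction and the measured forces.
    return model_force(params, strain) - force

# Least-squares fit of (k, c) to the measurements.
fit = least_squares(residuals, x0=[1.0, 1.0])
k_fit, c_fit = fit.x
```

The same pattern extends to shear and bending stiffness, with one residual vector per measured deformation mode.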
Our method has some limitations, which point out the direction of our future research. The first limitation is that we have found the stiffness parameters of the elastic model for only four real fabrics; the parameters for more fabrics remain to be studied. Another limitation is that the accuracy of the recovered human body depends on the human model dataset. In the future, we will work to remove these two limitations so that we can obtain more reasonable and realistic garments immediately.
Data Availability
The research data used to support the findings of this study are available from the corresponding author upon request (corresponding author: Yanjun Peng, email: pengyanjuncn@163.com).
Conflicts of Interest
The authors declare that there are no conflicts of interest.
Acknowledgments
The authors would like to thank Prof. Zhigeng Pan at Hangzhou Normal University for his valuable comments and Assoc. Prof. Mingmin Zhang at Zhejiang University for discussion. This work was supported by the National Natural Science Foundation of China under Grant no. 61976126, the National Key Research and Development Program of China under Grant no. 2018YFB1004902, the Natural Science Foundation of Shandong Province under Grant no. ZR2019MF003, and the Natural Science Foundation of Shandong Province under Grant no. ZR2017FM054.
References
[1] M. Macklin, M. Müller, N. Chentanez, and T.-Y. Kim, “Unified particle physics for real-time applications,” ACM Transactions on Graphics, vol. 33, no. 4, pp. 1–12, 2014.
[2] P. Volino and N. Magnenat-Thalmann, “Implicit midpoint integration and adaptive damping for efficient cloth simulation,” Computer Animation & Virtual Worlds, vol. 16, no. 3-4, pp. 163–175, 2005.
[3] R. Goldenthal, D. Harmon, R. Fattal, M. Bercovier, and E. Grinspun, “Efficient simulation of inextensible cloth,” ACM Transactions on Graphics, vol. 26, no. 3, p. 49, 2007.
[4] T. T. Liu, A. W. Bargteil, J. F. O’Brien, and L. Kavan, “Fast simulation of mass-spring systems,” ACM Transactions on Graphics, vol. 32, no. 6, pp. 1–7, 2013.
[5] O. Etzmuss, M. Keckeisen, and W. Strasser, “A fast finite element solution for cloth modeling,” in Proceedings of the 11th Pacific Conference on Computer Graphics and Applications, Canmore, Alberta, Canada, October 2003.
[6] J. Bender and C. Deul, “Adaptive cloth simulation using corotational finite elements,” Computers & Graphics, vol. 37, no. 7, pp. 820–829, 2013.
[7] B. Thomaszewski, M. Wacker, and W. Straßer, “A consistent bending model for cloth simulation with corotational subdivision finite elements,” in Proceedings of the ACM SIGGRAPH Symposium on Computer Animation, Vienna, Austria, September 2006.
[8] P. Volino, N. Magnenat-Thalmann, and F. Faure, “A simple approach to nonlinear tensile stiffness for accurate cloth simulation,” ACM Transactions on Graphics, vol. 28, no. 4, pp. 1–16, 2009.
[9] N. Umetani, D. M. Kaufman, T. Igarashi, and E. Grinspun, “Sensitive couture for interactive garment modeling and editing,” ACM Transactions on Graphics, vol. 30, no. 4, pp. 1–9, 2011.
[10] A. Bartle, A. Sheffer, V. G. Kim, D. M. Kaufman, N. Vining, and F. Berthouzoz, “Physics-driven pattern adjustment for direct 3D garment editing,” ACM Transactions on Graphics, vol. 35, no. 4, pp. 1–11, 2016.
[11] G. Cho, N. Lim, and Y. Yang, “Integrated graphical presentation of fabric sound and mechanical properties,” Fibers and Polymers, vol. 11, no. 3, pp. 516–520, 2010.
[12] M. Akgun, “Assessment of the surface roughness of cotton fabrics through different yarn and fabric structural properties,” Fibers and Polymers, vol. 15, no. 2, pp. 405–413, 2014.
[13] K. L. Bouman, B. Xiao, P. Battaglia, and W. T. Freeman, “Estimating the material properties of fabric from video,” in Proceedings of the International Conference on Computer Vision, Sydney, NSW, Australia, December 2013.
[14] S. Yang, J. B. Liang, and M. C. Lin, “Learning-based cloth material recovery from video,” in Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, October 2017.
[15] H. M. Wang, J. F. O’Brien, and R. Ramamoorthi, “Data-driven elastic models for cloth: modeling and measurement,” ACM Transactions on Graphics, vol. 30, no. 4, p. 1, 2011.
[16] E. Miguel, D. Bradley, B. Thomaszewski et al., “Data-driven estimation of cloth simulation models,” Computer Graphics Forum, vol. 31, no. 2, pp. 519–528, 2012.
[17] A. Baak, M. Müller, G. Bharaj et al., “A data-driven approach for real-time full body pose reconstruction from a depth camera,” in Proceedings of the International Conference on Computer Vision, pp. 1092–1099, Barcelona, Spain, November 2011.
[18] V. Ganapathi, C. Plagemann, D. Koller, and S. Thrun, “Real time motion capture using a single time-of-flight camera,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, San Francisco, CA, USA, June 2010.
[19] A. O. Balan, L. Sigal, M. J. Black et al., “Detailed human shape and pose from images,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–8, Minneapolis, MN, USA, June 2007.
[20] A. Jain, T. Thormählen, H. P. Seidel et al., “MovieReshape: tracking and reshaping of humans in videos,” ACM Transactions on Graphics, vol. 29, no. 6, p. 1, 2010.
[21] Y. Chen, T. K. Kim, and R. Cipolla, “Inferring 3D shapes and deformations from single views,” in Proceedings of the European Conference on Computer Vision, pp. 300–313, Crete, Greece, September 2010.
[22] P. Guan, A. Weiss, A. Balan, and M. J. Black, “Estimating human shape and pose from a single image,” in Proceedings of the International Conference on Computer Vision, pp. 1381–1388, Kyoto, Japan, October 2009.
[23] F. Bogo, A. Kanazawa, C. Lassner et al., “Keep it SMPL: automatic estimation of 3D human pose and shape from a single image,” in Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, October 2016.
[24] M. Loper, N. Mahmood, J. Romero, G. Pons-Moll, and M. J. Black, “SMPL: a skinned multi-person linear model,” ACM Transactions on Graphics, vol. 34, no. 6, pp. 248:1–248:16, 2015.
[25] W. Yu, K. Y. Yang, H. X. Yao, X. Sun, and P. Xu, “Exploiting the complementary strengths of multi-layer CNN features for image retrieval,” Neurocomputing, vol. 237, pp. 235–241, 2016.
[26] Y. Feng, F. Wu, X. H. Shao et al., “Joint 3D face reconstruction and dense alignment with position map regression network,” in Proceedings of the European Conference on Computer Vision, Munich, Germany, September 2018.
[27] S. Saito, Z. Huang, R. Natsume et al., “PIFu: pixel-aligned implicit function for high-resolution clothed human digitization,” 2019, https://arxiv.org/abs/1905.05172.
[28] A. Kanazawa, M. J. Black, D. W. Jacobs et al., “End-to-end recovery of human shape and pose,” 2018, https://arxiv.org/abs/1712.06584.
[29] S. Yang, T. Amert, Z. R. Pan et al., “Detailed garment recovery from a single-view image,” 2016, https://arxiv.org/abs/1608.01250.
Copyright
Copyright © 2019 Yuxiang Zhu and Yanjun Peng. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.