Discrete Dynamics in Nature and Society


Research Article | Open Access

Volume 2016 | Article ID 3849153 | 10 pages | https://doi.org/10.1155/2016/3849153

A Study of Online Review Promptness in a B2C System

Academic Editor: Francisco R. Villatoro
Received: 29 Mar 2016
Revised: 20 Jun 2016
Accepted: 26 Jun 2016
Published: 26 Jul 2016

Abstract

Web 2.0 technologies have attracted an increasing number of active online writers and viewers. A deeper understanding of when customers review and what motivates them to write online reviews is of both theoretical and practical significance. In this paper, we present a novel methodological framework, consisting of theoretical modeling and text-mining technologies, to study the relationships among customers' review promptness, their review opinions, and their review motivations. We first study customers' online "purchase-review" behavior dynamics; then, we introduce the LDA method to mine customers' opinions from their review text; finally, we propose a theoretical model to explore the motivations of people who publish reviews online. The analytical and experimental results with real data from a Chinese B2C website demonstrate that the dynamics of customers' online review behavior are influenced by multidimensional motivations, some of which can be observed from review behaviors such as review promptness.

1. Introduction

An online customer review is a review written by a customer who has purchased a product or service online. It is a form of customer feedback on e-commerce and online shopping sites. Online reviews have become an important channel that provides both consumers and producers with product information and recommendations from a customer's perspective [1, 2].

To extract reliable information from the massive and varied body of online reviews, several interesting questions need to be answered properly. First, it is surprising that B2C websites such as https://www.amazon.com/ and http://360buy.com/ have collected so many reviews, since reviewing is voluntary and takes considerable time and creative effort [3, 4]. Second, quite a lot of online review behavior is a form of emotional expression [5]; consequently, consumers comment not only on the products they purchased, but also on everything they experienced during the whole process of online shopping. Third, certain reviews are written casually by reluctant users with a lazy attitude. Therefore, we should make clear what motivates people to review online.

In the literature, the classic characterization of motivations as broadly extrinsic and intrinsic has been used to discuss the motivations for contributing to online communities [6]. Intrinsic motivations come from the pleasure of an activity itself, such as the pleasure of writing. Extrinsic motivations come from beyond the activity itself, such as status, financial reward, or social influence. However, reviewers do not explain why they post a particular review online, especially for extrinsic reasons, so it is hard to explore customers' exact review motivations directly. Fortunately, part of the reviewers' motivations can be observed through their review behavior, such as review quality (posting a long or short review), promptness (posting a quick or lazy review), and attitude (posting actively or passively) [7]. Different from most previous research, our focus is on "why did people publish a review like that?" In other words, what leads to such an online review? In detail, why did people review their online shopping experience with such promptness, such a score, and such content (aspects and words)? It is therefore interesting to explore when people will publish a review online if they gain an extrinsic or intrinsic reward.

In addition, for the valuable reviews published on a B2C website, people may be further interested in exploring what those reviews talk about. However, the task of extracting information (opinions) from online reviews is increasingly hampered by the so-called information overload [1] and data sparsity [8]; information overload refers to the difficulty a person has in understanding an issue and making decisions in the presence of overly abundant and varied information. In particular, information overload is becoming more pervasive as B2C e-commerce activities grow more popular. Table 1 shows a statistical sample demonstrating the information overload of online product reviews on https://www.amazon.com/ and http://360buy.com/. It is impossible for ordinary consumers to find helpful (interesting) information in such a huge data set. Thus, another research question concerns the exact opinions embedded in these reviews, especially when the reviews are associated with specific review motivations.


B2C system | Product | # of reviews

https://www.amazon.com/ | Kindle Keyboard 3G | 36,112
https://www.amazon.com/ | Kindle Fire, Wi-Fi | 15,692
https://www.amazon.com/ | The Hunger Games | 5,524
http://360buy.com/ | TP-LINK TL-WR841N 300M | 100,687
http://360buy.com/ | TP-LINK WR340G+ 54M | 64,308
http://360buy.com/ | Philips HQ912 | 57,997

In the literature, quite a few research results have shown evidence of the existence of review motivations [6, 9]. However, few of them have focused on customers' review behavior, particularly the relations among customers' review promptness (when they review online), their review opinions (what they talk about), and their review motivations (why they write reviews). In this work, we present a methodological framework to address these research questions. The main contributions of this paper lie in three aspects. First, we study customers' online behavior dynamics by exploring the distribution of the "purchase-review" time interval; second, we introduce an LDA-based method for mining opinions from online reviews that vary in length, detail, and quality; finally, we present a theoretical model to study the relationships between customers' review promptness and some of their potential motivations.

The rest of this paper is organized as follows. Section 2 presents the theoretical background and hypotheses. Section 3 presents the data and sketches the methodology in detail. Section 4 discusses the empirical results for the hypotheses. Section 5 concludes the work.

2. Theoretical Background and Hypotheses

2.1. Research Framework

In general, a typical online shopping experience has several steps: first, people buy a product online; then, they experience the product delivery and quality (function); finally, a review motivation is generated and the review contents are posted online (see Figure 1).

As Figure 1 shows, the review behavior, especially its promptness, is affected by the business processes experienced by a customer (who then becomes a reviewer). Generally, the review score and text are results affected directly by the motivation, and publishing the review online is the final action. In this study, we aim to design a framework to explore how review motivations result in a given review promptness. First, we use a web crawler to extract raw data from the target B2C website and split the crawled raw data into three types: the purchase and review times, the review text, and the remainder as other information. Then, we perform "purchase-review" behavior dynamics analysis and opinion mining on the time information and the review text, respectively. Finally, in combination with the supplementary information, we propose a theoretical model to explore the motivations of reviewers.

The whole research process is shown in Figure 2. As we can see, "purchase-review" behavior dynamics analysis, opinion mining, and theoretical modeling are the three key processes; the ultimate goal of this work is to understand reviewers' motivations.

2.2. Online Behavior Dynamics

Human behavior dynamics deals with the effects of multiple causal forces in human behavior, including network interactions, groups, social movements, and historical transitions, among many other concerns [10]. Empirical studies on web browsing [11], online review communities [12, 13], online music listening [14], online instant messaging [15], and online microblog replying [16] found that the time interval between two consecutive reviews on the same topic, known as the interevent time, followed a power-law distribution.

Since online communities bring together individuals with shared interest in joint action or sustained interaction, a very recent work presented by Johnson et al. [17] has studied the formation of power-law distributions via the mechanisms of preferential attachment.

Although online review has become popular in B2C systems, little effort has been made to examine the dynamic aspects of online opinion formation. It is worth mentioning that Wu and Huberman studied the dynamics of online opinion formation by analyzing the temporal evolution of very large sets of users' views [18]. These studies differ from ours, because our work tries to understand the formation of customers' "purchase-review" dynamics under the influence of various factors.

2.3. Customers Review Motivation

Previously, there have also been research findings about the motivations for posting reviews online [19]. The classic characterization of motivation as broadly extrinsic or intrinsic was used to discuss motivations to contribute to online communities [6]. We believe that reviews are often written voluntarily, but it is interesting to find out whether people write a review more readily when they gain an extrinsic reward [3]; these forms of motivation were echoed by Brown's reviewers, who expressed a mix of both intrinsic and extrinsic motivations [20]. In [21], Picazo-Vela et al. provided a partial understanding of the factors that determine an individual's intention to provide an online review.

Observations of actual review behavior, even from reviewers who write only occasionally, give us some immediate reasons why they might want to write a review [20]. For example, if one has been offended or staff have been rude during the online shopping process, people may react quickly to express a grievance or warn others. Therefore, based on these motivation-related theories, we propose the following hypothesis.

Hypothesis 1. Rating polarity reflects an extreme attitude and has a positive impact on review promptness.

2.4. Social Exchange Theory

Social exchange theory is one of the basic theories of social economics [22]; it tries to explain individual behavior in terms of participation in the exchange of resources. The resources obtained from social exchange, or the positive results of social exchange, are regarded as benefits, and their opposites are regarded as costs. According to social exchange theory, the principle of individual behavior is to pursue maximum benefit at minimum cost [23].

According to the theory, if one person provides advice based on his or her knowledge, then he or she expects certain types of social rewards, such as approval, respect, or increased status in the eyes of the other individuals [24]. Thus, reciprocity is a central concept in social exchange theory. Specifically, this kind of exchange behavior would stop when benefits were not mutual.

Current online shopping websites introduce membership and membership-level management strategies to provide customers not only with reputation in online review communities but also with incentives to post online reviews. In general, customers with high membership levels enjoy higher levels of service, such as product discounts. Fu and Wang argued that, in practice, shopping sites adopting incentive and membership-level management strategies may encourage reviewers to post more positive online reviews [25]. It is therefore reasonable to hypothesize the following.

Hypothesis 2. Membership level reflects extrinsic motivations and has a negative impact on users' review promptness.

2.5. Opinion Mining

Opinion mining, also known as sentiment analysis [5], plays an important role in online business. The basic technology used in opinion mining is text mining [26], which is used to derive insights from user-generated content and originated primarily in the computer science literature [5, 27]. Accordingly, previous text-mining approaches focused on automatically extracting opinions from reviews [28].

Opinion summarization [27] is the task of producing a sentiment summary. This method differs from traditional text summarization, which involves reducing a larger corpus of multiple documents into a short paragraph conveying the meaning of the text. This approach tracks features or objects on which customers have opinions. In some real applications, readers are often interested not only in the general sentiment towards an online item but also in a detailed opinion or analysis of each aspect of the item. These considerations underline the need to detect interesting aspects in an online review data set by, for example, extracting the reviewed features [29]. On one hand, these methods can be used to extract product features automatically from review text. On the other hand, some aspects (topics) are very hard to extract and the results are also hard to understand [30].

Feature extraction involves reducing the resources required to describe a large data set accurately, often relying on early expert annotation [5]. For example, in [31], a feature-based ranking technique was presented to mine customer reviews. In the past several years, several probabilistic graphical models have been proposed to address the aspect-based opinion mining problem [32], which aims to extract aspects and their corresponding ratings from customer reviews. In particular, in [30], the authors presented a method that looks into the text to extract features impossible to observe from a simple numeric rating.

In this study, we move beyond mining opinions only and seek to explain how review motivations affect customers' behavior dynamics and review contents, based on observations of actual review behavior. Due to the nature of virtual communities, the "online attractiveness" of reviewers, such as a reviewer's online social status, plays a role in source credibility [33]. In the context of online reviews, reviewers with online attractiveness are more competent and more likely to be recognized than ordinary users in the virtual community. Taking social exchange theory [22] into consideration, such reviewers tend to quickly share true quality and function information with others. Thus, we propose the following hypothesis.

Hypothesis 3. Product-related review contents (quality and function) have a positive impact on review promptness.

In addition, Cho et al. showed that the performance of an e-commerce platform (online shopping system) can also be an object of review [34]. It is reasonable to treat service as a short-term experience in the transaction process. Recent reviews affect customers' attributions of controllability for service delivery, with negative reviews exerting an unfavorable influence on consumers' perceptions [35]. We propose the following hypothesis about service.

Hypothesis 4. Service-related review contents (cost and service) have a negative impact on review promptness.

Based on these hypotheses, a theoretical model is shown in Figure 3. Except for previous work on the effect of membership level on review promptness [25], to the best of our knowledge, no studies have attempted to understand customers' review motivations and opinions with respect to their review promptness.

3. Data Set and Variables

3.1. Data Collection

The customer review data used here were extracted from the http://360buy.com/ website, one of the most famous B2C online shopping malls in China. The data set covers mainly one product category, "laptop and pad" (12 unique products). We collected the product and review information for all online goods using a web page crawler. Altogether, 34,504 reviews posted from 2008-11-06 to 2012-12-31 were collected into the review data set.

Each observation contains the collection date, the product ID, the retail price on http://360buy.com/, and the average product rating according to the posted consumer reviews. We also collected the full set of reviews for each product. Each product review has a numerical rating on a scale of one to five stars, as well as the purchase date of the good, the date the review was posted, and the entire text posted by the reviewer. For the i-th review record r_i, the extracted fields are shown in Table 2.


Extracted data | Description | Notation

MEMBERSHIP_LEVEL | Customer membership level | (ML)
PURCHASE_TIME | The time stamp of the purchase/transaction | (PT)
SCORE | Customer's rating | (SC)
REVIEW | Customer's review contents | (RE)
REVIEW_TIME | The time stamp of the review | (RT)

In this work, we define the time interval between a user's two actions of purchasing a product online and publishing a review online as his/her review promptness. Review promptness may reflect the initiative and effort made by reviewers to post reviews [25]. As such, we can calculate the "purchase-review" interval for the i-th review as

TI_i = RT_i − PT_i,

where PT_i and RT_i are the purchase and review time stamps of Table 2.
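As a concrete illustration, the following minimal Python sketch computes this interval from the PURCHASE_TIME and REVIEW_TIME fields; the date format string and the toy dates are assumptions, since the exact crawler output format is not specified here.

from datetime import datetime

def review_promptness(purchase_time, review_time, fmt="%Y-%m-%d"):
    """Purchase-review interval TI_i = RT_i - PT_i, in days."""
    pt = datetime.strptime(purchase_time, fmt)
    rt = datetime.strptime(review_time, fmt)
    return (rt - pt).days

print(review_promptness("2012-03-01", "2012-03-19"))  # -> 18; a value of 0 means a same-day review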

Taking the data set as a whole, the descriptive information is summarized in Table 3, in which TI_i = 0 means that the review date and purchase date were the same. As Table 3 shows, the distribution of the review data varies across products: the average time intervals range from 8.0 to 24.5 days. Moreover, the average length of the review text is extremely short (about 58 Chinese characters), which means that the distribution of words in the review data set is very sparse.


Online product | # of reviews | Minimum text length | Maximum text length | Average text length

ACER12481438272.7017818.0
ASUS2542133472.1016818.5
DELL4018638166.7010414.8
HP4141130359.1017324.5
iPad 218549736550.9018023.5
Macbook2892332957.40508.0
SamSung 1104081937864.7012813.7
Thinkpad18131136365.108613.0
SamSung 5304272130769.5016415.0
SONY3442242167.30598.7
Teclast P856740839365.7017516.4

3.2. Background Check: Purchase-Review Behavior Dynamics

To study the distribution of the "purchase-review" intervals, we further measure the frequency of each interval value τ, that is, count the number of reviews in the data set whose interval equals τ:

f(τ) = |{ i : TI_i = τ }|.

Therefore, all the "purchase-review" time intervals together with their frequencies generate a data series {(τ, f(τ))}.

This series can be used to analyze the customers' review dynamics, in which some group-based behavioral characteristics, such as promptness and attitude, are involved. Evidence in the literature has shown that the distribution of such interevent times reflects users' behavior dynamics [15, 16]. Moreover, if the interval distribution follows a typical non-Poisson process and is characterized by a power-law distribution, it means that the review behavior on a B2C website is affected by extrinsic motivations, intrinsic ones, or both [36].
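A minimal sketch of building this series from a list of intervals, using Python's Counter, is shown below; the interval values are toy data for illustration only.

from collections import Counter

intervals = [18, 0, 3, 18, 7, 3, 18]      # TI_i values (in days) for each review; toy data
freq = Counter(intervals)                  # f(tau): number of reviews with TI_i == tau
series = sorted(freq.items())              # the data series {(tau, f(tau))}
print(series)                              # [(0, 1), (3, 2), (7, 1), (18, 3)]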

To verify the assumption that the time interval between the two consecutive customer behaviors, purchase and review, follows a power-law distribution, the analysis mainly uses linear regression and the least-squares method to fit the power-law curve. Let τ denote the time interval between a consumer's purchase and review behavior and let f(τ) denote the frequency of each time interval; then the power-law distribution curve is f(τ) = c·τ^(−γ), where γ > 0. We can collect large amounts of review data online and run a simple linear regression based on

ln f(τ) = ln c − γ ln τ.

For the experimental data set, the fitted power-law distribution function provides a good fit: the goodness-of-fit and statistical tests are satisfied, meaning that the frequency of the "purchase-review" time intervals follows a power-law distribution (Figure 4(b)).
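The least-squares fit on the log-transformed data can be sketched as follows. This is an illustrative implementation with toy frequencies, not a reproduction of the paper's fitted exponent or goodness of fit.

import numpy as np

series = [(1, 520), (2, 310), (5, 150), (10, 80), (20, 45), (40, 24)]   # toy (tau, f(tau)) pairs

def fit_power_law(series):
    """Fit ln f(tau) = ln c - gamma * ln tau by least squares (tau > 0 only)."""
    taus, freqs = zip(*[(t, f) for t, f in series if t > 0])
    x, y = np.log(taus), np.log(freqs)
    slope, intercept = np.polyfit(x, y, 1)
    gamma, c = -slope, np.exp(intercept)
    y_hat = slope * x + intercept
    r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
    return gamma, c, r2

gamma, c, r2 = fit_power_law(series)
print(f"f(tau) ~ {c:.1f} * tau^(-{gamma:.2f}), R^2 = {r2:.3f}")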

3.3. Identifying Customer Opinions
3.3.1. Preliminaries of the LDA Model

The review contents are natural language without any information tags, which makes it nontrivial to mine information from them. In this study, we are interested in finding clusters of words (topics) in the text. To that end, we introduce the LDA method [37] to model the corpus, and each topic is treated as a cluster.

Figure 5 is a general representation of LDA. The boxes are “plates” that represent replicates. The outer plate represents documents, whereas the inner plate represents the repeated choice of topics and words within a document.

In the LDA method (as shown in Figure 5):
(i) a word w is the basic unit of a document;
(ii) a document is a sequence of N words denoted by d = (w_1, w_2, ..., w_N); a corpus is a collection of M documents denoted by D = {d_1, d_2, ..., d_M};
(iii) a topic is a probability distribution over the vocabulary of all the words in D;
(iv) α is the parameter of the Dirichlet prior on the per-document topic distributions, β is the parameter of the Dirichlet prior on the per-topic word distribution, and θ_d is the topic distribution for document d.
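With this notation, the joint distribution of a topic mixture θ, topic assignments z, and words w for a single document takes the standard form given by Blei et al. [37]:

p(θ, z, w | α, β) = p(θ | α) ∏_{n=1}^{N} p(z_n | θ) p(w_n | z_n, β).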

3.3.2. LDA-Based Opinion Mining

LDA is a popular topic modeling tool for learning a set of topics and their feature words from a corpus. Taking each review text RE_i as a short document, all the reviews together form a corpus D. Along this line, we can use LDA to capture the opinions in D. There are three steps in this work:
(i) Segment each review RE_i into words.

Word segmentation is the problem of dividing a string of written language into its component words. In general, noise phrases, stop words, and meaningless symbols are removed from the data set after word segmentation. In this work, we simply keep the useful word segments, most of which are nouns.
(ii) Conduct the LDA method on D.

Given a collection of unlabeled text documents, the LDA model seeks to discover hidden topics as distributions over the words in a fixed vocabulary. However, it is assumed that these topics are specified before any document has been generated. Thus, for any document in the corpus, the generative process contains two stages. First, a topic distribution vector modeled by a Dirichlet random variable is chosen randomly to determine the topics appearing in the document. Then, for each word that is to appear in the document, a single topic is randomly selected from the distribution vector [37].
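A minimal sketch of steps (i) and (ii) is given below. It assumes the jieba package for Chinese word segmentation and scikit-learn's LatentDirichletAllocation for topic fitting; both tool choices, the toy corpus, and the preprocessing are illustrative assumptions, not the exact pipeline used in the study.

import jieba                                                   # Chinese word segmentation (assumed tool)
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

reviews = ["价格比较高, 但是我喜欢它的蓝色外观",               # toy corpus D of review texts RE_i
           "送货速度快, 包装很好",
           "屏幕分辨率不错, 电池一般"]

# Step (i): segment each review into words and keep tokens longer than one character.
segmented = [" ".join(w for w in jieba.cut(r) if len(w) > 1) for r in reviews]

# Step (ii): build the document-term matrix and fit LDA with 20 topics, as in the study.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(segmented)
lda = LatentDirichletAllocation(n_components=20, random_state=0).fit(X)

# Top words per topic, analogous to the samples in Table 4.
vocab = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [vocab[i] for i in weights.argsort()[::-1][:10]]
    print(k + 1, top)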

Initially, we use LDA to mine 20 topics. Some samples are shown in Table 4.


ID | Top 10 sample words and their probability

1 | 苹果 (apple) 0.244; 东西 (goods) 0.182; 品牌 (brand) 0.05; 京东 (Jingdong) 0.046; 产品 (product) 0.045; 质量 (quality) 0.043; 不 (do not) 0.039; 品质 (quality) 0.024; 正品 (certified goods) 0.020; 优点 (merit) 0.019.

2 | 使用 (use) 0.108; 朋友 (friend) 0.087; 感觉 (feel) 0.057; 发现 (discovery) 0.049; 正在 (in process) 0.041; 没什么 (nothing) 0.036; 目前 (current) 0.031; 暂时 (temporary) 0.027; 应该 (should) 0.022; 优点 (merit) 0.019.

3 | 屏幕 (screen) 0.079; 反应 (reaction) 0.036; 电池 (battery) 0.031; 触屏 (touch screen) 0.026; 触摸 (touch) 0.023; 速度 (speed) 0.016; 分辨率 (resolution) 0.016; 游戏 (game) 0.015; 重力 (gravity) 0.015; 感应 (response) 0.014.

4 | 送货 (delivery) 0.144; 速度 (speed) 0.136; 东西 (goods) 0.078; 质量 (quality) 0.056; 包装 (package) 0.052; 快 (fast) 0.048; 发货 (consignment) 0.039; 服务 (service) 0.032; 物流 (logistics) 0.028; 快递 (express) 0.016.

5 | 外观 (appearance) 0.084; 屏幕 (screen) 0.057; 做工 (workmanship) 0.028; 时尚 (fashion) 0.022; 操作 (operation) 0.017; 效果 (display effect) 0.015; 分辨率 (resolution) 0.013; 外形 (shape) 0.013; 显示 (display) 0.012.

...

20 | 性价比 (cost performance) 0.095; 价格 (price) 0.077; 配置 (configuration) 0.062; 感觉 (feel) 0.026; 打折 (discount) 0.018; 做工 (workmanship) 0.018; 总体 (totality) 0.015; 价位 (price) 0.014; 促销 (promotion) 0.014; 赠品 (gift) 0.010.

However, not all the topics in Table 4 are suitable for representing what the users reviewed. As an unsupervised method, LDA only gives the probability of a set of words belonging to a topic; thus it has problems with feature selection. Moreover, from an end user's perspective, it is hard to understand why the model performs as it does [38]. As we can see in Table 4, the results directly generated by LDA carry no tag or information about what the topics are (they are only labeled with a topic ID). Further, some topics are interesting (e.g., Topics 3 and 4), whereas others are hard to understand (e.g., Topic 2). Previous studies in the text-analysis domain were of great help for selecting topics in this study. Following the guidance provided by [39, 40], we next inspect the results generated by LDA manually to identify the valuable topics.
(iii) Inspect and annotate topics.

Two of the authors manually inspected the resulting topics [41]. They assigned labels, merged similar topics, and discarded incoherent ones. Finally, for the gathered online reviews, we label four topics and list their sample features in Table 5.


Annotated topic | Featured words

Quality | quality, appearance, brand, and so forth
Function | function, experience, operation, and so forth
Cost | price, discount, gift, and so forth
Service | logistics, package, delivery, and so forth

To simplify the calculation, we regard each annotated topic T_k as the union of a set of featured words; the relationship between them is specified as

T_k = {w_k1, w_k2, ..., w_km},

where w_kj is the j-th featured word of topic T_k (see Table 5).

One might argue that k-means is also a suitable method for this clustering task. However, when both are applied to assign topics to a set of documents, the most evident difference is that k-means partitions the documents into disjoint clusters (i.e., topics), whereas LDA assigns each document to a mixture of topics, so a document can be characterized by one or more topics. Hence, LDA gives more realistic results than k-means for topic assignment.

3.3.3. Mapping a Review to Topics

One purpose of this work is to analyze the latent correlations between review promptness and the reviewed topics. As mentioned before, different people may use various words to express the same topic about their online shopping experience, leading to a sparse word distribution and increasing the difficulty of analyzing customers' common concerns.

To address this problem, we map the review text RE_i onto a set of proper topics to show which topics have been reviewed in RE_i. The mapping process is based on the opinion mining results in Table 5. For each topic T_k, k = 1, ..., 4, the mapping result is

map(T_k, RE_i) = 1 if RE_i contains a featured word of T_k, and 0 otherwise.

For example, the review RE_i = "The price is relatively high, but I like its blue painting" can be divided into words. Then RE_i is mapped onto the two topics "Cost" and "Quality," since "price" is a featured word of the topic "Cost" and "blue painting" relates to "Quality" (appearance). By mapping all the reviews onto appropriate topics, we finally use these reviewed topics as independent variables to analyze the relationship between reviewers' opinions (on topics) and review promptness.
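The mapping can be sketched as a simple featured-word lookup; the word lists below are an illustrative subset of the Table 5 vocabulary, not the full lists used in the study.

# Featured words per annotated topic (illustrative subset, cf. Table 5).
TOPICS = {
    "Quality":  {"质量", "外观", "品牌"},
    "Function": {"功能", "操作", "屏幕"},
    "Cost":     {"价格", "打折", "赠品"},
    "Service":  {"物流", "包装", "送货"},
}

def map_review_to_topics(words):
    """Return {topic: 0/1}; 1 if the segmented review mentions any featured word of the topic."""
    return {t: int(bool(TOPICS[t] & set(words))) for t in TOPICS}

words = ["价格", "比较", "高", "喜欢", "蓝色", "外观"]   # segmented review RE_i
print(map_review_to_topics(words))
# -> {'Quality': 1, 'Function': 0, 'Cost': 1, 'Service': 0}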

4. Theoretical Analysis Results

4.1. Model Specification

The dependent variable is the review time interval (TI), measured as the difference between the review time and the purchase time. In order to test our hypotheses, we take the logarithm of the dependent variable TI.

The independent variables review score and membership level take values between "1" (low) and "5" (high). The review content variables are binary, measured as "1" if the review discloses the corresponding information and "0" otherwise. All the variables used are summarized in Table 6.


Type | Variable | Notation | Explanation

Dependent | Time interval | TI | Time interval between purchase and review behavior.
Independent | Rating | Rating | The score rated by the reviewer.
Independent | Membership | Member | Membership of an online reviewer.
Independent | map(Quality) | Quality | 1 for product quality feature reviewed, otherwise 0.
Independent | map(Function) | Function | 1 for product function feature reviewed, otherwise 0.
Independent | map(Cost) | Cost | 1 for cost feature reviewed, otherwise 0.
Independent | map(Service) | Service | 1 for service feature reviewed, otherwise 0.

Finally, we use a linear specification for the review promptness estimation:

ln(TI_i) = β_0 + β_1 Rating_i + β_2 Rating_i² + β_3 Member_i + β_4 Member_i² + β_5 Quality_i + β_6 Function_i + β_7 Cost_i + β_8 Service_i + ε_i.
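A sketch of estimating this specification with ordinary least squares via statsmodels is shown below. The variable names follow Table 6; the data frame holds random toy values, and ln(TI + 1) is used instead of ln(TI) only to handle same-day reviews (TI = 0), which is an implementation assumption.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200                                             # toy sample; the real data set has 34,504 reviews
df = pd.DataFrame({
    "TI":       rng.integers(0, 60, n),             # purchase-review interval in days
    "Rating":   rng.integers(1, 6, n),
    "Member":   rng.integers(1, 6, n),
    "Quality":  rng.integers(0, 2, n),
    "Function": rng.integers(0, 2, n),
    "Cost":     rng.integers(0, 2, n),
    "Service":  rng.integers(0, 2, n),
})

# Linear specification with quadratic Rating and Member terms, mirroring Table 8.
model = smf.ols(
    "np.log(TI + 1) ~ Rating + I(Rating**2) + Member + I(Member**2)"
    " + Quality + Function + Cost + Service",
    data=df,
).fit()
print(model.summary())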

4.2. Descriptive Analysis of the Variables

Initially, a correlation analysis including all of the variables used in estimations was conducted. Correlation values are shown in Table 7.


TIRatingMemberQualityFunctionCostService

TI
Rating
Member
Quality
Function
Cost
Service

The largest correlation was observed between Function and Quality, and the correlations among the independent variables were generally very low. This indicates that there is no significant multicollinearity among the independent variables in the above model.
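The correlation check itself can be sketched in a few lines with pandas; the toy values below are illustrative only, so the printed numbers are not those of Table 7.

import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
cols = ["TI", "Rating", "Member", "Quality", "Function", "Cost", "Service"]
df = pd.DataFrame(rng.integers(0, 5, size=(200, len(cols))), columns=cols)

corr = df.corr()            # pairwise Pearson correlations, as reported in Table 7
print(corr.round(2))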

4.3. Empirical Results

The results of the regression analysis for the model are shown in Table 8. The residual standard error is 1.226 on 3648 degrees of freedom, and the multiple R-squared value is 0.08275. The F-statistic is 41.14 on 8 and 3648 degrees of freedom, with a p value < 2.2e-16.


Variable | Coefficient | Std. err. | p value | 95% conf. interval

Constant | −0.61110 | 0.77463 | 0.43022 |
Rating | 0.89000 | 0.36704 | 0.01536 |
Rating² | −0.10130 | 0.04369 | 0.02048 |
Member | 0.57535 | 0.08093 | |
Member² | −0.05447 | 0.01275 | |
Quality | −0.02886 | 0.05024 | 0.56567 |
Function | 0.05975 | 0.04428 | 0.17738 |
Cost | −0.12621 | 0.04435 | 0.00445 |
Service | −0.44380 | 0.05342 | < |

As this study proposed, "Rating²" is significant only in a marginal manner, while "Rating," "Member," "Member²," "Cost," and "Service" are statistically significant (see Table 8). Together, these variables explain about 8 percent of the variance in review promptness (R² ≈ 0.083).

Table 8 shows that Rating and Member have positive effects on the review time interval. This means that, with a longer purchase-review time interval, people tend to give a relatively higher score, and people with a higher level of membership tend to publish their reviews later. In contrast, the quadratic term Member² has a negative effect, indicating that people with a low level of membership tend to publish a quick review. Moreover, with a longer purchase-review time interval, the review contents are more about the Cost and Service topics; few of them are related to product quality or function.

4.4. Interpretation of Results

The final test results are included in Table 9.


Hypothesis | Description | Result

H1 | People will rate a relatively high score after a long purchase-review interval. | Supported
H2 | People with a high membership level will not publish a lazy review. | Rejected
H3 | With a longer time interval, review contents are more about the product. | Rejected
H4 | With a longer time interval, review contents are more about Cost and Service. | Supported

H1 is supported: people rate a relatively high score after a long purchase-review interval. If the product is fine, then there is nothing special to review, leading to more random comments. Moreover, the U-shape of the rating scores shows emotion polarity among customers.

The finding for H2 means that people with a high membership level tend to publish a late review. Customers with a low membership level have substantial extrinsic gains (membership promotion, credit exchange) from publishing reviews quickly online, whereas high-level customers gain little (either intrinsically or extrinsically) from a quick response. So customer loyalty rewards, such as membership levels, are effective in encouraging consumers to review their online shopping experiences. However, some people do not want to take the time to write high-quality reviews for information sharing, although they are willing to publish quick reviews for the rewards.

It is interesting that customers with a longer purchase-review interval tend to present more text. After a long purchase-review interval, people say few words about the product itself, since sufficient product information has already been released by others; so H3 is rejected. The finding that H4 is supported means that people have fewer comments about the product but more to say about the cost and service, sharing their service experience after they have used the product.

5. Conclusions

In this paper, we present a methodological framework to study the review promptness and some motivations of online reviewers. The analytical and experimental results with real data from the B2C website http://360buy.com/ demonstrate two main findings:
(i) The frequency of the time intervals between consumers' purchasing a good online and publishing a review follows a power-law distribution, providing new evidence for the study of human behavior online.
(ii) The observations of actual review behavior, such as review quality, promptness, and attitude, are mostly consistent with reviewers' motivations: if a consumer's "purchase-review" time interval is relatively short, the customer's evaluation contents are service related; on the contrary, a relatively long time interval means that the experience with the product/service is more complete and careful, and thus a customer may provide reviews about the function of the product.

These implications can help B2C sellers to manage consumers’ relationships and adjust online marketing strategies accordingly.

We should note some limitations of this work. First, Gilbert and Karahalios showed that a power-law curve governs Amazon's review community [13], but this study does not conduct a comparative analysis with an international B2C website such as https://www.amazon.com/. Second, only one type of online goods was selected for the data set; it is important to gather more online review data (particularly, more product categories) for future study.

Competing Interests

The authors declare that they have no competing interests.

References

1. D.-H. Park and J. Lee, "eWOM overload and its effect on consumer behavioral intention depending on consumer involvement," Electronic Commerce Research and Applications, vol. 7, no. 4, pp. 386–398, 2008.
2. F. Zhu and X. Zhang, "Impact of online consumer reviews on sales: the moderating role of product and consumer characteristics," Journal of Marketing, vol. 74, no. 2, pp. 133–148, 2010.
3. Z. Wang, "Anonymity, social image, and the competition for volunteers: a case study of the online market for reviews," B.E. Journal of Economic Analysis and Policy, vol. 10, no. 1, article 44, 2010.
4. J. Wang, A. Ghose, and P. G. Ipeirotis, "Bonus, disclosure, and choice: what motivates the creation of high-quality paid reviews?" in Proceedings of the 33rd International Conference on Information Systems (ICIS '12), pp. 1–15, Orlando, Fla, USA, 2012.
5. B. Pang and L. Lee, "Opinion mining and sentiment analysis," Foundations and Trends in Information Retrieval, vol. 2, no. 1-2, pp. 1–135, 2008.
6. R. E. Kraut and P. Resnick, Building Successful Online Communities: Evidence-Based Social Design, MIT Press, Boston, Mass, USA, 2012.
7. Y. Liu, X. Huang, A. An, and X. Yu, "Modeling and predicting the helpfulness of online reviews," in Proceedings of the 8th IEEE International Conference on Data Mining (ICDM '08), pp. 443–452, IEEE, Pisa, Italy, December 2008.
8. K. Popat, A. R. Balamurali, P. Bhattacharyya, and G. Haffari, "The haves and the have-nots: leveraging unlabelled corpora for sentiment analysis," in Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (ACL '13), pp. 412–422, August 2013.
9. C. Dellarocas, G. Gao, and R. Narayan, "Are consumers more likely to contribute online reviews for hit or niche products?" Journal of Management Information Systems, vol. 27, no. 2, pp. 127–157, 2010.
10. D. R. White, "Human behavior, dynamics of," in Encyclopedia of Complexity and Systems Science, R. A. Meyers, Ed., pp. 4608–4631, Springer, Berlin, Germany, 2009.
11. B. Gonçalves and J. J. Ramasco, "Human dynamics revealed through web analytics," Physical Review E, vol. 78, no. 2, Article ID 026123, 7 pages, 2008.
12. Z. Wang, "Anonymity, social image, and the competition for volunteers: a case study of the online market for reviews," The B.E. Journal of Economic Analysis & Policy, vol. 10, no. 1, article 44, 34 pages, 2010.
13. E. Gilbert and K. Karahalios, "Understanding deja reviewers," in Proceedings of the ACM Conference on Computer Supported Cooperative Work (CSCW '10), pp. 225–228, February 2010.
14. N. Hu, L. Liu, and J. J. Zhang, "Do online reviews affect product sales? The role of reviewer characteristics and temporal effects," Information Technology and Management, vol. 9, no. 3, pp. 201–214, 2008.
15. G. Chen, X. Han, and B. Wang, "Multi-level scaling properties of instant-message communications," Physics Procedia, vol. 3, no. 5, pp. 1897–1905, 2010.
16. D. Sousa, L. Sarmento, and E. M. Rodrigues, "Characterization of the twitter @replies network: are user ties social or topical?" in Proceedings of the 2nd International Workshop on Search and Mining User Generated Contents (SMUC '10), pp. 63–70, ACM Press, Toronto, Canada, 2010.
17. S. L. Johnson, S. Faraj, and S. Kudaravalli, "Emergence of power laws in online communities: the role of social mechanisms and preferential attachment," MIS Quarterly, vol. 38, no. 3, pp. 795–808, 2014.
18. F. Wu and B. A. Huberman, "Opinion formation under costly expression," ACM Transactions on Intelligent Systems and Technology, vol. 1, no. 1, Article ID 1858953, pp. 5:1–5:13, 2010.
19. Y. Wang and D. R. Fesenmaier, "Assessing motivation of contribution in online communities: an empirical investigation of an online travel community," Electronic Markets, vol. 13, no. 1, pp. 33–45, 2003.
20. B. Brown, "Beyond recommendations: local review web sites and their impact," ACM Transactions on Computer-Human Interaction, vol. 19, no. 4, article 27, 2012.
21. S. Picazo-Vela, S. Y. Chou, A. J. Melcher, and J. M. Pearson, "Why provide an online review? An extended theory of planned behavior and the role of Big-Five personality traits," Computers in Human Behavior, vol. 26, no. 4, pp. 685–696, 2010.
22. K. S. Cook and J. M. Whitmeyer, "Two approaches to social structure: exchange theory and network analysis," Annual Review of Sociology, vol. 18, no. 1, pp. 109–127, 1992.
23. L. D. Molm, Coercive Power in Social Exchange, Cambridge University Press, Cambridge, UK, 1997.
24. M. M. Wasko and S. Faraj, "Why should I share? Examining social capital and knowledge contribution in electronic networks of practice," MIS Quarterly, vol. 29, no. 1, pp. 35–57, 2005.
25. D. Fu and K. Wang, "Does customer membership level affect online reviews? A study of online reviews from 360Buy.com in China," in Proceedings of the 17th Pacific Asia Conference on Information Systems (PACIS '13), pp. 85:1–85:12, June 2013.
26. A. Ghose and P. G. Ipeirotis, "Estimating the helpfulness and economic impact of product reviews: mining text and reviewer characteristics," IEEE Transactions on Knowledge and Data Engineering, vol. 23, no. 10, pp. 1498–1512, 2011.
27. M. Hu and B. Liu, "Mining and summarizing customer reviews," in Proceedings of the 10th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '04), pp. 168–177, Seattle, Wash, USA, 2004.
28. K. Dave, S. Lawrence, and D. M. Pennock, "Mining the peanut gallery: opinion extraction and semantic classification of product reviews," in Proceedings of the 12th International Conference on World Wide Web (WWW '03), pp. 519–528, ACM, May 2003.
29. I. Titov and R. T. McDonald, "Modeling online reviews with multi-grain topic models," in Proceedings of the 17th International Conference on World Wide Web (WWW '08), pp. 111–120, ACM, Beijing, China, April 2008.
30. B. Liu, Sentiment Analysis and Opinion Mining, Synthesis Lectures on Human Language Technologies, Morgan & Claypool Publishers, 2012.
31. K. Zhang, R. Narayanan, and A. Choudhary, "Voice of the customers: mining online customer reviews for product feature based ranking," in Proceedings of the 3rd Conference on Online Social Networks (WOSN '10), p. 11, USENIX Association, Boston, Mass, USA, June 2010.
32. Y. Jo and A. Oh, "Aspect and sentiment unification model for online review analysis," in Proceedings of the 4th ACM International Conference on Web Search and Data Mining (WSDM '11), pp. 815–824, February 2011.
33. L. Zhu, G. Yin, and W. He, "Is this opinion leader's review useful? Peripheral cues for online review helpfulness," Journal of Electronic Commerce Research, vol. 15, no. 4, pp. 267–280, 2014.
34. Y. Cho, I. Im, R. Hiltz, and J. Fjermestad, "An analysis of online customer complaints: implications for web complaint management," in Proceedings of the 35th Annual Hawaii International Conference on System Sciences (HICSS '02), pp. 2308–2317, IEEE, Big Island, Hawaii, USA, January 2002.
35. V. Browning, K. K. F. So, and B. Sparks, "The influence of online reviews on consumers' attributions of service quality and control for service standards in hotels," Journal of Travel and Tourism Marketing, vol. 30, no. 1-2, pp. 23–40, 2013.
36. Q. Yan, L. Yi, and L. Wu, "Human dynamic model co-driven by interest and social identity in the Microblog community," Physica A: Statistical Mechanics and Its Applications, vol. 391, no. 4, pp. 1540–1545, 2012.
37. D. M. Blei, A. Y. Ng, and M. I. Jordan, "Latent Dirichlet allocation," Journal of Machine Learning Research, vol. 3, pp. 993–1022, 2003.
38. J. Murdock and C. Allen, "Visualization techniques for topic model checking," in Proceedings of the 29th AAAI Conference on Artificial Intelligence (AAAI '15), pp. 4284–4285, Austin, Tex, USA, January 2015.
39. X. Wan, L. Zong, X. Huang et al., "Named entity recognition in Chinese news comments on the web," in Proceedings of the 5th International Joint Conference on Natural Language Processing, pp. 856–864, November 2011.
40. B. Peng, J. Wu, H. Yuan, Q. Guo, and D. Tao, "ANEEC: a quasi-automatic system for massive named entity extraction and categorization," The Computer Journal, vol. 56, no. 11, pp. 1328–1346, 2013.
41. V. Uren, P. Cimiano, J. Iria et al., "Semantic annotation for knowledge management: requirements and a survey of the state of the art," Web Semantics: Science, Services and Agents on the World Wide Web, vol. 4, no. 1, pp. 14–28, 2006.

Copyright © 2016 Junqiang Zhang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

