Abstract

Online surveys have become a popular way to collect data. However, response rates are low, especially for online intercept-based surveys, where they can be as low as 1%. This raises questions about the accuracy of the inferences based on these results. Furthermore, it is difficult to compare the characteristics and behavior of responders and nonresponders because very limited information on nonresponders is available. The objective of this article is to present a unique comparison of online intercept survey responders, nonresponders, and partial responders. The sample includes 192,566 U.S.-based users who went through a research experiment during the installation process of ESET online security software. During the process, users were asked to enable or disable the detection of potentially unwanted applications. At the end, they could also opt to answer a short security-related survey. The users were split into three groups: (a) nonresponders (), (b) complete responders (), and (c) partial responders (). There were only slight differences between the responder and nonresponder groups in their hardware (i.e., computer CPU quality and RAM size). Responders and nonresponders differed in their behavior: complete responders enabled the detection of potentially unwanted applications significantly more often than nonresponders (on average by 4.5%) and spent more time on the screen that provided details about this feature. Additional comparisons showed that complete responders were slightly younger and more educated than partial responders. We conclude that there are only slight differences between online intercept survey responders and nonresponders and that these differences manifest in computer usage-related decisions. Despite the low overall response rates, online product-related surveys can provide useful insights about the user base. Nevertheless, companies that use online surveys should be careful when generalizing the findings because user behavior might differ in specific situations.

1. Introduction

Online survey research has been on the rise in recent decades along with the growing numbers of information and communication technology (ICT) users around the world [1, 2]. The COVID-19 pandemic further accelerated technology adoption and digitalization [3-5], including the use of online surveys and testing due to limited offline interpersonal interactions [4, 6]. Obtaining information about a customer or a user through an online survey provides a simple, fast, and efficient way to collect data about the user experience of digital technologies and online services. However, response rates tend to be low and vary considerably [2, 7], which raises questions about the generalizability of the results. Furthermore, it is difficult to compare the characteristics of online survey responders and nonresponders, especially because little is usually known about the nonresponders [8, 9]. Research on the differences between online survey responders and nonresponders is scarce.

Our study presents unique data where the study design enabled the evaluation of the differences in behavior and characteristics between users who, upon request, opt to complete an ICT security-related online intercept survey, users who opt out, and users who provide only partial or careless answers. These results will help us to determine the generalizability and validity of online intercept survey findings.

Despite the clear advantages of online survey methodology, there are several major issues of which researchers and online survey contracting authorities should be aware. First, coverage and sampling errors need to be considered when conducting research online [10]. The coverage error stems from the mismatch between the target population to which one wants to generalize the findings and the frame population, which is defined as the list of members of the population from which the sample can be drawn [11]. In the case of online surveys, this usually means that some members of the target population (e.g., the adult population in a certain country) lack access to the internet or the necessary hardware and are thus not included in the frame population. The sampling error occurs when not all of the members of the frame population are surveyed [10, 12]. These errors pose a threat especially in online surveys that strive to make inferences about general or broader populations that reach beyond internet users. They are of lesser threat to online surveys where the target population is narrowly defined as currently active internet users or even visitors of a specific website [10]. When surveying users of a certain website or service, so-called intercept surveys, which invite every nth user to participate, are often used. Users are usually shown a pop-up window or a specific page that invites them to complete a survey. Through intercept-based sampling, the researcher can draw participants systematically and arrive at a probabilistic sample, which diminishes the coverage and sampling errors.

Nevertheless, online surveys, including intercept surveys, are especially prone to issues that stem from high nonresponse rates, also referred to as attrition or break-off [8, 12, 13]. In this article, we use the term “nonresponders” for users who are invited to participate in a survey but do not respond to any items (also referred to as “unit nonresponse”). We use the term “partial response” when users do not answer all of the questions or respond carelessly and provide low-quality answers. A low response rate might lead to biased results and poses a serious issue for the utility of the findings [10]. The possible nonresponse errors depend on the nonresponse rate as well as on the differences between responders and nonresponders on the variables of interest [14]. This is especially problematic when little or nothing is known about the nonresponders, because it is then impossible to determine how much the subgroup of users who opted to complete the survey differs from the whole target population on the studied variables (i.e., opinions, preferences, and specific behaviors). Thus, the intended evidence-based nature of subsequent managerial or policy-related decisions might be at risk.

Survey response rates vary widely both within and between survey modes [7] and depend on a number of factors at all stages of the survey process, including development, delivery, completion, and return [15]. Previous studies found that online surveys generally yielded lower response rates than conventional offline survey modes (e.g., mail and telephone) [7, 16-18]. While some studies reported response rates for online surveys that were higher than or equal to those of mail surveys [19, 20], online surveys generally achieve only low to modest response rates [10, 21, 22]. This is also true for online intercept surveys, which yield low response rates whether they use pop-up or banner-advertised invitations [23-26]. As intercept surveys are often conducted by commercial companies, there is little systematic evidence for average response rates. While Comley [23] reported response rates between 15 and 30% for the commercial company Virtual Surveys Limited, Survicate, a survey management software vendor, reports that average website intercept survey response rates are as low as 0.1-0.2% [26]. Low response rates were further supported by Tuten et al. [25], who reported click-through rates below 1% for survey advertisements on American and German search engine websites, and by Dodge and Cucchi [27], who found a response rate of 0.26% for an online survey posted on a poison center homepage. Moreover, with the ever-changing means of online communication and its usage, what was true for online survey modes and response rates a couple of years ago might not be the case today. For instance, online intercept-based surveys that use pop-up windows or website banners can be blocked by internet browser plug-ins, which further limits the reach of online survey strategies.

Naturally, researchers have focused on factors that affect response rates and the ways to increase them for online surveys. Fan and Yan [15] provide a thorough review of the factors that affect response rates in online surveys throughout the survey process. These include factors related to survey development (e.g., survey length and salience of the topic), survey delivery (e.g., contact delivery methods, incentives, notifications, and reminders), survey completion (e.g., society-related factors such as public attitudes towards surveys and respondent-related factors such as age or certain personality traits), and survey return (e.g., level of data security). However, even some of the most actionable recommendations based on previous research, including the use of incentives, invitations, reminders, and personalization, might not be applicable to online intercept survey methodology. Moreover, as Couper [8] notes, focusing only on increasing response rates might not be sufficient to diminish nonresponse bias because it also depends on the extent to which those who respond differ from those who do not on the variables of interest. In the case of the online intercept surveys of customers and users, we are oftentimes faced with the combination of low response rates and uncertainty about the differences between responders and nonresponders.

Despite the obvious difficulties in characterizing individuals who do not respond to surveys, previous research has indicated that there might be differences between those who respond to surveys and those who do not. Survey responders have been found to be of higher socioeconomic status, better educated, and more interested in the survey topic [15, 28]. Fan and Yan [15] report in their systematic review of online surveys that several personality traits are linked to participation. For instance, responders tend to be more conscientious (i.e., careful and diligent) and agreeable, and they have a higher need for cognition (i.e., a tendency to engage in and enjoy activities that require thinking) [15]. A recent study by Abbott [29] compared a wider set of those who self-reported participation in online panel surveys (during the British Population Survey face-to-face interviews) with those who did not. There were only slight differences between the groups with regard to demographics, with middle-aged participants being overrepresented in the online survey subgroup when compared to the younger segment, which appeared harder to recruit. The study found some additional differences, for instance, in the range of online activities (e.g., online survey respondents reported more activities), and concluded that online survey participants appeared to have the characteristics of early adopters and to be more engaged in new experiences [29]. However, the obvious caveat to this study is the fact that online survey participation was only self-reported, and it focused on participation in online panels and not intercept-based surveys. The respondents in this study also consisted of those who were willing to engage in an offline interview, so the differences with complete nonresponders could not be evaluated.

Several studies focused on evaluating the characteristics of breakoffs, i.e., responders who left the survey and did not come back to finish it. Peytchev [30] made use of the basic demographic data that was provided upon joining the survey panel. He found that higher education and older age were related to a lower likelihood of breakoff. Additionally, his results suggest that those who left the survey were not inattentive; in fact, breakoffs spent more time on average answering the questions. Steinbrecher et al. [9] analyzed data from follow-up surveys to an initial survey in which respondents broke off. This enabled a comparison of the characteristics of the initial survey’s complete responders and breakoffs. They found that breakoffs were younger, less educated, and less politically interested than complete responders. Additionally, they could compare breakoffs who participated in the follow-up survey with those who did not with regard to basic demographic variables (given the quota design of the initial tracking survey) and political interest (the first question). There were no differences between the breakoff groups regarding age, education, and interest in politics. While the responder group was split evenly between men and women, there were more women among both breakoff groups.

Moreover, there could be additional differences between (partial) responders and nonresponders regarding their computers’ parameters, which could indicate that they constitute different subgroups of users, i.e., technologically savvy versus regular users. Little research has explored this to date. A recently published study by Smahel et al. [31] (based on a sample partially overlapping with the current study) indicated that there might be slight differences in hardware and software features between online intercept survey responders and nonresponders. However, that study predominantly focused on the responders’ characteristics and did not provide detailed information on the comparisons with nonresponders or partial responders.

Overall, the existing evidence is very limited and even more so with respect to online intercept-based surveys. Our study overcomes the limitations of the above-mentioned approaches by making use of information that was passively collected during a software product installation process, followed by a request to complete a short, security-related survey that was presented to all users. This design allows for a unique comparison between actual online intercept-survey responders and nonresponders.

More specifically, the objective of this study is to compare the behavior and characteristics of three types of users: users who opt in and fully complete an online survey that was presented at the end of an online security software installation process; users who opt out; and users who provide only partial or low-quality answers. Basic hardware characteristics and behavior during the preceding installation process will be compared. Additionally, the differences in demographics and the self-reported skills and attitudes between complete and partial responders will be evaluated. These results will help to understand the generalizability and accuracy of inferences based on results from online surveys.

2. Materials and Methods

2.1. Procedure and Sample

The study was conducted in cooperation with ESET, an online security software company with more than 100 million users in more than 200 countries and territories (https://www.eset.com/int/about/). The data are part of a larger project to develop more attractive installation screens, with a particular focus on the presentation of the choice to detect potentially unwanted applications (PUA). The data were collected and analyzed by the company for technical purposes. The researchers gained access and approval to analyze the data for the presented analyses in cooperation with the company. For this study, we used a subsample of U.S.-based users who installed the English version of ESET’s online security software solutions (specifically ESET Internet Security, ESET NOD32 Antivirus, ESET Smart Security, and ESET Smart Security Premium, all for Windows OS) between October 2016 and February 2017. Since previous analyses of the dataset have shown that there are country-specific differences related to the installation of the software [31], we used a subsample of users from one country to limit the impact of user nationality in the experimental study.

The ESET installation process had seven main screens that guided the user through the process. Only the screen that asked about PUA detection was altered, and each user was randomly presented with one of 15 experimental PUA screens (which differed in the text, layout, and visual elements). The details on the PUA screen variants have been published elsewhere [32]. ESET recorded the user decisions for PUA detection and the time spent on each screen. During the data collection period, ESET added one last screen to their regular installation to present the option to complete a short survey. The users who agreed were then redirected to ESET’s website, which hosted the survey.

After removing duplicate cases from the dataset (identified by combining hardware features, IP addresses, and hashed MAC addresses), the sample included 192,566 U.S.-based records. The sample was spread equally across the 15 experimental PUA screens. Since demographic characteristics (e.g., age, gender, and education) were reported in the intercept survey, they are available only for a subsample of complete or partial survey responders and are discussed in the results, Section 4.4.

2.2. Measures
2.2.1. System Variables

During each installation, ESET recorded some information about the user’s device for performance optimization purposes. In our study, we specifically use (1) CPU performance, which was sorted according to the PassMark CPU Mark criterion (https://www.cpubenchmark.net/) into low-end, mid-low, mid-high, and high-end categories; and (2) RAM size, which was sorted into four categories: 0-2 GB (0-2,048 MB), 2-4 GB (2,049-4,096 MB), 4-8 GB (4,097-8,192 MB), and 8+ GB (8,193+ MB).
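For illustration, the following minimal Python sketch shows how such a categorization can be implemented. The RAM boundaries follow the text above; the PassMark score cutoffs are hypothetical placeholders, since the exact thresholds used in the study are not reported here.

```python
# Minimal sketch of the categorization described above.
# RAM boundaries follow the text; CPU cutoffs are illustrative placeholders only.

def categorize_ram(ram_mb: int) -> str:
    """Bin RAM size (in MB) into the four categories used in the study."""
    if ram_mb <= 2048:
        return "0-2 GB"
    elif ram_mb <= 4096:
        return "2-4 GB"
    elif ram_mb <= 8192:
        return "4-8 GB"
    return "8+ GB"

def categorize_cpu(passmark_score: int) -> str:
    """Bin a PassMark CPU Mark score; cutoff values are hypothetical."""
    cutoffs = [(2000, "low-end"), (5000, "mid-low"), (9000, "mid-high")]
    for limit, label in cutoffs:
        if passmark_score <= limit:
            return label
    return "high-end"

print(categorize_ram(6144))   # -> "4-8 GB"
print(categorize_cpu(12000))  # -> "high-end"
```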

2.2.2. Time Spent on Screens

ESET recorded the time spent on each installation screen in milliseconds. For our study, we use the time spent on the end-user license agreement (EULA) screen and on the PUA screen. Because the times were highly skewed due to outliers (i.e., EULA screen time: ; PUA screen time: ), for the specific analyses, we omitted the users who were above the 95th percentile for the EULA screen (excluded ; resulting in ) and above the 99th percentile for the PUA screen (excluded ; resulting in ).
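As an illustration of this trimming step, the sketch below drops records above a given percentile, assuming the screen times sit in a pandas DataFrame with hypothetical column names ("eula_ms", "pua_ms"); it is not the study's actual processing code.

```python
import pandas as pd

# Hypothetical screen times in milliseconds.
times = pd.DataFrame({
    "eula_ms": [35000, 41000, 39000, 600000, 38000],
    "pua_ms":  [12000, 15000, 9000, 14000, 900000],
})

def trim_above_percentile(df: pd.DataFrame, column: str, pct: float) -> pd.DataFrame:
    """Drop rows whose value in `column` exceeds the given percentile."""
    cutoff = df[column].quantile(pct)
    return df[df[column] <= cutoff]

# EULA screen: drop the top 5%; PUA screen: drop the top 1%,
# each trimmed separately for the respective analysis.
eula_trimmed = trim_above_percentile(times, "eula_ms", 0.95)
pua_trimmed = trim_above_percentile(times, "pua_ms", 0.99)
```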

2.2.3. Detection of Potentially Unwanted Applications

User behavior during the installation process was observed with a focus on whether users enabled the PUA detection feature. Users either enabled or disabled the additional detection by clicking one of the options on the PUA screen.

2.2.4. Intercept Survey

At the end of the installation process, users had the option to complete a short security-related survey. We did not provide any monetary or material incentives; instead, motivation to complete the survey was encouraged by stressing that the answers would help improve the usability of the product and that the survey had been developed in cooperation with the university. Survey responders provided information regarding their age, gender, and education. They were further asked to self-evaluate their computer skills (i.e., do you consider yourself to be a skilled computer user?), the perceived privacy of their computer data (i.e., do you consider the data in this computer private?), their sensitivity about privacy (i.e., in general, are you sensitive about your privacy?), and the perceived security of computers against online attacks (i.e., in general, do you consider computers to be safe devices against online attacks, e.g., viruses, hacking, phishing, etc.?) on a scale from 1 “not at all” to 6 “extremely.” There were a few more questions at the end of the survey (e.g., personal characteristics) that were not relevant to the current study and were not used. In total, the survey included 14 items, and it took approximately 5-10 minutes to fully complete.

3. Analytical Strategy

We used the chi-square (χ²) test and logistic regression for categorical data and t-tests or the analysis of variance (ANOVA) for interval data to assess the differences between the respondent segments. Analyses of large samples typically show statistically significant results even for very small effects. When considering such results, it is important to interpret effect size rather than significance alone. We thus calculated phi (φ) for the categorical data and Cohen’s d and eta squared (η²) for the interval data. For φ, a value of 0.1 is considered small, 0.3 medium, and 0.5 large. For Cohen’s d, the respective values are 0.2, 0.5, and 0.8; for η², the values are 0.01, 0.06, and 0.14. All analyses were performed with IBM SPSS Statistics for Windows, version 25 [33].
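The study itself used SPSS; for readers who want to reproduce the effect-size logic, a minimal Python sketch with illustrative (made-up) numbers might look as follows.

```python
# Minimal sketch of the effect-size calculations described above,
# using illustrative (made-up) numbers rather than the study's data.
import numpy as np
from scipy import stats

# Chi-square test and phi for a 2x2 table (e.g., responder group x PUA enabled).
table = np.array([[9000, 1000],
                  [ 420,   80]])
chi2, p, dof, expected = stats.chi2_contingency(table, correction=False)
phi = np.sqrt(chi2 / table.sum())  # phi = sqrt(chi2 / n)

# Cohen's d for two independent groups (e.g., seconds spent on a screen).
a = np.array([12.1, 9.8, 14.3, 11.0, 10.5])
b = np.array([15.2, 13.9, 16.8, 14.4, 15.0])
pooled_sd = np.sqrt(((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1))
                    / (len(a) + len(b) - 2))
cohens_d = (b.mean() - a.mean()) / pooled_sd

print(f"chi2={chi2:.2f}, p={p:.3f}, phi={phi:.3f}, d={cohens_d:.2f}")
```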

4. Results

We divided respondents into three groups based on their response to the opt-in survey:
(a) Nonresponders (, 95.56% of the sample): respondents who did not fill in any items in the intercept survey.
(b) Complete responders (, 3.51% of the sample): respondents who filled in all of the items in the survey.
(c) Partial responders (, 0.94% of the sample): respondents who filled in at least one but not all items in the survey, or who responded carelessly. Careless responding was detected based on providing (i) nonsensical values for age (e.g., an extremely large number) or (ii) conflicting values (e.g., selecting both of the options “I am a sole user of the computer” and “I am one of multiple users” at the same time); see the illustrative sketch below.
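The following minimal Python sketch illustrates how such careless-response rules could be applied; the field names and the age cutoff are hypothetical, as the exact implementation is not described in the article.

```python
# Minimal sketch of the careless-response rules described above.
# Field names and the plausible-age range are hypothetical.

def is_careless(response: dict) -> bool:
    """Flag a survey response as careless based on the two rules above."""
    # (i) Nonsensical age values (e.g., an extremely large number).
    age = response.get("age")
    if age is not None and not (0 < age < 120):
        return True
    # (ii) Conflicting answers: both "sole user" and "one of multiple users".
    if response.get("sole_user") and response.get("multiple_users"):
        return True
    return False

print(is_careless({"age": 9999, "sole_user": True, "multiple_users": False}))  # True
print(is_careless({"age": 34, "sole_user": True, "multiple_users": True}))     # True
print(is_careless({"age": 34, "sole_user": False, "multiple_users": True}))    # False
```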

Since nonresponders constituted 95.56% of the sample, we do not display the results for the entire sample in the following analyses and tables because they are virtually identical to the nonresponders’ results.

4.1. System Variables
4.1.1. CPU Performance

There were slight but significant differences among nonresponders, complete responders, and partial responders regarding their devices’ CPU performance (). Most users’ devices fit in the high-end and mid-high CPU categories. Fewer complete responders’ and partial responders’ devices were labeled as having high-end CPU performance compared with nonresponders’ devices (i.e., 37.14% of nonresponders, 33.64% of complete responders, and 30.69% of partial responders). This difference was mostly balanced out in the second-highest category (i.e., mid-high), with more complete responders’ and partial responders’ devices in this category (42.52% of nonresponders, 45.13% of complete responders, and 46.03% of partial responders). Details are shown in Table 1.

4.1.2. RAM Size

Nonresponders, complete responders, and partial responders also differed slightly in their devices’ RAM sizes (). Nonresponders tended to have larger RAM sizes than complete responders, who in turn had larger RAM sizes than partial responders. Details are in Table 1. A RAM size between 2 and 4 GB was detected for 30.55% of nonresponders, 31.44% of complete responders, and 34.94% of partial responders. While 22.10% of nonresponders had a RAM size larger than 8 GB, this was the case for 20.10% of complete responders and only 16.83% of partial responders.

4.2. Time Spent on Screens
4.2.1. EULA Screen

The average time spent on the EULA screen was 39.71 seconds (). The differences in time on the EULA screen for nonresponders (), complete responders (), and partial responders () were not significant ().

4.2.2. PUA Detection Screen

The differences in the time spent on the PUA detection screen were evaluated separately for the 15 individual variants because each screen included text of different length and structure. There were significant differences in the time spent on the PUA screen between nonresponders, complete responders, and partial responders on all 15 variants, but the effect sizes were small (see Table 2). Post hoc analyses showed that, for all of the variants, nonresponders spent significantly less time on the screen than both groups of responders (i.e., complete responders and partial responders, ). However, the differences between complete and partial responders were not significant on any of the screen variants except Variant E3. Average times spent on the PUA screen are shown in Table 2. For the individual PUA screen variants, the differences between (b) complete responders and (a) nonresponders ranged from 4.14 s to 11.37 s; the differences between (c) partial responders and (a) nonresponders ranged from 5.32 s to 12.85 s; and the differences between (b) complete responders and (c) partial responders ranged from -6.65 s to 4.92 s.

4.3. User Behavior during Installation Process
4.3.1. Detection of PUA

Differences in the willingness to enable the detection of potentially unwanted applications were evaluated separately for the 15 individual variants of the PUA detection screens. On 13 of the 15 variants, there were significant differences between the percentages of nonresponders and complete responders who enabled PUA detection. Generally, complete responders enabled PUA detection the most and nonresponders the least, with a few exceptions, as shown in Table 3.


4.4. Survey Responses of Partial and Complete Responders

As most attrition occurred toward the end of the survey, complete responders were compared to partial responders on the variables presented at the beginning of the survey. There was a significant, albeit small, difference between complete responders and partial responders regarding age (). Complete responders () were slightly younger than partial responders (). Differences in the proportion of males and females between complete and partial responders were not significant (). There were 65.30% males among the complete responders () and 65.65% males among the partial responders (). Complete and partial responders differed significantly in their highest achieved level of education, but the effect size was small (). Complete responders () were slightly better educated, with 81.19% reporting university education, 17.90% secondary education, and 0.92% primary education, compared to 78.41%, 19.05%, and 2.53%, respectively, for partial responders ().

Complete and partial responders were also compared with regard to their self-evaluated computer skills, the perceived privacy of their computer data, their sensitivity about privacy, and the perceived security of computers against online attacks, while accounting for the possible effects of the demographic variables. The two groups did not significantly differ in any of these self-evaluated variables () when accounting for demographic variables (i.e., age, gender, and education). Having higher education increased the likelihood of completing the questionnaire fully (), even when accounting for age, which remained the only other significant predictor ().

5. Discussion

Our study presented a unique comparison of the behavior and characteristics of three groups of users: users who opt in and fully complete an ICT security-related intercept survey that was presented at the end of an online security software installation process; users who opt out; and users who provide only partial or low-quality answers.

The total response rate to our online intercept survey was 4.45%. This included 3.51% complete responders, who filled out all of the items in the survey, and 0.94% partial responders, who filled in at least one but not all items in the survey or responded carelessly (i.e., provided nonsensical values for age or conflicting answers). This is lower than the 15 to 30% intercept-based response rates reported by a commercial company [23]. However, it is higher than the response rates to website intercept surveys reported by Tuten et al. [25] or Dodge and Cucchi [27], which were both below 1%. Our study design did not include any monetary or material incentives. The use of incentives could further increase the response rate, as was found in previous studies [15]. The setting for our study differed from the previous studies because the option to complete the survey was presented to every user at the end of an online security software installation process. Thus, it provided a unique estimate for the response rate for this design. To our knowledge, no previous studies have reported response rates to this type of intercept-based survey.

Nevertheless, as Groves [14] notes, nonresponse bias depends not only on the response rate itself but also on the differences between responders and nonresponders on the variables of interest. Thanks to the user data recorded throughout the installation process, we could provide an unprecedented comparison between survey responders and nonresponders on their hardware characteristics as well as their behavior during the installation process.

Firstly, we evaluated the possible differences in hardware, specifically CPU performance and RAM size. Survey nonresponders had slightly higher CPU performance than responders: more nonresponders’ devices fit in the highest category of high-end CPU performance. Nevertheless, this difference was mostly balanced out in the second-highest category, where more complete and partial responders’ devices fit. Similarly, nonresponders also had slightly larger RAM sizes, but the difference was again most evident in the two highest categories of 4 to 8 GB and 8 GB+. Although these differences were significant, the effect sizes were very small, as was the case in Smahel et al.’s [31] analysis of a related sample across countries. The current results indicate that the differences were mostly driven by the two highest categories of both CPU and RAM. Thus, we cannot conclude that nonresponders and responders constitute user segments with distinctly different computer hardware.

We were able to detect user behavior during the installation process regarding the time spent on various screens and the willingness to enable the detection of potentially unwanted applications for all users, including the survey nonresponders. We found that the differences in the time spent on the EULA screen between nonresponders and complete and partial responders were not significant. All of the users spent similar amounts of time ranging from 37.5 to 40.2 seconds on this page. Considering the length of the license agreement, such times indicate that users generally do not read the full text before agreeing to it. This is in line with previous research that examined reading privacy and terms of service policies [34].

However, in the case of the installation screen that provided details and the request to enable PUA detection, we found significant differences between the responder segments. The 15 variants of the PUA screen were analyzed separately because they differed in the length of text, layout, and other visual elements. This screen was unique because the user had to provide active consent to allow PUA detection, which could not automatically be labeled as the recommended or preferred option. Unlike the EULA screen, where users know the “correct” option to choose in order to install the software and can therefore move through the screen quickly, the informed decision on the PUA screen required users to read the description of the feature. On all 15 variants, nonresponders spent significantly less time than both groups of responders. The differences with complete responders ranged from 4.14 to 11.37 seconds; the differences with partial responders ranged from 5.32 to 12.85 seconds. Complete and partial responders did not significantly differ in the time spent on the PUA screen (except for PUA screen Variant E3). It is important to note that, while the differences on the PUA screen were significant, the effects were small. Nevertheless, these results could indicate that survey responders were more attentive to the presented information and spent more time reading the details of the PUA detection feature. The PUA screens contained short descriptions of 29 to 123 words. Based on average reading speed, the expected reading time ranged from 7 to 30 seconds [32]; hence, even a few seconds of difference may be meaningful. An additional difference in behavior was detected with regard to the actual willingness to enable PUA detection. On 13 of the 15 PUA screen variants, complete responders enabled PUA detection significantly more often than nonresponders. The significant differences ranged from 3.14% to 6.96%. Considering that the PUA detection rates were already around 90%, this difference between nonresponders and complete responders is considerable.
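As a rough check on these figures (assuming an average reading speed of about 250 words per minute, a value not reported in this article), the quoted range follows directly:

$$t_{\min} \approx \frac{29\ \text{words}}{250\ \text{words/min}} \times 60\ \tfrac{\text{s}}{\text{min}} \approx 7\ \text{s}, \qquad t_{\max} \approx \frac{123\ \text{words}}{250\ \text{words/min}} \times 60\ \tfrac{\text{s}}{\text{min}} \approx 30\ \text{s}.$$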

The differences in time spent on the PUA screens and in the actual PUA detection could possibly be explained by the higher willingness of some users (i.e., complete responders) to spend additional time during the installation process as well as provide answers to a survey. Also, Fan and Yan [15] note that online survey responders tend to be more conscientious and agreeable than nonresponders. Such traits could underlie a higher willingness and compliance for the requests during the installation process in our study. Abbott [29] found that online survey responders had the characteristics of early adopters and were more engaged in new experiences than nonresponders. Responders in our survey could possibly view the additional participation in a survey as a new experience worth exploring.

Further, we analyzed the survey responses of complete responders compared with partial responders, where available, to provide an additional comparison of these two segments of users. Partial responders were slightly older than complete responders, although the difference was less than 1.5 years. This is contrary to Peytchev [30] and Steinbrecher et al. [9], who found that breakoffs (i.e., partial responders) were younger than complete responders. We did not find any differences in the percentages of males and females between the groups. Approximately two-thirds of the survey responders were male. The fact that the majority of responders were male might be related to the setting of our study, a software installation process, which is arguably a male-dominated domain [35]. Since gender is not available for the nonresponders, we cannot rule out the possibility that gender plays an important role in the differences between nonresponders and those who opt in to complete the survey.

Previous studies indicated that survey responders might be better educated than nonresponders or breakoffs [9, 30]. We found a similar trend, with complete responders being slightly better educated than partial responders, and this effect remained significant even when controlling for age. Complete and partial responders were also compared regarding their self-evaluated computer skills, the perceived privacy of their computer data, their sensitivity about privacy, and the perceived security of computers against online attacks. The groups did not differ on any of these variables when accounting for demographic variables.

6. Limitations

Some limitations beyond our control could have influenced these results. Despite our careful cleaning process, we cannot be completely certain that each record corresponded to a unique participant because participants could have used and installed the software on multiple devices.

While we detected some differences in the behavior of responders and nonresponders during the installation process, it is difficult to verify what caused them because we could only make use of passively collected data. A different design, possibly with mixed modes or in-person follow-ups, could be used to further explore the underlying processes.

The installation was a rather simple and straightforward process that did not include many individual decisions that could be examined in our study. It might be worthwhile to replicate our results with a more complicated process that would capture a broader range of user behaviors.

We did not set up our survey so that it could collect additional paradata such as time spent on specific questions in the survey. Further studies could make use of this information and provide a detailed description of the types of online survey responders and the related variables.

7. Conclusions

Based on the results of our study, we can conclude that there are only slight differences between online survey responders and nonresponders. Both groups used similar hardware, although nonresponders’ devices were slightly better in terms of CPU performance and RAM size. However, the behavior of the responders and the nonresponders differed in computer-usage-related decisions. We found that users who completed the survey were more willing to enable the detection of potentially unwanted applications than survey nonresponders, and they spent more time on the screen that provided details about this feature. Users who filled in the survey fully were also slightly younger and better educated than those who filled it in only partially. We can conclude that, despite the low overall survey response rates, online product-related surveys can provide useful insights about the user base. Nevertheless, companies using online surveys should be careful in their inferences because behavior might differ among user segments in specific situations.

Data Availability

Data access is limited due to the collaboration with a commercial partner in the data collection. The authors are, nevertheless, willing to run requested analyses and/or provide additional information about the data to interested parties and readers.

Conflicts of Interest

There are no conflicts of interest for any of the authors.

Acknowledgments

This work was supported by the Masaryk University (project MUNI/M/1052/2013).