Abstract

Online user feedback, collected by means of internet survey tools, is a promising approach to obtaining early user feedback on concepts and early prototypes. In this study, the collection and utilization of online user feedback were investigated in four design cases, all master's student projects for industry clients, involving seven student designers. A total of 272 user participants provided quantitative feedback, and about half of these also provided qualitative feedback. One third of the qualitative feedback was perceived as useful by the student designers. The usefulness of the feedback was mainly related to strategic concept decisions rather than the interaction design of the early prototype. Lessons learnt are provided.

1. Introduction

It is well established, for example, within the field of Human-Computer Interaction (HCI), that the early involvement of users in the design process is key to developing solutions that satisfy user needs and requirements. In particular, early involvement of users is held to be important in order to avoid costly redesign at later stages of development. One way to facilitate such early user involvement is through concept development and early prototyping, where user feedback is collected in response to low-fidelity representations of solutions in the early stages of design.

However, obtaining user feedback on concepts and early prototypes is resource demanding. In particular, this is due to having user representatives participate individually, or in small groups, in face-to-face (FtF) feedback sessions. High resource demands are likely to imply that user feedback is collected at longer intervals and later in the design process than would be the case if the resource demands were lower. Also, as the number of user participants affects the time and cost of running an evaluation, sample sizes are likely to be kept small. Small samples are prone to biases and therefore increase the risk that the participants are not representative of the intended user group.

Online collection of user feedback may represent a way to counter the challenges of resource demands and sample sizes in the early phases of design. In order to explore this, we studied online collection of user feedback in four cases of concept design and early prototyping. In each of the cases, the user feedback was collected as part of a project delivering concepts and user interface (UI) visualizations for novel social media services. All four design projects were conducted in the design phase preceding technical development, as described by Buxton [1]. The concepts and prototypes used to elicit online user feedback were presented as text and UI visualizations. The user feedback was collected using an online questionnaire with quantitative and free text responses.

In this paper, we will present the four cases and their associated findings and summarize the lessons learnt.

2. Background

2.1. The User in the Design Process

In standard software engineering processes, users are regarded as providers of information on system requirements and sources of feedback in system evaluations [2]. In the field of HCI, methods have been developed for user requirements specification [3], participatory design [4], and usability evaluation [5, 6]. The user-centered design process has been formalized in the international standard ISO 9241-210 Human-centred design for interactive systems [7].

In the scientific practice of HCI, focus has typically been on the transfer of information between the user world and the world of software developers [8]; that is, to provide context specification and requirements as inputs to design and to conduct evaluation of the resulting designs. In the phases of context specification and requirements specification, methods of capturing information from users include context-of-use analysis, surveys, field studies, and task analysis [9]. In the evaluation phase, users may be involved through usability testing, where they try to use and provide feedback on the developed solutions. There is widespread agreement on the importance of user evaluations early in the design process, conducted as exploratory or formative studies on preliminary design concepts [6].

2.2. User Feedback on the Basis of Early Prototypes

It is a commonly accepted notion that it is difficult, if not impossible, to provide a correct and comprehensive requirements specification in advance of the design process [10]. In consequence, the design and development processes are typically conducted iteratively, where phases of design are followed by phases of evaluation.

Prototyping is an important tool for enabling early evaluations of designs. In particular, paper prototypes have been advocated as useful in this respect, due to the ease and low cost of their making, their flexibility, and their ability to communicate design ideas to users [11]. Such early prototypes are simple representations of the system as seen from the users’ perspective and may be used for conducting usability tests where participating users try to solve predefined tasks on the paper UI (with the moderator taking on the role of the machine, updating the UI in response to the actions of the participating user). Early prototypes may also serve as a basis for engaging the user participants in codesign processes, where the prototype is updated on the basis of discussions between the users and the development team [11].

A number of empirical studies have investigated possible effects of prototype fidelity or characteristics. In these studies, usability test results have been found to be fairly robust across high- and low-fidelity prototypes, and also across paper and computer prototypes [12–14]. However, one study reported that, even though test results seem to be largely unaffected by the medium of the prototype, user test participants may be more comfortable with user tests conducted on computer prototypes rather than paper prototypes [15].

2.3. Online User Feedback

Online collection of user feedback on concepts, prototypes, or products is widely used in the field of HCI. Several validated questionnaires for measuring user satisfaction are available, such as the Questionnaire for User Interface Satisfaction (QUIS) [16] and the Software Usability Measurement Inventory (SUMI) [17]. Industrial solutions also exist for event-driven collection of user feedback on running systems, such as InSight by Netminers (http://www.netminers.dk/), and for unmoderated usability testing or design feedback via web-based solutions, such as Chalkmark by Optimal Workshop (http://www.optimalworkshop.com/) and Loop11 (http://www.loop11.com/).

Within the fields of psychology and HCI, comparative studies provide knowledge of how unmoderated web-based studies with users compare to laboratory studies moderated by a test leader. Previous studies referred to by Schulte-Mecklenbeck and Huber [18], and in particular the review of Krantz and Dalal [19], indicate that for a range of study types, web and laboratory studies yielded comparable results. However, Reips [20] noted that web-based studies introduce variability in the technical setup of the study and limit the control of multiple submissions.

In the current HCI literature, empirical comparisons of online versus FtF collection of user feedback have been conducted in the context of usability testing, and the findings from online data collection seem largely to resemble those from the usability laboratory, even though differences exist. Task completion times and task completion rates have been found to be fairly similar [21, 22]. However, for studies of online search behavior, online and laboratory settings may affect user behavior differently [18]. Tullis et al. [22] noted that the larger samples possible in online studies improve the reliability of subjective measures, such as user satisfaction questionnaires. Roberts, Barrett, and Marr [21] found that moderated usability testing in the usability laboratory generated richer and more useful feedback than did online unmoderated usability testing. This finding may have been due to user participants verbalizing their feedback in the moderated laboratory condition, whereas they wrote their comments in the online condition. Tullis et al. [22], who had their participants write their comments in the web interface in both the laboratory and the online context, did not report such differences.

3. Research Questions

The research objective of this study was to explore the characteristics of online user feedback in the early design phases. The following research questions were addressed:
R1: What kind of feedback is provided by users in response to online presentations of concepts and early prototypes?
R2: How is the online user feedback perceived by designers?

With respect to R1, it was seen as relevant to get insight into the different categories of online user feedback that may be expected, as well as the relative proportions of the categories. It was also interesting to investigate whether the user participants’ first impression of the service affected their qualitative feedback.

With respect to R2, it was seen as important to gain knowledge about the degree to which online user feedback provides designers with new insight into the context of use, user requirements, or ideas for redesign.

4. Method

4.1. Design Cases

Four cases involving concept design and early prototyping were investigated. The cases were all conducted as part of a master course in interaction design on design for social media at the Oslo School of Architecture and Design. The design work was done for two clients. In Cases 1–3 the client was a football club requesting concepts for social internet services, to serve as a basis for the development of new functionality for their supporter website. In Case 4, the client was a telecom operator requesting social concepts for their online music shop, designed for both PC and mobile phones. (There was also a fifth case associated with the course; however, in this case we were not able to conduct the same data collection as in the four other cases, and it is therefore not included in this presentation.) The final delivery in each case was a service concept including an early (nonfunctional) prototype of the service UI. Case 1 included a just-for-fun betting service. Case 2 covered a range of entertainment options such as games, quizzes, and karaoke. Case 3 proposed a timeline visualization of user-generated multimedia content. Case 4 suggested a music service utilizing geographical positioning of mobile phones.

The choice of student design cases implies important benefits. The constraints of the design processes were known, controllable, and similar in all the studied cases, increasing our ability to analyze findings across cases. The student designers did not have a well-established design process, making them flexible with respect to the implementation of a design process with an online user feedback loop. Further, student designers, when compared to experienced designers, may be more open to actually utilizing novel sources of design feedback. Student designers may also be expected to more easily participate in additional data collection activities required for research purposes.

At the same time, the choice of student design cases implies an important limitation: the knowledge generated through research on student design processes will need to be replicated in industrial cases before it can be generalized to experienced designers. However, given the exploratory nature of this study, the limitations of student design cases were found to be outweighed by the benefits.

4.2. Design Process

The design process was kept the same in all four cases, with the exception of the number of student designers participating in each case, which varied (details below). Each case was initiated through the presentation of a design brief, followed by a phase of ideation lasting for 5 working days. At the end of the ideation phase, the student designers received feedback through (a) face-to-face (FtF) feedback sessions with potential users and (b) client presentations. Each case included two or four FtF feedback sessions, where individual potential users were presented with an early conceptualization of the ideas the student designers wanted to pursue. (Each case was planned to have FtF feedback sessions with four individual users; however, in two of the cases, two of the recruited users did not show up.) The inclusion of these FtF feedback sessions enabled the student designers to compare their experience of FtF and online user feedback.

On the basis of the client and user feedback, a phase of conceptualization and early prototyping followed, lasting for 10 working days. This phase was concluded with the presentation of the concepts and early prototypes for online feedback from the user participants.

The online user feedback period was one working day and two weekend days. Upon presentation of the online feedback, a phase of finalizing the concepts and early prototypes followed, lasting for 5 working days and concluding with a client presentation of the final delivery. The design process is presented diagrammatically in Figure 1.

In all cases, the early prototypes were visual presentations of the user interface. The visual presentation, however, varied between the cases. In Case 1 the prototype was a Flash presentation allowing the user to click through a preset series of steps showing key functionality. In Case 2 the prototype was presented as a series of nonclickable screens. In Case 3 the visual presentation was a one and a half minute video showing intended functionality. In Case 4 a Flash presentation allowed the user to engage in exploration of the prototype by way of simple interaction.

In Cases 1–3, the underlying concepts were presented as text together with the visual presentations of the prototypes. In Case 4, the concept was presented through visual storytelling immediately preceding the presentation of the prototype.

4.3. Participants

The participating designers were seven students conducting the design cases as part of a master course in interaction design. Four were male, three female. Their median age was 26 years (min = 24, max = 29). Five of the student designers had backgrounds in industrial design; two had backgrounds in informatics. None had extensive professional work experience.

The online user feedback was collected from representatives of prospective users, recruited in order to be as close to the target population as possible. The participating users in the three football club design cases were recruited on the basis of an invitation in an electronic newsletter sent out to the supporters of the particular football club. The participating users in the online music shop case were recruited from a large national panel meant to be representative of the Norwegian internet population. These latter participants were selected on the basis that they were in the age range of 15–40 and reported to have a very great interest in music. Details of the user participants in the study, for each of the four cases, are presented in Table 1.

4.4. Data Collection and Analysis
4.4.1. User Feedback Data Collection and Analysis

The online user feedback was collected through an online questionnaire solution (http://www.surveymonkey.com/). Upon entering the questionnaire, data were collected on the participant’s background. Following this, the concept and an early visual prototype were presented, and the participant was instructed to familiarize himself or herself with them. The visual prototype contained a visual presentation of the user interface, covering approximately 700 × 500 pixels of the screen. The concept was presented either as text together with the visual prototype or through visual storytelling immediately preceding the visual prototype, as described above.

Following the participant’s familiarization with the concept and prototype, data were collected on the first impression of the concept and the early prototype. Finally, the participant was asked to provide free text feedback on how the conceptualized service could be improved. Free text feedback was entered in a rather large input field, typically 9 lines high and 90 characters wide; the large field was chosen so that participants would not feel restricted in their feedback by a seemingly small input field. The different questionnaire items are presented in Table 2.
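For illustration, the questionnaire flow described above may be summarized as a simple data structure. The Python sketch below is hypothetical: the section names and item wordings are placeholders of our own, and the actual items are those listed in Table 2.

# Hypothetical summary of the questionnaire flow; actual items are listed in Table 2.
questionnaire = [
    {"section": "background", "items": ["background item 1", "background item 2"]},
    {"section": "presentation", "items": ["concept (text or visual storytelling)",
                                          "visual prototype (approx. 700 x 500 pixels)"]},
    {"section": "first_impression", "items": ["impression item 1", "impression item 2",
                                              "impression item 3"], "scale": (1, 5)},
    {"section": "free_text", "items": ["How could the conceptualized service be improved?"],
     "field_size": {"rows": 9, "cols": 90}},
]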

Some variation in the online data collection existed between the cases. In Case 2, the presentation was split in three, with data collection on first impressions and free text feedback for each of three aspects of the concept; due to this splitting of the presentation, first impression data were collected in the same manner as in the other cases only for the last third of the presentation. In Cases 1 and 4, two free text questions were asked: one on the assumed context of use for the conceptualized service, in addition to the question on how it might be improved.

Upon the completion of the questionnaire data collection, and prior to the student designer interviews, the free text feedback was analyzed according to the following categories:
(i) constructive (information on context of use, user requirements, change suggestions, ideas),
(ii) positive (nonconstructive positive feedback),
(iii) negative (nonconstructive negative feedback),
(iv) other (comments on issues not covered by the above categories).

Free text answers consisting only of short nonsense text (such as “asdasd” or “???”) or exclamations (such as “what?”, “no”, “yes”) were not included in the analysis.
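As an illustration of how coded free-text units can be tallied after exclusion of trivial answers, the following Python sketch uses hypothetical data; the coding and exclusion decisions themselves were made manually by the researchers, not by a script.

from collections import Counter

# Hypothetical coded units; categories and exclusions were assigned manually.
coded_units = [
    {"text": "Let us use the betting service together with friends", "category": "constructive", "excluded": False},
    {"text": "Nice idea!", "category": "positive", "excluded": False},
    {"text": "asdasd", "category": None, "excluded": True},  # short nonsense text, excluded
]

# Tally the categories of the units that were not excluded from the analysis.
category_counts = Counter(unit["category"] for unit in coded_units if not unit["excluded"])
print(category_counts)  # e.g. Counter({'constructive': 1, 'positive': 1})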

4.4.2. Student Designer Interviews—Data Collection and Analysis

Immediately upon completion of the design deliverables, the student designers were interviewed about their experiences with the online feedback. The interviews targeted the following six themes:
(i) the effect of the online feedback on the concept and early prototype, and the most important findings in the online feedback,
(ii) the degree to which the online feedback provided (a) background knowledge on the user and the user context, (b) knowledge of user needs and requirements, and (c) new or creative ideas,
(iii) the weak aspects of online user feedback,
(iv) the strong aspects of online user feedback,
(v) comparison of FtF and online user feedback: the advantages of each feedback method, and how they differed with respect to (a) background on users and context of use, (b) knowledge of user requirements, and (c) new and creative ideas,
(vi) change suggestions as to how online user feedback should be collected and used.

At the end of the interview, the student designers were asked to rate the usefulness of each of the qualitative online feedback units received in their case. The rating was made on a scale from 1 (not at all useful) to 5 (very useful). The combined interview and rating sessions lasted 25–45 minutes, of which the interviews themselves took 13–25 minutes; all sessions were recorded.

The interview data were subjected to a thematic analysis [23], an analysis technique in which the data are coded according to coding categories developed iteratively in response to the data. In total, 57 coding categories were developed to cover 181 units of meaning across the seven interviews. Of these, 19 coding categories covered all meaning units shared by 3 or more of the student designers. All but 5 of the coding categories were found to belong to one of the following themes: usefulness of online feedback, strengths of online feedback, weaknesses of online feedback, strengths of FtF feedback, weaknesses of FtF feedback, areas of use for FtF and online feedback methods, and suggested changes to the user feedback process.

Example coding categories: (a) usefulness of online feedback: confirm or choose direction of project, (b) strength of online feedback: many participants, (c) weakness of online feedback: lack of depth in feedback, and (d) suggested change to the user feedback process: improved communication of concept.

5. Results

5.1. The Online Feedback

Across the four cases, 272 user participants provided online feedback. In addition, 28 user participants dropped out before entering feedback on their first impressions, a drop-out rate of 9 percent. Among the user participants, 134 (49 percent) provided qualitative feedback. Since two of the cases had more than one free text item, a total of 178 units of qualitative feedback were provided; of these, 93 were coded as constructive, 38 as positive, 44 as negative, and 3 as other. The median length of the qualitative feedback units was 52 characters (min = 5, max = 554). Feedback examples are provided in Table 3.

A single measure of the user participants’ first impression was established as the average score of the three items used to measure first impressions. Case 2 was left out of the analysis of first impressions, due to the diverging use of the first impression items in this case (as explained above). The internal reliability of this measure was found to be satisfactory (Cronbach’s alpha = 0.89). The median score for first impressions was 3.67 (min = 1, max = 5).
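For readers who wish to reproduce this kind of composite measure, the following Python sketch shows the calculation under stated assumptions: the three first-impression items are columns of a pandas DataFrame (column names and values are hypothetical), the composite score is the mean of the three items, and Cronbach's alpha is computed as k/(k - 1) times (1 minus the sum of item variances divided by the variance of the item sum).

import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of item sum)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    sum_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / sum_variance)

# Hypothetical responses to the three first-impression items (1-5 scale).
responses = pd.DataFrame({
    "impression_1": [4, 3, 5, 2, 4],
    "impression_2": [4, 4, 5, 2, 3],
    "impression_3": [5, 3, 4, 1, 4],
})

composite = responses.mean(axis=1)  # one first-impression score per participant
print(round(composite.median(), 2), round(cronbach_alpha(responses), 2))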

The effect of first impressions on the qualitative feedback was explored with respect to (a) the user participants’ likelihood of providing qualitative feedback at all and (b) the likelihood that the feedback was constructive.

First impressions seemed to have little or no effect on the user participants’ likelihood of providing qualitative feedback. When the user participants were divided into two groups on the basis of their first impression score being below or above the median, 67 percent of those with low first impression scores, and 69 percent of those with high scores, provided qualitative feedback. To illustrate the lack of effect, a Fisher’s exact test was conducted.

First impressions seemed to have a substantial effect on the user participants’ likelihood of providing constructive qualitative feedback. Among those who provided qualitative feedback, only 41 percent in the low first impression group, but as many as 75 percent in the high first impression group, provided feedback coded as constructive. To illustrate this, a Fisher’s exact test was conducted.
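The two group comparisons above can be illustrated with a standard 2 × 2 Fisher's exact test, as in the following sketch; the cell counts below are placeholders chosen only to roughly match the reported percentages and are not the study's actual counts.

from scipy.stats import fisher_exact

# 2 x 2 table: rows = low / high first-impression group, columns = constructive / not constructive.
# Counts are illustrative placeholders (roughly matching the reported 41 and 75 percent),
# not the study's actual cell counts.
table = [
    [20, 29],  # low first-impression group
    [45, 15],  # high first-impression group
]

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")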

5.2. The Student Designers’ Perceptions of the Online Feedback
5.2.1. Results from the Student Designers’ Usefulness Ratings

The quantitative usefulness ratings each student designer made for each received item of user feedback provide an initial understanding of the student designers’ perceptions of the online feedback. Usefulness ratings were provided for 167 of the qualitative feedback items. In the three cases involving two student designers, usefulness scores were calculated as the average of the two individuals’ scores. About half (52 percent) of the qualitative items were rated as low in usefulness (scores below 3), whereas 32 percent were rated as high (scores above 3). Ten percent of the items were given the highest usefulness score.
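The rating procedure can be sketched as follows, with hypothetical ratings: in the two-designer cases, an item's usefulness score is the mean of the two designers' 1–5 ratings, and items are then binned as low (below 3) or high (above 3) in usefulness.

def usefulness_bin(ratings):
    """Average the designers' 1-5 ratings for an item and bin the result."""
    score = sum(ratings) / len(ratings)
    if score < 3:
        return "low"
    if score > 3:
        return "high"
    return "mid"

# Hypothetical ratings: two designers per item, as in the three two-designer cases.
items = {"item_1": [4, 5], "item_2": [2, 3], "item_3": [3, 3]}
print({name: usefulness_bin(ratings) for name, ratings in items.items()})
# -> {'item_1': 'high', 'item_2': 'low', 'item_3': 'mid'}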

The categories of qualitative user feedback were clearly related to the perceived usefulness. Only constructive items of qualitative user feedback were rated by the student designers as high in usefulness (scores above 3); 63 percent of the items classified as constructive were rated as high in usefulness. All items of user feedback classified as positive were rated low in usefulness (scores below 3); 75 percent of the feedback items classified as negative were rated low.

5.2.2. Results from the Student Designer Interviews

The key results from the interviews are presented below, following the topics of the interview guide. The presented findings represent all findings based on reports from three or more of the interviewees. In addition, a small number of findings based on fewer than three of the interviewees are included.

(a) The Effect of Online Feedback
The online feedback seemed to have had a varying effect in the different cases. Three of the student designers reported that the online feedback caused major changes to the concept or early prototype, whereas two reported no changes made in response to the user feedback. One example of a major change was related to a just-for-fun betting service developed in one of the football cases. In this case, many of the user participants asked for increased attention towards group usage of such a service. In consequence, the final delivery was made mainly to serve group use.
The online user feedback also seemed to serve as confirmation of the concept or as background on which to choose a direction for further design; this was reported by three of the student designers. Feedback where users’ descriptions of how they would like to make use of the suggested service matched the intentions of the designers seemed to have been interpreted as confirmatory, whereas user preferences and positive or negative reactions were used to guide the direction of the design. Six of the student designers reported that the online feedback provided new insight into whether the service met user needs and requirements, and that such knowledge may be crucial when making strategic decisions on how to develop a concept.
Finally, three of the student designers reported that the online feedback motivated them to develop a clearer or more self-explanatory concept. This seemed to have been motivated by the student designers’ interpretation of user feedback indicating that their initial concepts did not communicate as intended to the user participants.

(b) Input Provided by the Online Feedback
The online feedback was reported by four of the student designers to provide increased insight into the users and the context of use. This was elaborated as insight into assumed use, culture, and user types, as well as the provision of a new perspective, very different from the one they found to be held by themselves or their peers. In particular, this seems to have been the case in the three football club cases, where two of the student designers expressed surprise at the likes and dislikes voiced by the user participants. At the same time, three of the student designers clearly expressed that they did not find the online feedback to provide new insight into users and context of use.
The student designers were also asked to what degree the online feedback included new or creative ideas. In general, it seems that the student designers found little or no such benefit in the feedback. None reported more than a few new or creative ideas, and one student designer pointed out that the creative ideas were mainly issues the student designers had already thought of.

(c) The Weak Aspects of Online Feedback
The reported weak aspects of online user feedback, as seen from the point of view of the student designers, were a lack of depth or detail in the feedback (reported by four), lack of control of the engagement and motivation of the user participants (reported by three), and lack of consistency in the feedback (reported by three). The lack of depth and detail was explained as a consequence both of the limited length of the qualitative feedback and of the lack of opportunity to engage in a dialogue with the participant. The lack of control of the user participants’ motivation was reported to be an important threat to the credibility of the online feedback; one of the student designers clearly expressed that he did not want to make design decisions based on feedback from people who did not make a serious effort to provide relevant and considered answers. The lack of consistency in user feedback seemed to confuse some of the student designers: how should one interpret feedback where some user participants say one thing and others the direct opposite? Two of the student designers reported resolving conflicting user feedback by selective utilization of the user feedback, but voiced concern that one may be tempted to only utilize feedback that is in line with one’s preconceptions.

(d) The Strong Aspects of Online Feedback
The main strong aspect of online user feedback was reported to be the number of user participants providing feedback. Five of the student designers saw this as a benefit. The reasons provided for this strength in numbers were that more user participants provide a more correct picture of the user group as a whole, and that a large number of users allows the use of quantitative feedback, as in the first impression scores. In addition, more user participants were seen as a way to get more breadth in the feedback.

(e) Comparison of FtF and Online User Feedback
When comparing FtF and online user feedback, the student designers’ comments suggested the complementary nature of the two, but also clearly indicated that different student designers prefer different feedback methods. The FtF feedback was reported to provide more insight into the individual user (reported by five), better control of the motivation of the user participants (reported by five), more detailed feedback (reported by six), and more creative feedback (reported by six) than online feedback. The student designers were not asked which of the feedback methods they would choose if they could only use one, but three of them voiced a preference for the FtF feedback and two for the online feedback.
Four of the student designers suggested that the two methods may be seen as complementary in the design process. The online feedback was suggested to be more suited for feedback on conceptual issues, and possibly also for studies of finished products, whereas FtF feedback was suggested to be more beneficial for feedback on prototype design.

(f) Change Suggestions
A range of change suggestions was reported with respect to how online user feedback should be collected in order to be useful in concept development and early prototyping. Of these, the change suggestion reported by most (three) was to implement measures to ensure that the presented concepts communicate well to the user participants. As was noted by one of the student designers, relevant user feedback only comes in response to concepts that actually communicate what the designers intend. This rather trivial reflection seems to be an important learning point of the study. Suggested ways to improve the communication of concepts included piloting prior to submitting concepts for online user feedback, as well as more time for concept preparation.
Other change suggestions were to increase the number of user participants, in order to get more relevant feedback (reported by one), and to collect more background information on the user participants, in order to be better able to interpret the feedback (reported by one). One of the student designers also reported that a longer project was needed in order to adequately digest and utilize the online feedback.

6. Discussion

In the following, we will first discuss the results with respect to the two research questions. This discussion will conclude in a set of lessons learnt. Then, we will discuss the limitations of the study and suggest future work.

6.1. The Value of Online User Feedback

The first research question addressed the kind of user feedback provided in response to online presentations of early design. The results indicated that about half the qualitative feedback was constructive, in the sense that it provided information on the context of use, user requirements, change suggestions, or ideas. There was a good spread in the length of the user participants’ qualitative responses; however, none provided more than a few sentences of feedback. Such brevity in the qualitative feedback may be beneficial in the sense that it allows for easy overview and analysis. Even so, it seems clear that the benefit of online feedback would increase with increasing comprehensiveness or level of detail in the user feedback.

It was interesting to note that the user participants’ first impression did not, to any great degree, affect their tendency to provide qualitative feedback, nor the length of the feedback provided. This may imply that whether users like or dislike a design does not affect their willingness to provide feedback. This finding is comforting, because it indicates that a possible bias of getting comments only from those in favor of a design may not be an issue. Some concern, however, should be given to the finding that those with a positive first impression of a design were more inclined to make constructive comments. In future studies where, with the help of social software, designers are able to engage in online dialogues with participating users, a bias may be introduced if users providing constructive feedback are more likely to be engaged in dialogues.

The interviews with the student designers suggest that for some the online user feedback clearly had beneficial effects on the design process, in particular by motivating concept clarification and smaller or larger changes to the design. On the other hand, others did not find the online feedback particularly useful and strongly preferred FtF user feedback. This divergence may indicate that the beneficial outcome of online user feedback depends on a designer’s ability or willingness to utilize the feedback. Possibly, online user feedback is an approach that suits some designers but not all.

The changes associated with user feedback seemed typically to be at the conceptual level, and the online user feedback seems to have been perceived as more suited for early strategic decisions at the concept level than for decisions related to early prototype design, which includes design at the level of interaction design and UI visualizations. This finding, together with the finding that the main strength of the online feedback was seen as related to the number of respondents, is an important argument for further experimentation with online feedback in the very early phases of the design process, in the design phase described by Buxton [1] as preceding project green-lighting and engineering. These early phases are characterized by decisions of a strategic nature, where it may be highly valuable to use feedback from a representative sample of users as background information. Such early feedback from a large representative sample is less dependent on dialogue between designer and user, and is therefore not greatly affected by the online user feedback being a one-way communication process. Possibly, in user feedback processes supported by social software, dialogue between designers and users may also improve the usefulness of online user feedback in prototype design. Such dialogue may enable the designers to correct misconceptions, better inform the user participants about the project’s goals and what the designers are trying to achieve, and explain how feedback should be given in order to be constructive. However, if user feedback is collected in dialogue between designers and users, it may be that only smaller numbers of users can be involved.

6.2. Lessons Learnt

The study has provided new insights, and there are several lessons to be learnt. In the following, we summarize some of the lessons that we find particularly important.
(i) Online user feedback on concepts and early prototypes was seen as useful in the design process by some student designers, but not all, in particular due to the insights into (a) the service’s context of use, (b) whether or not the service met user needs and requirements, and (c) the unfamiliar perspective of the common users.
(ii) The online feedback was suggested to be mostly useful for feedback on concepts rather than at the level of detailed UI design, indicating that this method may well complement FtF user feedback, in particular in the early phase of the design process.
(iii) Only constructive feedback was perceived as useful by the student designers, indicating the importance of communicating to the participating users that they should explain why they (dis)like a concept, or how it may be improved. Since the user participants’ inclination to provide constructive feedback was found to depend on their first impressions, measurements of first impressions may possibly be used to filter qualitative user feedback for efficient analysis.
(iv) The qualitative feedback was not seen as sufficiently detailed, and the feedback situation did not provide sufficient control of the user participants’ motivation. This finding may indicate a need for measures that support improved interaction between designers and users, for example by conducting the online feedback as an online dialogue rather than one-way feedback. However, it may also be, in line with Roberts et al. [21], that the depth of user feedback depends on the user participants being able to provide their comments verbally in an FtF context.
(v) Conflicting user feedback was found confusing by some of the student designers, highlighting the tension between the subjective creative vision of the designer and contrasting feedback from the user participants. Knutsen and Morrison [24] advise that when conflicts arise, the ultimate decisions need to be made by the designer. Designers will probably benefit from guidelines on how to utilize conflicting user feedback.
(vi) Even though drop-out rates were low, only about half the user participants provided qualitative feedback, and only about half of these again provided constructive feedback. This indicates the importance of taking the relatively low proportion of constructive feedback into account when recruiting.

6.3. Limitations

The present study has two main limitations: first, the study includes only a small number of cases; second, the study was conducted with student designers rather than experienced designers. Both of these limit the generality of the claims that may be supported by the findings. We have earlier argued for the benefits of conducting this study with student designers; for example, in comparison to experienced designers, student designers will be more likely to adapt to the requirements of the study’s feedback methods, and they may also be more open towards novel sources of feedback. At the same time, we are aware that student designers may respond to user feedback in a different manner than experienced designers. For example, student designers may be more naïve in their interpretation and handling of user feedback (both online and FtF) and may have more difficulty in separating valuable from irrelevant user feedback. Student designers may also be more used to presenting their designs for critique throughout the design process, possibly making them more accepting of a design process where such presentation is required.

In spite of the limitations, we believe that the study provides a valuable exploration of the emerging opportunity to use online user feedback in the design process. Future research is, however, needed in order to investigate the generality of the present findings.

6.4. Future Work

We aim for this study to serve as an early exploration of online user feedback in the design process, and hope that it motivates future research in this field. In particular, we see a need to test the generality of the findings of this study in different cases as well as with designers at varying levels of experience.

We also see a need to further explore processes and mechanisms for eliciting and utilizing online user feedback, both at the level of data collection and at the level of implementing this approach to user feedback in the design process. An important aspect of this latter strand of future work is the exploration of social software as a medium for dialogue between designers and users, through which in-depth design feedback may be gathered.

Finally, since designers seem to diverge in their view of the usefulness of online user feedback, it would be interesting to explore the designer characteristics that affect how online user feedback is perceived. It would also be useful to investigate how online user feedback processes and environments should best be set up to serve the varying needs and requirements of designers.

Online user feedback seems to be a potentially valuable source of information and critique in design processes. We hope that future work in this area will enable online user feedback to be included in common design practice, as a cost-effective approach to user involvement in the early phases of the design process.

Acknowledgment

This study was conducted as part of the research project RECORD (http://recordproject.org/), supported by the VERDIKT programme of the Norwegian Research Council.