Mathematical Problems in Engineering
Volume 2014, Article ID 842431, 10 pages
http://dx.doi.org/10.1155/2014/842431
Research Article

Evaluation and Satisfaction Survey on the Interface Usability of Online Publishing Software

Department of Cultural & Creative Industries, National Kaohsiung University of Applied Sciences, 415 Chien Kung Road, Kaohsiung 807, Taiwan

Received 6 May 2014; Accepted 9 May 2014; Published 11 June 2014

Academic Editor: Teen-Hang Meen

Copyright © 2014 Ying-Jye Lee and Chia-Ju Lin. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Digital publishing is one of Taiwan's national key programs. Unlike traditional digital publishing models, consumers can create personal digital publications with editing software provided by businesses and combine them with web-to-print to output physical publications. The usability of such online publishing software, however, affects consumers' acceptance and purchase intention. In this study, the Focus Group method is used to screen representative online publishing software (TinTint, Photobook, and Hypo) for an evaluation of interface usability, a survey of users' subjective satisfaction, and suggestions for interface modification. Learnability and the number of user errors are set as the evaluation indicators of usability. For learnability, nine of the typical tasks (all except Storing) show significant differences between the software packages. For the number of user errors, the tasks of entering basic information of works, appending pictures, adjusting pictures, changing type version, and changing pages reveal significant differences between the packages. Regarding overall subjective satisfaction with the interface, TinTint and Hypo outperform Photobook, with no significant difference between TinTint and Hypo. The research model is expected to serve as a reference for interface development and evaluation in digital content industries.

1. Introduction

The cultural and creative industry was included in Challenge 2008, the National Development Program proposed by the government in May 2002. With the concept of an industrial chain, the value of cultural industries is redefined to expand the creative field, combining humanity and economy to develop industries covering both cultural accumulation and economic benefits. The Cultural and Creative Industry Development Act was made public by the Ministry of Culture in February 2010. In the 2007 Publishing Almanac, the Government Information Office classifies the publishing industry into press publishing, magazine publishing, audiobook publishing, book publishing, and digital publishing. The digital publishing and archives industries are defined as those covering publishing, circulation, and archives, applying the Internet, information technology, and copyright management mechanisms to create new operation models for new markets and to enhance the production, circulation, and service chain of digital knowledge. Digital publishing covers electronic books, electronic magazines, electronic paper, electronic databases, and mobile content. Digital publishing is not simply a part of the digital content industries but a new form of publishing. Among the numerous digital publishing items, the electronic book industry is the focus of public concern.

In recent years, businesses have been promoting online publishing software, which differs from traditional digital publishing models. In traditional digital publishing, professional editors create the publication, whereas in online publishing the consumers themselves are the editors. Consumers apply the editing software provided by businesses to create digital publications unique to each individual and combine them with web-to-print to output physical publications. Such online publishing models have gradually been applied to personal albums, postcards, desk calendars, and flashcards. Online publishing not only reduces human resource investment and the cost of repeated proofreading but also customizes products, solves the communication problems between the production end and consumers in the traditional editing process, and creates new channels and territories for digital publishing. Online publishing is therefore regarded as a standard customization model.

Nevertheless, with highly developed science and technology, products are no longer judged by function alone. When the functional differences among brands are not remarkable and there are numerous options, interface usability becomes a key factor in users' choice of product [2]. In the network era, applying online publishing software to web-to-print is to be expected, so the usability of online publishing software interfaces is worth considering. The idea of usability has long been applied to research on human-computer interaction (HCI), especially usability for improving software interfaces [1, 3–5]. The favorable development and effectiveness of usability in HCI have led the concept to be applied to other fields, such as the usability improvement of consumer products [6] and the usability evaluation of color selection interfaces for customer-tailored products [7]. This study therefore evaluates the interface usability of online publishing software with usability engineering and further proposes suggestions on the problems and design of such interfaces.

In summary, the literature on digital publishing, online publishing software, and usability is first explored and organized [8]. The Focus Group method is applied to screen large-scale, representative online publishing software; interface usability is then evaluated with users, and the performance of the online publishing software is analyzed. Meanwhile, the study aims to understand users' demands for online publishing software and to propose user-friendly interface guidelines for designers and relevant businesses, thereby promoting the usability of online publishing software. The results are expected to provide governmental and other relevant sectors, such as online game businesses and electronic book publishers, with directions for digital content interface design. The research model is expected to be applicable to digital content usability evaluation in other areas of Taiwan, providing valuable reference for digital content and other industries.

2. Evaluation of User Interface and Usability

A favorable user interface starts from understanding people rather than technology, as software is merely a tool for certain objectives; the better the satisfaction, the more the users are pleased. The systems designed by designers often differ from the users' imagination and understanding of those systems; this is captured by the notion of a mental model in user interface design. If the user's mental model differs from the system, the user interface of the system has usability problems. Usability refers to a system being accessible and accepted by specific users conducting certain tasks in certain environments [9]. Usability engineering was proposed by Nielsen [1], who considered usability to consist of multiple attributes and proposed the following evaluation indicators.

(1) Learnability. The system is easy to learn and use for beginners. Novice users are measured to understand the learnability of the system; generally, the time novice users need to become familiar with the system, or their success rate on designated tasks, is used for the judgment.

(2) Efficiency of Use. After the users are acquainted with the system, high productivity can be achieved. Efficiency is measured by the time experienced users take to complete a specific typical task.

(3) Memorability. When casual users return to the system after a period of time, they should not need to learn the system again. Memorability is tested with casual users, measuring the time spent or the number of correct responses.

(4) Error Rate. The system should let users make few mistakes, allow mistakes to be easily detected and recovered from, and never produce dramatic errors. Errors are normally classified into fatal errors, minor errors, and errors corrected in real time.

(5) Subjective Satisfaction. The users' satisfaction with the system is often measured with questionnaires in order to understand their preferences. The user requirements, advantages, and shortcomings of the various usability evaluation methods are shown in Table 1.
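As a rough sketch (not taken from the paper), the indicators this study later relies on, learnability time, error count, and subjective satisfaction, could be computed from session logs as follows; all names and figures below are hypothetical:

```python
from statistics import mean

# Hypothetical session log: one record per subject for one typical task.
sessions = [
    {"subject": 1, "seconds": 42.0, "errors": 1, "satisfaction": 6},
    {"subject": 2, "seconds": 55.5, "errors": 3, "satisfaction": 4},
    {"subject": 3, "seconds": 38.2, "errors": 0, "satisfaction": 7},
]

learnability = mean(s["seconds"] for s in sessions)       # lower time = easier to learn
error_rate = mean(s["errors"] for s in sessions)          # fewer errors = better
satisfaction = mean(s["satisfaction"] for s in sessions)  # 1~7 scale, higher = better

print(f"mean time: {learnability:.1f}s, "
      f"mean errors: {error_rate:.2f}, "
      f"mean satisfaction: {satisfaction:.2f}")
```

Per-task aggregates of this kind are what the ANOVA in Section 4 then compares across the three software packages.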

Table 1: User requirements and main advantages of different usability methods.

Online publishing software presents entertainment characteristics, so its user interface should focus on high learnability, a low error rate, and high subjective satisfaction. Performance measurement is therefore utilized to evaluate the learnability and number of user errors of the online publishing software interfaces, and a questionnaire survey is applied to understand users' subjective satisfaction with the interfaces.

3. Methods

3.1. Experimental Planning

The Focus Group method is used in this study to screen representative online publishing software. Focus Groups are often used to evaluate users' demands [10–12] for objective and representative solutions. The Focus Group members are divided into two groups, with six interviewed members in each, to discuss which online publishing software is representative enough to serve as the research samples (the top three are selected). The members are interviewed with semistructured questions (Table 2).

Table 2: Focus Group interview outline.

Before the experiment, the subjects are requested to freely browse the contents, software instructions, instructional videos, or the software itself on each software website for three minutes per package (nine minutes in total), aiming to approximate real users' habits. After this learning period, the interface usability testing of the online publishing software is conducted. Usability testing examines users operating the system in the laboratory. In the usability engineering lifecycle, usability testing by performance measurement is important for evaluating the achievement of usability goals and comparing competitive products [1]. Performance measurement collects the time and error data of a group of subjects conducting a set of testing tasks. Learnability and the number of user errors are therefore set as the evaluation criteria. With the experimental data, the representative online publishing software packages are evaluated and compared in the interface usability test.

Furthermore, users' subjective satisfaction with the online publishing software interfaces is investigated to understand their demands and to propose better interface improvements. The questionnaire is based on the QUIS (Questionnaire for User Interface Satisfaction) developed by Chin et al. [13]. Users' satisfaction with the interface is the core of the questionnaire, and the original QUIS phrasing is slightly modified for the research subjects. The 0~9 scale is also changed to 1~7. According to Miller [14], only about 7 ± 2 units can be rapidly held in human short-term memory; beyond that, memory is overloaded, and "7" was therefore called a magical number. For this reason, the 10-point scale in QUIS is reduced to 7 points to lessen the evaluation load on the subjects.
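The paper redesigns the questionnaire on a 1~7 scale rather than converting recorded QUIS scores, but a simple linear mapping illustrates how the original 0~9 responses and the study's 1~7 scale correspond (a sketch only, not part of the study's procedure):

```python
def rescale(score, old_min=0.0, old_max=9.0, new_min=1.0, new_max=7.0):
    """Linearly map a QUIS-style 0~9 response onto a 1~7 scale."""
    frac = (score - old_min) / (old_max - old_min)
    return new_min + frac * (new_max - new_min)

print(rescale(0))    # -> 1.0 (lowest satisfaction)
print(rescale(9))    # -> 7.0 (highest satisfaction)
print(rescale(4.5))  # -> 4.0 (midpoint maps to midpoint)
```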

3.2. Subjects

The subjects fall into two parts. Some of the participants in the Focus Groups are designers, while the others are users with long-term experience of online publishing software. A total of 30 subjects are included in the evaluation of software usability. Most of the subjects in this study are aged 20~29. According to data from InsightXplorer Limited in July 2011, the average frequency of Internet use in the age group below 29 is higher than in the group above 30. Moreover, different age groups browse distinct types of websites; users aged 20~29 prefer social network sites, online video, news, and shopping sites, and this age group also scores higher on software use, which relates well to the topic of online publishing software. The 20~29 age group is therefore selected as the subject pool. The subjects have high educational backgrounds, normal eyesight, and no color blindness.

3.3. Typical Task

The typical tasks are derived and adapted from the TinTint online publishing software interface (Table 3). Albums are selected as the product for this experiment, as they are commonly promoted by online publishing businesses. The selected typical tasks are therefore suitable for all the measured software packages.

Table 3: Typical task set in the operation steps and the objectives.

4. Results and Discussion

4.1. Focus Group Interview

Focus Group interviews are held separately for designers and for general users, with six participants in each discussion, aiming to screen representative online publishing software.

4.1.1. Focus Group Interview of Designers

The participants in the designers' Focus Group interview are selected from a design company in Kaohsiung that takes charge of design business and conducts research and development of products related to online publishing software. The participants are hereafter coded as in Table 4.

Table 4: Participants’ data of designers and the results.
4.1.2. Focus Group Interview of General Users

The participants in this interview have experience in using online publishing software and more than 10 years of experience using the Internet. Table 5 shows the participants' individual data and the interview results.

Table 5: Participants’ data of general users and the results.
4.1.3. Focus Group Interview Result

The top three businesses, TinTint (10), Hypo (8), and Photobook (7), are taken as the experimental samples of online publishing software. A brief introduction of each is shown in Table 6.

Table 6: Brief description of representative online publishing software.
4.2. Usability Comparison of Representative Online Publishing Software

Of the 30 subjects (10 males and 20 females) aged 22~29, 90% are students and 10% are salaried workers with more than 5 years of web experience. All subjects are requested to operate TinTint, Photobook, and Hypo so that learnability and the number of user errors can be analyzed. An analysis of variance (ANOVA) is first conducted on the typical tasks for learnability (Table 7).
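The one-way ANOVA used to compare the three packages on a typical task reduces to a ratio of between-group to within-group variance. The sketch below implements that F statistic in plain Python; the completion times are hypothetical, not the study's data:

```python
def one_way_anova(*groups):
    """Return the F statistic of a one-way ANOVA across the given groups."""
    k = len(groups)                              # number of groups (here 3 packages)
    n = sum(len(g) for g in groups)              # total observations
    grand_mean = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    # Between-group and within-group sums of squares
    ss_between = sum(len(g) * (m - grand_mean) ** 2 for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical completion times (seconds) for one typical task.
tintint = [30.1, 28.4, 31.2, 29.8]
photobook = [44.0, 46.3, 43.1, 45.5]
hypo = [35.2, 33.8, 36.4, 34.9]
print(f"F = {one_way_anova(tintint, photobook, hypo):.1f}")
```

A large F (relative to the critical value for 2 and 27 degrees of freedom in the study's 3-group, 30-subject design) indicates that at least one package differs, which is what motivates the multiple comparison test that follows.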

Table 7: Analysis of variance for learnability of the typical task.

From Table 7, all typical tasks except Storing present significant differences. Consequently, a multiple comparison test is further conducted (Table 8).
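The paper does not name the multiple comparison procedure it uses, so the sketch below stands in with pairwise Welch t statistics (to which, for example, a Bonferroni-corrected significance threshold could then be applied); the data are hypothetical:

```python
from itertools import combinations
from statistics import mean, variance

def pairwise_t(groups):
    """Welch t statistic for every pair of named groups."""
    results = {}
    for (name_a, a), (name_b, b) in combinations(groups.items(), 2):
        se = (variance(a) / len(a) + variance(b) / len(b)) ** 0.5  # standard error
        results[(name_a, name_b)] = (mean(a) - mean(b)) / se
    return results

# Hypothetical completion times (seconds) for one typical task.
times = {
    "TinTint": [30.1, 28.4, 31.2, 29.8],
    "Photobook": [44.0, 46.3, 43.1, 45.5],
    "Hypo": [35.2, 33.8, 36.4, 34.9],
}
for pair, t in pairwise_t(times).items():
    print(pair, f"t = {t:+.2f}")
```

A negative t here means the first package's mean time is lower, i.e., it was learned faster in this illustrative data.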

Table 8: Multiple comparison analysis regarding learnability of the typical task.

From the analyses of the typical tasks for learnability (Table 8), the online publishing software packages present a significant difference in inputting basic information of works: from Figure 1, TinTint requires the least time, followed by Hypo and Photobook. TinTint and Hypo do not differ significantly in appending pictures, while both differ significantly from Photobook, where TinTint = Hypo > Photobook. Hypo shows a significant difference from TinTint and Photobook in copying topics, while there is no significant difference between TinTint and Photobook. The three packages differ significantly in inserting pictures; Photobook requires the least learning time, followed by TinTint and Hypo. Photobook differs significantly from TinTint and Hypo in adjusting pictures, while TinTint and Hypo do not differ, where TinTint = Hypo > Photobook. Hypo differs significantly from TinTint and Photobook in changing type version, while TinTint and Photobook do not differ, where Hypo > TinTint = Photobook. Hypo differs significantly from TinTint and Photobook in inputting words, with no significant difference between TinTint and Photobook, so Hypo > TinTint = Photobook. TinTint differs significantly from Hypo and Photobook in changing pages, with no significant difference between Hypo and Photobook, where Photobook = Hypo > TinTint. Finally, Photobook differs significantly from TinTint and Hypo in previewing, with no significant difference between TinTint and Hypo, where TinTint = Hypo > Photobook.

Figure 1: Learnability comparison of the online publishing software.

For the number of user errors of the online publishing software, the ANOVA results are shown in Table 9.

Table 9: Analysis of variance for the number of user errors of the typical task.

Following the ANOVA, the typical tasks with significant differences in the number of user errors are further examined with a multiple comparison test (Table 10).

Table 10: Multiple comparison analysis regarding the number of user errors of the typical task.

The ANOVA results are further explained with Figure 2. Photobook differs significantly from TinTint and Hypo in inputting basic information of works, appending pictures, and adjusting pictures, with no significant difference between TinTint and Hypo, where TinTint = Hypo > Photobook. The three packages differ significantly in changing type version; Hypo has the fewest user errors, followed by TinTint and Photobook, so Hypo > TinTint > Photobook. TinTint differs significantly from Photobook and Hypo in changing pages, with no significant difference between Photobook and Hypo, where Photobook = Hypo > TinTint.

Figure 2: Number of user errors comparison of the representative online publishing software.
4.3. Comparison of Subjective Satisfaction with Online Publishing Software

The descriptive statistics and ANOVA of overall Subjective Satisfaction are shown in Tables 11 and 12.

Table 11: Descriptive statistics of overall Subjective Satisfaction.
Table 12: ANOVA of overall Subjective Satisfaction.

From Table 12, the representative software packages present a significant difference in overall subjective satisfaction. Consequently, a multiple comparison analysis is further conducted (Table 13). From Table 13, there is no significant difference between TinTint and Hypo, while Photobook differs significantly from both TinTint and Hypo.

Table 13: Multiple comparison analysis regarding overall Subjective Satisfaction of the typical task.

5. Conclusions

The Focus Group method is used in this study to screen the representative online publishing software (TinTint, Photobook, and Hypo) as the test software. Usability evaluation by performance measurement is applied to measure the subjects' learnability and number of user errors with the online publishing software, and the subjects' subjective satisfaction with the software is surveyed. The results show that the typical tasks for learnability, except Storing, reveal significant differences. For the number of user errors, inputting basic information of works, appending pictures, adjusting pictures, changing type version, and changing pages present significant differences, while the remaining tasks do not reach statistical significance. Regarding overall subjective satisfaction with the interfaces, TinTint and Hypo outperform Photobook, with no significant difference between TinTint and Hypo. The research results are expected to serve as a reference and evaluation basis for relevant digital industries, and the research model is expected to be broadly applicable to usability evaluation of digital content in other areas of Taiwan.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors would like to thank the National Science Council of the Republic of China for financially supporting this research under Contract no. NSC 101-2410-H-151-021.

References

  1. J. Nielsen, Usability Engineering, Academic Press, Boston, Mass, USA, 1993.
  2. J. S. Dumas and J. C. Redish, A Practical Guide to Usability Testing, International Specialized Book Services, Portland, Ore, USA, 1999.
  3. L. Bálint, “Adaptive human-computer interfaces for man-machine interaction in computer-integrated systems,” Computer Integrated Manufacturing Systems, vol. 8, no. 2, pp. 133–142, 1995.
  4. A. B. Barreto, J. A. Jacko, and P. Hugh, “Impact of spatial auditory feedback on the efficiency of iconic human-computer interfaces under conditions of visual impairment,” International Journal of Medical Informatics, vol. 75, no. 3-4, pp. 335–342, 2005.
  5. R. Michalski, J. Grobelny, and W. Karwowski, “The effects of graphical interface design characteristics on human-computer interaction task efficiency,” International Journal of Industrial Ergonomics, vol. 36, no. 11, pp. 959–977, 2006.
  6. Z. Wang, W. P. He, D. H. Zhang, H. M. Cai, and S. H. Yu, “Creative design research of product appearance based on human-machine interaction and interface,” Journal of Materials Processing Technology, vol. 129, no. 1–3, pp. 545–550, 2002.
  7. F.-G. Wu, C.-Y. Chen, Y.-J. Lee, and R. Chen, “Effects of color sample display and color sample grouping on screen layout usability for customized product color selection,” Computers in Human Behavior, vol. 26, no. 1, pp. 51–60, 2010.
  8. Y.-J. Lee and C.-J. Lin, “Usability evaluation of the online publishing software applied to web-to-print: using Tintint website in Taiwan as an example,” Applied Mechanics and Materials, vol. 145, pp. 420–424, 2012.
  9. N. Bevan, “Measuring usability as quality of use,” Software Quality Journal, vol. 4, no. 2, pp. 115–130, 1995.
  10. P. A. Chalmers, “User interface improvements in computer-assisted instruction, the challenge,” Computers in Human Behavior, vol. 16, no. 5, pp. 507–517, 2000.
  11. G. Marchionini and X. Mu, “User studies informing E-table interfaces,” Information Processing and Management, vol. 39, no. 4, pp. 561–579, 2003.
  12. F. Bruno and M. Muzzupappa, “Product interface design: a participatory approach based on virtual reality,” International Journal of Human Computer Studies, vol. 68, no. 5, pp. 254–269, 2010.
  13. J. P. Chin, V. A. Diehl, and K. L. Norman, “Development of an instrument measuring user satisfaction of the human-computer interface,” in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '88), pp. 213–218, 1988.
  14. G. A. Miller, “The magical number seven, plus or minus two: some limits on our capacity for processing information,” Psychological Review, vol. 63, no. 2, pp. 81–97, 1956.