Education Research International
Volume 2014 (2014), Article ID 731720, 11 pages
Research Article

Estimating Students’ Satisfaction with Web Based Learning System in Blended Learning Environment

Sanja Bauk,1 Snežana Šćepanović,2 and Michael Kopp3

1University of Montenegro, Dobrota 36, 85330 Kotor, Montenegro
2“Mediterranean” University, Vaka Đurovića bb, 81000 Podgorica, Montenegro
3University of Graz, Liebiggasse 9/II, 8010 Graz, Austria

Received 4 December 2013; Accepted 15 March 2014; Published 22 April 2014

Academic Editor: Yi-Shun Wang

Copyright © 2014 Sanja Bauk et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Blended learning has become the most popular educational model that universities apply for teaching and learning. This model combines online and face-to-face learning environments in order to enhance learning through the implementation of new web technologies and tools in the learning process. In this paper, the principles of the DeLone and McLean information system success model are applied to the Kano two-dimensional model in order to categorize quality attributes related to the satisfaction of students with the web based learning system used in a blended learning model. Survey results were obtained among the students at “Mediterranean” University in Montenegro. The (dys)functional dimensions of the Kano model, including the Kano basic matrix for assessing the degree of students’ satisfaction, are considered in more detail through corresponding numerical, graphical, and statistical analyses.

1. Introduction

Facing many rapid changes and challenges brought by new technologies and competitive pressure, higher education institutions are trying to innovate their service and raise their public reputation. Education is undergoing a dramatic transformation. Technology plays a powerful role in the life of today’s students and institutions can no longer meet their needs through classroom-based instruction alone. Higher education institutions are increasingly focusing on determining the right model to integrate technologies in teaching and learning in order to fulfill students’ needs and provide education and skills needed for the future society.

Blended learning is one way in which institutions can prepare themselves for the next era in education [1, 2]. It offers new opportunities for combining face-to-face and online teaching and learning. This includes different learning or instructional methods (lecture, discussion, guided practice, reading, games, case study, and simulation), different delivery methods (live classroom or computer mediated), different scheduling (synchronous or asynchronous), and different levels of guidance (individual, instructor or expert led, or group/social learning).

There are many definitions of blended learning and yet no single accepted one. Within the scope of this study, we refer to blended learning as a formal education program in which a student learns at least in part through online learning, with some element of student control over time, place, path, and/or pace [3].

Measuring student satisfaction with web based learning systems has been an important issue for researchers and academia. At the Americas Conference on Information Systems (AMCIS) as early as 2001, e-learning was identified as one of the nine metatracks for the information systems (IS) discipline, and multiple studies in both the education and the IS literature measure student satisfaction with online courses [4]. Research shows that perceived usability, value, and quality are critical factors that affect user satisfaction with e-learning systems [5, 6]. However, there are insufficient studies investigating students’ satisfaction with web based learning systems used to support teaching and learning in a blended environment. Clearly, understanding the factors influencing students’ satisfaction with the online component of blended learning is a critical issue. Given the role of information and system design in online customer satisfaction, the study by McKinney et al. [7] synthesized IS research on users’ satisfaction with marketing research on customer satisfaction to gain insight into web based system satisfaction. Similarly, this study draws on both IS and marketing research to examine the factors that contribute to the benefits of web based learning systems.

This research paper is organized into five parts. The first examines the literature and discusses models of IS success relating to user satisfaction. The second gives a theoretical overview of the considered problem and refers to the appropriate literature sources. The third and fourth parts describe our study and the method of data analysis, along with a discussion of the obtained results. The fifth part concludes the paper and presents directions for future work in this domain.

2. Theoretical Background

Satisfaction of the users of computer based and information systems is very important for the developers and administrators of these systems, since the success of computer based systems is generally associated with users’ satisfaction [8, 9]. Regarding information systems quality and usability, there are international standards such as ISO 9241-11, which explains that information should be retrieved in a way that satisfies standard measures of user performance and satisfaction. In the case of information technology systems, satisfaction is an outcome of a function or an interaction occurring when the results fit a person’s expectations, or it is a function of how well a product fits his/her requirements or solutions within an acceptable range [10]. Satisfaction can also be defined as achieving success in the designated tasks [11, 12].

Constructing theory and measurement methods for user satisfaction has been investigated by researchers, and these efforts have resulted in several models showing the components of users’ satisfaction [13–15]. The end-user computing satisfaction model [16, 17] is specified for information systems with five subcategories: content, accuracy, format, ease of use, and timeliness. Additionally, DeLone and McLean [18] proposed a generic model for information systems in order to understand system success relating to user satisfaction, with the following six components: systems quality, service quality, information quality, use, user satisfaction, and net benefits. Another analysis was done by Ozkan and Koseler [19], who proposed a conceptual hexagonal e-learning assessment model, suggesting a multidimensional approach for learning management system (LMS) evaluation in six dimensions: system quality, service quality, content quality, learning perspective, instructor attitudes, and supportive issues. The exploratory factor analysis they conducted showed that each of the six dimensions of the proposed model had a significant effect on learners’ perceived satisfaction. Lee et al. [20] analyzed learners’ acceptance of the e-learning system through four independent variables: instructor characteristics, teaching materials, design of learning contents, and playfulness; two belief variables: perceived usefulness and perceived ease of use; and one dependent variable: intention to use e-learning. They confirmed several hypotheses within the researched field but noted that their study has certain limitations and that larger, cross-cultural studies are required within this ever-growing area of this novel learning channel.

Considering e-learning systems as part of information systems, there are also studies that measure and model user satisfaction with e-learning systems. For example, Matsatsinis et al. [21] proposed a multicriteria model to evaluate users’ satisfaction with an e-learning program, using linear programming to measure a satisfaction index and to compute criteria weights. Since DeLone and McLean (D&M) developed their model of IS success, there has been much research on the topic of success, as well as extensions and tests of their model. In her study, Lee-Post [22] interpreted the DeLone and McLean success model through an e-learning success model, stating the related metrics of the model as in Figure 1.

Figure 1: DeLone and McLean IS success model with e-learning success metrics.

Extensive and valuable analyses in the domain of determining users’ satisfaction with web based learning systems, based on similar dimension-category models, have been done by Wang [23] and Shee and Wang [24]. Also, a number of studies have used the Kano two-way quality model to measure users’ satisfaction with e-learning systems [25]. A review of previous applications of the Kano model in estimating e-learners’ satisfaction with e-learning courses is given in [26].

As in the above examples, there are research studies trying to establish a model to determine success metrics for e-learning related to satisfaction of use. In those models, satisfaction is considered a function of the interaction between users and the system or the services provided via it. End results and outcomes fitting user expectations and requirements are defined as the criteria of success. There are few research studies that clearly identify users’ satisfaction with web based systems, and no model shows the role of students’ satisfaction with the web based system in blended learning success models. Hence, educational institutions and policy makers should consider students’ satisfaction within this context in more detail, in order to succeed in their activities and operations.

3. Kano Model

In the past, customer satisfaction has been perceived in one-dimensional terms: the greater the fulfillment of desired quality attributes, the higher the customer satisfaction. However, there are some quality attributes that fulfill individual customer expectations to a great extent without necessarily implying a higher level of customer satisfaction [27]. Several studies have therefore attempted to link the physical and psychological aspects of quality to see how specific attributes of a product or service actually relate to customer satisfaction or dissatisfaction, where the physical aspect is concerned with the physical state or extent of the specific attributes, and the psychological aspect is related to the customer’s subjective response in terms of personal satisfaction [28]. Similarly, Kano [29] considered two aspects of any given quality attribute: an objective aspect involving the fulfillment of quality and a subjective aspect involving the customers’ perception of satisfaction. 
Using this model, quality attributes are classified into six categories (the first four of them are shown in Figure 2):
(i) attractive quality attribute (A): an attribute that gives satisfaction if present but that produces no dissatisfaction if absent;
(ii) one-dimensional quality attribute (O): an attribute that is positively and linearly related to customer satisfaction—that is, the greater the degree of fulfillment of the attribute, the greater the degree of customer satisfaction;
(iii) must-be quality attribute (M): the presence of these product/service attributes will not increase customers’ satisfaction level significantly, while their absence will cause extreme dissatisfaction;
(iv) indifferent quality attribute (I): an attribute whose presence or absence does not cause any satisfaction or dissatisfaction to customers;
(v) reverse quality attribute (R): an attribute whose presence causes customer dissatisfaction, and whose absence results in customer satisfaction;
(vi) questionable quality attribute (Q): it is not clear whether customers expect these attributes, since they gave unusable responses due to misunderstanding the questions in the survey or making an error when filling out the questionnaire.

Figure 2: Kano 2D graph with functional and dysfunctional dimensions.

It is critical to identify must-be quality attributes and to meet demand for these at least at a minimum threshold level. Universities must also do their best on the one-dimensional attributes, which are typically articulated by customers as functionality they desire. The attractive quality attributes can be selected as competitive weapons to draw the attention of students, especially new ones [30].

4. Research Methodology

Within the conducted study, a Kano model questionnaire was used to understand students’ satisfaction with the web based learning system. In order to define quality attributes for the Kano model, five quality components of the DeLone and McLean (D&M) model have been used (Table 1). Questions have been created for each quality attribute of the web based learning system: systems quality, service quality, information quality, use, and net benefits.

Table 1: Kano model quality attributes questionnaire defined by the D&M model.

The respondents have been asked about their attitude toward the functional and dysfunctional dimensions of each web based e-learning system quality attribute. The offered answers in both cases, in accordance with the Kano model, are as follows: I like it; It must be that way; I am neutral; I can tolerate it; or I dislike it. The respondents have to choose one of the offered options (answers) for both the functional and the dysfunctional dimension of the question. From the chosen pairs, the reviewers may get an overview of students’ satisfaction with the web based learning system quality attributes.
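The paired answer structure described above can be sketched as follows; this is a minimal illustration, and the class and field names (`Response`, `functional`, `dysfunctional`) are hypothetical, not taken from the paper’s instrument:

```python
from dataclasses import dataclass

# The five answer options offered for BOTH dimensions of each question.
ANSWERS = ["I like it", "It must be that way", "I am neutral",
           "I can tolerate it", "I dislike it"]

@dataclass
class Response:
    question: str       # e.g. "Q4: presence of audio/video recordings"
    functional: str     # answer when the feature IS present
    dysfunctional: str  # answer when the feature is ABSENT

# One respondent's answer pair for one quality attribute:
r = Response("Q4: presence of audio/video recordings",
             functional="I like it",
             dysfunctional="I dislike it")
assert r.functional in ANSWERS and r.dysfunctional in ANSWERS
```

Each valid respondent thus contributes one such pair per question, which is later mapped to a Kano category.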

The sampled population consisted of 115 students at the University “Mediterranean” (Montenegro). Among the interview results, 63 were valid; that is, these students understood the basic principle of the applied model and its propositions, providing the researchers with proper answers. More precisely, 52 students did not understand that they had to answer questions about both the functional and the dysfunctional components of the considered blended learning system features, or they were not motivated enough to do so.

The following section provides some more details about the blended learning model at University “Mediterranean”, where the described analysis has been done.

5. Blended Learning at University “Mediterranean” (Montenegro)

University “Mediterranean” (UNIM) is the first private university in Montenegro, established in 2006. It consists of six faculties located in three different cities in Montenegro. A major element of UNIM’s distinction relates to learning and teaching. During the early stages of the development of blended learning, a plan for the utilization of learning technology at UNIM was created. The heart of the learning model within UNIM is the method of optimally mixing ICT-based and human tuition within web based learning systems. The university has positioned itself as a traditional university with a strong emphasis on online approaches. Five faculties established and started to use blended learning in about 60% of their courses, while the Faculty of Information Technology established this learning model in all courses (100%). In addition, UNIM has participated in a number of EU e-learning projects, which brought important progress in practice. UNIM is making significant efforts to improve the quality of the teaching process in both traditional and distance learning.

Across the university, students can study modules by face-to-face tuition or using video conferencing, both asynchronous and synchronous. Lectures and seminars are backed up by the use of a virtual learning environment (Moodle). The use of videoconferencing in particular enables small groups of students from remote locations to join together to form a single cohort for a module. This strategy enables students to undertake university study while being based in their own cities.

Depending on the particular subject, lecturers can create different assignment types in the web based system with the support of Moodle tools for e-learning. Additionally, students can use the web based learning system for discussion (forum) and communication with other students and lecturers (chat, message, and e-mail), and they can create their own virtual communities of interest. This mixture of pedagogies characterizes UNIM’s approach to blended learning. These pedagogies are not without challenge. This may be because cohort sizes are a disincentive for a blended approach or because the subject requires a face-to-face experience. For example, many elements of engineering or the visual arts would fall into this category, as students require physical access to facilities and equipment or need to paint in a studio.

6. Research Results and Analysis

The following subsections present the results obtained by the analysis of:
(i) frequencies of appearance of certain Kano categories in the set of responses;
(ii) customers’ (here students’) (dis)satisfaction indexes;
(iii) two-dimensional (linear) graphical schemes;
(iv) some basic statistical parameters.

7. Evaluation according to the Frequencies of Kano Categories Appearance

By analyzing the results of the survey conducted among the students who have used web based learning system in blended model at the University “Mediterranean” (Montenegro), the following has been noticed.

(i) The indifferent (I) category of the applied Kano model has the greatest frequency of appearance among all the categories in nine out of the ten offered questions! Seen through the Kano model, this means that customers, here students as e-learners, do not care about these features either way. How can this be explained? It may be that most of the responding students are not interested in the e-learning system, or that it was difficult for them to be “consistent” in answering both the functional and the dysfunctional dimensions of the e-learning system at the same time, so the easiest option was to be “indifferent.” Or they just wanted to fill in the “form” by answering the questions, without thinking deeply about the questions and the scope of the interview. In our further analysis we have therefore ignored the “indifferent” answers for the questions where they are present in the greatest number (these numbers are put in brackets in Table 2) and focused on the second and/or third most frequent answers as rather indicative ones. The answers in cases Q5, Q6, Q7, and Q9, which are marked with * in Table 2 (collaborative activities; self-evaluation possibilities; mandatory exercises, tests, essays, etc.; and availability of an e-tutor), can be treated as exceptions. Namely, it makes sense that students are indifferent about collaborative activities within the e-learning platform, since they have many other possibilities to collaborate through different social networks (e.g., Facebook). Additionally, students are not usually aware of the importance of self-evaluation possibilities in making their learning easier and more interesting, so it is reasonable that they do not care about this feature. However, teachers should explain to them the benefits of the self-evaluation process and “convince” them to treat this category as a more important one.
Further, students usually do not like obligations such as mandatory exercises, tests, essays, and so forth. And, finally, regarding the availability of an e-tutor, it should be emphasized that most students are familiar with contemporary ICT and consequently do not have special requirements for an e-tutor.

Table 2: Kano model quality attributes questionnaire defined by the DeLone and McLean model.

(ii) A number of “questionable” answers were also present in three cases (Q2, Q5, and Q7), so they have been neglected (symbolically, by putting them into brackets; see Table 2), and the accent was placed on the next greatest numbers related to the other, more relevant categories within the considered context. This can again be treated as a result of some students’ lack of understanding of the basic principle of the questionnaire. Hence, we focus on those answers which can be treated as more valid and relevant and ignore those which have no importance for planning an attractive e-learning system in a blended model according to learners’ (reasonable) wishes and expectations. Sometimes students are not aware of what is indeed useful for them, and it is the obligation of e-learning system designers, teachers, and e-tutors to find the optimal solution(s). However, the judgments and feelings of the students should not be neglected.

In order to better explain the meaning of the marked (bold) first, second, or third greatest frequency numbers among the Kano categories for each question corresponding to a certain e-learning system dimension/feature, it is worth recalling the meaning of “must-be” and “one-dimensional”.

(i)  Must-be (M) means that customers, here e-learners, consider these requirements to be basic factors; thus, their presence will not increase the satisfaction level significantly, while their absence will cause extreme dissatisfaction. In the survey conducted here, after the approximations explained above, technical stability/reliability of the web based e-learning system, presence of audio/video recordings, and collaborative activities fall within this category.

(ii)  One-dimensional (O) means that these factors cause satisfaction if their performance is high and dissatisfaction if their performance is low. These attributes are linear and symmetric because they are typically customers’ (here e-learners’) explicit needs and desires. Within this survey, and taking into account certain approximations, self-evaluation capacities; mandatory tests, exercises, essays, and so forth; blended learning possibilities; presence of e-tutor(s); and access to the system at any time belong to the one-dimensional category.

Concerning the system dimensions “user-friendly interface” and quality/quantity of the available instructional materials, it can be noticed that the frequencies of (M) and (O) are the same. Bearing in mind that (M) is stronger due to the hierarchical rule of category importance (i.e., M > O > A > I) [26], the must-be (M) category should be assigned as the preferable one. It is important to note that such an evaluation of the e-learners’ responses to the questionnaire is rather fuzzy, particularly since in most cases the second, or even the third, score in the series of frequencies, starting with the greatest one, has been taken as the referral. The above results are obtained on the basis of the Kano evaluation table as modified by Fred Pouliot [31]. The categories which are changed in comparison to the original Kano functional-dysfunctional matrix are marked (bold) in Table 3. In fact, Pouliot changed only two values, (2,2) and (4,4), replacing indifferent (I) with questionable (Q) categories in comparison to the basic Kano model. Detailed explanations of these two replacements are given in Walden’s [31] work. Simply, the pairs of students’ (here e-learners’) responses are “overlaid” on this reference matrix (Table 3), generated from the (slightly modified) Kano view, and the scores are acquired for each respondent and each question related to a certain blended/e-learning system feature.

Table 3: Kano modified evaluation model with reversals.
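The scoring procedure described above can be sketched in code. The matrix below reproduces the standard Kano evaluation table with the two replacements at (2,2) and (4,4) described in the text; the function and variable names are illustrative, and the sample answer pairs are invented for demonstration, not taken from the survey:

```python
from collections import Counter

# Answer order used for both dimensions of each question.
ANSWERS = ["like", "must-be", "neutral", "live-with", "dislike"]

# Rows: functional answer; columns: dysfunctional answer.
# Cells (2,2) and (4,4) are Q instead of I, per the modification in Table 3.
TABLE = [
    ["Q", "A", "A", "A", "O"],
    ["R", "Q", "I", "I", "M"],
    ["R", "I", "I", "I", "M"],
    ["R", "I", "I", "Q", "M"],
    ["R", "R", "R", "R", "Q"],
]

def classify(functional: str, dysfunctional: str) -> str:
    """Map one (functional, dysfunctional) answer pair to a Kano category."""
    return TABLE[ANSWERS.index(functional)][ANSWERS.index(dysfunctional)]

# Tally category frequencies across respondents for one question:
answers_q4 = [("like", "dislike"), ("must-be", "dislike"), ("like", "neutral")]
freq = Counter(classify(f, d) for f, d in answers_q4)
# freq == Counter({"O": 1, "M": 1, "A": 1})
```

Running this tally per question and taking the most frequent category (after discarding I and Q where appropriate) reproduces the frequency-based evaluation described in this section.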

8. Evaluation according to Customers’ Satisfaction Indexes

Since the results of the analysis in the previous case are fuzzy, we make here an effort to “sharpen” them slightly through further analysis based on the Berger et al. (1993) model (see p. 92 [26] and p. 17 [31]). Namely, instead of considering must-be (M), one-dimensional (O), and attractive (A) features separately, the customers’ responses are reduced here to two numbers: a positive number that is the relative value of meeting the customer requirement (versus the competition) and a negative number that is the relative cost of not meeting the customer requirement. These numbers are labeled as “better” (1) and “worse” (2) indexes and are calculated in the following way:

better = (A + O) / (A + O + M + I),  (1)
worse = −(O + M) / (A + O + M + I),  (2)

where A, O, M, and I denote the numbers of responses falling into the attractive, one-dimensional, must-be, and indifferent categories, respectively.

Better (or satisfaction index) indicates how much customer satisfaction is increased by providing a certain feature of the system which is intended to be developed, while worse (or dissatisfaction index) indicates how much customer satisfaction is decreased by not providing the feature. More precisely, positive better numbers indicate the situation where, on average, customer satisfaction will be increased by providing attractive and one-dimensional elements. Negative worse numbers indicate the situation where customer satisfaction will be decreased if one-dimensional and must-be elements are not included in the ex-ante blended/e-learning system which designers, teachers, e-tutors, and so forth intend to develop to meet the learners’ (customers’) expectations.
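A minimal sketch of the index computation, assuming the Berger et al. formulas given above; the category counts here are illustrative, not the paper’s data:

```python
def better_worse(counts: dict) -> tuple:
    """Berger et al. (dis)satisfaction indexes from Kano category counts.

    counts: numbers of A, O, M, and I responses for one question
    (Q and R responses are excluded, as in the analysis above).
    """
    total = counts["A"] + counts["O"] + counts["M"] + counts["I"]
    better = (counts["A"] + counts["O"]) / total       # satisfaction index
    worse = -(counts["O"] + counts["M"]) / total       # dissatisfaction index
    return better, worse

# Illustrative counts for one hypothetical question:
b, w = better_worse({"A": 10, "O": 20, "M": 15, "I": 5})
# b = 0.6, w = -0.7
```

A better value near 1 means providing the feature strongly raises satisfaction; a worse value near −1 means omitting it strongly lowers satisfaction.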

Now let us consider the results of the survey conducted here in the light of these two coefficients and try to create a more specific picture of the customers’ expectations. The better and worse indexes are calculated and presented in Table 4.

Table 4: Satisfaction (better) and dissatisfaction (worse) indexes.

By analyzing the results of the survey on the basis of the previously described model, the following points can be derived from the positive indexes.
(i) Presence of audio/video recordings seems very important for the customers; that is, it implies a must-be requirement. Its absence will consequently cause great dissatisfaction (the better index is the largest for Q4).
(ii) Collaborative activities, quality/quantity of instructional materials, and a user-friendly environment (Q5, Q3, and Q2) have large better indexes, which means that their absence will also cause dissatisfaction among the users.
(iii) To the availability of access to the system at any time, as well as technical stability/reliability of the system (Q10 and Q1), the customers did not give high scores. This can be explained as something that they take for granted a priori. In other words, it is quite normal for them that these two conditions are present, so they do not think these require special concern. However, this statement should be taken with a certain dose of reserve.
(iv) Presence of e-tutor(s) is considered unimportant by the students (the smallest value of the better index, for Q9). This could be explained by the fact that students are sufficiently familiar with information systems and do not need an e-tutor.

Now, taking into consideration the negative indexes, the following can be observed.
(i) Absence of audio/video instructional materials causes dissatisfaction among the customers (the absolute value of the worse index is the largest for Q4). This is completely in accordance with the previous statements about this feature.
(ii) Absence of e-learning system stability/reliability will also imply great customer dissatisfaction. This is logical, even though it is not completely in accordance with the previous customers’ judgments about this feature.
(iii) The requirement that causes the lowest degree of dissatisfaction among users is not providing a user-friendly environment (the absolute value of the worse index is the smallest for Q2). It can be concluded that its presence is convenient, but its absence will not cause excessive dissatisfaction.
(iv) The levels of dissatisfaction which can be caused by the absence of the remaining features are of rather equal level, which implies that their absence will not extremely affect the customers’ needs.

Because of the slight fuzziness in the above analysis (based on (dis)satisfaction indexes) and in the statements given in the previous subsection (based on frequencies of category appearances), a third assessment method, based on graphical analysis of the survey results, is considered in the next part of the paper.

9. Graphical Analysis of the Survey Results

The graphical analysis implies that there are pairs of answers for each question Qi (i = 1, …, 10) and each respondent j (j = 1, …, n). In accordance with the Kano model, there are two basic scores for each potential customer requirement being investigated: a functional and a dysfunctional one. These two scores can be coded as follows [31]:
(i) functional: −2 (dislike), −1 (live with), 0 (neutral), 2 (must-be), and 4 (like);
(ii) dysfunctional: −2 (like), −1 (must-be), 0 (neutral), 2 (live with), and 4 (dislike).
Since each answer of the respondents (here students) has been assigned an appropriate numerical value, it is possible to calculate average values for the functional (x̄_i) and dysfunctional (ȳ_i) dimensions of the answers in the following manner:

x̄_i = (1/n) Σ_j f_ij,  ȳ_i = (1/n) Σ_j d_ij,  (3)

where f_ij and d_ij are the coded functional and dysfunctional answers of respondent j to question Qi.

These average pairs of values can be plotted in a two-dimensional coordinate system with four quadrants representing the key categories of the Kano model: attractive, one-dimensional, indifferent, and must-be (as in Figure 2). For the purpose of this research, based on the collected students’ answers, we take into consideration only the must-be and like functional dimensions and the live with and dislike dysfunctional dimensions. Since the neutral category implies weighting responses with a zero value, it has in fact no impact on the total score and the considered average values. Questionable and reversal answers were ignored too. Thus, all average values are in the positive quadrants (between 0 and 4 on the x- and y-axes), and they are given as points in Figure 3.
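The averaging and quadrant assignment described above can be sketched as follows. The coding follows the scheme given earlier in this section; the quadrant midpoint of 2 and all names are illustrative assumptions, as is the orientation (x = functional, y = dysfunctional):

```python
# Coded answer values for the functional and dysfunctional dimensions.
FUNCTIONAL = {"dislike": -2, "live-with": -1, "neutral": 0, "must-be": 2, "like": 4}
DYSFUNCTIONAL = {"like": -2, "must-be": -1, "neutral": 0, "live-with": 2, "dislike": 4}

def averages(pairs):
    """Average functional (x) and dysfunctional (y) scores for one question."""
    n = len(pairs)
    x = sum(FUNCTIONAL[f] for f, _ in pairs) / n
    y = sum(DYSFUNCTIONAL[d] for _, d in pairs) / n
    return x, y

def quadrant(x, y, mid=2.0):
    """Assign a point to one of the four Kano quadrants (midpoint assumed at 2)."""
    if x >= mid and y >= mid:
        return "one-dimensional"   # presence liked, absence disliked
    if x >= mid:
        return "attractive"        # presence liked, absence tolerated
    if y >= mid:
        return "must-be"           # presence expected, absence disliked
    return "indifferent"

x, y = averages([("like", "dislike"), ("must-be", "live-with")])
# x = 3.0, y = 3.0 -> "one-dimensional"
```

Applying this to each question’s answer pairs yields the ten plotted points of Figure 3.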

Figure 3: Plots of average functional and dysfunctional points for the questions (Q1–Q10).

On the basis of the plots in Figure 3, it is obvious that most of the average values are in the indifferent quadrant, which corresponds with the analysis based on the greatest frequencies of appearance of certain answers. It is understandable that the respondents (students) are indifferent toward obligatory exercises, tests, essays, and so forth (Q7), because they usually do not like them. Similarly, since students are commonly familiar with information and communication technologies, it sounds reasonable that they are indifferent regarding the availability of e-tutors (Q9). It also makes sense that respondents are indifferent toward the existence of collaborative activities within the blended/e-learning system, since such activities are available within different social networks (Q5), and social networks might in a way be more comfortable for collaborative activities than a conventional e-learning system (see Figure 3).

However, some interventions should be made by the evaluators and later planners of a better system; for example, the questions of optimal quality and/or quantity of available e-instructional materials (Q3) and the presence of a user-friendly environment (Q2) should be “shifted” into the attractive quadrant, as symbolically shown in Figure 4 (dashed line). With better instructional materials and a user-friendly environment, the system will be more competitive on the e-learning market within a blended learning environment.

Figure 4: Repositioning the plots of average functional and dysfunctional points for the questions Q2, Q3, Q4, Q6, and Q8.

The average value corresponding to the answers to the question on the presence of audio and video materials besides more traditional textual ones (Q4) lies on the line between the indifferent and must-be zones, and it would be more logical, from the researchers’ and system creators’ point of view, to move it to the must-be zone. Technical stability of the system, represented by point 1 in Figure 3, is in the must-be zone, which means that e-learners are more dissatisfied when the system has lower technical stability; however, their satisfaction never rises above neutral no matter how functional this feature of the system becomes. Point 10, corresponding to the question of accessibility of the system at any time (Q10), is in the one-dimensional zone. This means that more functionality of this feature leads to more students’ satisfaction.

Points 6 and 8, which correspond to the questions on self-evaluation possibilities within the system (Q6) and blended learning features (Q8), are rather fuzzy, and we consider that they should be shifted to the one-dimensional zone in future system planning in order to satisfy users’ objective needs to a greater extent. All of the shifts suggested by the authors are given in Figure 4 (dashed lines).
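The quadrant assignments discussed above can be sketched as a simple classifier. This is a minimal illustration, not the paper’s exact procedure: the 0.5 threshold and the normalization of the averaged functional and dysfunctional scores to the [0, 1] range are assumptions made for the sketch.

```python
def kano_quadrant(functional, dysfunctional, threshold=0.5):
    """Classify an averaged (functional, dysfunctional) score pair into
    one of the four Kano quadrants. Scores are assumed to be normalized
    to [0, 1]; the 0.5 threshold is an illustrative assumption."""
    if functional >= threshold and dysfunctional >= threshold:
        return "one-dimensional"  # more functionality -> more satisfaction
    if functional >= threshold:
        return "attractive"       # delights when present, tolerated when absent
    if dysfunctional >= threshold:
        return "must-be"          # expected; its absence causes dissatisfaction
    return "indifferent"          # presence or absence barely matters
```

Under these assumptions, a point such as Q10 (high on both axes) falls into the one-dimensional quadrant, while a point low on both axes, like Q7, is classified as indifferent.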

10. Some Statistical Refinements of the Data Analyzed by the Graphical Method

In the further analysis of the data set consisting of (x_i, y_i) pairs, where x_i and y_i are calculated by expressions (3) for i = 1, 2, …, 10, the following statistical values have been calculated: mean value, standard deviation (or variance), covariance, and correlation coefficient [32, 33]. The numerical values of these statistical measures are given in Table 5. The notation used is simplified, and the analyzed data sets (pairs) are denoted simply as x, y, and (x, y).

Table 5: Values of some statistical indicators.

Based upon the calculated values of the statistical measures (Table 5), the following can be observed.

(i) If we consider the mean value for the parent population, then it is the hypothetical “true” value of the variable. This means that Mean(x) and Mean(y) might be treated as a pair representing the “true” value of a general answer to all ten questions in the questionnaire. Consequently, the general answer is equivalent to the must-be category of the Kano model. This makes sense if we assume that the questionnaire was proposed by experienced researchers and staff at the universities in Montenegro, in consultation with the expert from the University of Graz. Admittedly, this pair does not lie in the lower right corner of the must-be quadrant of the Kano 2D graph, but it is within the must-be quadrant and should ultimately be taken as indicative.

(ii) The variances Var(x) and Var(y), as well as the covariance Cov(x, y), serve as intermediate quantities for determining the correlation coefficient Correl(x, y). The higher the absolute value of the correlation coefficient, the stronger the correlation.

(iii) The relatively high value of the correlation coefficient, that is, a coefficient of determination above 0.6, means that there is a strong correlation between the x and y variables. This is understandable for pairs of opposite (functional and dysfunctional) categories of the Kano model. What makes the correlation stronger is that neutral (indifferent), questionable, and reversal responses have been excluded from the graphical analysis. In other words, more than 60% of the total variation in y can be explained by the variations in x. Alternatively, the ellipse representing the correlation in this case should enclose more than 60% of the considered points, that is, the (x_i, y_i), i = 1, …, 10, pairs on which it is based [34].
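The statistical measures listed above can be computed from the (x, y) pairs as follows. This is a generic sketch using standard population formulas (divisor n); the paper does not state which divisor was used for Table 5, so that choice is an assumption.

```python
import math

def pearson_stats(pairs):
    """Compute means, variances, covariance, correlation coefficient,
    and coefficient of determination for a list of (x, y) pairs,
    using population formulas (divisor n)."""
    n = len(pairs)
    xs = [p[0] for p in pairs]
    ys = [p[1] for p in pairs]
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    var_x = sum((x - mean_x) ** 2 for x in xs) / n
    var_y = sum((y - mean_y) ** 2 for y in ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in pairs) / n
    r = cov / math.sqrt(var_x * var_y)  # Pearson correlation coefficient
    return {"mean_x": mean_x, "mean_y": mean_y,
            "var_x": var_x, "var_y": var_y,
            "cov": cov, "r": r, "r2": r ** 2}
```

The returned `r2` is the coefficient of determination: the fraction of the total variation in y explained by the variations in x.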

The short analysis given above of the numerical values of some relevant statistical measures provides a certain refinement of the observations made upon the graphical interpretation of the Kano model, which is based on plotting pairs of respondents’ quantified answers to both the functional and dysfunctional forms of the questions. These refinements will become more reliable by introducing a greater number of respondents, by having a greater number of questions in the questionnaire, or by enlarging the parent population in statistical terms, which should be the subject of further, more extensive research.

11. Conclusions

This study aims to identify critical elements of a web based learning system within a blended environment using the Kano (dys)functional model [31] and the DeLone and McLean [18] generic model for information systems success, thus providing recommendations for creating a better new teaching/learning system.

The population sampled was composed of students at University “Mediterranean” (in Montenegro). A total of 63 valid questionnaires were collected, a response rate of 55% relative to the total number of students interviewed. Firstly, the frequencies of appearance of each Kano model category were measured, and some approximations were made in order to make the responses more meaningful. Additional analysis based on the determination of “better” and “worse” indices was also carried out, with the aim of reducing the fuzziness in the observations as much as possible. Some two-dimensional graphical analyses were performed as well. These analyses resulted in “shifting” some points to other, more appropriate Kano categories or 2D graph quadrants, based on the researchers’ empirical point of view.
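The “better” and “worse” indices mentioned above can be sketched with the standard formulation from the Kano literature (see [31]); the exact variant used for the paper’s tables is not restated here, so this is an illustrative assumption. A = attractive, O = one-dimensional, M = must-be, I = indifferent; questionable and reversal answers are excluded, as in the paper’s analysis.

```python
def better_worse(counts):
    """Satisfaction ('better') and dissatisfaction ('worse') indices
    from Kano category frequencies, per the standard Kano-literature
    formulas: better = (A + O) / (A + O + M + I),
    worse = -(O + M) / (A + O + M + I)."""
    a, o, m, i = counts["A"], counts["O"], counts["M"], counts["I"]
    total = a + o + m + i
    better = (a + o) / total   # share of respondents whose satisfaction rises
    worse = -(o + m) / total   # share whose satisfaction falls (negative sign
                               # signals the dissatisfaction direction)
    return better, worse
```

For example, with hypothetical frequencies {"A": 10, "O": 20, "M": 15, "I": 5}, the indices are 0.6 and -0.7: the feature would raise satisfaction for 60% of respondents if present and lower it for 70% if absent.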

It should be noted that there is scattering among the obtained results and that this could be reduced by repeating the questionnaire among other, considerably larger target group(s) of students, modifying the questions, and/or including some additional questions in the model.

However, the designers of e-learning systems in a blended environment are recommended to combine different analytical and/or stochastic methods in assessing the degree of customers’ expectations and their level of satisfaction. A holistic approach based on users’ satisfaction level and the appropriate measurement analysis should support the designers in improving existing web based learning models and in designing new, more attractive ones within contemporary blended educational schemes.

Finally, speaking more generally, as a powerful communications and commerce medium, the Internet is a communication and IS phenomenon that lends itself to a measurement framework (e.g., the Kano and D&M models). Within the e-commerce context, the primary system users are customers or suppliers rather than internal users. Customers (students/learners) and suppliers (teachers/instructors) use the e-system for learning as well as for buying or selling learning courses. This might have positive impacts on individual learners, universities, and even national economies in the future.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.


References

  1. D. R. Garrison and H. Kanuka, “Blended learning: uncovering its transformative potential in higher education,” Internet and Higher Education, vol. 7, no. 2, pp. 95–105, 2004. View at Publisher · View at Google Scholar · View at Scopus
  2. R. Owston, D. York, and S. Murtha, “Student perceptions and achievement in a university blended learning strategic initiative,” The Internet and Higher Education, vol. 18, pp. 38–46, 2013. View at Publisher · View at Google Scholar
  3. Clayton Christensen Institute for Disruptive Innovation (2012-2013), “Blended Learning Model Definitions,”
  4. J. J. Summers, A. Waigandt, and T. A. Whittaker, “A comparison of student achievement and satisfaction in an online versus a traditional face-to-face statistics class,” Innovative Higher Education, vol. 29, no. 3, pp. 233–250, 2005. View at Publisher · View at Google Scholar · View at Scopus
  5. C.-M. Chiu, M.-H. Hsu, S.-Y. Sun, T.-C. Lin, and P.-C. Sun, “Usability, quality, value and e-learning continuance decisions,” Computers and Education, vol. 45, no. 4, pp. 399–416, 2005. View at Publisher · View at Google Scholar · View at Scopus
  6. P. B. Seddon, “A respecification and extension of the DeLone and McLean model of IS success,” Information Systems Research, vol. 8, no. 3, pp. 240–253, 1997. View at Google Scholar · View at Scopus
  7. V. McKinney, K. Yoon, and F. Zahedi, “The measurement of Web-customer satisfaction: an expectation and disconfirmation approach,” Information Systems Research, vol. 13, no. 3, pp. 296–315, 2002. View at Google Scholar · View at Scopus
  8. B. Ives, M. H. Olson, and J. J. Baroudi, “The measurement of user information satisfaction,” Communications of the ACM, vol. 26, no. 10, pp. 785–793, 1983. View at Publisher · View at Google Scholar · View at Scopus
  9. S. Muylle, R. Moenaert, and M. Despontin, “The conceptualization and empirical validation of web site user satisfaction,” Information and Management, vol. 41, no. 5, pp. 543–560, 2004. View at Publisher · View at Google Scholar · View at Scopus
  10. J. A. Tessier, W. W. Crouch, and P. Atherton, “New measures of user satisfaction with computer-based literature searches,” Special Libraries, vol. 68, pp. 383–389, 1977. View at Google Scholar
  11. R. Beeler, “The relationship of user fees and user satisfaction,” in Proceedings of the 3rd National Online Meeting, pp. 24–26, University of Technology Science, New York, NY, USA, 1981.
  12. G. Momenee, “Asking the right question: why not Info Track?” Research Strategies, vol. 5, pp. 186–190, 1987. View at Google Scholar
  13. M. Khalifa and L. Vanessa, “State of research on information system satisfaction,” 2004,
  14. R. Applegate, “Models of user satisfaction: understanding false positives,” RQ, vol. 32, no. 4, pp. 525–539, 1993. View at Google Scholar
  15. M. Paechter, B. Maier, and D. Macher, “Students' expectations of, and experiences in e-learning: their relation to learning achievements and course satisfaction,” Computers and Education, vol. 54, no. 1, pp. 222–229, 2010. View at Publisher · View at Google Scholar · View at Scopus
  16. W. J. Doll and G. Torkzadeh, “The measurement of end-user computing satisfaction,” MIS Quarterly, vol. 12, no. 2, pp. 259–273, 1988. View at Google Scholar · View at Scopus
  17. W. J. Doll, X. Deng, T. S. Raghunathan, G. Torkzadeh, and W. Xia, “The meaning and measurement of user satisfaction: a multigroup invariance analysis of the end-user computing satisfaction instrument,” Journal of Management Information Systems, vol. 21, no. 1, pp. 227–262, 2004. View at Google Scholar · View at Scopus
  18. W. H. DeLone and E. R. McLean, “The DeLone and McLean model of information systems success: a ten-year update,” Journal of Management Information Systems, vol. 19, no. 4, pp. 9–30, 2003. View at Google Scholar · View at Scopus
  19. S. Ozkan and R. Koseler, “Multi-dimensional students' evaluation of e-learning systems in the higher education context: an empirical investigation,” Computers and Education, vol. 53, no. 4, pp. 1285–1296, 2009. View at Publisher · View at Google Scholar · View at Scopus
  20. B.-C. Lee, J.-O. Yoon, and I. Lee, “Learners' acceptance of e-learning in South Korea: theories and results,” Computers and Education, vol. 53, no. 4, pp. 1320–1329, 2009. View at Publisher · View at Google Scholar · View at Scopus
  21. N. Matsatsinis, E. Grigoroudis, and P. Delias, “Customer satisfaction and e-learning systems: towards a multi-criteria evaluation methodology,” Operational Research, vol. 3, no. 3, pp. 249–259, 2003. View at Google Scholar
  22. A. Lee-Post, “E-learning success model: an information systems perspective,” Electronic Journal of E-Learning, vol. 7, no. 1, pp. 61–70, 2009. View at Google Scholar
  23. Y.-S. Wang, “Assessment of learner satisfaction with asynchronous electronic learning systems,” Information and Management, vol. 41, no. 1, pp. 75–86, 2003. View at Publisher · View at Google Scholar · View at Scopus
  24. D. Y. Shee and Y.-S. Wang, “Multi-criteria evaluation of the web-based e-learning system: a methodology based on learner satisfaction and its applications,” Computers and Education, vol. 50, no. 3, pp. 894–905, 2008. View at Publisher · View at Google Scholar · View at Scopus
  25. L. H. Chen and H. C. Lin, “Integrating Kano's model into E-learning satisfaction,” in Proceedings of the IEEE International Conference on Industrial Engineering and Engineering Management (IEEM '07), pp. 297–301, IEEE Engineering Management Society Singapore Center & IEEE Singapore Section, Singapore, December 2007. View at Publisher · View at Google Scholar · View at Scopus
  26. G. Dominici and F. Palumbo, “How to build an e-learning product: factors for student/customer satisfaction,” Business Horizons—Kelley School of Business, Indiana University, vol. 56, pp. 87–96, 2013. View at Google Scholar
  27. K. Matzler and H. H. Hinterhuber, “How to make product development projects more successful by integrating Kano's model of customer satisfaction into quality function deployment,” Technovation, vol. 18, no. 1, pp. 25–38, 1998. View at Google Scholar · View at Scopus
  28. S. J. Schvaneveldt, T. Enkawa, and M. Miyakawa, “Consumer evaluation perspectives of service quality: evaluation factors and two-way model of quality,” Total Quality Management, vol. 2, no. 2, pp. 149–161, 1991. View at Publisher · View at Google Scholar
  29. N. Kano, “Attractive quality and must be quality,” Hinshitsu (Quality), vol. 14, no. 2, pp. 147–156, 1984. View at Google Scholar
  30. B. L. Bayus, S. Jain, and A. G. Rao, “Too little, too early: introduction timing and new product performance in the personal digital assistant industry,” Journal of Marketing Research, vol. 34, pp. 50–63, 1997. View at Publisher · View at Google Scholar
  31. D. Walden, “A special issue on: Kano’s methods for understanding customer defined quality,” Center For Quality of Management Journal, vol. 2, no. 4, pp. 1–37, 1993. View at Google Scholar
  32. D. Bertsekas and J. Tsitsiklis, Introduction to Probability, Athena Scientific, Nashua, NH, USA, 2nd edition, 2008.
  33. K. Weltner, W. J. Weber, J. Grosjean, and P. Schuster, Mathematics for Physicists and Engineers (Fundamentals and Interactive Study Guide), Springer, Berlin, Germany, 2009.
  34. R. Taylor, “Interpretation of the correlation coefficient: a basic review,” Journal of Diagnostic Medical Sonography, vol. 6, no. 1, pp. 35–39, 1990. View at Google Scholar · View at Scopus