
An Adaptation and Validation of Students’ Satisfaction Scale: The Case of McGraw–Hill Education Connect

Adel A. Babtain

Education Research International, vol. 2021, Article ID 1936076, 2021. https://doi.org/10.1155/2021/1936076

Academic Editor: Yu-Min Wang
Received: 23 Apr 2021; Accepted: 19 Jun 2021; Published: 28 Jun 2021

Abstract

This study aims to adapt and validate an Arabic version of the students’ satisfaction scale. It measures students’ satisfaction with the McGraw–Hill Education (MGHE) Connect platform in Saudi Arabia and provides Saudi and Arab academics with a valid instrument for further studies and interventions to improve students’ learning and learning environments. The study examined items to establish content, construct, convergent, and discriminant validity. It used two samples of Chemistry 101 students across two phases (N = 50 and N = 193). In the pilot phase, exploratory factor analysis (EFA) with the maximum likelihood extraction method and Promax rotation was used to explore the survey’s constructs; it supported a five-factor structure. In the main phase, three competing construct models were investigated using confirmatory factor analysis (CFA). The model that fitted the study data and satisfied reliability and validity standards was a second-order model identifying two primary constructs distinctively: satisfaction (N = 3, α = 0.912) and utility (N = 19, α = 0.965). The utility scale was composed of four subscales: understanding (N = 5, α = 0.913), studying (N = 3, α = 0.896), preparation (N = 4, α = 0.893), and usability (N = 7, α = 0.913). The results indicated that students’ overall satisfaction with MGHE Connect was significantly above the neutral point (M = 3.52, SD = 0.176). Students were also significantly satisfied with the MGHE Connect utility (M = 3.51, SD = 0.221). The highest level of satisfaction was with understanding (M = 3.60, SD = 0.170), and the lowest was with preparation for classes (M = 3.23, SD = 0.259). Students were equally satisfied with using MGHE Connect to understand the material, to study and review for exams, and with its ease of use.

1. Introduction

Nowadays, blended learning (b-learning) has become simply “learning” and the new traditional approach among higher education institutions [1–3]. B-learning can be seen as a combination of traditional teaching and the e-learning environment, based on the principle that face-to-face and online activities are optimally integrated into a unique learning experience [4–7].

Moreover, educators’ calls, together with advancements in computer technology and the Internet, have led textbook publishers to incorporate more pedagogically oriented technological supplements [8]. Text technology supplements (TTSs) are specific technologies within the broader category of computer-assisted learning [8]. TTSs have become more prolific in higher education as complementary tools to assist student learning [9].

Many publishers and researchers have claimed that textbook supplement products improve learning efficiency, time management, in-class discussions, student engagement, personalized learning experiences, exam scores, course grades, and overall satisfaction with the course and coursework [2, 10–12]. However, others have disagreed [13–15]. Moreover, these systems provide just-in-time feedback to students and let instructors intervene at the right time to support students [16, 17]. McGraw–Hill Education (MGHE) Connect is one such supplement; it uses interactive learning technology to enable a more personalized learning experience by enhancing students’ engagement with the course content and learning activities [12].

Learners’ satisfaction with b-learning plays a crucial role in evaluating its effectiveness and measuring the quality of such programs [1, 2, 18–21]. Institutions implement b-learning to meet learners’ needs; thus, it is equally important to measure learners’ perceived satisfaction to determine a program’s effectiveness [2]. Evaluating learning effectiveness and learners’ satisfaction are interconnected [1, 22–24]. Learners’ satisfaction is positively correlated with the quality of learning outcomes, and studies have established a relationship between students’ perceived satisfaction, their learning environment, and their quality of learning [25–27]. Learners’ satisfaction is also critical to their continued use of blended learning [25]. That is why institutions involved in blended learning should be concerned with increasing learning satisfaction. Chen and Tat Yao concluded that it is essential to understand learners’ attitudes, perceptions, acceptance, and satisfaction to evaluate the success of technology-based instructional design [2].

Moreover, institutions can intentionally provide learning environments with appropriate supplements when the factors influencing students’ satisfaction are identified [25]. Understanding the factors influencing student satisfaction with blended learning can help in designing a learning environment and positively impact the student learning experience [25]. Standard measures of learners’ satisfaction in blended courses use students’ overall satisfaction with the experience, the perceived quality of teaching and learning, and the ease of use of technology [20, 21]. Although students’ satisfaction is not necessarily associated with achievement, satisfied students are more likely to accomplish their cognitive goals [27].

Although students’ satisfaction matters to institutions seeking to provide quality education, the field remains at a preliminary stage in which more valid and reliable instruments are needed [28]. There is also a need to understand more deeply the components of perceived satisfaction and the quality of blended learning [27]. Interventions that support students’ learning and are based on reliable data are critical [29]. Accurate data are necessary to support learning improvement and to measure progress toward the goal [29]. Survey results are usually used to make recommendations for curriculum interventions, faculty training, and products directed at developing teaching methods. That is why it is essential to rely on well-designed and validated instruments [29, 30].

Instruments that were initially developed in a particular language for use in one context can be made appropriate for use in other languages or contexts [31, 32]. In such cases, the translation/adaptation process aims to produce an instrument with psychometric qualities comparable to the original by following a specific procedure [30, 31, 33–35], and the instrument developer should evaluate its validity for the target population [31].

Validity is an instrument’s ability to measure the latent construct it is supposed to measure [36] (p. 55). It can be examined through content, construct, convergent, and discriminant validity [36] (p. 55) [37]. Content validity can be established when subject matter experts examine the constructs, including the definitions and items for each construct [28]. Once content validity is established, the instrument is administered to examine construct validity.

The purpose of construct validity is to determine whether the constructs being measured are a valid conceptualization of the phenomena being tested [28]. If items do not load on the intended construct, they should be eliminated, as they are not adequate measures of that construct [37]. In confirmatory factor analysis (CFA), construct validity is achieved when the fitness indices for a construct are satisfied [30].

Convergent validity is achieved when all items in a measurement model are significantly correlated with their respective latent constructs [30, 35, 38]. It can also be verified using the average variance extracted (AVE) for every construct. The AVE estimate is the average amount of variation that the latent construct can explain in the observed variables to which it is theoretically related [37, 39] (pp. 600–638). The AVE of every latent construct should be above 0.5 to establish convergent validity [40]. Discriminant validity indicates that the measurement model of a construct is free from redundant and unnecessary items. It checks whether items within a construct intercorrelate more strongly than they correlate with items from other constructs to which they are theoretically unrelated [30, 37, 38, 41].
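To make the AVE criterion concrete, the sketch below computes the AVE as the mean of the squared standardized loadings of a construct’s items; the loading values are hypothetical and only illustrate the > 0.5 convergent validity check described above.

```python
import numpy as np

def average_variance_extracted(loadings):
    """AVE: the mean of the squared standardized loadings of a construct's items."""
    loadings = np.asarray(loadings, dtype=float)
    return float(np.mean(loadings ** 2))

# Hypothetical standardized loadings for a five-item construct.
ave = average_variance_extracted([0.82, 0.76, 0.88, 0.79, 0.71])
print(f"AVE = {ave:.3f}; convergent validity criterion met: {ave > 0.5}")
```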

This study aims to adapt and validate an Arabic version of the students’ satisfaction scale. It aims to measure students’ satisfaction with the MGHE Connect platform in Saudi Arabia and provide Saudi and other Arab academics with a valid instrument for further studies and interventions to improve students’ learning and environments.

2. Materials and Methods

2.1. Materials

The study started with a survey proposed by Gearhart [8]. Gearhart’s survey consisted of 30 items covering four general categories of perception: satisfaction (N = 5, α = 0.87), utility (N = 12, α = not reported), usability (N = 9, α = 0.87), and perceived value (N = 4, α = 0.91). The utility scale comprised three subscales: understanding (N = 4, α = 0.66), studying (N = 4, α = 0.73), and preparation (N = 4, α = 0.87). Gearhart described these categories as follows [8]:
(i) Satisfaction concerns whether the tool generally met the needs of the students.
(ii) Utility relates to how students used the technology, and it includes three subscales:
(a) Understanding reflects the degree to which students thought Connect helped them to comprehend the material better.
(b) Preparation measures the students’ use of Connect to introduce course content before discussions and lectures.
(c) Studying assesses the use of technology to review for exams.
(iii) Usability gauges student perceptions about access and user-friendliness.
(iv) Perceived value indicates whether it is worth it (p. 13).

The survey’s response scaling ranged from 1 = strongly disagree to 5 = strongly agree. Two items from the satisfaction subscale, items 3 and 5, were negatively worded and reverse-coded before analysis.

Although the students’ perceptions survey developed by Gearhart consisted of 30 items [8], when the researcher contacted the author to obtain permission and the item list, the researcher received a list of 34 items classified as follows: satisfaction (N = 5), utility (N = 16), usability (N = 9), and perceived value (N = 4) [42]. The satisfaction and usability scales were identical in both versions, while the utility scale was not. In [8], the utility-scale items (N = 12) were categorized into three subscales (understanding, studying, and preparation), while in [42], all utility items (N = 16) were grouped under one scale. Thus, the researcher started the investigation with the larger item pool to reach the final validated scale.

Nevertheless, since students in this study received free access licenses under an arrangement between Yanbu Industrial College and the McGraw–Hill company agent, the perceived value scale was excluded from this adaptation study. Thus, the researcher started the adaptation work with 30 items, of which five represented satisfaction, 16 represented utility, and nine represented usability (see Appendix A).

2.2. Procedure

The researcher followed the International Test Commission (ITC) Guidelines and other literature for translating and adapting tests [30, 31, 33, 43–50]. The 30-item survey was translated from English into Arabic by two bilingual experts, and the translations were discussed with the researcher and consolidated into one version. The translated version was then translated back into English by two other bilingual experts. The researcher and two other experts made semantic adaptations and corrections and discussed the items to reach consensus on the initial version of the translated survey.

The translated version was then sent to seven professional subject matter experts in educational technology, e-learning, computer science, and chemistry to review the relevance of the content to the constructs and the items for each construct. One item (item 22) was deleted from the usability scale because 71% of the reviewers judged that it was not relevant to usability or to any other scale.

After verifying the content validity, the survey was administered using a 50-participant pilot sample to examine the instrument and its items empirically using EFA. The researcher also interviewed ten respondents to check if they had any questions, concerns, or comments about the survey. The survey was then ready for further empirical investigations in the primary phase using CFA.

2.3. Participants

The study used two samples in two phases. In the pilot phase, a cluster sample was drawn by randomly selecting two sections out of the ten sections (each with 25 students on average) of Chemistry 101 offered at Yanbu Industrial College, located in the western region of Saudi Arabia, in the Fall semester of 2019. The two-section sample comprised 55 students. The sampled students used MGHE Connect throughout that semester as a supplement platform in addition to face-to-face instruction. After using MGHE Connect for 15 weeks and before the final exams started, the students were invited to respond to the pilot-phase survey. Fifty students responded, and the participants’ ages ranged between 19 and 21 years.

In the following semester, Spring 2020, there were also ten Chemistry 101 sections (each with 23 students on average) whose students used MGHE Connect in the same way. After 15 weeks of using MGHE Connect in their learning, they were invited to respond to the survey. This final implementation sample comprised 193 students whose ages were between 19 and 22 years.

2.4. Data Analysis

The study used exploratory factor analysis (EFA) with the maximum likelihood extraction method and the Promax rotation method to explore the survey’s constructs in the pilot sample. The Promax rotation method was used because the constructs were correlated [51]. The EFA solution used Kaiser’s criterion (eigenvalue > 1) to retain factors [52]. The Kaiser–Meyer–Olkin measure of sampling adequacy (KMO > 0.7) and a significant Bartlett’s test of sphericity were used to examine whether factor analysis was appropriate [53]. IBM SPSS version 20 was used to analyze the data.
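The pilot analyses were run in SPSS; as a rough open-source analogue, the sketch below reproduces the same steps (KMO, Bartlett’s test, maximum likelihood extraction, Kaiser’s criterion, and Promax rotation) with the Python factor_analyzer package. The input file name is a placeholder.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

# Hypothetical file: rows = pilot respondents, columns = survey items (V1, V2, ...).
responses = pd.read_csv("pilot_responses.csv")

# Sampling adequacy checks (KMO and Bartlett's test of sphericity).
_, kmo_total = calculate_kmo(responses)
chi_square, p_value = calculate_bartlett_sphericity(responses)
print(f"KMO = {kmo_total:.3f}, Bartlett chi2 = {chi_square:.3f}, p = {p_value:.4f}")

# Unrotated run to count factors with eigenvalue > 1 (Kaiser's criterion).
fa = FactorAnalyzer(rotation=None, method="ml")
fa.fit(responses)
eigenvalues, _ = fa.get_eigenvalues()
n_factors = int((eigenvalues > 1).sum())

# Maximum likelihood extraction with Promax rotation for the retained factors.
fa = FactorAnalyzer(n_factors=n_factors, rotation="promax", method="ml")
fa.fit(responses)
print(pd.DataFrame(fa.loadings_, index=responses.columns).round(3))
```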

The study assessed the proposed competing models using CFA in IBM SPSS AMOS v. 22 [47, 54] (pp. 103–122) [52–56]. The maximum likelihood method was used to estimate parameters. Construct validity was examined using six fit indices: χ²/df (<5), CFI (>0.9), GFI (>0.9), TLI (>0.9), SRMR (<0.08), and RMSEA (<0.08) [57, 58]. Convergent validity was examined by checking that indicators loaded onto the expected factors (>0.4) and that the AVE exceeded 0.5 [30, 40, 59, 60] (pp. 73–84). Discriminant validity was examined using the Fornell–Larcker criterion [62]. The AVE of each construct should be higher than its maximum shared variance (MSV) with any other construct [62, 63]. The shared variance (SV) is represented by the square of the correlation between any two constructs [37].
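The cutoffs above can be applied mechanically to any fitted model. The small helper below, with the model 1 values taken from Table 4 purely as an illustration, flags which criteria a set of fit indices satisfies.

```python
# Fit-index cutoffs stated above.
FIT_CUTOFFS = {
    "chi2/df": lambda v: v < 5,
    "CFI": lambda v: v > 0.90,
    "GFI": lambda v: v > 0.90,
    "TLI": lambda v: v > 0.90,
    "SRMR": lambda v: v < 0.08,
    "RMSEA": lambda v: v < 0.08,
}

def check_fit(indices):
    """Return True/False per fit index for whichever indices are supplied."""
    return {name: FIT_CUTOFFS[name](value)
            for name, value in indices.items() if name in FIT_CUTOFFS}

# Model 1 values from Table 4 (illustration only).
print(check_fit({"chi2/df": 2.808, "CFI": 0.888, "GFI": 0.761,
                 "TLI": 0.876, "SRMR": 0.075, "RMSEA": 0.097}))
```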

The study used Cronbach’s alpha (α) to assess reliability (α should be >0.7) [61, 64] and the item-total correlation between each item and its construct (r should be >0.4) [65]. The composite reliability (CR) of each latent variable was also estimated because it is a more suitable indicator of reliability than Cronbach’s alpha [40, 66]. MaxR(H), which refers to McDonald’s construct reliability, was also estimated; the coefficient H describes the relationship between the latent construct and its measured indicators [40]. Means and parametric tests were used to describe the Likert scale responses and to test the significance of differences [67].
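As a reference for the two reliability coefficients used here, the sketch below computes Cronbach’s alpha from raw item scores and the composite reliability from standardized loadings (assuming uncorrelated error terms); both are standard formulas, and the inputs are placeholders rather than the study data.

```python
import numpy as np

def cronbach_alpha(item_scores):
    """item_scores: 2-D array, rows = respondents, columns = items of one scale."""
    x = np.asarray(item_scores, dtype=float)
    k = x.shape[1]
    item_variances = x.var(axis=0, ddof=1)
    total_variance = x.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

def composite_reliability(loadings):
    """CR from standardized loadings, assuming uncorrelated measurement errors."""
    lam = np.asarray(loadings, dtype=float)
    error_variances = 1 - lam ** 2
    return lam.sum() ** 2 / (lam.sum() ** 2 + error_variances.sum())
```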

3. Results and Discussion

3.1. Pilot Study Results

The pilot analysis started with the three-construct version suggested by the original author [42]. Cronbach’s alphas were as follows: satisfaction (N = 5, α = 0.785), utility (N = 16, α = 0.935), and usability (N = 8, α = 0.851). The item analyses, including the item-total correlations and Cronbach’s alpha if an item is deleted, are shown in Table 1.


Table 1: Item-total correlations and Cronbach’s alpha if item deleted (pilot sample).

Subscale | Item | Item-total correlation | Alpha if item deleted
Satisfaction | V1 | 0.724 | 0.689
 | V2 | 0.672 | 0.706
 | V3 | 0.298 | 0.820
 | V4 | 0.570 | 0.743
 | V5 | 0.573 | 0.742
Usability | V22 | 0.590 | 0.835
 | V23 | 0.615 | 0.831
 | V24 | 0.692 | 0.826
 | V25 | 0.315 | 0.869
 | V26 | 0.592 | 0.834
 | V27 | 0.617 | 0.831
 | V28 | 0.766 | 0.814
 | V29 | 0.638 | 0.828
Utility | V6 | 0.762 | 0.929
 | V7 | 0.815 | 0.927
 | V8 | 0.651 | 0.932
 | V9 | 0.745 | 0.930
 | V10 | 0.697 | 0.931
 | V11 | 0.680 | 0.931
 | V12 | 0.813 | 0.929
 | V13 | 0.581 | 0.933
 | V14 | 0.674 | 0.931
 | V15 | 0.610 | 0.933
 | V16 | 0.702 | 0.930
 | V17 | 0.653 | 0.932
 | V18 | 0.773 | 0.928
 | V19 | 0.559 | 0.935
 | V20 | 0.345 | 0.941
 | V21 | 0.790 | 0.928

The results indicated that the three proposed constructs were internally consistent. However, three items (3, 20, and 25) had low item-total correlation coefficients (<0.4) and contributed negatively to Cronbach’s alpha. Thus, they were removed from the suggested survey item list.

Then, EFA was conducted to explore the preliminary constructs of the survey. The Kaiser–Meyer–Olkin measure of sampling adequacy (KMO = 0.772) and Bartlett’s test of sphericity (approx. chi-square = 1054.012, df = 325, p < 0.001) indicated that factor analysis was appropriate. The EFA solution using Kaiser’s criterion retained five factors. The total sum of squared loadings accounted for 65.32% of the variance, and the five extracted factors and item loadings are shown in Table 2.


Table 2: EFA pattern loadings for the five retained factors (pilot sample)¹,².

Item | Statement | Loading(s)
V1 | I am satisfied with Connect | 0.860
V2 | I would like to take a similar course if it utilizes Connect | 0.883
V4 | I would recommend others to take courses which use Connect | 0.644
V5 | I was not satisfied with my Connect experience | 0.624
V6 | With Connect I learned more in this online course as compared to a face-to-face course | 0.668
V7 | Connect encouraged me to rethink my understanding of some aspects of the subject matter after completing Connect modules | 0.773
V8 | While using Connect, examples and illustrations were given to help me to grasp things better | 0.973
V9 | While using Connect, I was prompted to think about how I could develop my learning | 0.422, 0.590 (cross-loading)
V10 | Completing Connect modules helped to direct my learning | — (below 0.40)
V11 | Connect modules clarified expectations of what is required to be understood to get good marks | 0.459
V12 | The Connect modules used in this course facilitated my learning | 0.730
V13 | Receiving feedback on Connect modules helped me to determine areas of deficiency | 0.498
V14 | Preparation for graded quizzes in this course facilitated my learning | 0.424
V15 | Online quizzes embedded in Connect better helped me to understand my level of comprehension | 0.887
V16 | Online quizzes embedded in Connect helped to direct my studying and learning | 0.655
V17 | Connect helped me to come to class prepared | 0.866
V18 | Connect facilitated my interaction in course discussions | 0.604, 0.436 (cross-loading)
V19 | I used Connect to cover course content before it was discussed in class | 0.767
V21 | Connect allowed me to be more interactive during class time | — (below 0.40)
V22 | From the start, Connect made clear to me what I was supposed to learn in this unit | 0.436
V23 | The amount of work required for Connect modules was appropriate | 0.534
V24 | Connect allowed me to access online/digital learning resources readily | 0.560
V26 | Connect allowed me to be responsible for my own learning | 0.565
V27 | I used Connect to help me commission my strengths and weaknesses | 0.608
V28 | Connect helped me to learn more efficiently by showing me what I needed to focus on | 0.645
V29 | Connect helped me to focus my attention on specific areas of need | 0.438, 0.672 (cross-loading)

¹Extraction method: maximum likelihood. ²Rotation method: Promax with Kaiser’s normalization.

The EFA solution, shown in Table 2, supported the five-factor construct. Three items (9, 18, and 29) showed cross-loadings on two factors, and two items (10 and 21) loaded below 0.4 on all factors.

Thus, items 10 and 21 were removed, and the EFA was conducted again; the resulting solution is shown in Table 3.


Table 3: EFA pattern loadings after removing items 10 and 21 (pilot sample)¹,².

Item | Statement | Loading
V1 | I am satisfied with Connect | 0.835
V2 | I would like to take a similar course if it utilizes Connect | 0.866
V4 | I would recommend others to take courses which use Connect | 0.603
V5 | I was not satisfied with my Connect experience | 0.585
V6 | With Connect I learned more in this online course as compared to a face-to-face course | 0.658
V7 | Connect encouraged me to rethink my understanding of some aspects of the subject matter after completing Connect modules | 0.748
V8 | While using Connect, examples and illustrations were given to help me to grasp things better | 0.942
V12 | The Connect modules used in this course facilitated my learning | 0.730
V27 | I used Connect to help me commission my strengths and weaknesses | 0.587
V13 | Receiving feedback on Connect modules helped me to determine areas of deficiency | 0.521
V14 | Preparation for graded quizzes in this course facilitated my learning | 0.429
V15 | Online quizzes embedded in Connect better helped me to understand my level of comprehension | 0.847
V16 | Online quizzes embedded in Connect helped to direct my studying and learning | 0.655
V9 | While using Connect, I was prompted to think about how I could develop my learning | 0.578
V17 | Connect helped me to come to class prepared | 0.838
V18 | Connect facilitated my interaction in course discussions | 0.536
V19 | I used Connect to cover course content before it was discussed in class | 0.714
V11 | Connect modules clarified expectations of what is required to be understood to get good marks | 0.492
V22 | From the start, Connect made clear to me what I was supposed to learn in this unit | 0.466
V23 | The amount of work required for Connect modules was appropriate | 0.559
V24 | Connect allowed me to access online/digital learning resources readily | 0.553
V26 | Connect allowed me to be responsible for my own learning | 0.549
V28 | Connect helped me to learn more efficiently by showing me what I needed to focus on | 0.657
V29 | Connect helped me to focus my attention on specific areas of need | 0.669

¹Extraction method: maximum likelihood. ²Rotation method: Promax with Kaiser’s normalization.

The EFA solution again supported the five-factor construct. All items loaded adequately on their respective factors (>0.40). Items 11 and 27 swapped constructs relative to the first solution.

Cronbach’s alphas for the pilot version constructs were as follows: satisfaction (N = 4, α = 0.820), understanding (N = 5, α = 0.911), studying (N = 4, α = 0.860), preparation (N = 4, α = 0.855), usability (N = 7, α = 0.877), and utility, which covers understanding, studying, and preparation (N = 13, α = 0.930). The internal consistency of all constructs/subconstructs improved, and there was no indication that further modification was required at this phase.

3.2. Main Study Results

In this phase, the survey was administered to examine construct, convergent, and discriminant validity. Based on the survey’s theoretical background [8] and the empirical findings of the EFA in the pilot phase, three proposed construct/subconstruct models of the survey were examined (see Figures 1–3); a specification sketch for the second-order model follows the list.
Model 1: first-order, three-factor construct. This model represented the constructs sent by Gearhart through personal communication, in which the survey items were categorized into three constructs: satisfaction (N = 4), utility (N = 13), and usability (N = 7) [42]. In this model, the utility construct was not divided into subconstructs as Gearhart had suggested in [8].
Model 2: first-order, five-correlated-factor construct. This model represented what was suggested by the EFA findings of the pilot study. The five correlated constructs were satisfaction (N = 4), understanding (N = 5), studying (N = 4), preparation (N = 4), and usability (N = 7).
Model 3: three first-order factors with a higher-order factor. This model was proposed based on Gearhart’s inputs in [8] and the pilot study’s EFA findings. The first-order factors were understanding (N = 5), studying (N = 4), and preparation (N = 4), grouped under a higher-order factor, utility (N = 13), which was correlated with two other factors: satisfaction (N = 4) and usability (N = 7).
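For readers who want to reproduce the comparison outside AMOS, the sketch below specifies model 3 in lavaan-style syntax and fits it with the open-source semopy package. The item-to-subscale groupings are inferred from the ordering of Table 3 and the subscale sizes reported above, and the data file name is a placeholder, so treat the specification as an illustration rather than the study’s exact AMOS setup.

```python
import pandas as pd
from semopy import Model, calc_stats  # open-source SEM package; the study itself used IBM SPSS AMOS

# Model 3: three first-order utility subfactors under a higher-order utility factor,
# correlated with satisfaction and usability. Item groupings inferred from Table 3.
MODEL_3 = """
Understanding =~ V6 + V7 + V8 + V12 + V27
Studying =~ V13 + V14 + V15 + V16
Preparation =~ V9 + V17 + V18 + V19
Utility =~ Understanding + Studying + Preparation
Satisfaction =~ V1 + V2 + V4 + V5
Usability =~ V11 + V22 + V23 + V24 + V26 + V28 + V29
"""

data = pd.read_csv("main_sample_responses.csv")  # hypothetical file with the 193 responses
model = Model(MODEL_3)
model.fit(data)
print(calc_stats(model).T)  # chi-square, CFI, TLI, RMSEA, and other fit measures
```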

The three proposed models were tested using CFA with the maximum likelihood estimation method. The Kaiser–Meyer–Olkin measure of sampling adequacy (KMO = 0.959) and Bartlett’s test of sphericity (approx. chi-square = 4107.110, df = 276, p < 0.001) showed that factor analysis was appropriate. The fit indices of the CFA conducted for the three proposed models are shown in Table 4.


Table 4: Fit indices of the CFA for the three proposed models.

Model | Sample (n) | χ² (df) | χ²/df | CFI | GFI | TLI | SRMR | RMSEA | p value
1 | 193 | 699.226 (249) | 2.808 | 0.888 | 0.761 | 0.876 | 0.075 | 0.097 | 0.000
2 | 193 | 528.590 (242) | 2.184 | 0.929 | 0.804 | 0.919 | 0.077 | 0.079 | 0.000
3 | 193 | 540.039 (246) | 2.195 | 0.927 | 0.802 | 0.918 | 0.079 | 0.079 | 0.000

The CFA solutions showed poor fit indices for model 1 (CFI < 0.90, GFI < 0.90, TLI < 0.90, RMSEA > 0.08), while both model 2 and model 3 fitted the study data adequately (CFI > 0.90, TLI > 0.90, SRMR < 0.08, RMSEA < 0.08); only the GFI was below the cutoff (0.90). Thus, model 2 and model 3 were considered to meet the construct validity criteria, with a slight advantage for model 2.

3.3. Reliability and Validity Evidence

The study first investigated model 2 to examine construct reliability, convergent validity, and discriminant validity; the results are shown in Table 5. The composite reliability (CR) and McDonald’s construct reliability (MaxR(H)) were high (>0.7), establishing the reliability of all five constructs suggested in model 2. Also, the AVE values of all constructs were above 0.5, establishing the constructs’ convergent validity.


Table 5: Reliability and validity measures for model 2.

Construct | CR | AVE | MSV | MaxR(H) | Understanding | Studying | Preparation | Satisfaction | Usability
Understanding | 0.914 | 0.682 | 0.883 | 0.919 | 0.826
Studying | 0.897 | 0.689 | 0.731 | 0.945 | 0.836 | 0.830
Preparation | 0.891 | 0.672 | 0.823 | 0.906 | 0.894 | 0.785 | 0.820
Satisfaction | 0.878 | 0.652 | 0.710 | 0.922 | 0.828 | 0.807 | 0.744 | 0.808
Usability | 0.913 | 0.602 | 0.883 | 0.919 | 0.940 | 0.855 | 0.907 | 0.843 | 0.776

Diagonal values of the correlation block are the square roots of the AVE.

However, the MSV values were higher than the AVE values for all constructs, and the square root of the AVE (shown on the diagonal of Table 5) was not consistently higher than the interconstruct correlations. This finding indicated that the model did not achieve discriminant validity [40]. Thus, the researcher moved to model 3 to investigate whether it would satisfy the required validities.
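To make the Fornell–Larcker check concrete, the sketch below recomputes each construct’s maximum shared variance from the Table 5 correlations and compares it with the reported AVE values; it reproduces the discriminant validity failure described above.

```python
import numpy as np

constructs = ["understanding", "studying", "preparation", "satisfaction", "usability"]
ave = np.array([0.682, 0.689, 0.672, 0.652, 0.602])  # AVE values from Table 5

# Interconstruct correlations from Table 5 (symmetric, 1.0 on the diagonal).
corr = np.array([
    [1.000, 0.836, 0.894, 0.828, 0.940],
    [0.836, 1.000, 0.785, 0.807, 0.855],
    [0.894, 0.785, 1.000, 0.744, 0.907],
    [0.828, 0.807, 0.744, 1.000, 0.843],
    [0.940, 0.855, 0.907, 0.843, 1.000],
])

shared_variance = corr ** 2
np.fill_diagonal(shared_variance, 0.0)   # ignore each construct's correlation with itself
msv = shared_variance.max(axis=1)        # maximum shared variance per construct

for name, a, m in zip(constructs, ave, msv):
    # Fornell-Larcker: AVE must exceed MSV (equivalently, sqrt(AVE) must exceed the largest correlation).
    print(f"{name:13s} AVE = {a:.3f}, MSV = {m:.3f}, discriminant validity: {a > m}")
```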

The results for model 3 are shown in Table 6.


Table 6: Reliability and validity measures for model 3.

Construct | CR | AVE | MSV | MaxR(H) | Satisfaction | Utility | Usability
Satisfaction | 0.878 | 0.652 | 0.743 | 0.923 | 0.807
Utility | 0.942 | 0.843 | 0.962 | 0.954 | 0.862 | 0.918
Usability | 0.913 | 0.601 | 0.962 | 0.919 | 0.842 | 0.981 | 0.775

Diagonal values of the correlation block are the square roots of the AVE.

The composite reliability (CR) and McDonald’s construct reliability (MaxR(H)) were satisfactory (>0.7), and all AVE values of the higher-order constructs in model 3 were above 0.5, indicating that the convergent validity of these constructs was established. However, the MSV values were higher than the AVE values, and the square root of the AVE (shown on the diagonal of Table 6) was not consistently higher than the higher-order interconstruct correlations. These findings indicated that model 3 achieved construct reliability, construct validity, and convergent validity but had a discriminant validity problem.

Neither model 2 nor model 3 was acceptable because of the lack of discriminant validity. Thus, further modifications and investigations were needed to resolve the discriminant validity issue and reach a better solution.

3.4. Alternative Models

A minimal modification was conducted to achieve discriminant validity while maintaining the content validity that had already been established. The research literature suggests that grouping highly correlated constructs, using higher-order constructs, and eliminating some items can resolve the discriminant validity challenge [37].

The researcher selected model 3 to modify because it already had higher-order constructs. The highest correlation coefficient between constructs in this model was between utility and usability constructs (r = 0.981). Thus, the researcher grouped the usability construct with the utility construct to propose a new model 4, as shown in Figure 4.

The fit indices of model 4 were χ²/df = 2.186 (<3), CFI = 0.927 (>0.9), SRMR = 0.054 (<0.08), and RMSEA = 0.079 (<0.08), indicating that the data fit the model. The AVEs were 0.652 and 0.873 for the satisfaction and utility constructs, respectively. The correlation coefficient between the satisfaction and utility constructs was 0.860. Since the square root of the AVE of the satisfaction construct (0.807) was less than the correlation between the satisfaction and utility constructs, this model still lacked discriminant validity. Gaskin and Lim’s master validity tool suggested deleting item 13 to improve the model [68]. After removing item 13, the fit indices of the new model were χ²/df = 2.000 (<3), CFI = 0.941 (>0.9), SRMR = 0.048 (<0.08), and RMSEA = 0.072 (<0.08), indicating that the data fitted the proposed model better. The AVEs were 0.652 and 0.929 for the satisfaction and utility constructs, respectively. The square of the correlation coefficient between the satisfaction and utility constructs was 0.741, which was higher than the AVE of the satisfaction construct (0.652), indicating that the model still lacked discriminant validity.

Another step that could help achieve discriminant validity is to improve the satisfaction construct [69]. The item loadings on the satisfaction construct showed that item 5 had the lowest loading on its respective construct. When item 5 was deleted, the AVEs were 0.780 and 0.865 for the satisfaction and utility constructs, respectively. The square of the correlation coefficient between satisfaction and utility was 0.734, which was less than the AVE values of both the satisfaction and utility constructs, indicating that the model achieved discriminant validity. The fit indices of this model were χ²/df = 2.079 (<3), CFI = 0.942 (>0.9), GFI = 0.83 (<0.9), TLI = 0.934 (>0.9), SRMR = 0.046 (<0.08), and RMSEA = 0.075 (<0.08), indicating that the data fit this proposed model better.

The composite reliability (CR) and AVE for the constructs/subconstructs were 0.994 (0.780), 0.998 (0.865), 0.990 (0.681), 0.987 (0.758), 0.986 (0.673), and 0.987 (0.602) for the satisfaction, utility, understanding, studying, preparation, and usability constructs/subconstructs, respectively. This finding indicated that this model successfully achieved construct reliability, construct validity, and convergent validity in addition to discriminant validity. The constructs, loadings, and variance explained for this optimal model (model 5) are shown in Figure 5.

When the three proposed models were tested using the primary study sample, the results supported the five-factor constructs (model 2 and model 3) rather than the three-factor construct (model 1). The CFA findings consistently supported the construct reliability, construct validity, and convergent validity of the proposed models. However, none of them achieved discriminant validity. The survey items and dimensions were highly correlated, which made discriminant validity challenging to achieve. Farrell and Rudd [37] and Yale et al. [70] suggested solving this challenge by using higher-order grouped constructs or by increasing the satisfaction construct’s AVE. Finally, the study reached a model that satisfied construct, convergent, and discriminant validity.

Even though the high associations among items and constructs make the survey highly internally consistent and reliable, they made discriminant validity challenging to achieve. The earlier models’ lack of discriminant validity indicated that the total scores of their different constructs could not be interpreted distinctly. The study tried to detect distinct constructs as far as possible, and it reached the two higher-order-construct solution shown in model 5. These findings, to some extent, support the construct validity suggested by Gearhart [8]. The survey measures overall satisfaction and a utility dimension comprising four subscales: understanding, studying, preparation, and usability.

3.5. Students’ Satisfaction

Based on the validated survey, the subscale means of students’ satisfaction are shown in Table 7 and Figure 6.


Table 7: Subscale means of students’ satisfaction and one-sample t-test results.

Scale/subscale | Items (N) | Cronbach’s α | Mean | St. dev. | t¹ | df | Sig. (2-tailed) | Mean difference | 95% CI lower | 95% CI upper
Satisfaction | 3 | 0.912 | 3.522 | 0.176 | 5.663 | 192 | 0.000 | 0.52159 | 0.3399 | 0.7033
Understanding | 5 | 0.913 | 3.596 | 0.170 | 7.802 | 192 | 0.000 | 0.59585 | 0.4452 | 0.7465
Studying | 3 | 0.896 | 3.573 | 0.138 | 6.606 | 192 | 0.000 | 0.5734 | 0.4022 | 0.7446
Preparation | 4 | 0.893 | 3.227 | 0.259 | 3.012 | 192 | 0.003 | 0.22668 | 0.0782 | 0.3751
Usability | 7 | 0.913 | 3.575 | 0.145 | 8.271 | 192 | 0.000 | 0.57513 | 0.438 | 0.7123
Utility | 19 | 0.965 | 3.507 | 0.221 | 7.319 | 192 | 0.000 | 0.50695 | 0.3703 | 0.6436

¹One-sample t-test with test value = 3.0.

The results indicated that students’ overall satisfaction with MGHE Connect was met (M = 3.52, SD = 0.176) and was significantly above the neutral point (M = 3.0), t(192) = 5.663, p < 0.001. Also, students were satisfied with its utility (M = 3.51, SD = 0.221), t(192) = 7.319, p < 0.001. The highest level of satisfaction was with understanding (M = 3.60, SD = 0.170), t(192) = 7.802, p < 0.001, and the lowest was with preparation (M = 3.23, SD = 0.259), t(192) = 3.012, p = 0.003. These findings support what Gearhart found in [8].
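The one-sample comparisons in Table 7 test each subscale mean against the neutral midpoint of the 5-point scale. A minimal sketch of that test, using a short hypothetical score vector rather than the study data:

```python
import numpy as np
from scipy import stats

# Hypothetical per-student means on one subscale (1-5 Likert metric).
scores = np.array([3.8, 3.2, 4.0, 3.5, 2.9, 3.7, 3.4, 3.6, 3.1, 3.9])

# One-sample t-test against the neutral test value of 3.0, as in Table 7.
t_stat, p_value = stats.ttest_1samp(scores, popmean=3.0)
print(f"t({len(scores) - 1}) = {t_stat:.3f}, p = {p_value:.4f}")
```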

The t-tests for paired differences between utility subscales are shown in Table 8.


Table 8: Paired-samples t-tests between the utility subscales.

Pair | Subscales’ difference | Mean difference | St. dev. | Std. error mean | t | df | Sig. (2-tailed)
1 | Understanding − studying | 0.02245 | 0.77834 | 0.05603 | 0.401 | 192 | 0.689
2 | Understanding − preparation | 0.36917 | 0.68041 | 0.04898 | 7.538 | 192 | 0.000
3 | Understanding − usability | 0.02073 | 0.56351 | 0.04056 | 0.511 | 192 | 0.610
4 | Studying − preparation | 0.34672 | 0.91129 | 0.0656 | 5.286 | 192 | 0.000
5 | Studying − usability | −0.00173 | 0.75379 | 0.05426 | −0.032 | 192 | 0.975
6 | Preparation − usability | −0.34845 | 0.63893 | 0.04599 | −7.576 | 192 | 0.000

The results indicated significant differences in satisfaction means between understanding and preparation (∆M = 0.36917, SD = 0.68041, t(192) = 7.538, p < 0.001), studying and preparation (∆M = 0.34672, SD = 0.91129, t(192) = 5.286, p < 0.001), and preparation and usability (∆M = −0.34845, SD = 0.63893, t(192) = −7.576, p < 0.001). In all of these comparisons, the satisfaction level for preparation was significantly lower than for the other subscales. Students were equally satisfied with using MGHE Connect to understand materials and to study for exams, and with its usability.
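The comparisons in Table 8 are paired-samples t-tests on the per-student differences between two subscale means. A minimal sketch with hypothetical data:

```python
import numpy as np
from scipy import stats

# Hypothetical per-student subscale means for two utility subscales.
understanding = np.array([3.9, 3.4, 4.1, 3.6, 3.0, 3.8, 3.5, 3.7])
preparation = np.array([3.3, 3.1, 3.8, 3.2, 2.7, 3.4, 3.0, 3.5])

# Paired-samples t-test, mirroring the Table 8 comparisons.
t_stat, p_value = stats.ttest_rel(understanding, preparation)
print(f"t({len(understanding) - 1}) = {t_stat:.3f}, p = {p_value:.4f}")
```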

In sum, the study indicated overall student satisfaction with MGHE Connect, which means that the tool generally meets student needs. It indicated that MGHE Connect is adequate in helping students understand and comprehend the course material. It also helped them study and review for exams. Students thought MGHE Connect helped them prepare for classes and cover course content ahead of lectures and class discussions. However, students found MGHE Connect significantly less helpful for preparation than for understanding and studying. The study also showed that students were as satisfied with MGHE Connect’s usability as with its support for understanding and studying.

Unlike studies that found MGHE Connect ineffective in improving measures of student academic performance [13–15], this study showed that student perceptions of MGHE Connect were positive. These findings show that it is essential to consider both direct measures, such as exam scores, and indirect measures, such as surveys, when assessing blended learning programs [14, 15, 71–73]. Using both types of measures might resolve the ambiguity in assessing such programs’ effectiveness [70]. The findings also agree with [27, 70], which stated that student self-reports of learning have no relationship with actual learning; students might perceive a tool as substantially impacting their learning while it has no impact on direct learning measures.

4. Conclusion

This study aimed to adapt and validate a scale to assess students’ satisfaction with MGHE Connect in Saudi Arabia. It aimed to provide a valid instrument measuring students’ satisfaction with MGHE Connect for further studies and interventions to improve students’ learning and learning environments. The study followed a well-established procedure to translate the survey and establish content validity. It examined the survey items to establish construct, convergent, and discriminant validity as well as composite reliability. The only model that fitted the data and satisfied all reliability and validity standards was a second-order model. Two primary constructs were distinctively identified: satisfaction and utility. The utility scale was composed of four subscales: understanding, studying, preparation, and usability. The survey constructs were strongly associated, and it was challenging to establish discriminant validity for the proposed models. Grouping constructs under a higher-order factor allowed discriminant validity to be achieved in addition to the already established reliability and validity coefficients. The final version of the survey was reliable and valid (see Appendix B), and it can be used in further studies and interventions. The study showed that MGHE Connect, in general, met the students’ needs and satisfied them. MGHE Connect significantly helped students comprehend course materials better, study before exams to get better scores, and prepare for class discussions in advance. However, students rated MGHE Connect’s usefulness for preparation significantly lower than its usefulness for understanding and studying. Also, students were satisfied with MGHE Connect’s ease of use and friendliness. The study showed how useful MGHE Connect was based on students’ perceptions in Saudi Arabia. However, this study is limited in that the sample used may not represent the broader population.

Data Availability

The data used to support the findings of this study are available upon request to the author.

Conflicts of Interest

The author declares no conflicts of interest regarding the publication of this paper.

Acknowledgments

The author would like to thank Dr. Christopher Gearhart, Tarleton State University, USA, for sharing the original survey items. Also, the author thanks Dr. Saeed Al-Qahtani, Yanbu, English Language Institute Deputy Managing Director, Dr. Mahmoud Alabdallah, Dr. Bijal Kottukkal Bahuleyan, Dr. Islam Khan, Prof. Adulkareem Al-Alwani, Dr. Adnane Habibm, Dr. Osman Barnawi, Mr. Wahieb Al-Baroudi, Mr. Mohammad Al-Johani, Mr. Adel Almotairi, Mr. Omar Alkhowaiter, Mr. Salem Alsufyani, and Mr. Sultan Almalki for their valuable contributions in translating, reviewing, and implementing the survey instrument.

Supplementary Materials

The items and subscales of the initial version of the scale are listed in Appendix A. Also, the items of the final version of the validated survey are shown in Appendix B. (Supplementary Materials)

References

1. N. A. A. Rahman, N. Hussein, and A. H. Aluwi, “Satisfaction on blended learning in a public higher education institution: what factors matter?” Procedia—Social and Behavioral Sciences, vol. 211, pp. 768–775, 2015.
2. W. S. Chen and A. Y. Tat Yao, “An empirical evaluation of critical factors influencing learner satisfaction in blended learning: a pilot study,” Universal Journal of Educational Research, vol. 4, no. 7, pp. 1667–1671, 2016.
3. L. Umek, N. Tomaževic, A. Aristovnik, and D. Keržic, “Predictors of student performance in a blended-learning environment: an empirical investigation,” in Proceedings of the International Association for Development of the Information Society (IADIS) International Conference on E-Learning, Lisbon, Portugal, July 2017.
4. M. Eryilmaz, “The effectiveness of blended learning environments,” Contemporary Issues in Education Research (CIER), vol. 8, no. 4, pp. 251–256, 2015.
5. A. A. Albhnsawy and A. M. Aliweh, “Enhancing student teachers’ teaching skills through a blended learning approach,” International Journal of Higher Education, vol. 5, pp. 131–136, 2016.
6. D. Meier, “Situational leadership theory as a foundation for a blended learning framework,” Journal of Education and Practice, vol. 7, pp. 25–30, 2016.
7. M. G. Alzahrani and J. M. O’Toole, “The impact of internet experience and attitude on student preference for blended learning,” Journal of Curriculum and Teaching, vol. 6, no. 1, pp. 65–78, 2017.
8. C. Gearhart, “Does LearnSmart connect students to textbook content in an interpersonal communication course?: assessing the effectiveness of and satisfaction with LearnSmart,” International Journal of Teaching and Learning in Higher Education, vol. 28, pp. 9–17, 2016.
9. J. Babinckak, W. Harbacivch, and B. Lipschutz, “The impact of an adaptive learning technology on student performance in an introductory-level business course,” The Transnational Journal of Business, vol. 3, pp. 30–38, 2018.
10. T. L. Austin, L. S. Sigmar, G. B. Mehta, and J. L. Shirk, “The impact of web-assisted instruction on student writing outcomes in business communication,” Journal of Instructional Pedagogies, vol. 20, pp. 1–11, 2018.
11. A. I. Gambari, A. T. Shittu, O. O. Ogunlade, and O. R. Osunlade, “Effectiveness of blended learning and elearning modes of instruction on the performance of undergraduates in Kwara state, Nigeria,” MOJES (Malaysian Online Journal of Educational Sciences), vol. 5, pp. 25–36, 2018.
12. McGraw-Hill Education, The Impact of Connect on Student Success, McGraw-Hill Connect Effectiveness Study, New York, NY, USA, 2016.
13. A. A. Babtain, “The effect of McGraw-Hill education connect on students’ academic performance,” International Journal of Emerging Technologies in Learning (IJET), vol. 16, no. 3, pp. 86–113, 2021.
14. G. White, “Adaptive learning technology relationship with student learning outcomes,” Journal of Information Technology Education: Research, vol. 19, pp. 113–130, 2020.
15. E. R. Griff and S. F. Matter, “Evaluation of an adaptive online learning system,” British Journal of Educational Technology, vol. 44, no. 1, pp. 170–176, 2013.
16. N. Lewkow, N. Zimmerman, M. Riedesel, and A. Essa, “Learning analytics platform, towards an open scalable streaming solution for education,” in Proceedings of the 8th International Conference on Educational Data Mining (EDM), Madrid, Spain, June 2015.
17. L. Agnihotri, A. Aghababyan, S. Mojarad, M. Riedesel, and A. Essa, “Mining login data for actionable student insight,” in Proceedings of the 8th International Conference on Educational Data Mining (EDM), Madrid, Spain, June 2015.
18. J. Wu and W. Liu, “An empirical investigation of the critical factors affecting students’ satisfaction in EFL blended learning,” Journal of Language Teaching & Research, vol. 4, pp. 176–185, 2013.
19. J. B. Arbaugh, “What might online delivery teach us about blended management education? prior perspectives and future directions,” Journal of Management Education, vol. 38, no. 6, pp. 784–817, 2014.
20. M. Laumakis, C. Graham, and C. Dziuban, “The sloan-C pillars and boundary objects as a framework for evaluating blended learning,” Journal of Asynchronous Learning Networks, vol. 13, pp. 75–87, 2009.
21. J. Bowyer and L. Chambers, “Evaluating blended learning: bringing the elements together,” Research Matters, vol. 23, pp. 17–26, 2017.
22. S. Sorden and I. Munene, “Constructs related to community college student satisfaction in blended learning,” Journal of Information Technology Education: Research, vol. 12, pp. 251–270, 2013.
23. Y.-S. Wang, “Assessment of learner satisfaction with asynchronous electronic learning systems,” Information & Management, vol. 41, no. 1, pp. 75–86, 2003.
24. B. Akkoyunlu and M. Yılmaz-Soylu, “Development of a scale on learners’ views on blended learning and its implementation process,” The Internet and Higher Education, vol. 11, no. 1, pp. 26–32, 2008.
25. S. R. Palmer and D. M. Holt, “Examining student satisfaction with wholly online learning,” Journal of Computer Assisted Learning, vol. 25, no. 2, pp. 101–113, 2009.
26. M. Abou Naaj, M. Nachouki, and A. Ankit, “Evaluating student satisfaction with blended learning in a gender-segregated environment,” Journal of Information Technology Education: Research, vol. 11, pp. 185–200, 2012.
27. M. Giannousi, N. Vernadakis, V. Derri, M. Michalopoulos, and E. Kioumourtzoglou, “Students’ satisfaction from blended learning instruction,” in Proceedings of the TCC Worldwide Online Conference, pp. 61–68, Honolulu, HI, USA, January 2009.
28. E. Strachota, “The use of survey research to measure student satisfaction in online courses,” in Proceedings of the Midwest Research-to-Practice Conference in Adult, Continuing, and Community Education, The University of Missouri-St. Louis, October 2006.
29. J. Hanson, A. Bangert, and W. Ruff, “A validation study of the what’s my school mindset? survey,” Journal of Educational Issues, vol. 2, no. 2, pp. 244–266, 2016.
30. B. Getnet and A. Alem, “Validity of the center for epidemiologic studies depression scale (CES-D) in Eritrean refugees living in Ethiopia,” BMJ Open, vol. 9, pp. 1–16, 2019.
31. International Test Commission, The ITC Guidelines for Translating and Adapting Tests, International Test Commission, Brussels, Belgium, 2nd edition, 2017.
32. R. Fernández-Ballesteros, E. E. J. De Bruyn, A. Godoy et al., “Guidelines for the assessment process (GAP): a proposal for discussion,” European Journal of Psychological Assessment, vol. 17, no. 3, pp. 187–200, 2001.
33. A. S. Lenz, I. Gómez Soler, J. Dell’Aquilla, and P. M. Uribe, “Translation and cross-cultural adaptation of assessments for use in counseling research,” Measurement and Evaluation in Counseling and Development, vol. 50, no. 4, pp. 224–231, 2017.
34. C. T. Beck, H. Bernal, and R. D. Froman, “Methods to document semantic equivalence of a translated scale,” Research in Nursing & Health, vol. 26, no. 1, pp. 64–73, 2003.
35. M. Ziegler, “Psychological test adaptation and development–how papers are structured and why,” Psychological Test Adaptation and Development, Hogrefe Publishing Corp, Boston, MA, USA, 2020.
36. Z. Awang, A Handbook on Structural Equation Modeling Using AMOS, Universiti Teknologi MARA Publication, Shah Alam, Malaysia, 2012.
37. A. Farrell and J. M. Rudd, “Factor analysis and discriminant validity: a brief review of some practical issues,” in Proceedings of the Australia and New Zealand Marketing Academy Conference, Melbourne, Australia, November 2009.
38. N. Kock and G. Lynn, “Lateral collinearity and misleading results in variance-based SEM: an illustration and recommendations,” Journal of the Association for Information Systems, vol. 13, no. 7, pp. 546–580, 2012.
39. J. Hair, W. Black, B. Babin, and R. Anderson, Multivariate Data Analysis, Pearson Education Limited, London, UK, 7th edition, 2014.
40. M. S. Adil and K. Bin Ab Hamid, “Impact of individual feelings of energy on creative work involvement: a mediating role of leader-member exchange,” Journal of Management Sciences, vol. 4, pp. 82–105, 2017.
41. A. Zait and P. S. P. E. Bertea, “Methods for testing discriminant validity,” Management & Marketing Journal, vol. 9, pp. 217–224, 2011.
42. C. C. Gearhart, Personal Communication, Tarleton State University, Stephenville, TX, USA, 2018.
43. R. K. Hambleton, “Guidelines for adapting educational and psychological tests: a progress report,” European Journal of Psychological Assessment, vol. 10, pp. 229–244, 1994.
44. R. K. Hambleton, “Guidelines for adapting educational and psychological tests,” in Proceedings of the Annual Meeting of the National Council on Measurement in Education, New York, NY, USA, April 1996.
45. R. Grassi-Oliveira, H. Cogo-Moreira, G. A. Salum et al., “Childhood trauma questionnaire (CTQ) in Brazilian samples of different age groups: findings from confirmatory factor analysis,” PLoS One, vol. 9, 2014.
46. F. Devynck, M. Kornacka, C. Baeyens et al., “Perseverative thinking questionnaire (PTQ): French validation of a transdiagnostic measure of repetitive negative thinking,” Frontiers in Psychology, vol. 8, p. 2159, 2017.
47. M. A. Kandemir and R. Akbas-Perkmen, “Examining validity of sources of mathematics self-efficacy scale in Turkey,” European Journal of Education Studies, vol. 3, pp. 69–68, 2017.
48. K. A. Dhamani and M. S. Richter, “Translation of research instruments: research processes, pitfalls and challenges,” Africa Journal of Nursing and Midwifery, vol. 13, pp. 3–13, 2011.
49. W. Maneesriwongul and J. K. Dixon, “Instrument translation process: a methods review,” Journal of Advanced Nursing, vol. 48, no. 2, pp. 175–186, 2004.
50. A. Hernández, M. D. Hidalgo, R. K. Hambleton, and J. Gómez-Benito, “International test commission guidelines for test adaptation: a criterion checklist,” Psicothema, vol. 32, no. 3, pp. 390–398, 2020.
51. H. Abdi, “Factor rotations in factor analyses,” Encyclopedia for Research Methods for the Social Sciences, Sage, Thousand Oaks, CA, USA, 2003.
52. J. Braeken and M. A. L. M. Van Assen, “An empirical kaiser criterion,” Psychological Methods, vol. 22, no. 3, pp. 450–466, 2017.
53. N. U. Hadi, N. Abdullah, and I. Sentosa, “An easy approach to exploratory factor analysis: a marketing perspective,” Journal of Educational and Social Research, vol. 6, pp. 215–223, 2016.
54. T. A. Brown, Confirmatory Factor Analysis for Applied Research, The Guilford Press, New York, NY, USA, 2006.
55. E. L. Lamoureux, J. F. Pallant, K. Pesudovs, G. Rees, J. B. Hassell, and J. E. Keeffe, “The impact of vision impairment questionnaire: an assessment of its domain structure using confirmatory factor analysis and Rasch analysis,” Investigative Ophthalmology & Visual Science, vol. 48, no. 3, pp. 1001–1006, 2007.
56. C. Randler, E. Hummel, M. Glaser-Zikuda, C. Vollmer, F. X. Bogner, and P. Mayring, “Reliability and validation of a short scale to measure situational emotions in science education,” International Journal of Environmental and Science Education, vol. 6, pp. 359–370, 2011.
57. B. Ishiyaku, R. Kasim, and A. I. Harir, “Confirmatory factoral validity of public housing satisfaction constructs,” Cogent Business & Management, vol. 4, no. 1, Article ID 1359458, 2017.
58. A. Alumran, X.-Y. Hou, J. Sun, A. A. Yousef, and C. Hurst, “Assessing the construct validity and reliability of the parental perception on antibiotics (PAPA) scales,” BMC Public Health, vol. 14, no. 1, p. 73, 2014.
59. L. T. Hu and P. M. Bentler, “Cutoff criteria for fit indexes in covariance structure analysis: conventional criteria versus new alternatives,” Structural Equation Modeling: A Multidisciplinary Journal, vol. 6, no. 1, pp. 1–55, 1999.
60. B. M. Byrne, Structural Equation Modeling with AMOS: Basic Concepts, Applications, and Programming, Multivariate Applications Series, Routledge/Taylor & Francis Group, Milton, UK, 2nd edition, 2010.
61. K. S. Taber, “The use of Cronbach’s alpha when developing and reporting research instruments in science education,” Research in Science Education, vol. 48, no. 6, pp. 1273–1296, 2018.
62. C. Fornell and D. F. Larcker, “Evaluating structural equation models with unobservable variables and measurement error,” Journal of Marketing Research, vol. 18, no. 1, pp. 39–50, 1981.
63. J. Henseler, C. M. Ringle, and M. Sarstedt, “A new criterion for assessing discriminant validity in variance-based structural equation modeling,” Journal of the Academy of Marketing Science, vol. 43, no. 1, pp. 115–135, 2015.
64. J. R. Warmbrod, “Reporting and interpreting scores derived from likert-type scales,” Journal of Agricultural Education, vol. 55, no. 5, pp. 30–47, 2014.
65. W. Y. Chin, E. P. H. Choi, K. T. Y. Chan, and C. K. H. Wong, “The psychometric properties of the center for epidemiologic studies depression scale in Chinese primary care patients: factor structure, construct validity, reliability, sensitivity and responsiveness,” PLoS One, vol. 10, no. 8, Article ID e0135131, 2015.
66. Z. Awang, A. Afthanorhan, M. Mohamad, and M. A. M. Asri, “An evaluation of measurement model for medical tourism research: the confirmatory factor analysis approach,” International Journal of Tourism Policy, vol. 6, no. 1, pp. 29–45, 2015.
67. G. M. Sullivan and A. R. Artino, “Analyzing and interpreting data from likert-type scales,” Journal of Graduate Medical Education, vol. 5, no. 4, pp. 541–542, 2013.
68. J. Gaskin and J. Lim, “Master validity tool. AMOS plugin: gaskination’s StatWiki,” 2016, http://statwiki.kolobkreations.com/index.php?title=Main_Page.
69. A. M. Farrell, “Insufficient discriminant validity: a comment on Bove, Pervan, Beatty, and Shiu (2009),” Journal of Business Research, vol. 63, no. 3, pp. 324–327, 2010.
70. R. N. Yale, J. D. Jensen, N. Carcioppolo, Y. Sun, and M. Liu, “Examining first- and second-order factor structures for news credibility,” Communication Methods and Measures, vol. 9, no. 3, pp. 152–169, 2015.
71. D. R. Bacon, “Comparing direct versus indirect measures of the pedagogical effectiveness of team testing,” Journal of Marketing Education, vol. 33, no. 3, pp. 348–358, 2011.
72. C. Luce and J. P. Kirnan, “Using indirect vs. direct measures in the summative assessment of student learning in higher education,” Journal of the Scholarship of Teaching and Learning, vol. 16, no. 4, pp. 75–91, 2016.
73. Q. Sun, Y. Abdourazakou, and T. J. Norman, “LearnSmart, adaptive teaching, and student learning effectiveness: an empirical investigation,” Journal of Education for Business, vol. 92, no. 1, pp. 36–43, 2017.

Copyright © 2021 Adel A. Babtain. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
