Abstract

The importance of Digital Self-Efficacy is likely to grow as digital technology becomes increasingly widespread. To succeed in today’s digital world, it is essential for people to have a strong belief in their ability to effectively use digital technologies. Therefore, researchers need adequate instruments to measure this belief in different populations. The Digital Self-Efficacy Scale offers an innovative, technology-independent approach. It provides a multidimensional assessment grounded in the DigComp framework, allowing adaptability and facilitating comparison across diverse demographics. This study examined the validity and reliability of the Spanish-adapted version of the Digital Self-Efficacy Scale for Ecuadorian workers. The results from a sample of 471 participants, with a gender-balanced distribution (43.74% female and 55.41% male) and a mean age of 34 years, showed that this translated 19-item scale is a valid and reliable measure of the construct. Nomological network analysis with SEM showed that Digital Self-Efficacy had a positive and significant association with task-technology fit and the use of technology. This suggests that workers with higher levels of Digital Self-Efficacy are more likely to use technology effectively and efficiently and to find technology that fits their tasks well.

1. Introduction

Digital transformation is a rapidly evolving field that has profoundly impacted people’s lives. From the way we shop to how we work, digital technologies are changing how we interact with the world. Nearly 20 years ago, Stolterman and Fors [1] defined digital transformation as the changes that digital technology causes or influences in all aspects of human life. Studies have evaluated the effects of digital transformation on human aspects such as quality of life [2], social interaction [3], business [4–6], technological infrastructure [7], the workplace [8], and education [9, 10].

According to the results obtained from a search conducted on the Scopus website, approximately 16,000 articles have been published within the past five years that focus on digital transformation in their titles, abstracts, or keywords. The primary subject areas that have addressed this topic are computer science, engineering, business, management, and accounting. This substantial volume of research highlights the increasing interest in exploring how technology influences both companies and employees, making it a critical area of investigation.

The adoption of digital technologies by businesses and the digital acceleration caused by the COVID-19 pandemic have changed the dynamics of firms [11–13] and raised questions about the skills workers need [14–18]. Among these skills, the ability of workers to function effectively in digital environments has gained significant prominence in recent years. Workers need to adapt to changing job requirements and organizational practices related to new skill-intensive technologies [19]. Terms such as digital skills, digital competences, digital literacy, and digital intelligence are used to describe these abilities [17, 20–23]. Studies such as the 21st Century Digital Skills Framework developed by van Laar et al. [23] and the DigComp framework proposed by the European Commission [17, 20, 22] provide valuable insights into the essential skills for thriving in a dynamic and intricate professional setting.

Peiffer et al. [24] indicated that the proficient utilization of digital systems is not solely contingent upon a predetermined set of abilities but also depends on subjective beliefs regarding one’s competence. These findings expand the knowledge gained by other studies, in which self-efficacy plays a role in the efficient use of digital systems [25]. Moreover, these beliefs have an impact on various aspects of digital system utilization [26, 27].

In the literature, constructs linked to self-efficacy and technology are associated with social cognitive theory (SCT) proposed by Bandura [28]. This theory is a robust framework for understanding how individuals learn and alter their behavior. SCT emphasizes the role of self-efficacy, which is an individual’s belief in their ability to successfully complete a task or achieve a goal [29, 30]. Over time, self-efficacy has been used to evaluate competence self-perception in different contexts, particularly in the digital era [31, 32]. From this perspective, constructs such as Computer Self-Efficacy (CSE), Internet Self-Efficacy (ISE), ICT Self-Efficacy (ICTSE), Technology Self-Efficacy (TSE), and Digital Self-Efficacy (DSE) evaluate an individual’s belief in their capacity to navigate and interact efficiently with digital technologies. However, each construct has a distinct focus and responds to the digital paradigm at the time of its proposition. Table 1 lists and describes the principal constructs associated with self-efficacy and technology, their focus, and the relevant instruments.

Concepts such as CSE, ISE, ICTSE, and TSE have each focused on a particular approach to technological skill beliefs. By contrast, Digital Self-Efficacy is a holistic approach that incorporates both the technical and nontechnical skills essential for effective and responsible engagement in the digital world [44]. Agarwal et al. [45] and Maran et al. [46] describe DSE as an individual’s belief in the effective and effortless utilization of technology in digital environments. Owing to its adaptability, DSE can be applied to various contexts and disciplines [44]. Previous studies have evaluated it as a significant predictor of behaviors such as task-technology fit, technology usage, teaching evaluation strategies, motivation, and work engagement [47–53]. Janssen et al. [54] consider it the “building block” of digital competences.

In today’s digital world, it is essential for people to have a strong belief in their ability to use digital technologies effectively and for researchers to have adequate instruments to measure it. Various instruments have been used to measure technology-related self-efficacy in different languages [55–57] and contexts [32, 55, 58–61], particularly in education [37, 41, 62]. These scales focus on computers, the Internet, ICT, and other digital beliefs [63]. However, despite their adoption, limitations in adaptability and multidimensional representation have been noted [43, 44]. First, they are often system- or technology-specific, which makes them obsolete as technology changes. Second, they do not reflect the multidimensional nature of digital competence. In response to these limitations, the DSE instrument proposed by Ulfert-Blank and Schmidt [44] assesses individuals’ digital confidence across different contexts and purposes. Additionally, the scale is based on DigComp, an internationally recognized framework for digital competencies [17, 20, 22], which permits progression evaluations and comparisons among different populations.

Spanish is the fourth most spoken language globally, with approximately 559 million speakers worldwide [64]. Considering this, previous studies have used adapted Spanish scales to measure technology-related self-efficacy [65–67]. However, these instruments have limitations and do not align with the DigComp framework. To address this gap, this study aims to adapt and validate the Spanish version of the Digital Self-Efficacy Scale developed by Ulfert-Blank and Schmidt [44] and to evaluate its psychometric properties within a sample of workers.

This paper is organized as follows: Section 2 describes the methodology employed for the adaptation and validation of the instrument in Spanish, Section 3 presents the findings of this study, and Section 4 discusses the results, provides concluding remarks, and outlines research limitations and perspectives.

2. Materials and Methods

2.1. Data Collection

Data were collected in June and July 2023. Participants responded to the instrument through an online survey published on researchers’ social networks using the Encuesta Fácil tool.

2.2. Participants

Two convenience samples of workers from the Guayaquil Metropolitan Area, Ecuador, were used for this study. The first sample, used for a pilot test of validity, comprised 40 workers who were studying business administration at the time of data collection. The final version of the instrument was administered to 753 participants, of whom 517 completed the questionnaire, a response rate of 68.66%. To be included in this study, participants had to have been engaged in paid occupational activities during the 12 months prior to the research. Thus, 471 participants met the inclusion criteria and were included in the analysis.

The mean age of the participants was 34 years. Of the sample, 43.74% identified themselves as women, 55.41% as men, and 0.85% as LGBT+. Regarding educational level, 21.23% had a postgraduate degree (professional specialization, master’s, or PhD), 52.45% had a university or technical degree, and 26.33% had a high school or primary education degree. Of the participants, 45.02% were studying for a second degree. In this group, 4.67% were undergraduates, 20.81% were pursuing a master’s degree, 18.05% were pursuing a technical degree, and 1.06% were pursuing a PhD. Table 2 provides an overview of the sample.

2.3. Ethical Considerations

This research project complied with the university’s regulations and was approved by the Research Deanship. All respondents were invited to participate voluntarily and accepted an online informed consent form prior to responding. The consent form described the study’s importance and objectives and its voluntary and confidential nature.

2.4. Measures
2.4.1. Digital Self-Efficacy (DSE)

The 25-item scale developed by Ulfert-Blank and Schmidt [44] was used. The scale has five dimensions corresponding to the DigComp 2.1 competence areas: (1) information and data literacy, (2) communication and collaboration, (3) digital content creation, (4) safety, and (5) problem-solving. Sample items for each competence area are as follows: “I search for specific information in digital environments,” “I interact with others in digital environments,” “I create digital content,” “I recognize the health risks associated with using digital environments,” and “I identify technical problems when using digital environments.” Participants responded to the items on a five-point Likert scale from (1) totally disagree to (5) totally agree.

2.4.2. Task-Technology Fit

We considered the three items used by Lee and Lehto [49], Larsen et al. [48], and Lu and Yang [50]. These items assessed how technology fits work tasks, the necessity of technology for work tasks, and how technology meets work needs. We used a five-point Likert scale from (1) totally disagree to (5) totally agree.

2.4.3. Technology Use

We selected the two items used by Shih and Chen [52] to evaluate the frequency and duration of technology use. The first item used a five-point scale with the following categories: (1) do not use, (2) once a month, (3) once a week, (4) once a day, and (5) several times a day. The second item used a five-point scale with the following categories: (1) do not use, (2) less than 1 hour, (3) 1–2 hours, (4) 3–4 hours, and (5) more than 5 hours.

The descriptive statistics for all scales are shown in Table 3.

2.5. Translation and Adaptation

The English version of the DSE was translated into Spanish by two certified translators. We synthesized the translated items into a preliminary adapted version. Subsequently, two researchers with experience in education and digital transformation evaluated this version and suggested that one of the wording options for item cSE4 be chosen. The option “Defend myself and others against injustice in digital environments/Me defiendo a mí y a otros contra la injusticia en entornos digitales” was suggested and accepted by the authors. Finally, to assess the content validity of the Spanish version of the scale, the authors performed a semantic analysis, and no significant differences were observed.

Following the back-translation method [68] and the evaluation of content validity [69], the preliminary Spanish version of the DSE scale was translated back into English and shared with a group of lecturers from different areas of expertise and English-language experts for a content validity test and language evaluation. The experts evaluated whether each item was clear and understandable and whether they could relate it to the associated competence area. Based on the suggestions of this academic group, no significant changes were made.

2.6. Pilot Test

Following the criteria of Van Belle [70] for pilot-test sample size, we surveyed 40 workers who were studying business administration at the time of data collection. In this pilot test, participants suggested no changes after reading the questionnaire carefully but indicated that they were unfamiliar with some of the competencies included in the instrument, such as programming.

Reliability and validity were evaluated using the collected data. The results indicated that four of the five dimensions of the instrument showed good values for Cronbach’s alpha, composite reliability, and average variance extracted (AVE). Based on these results, changes to the information and data literacy dimension (iSE) were made jointly by the authors and the group of six experts and academics who participated in the review of the instrument. Table 4 presents the final Spanish and English versions of the DSE scale.

2.7. Data Analysis

We conducted a confirmatory factor analysis (CFA) to assess factorial validity and dimensionality [71–73]. The CFA used the maximum likelihood estimation method. The goodness of fit of the CFA model was evaluated using the following indices: the χ²/df ratio, CFI, TLI, and RMSEA. An acceptable fit was indicated by a χ²/df ratio of less than 3, CFI and TLI values greater than 0.92, and RMSEA values less than 0.07 [74].
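As a rough sketch of how these cutoffs operate, the indices can be computed from the fitted model’s and the baseline (null) model’s chi-square statistics. The chi-square values below are illustrative only, not the study’s actual statistics:

```python
import math

def fit_indices(chi2_m, df_m, chi2_b, df_b, n):
    """CFA fit indices from the fitted model (chi2_m, df_m) and the
    baseline/null model (chi2_b, df_b) for a sample of size n."""
    ratio = chi2_m / df_m                       # chi2/df ratio: < 3 acceptable
    d_m = max(chi2_m - df_m, 0.0)               # model noncentrality
    d_b = max(chi2_b - df_b, 0.0)               # baseline noncentrality
    cfi = 1.0 - d_m / max(d_b, d_m, 1e-12)      # CFI: > 0.92 acceptable
    tli = ((chi2_b / df_b) - ratio) / ((chi2_b / df_b) - 1.0)  # TLI: > 0.92
    rmsea = math.sqrt(d_m / (df_m * (n - 1)))   # RMSEA: < 0.07 acceptable
    return {"ratio": ratio, "CFI": cfi, "TLI": tli, "RMSEA": rmsea}

# Illustrative chi-square values only, not the study's estimates
print(fit_indices(chi2_m=300.0, df_m=140, chi2_b=4000.0, df_b=171, n=471))
```

In practice, these values are reported directly by SEM software such as AMOS; the sketch only makes the cutoff logic explicit.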

The reliability of the scale was assessed using ordinal Cronbach’s alpha (α), omega (ω), and composite reliability (CR) [75, 76]. The discriminant validity of the scale was evaluated by comparing the AVE of each dimension with the squared correlations between dimensions; discriminant validity was indicated by AVE values that exceeded the squared correlations [67, 68]. All analyses were performed using IBM SPSS Statistics 23 and AMOS Graphics 23.
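For reference, the reliability and convergent-validity quantities used here can be sketched in Python. The sketch uses classical (not ordinal, polychoric-based) Cronbach’s alpha as a simplification, and the standardized loadings are illustrative values, not the study’s estimates:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Classical Cronbach's alpha; items is (n_respondents, k_items)."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the sum score
    return float(k / (k - 1) * (1 - item_var / total_var))

def composite_reliability(loadings) -> float:
    """CR from standardized loadings, taking error variance as 1 - lambda^2."""
    lam = np.asarray(loadings, dtype=float)
    return float(lam.sum() ** 2 / (lam.sum() ** 2 + (1 - lam ** 2).sum()))

def ave(loadings) -> float:
    """Average variance extracted: mean squared standardized loading."""
    lam = np.asarray(loadings, dtype=float)
    return float((lam ** 2).mean())

# Illustrative standardized loadings for a four-item dimension
lams = [0.75, 0.80, 0.70, 0.78]
print(composite_reliability(lams), ave(lams))
```

Conventional rules of thumb accept α and CR above 0.70 and AVE above 0.50.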

3. Results

3.1. Evaluation of Common Method and Nonresponse Bias

Principal component analysis was used to evaluate common method bias. The first factor explained 39.74% of the variance in the data, suggesting a low risk of common method bias [77]. Nonresponse bias was evaluated by calculating the response rate and performing multiple imputation analyses. The response rate of 68.66% is high, and the multiple imputation analysis showed no significant differences between respondents and nonrespondents on any of the study variables [78]. Overall, these findings suggest that common method bias and nonresponse bias were not major problems in this study.
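The single-factor check reported above can be sketched as follows: extract the eigenvalues of the item correlation matrix and inspect the share of variance captured by the first principal component (a dominant first component, e.g. above 50%, would signal common method bias). The data here are random and purely illustrative:

```python
import numpy as np

def first_factor_share(data: np.ndarray) -> float:
    """Share of total variance explained by the first principal component
    of the item correlation matrix; data is (n_respondents, n_items)."""
    corr = np.corrcoef(data, rowvar=False)
    eigvals = np.linalg.eigvalsh(corr)   # eigenvalues in ascending order
    return float(eigvals[-1] / eigvals.sum())

# Illustrative only: independent random items, so no dominant factor emerges
rng = np.random.default_rng(42)
share = first_factor_share(rng.normal(size=(471, 19)))
print(share)
```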

3.2. Factorial Validity

In the CFA of the first-order and second-order models, six items were removed. The first four (CSE7, CSE8, DSE4, and SSE4) had factor loadings below 0.60 or the lowest loading within their dimension. The final two (CSE3 and CSE4) did not comply with the standardized residual covariance criterion [79]. A final review of the modification indices detected high covariances for CSE1, CSE2, SSE3, and SSE5 and for PSE4 and PSE5 [80]. The goodness-of-fit indices for the two models are presented in Table 5, and their illustrations are shown in Figure 1.

3.3. Analysis of Validity and Reliability

Each dimension of the Spanish version of the DSE scale had acceptable reliability indices. Table 6 presents an overview of the results of the reliability analysis.

To evaluate convergent validity, we used the average variance extracted (AVE) according to the criteria proposed by Fornell and Larcker [81]. To verify discriminant validity, we used the criterion proposed by Hair et al. [72], comparing the AVE values of two constructs with the square of the estimated correlation between them. The only dimensions that did not meet this criterion were CSE and ISE. Nevertheless, the content validity test performed when adapting and reviewing the scale confirms that each dimension measures a distinct construct [69]. Table 7 provides evidence for the convergent and discriminant validity of each construct.
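The Fornell-Larcker comparison can be sketched as follows. The AVE values and inter-construct correlations below are illustrative numbers, not the study’s estimates; the function flags each pair of constructs whose squared correlation reaches or exceeds either construct’s AVE:

```python
import numpy as np

def fornell_larcker(ave_values, factor_corr):
    """Return (i, j) pairs that violate the Fornell-Larcker criterion:
    the squared inter-construct correlation must stay below both AVEs."""
    ave_values = np.asarray(ave_values, dtype=float)
    r2 = np.asarray(factor_corr, dtype=float) ** 2
    violations = []
    k = len(ave_values)
    for i in range(k):
        for j in range(i + 1, k):
            if r2[i, j] >= min(ave_values[i], ave_values[j]):
                violations.append((i, j))
    return violations

# Illustrative only: constructs 0 and 1 correlate too highly (0.80**2 = 0.64 > 0.55)
ave_vals = [0.55, 0.60, 0.58]
corr = np.array([[1.0, 0.80, 0.40],
                 [0.80, 1.0, 0.35],
                 [0.40, 0.35, 1.0]])
print(fornell_larcker(ave_vals, corr))
```

A violation like the hypothetical pair (0, 1) above mirrors the CSE/ISE case reported in this study, where content validity evidence was used as a complementary argument.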

3.4. Evidence with External Variables

To evaluate the influence of DSE on task-technology fit and use of technology, we estimated structural equation models using first- and second-order specifications. These analyses revealed that the second-order model yielded better results than the first-order model. In the second-order model, DSE had a positive and statistically significant association with task-technology fit. The evidence also shows that DSE had a positive and statistically significant association with the use of technology. The goodness-of-fit indices for this specification were satisfactory. The path coefficients of the model estimates are shown in Figure 2.

4. Discussion and Conclusions

The increasing importance of technology in various spheres, including the workplace and education, underscores the pivotal role of Digital Self-Efficacy (DSE). Notably, individuals vary in their confidence levels regarding technology use, which contributes to a discernible digital divide. This discrepancy highlights the need for robust Digital Self-Efficacy measures to assess and address these disparities effectively.

Ulfert-Blank and Schmidt’s Digital Self-Efficacy Scale stands out as an innovative and improved alternative for measuring self-efficacy in digital environments. Unlike its predecessors, this scale avoids the pitfalls of obsolescence because it is not tied to specific technologies. Moreover, this instrument provides multidimensional assessment based on the DigComp framework. Both features enable adaptability and a broad understanding of individuals’ digital competence beliefs, which facilitate meaningful comparisons across diverse demographic groups.

Our study significantly contributes to the discussion on DSE, particularly for the Spanish-speaking population, speakers of the fourth most spoken language globally [64]. Validation of the Spanish version of the Digital Self-Efficacy (DSE) Scale provides a reliable tool for researchers and practitioners serving this linguistic demographic. The results of confirmatory factor analysis (CFA) showed that the second-order model had better fit indices than the first-order model, suggesting that the five dimensions of the DSE scale are well represented by a single higher-order construct. Moreover, structural equation modeling (SEM) analyses revealed a positive and significant association between DSE and task-technology fit, as well as the use of technology. This implies that heightened levels of DSE correlate with more effective and efficient technology use, along with a greater likelihood of finding technology suitable for work tasks.

The findings of this study are consistent with Ulfert-Blank and Schmidt [44]. Both studies found that DSE is a multidimensional construct that can be reliably measured, and both found that DSE is related to other measures of digital skills and self-efficacy. However, there were some important differences between the two studies. First, Ulfert-Blank and Schmidt [44] focused on the general population, whereas this study focused on a population of workers. Second, Ulfert-Blank and Schmidt [44] performed a nomological network analysis using a first-order model, whereas our study yielded better results with the second-order model. It is important to consider these differences when interpreting the results. Nevertheless, both studies suggest that DSE is an important construct related to other measures of digital skills and self-efficacy.

The implications of this study extend beyond academic realms to practical applications in Spanish-speaking populations. For researchers, the validated DSE scale serves as a key instrument for evaluating the association between Digital Self-Efficacy and work-related outcomes such as job performance, workers’ well-being, and work recovery. Practitioners benefit from interventions informed by this scale, fostering improvements in DSE.

However, acknowledging the limitations of this study is crucial. The relatively modest sample size, confined to workers in Ecuador, may restrict the generalizability of the findings. Additionally, the cross-sectional design prevented the establishment of causal relationships. Despite these constraints, this study provides valuable insights into Digital Self-Efficacy and lays the groundwork for further research and intervention strategies in Spanish-speaking populations.

Data Availability

The data used in this study are available upon request.

Conflicts of Interest

The authors declare no conflicts of interest.