Abstract

Artificial intelligence (AI) chatbots are set to be the defining technology of the next decade due to their ability to increase human capability at a low cost. However, more research is required to assess individuals’ behavioural intentions to use this technology when it becomes publicly available. This study applied an extended Technology Acceptance Model (TAM), with additional predictors of trust and privacy concerns, to assess individuals’ behavioural intentions to use AI chatbots across three industries: mental health care, online shopping, and online banking. These services were selected due to the current popularity of regular chatbots in these fields. Participants (N = 360; 202 females) aged between 17 and 85 years completed a 71-item online, cross-sectional survey. As hypothesised, perceived usefulness and trust were significant positive predictors of behavioural intentions across all three behaviours. However, the influence of perceived ease of use and privacy concerns on behavioural intentions differed across the three behaviours. These findings highlight that the combination of predictors within the extended TAM has different influences on behavioural intentions to use AI chatbots for mental health care, online shopping, and online banking. This research contributes to the literature by demonstrating that the influence of the variables in one field cannot be generalised across all uses of AI chatbots.

1. A Multi-Industry Analysis of the Future Use of AI Chatbots

Artificial intelligence (AI) will be integrated into everyday technology throughout the next decade to facilitate services. AI can be defined as an intelligent technical learning system that can autonomously perform human-based tasks [1, 2]. Currently, chatbots rely on a human engineer’s intelligence to continuously update the system (i.e., they are not autonomous). AI chatbots will differ from current conversational and messenger chatbots (e.g., Google Home and Siri) in that they will autonomously identify patterns, trends, and meaning in data that is too intricate for human programmers or conventional machines to process, in order to mimic human-like conversation [3, 4]. It has been suggested that AI chatbots will reduce costs for companies by decreasing human input and will increase user value by engaging in intelligent conversations that reduce the time it takes a human to achieve a task and improve their performance [5–10]. While chatbots are currently popular, it is predicted that AI chatbots will advance the chatbot market, which is projected to be worth over AUD $100 billion by 2025 [10, 11].

AI is increasingly present in daily life, infiltrating industries from health care to recruitment [12]. The current dominant AI paradigm functions on trained data. Specifically, a popular form of AI relies on machine learning, which learns from inputted data or data from the machine’s experience [13]. The system is then taught values via conditioning. Such AI is used in various functions, from voice recognition software like Siri and Alexa to medical diagnosis through pattern recognition techniques [14, 15]. Other examples of current AI include FN Meka, an AI rapper signed to Capitol Records, and research at Columbia University that taught a robot to visualise itself through AI [16]. Worldwide, governments and organisations are developing frameworks for the development and enactment of AI in daily life. In a 2022 survey, 77% of 850 organisations across 18 geographies stated they were prioritising AI regulation as a companywide policy, and 80% said they would increase investment in the development of ethical AI [17].

Despite this growth, the current use of chatbots has revealed that this technology is not always accepted. Meta released an AI chatbot (BlenderBot) in August 2022. As it was built to learn from its interactions, it was not long before the chatbot made untrue and offensive statements. Such harmful examples of chatbots may have negative implications for future intentions to use chatbots, as media exposure affects people’s perceptions of technology [18]. This is highlighted by a study which found that previous cognitive evaluations of chatbots impacted attitudes towards future use of this technology [19]. For instance, one study found that disclosing the chatbot’s identity before a conversation reduced the purchase rate by almost 80% and reduced conversation length compared to conversations with humans [4]. Similarly, other studies have found that users can identify when the service agent is a chatbot and will subsequently change their language to be curter and more profane, and reduce their purchases [4, 20]. User acceptance of AI chatbots is a critical factor for success as it maximises the uptake and prevents misuse of the technology [21, 22]. More research is required to comprehend how individuals intend to use this technology so efforts can be made today to develop successful behaviour change interventions to enhance user acceptance.

Chatbots have produced a positive user experience and increased brand value. Trivedi [23] recruited 258 Gen Y Indian participants to research the impact of quality dimensions on customer experience and brand love. They found that system quality created a positive customer experience of using a chatbot, which positively affected love for the bank brand [23]. Positive use of chatbots has also been studied in fields such as psychology for therapy [1], health care for disease diagnosis [24], and online banking for financial transactions [25]. However, previous research has been chiefly industry-specific, limiting the application of the findings to other contexts.

Mental health chatbots, such as Woebot and TESS, implement Cognitive Behavioural Therapy (CBT), a form of therapy which posits that thoughts, behaviours, and feelings mutually affect each other (Brewin, 1996), to treat mood disorders such as depression and anxiety [26, 27]. Additionally, therapeutic chatbots help to combat difficulties in users’ everyday lives, such as interpersonal conflict and occupational stress [26, 27]. However, chatbots are not yet able to diagnose psychological disorders [26]. In comparison, AI chatbots will define the future of telehealth, as they are predicted to remotely diagnose patients at a low cost [28, 29]. Research reveals promising results of such technology use, with outcomes similar to face-to-face therapy [30, 31]. However, studies have shown differing results in user acceptance of this technology. A systematic review of user acceptance of computerised CBT agents (e.g., chatbots) demonstrated that users were predominately accepting of this technology [32]. Meanwhile, another study stated that the inability to form a therapeutic alliance with a chatbot is a contextual hindrance to user acceptance of AI chatbots for mental health care [33].

Online shopping chatbots deliver prompts and customer service updates via messages, decreasing the cost of human-assisted support for companies [6, 9]. In contrast, AI online shopping chatbots are predicted to be able to retain user information, autonomously upsell products, and deliver real-time analytics [34, 35]. Chatbots are currently popular amongst millennials, with 25% of 6,090 individuals aged between 19 and 34 years opting to use a chatbot for personal online shopping [36]. Furthermore, popular brands, such as Sephora and H&M, have introduced chatbots to handle queries and requests from consumers [37]. De Cicco et al. [38] report that users expressed positive sentiments about using online shopping chatbots due to their efficiency.

Messenger chatbots are used in online banking to help manage customers’ accounts like an accountant or financial advisor would [39]. Australian banks, such as Commonwealth Bank, NAB, and Westpac, offer customer service chatbots. However, like online shopping chatbots, online banking chatbots cannot retain user information once the session is complete [39]. AI online banking chatbots will deliver value to the consumer by providing personalised, automatic services such as money transfers, account adjustments, and financial advice [40].

There is a practical need to assess user acceptance of AI chatbots across different human services to comprehend how individuals intend to use this technology and what is required to enhance user acceptance. While research about chatbots has been conducted in related fields, it may not apply to AI chatbots, as this technology is still largely in development. Therefore, research into AI chatbots is required, as a theoretical understanding of the factors that predict behavioural intention is crucial for maximising the uptake of AI chatbots for future end-users and guiding human-centred design initiatives. As such, the implications of this research are immediately actionable. Further, while previous work has focused on user acceptance of chatbots [24, 32, 38], more research is required to apply theoretical models to measure user acceptance of AI chatbots across various industries.

The current research aimed to explore the utility of an extended Technology Acceptance Model ([TAM]; Davis et al., [41]) in predicting individuals’ future behavioural intentions to use AI chatbots in three scenarios: mental health care, online shopping, and online banking. These services were selected as regular chatbots are frequently used in these three contexts to aid customer service. The TAM was chosen due to its frequent use in the literature and its ability to accommodate additional variables. As such, this research provides a unique opportunity to extend prior research to understand how to increase user acceptance of AI chatbots across multiple industries [42].

Previous research has adopted the TAM [41] to examine user acceptance of a range of technological systems, such as electronic commerce applications (e.g., [39]) and automated vehicles (e.g., [43]). The TAM has repeatedly emerged as the primary theory adopted to analyse human behavioural intentions to use technology throughout the published literature. The TAM postulates that perceived usefulness (PU) and perceived ease of use (PEOU; [41]) are the primary cognitive factors that influence an individual’s behavioural intention (i.e., the likelihood that an individual will perform a behaviour [44]), which in turn influences actual system usage [41]. PU is defined as the degree to which a user perceives the technology as useful to their everyday life [41]. PEOU refers to a user’s perception of how effortless a particular technological device would be to use [41]. Recently, Ashfaq et al. [45] found that users’ PU and PEOU with customer service chatbots depended on their need for human interaction. However, this study was limited to the acceptance of non-AI chatbots. Given that few studies have examined user acceptance of AI chatbots, this research will ascertain the relevance of the TAM constructs (PU and PEOU) in predicting users’ behavioural intentions to use AI chatbots for mental health care, online shopping, and online banking.
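To make the model structure concrete, the basic TAM relationship can be written as a simple linear specification. This is an illustrative formulation consistent with the hierarchical regressions reported later in this paper, not an equation taken verbatim from Davis [41]:

$$\mathrm{BI} = \beta_0 + \beta_1\,\mathrm{PU} + \beta_2\,\mathrm{PEOU} + \varepsilon$$

where BI denotes behavioural intention and $\varepsilon$ is the residual error. The extended TAM examined here adds terms for trust and privacy concerns to this baseline.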

While the TAM is a robust theoretical model for assessing behavioural intentions, technology has developed since Davis proposed the model in 1989. To keep pace with advancing technology, scholars have included trust, in addition to PU and PEOU, when studying predictors of behavioural intentions [34, 46–50]. Trust allows humans to enter a vulnerable situation based on an anticipated positive outcome [51]. For example, Buckley et al. [43] found that trust significantly accounted for additional variance in drivers’ future behavioural intentions to use automated vehicles, above and beyond the TAM predictors of PU and PEOU. Additionally, Miltgen et al. [22] found that when trust in technology was implemented as a predictor variable in the model, it weakened the influence of PU on behavioural intentions. Therefore, it may be that trust and PU share some variance in predicting behavioural intentions.

While trust has been found to be the strongest positive predictor of behavioural intentions [22], privacy concerns have also been a significant negative predictor of behavioural intentions [52, 53]. Luo et al. [49] found that perceived privacy concerns were a significant negative predictor of behavioural intentions to use mobile banking services amongst 122 American undergraduate students. Supporting Luo et al. [49], both Phelps et al. [54] and Ward et al. [53] found that consumers are highly sensitive about revealing their financial information via online banking services.

Studies have found that personal characteristics, such as pre-existing knowledge, age, and gender, significantly influence behavioural intentions [49, 55]. For instance, it has been ascertained that pre-existing knowledge positively influences behavioural intentions [9, 22, 49, 55–59]. Meanwhile, age has been reported to have a negative relationship with behavioural intentions in the extant literature on technology usage [9, 25, 60]. Goot et al. [60] stated that older adults prefer a “human touch”. In contrast, younger adults use chatbots to actively avoid human contact [60]. Studies have also established that gender significantly predicts behavioural intentions to use technology [43, 49, 59]. For example, in a sample of 342 participants (186 males), Venkatesh et al. [59] found that females were more likely than males to be influenced by the PEOU of the device. Males’ behavioural intentions, by contrast, were more strongly impacted by PU [59]. The studies presented thus far provide evidence that personal characteristics influence behavioural intentions. However, personal characteristics are not the sole predictors of behavioural intentions [41, 43, 49]. The current study aimed to move beyond personal characteristics to assess what role PU, PEOU, privacy concerns, and trust play in predicting future behavioural intentions to use AI chatbots (see Figure 1).

In light of the research above, it was hypothesised that:

H1. Gender, PU, PEOU, trust, and pre-existing knowledge would have a significant positive relationship with behavioural intentions across all three scenarios

H2. Privacy concerns and age would have a significant negative relationship with behavioural intentions across all three scenarios

H3. Age, gender, and pre-existing knowledge would significantly predict behavioural intentions to use AI chatbots across all three scenarios

H4. Consistent with the TAM, PU and PEOU would significantly predict behavioural intentions to use AI chatbots across all three scenarios, above and beyond personal characteristics

H5. Privacy concerns and trust would significantly predict behavioural intentions to use AI chatbots across all three scenarios, above and beyond the TAM predictors of PU and PEOU

We also proposed the following research question:

RQ1. Do individuals prefer humans to AI chatbots in mental health care, online shopping, or online banking?

2. Method

2.1. Participants

This study included 360 participants aged between 17 and 85 years, recruited from the university population (predominately students and staff, as a convenience sample) and the general community. Of these participants, 128 were undergraduate psychology students at Queensland University of Technology (QUT). Participants comprised 153 males (42.5%) and 202 females (56.1%). Two participants identified their gender as “other”. Two participants responded that they would prefer not to disclose their gender; one participant’s gender was unknown. Inclusion criteria mandated that participants were aged 17 or older. Participation in this study was voluntary. First-year undergraduate psychology students were offered partial course credit (0.5%), and all other participants were offered entry into a prize draw to receive one of 10 AUD $20 shopping vouchers. Ten emails were randomly chosen via randomisation software, and gift vouchers were emailed to the prize winners.

2.2. Design

A cross-sectional, within-subjects research design was used to assess participants’ future behavioural intentions to use AI chatbots. AI chatbots were defined to the participants as “AI programs that can engage in human-like conversations with users by using natural language processing for a broad range of applications. For instance, messenger chatbots may be found on online shopping websites to aid customer purchase behaviour.” The independent variables were PU, PEOU, privacy concerns, trust, pre-existing knowledge, age, and gender. The dependent variable was behavioural intentions to use AI chatbots. These variables were assessed in three scenarios: AI mental health chatbots, AI online banking chatbots, and AI online shopping chatbots.

2.3. Materials and Measures
2.3.1. Survey

An online Qualtrics survey with 71 items was used to assess participants’ behavioural intentions. This survey included measures of personal characteristics, technology acceptance, trust, and privacy concerns. Only the measures relevant to the current study are reported here.

2.3.2. TAM

The TAM [41] was applied to assess the influence of participants’ PU and PEOU on their behavioural intentions to use AI chatbots. Prior to answering these questions, participants were instructed to answer each question in relation to AI chatbots. For each of the three scenarios, five items assessed PU, for example, “I think using a chatbot would make it easier for me to shop online”. Three items assessed PEOU in each scenario, for example, “I think learning to use an online shopping chatbot would be easy to use”. Behavioural intentions were assessed with three items in each scenario, for example, “I intend to use an AI chatbot for online shopping in the future”. These questions were adapted from Cheng et al. [39], Davis [41], and van Eeuwen [9]. Participants rated their responses on a 5-point Likert scale, from 1 (strongly disagree) to 5 (strongly agree). Higher scores on these items reflect higher PU, higher PEOU, and higher behavioural intentions. Cronbach’s alpha values for the variables were acceptable (see Table 1).
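For reference, the internal consistency of each multi-item scale can be computed with the standard Cronbach's alpha formula. The following minimal Python sketch assumes a respondents-by-items response matrix; the data are randomly generated stand-ins (so alpha will be near zero), as the study's data set is restricted:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) response matrix."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)      # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of scale totals
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical 5-point Likert responses to the five PU items (N = 360)
rng = np.random.default_rng(0)
pu_items = rng.integers(1, 6, size=(360, 5)).astype(float)
print(f"alpha = {cronbach_alpha(pu_items):.2f}")
```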

2.3.3. Privacy Concerns and Trust

A 5-point Likert scale was used to measure privacy concerns and trust, with responses rated from 1 (strongly disagree) to 5 (strongly agree). Seven items were adapted from Dinev and Hart [61] to assess privacy concerns in each scenario, for example, “It would be risky for me to use an AI chatbot to online shop”. Consistent with previous research [61], two items were used to assess trust in each scenario, for example, “I trust chatbots for my shopping needs” and “Online shopping chatbots are a trustworthy channel for me to share my personal details”. Higher scores on these items reflect higher levels of privacy concerns and trust. Cronbach’s alpha values for the variables were acceptable and are presented in Table 1.

2.3.4. Preferences Between AI Chatbot and Human

At the end of each scenario, participants were asked whether they would prefer to use an AI chatbot or a human in the future. Participants who chose a real person were then asked, “If you chose a real person, why do you not want to interact with an AI chatbot?”. Content analysis was used to uncover themes from the data (see Section 2.5.3).

2.4. Procedure

The study was approved by the University Human Research Ethics Committee of Queensland University of Technology (approval number: 200000031). Participants were recruited via QUT’s classified email list, QUT’s psychology Facebook page, and paid social media advertising on Facebook and Instagram. Participants were also approached via the first-year psychology participant pool and online advertising. Additionally, the research team recruited students and community members by word of mouth or in person. Participants were asked to complete a 71-item Qualtrics online survey. The survey assessed the participants’ demographic information (e.g., age and gender). Participants’ pre-existing knowledge of AI chatbots and current technology usage was then assessed. The participants were then provided with the definition of AI chatbots: “AI programs that can engage in human-like conversations with users by using natural language processing for a broad range of applications”. Following this, three separate scenarios focusing on mental health care, online shopping, and online banking were presented to the participants. For mental health care, participants were told, “Messenger AI chatbots can be utilised to provide online mental health care. Currently, chatbot-assisted therapy such as Woebot and TESS are efficient and valid methods to treat mood disorders such as depression and anxiety as well as combat difficulties in users’ everyday life.” Online shopping chatbots were defined as follows: “Messenger AI chatbots can be utilised by online shopping websites to efficiently aid customers’ purchase decisions. Customers can use these chatbots in place of personal shoppers or customer relations managers to field any questions about the products, sizing or styling.” Online banking chatbots were presented as follows: “Messenger AI chatbots can be used in online banking to help users with their everyday transactions as well as provide information about additional products and services.” Participants’ behavioural intentions to use AI chatbots were assessed in each scenario. The order of the scenarios was randomised to control for order and fatigue effects. On average, it took the participants 20 minutes to complete the online questionnaire. The survey was conducted from July to December 2020.

2.5. Data Analysis
2.5.1. Data Preparation

Before data analysis was initiated, an inspection of Cook’s distance, the Mahalanobis distance, and studentized deleted residuals revealed outliers in the data for the mental health care (2 outliers), online shopping (5 outliers), and online banking (8 outliers) scenarios. Visual inspection of the residual scatterplots confirmed the existence of outliers. The hierarchical regressions were conducted with and without these outliers. Removing the outliers significantly changed the findings; therefore, the statistical analyses for online shopping and online banking were conducted with the revised data sets. Visual assessment indicated that the data were normally distributed, linear, and homoscedastic. Skewness and kurtosis values were within the recommended range, and there was no evidence of multicollinearity (Bowerman & O’Connell, 1990). All significance values were assessed against the conventional alpha level.
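As a point of reference, the outlier and multicollinearity screening described above can be reproduced with standard regression diagnostics. The sketch below uses synthetic data and hypothetical column names (the real data set is restricted) to compute Cook's distance, studentized deleted residuals, Mahalanobis distances, and variance inflation factors:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import OLSInfluence, variance_inflation_factor

# Synthetic stand-in for the survey data (column names are hypothetical)
rng = np.random.default_rng(1)
df = pd.DataFrame(rng.normal(3.0, 1.0, size=(360, 8)),
                  columns=["age", "gender", "knowledge", "pu", "peou",
                           "privacy", "trust", "bi"])

X = sm.add_constant(df.drop(columns="bi"))
fit = sm.OLS(df["bi"], X).fit()
influence = OLSInfluence(fit)

cooks_d = influence.cooks_distance[0]               # Cook's distance per case
studentized = influence.resid_studentized_external  # studentized deleted residuals

# Mahalanobis distance of each case from the predictor centroid
Z = df.drop(columns="bi").to_numpy()
diff = Z - Z.mean(axis=0)
inv_cov = np.linalg.inv(np.cov(Z, rowvar=False))
mahalanobis_sq = np.einsum("ij,jk,ik->i", diff, inv_cov, diff)

# Variance inflation factors for the multicollinearity check
vifs = {col: variance_inflation_factor(X.to_numpy(), i)
        for i, col in enumerate(X.columns) if col != "const"}
print(vifs)
```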

2.5.2. Extended TAM

Descriptive data are presented first, followed by bivariate correlations to assess H1 and H2, and three hierarchical regressions to assess H3, H4, and H5. Personal characteristics (i.e., age, gender, and pre-existing knowledge) were entered in Step 1 of each hierarchical regression. Next, PU and PEOU were entered in Step 2, and privacy concerns and trust were entered in Step 3. The entry of personal characteristic variables in Step 1, the TAM constructs in Step 2, and the additional variables in Step 3 is consistent with previous research that has applied an extended TAM to assess future behavioural intentions to use advanced technologies (e.g., Buckley et al. [43]).
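A minimal sketch of this three-step entry procedure is shown below, fitting nested ordinary least squares models and testing the R² change at each step with an F-test. The data and column names are hypothetical stand-ins, not the study's actual data or code:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic stand-in for the survey data (the real data set is restricted)
rng = np.random.default_rng(2)
df = pd.DataFrame(rng.normal(3.0, 1.0, size=(360, 8)),
                  columns=["age", "gender", "knowledge", "pu", "peou",
                           "privacy", "trust", "bi"])

blocks = [["age", "gender", "knowledge"],  # Step 1: personal characteristics
          ["pu", "peou"],                  # Step 2: TAM constructs
          ["privacy", "trust"]]            # Step 3: privacy concerns and trust

predictors, prev_fit = [], None
for step, block in enumerate(blocks, start=1):
    predictors += block
    fit = sm.OLS(df["bi"], sm.add_constant(df[predictors])).fit()
    delta_r2 = fit.rsquared - (prev_fit.rsquared if prev_fit else 0.0)
    line = f"Step {step}: R2 = {fit.rsquared:.3f}, delta R2 = {delta_r2:.3f}"
    if prev_fit is not None:
        # F-test for the significance of the R^2 change between nested models
        f_stat, p_val, _ = fit.compare_f_test(prev_fit)
        line += f", F-change = {f_stat:.2f}, p = {p_val:.3f}"
    print(line)
    prev_fit = fit
```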

2.5.3. Open-Ended Questions

A deductive content analysis was undertaken by the first author to review the responses to the open-ended question, “If you chose a real person, why do you not want to interact with an AI chatbot?” which was presented for each scenario. The participants’ written comments were compiled into a Microsoft Excel document and classified into themes by reviewing the frequency of the content mentioned in each response. The co-authors reviewed the themes and provided feedback. The themes were identified as (i) loss of humanity concerns, (ii) concerns about job loss, (iii) privacy concerns, and (iv) inadequate skill concerns.
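Once comments have been coded, theme frequencies can be tallied programmatically, as in the minimal sketch below. The comments and theme assignments shown are entirely hypothetical; in the study, coding was performed manually in Microsoft Excel:

```python
from collections import Counter

# Hypothetical (comment, theme) pairs standing in for manually coded responses
coded_comments = [
    ("I want empathy from a real person", "loss of humanity"),
    ("Chatbots will take jobs away from people", "job loss"),
    ("I don't trust it with my personal data", "privacy"),
    ("A bot can't understand my situation", "inadequate skill"),
    ("It won't grasp the nuance of my request", "inadequate skill"),
]

theme_counts = Counter(theme for _, theme in coded_comments)
for theme, n in theme_counts.most_common():
    print(theme, n)
```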

3. Results

3.1. Descriptive Data

The descriptive data for each of the scales in the study are presented in Table 1. Participant means for trust and behavioural intentions were moderate, indicating that most participants were relatively neutral on how these factors influenced their acceptance of AI chatbots. PU and PEOU demonstrated higher means, indicating that the participants agreed that AI chatbots would be useful and easy for them to use. Similarly, the high mean for privacy concerns represents the participants’ agreement that they would be concerned for their privacy when using AI chatbots for mental health care, online shopping, and online banking. Figure 2 displays the mean scores across each condition.

3.2. Model Comparison

Repeated measures ANOVAs were conducted to compare mean scores for each variable across the three industries. As the number of comparisons was equal to three, no adjustment was required. The findings revealed a significant difference in PU across the conditions. Participants had significantly higher mean scores on PU for online shopping when compared to PU for online banking. Participants also had a significantly higher mean score for mental health care compared to online banking. However, there was no significant difference in PU between online shopping and mental health care.
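The repeated measures ANOVAs and follow-up pairwise comparisons reported in this section can be illustrated with the sketch below, which applies statsmodels' AnovaRM to long-format data and paired t-tests for the follow-ups. The data are synthetic and the column names hypothetical (the study's data set is restricted):

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM
from scipy.stats import ttest_rel

# Synthetic long-format PU scores: one rating per participant per scenario
rng = np.random.default_rng(3)
long = pd.DataFrame({
    "subject": np.repeat(np.arange(360), 3),
    "scenario": np.tile(["mental_health", "shopping", "banking"], 360),
    "pu": rng.normal(3.5, 0.8, size=360 * 3),
})

# Omnibus repeated measures ANOVA across the three scenarios
print(AnovaRM(long, depvar="pu", subject="subject", within=["scenario"]).fit())

# Follow-up pairwise comparisons between scenarios
wide = long.pivot(index="subject", columns="scenario", values="pu")
for a, b in [("shopping", "banking"), ("shopping", "mental_health"),
             ("mental_health", "banking")]:
    t, p = ttest_rel(wide[a], wide[b])
    print(f"{a} vs {b}: t = {t:.2f}, p = {p:.3f}")
```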

There was a significant difference in PEOU across the conditions. Participants had significantly higher mean scores on PEOU for online shopping when compared to PEOU for mental health care. Participants also had a significantly higher mean score for online banking compared to mental health care. There was no significant difference in PEOU between online banking and online shopping.

There was a significant difference in privacy concerns across the conditions. Participants had significantly higher mean scores in privacy concerns for online banking when compared to privacy concerns for online shopping, higher mean scores in privacy concerns for mental health care compared to privacy concerns for online shopping, and higher mean scores in privacy concerns for online banking compared to privacy concerns for mental health care.

There was a significant difference in trust across the conditions. Participants had significantly higher mean scores in trust for online shopping when compared to trust for online banking and higher mean scores in trust for online shopping compared to trust for mental health care. However, there was no significant difference in trust between online banking and mental health care.

There was a significant difference in behavioural intentions across the conditions. Participants had significantly higher mean scores in behavioural intentions for online shopping when compared to behavioural intentions for online banking and higher mean scores in behavioural intentions for online shopping compared to behavioural intentions for mental health care. However, there was no significant difference in behavioural intentions between online banking and mental health care. It is worth noting that mean scores for both trust and behavioural intentions were approximately ‘2’ on the 5-point scale, indicating that participants predominately selected that they ‘disagree’ with the statements regarding trusting AI chatbots and intending to use AI chatbots in the future.

3.3. Bivariate Relationships

The bivariate correlations between the independent and dependent variables are shown in Tables 2, 3, and 4. Gender, PU, PEOU, and trust significantly and positively correlated with behavioural intentions. Notably, pre-existing knowledge was not significantly related to behavioural intentions in the online banking scenario. Further, privacy concerns and age had a significant negative relationship with behavioural intentions for all three scenarios. Therefore, H1 was only partially supported, while H2 was supported.
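For illustration, the bivariate correlations reported in Tables 2, 3, and 4 correspond to a simple correlation of each predictor with the outcome, as sketched below with synthetic data and hypothetical column names:

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for the survey data (the real data set is restricted)
rng = np.random.default_rng(4)
df = pd.DataFrame(rng.normal(3.0, 1.0, size=(360, 8)),
                  columns=["age", "gender", "knowledge", "pu", "peou",
                           "privacy", "trust", "bi"])

# Pearson correlations of each predictor with behavioural intentions
print(df.corr()["bi"].drop("bi").round(2))
```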

3.4. Hierarchical Regressions
3.4.1. Mental Health Care

In Step 1, age, gender, and pre-existing knowledge significantly accounted for 7.5% of the variance in behavioural intentions to use an AI chatbot for mental health care in the future. While age was a significant negative predictor of behavioural intentions, gender and pre-existing knowledge were significant positive predictors of behavioural intentions (see Table 5). As such, H3 was supported.

Next, PU and PEOU were entered into Step 2. There was a significant increase in the variance explained in behavioural intentions, and the entire model remained significant, F(5, 333) = 118.04. At Step 2, age, gender, and pre-existing knowledge became nonsignificant predictors of behavioural intentions. PU was a significant positive predictor of behavioural intentions to use AI chatbots for mental health care (see Table 5). Further, PU explained the most unique variance in behavioural intentions. PEOU did not significantly contribute to the regression. As such, H4 was partially supported.

Privacy concerns and trust were entered into Step 3 of the hierarchical regression. This significantly increased the variance explained in behavioural intentions, and the entire model remained significant. At Step 3, PU and trust were significant positive predictors of intentions. Privacy concerns were not a significant predictor of behavioural intentions to use AI chatbots for mental health care. Therefore, H5 was partially supported.

3.4.2. Online Shopping

When entered into Step 1, age, gender, and pre-existing knowledge accounted for 8% of the variance in behavioural intentions to use an AI chatbot for online shopping in the future. Age was a significant negative predictor of behavioural intentions, and gender and pre-existing knowledge were significant positive predictors of behavioural intentions (see Table 5). As such, H3 was supported in this model.

PU and PEOU were entered into Step 2 of the hierarchical regression. There was a significant increase in the variance explained in behavioural intentions, and the entire model remained significant. At Step 2, age and gender were no longer significant predictors of intentions. Meanwhile, PU and PEOU were significant positive predictors of behavioural intentions to use AI chatbots for online shopping, supporting H4 (see Table 5).

Privacy concerns and trust were entered into Step 3 of the hierarchical regression. This significantly increased the variance explained in behavioural intentions, and the entire model remained significant. At Step 3, pre-existing knowledge, PU, privacy concerns, and trust were all significant positive predictors of behavioural intentions to use an AI chatbot for online shopping. Therefore, H5 was supported.

3.4.3. Online Banking

Age, gender, and pre-existing knowledge accounted for 5.1% of the variance in the participants’ behavioural intentions to use an AI chatbot for online banking in the future. Age was a significant negative predictor of behavioural intentions. Gender and pre-existing knowledge did not significantly predict behavioural intentions in this model (see Table 5). Therefore, H3 was partially supported.

PU and PEOU were entered into Step 2. This resulted in a significant increase in the variance explained in behavioural intentions, and the entire model remained significant. In Step 2, age became a nonsignificant predictor of behavioural intentions, while PU and PEOU were significant positive predictors. Therefore, H4 was supported.

Privacy concerns and trust were entered into Step 3 of the hierarchical regression. This significantly increased the variance explained in behavioural intentions, and the entire model remained significant. At Step 3, PU and trust were significant positive predictors of behavioural intentions to use an AI chatbot for online banking. Privacy concerns were not a significant predictor of behavioural intentions in this final step. As such, H5 was partially supported.

3.5. User Preferences Between AI Chatbot and Human

To answer RQ1, participants were asked at the end of each scenario whether they would prefer to use an AI chatbot or a human in the future (see Table 6). Seventy participants, of varying gender and age, provided comments in response to this question, with a similar number of comments across the scenarios. Using content analysis, 184 comments were categorised into four broad themes based on the frequency of responses: loss of humanity concerns, concerns about job loss, privacy concerns, and inadequate skill concerns. Table 7 presents example quotes for each theme, and Figure 3 shows the total number of comments received by theme and scenario.

4. Discussion

This investigation supports the utility of an extended TAM in predicting individuals’ behavioural intentions to use AI chatbots for mental health care, online shopping, and online banking. Further, this investigation assessed whether age, gender, and pre-existing knowledge influenced future behavioural intentions to use AI chatbots. Overall, the hypotheses were partially supported, and the findings demonstrated that the variables within the extended TAM (i.e., PU, PEOU, privacy concerns, and trust) influenced behavioural intentions to use AI chatbots differently in each of the three scenarios. By applying the same model across three industries, we have provided a comprehensive reference for applying an extended TAM in a multi-industry analysis of user acceptance.

4.1. Suitability of the TAM to Model Factors Influencing Intention to Use AI Chatbots

PU positively predicted behavioural intentions to use AI chatbots for mental health care, online shopping, and online banking (H1 and H4). This finding supports prior studies which have demonstrated that PU is the most influential predictor in the TAM [39, 41, 59, 62–67]. PEOU was a positive predictor of behavioural intentions to use online shopping and online banking AI chatbots in the second step of the hierarchical regressions (H1 and H4). These findings support previous research, which has shown that PEOU has a significant influence on behavioural intentions, albeit a weaker one than PU [21, 41, 59, 68]. However, PEOU was not a significant predictor of behavioural intentions to use AI chatbots for mental health care, online shopping, or online banking once privacy concerns and trust were entered into the model. This finding fits Mun et al.’s [69] proposal that PEOU may become redundant when there is a high level of cognitive ability amongst future users. As 128 of the 360 participants in this study were university students, it is plausible that close to a third of the sample regularly interacts with technological devices and therefore had some experience with online shopping and/or online banking chatbots [70]. Another possible explanation is that the participants’ trust and privacy concerns overrode the need for this technology to be easy to use. This fits with Gefen et al. [34], who found that trust is as important as PU and PEOU to online shoppers.

Alongside PU, trust was a significant positive predictor of behavioural intentions in all three scenarios (H5). In accordance with the present results, prior research also found that trust was a significant positive predictor of behavioural intentions when included with the TAM predictors of PU and PEOU [43, 71]. Privacy concerns were not a significant predictor of behavioural intentions for the mental health care and online banking scenarios, contradicting previous literature in this field (H5) [49, 53, 54, 72, 73]. This finding differs from the extant literature, which posits that users frequently associate improved technological capabilities, such as AI, with increased threats to their privacy [22, 24, 49, 61]. It is instructive to reflect upon Dinev and Hart’s [61] study, which stated that behavioural intentions to use technology that requires the disclosure of personal information are not due to a lack of privacy concerns but to a combination of other factors. In the case of AI chatbots, this may be due to the positive influence of trust in the extended TAM.

Privacy concerns were a significant positive predictor of behavioural intentions to use AI chatbots for online shopping. The differing outcomes between the three scenarios support previous findings that participants alter their privacy concerns depending on the context [49, 53, 54, 72, 73]. The positive direction of privacy concerns in the final step of the online shopping regression contradicts previous findings that privacy concerns are a negative predictor of behavioural intentions [49, 52, 53]. To further investigate this effect, each variable was entered alongside privacy concerns one at a time. Privacy concerns remained a negative predictor of behavioural intentions until trust was entered into the model. This finding may be a consequence of the changes experienced during the COVID-19 pandemic, when the data for this study were primarily collected: Australians increasingly accepted online shopping independently of their growing privacy concerns. Further research is needed to understand the underlying reasons for this finding.

4.2. Personal Characteristics

Pre-existing knowledge was a significant positive predictor of behavioural intentions to use AI chatbots for online shopping when all predictors were entered into the model (H3). Comparatively, online shopping was the scenario in which the highest number of participants stated that they would use AI chatbots in the future. It may be that the recent proliferation of chatbots on popular websites, such as H&M (a clothing brand store), resulted in users feeling a sense of familiarity with this technology, which positively influenced their behavioural intentions. It is speculated that user familiarity with chatbots was heightened during the period this study was undertaken (i.e., from July to November 2020) due to the increase in online shopping driven by COVID-19 restrictions (i.e., a decrease in in-person shopping). As such, prior experience with chatbots, when available, may positively influence individuals’ behavioural intentions to use AI chatbots for online shopping. Interestingly, this effect was not found for online banking or mental health care. It is plausible that the participants had not used online banking or mental health care chatbots to the same extent as online shopping chatbots, reducing the influence of pre-existing knowledge when included in the extended TAM for these two scenarios. However, further research is required to assess whether this is the case. Notably, age and gender were not significant predictors in the final extended TAM, consistent with the expectation that the TAM constructs would predict intentions above and beyond personal characteristics (H4).

4.3. User Preferences between AI Chatbots and Humans

In all three scenarios, participants preferred a human over an AI chatbot. Skill was reported as the primary determinant of participants’ unwillingness to use AI chatbots for mental health care and online shopping. Specifically, responses indicated that some participants were concerned about the skill and ability required to process requests and needs in these scenarios. This implies that education and trust in AI may be required to facilitate user acceptance of AI agents. Meanwhile, some participants reported that they were hesitant to accept AI chatbots in the online banking scenario because they feared losing the human connection when consulting a professional. The finding that participants prefer to be attended to by humans in banking scenarios indicates that AI agents may need to adopt humanistic qualities to enhance user acceptance. Alternatively, AI chatbots for online banking may need to be used in tandem with humans to facilitate effective and accepted services. The four categories displayed in Figure 3 (i.e., humanity, jobs, privacy, and skill) reflect their relative representation in the qualitative analysis. These findings imply that stakeholders must address societal and systemic behaviours to increase user acceptance of AI-based technologies such as AI chatbots. Perhaps this could be achieved by increasing public knowledge of the capabilities of AI chatbots via mass media education campaigns or other communication strategies, as research has demonstrated that individuals do not necessarily understand what AI can do in different contexts [74].

4.4. Multi-Industry Comparison

The repeated measures ANOVAs, which compared the influence of the variables in each condition, showed that the independent variables differed significantly across the three industries. A prominent pattern was that the acceptance of AI-based chatbots for online shopping appeared more favourable than in the other industries. A potential explanation is that the consequences of bad service in retail are perceived as significantly lower than those for online banking and mental health care services. It is important to highlight that the influence of each independent variable appears to differ across the applications considered in this study. This further supports the need to conduct industry-specific analyses, as the specific application of the AI chatbot plays a role in its acceptance. As few studies have conducted a multi-industry analysis of technology acceptance, it is recommended that future research use this methodology.

4.5. Theoretical and Practical Implications

This investigation’s findings have theoretical and practical implications for the human-computer interaction field. This study extended previous research by proposing an extended TAM to study future behavioural intentions to use AI chatbots. The results showed that the TAM is a valuable and reliable theory for analysing user acceptance of technology that is not yet available. These results are consistent with research by Oviedo-Trespalacios et al. [75], who analysed future mobile phone applications. Additionally, this study highlighted the importance of considering constructs such as trust and privacy when investigating factors influencing technology acceptance. Specifically, trust was demonstrated to be a key variable across different industries.

This study highlighted that the significance of the same variables in the extended TAM differed between each scenario, with differing effects on behavioural intentions. This means that the extended TAM produces different outcomes depending on the context studied. These findings should not be used to generalise user acceptance of AI chatbots across all service industries but to aid developers in mental health care, online shopping, and online banking and to inform future research. For example, online shopping stakeholders should target privacy concerns when developing communications to increase acceptance. This is the first study to assess user acceptance across these three specific industries and is of great relevance to the literature due to the growing popularity of AI chatbots in the health care and customer service sectors.

The current study provides practical insights for AI chatbot stakeholders. Stakeholders should promote the usefulness of this technology in mental health care, online shopping, and online banking situations to maximise its uptake. Additionally, they should consider developing strategies that increase users’ trust in AI chatbots. It can be anticipated that trust will reduce users’ privacy concerns and maximise the uptake of AI chatbots when the technology becomes available. Although PU and trust influenced intentions in all the scenarios, other variables appeared to be critical when using AI chatbots for mental health care. As such, what works in the market for one of these applications will not necessarily lead to the same usage in another scenario, such as mental health care. Strategies to maximise the uptake of AI chatbots should therefore consider heterogeneity across industries and services.

4.6. Limitations and Future Research Suggestions

Some limitations should be acknowledged when interpreting the findings. One such limitation is the generalisability of the results. This research relied on convenience sampling to recruit participants who were primarily undergraduate psychology students and university staff members. These participants may not fully represent the population from which the sample was drawn (i.e., Australian residents), as they represent a homogenous group of young, online, Western, educated individuals [70, 76]. Therefore, it is recommended that future research employ different sampling and data collection techniques to study a more diverse range of participants. However, it should be noted that other methodologically sound studies have similarly relied on undergraduate samples [21, 77]. Secondly, this research was conducted during the COVID-19 pandemic, in which fewer people attended the doctor/therapist, shops, or banks in person, instead opting for online services. It may be that this shift to online services influenced the participants’ responses to the online survey. However, as consumers adjust to living in a pandemic, it can be posited that these findings are a valid measure of consumers’ behavioural intentions moving forward. The timing also allowed the authors to measure attitudes and behavioural intentions to use AI (versus non-AI) technology in a period when many services were transferred online. Thirdly, there was no control condition, so the results could not isolate and identify the impact of AI technology on chatbot acceptance. A control condition is recommended for future studies. Finally, this was an exploratory study, and as such, hierarchical regressions were used to aid the model’s simplicity and analysis. As hierarchical regression applied to cross-sectional data cannot establish prediction over time, future studies should consider longitudinal designs with naturalistic data, as well as methods of analysis that capture more complex relationships between the variables and account for the unobserved heterogeneity of the participants.

5. Conclusion

This study assessed the utility of the extended TAM in explaining individuals’ behavioural intentions to use AI chatbots. Additionally, the present study considered the influence of age, gender, and pre-existing knowledge on future behavioural intentions to use AI chatbots. To the best of our knowledge, this is the first study to assess user acceptance of AI chatbots across different industries (i.e., mental health care, online shopping, and online banking). This research further supports the inclusion of trust as an additional TAM construct. The results showed that PU and trust are the most important predictors of behavioural intentions to use AI chatbots across all three scenarios. The study also showed that the effects of variables such as PEOU and age differed across the three scenarios. The key implication of this research is that the influence of the variables in one scenario cannot be generalised to all potential applications of AI chatbots. As such, strategies to increase technology acceptance of AI chatbots should be tailored for each scenario. These results contribute to the theoretical literature and can guide the adoption of AI chatbots.

Data Availability

Access to data is restricted.

Disclosure

The views expressed herein are those of the authors and are not necessarily those of the funders.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

A manuscript completion assistance grant was awarded to the first author by Queensland University of Technology to assist with the write-up of this publication. Dr Oviedo-Trespalacios is funded by an Australian Research Council Discovery Early Career Award (DE200101079).