Abstract

Background. The prominence of technology in modern life cannot be overstated. However, for some people, these innovations, or their plausible future advancements, can be associated with perceptual misinterpretation and/or incorporation into delusional concepts. Objective. This paper aims to explore the intersection of technological advancement and the experience of psychosis. We present a discussion of explanation seeking, incorporating the concept that, for some people, technological innovation has become intertwined with delusional symptoms over the past 100 years. Methods. A longitudinal review of the literature was conducted to synthesize and draw these concepts together, mapping them to a timeline that aligns computing science and healthcare expertise and charts the significant technological changes of the modern era against mental health milestones and reports of technology-related delusions. Results. Technology can be incorporated into the content of delusions, with evidence supporting a link between the rate of technological change, the content of delusions, and the use of technology as a way of seeking an explanation. Moreover, the analysis suggests a need to better understand how innovations may impact the mental health of people at risk of psychosis and other mental health conditions. Conclusions. Clinical experts and lived experience experts need to be informed about, and collaborate in, future research and development of technology, specifically artificial intelligence and machine learning, early in the development cycle. This concurs with other artificial intelligence research recommendations calling for design attention to the development and implementation of technological innovation applied in a mental health context.

1. Introduction

It is difficult to understand phenomena that, for some people, lack logical explanation. Generally, people strive to attempt to make meaning within the context they find themselves in, even when it is challenging to do so. For some people, technology provides a platform for the explanation and interpretation of inexplicable perceptions and thoughts regarding confusing and extraordinary concepts that can accompany a deteriorating mental state. Mental health symptoms can create a distorted reality for some, with the incorporation of technological themes intertwined within their internal world, cognitions, perspectives, and delusions, and vice versa [1]. The authors have been reminded of this aspect during the early stages of developing the protocols for a new research programme to investigate the use of machine learning (ML) and artificial intelligence (AI) in mental health. The research team (three mental health professionals and a computing scientist) received unsolicited emails from individuals in the community who seemed to be experiencing significant thought disorder with delusional content relating to the topic of artificial intelligence. This triggered our curiosity and challenged us to consider the ethical implications of e-mental health and AI-/ML-focused research for people with lived experiences of a wide range of mental health conditions including psychosis, depression, and suicidality. This paper is a timely reminder for researchers in this field to consider the broader ramifications associated with developing a public profile of expertise about these topics.

1.1. Aim

The goal of this paper is to present a longitudinal review of the literature examining the relationship between mental illness and technology. In particular, to describe an intersection of innovation and expressions of psychosis, a historical timeline has been used to examine the alignment of technical innovations with reports of the content of technology-based delusions. Furthermore, we examine mental health and psychosis, the underpinning technologies of AI/ML, the role of science fiction in commonly held beliefs about AI/ML, and how the e-mail communications we received provided the team an opportunity to explore how people may associate these concepts.

1.2. Significance

The purpose of this study is to improve understanding of how those diagnosed with mental illness may be affected by rapid technological advancements such as the introduction of artificial intelligence and machine learning in clinical care. This research is significant because it is not yet clearly understood how AI-based interventions may impact people with mental illness, especially psychosis. As new technologies are introduced with considerable speed, especially in health care, it is important to understand the factors that influence the interactions between people and technology as they relate to the mental health domain. During early protocol development for a forthcoming study in AI and ML in the mental health field, the research team received an e-mail from a person unknown to the group who had acquired an e-mail address of a team member by chance through their Internet search. Its content revealed that the e-mail’s author was experiencing psychosis, including significant thought disorder and persecutory delusions related to AI, as evidenced by the stated belief in one message that a controlling “neural type network” was using neural pathways to speak to the individual and show them a different world.

It caused the research team to pause, reflect on the past, and consider how the research should be framed in a new study protocol for the benefit of end-users, respectful of, and valuing, the lived experience of people who experience psychosis. While suitable help was arranged to assist the individual, the episode is a reminder of the profound responsibility researchers carry during the design phase of digital solutions intended to address mental health problems, and specifically towards the end-users of our innovations. Others have identified that the potential for harm to arise when using AI is a factor for consideration when developing technological innovations for people who have preexisting or emerging mental illness and/or suicidality [2]. Additionally, the World Health Organization’s (WHO) Ethics and Governance of Artificial Intelligence in Health Care suggests that future innovations should avoid iatrogenic harm to ensure the safety of at-risk populations [3]. As the global population increases and the uptake of diverse technologies rises, with the Internet alone reaching 4.72 billion users [4], it is increasingly likely that a convergence of psychosis and technology will emerge for some individuals.

1.3. Background

Intersections of mental health and technology have been described in the literature over the past 100 years. An overview of defining concepts is presented.

1.3.1. The Influencing Machine

In his 1919 essay on “the influencing machine” (published in English translation in 1933), the Viennese psychoanalyst and neurologist Victor Tausk described the reports of external or alien control that people were experiencing at that time [5, 6]. Tausk explains the phenomenon of psychotic delusions of control by external machinery, stating that “The patients are able to give only vague hints of its construction. It consists of boxes, cranks, levers, wheels, buttons, wires, batteries, and the like. Patients endeavour to discover the construction of the apparatus by means of their technical knowledge, and it appears that with the progressive popularization of the sciences, all the forces known to technology are utilised to explain the functioning of the apparatus” [7].

Tausk’s essay provides early evidence that when some people are mentally ill or experiencing phenomena that they do not understand, they may invoke contemporary technology to explain what they are unable to comprehend. Moreover, as technology continues to innovate with each passing human generation, Tausk’s concept of explanation seeking in technology has appeared to remain consistent [1]; additionally, as technology increases in complexity, its users are compelled to become more trusting, which, in turn, may lead to mistrust and belief in misinformation [8].

1.4. Mental Health and Psychosis

Psychosis is characterised by disordered thinking and/or the presence of delusions and/or hallucinations [9]. The inner worlds created when a person is unwell are complex and rich and develop as an extraordinary introspective insight into how they are trying to piece together something that makes sense to them [10]. Those with serious mental illness can often be difficult to engage in ongoing treatment [11], and because of this, clinicians must pursue an understanding of how technological delusions [1] interplay with changes in technological trends. The delusional content of psychosis for some people can include aspects of technological advancement, such as a belief in messages received from electronic devices (e.g., radio/T.V.) or through digital platforms (e.g., artificial intelligence (AI) programs). By their nature, delusions can be attributed to external sources, and it is common for people to use what they know to try to understand or make sense of the delusions they are experiencing. A retrospective case study of the thematic content of psychotic experiences in a first-episode psychosis population of a London early psychosis intervention service found that, from a sample of 160 participants, 77 reported that external devices were monitoring them, 33 of whom stated explicitly that the monitoring was by “electronic devices of consumers” [12]. Thoughts of conspiracy can also be associated with psychosis, with compelling if confusing information entangled with elements of logic, leading to a belief in misinformation and a focused specificity of mistrust [8]. As an example, this was observed during the COVID-19 pandemic, when false information spread rapidly, inferring that the installation of 5G communication towers was responsible for causing the outbreak [13]. This misinformation was spread primarily through social networking sites, causing conspiracy theories, panic, and mistrust [13].

1.4.1. Bizarre Delusions

Delusions are characterised as being created or formed by ignoring evidence that does not support their content, and as being maintained despite the facts [14]. Delusions are deemed to be bizarre if they are “clearly implausible and not understandable to same-culture peers, and do not derive from ordinary life experiences” [15].

An example of a bizarre delusion (BD) would be the belief that a person had been abducted by aliens [16] and taken onto a spaceship against their will, whether to receive messages about the fate of the world, to undergo medical experiments involving the harvesting of eggs, or to produce hybrid offspring with their abductors [14].

However, does the content of delusions matter in the context of treatment? The “bizarreness” of the content needs to be understood not merely as “delusional content” but as “form,” that is, the way in which the delusion is experienced [17]. It is essential to understand how the content forms the structure of the delusion, as it may lead to the delusion becoming pathological and involving harmful malfunction [18]. Therefore, the content of delusions needs to be understood and framed within the context of the individual’s environment and culture. It is the role of the clinician to understand the conditions of the intersubjective encounter, that is, how the BD is described and experienced by the clinician as well as by the individual experiencing it [17].

1.5. Science Fiction

In the media and in science fiction, AI has typically been portrayed as an anthropomorphised, superintelligent entity; a prime example is Stanley Kubrick’s 2001: A Space Odyssey, released in 1968 during the space race [19]. Commanding the attention of audiences worldwide, it has since come to be regarded as one of the greatest and most influential films ever made [20]. Spanning the aeons of man, the bulk of the film takes place on a spaceship named Discovery One, whose human astronauts are assisted by the anthropomorphised supercomputer HAL 9000. Of all the character developments throughout the film, HAL is given more attention than the human protagonists under “his” care, and as the film progresses, HAL becomes increasingly agitated, intent on eliminating anyone to achieve the programmed objectives. Finally, the sole surviving crew member deactivates “him,” resulting in HAL’s recitation of the lyrics of “Daisy Bell (A Bicycle Built for Two).” 2001 provided 1968 audiences with the promise of what the future might look like; the public perception of AI was now set [19]. An examination of the character of HAL reveals that, while “he” could be viewed as an out-of-control AI, when viewed pragmatically, “he” was obeying his programming. Ultimately, HAL’s programming was flawed, choosing the good of the mission over what HAL concluded to be expendable humans, choosing the many over the few [19]. Many works of science fiction present an anthropomorphised version of AI, exhibiting a very human interpretation of cognition and reason [21, 22]. Asimov [23] goes to the length of defining the “three laws of robotics,” which have since become the basis of many AI science fiction works and of ethical discussions [24]. These fictional worlds should not be denigrated; human beings are imaginative and creative. However, the idea of “what” AI entails is presented to the world in fiction-like forms, with flair and vigorous storytelling.

1.6. Machine Learning and Artificial Intelligence: A Brief Overview

Machine learning (ML) and artificial intelligence (AI) are technologies that are changing the manner in which people go about their everyday business, transforming even the most mundane of tasks, from evaluating eligibility for mortgages [25] to composing symphonies and sonatas that rival human composers [26]. ML is broadly split into two classes. The first, unsupervised learning, categorises data using component analysis and clustering techniques. The second, supervised learning, is designed to process labelled datasets (known as training sets) and identify patterns, allowing a model to be built on the results. The model is then supplied with data not seen during training and returns predictions based on what it has learned [27]. Using ML to examine big datasets allows researchers to find complex interactions that would otherwise be impossible to find by current human analysis [28].
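As a minimal, illustrative sketch of the two classes described above, the following Python snippet (using scikit-learn on a synthetic, made-up dataset; the features and labels are placeholders, not clinical data) builds a supervised model from a labelled training set and asks it to predict labels for unseen data, then runs an unsupervised clustering step that uses no labels at all.

# Sketch of supervised vs unsupervised learning on synthetic placeholder data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 2))               # two made-up features
y = (X[:, 0] + X[:, 1] > 0).astype(int)     # labels for the supervised case

# Supervised learning: build a model from a labelled training set,
# then ask it to predict labels for data it has not seen.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = LogisticRegression()
model.fit(X_train, y_train)
print(model.predict(X_test[:5]))            # predictions for unseen data
print(model.score(X_test, y_test))          # proportion predicted correctly

# Unsupervised learning: no labels are used; the algorithm groups the data itself.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(clusters[:10])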

ML is more suited than the traditional (human-based) analytics process to processing complex data, given the complex data combinations and variables at hand [29, 30]. However, an ML system designed to do one task cannot work on a different problem; for example, a system trained to detect diabetic retinopathy will not diagnose melanoma [31]. Within health care, AI can be divided into assistive systems, such as those predicting care pathway options, and autonomous systems that operate without human intervention; the vast majority of health-care AI to date comprises assistive techniques [32], with human clinicians making the final decisions in the logic process [33]. For example, the IBM Watson platform took only 10 minutes to formulate a treatment plan for a patient with brain cancer, while it took a team of medical officers 160 hours to come up with a comparable plan. This returned many hours of bedside care to the clinicians and expedited the potential for a timely course of treatment/intervention for the patient. Additionally, IBM Watson found cancer treatments that were overlooked by the treating team, discovering potential therapeutic options after being trained on large amounts of data [31].

1.6.1. Autonomous AI

In 2016, a 40-year-old Florida man was killed when his Tesla Model S, travelling at full speed on autopilot, hit a white tractor-trailer that was crossing his path [34]. Autonomous AI in vehicles has been heralded as the new revolution in transportation, easing traffic burdens, lowering emissions, and preventing accidents [35], with major tech companies, not just the automotive industry, looking to become part of the autonomous vehicle (AV) ecosystem. The data already collected about their users can be combined to create seamless experiences, and AVs would know where and when a person would need to be somewhere, even replacing public transport [36]. The promise is that humans would never need to take control of the vehicle, offering a safer driving experience by removing the effects of fatigue and human error [35].

However, as the level of autonomy in AI increases, the acceptance of the technology among the public decreases [37]. A general acceptance of autonomous AI can be seen in the use of voice-activated assistants (VAs) such as Alexa, with an estimated 4.2 billion devices in use [38]. While there is some acceptance of having the device always on and listening, there appears to be greater reluctance and more limited trust for complex interactions such as online purchasing than for basic tasks such as scheduling appointments [39].

Furthermore, while people are accepting of the use of VAs, Olson [40] highlights that almost 41% of VA users are concerned about privacy and passive listening. Similarly, the recommender systems used by Netflix, Spotify, and Google evaluate viewing behaviour based on the television shows or movies that users engage with. These systems learn the user’s habits and then present recommendations based on what they have discovered; moreover, it is not always apparent to the user why the system selected or omitted certain information. This may result in such companies influencing users’ lives by “learning” information about their habits, preferences, and frequency of use and then populating “offers” aligned with these preference characteristics, resulting in tailored exposure to, or the withholding of, information or solutions [41].

To better understand humans’ fear of AI and robots, a research team undertook a project to comprehend the sociological phenomenon known as the fear of autonomous robots and artificial intelligence (FARAI). The team collected data from the “Chapman Survey of American Fears, Wave 2,” conducted in 2015 as part of an annual survey project. The survey’s primary focus was to determine Americans’ fears and worries about significant events, politics, and a host of other phenomena. The mail-in survey targeted 2660 households, with 1541 completing the survey, a completion rate of 58%. The sample included data such as household income, age, education, and region. Four specific questions were designed for the purposes of the FARAI study, covering general fear toward autonomous robots and artificial intelligence, the influence of demographic variables such as income and media consumption, exposure to science fiction, and relationships with other types of fear. The research found that one in four respondents reported experiencing FARAI, with people reporting significant fear and distrust of autonomous robots; nonetheless, participants could not differentiate their fear of robots from their fear of artificial intelligence, citing them as one and the same, the distinction not being relevant to the participants. The research team examined this concept further, as most people have never had any interaction with a robot yet anticipate fearing autonomous robots and artificial intelligence. Interestingly, exposure to communication media related to science fiction uniquely predicted FARAI, carrying some indirect implications for how science fiction portrays artificial intelligence and robots [42].

Consequently, developers have been working to make autonomous systems more “human-like” through the process of anthropomorphism, the “tendency to imbue the real or imagined behaviour of nonhuman agents with human-like characteristics, motivations, intentions, or emotions” [43, 44]. Humanity has a tendency to anthropomorphise objects and artefacts, endowing them with human characteristics such as mental states and emotions [22]. As a result, AI and robots can be unnerving for the same reason that people are drawn to them: they can be perceived as having a mind [43].

1.6.2. Humanising Technology, the Process of Anthropomorphism

Anthropomorphism generally refers to the attribution of human-like thoughts, feelings, and behaviour to inanimate objects, animals, and natural phenomena in general [43]. An early attempt at anthropomorphising AI came in the form of ELIZA, a program built in 1966 to give the computer the role of a psychotherapist. ELIZA could ask simple questions, such as “Tell me about your father,” or just repeat the user’s last words back to them as a prompt: “Tell me more about….” ELIZA’s creator, Joseph Weizenbaum, was surprised that users of ELIZA, even knowing that the system was limited, attributed a human mental state to the software; that is, they felt ELIZA actually wanted to know about their father, or that ELIZA did, in fact, want to know more [45].
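To illustrate how modest the underlying mechanism was, the following Python sketch mimics ELIZA’s general pattern-and-reflection technique; the patterns and canned responses here are illustrative inventions, not Weizenbaum’s original script.

# A toy sketch in the spirit of ELIZA's pattern-and-reflection technique.
# The rules below are illustrative; they are not Weizenbaum's original script.
import re

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}
RULES = [
    (re.compile(r"\bmy (father|mother)\b", re.I), "Tell me more about your {0}."),
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bi am (.+)", re.I), "How long have you been {0}?"),
]

def reflect(fragment: str) -> str:
    # Swap first-person words for second-person ones, e.g. "my" -> "your".
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    # Default: echo the user's words back as a prompt, as ELIZA often did.
    return "Tell me more about " + reflect(utterance.rstrip(".!?")) + "."

print(respond("I feel that my father never listened to me"))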

The process of anthropomorphising requires an understanding of how empathy is attributed by the human onlooker; for example, anthropomorphised cartoon characters or puppets are endowed with humanity, appearing to show empathy towards one another. Yet they possess no cognition or feelings, only what is brought by the puppeteers; however, even knowing that the puppets are not real, humans allow themselves to attribute thoughts and emotions to them. Attempts to make AI more human, and the notion of artificial empathy, can lead to over-humanisation, and while humans are gratified to treat objects as if they had a mind, they do not like it when objects display explicit human emotions without a puppeteer behind them [43]. While early attempts at anthropomorphising human-computer interaction, Microsoft’s “Clippy” and “BOB,” for example, can be considered something of an indiscretion [21], more recent developments have seen the way in which people speak to, and are spoken to by, VAs become more humanised. Not only do VAs sound more human, but the cognition of AI is also changing, being designed and operated in a way similar to human cognition or intelligence [21]. However, if robots are to manifest artificial empathy towards humans as well as present more human qualities through cognitive AI, fear and distrust are likely to increase. Airenti [43] argues that robots are moving away from novel interactions towards the level of relationship humans exhibit towards animals, reinforcing the fear of robots becoming too human and too autonomous.

Schizophrenia is associated with several fundamental cognitive impairments, one of which is the inability to draw inferences about, or correctly predict and interpret, the mental states of other people [46]. Frith and Corcoran [47] framed this in terms of Theory of Mind (ToM), and while the underlying mechanisms are not yet understood, there is empirical evidence to support that ToM is impaired in those with schizophrenia and can contribute to delusions, including beliefs of alien control or persecution [48]. Many patients with schizophrenia present with unstable identities or changes in self-actualisation [49], and this difficulty in forming an understanding of self may affect the way in which a person builds and maintains relationships.

As the autonomy of AI and robots becomes more complex, interactive, and even “too real” for some, the empathetic elements that humans use to build relationships may become difficult to understand and process for someone whose identity and thought processes are already compromised or whose ability to draw inferences is impaired; for some, then, the idea that AI is taking control could seem quite logical.

2. Methods

The initial e-mail contact caused the researchers to pause and reflect upon a possible relationship between the content of delusions and the evolution of technology in society over time. We conducted a longitudinal review of the English-language literature across mental health and technological themes. We purposively extracted mental health and technological landmark events across 100 years and plotted them on a 100-year timeline, with all events documented in a comma-separated values file recording the event name, year of event, author, and year of publication. A script employing Python 3 and the Matplotlib package was devised and applied to synthesize the data and generate the timeline diagram. The data included published patient reports, legal records, case studies, and historical reports to ensure that a wide selection of relevant documented human and developmental experiences was collected, in line with our project aims. This method enabled us to derive a narrative and construct a logic that reveals characteristics pertinent to improving the understanding of the intersection of mental illness and technological development.
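For transparency, a minimal sketch of the kind of script described is shown below; the file name, column headings, and timeline labels are illustrative assumptions rather than the exact values used in the study.

# Minimal sketch of the timeline-plotting script (Python 3 + Matplotlib).
# "events.csv", its column headings, and the timeline labels are illustrative.
import csv
import matplotlib.pyplot as plt

events = []
with open("events.csv", newline="") as f:
    for row in csv.DictReader(f):
        # Each row records: event name, year of event, author, year of publication.
        events.append((int(row["year_of_event"]), row["event_name"], row["timeline"]))

levels = {"mental_health": 1, "technology": 2, "delusion_report": 3}  # three concurrent timelines
fig, ax = plt.subplots(figsize=(12, 4))
for year, name, timeline in events:
    y = levels.get(timeline, 0)
    ax.plot(year, y, "o", color="tab:blue")
    ax.annotate(name, (year, y), xytext=(0, 5), textcoords="offset points",
                rotation=45, fontsize=6)

ax.set_yticks(list(levels.values()))
ax.set_yticklabels(list(levels.keys()))
ax.set_xlabel("Year")
ax.set_xlim(1920, 2021)
plt.tight_layout()
plt.savefig("timeline.png", dpi=300)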

3. Results

Technology can be noted as one of the complex but important dimensions that, in modernity, can contribute towards the complexity of mental disorders and illness. It is apparent that technology has long been used as one explanation for the complex phenomena under investigation in this study. This is especially pertinent in an era where technology is becoming more important and pervasive in our everyday lives. Understanding the relationships between psychiatric phenomena and the associated behaviour of seeking explanation through technological phenomena will prove essential for clinicians. The longitudinal review of the literature across mental health and technological development themes uncovered three distinct concurrent timelines. The first comprises mental health and psychiatric events, including major milestones such as the introduction of medications and community events deemed to have significant cultural impact. The second comprises technological events, such as the introduction of the Internet and the moon landing, and other large-scale events whose impact was deemed to have had a worldwide and/or significant lasting effect on humanity. The third comprises reports of the content of delusions that relate to technology, one possible intersection of psychosis and technology: the manifestation of disordered thoughts that, for some people, offer a plausible explanation for what they are experiencing.

3.1. Evolution of Technological Themes

Our historical timeline (Figure 1) demonstrates ways in which technological developments have aligned with psychological and psychiatric phenomena. With this immersion in, and reliance on, technology comes illness for some individuals; as technology becomes embedded within psychosis and delusions, the gap between what is fiction and what is reality becomes increasingly narrow.

3.2. Timeline 1: Mental Health Events
3.2.1. Community View of Mental Health

The population of public psychiatric institutions in the US peaked in 1955, at 550,000 residents; however, this number started to diminish with the introduction of the antipsychotic medication chlorpromazine (CPZ) and changes in community acceptance of these facilities [50]. The 1960s brought the publication of “The Myth of Mental Illness” by Szasz, defining mental illness as “problems with living” [51], and Szasz asserted throughout his career that mental illness is a “nonexistent disease” [52]. Concurrently, significant social change occurred with renewed public interest in the role institutions played in the treatment (or lack thereof) of mental illness, as highlighted in Goffman’s seminal work “Asylums” [53]. This collection of sociological essays on life within institutions argued that “mental-hospital” patients are formed by their institutions, not their illnesses, and that their reactions and adjustments emulate those of inmates in other types of restrictive institutions. In contrast to Szasz, Goffman detailed the lived experience of the institutionalised mentally ill, including the abuse and neglect that they were exposed to as victims of systems that focused on the stigma and “spoiled identity” of those deemed to have mental disorders [54]. The decade that followed saw a reduction in the population of many institutions as new psychotropic medications such as chlorpromazine were introduced [55] and the provision of care transitioned towards community care.

Critical of the reliability of psychiatric diagnosis, Rosenhan [56] conducted a controversial experiment in the early 1970s, placing eight “pseudo-patients” into 12 different institutions, each seeking voluntary admission by feigning symptoms. Rosenhan reported that all were admitted and immediately reverted to “normal” behaviour, remaining in hospital for between 7 and 52 days, with seven individuals diagnosed with schizophrenia and one with manic depression. While highly criticised by psychiatrists at the time, his experiment exposed the irregularities evident in diagnostic biases and established that there were no clear guidelines at that time to determine who was “sane or insane,” despite the then-current taxonomy of the Diagnostic and Statistical Manual of Mental Disorders (DSM)-II [57].

3.2.2. The Evolving Role of Mental Health Nursing

Historians argue that the history of mental health nursing is fraught with contradictions; however, they do agree that in the 1950s the role of mental health nursing, unique in its practice, struggled to find its place in psychiatry. This was further amplified by the increasing adoption of, and medicalised preference for, pharmacological interventions to treat psychiatric conditions, together with persistent expectations for nursing and nursing education to adhere to the gendered norms associated with the era [58]. In 1946, the United States passed the National Mental Health Act, which in turn funded the National Institute of Mental Health, providing the opportunity for the advancement of educational curricula for the mental health professions, including nursing [59]. Through the work of Peplau [60] and Mereness [61], psychiatric nurses were able to gain significant control of their own education. In 1952, the same year as the DSM-I, Peplau published her theory of interpersonal relations, which helped to lay the foundations of professional nursing by emphasising the nurse-client relationship and identifying four consecutive phases: orientation, identification, exploitation, and resolution [60]. Peplau’s work has contributed to the underpinning of what modern nursing defines as the therapeutic relationship [58].

3.2.3. Advances in Pharmacological Technology and Their Alignment with Diagnostic Tools

During the early part of the twentieth century, there were limited pharmacological interventions in the field of psychiatry; this all changed in 1952 when CPZ became available on prescription in France [62], and by the end of 1955, CPZ was reported to be in use worldwide [62]. 1952 also saw the introduction of the first American Psychiatric Association (APA) Diagnostic and Statistical Manual of Mental Disorders (DSM), heralded as providing a single classification system for mental health issues, and the second edition, DSM-II, followed in 1968. The third edition, DSM-III, published in 1980, represented a move away from the psychoanalytic background and towards reconciling psychiatry with medicine. The DSM-IV was published in 1994, and 2013 saw the release of the most comprehensive document to date, the DSM-5 [63].

3.2.4. Controversial Alternative Treatment in Mental Health

From 1962 to 1979, Chelmsford Private Hospital in Australia regularly prescribed deep sleep therapy for various mental health conditions such as schizophrenia, anxiety, and depression, consisting of periods of induced coma, often lasting several weeks, through the administration of intravenous barbiturates. The resulting royal commission in 1990 found that at least 24 patients died from this intervention [64]. The Royal Australian and New Zealand College of Psychiatrists now states that there is no place for intravenous barbiturates in the treatment of psychiatric illness [65]. In light of these events, consumer participation and the incorporation of lived experience have become highly valued, with the culture of mental health services changing through the introduction of lived experience practitioners (LEP), recognising the vital contribution that lived experience brings [66]. Table 1 provides a chronological listing of these events. These mental health-related events mark very distinct changes throughout the timeline, affecting all aspects of community mental health care and the way in which it is delivered.

3.3. Timeline 2: Technological Events
3.3.1. Rebuilding with Technology

From 1941 to 1946, the Manhattan Project’s sole focus was the development of atomic weapons, the detonation of which resulted in the surrender of Japan in 1945 after the devastating destruction of Hiroshima and Nagasaki, ushering in the Atomic or Nuclear age [69]. After the war, technological efforts turned towards space when, in 1957, the Russians achieved space flight by launching the Sputnik 1 satellite [70], starting the space race between the United States of America and the Soviet Union. Cosmonaut Yuri Gagarin became the first human to fly into outer space on the 12th of April 1961 [71], and weeks later, on the 25th of May 1961, US President Kennedy announced plans to fly to the moon. Eight years later, in July 1969, Neil Armstrong and Buzz Aldrin set foot on the lunar surface [72].

Two years after this feat of human engineering, in 1971, the first “e-mail” was transmitted via the Advanced Research Projects Agency Network (ARPANET), and the age of information was born [73], transforming communication for the future. In 1975, Bill Gates and Paul Allen founded Microsoft, only one year before Steve Jobs and Steve Wozniak formed Apple in 1976 (incorporated in 1977) [73]. In 1988, the first transatlantic fibre optic cable was completed, bridging Europe and America, and by 1989, while working at the European Council for Nuclear Research (CERN), Tim Berners-Lee had proposed the World Wide Web (WWW), opening the era of information to the masses [73]. Four years later, in 1993, the MOSAIC browser was born, enabling users to start “surfing” what is now referred to as the “Internet” and heralding the beginning of the multimedia age. In 1994, Amazon was launched, and the term “information superhighway” was popularised by Al Gore. In 1995, the launch of Geocities allowed everyday people to build websites quickly, and by 1997, eBay, Blackboard, and Hotmail had all started, thereby revolutionising online shopping, education, and communications. 1998 saw the launch of Google, while Apple introduced the iMac, a consumer-grade all-in-one desktop computer [73].

The twenty-first century started with Apple introducing the iPod in 2001 and iTunes launching in 2003, paving the way for the social age to begin with the introduction of MySpace, and by early 2004, Mark Zuckerberg had launched Facebook. 2005 saw the introduction of YouTube, and by 2006, MySpace was the most popular social network service in the world. The same year, Apple introduced the MacBook Pro, and Twitter launched its microblogging platform. In 2007, Apple released the first iPhone, while Facebook launched its Beacon advertising service. By 2008, Facebook had surpassed MySpace in subscriptions [73]. The 2010s brought the era of Big Data, and to enhance financial viability, Facebook took steps to tailor its content to each user’s experience. To enable this, the company collected large volumes of data from each of its subscribers [74]. In 2018, it was disclosed that Facebook had handed over the personal data of 87 million individuals to Cambridge Analytica, which was used to target individuals and influence the 2016 US presidential election [75]. This manipulation was only possible because the massive volumes of data collected by large commercial agencies could be processed quickly, and trolling enabled them to imitate and manufacture influence among user groups [76].

3.3.2. Technological Velocity

These periods were shaped by significant advances in technology across the twentieth and twenty-first centuries, providing pivotal moments in the speed and growth of technology. The Turing machine, described in 1936–1937 [77], provided the basis for modern computer science and foundational models of computability. A further pivotal moment came in 1947 when the transistor was first demonstrated at Bell Laboratories [78], paving the way for the integrated circuit (IC) in 1959 [79], which in turn led to the Intel 8086 central processing unit. The 8086’s x86 architecture was at the core of the low-cost IBM PC, and its legacy is still alive today in most laptops and workstations; it was even embedded in the US Space Shuttle program [80]. In 1992, the Finnish telecommunications company Nokia released its 101 cellular mobile telephone; the model moved away from the traditional bulky telephone form towards a smaller and more convenient form that could be accommodated in a pocket or handbag [81], thus bringing the beginning of wireless communication for the masses. In 1999, the term Internet of Things (IoT) was coined by Kevin Ashton [82]; however, the concept of IoT can be traced back to the “Internet toaster” of 1989. This novel toaster was connected to the Internet and allowed users to turn it on remotely while accepting parameters for the darkness of the toast [82]. With the explosion of low-cost microcontrollers and increasing ease of access to the Internet, by 2017 IoT had become one of the dominant emerging technologies [83]. In 2019, SpaceX launched 60 prototype Starlink Internet-distribution satellites into orbit, the first of a proposed 12,000, promising worldwide low-cost Internet [84], which Starlink aimed to have available globally by the end of 2021 [85].

3.3.3. Development of Artificial Intelligence and Machine Learning

In parallel with the aforementioned technical milestones, the basis for AI’s technological development can be traced to the paper [86] presenting the “Neuron Hypothesis,” the origin of both the connectionist and logicist approaches to AI. This work provided a way to describe brain functions in abstract terms, showing that simple binary (on or off) units connected in a neural network could garner immense computational power [86]. However, it was not until the Dartmouth workshop, the first conference devoted to the subject, was held in 1956 that the name “Artificial Intelligence” was proposed by John McCarthy [87]. Initially, the significant challenges presented to AI were games such as chess and checkers; with this in mind, while working at I.B.M., Arthur Samuel developed his “Checkers” program and made history by demonstrating for the first time that a computer could beat a human [88]. Further development at IBM brought forth “Deep Blue,” a computer explicitly designed for chess, which eventually defeated the then reigning world chess champion Garry Kasparov in 1997 [89]. After conquering chess, research switched to the ancient Chinese game of Go, which has far more possible moves than chess and thus presented a very different problem for AI engineers to solve. In March 2016, DeepMind’s AlphaGo defeated Lee Sedol, a 9-dan world-class player, winning four out of five matches [90].
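As a toy illustration of the “simple binary states” idea behind the Neuron Hypothesis, the following Python sketch implements threshold units in that tradition; the weights and thresholds are illustrative choices, not taken from the original paper.

# A toy threshold unit: binary inputs, fixed weights, a threshold, and a
# binary (on/off) output. Combining such units yields logic functions.
def unit(inputs, weights, threshold):
    return int(sum(i * w for i, w in zip(inputs, weights)) >= threshold)

def AND(x1, x2):
    return unit([x1, x2], [1, 1], threshold=2)

def OR(x1, x2):
    return unit([x1, x2], [1, 1], threshold=1)

def NOT(x1):
    return unit([x1], [-1], threshold=0)

# A two-layer network of units computes XOR, something no single unit can do.
def XOR(x1, x2):
    return AND(OR(x1, x2), NOT(AND(x1, x2)))

print([XOR(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 1, 1, 0]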

With significant progress in computing power, the practical aspects of AI could finally catch up with its theoretical promises. Of the many types of AI and ML available, deep reinforcement learning (DRL) [91] has become a technique that leads to successes that surprise even the world’s leading experts, while it remains somewhat of a mystery exactly how and when reinforcement learning will be successful. Recent advances in DRL have led to often spectacular successes; however, the same solution, when applied to a different problem, will not have the same desired outcome. Many different solutions often need to be trialled before a suitable one is found, with control engineers themselves not understanding exactly how each model may work on a given problem until it is tested ([92], p. XI). In 2011, Apple launched “Siri,” an anthropomorphised, feminine [93] virtual assistant embedded in its latest iOS product. Users could ask the software agent to perform various day-to-day tasks such as scheduling meetings and reminders and operating IoT devices. Unlike previous speech-to-text programs that had existed to date, Siri was unique because no training by the user was required; simply speaking to the device would result in the speech being recognised and instructions carried out, with the AI working in the background to interpret the voice command and execute it. Microsoft followed in 2013 with “Cortana,” Amazon launched “Alexa” in 2014, and Google’s assistant was embedded into its ecosystem of devices starting in 2016 [94]. Table 2 provides the aforementioned events in chronological order. Technology has improved exponentially throughout the past 100 years; however, in order to make technology accessible to all people, a design requirement to enhance usability is that a significant proportion of the inner workings remains hidden.

3.4. Timeline 3: The Intersection of Psychosis and Technology
3.4.1. Intersection of Psychosis and Technology

Tausk’s [7] essay presents early evidence that people will try to use the technology around them to rationally explain what is occurring for them in times of psychological distress. Gaedtke [5] reports material from the late 1930s through to 1959 in which clients recount command hallucinations and delusions concerning the belief that the British Broadcasting Corporation was using “black boxes” and radio waves to control and monitor people’s thoughts.

In 1974, an inmate of the Georgia State Prison at Reidsville filed a case against the state for what he believed were experiments conducted on him during his incarceration. He called this a “behaviour modification program” and claimed that the “controlling system is a watchful eye of the state through electronic surveillance upon the human body” [97]. Between 2013 and 2016, four mass shootings were noted in America involving individuals who experienced delusions with technological content, reporting beliefs of having microphones implanted in their brains and being controlled by low-frequency electromagnetic waves while under electronic government surveillance.

Finally, one individual reported suffering from “remote brain experimentation, remote neural monitoring of an entire humans body” [98–101]. Kar and Barreto [102] present an account of an individual who suffered various forms of modern-day delusional thinking over ten years. She reported that Internet forums had been set up expressly to make nasty comments and references about her. She spent hours searching the Internet trying to find evidence and believed that specific television channels were making voice-overs that were malicious and full of offensive comments about her family. She believed her house to be monitored by satellites so that people would know her whereabouts and could listen in to her conversations. Further, case reports from Fischer et al. [103] describe delusions arising during the COVID-19 pandemic, including one individual who believed he already had immunity to COVID-19, convinced that the illness had previously infected him after he received a message through “WhatsApp.” Table 3 provides the above reports in chronological order. These reported experiences provide some insight into possible explanation-seeking incidents that incorporate technology.

4. Discussion

As steam once sparked the first industrial revolution, the fusion of modern technologies such as AI and IoT brings an unprecedented and exponential rate of change: not only has the velocity (how fast) of innovation increased, but so has the volume (how much), exemplified by a synthesis of technologies that is blurring the lines between the digital and the physical [104]. From Tausk’s [7] early writing on the influencing machine through to the recent case of an individual who believed immunity from COVID-19 could be obtained through a message on WhatsApp, it is evident that people seek plausible explanations within their environment/s as technology becomes an increasingly integral part of our everyday lives. There is a tacit acceptance that electronic devices (e.g., Alexa and Siri) listen to us speak, while wearable technology monitors our bodily functions (e.g., heart rate, sleep, and movement/steps) and interacts with our humanity with a precision never experienced by earlier generations. Personal data are incidentally provided through “accept to proceed” gateways, a trade-off that allows social media companies to utilise mass data as they please in exchange for the use of their platforms. For people who experience alterations in their cognitive or perceptual mental health, the gap to bridge in the search for a technological explanation of perplexing phenomena is not as wide as it may once have been.

Machine intelligence (ML + AI) models have reached a point of development where not only the general public and nonexperts worry about these technologies, but even the leading experts are losing their ability to fully assess their creations. Silver et al. [104] speculate, quite abstractly, that reward is a sufficient driver for all animals, and deep reinforcement learning (DRL) is no different. Similar to the way a human learns how to ride a bike or catch a ball, DRL systems are trained in a way that rewards the model for learning successful behaviour, reinforcing the actions taken to achieve the result. This almost humanistic way of learning is proving to have exceptional and often surprising results [105]; however, experts are concerned that it is not always understood what the model may have learned in order to maximise its outcomes. In a system designed to exploit reward, the agent may inadvertently find a way to game the system, maximising the reward and performing exceptionally well in testing, yet producing negative outcomes. A hypothetical example of “hacking the reward function,” or “wireheading,” could arise in a heparin dosing system: the agent may happen upon a strategy of giving pulses of heparin immediately before each activated partial thromboplastin time (aPTT) measurement, giving positive short-term control without achieving the intended goal of stable long-term regulation [106]. While parallels between human learning and reinforcement learning are apparent, such systems remain vulnerable to imprecise reward specification.
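To make the reward-driven learning loop concrete, the following Python sketch shows tabular Q-learning on a made-up toy task. It is far simpler than the deep reinforcement learning discussed above, and the environment, states, and parameters are illustrative assumptions, but the principle is the same: behaviour that yields reward is reinforced.

# A toy illustration of reward-driven (reinforcement) learning:
# tabular Q-learning on a tiny made-up "corridor" task, not a clinical system.
import random

n_states, goal = 5, 4          # states 0..4; reaching state 4 yields reward
actions = [-1, +1]             # move left or right
q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.1, 0.9, 0.2

for episode in range(500):
    s = 0
    while s != goal:
        # epsilon-greedy action selection: mostly exploit, sometimes explore
        if random.random() < epsilon:
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda a: q[(s, a)])
        s_next = min(max(s + a, 0), n_states - 1)
        r = 1.0 if s_next == goal else 0.0
        # Q-learning update: nudge the value of (s, a) towards the reward plus
        # the discounted value of the best action from the next state.
        best_next = max(q[(s_next, b)] for b in actions)
        q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
        s = s_next

# Learned greedy policy: the action with the highest value in each state.
print({s: max(actions, key=lambda a: q[(s, a)]) for s in range(n_states)})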

Exploration and further research into the underlying mechanisms of how AI, such as DRL, works could provide a direct pathway towards artificial general intelligence [104]; because of this, researchers and developers must be encouraged to carefully apply ethical AI practices based upon the WHO’s key ethical guidelines, including the protection of patients’ and health-care providers’ autonomy. Safeguards should be in place such that AI systems can never replace human decision-making unless it can be proven that the AI system significantly outperforms humans in making better and more ethical decisions.

Additionally, human wellbeing, safety, and the public interest must be promoted and held to the same high standards of safety, accuracy, and efficacy before any system is deployed, ensuring it is transparent, intelligible, and explainable to all parties in the continuum of care. The development of AI in health care requires clear and transparent specifications and guidelines that outline the conditions under which the system can achieve the required performance, with biases known and understood. Individuals should not be disadvantaged because of gender, income, age, ability, or any other characteristic. Finally, AI systems should not be left to become static but must be continuously evaluated to ensure that systems remain transparent and capable of responding within the communicated specifications and expectations [3] (pp. 23–30).

4.1. Implications for Clinical Practice

Mental health presentations can be vague or nondescript in symptomatology and take longer to assess, requiring specialist mental health clinicians [107]. These complexities may be better understood through innovative technologies; however, gaps exist, and focused effort is required for mental health clinicians to take on a leadership role in shaping the way technology and AI are used in health systems. As rapid technological advances intersect with the lived experiences of some people with mental health conditions such as psychosis, clinicians need to seek an understanding of the relationships between explanation seeking in technology, impaired inference in psychosis, and the reluctant allegiance that many people have when using new technology. Increasingly, mental health clinicians need a general understanding of the technological landscape in which society operates, as this will foster a better understanding of how clients are experiencing the world in the context of delusions and explanation seeking. Further research is recommended to better understand the broader social concerns of innovative technologies used in clinical care [108] and, more widely, the relationship between the data clinicians collect every day and how it can be used in clinical practice and AI development [109]. Mental health clinicians are concerned with the care of people who experience distress or alteration in healthy functioning related to the technological systems in which they live and work; to this extent, they should be empowered to explain how a system works, even at a high level, when providing care. Moreover, technology and data literacy should be included as part of the integration of any e-mental health tool or program, as well as broader AI health-care projects.

It is recommended that mental health clinicians not only have meaningful involvement through all stages of the development and implementation of innovative technologies in health care [109] but also recognise that they are already in a position to influence decisions related to the integration of AI into health systems, while ensuring that lived experience is valued and included [110]. This paper provides clinicians with an overview of the main themes related to the intersection of technology and psychosis as they have evolved over time.

4.2. Implications for Ethics and Research in Health-Care AI/ML

As AI/ML technologies evolve, researchers must be aware of how the “black box” of AI works, ensuring that individuals’ human rights are never compromised and that systems do not violate the first law of robotics, either through design or omission. Researchers are encouraged to closely monitor emerging AI/ML technologies such as explainable boosting machines, which provide transparent “glass-box” models for use in health care [111, 112], offering insight into the model and helping to communicate and understand why it works the way it does.
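As a brief, hedged sketch of what such a glass-box model can look like in practice, the following Python snippet uses the open-source InterpretML package’s explainable boosting machine on a synthetic, non-clinical dataset; the feature names and data are placeholders, and the calls follow the package’s scikit-learn-style interface as we understand it.

# Sketch of a "glass-box" model using the InterpretML explainable boosting
# machine (EBM). The data and feature names are synthetic placeholders.
import numpy as np
from interpret.glassbox import ExplainableBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                      # three made-up features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)      # synthetic binary outcome

ebm = ExplainableBoostingClassifier(feature_names=["age", "score_a", "score_b"])
ebm.fit(X, y)
print(ebm.predict_proba(X[:3]))                    # predictions, as with any classifier

# Unlike a black-box model, the EBM exposes additive per-feature contribution
# curves, so each prediction can be decomposed and explained to clinicians.
global_explanation = ebm.explain_global()
print(global_explanation.data(0))                  # contribution profile of the first feature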

Significant ethical concepts must be discussed when considering the use of AI in health care, such as who owns the data, how researchers ensure informed consent for its use, algorithmic accountability, transparency regarding how the data will be used, and the right to challenge decisions [3, 113]. It also raises questions about predictions of a person’s future risk: where no action is taken, with whom does the responsibility lie [114]? Additionally, data from people detained under mental health legislation could be collected in a subversive way, or through latent coercion arising from an imbalance in the power dynamic, mindful that any information gathered while an individual is under a legal provision may also result in potentially biased data [114]. Consideration must also be given to the legal implications more widely, because mental health services operate under a legal framework and are carried out by a combination of the regulated and unregulated workforce [115].

Based on our longitudinal review of the literature, there appears to be minimal information regarding the attitudes and feelings of mental health consumers about the use of machine learning tools in mental health, and in health care in general. One study investigating patient attitudes towards wearable devices and AI in health care found that 35% of patients refused to integrate at least one intervention using AI and biometric monitoring devices [116]. Without proper validation and understanding of data, there is a high risk of introducing racial or other unexpected bias that could, in turn, marginalise already disadvantaged groups. Moreover, when trained on biased data, models could make dangerous assumptions. The release of the Human Rights and Technology Final Report in Australia [113] and the Ethics and Governance of Artificial Intelligence for Health [3] highlights that future innovations should consider potential iatrogenic harms to ensure the safety of at-risk populations [113].

Furthermore, researchers must engage domain experts in interpreting unstructured data while including them in a mechanism for accountability and ethical research practices, with these experts providing nuanced insights to allow data to be classified correctly for research and training [117]. This concurs with other AI research recommendations calling for design attention to developing and implementing technological innovation in a mental health context [109].

Is the solution simply to follow Asimov’s laws? The first law states, “A robot may not injure a human being or, through inaction, allow a human being to come to harm” [23]. Barthelmess and Furbach [24] argue that the answer is “no”: robots present no threat to humans; rather, that threat is an idea emanating from works of fiction. Technology has the opportunity to make significant beneficial differences in people’s lives, but it may also adversely cause harm to others [118]. To improve outcomes for people with mental health conditions, collaboration will be a requisite for human safety. It is necessary to operate within an ethical framework, in partnership with mental health experts, to understand the benefits and limitations in the context of designing the integration of technological and mental health experiences.

5. Limitations

This study has several limitations. Firstly, it is a longitudinal review of the literature up to 2021 and, as such, may not include literature published after this date. Secondly, the exact dates of some historical events conflict across sources, adding limitations to the literature; as such, only the year has been used to plot the timeline.

In order to examine this topic across a 100-year period, it was necessary to use a purposeful selection of literature and to contain the scope of the search for the researchers undertaking the project. Furthermore, the selection of the types of literature and a clear conceptual basis for the study have assisted in setting meaningful limitations that ensure that the selected methods are aligned with the aims of the study and yield meaningful results that explain the phenomena under investigation.

6. Conclusion

In conclusion, this paper has explored an intersection of plausible explanation seeking through technology-themed concepts in those with deteriorating mental health throughout the technological age of the twentieth and twenty-first centuries. Technological innovation is increasing in both velocity and volume, with AI technologies such as DRL being incredibly powerful; however, the way AI can be perceived to mimic human learning may lead to misunderstandings of how the technology works, with many people’s perception of AI shaped by the media and often portrayed in a fictional context. Nevertheless, advanced technologies based on AI, such as virtual assistants, are becoming part of everyday life. Explanation seeking through the technology of the time has been evident and consistent since the early twentieth century, and if an individual’s ability to draw inference is impaired, the conclusion they draw may be whatever is easily explainable or makes logical sense in relation to what they are experiencing.

People use the world/environment around them to explain and interpret what is happening to them, and it is crucial that steps are taken to understand how technological innovations may impact people at risk of psychosis and other mental health conditions. As the future promised in 2001 arrives, HAL 9000 may not exist, but Alexa, Siri, and Cortana do; where the idea of talking to an AI was once considered a work of fiction, it is now everyday life, and as such, the gap between fiction and reality has become very narrow.

Explanation seeking in technology and impaired inference in schizophrenia have remained constant over the past century, while concurrently, the evolution of technology has continued to a point where trust must be placed in the developers and researchers of these products. Collectively, we are located at an intersection of continuous technological advancement and increasing need for improvements in access to mental health care; therefore, future technologies must uphold the dignity of people with mental health conditions, ensuring that products are ethically transparent about how AI/ML is implemented and that all systems used for supporting people with mental health conditions adhere to the same ethical standards as clinicians and are designed with the health and welfare of the end user as the primary focus.

Abbreviations

AI:Artificial intelligence
APA:American psychiatric association
BD:Bizarre delusions
CPZ:Chlorpromazine
DRL:Deep reinforcement learning
DSM:Diagnostic and statistical manual of mental disorders
FARAI:Fear of autonomous robots and artificial intelligence
IoT:Internet of things
ML:Machine learning
MI:Machine intelligence
VA:Voice-activated assistants
WHO:World Health Organization.

Data Availability

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Authors’ Contributions

OH contributed to concept development, designed the project, collected data, analysed the data, and prepared the manuscript. BS analysed the data, prepared the manuscript, and supervised the project. SC analysed the data, prepared the manuscript, and supervised the project. RW contributed to concept development, collected data, analysed the data, prepared the manuscript, and supervised the project.

Acknowledgments

The authors would like to acknowledge the support of the Central Coast Local Health District. Partial financial support was received from the NSW Ministry of Health as part of the Towards Zero Suicides initiative.