Review Article | Open Access
Andrzej Cichocki, Alexander P. Kuleshov, "Future Trends for Human-AI Collaboration: A Comprehensive Taxonomy of AI/AGI Using Multiple Intelligences and Learning Styles", Computational Intelligence and Neuroscience, vol. 2021, Article ID 8893795, 21 pages, 2021. https://doi.org/10.1155/2021/8893795
Future Trends for Human-AI Collaboration: A Comprehensive Taxonomy of AI/AGI Using Multiple Intelligences and Learning Styles
This article discusses some trends and concepts in developing a new generation of future Artificial General Intelligence (AGI) systems which relate to complex facets and different types of human intelligence, especially social, emotional, attentional, and ethical intelligence. We describe various aspects of multiple human intelligences and learning styles, which may affect a variety of AI problem domains. Using the concept of “multiple intelligences” rather than a single type of intelligence, we categorize and provide working definitions of various AGIs depending on their cognitive skills or capacities. Future AI systems will be able not only to communicate with human users and each other but also to efficiently exchange knowledge and wisdom with abilities of cooperation, collaboration, and even cocreating something new and valuable and have metalearning capacities. Multiagent systems such as these can be used to solve problems that would be difficult to solve by any individual intelligent agent.
Both human intelligence, as defined by innate, biological intelligence, and artificial intelligence (AI), commonly defined as machine intelligence, have been hot topics in a wide spectrum of scientific literature (see, e.g., [1–14]). In this paper, we explore how multiple aspects of human intelligence and various learning styles may further inspire or promote the development of new kinds of improved intelligent multiagent systems. We also seek to provide an introduction to how different AI systems can be categorized and ranked depending on their cognitive skills and learning styles.
Progress and even breakthroughs in AI have typically demonstrated that AI systems have the ability to perform (often very well) at solving specific problems or tasks such as recognition, classification, ranking, prediction, clustering, segmentation, playing games, like Go/Jeopardy, and even creation of artwork, like music or paintings (see, e.g., [5, 15–20]). However, as AI systems evolve and expand, it is increasingly clear that their full potential lies beyond merely solving well-defined problems or performing tasks within preset parameters. Rather, many AI systems of the future will interact with other AI subsystems (like smart robots or multiagents) alongside human users to solve dynamically changing and complex problems. For this type of interaction to take place, multiagent systems must have the ability to continuously learn, review, and evolve their interaction strategies during an ongoing communication. In other words, research in AGI may go far beyond a single mental-intellectual or logical-mathematical intelligence, toward the concept of multiple intelligences. We refer to this phenomenon as “AGI with multiple intelligences” as until now most AI methods and approaches are based on one single type of intelligence and perform only one specific task or a set of a few closely related tasks, without fully exploring and implementing sophisticated cognitive skills, emotional-social intelligence (ESI), and responsible group decision-making.
In this paper, we mostly consider AI systems which are, in general, in a form of a multiagent system (MAS), e.g., some kind of smart humanoid robots or computerized systems composed of multiple interacting agents (see Figure 1). This is closely related to the concept of Distributed Artificial Intelligence (DAI), which is an important subfield of AI research, dedicated to the development of distributed solutions for specific problems (see, e.g., [15, 16, 21, 22]).
2. AI as Multidisciplinary Research
AI is grounded in and advanced by research from multiple disciplines, inspired and driven in particular by human cognitive science, systems neuroscience, and the computational sciences (see Figure 2).
In particular, human cognitive science and systems neuroscience play key roles in the development of new AI concepts and smart/intelligent systems. In fact, AI research spans the intersection of many fields including human brain science, computer science, and applied mathematics [5–7, 23]. That being said, human cognitive science is a highly interdisciplinary area in itself, exploring ideas and methods from biology, psychology, philosophy, and linguistics. Fundamental human cognitive processes are related to higher-level functions of the human brain and encompass language, imagination, perception, and planning [7, 15, 23–26].
Cognitive skills (functions, abilities, or capacities) allow us to receive, select, store, transform, develop, and retrieve information and knowledge that we have received from external stimuli. A better understanding of such cognitive functions and processes in the human brain would allow us to implement them in future generations of AGI systems more effectively and extend their flexibilities and applications (see Section 9).
3. AI Subdomains: Current Key Applications
In recent years, AI research has made tremendous progress which has already found applications in many fields, from computer vision (CV) (e.g., machine vision, robot vision) and pattern recognition (classification, clustering) to areas like robotics and intelligent agents (R/IA), natural language processing (NLP) (natural language generation (NLG), natural language understanding (NLU), machine translation, sentiment analysis, information retrieval, and extraction), speech recognition and synthesis (SR/S), planning, scheduling, and optimization (PSO), knowledge representation and reasoning (KRR), and expert systems (ES) (see Figure 3).
Currently, machine learning (ML) and its subdomains—deep learning (DL) and artificial neural networks (ANNs)—play a key role in AI research. Many researchers consider ML to be an important subdomain of AI. However, in our opinion, not all ML algorithms and methods, and not even all ANNs, can be classified as part of AI since not all methods in machine learning mimic human intelligence. The same is true for all other domains, like expert systems, NLP, or data mining (Figure 3).
4. Multiple Intelligences
In Figure 4, we illustrate seven vital human multiple intelligences: physical intelligence (physical quotient (PQ)), mental-intellectual intelligence (verbal-logical-mathematical (IQ)), emotional and social intelligence (referred to as EQ and SQ), creative intelligence (CQ), innovative intelligence (INQ), and moral and ethical intelligence (MQ) (cf. also [25, 27–33]).

(i) Physical intelligence, or physical quotient (PQ), also known as bodily-kinesthetic intelligence, is an intelligence derived or learned through physical, tactile, and practical learning such as sports, dance, or craftsmanship. Physical intelligence is an important aspect of personal effectiveness and physical performance.

(ii) Mental/intellectual intelligence, also known as the intelligence quotient (IQ), is the mental ability involved in language, mathematical-analytical skills, logical reasoning, perceiving relationships and analogies, calculating, data interpretation, verbal abilities, visual and spatial reasoning, classification, and pattern detection and recognition.

(iii) Emotional intelligence (emotional quotient (EQ)) is the ability to perceive, assess, generate, understand, and control emotions. EQ also involves the regulation of emotions to promote further emotional and intellectual growth. The concept of EQ was conceptualized and investigated by Michael Beldoch and later popularized by Daniel Goleman, among others [34, 35].

(iv) Social intelligence (social quotient (SQ)) is the capacity to understand other humans and to act both rationally and emotionally in relation to others. SQ is important, particularly when forming social bonds and when working within a team. This type of ability need not be limited to humans but could also describe the intelligence of a network of intelligent multiagents, which must jointly perform complex tasks or solve specific problems while resolving the various conflicts that may arise from working within a group.

(v) Creative intelligence (CQ) is the capability to create, or the act of conceiving, something original or unusual, while innovative intelligence (INQ) is the implementation of something that has never been made before and is recognized as the product of some unique insight. Note that creativity is characterized by generating something new (a new idea, concept, process, or method), while innovation employs creativity to enhance the performance or features of a specific product, process, person, team, or organization. Creative and innovative intelligence can be integrated as CINQ, since innovation goes hand in hand with creativity and there is no innovation without creativity. CINQ can be considered a higher form of human intelligence because it goes beyond knowledge recall and extends into knowledge creation.

(vi) Moral and ethical intelligence (MQ) is defined, following Lennick and Kiel, as “the mental capacity to determine how universal human principles should be applied to our personal values, goals, and actions.” Usually, being morally and ethically intelligent means not just assessing what is right and what is wrong but also having the courage to do what is right and to prevent both oneself and others from doing the wrong things [27, 36, 37].
As regards social intelligence (SQ) in particular, this concept was first proposed by psychologist Edward Thorndike but was later reinvented, extended, and popularized by many psychologists, especially Howard Gardner [8, 9] and Daniel Goleman. Gardner proposed and investigated eight human multiple intelligences, of which the two most important are intrapersonal intelligence and interpersonal intelligence, corresponding to EQ and SQ in the above schema (see Figures 4 and 5).
According to the theory of multiple intelligences proposed by Gardner, at least eight different types of intelligence exist: logical-mathematical (reasoning, number smart), visual-spatial (picture smart), verbal-linguistic (word smart), musical-rhythmic and harmonic (sound smart), bodily-kinesthetic (body smart), naturalistic (nature smart), intrapersonal (self-smart), and interpersonal (social smart) [8, 9]. Most humans have all of these types of intelligence, but not all of them are equally or sufficiently developed in all of us; therefore, we often do not use them effectively. A person with only one or two well-developed types of intelligence may have difficulty functioning in the world: such is the case, for example, for many people with Autism Spectrum Disorder (ASD).
Howard Gardner defined intelligence as “the ability to find and solve problems and create products of value in one’s own culture.” Pei Wang [3, 4] defined intelligence as “the capacity of an information-processing system to adapt to its environment while operating with insufficient knowledge and resources”.
Gardner’s theory of multiple intelligences has come under some criticism from researchers in education, psychology, and philosophy [10, 25, 29–31]. These critics argue that Gardner’s definitions of multiple intelligences are too broad and mostly represent what could be called talents, abilities, preferences, or personality traits. Others, meanwhile, argue that his definition was not broad enough, as he did not include spiritual intelligence in his list (encompassing concepts such as love, generosity, openness, courage, self-discipline, forgiveness, compassion, detachment, and a sense of purpose). Gardner investigated this question rigorously, and the omission was mainly due to the challenge of codifying quantifiable scientific criteria for this type of intelligence. We do, however, note that spiritual intelligence is considered by some researchers to be the most sophisticated form of human intelligence, since it is related to the formation of higher meanings and human values. Even if some of the multiple intelligences may be controversial or may not exist in humans [1, 7], we believe that they are very useful not only for categorizing but also for developing new AI systems.
Why, then, is multiple intelligence theory so interesting and important in AI research and development? First of all, it allows AI systems to learn a variety of different tasks and to solve different or even unrelated subproblems at once. Moreover, possessing multiple intelligences allows multiagents to fall back on the specific learning style that is most appropriate to the task at hand. Furthermore, by using their different types of intelligence together, multiagents can direct their attention to more specific tasks and problems, which may increase their learning efficiency and, consequently, improve their performance and decision-making [39–42].
5. Learning Styles and Machine Learning (ML) Algorithms
The main difference between multiple intelligences and perceptual learning styles is that multiple intelligences represent different intellectual and cognitive abilities, while corresponding perceptual (sensory) learning styles are the different ways in which a human or an intelligent agent approaches and learns a specific ability to solve problems or execute desired tasks depending on available sensory data. The Sensory Learning Style, also known as the VAK, uses the three main sensory receivers: visual, auditory, and kinesthetic (see Figure 5(a)).
Multiple intelligences can be learned—or at least improved and enhanced—via systematic and continuous learning using suitable sensory/training data and appropriate social interactions. It should be noted that certain learning styles can help to build social skills in multiagents: to learn and develop knowledge and experience about who and what is around them and how to communicate and interact socially in order to perform tasks/actions or make responsible decisions (see Figure 5(b)).
Moreover, by learning from different modalities of data, we can considerably improve performance. For example, by integrating audio data with visual data (lip movements), speech recognition can be dramatically improved in a noisy environment (a mechanism called neural binding in the neuroscience literature).
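As a rough illustration of such audio-visual integration, the following minimal sketch (our own toy example in Python, not a system from the cited literature; the class scores and the SNR-based weighting function are invented for illustration) shows a late-fusion scheme in which the visual (lip-movement) stream dominates as the acoustic signal-to-noise ratio drops:

```python
import math

def fuse_modalities(audio_logprobs, visual_logprobs, audio_snr_db):
    """Late fusion of per-class log-probabilities from two modalities.

    The audio stream is down-weighted as its signal-to-noise ratio drops,
    so the visual (lip-movement) stream dominates in noisy environments.
    The logistic mapping of SNR to a weight is a hypothetical choice.
    """
    # Map SNR in dB to a weight in (0, 1): high SNR -> trust audio more.
    w_audio = 1.0 / (1.0 + math.exp(-audio_snr_db / 5.0))
    w_visual = 1.0 - w_audio
    fused = {
        label: w_audio * audio_logprobs[label] + w_visual * visual_logprobs[label]
        for label in audio_logprobs
    }
    # Return the class with the highest fused score.
    return max(fused, key=fused.get)

# Toy scores: the audio model favors "yes", the lip reader favors "no".
audio = {"yes": -0.1, "no": -2.3}
visual = {"yes": -2.3, "no": -0.1}
print(fuse_modalities(audio, visual, audio_snr_db=20.0))   # clean audio: "yes"
print(fuse_modalities(audio, visual, audio_snr_db=-20.0))  # heavy noise: "no"
```

Real audio-visual speech recognizers learn the fusion weights jointly with the feature extractors, but the principle of confidence-weighted combination is the same.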
Some of the multiple intelligences have already been explored in commercial AI systems. For example, the BAIDU AI Composer is used to compose creative music inspired by artistic paintings. To mention a few others, the AIVA (Artificial Intelligence Virtual Artist) system has musical intelligence with the ability to compose original music for films; the intelligent Atlas robot developed by Boston Dynamics possesses impressive bodily-kinesthetic (physical) intelligence; and DeepMind's AlphaGo, which employs Monte Carlo tree search combined with a reinforcement learning algorithm, possesses sophisticated logical-mathematical intelligence, playing the complex game of Go almost perfectly. However, as of now, none of these AI systems can perform two or more quite different cognitive tasks.
In current AI systems, six basic ways of learning are used extensively: supervised learning, unsupervised learning, semisupervised learning, reinforcement learning, ensemble learning, and deep learning (see Figure 7 for details). Particularly important and useful for our concepts and models are ensemble learning, deep learning algorithms, and deep reinforcement learning [5, 6, 16, 21, 50] (see also Figures 8–12 for more details).
6. AI Systems with Multiple Intelligences
We now provide a new categorization and working definitions of AI systems (multiagents) depending on their abilities, flexibility, and level/type of intelligence as follows (see also Figure 13).
AI with physical intelligence abilities (AI-PQ) is an AI system implemented not only in software but also physically in hardware (e.g., as an electronic neuromorphic chip), which can perform specific tasks online or in near real time with good physical efficiency, that is, low power consumption, high speed, low latency, robustness, and resilience to changing operating and environmental conditions (such as temperature, pressure, or humidity). Such an AI system should also have the ability to control and automatically optimize its power consumption depending on tasks and preferences.
AI with mental or intellectual abilities (AI-IQ) is a computerized AI system, which can perform some logical, mathematical, analytical, and/or verbal tasks with the abilities of analytical skills, logical reasoning, pattern recognition (the ability to relate or recognize multiple patterns or events), and/or the ability to store and retrieve information.
AI with emotional intelligence abilities (AI-EQ) is an AI system which possesses self-awareness, self-assessment, and self-regulation (or self-management). In other words, AI-EQ has the capacity to evaluate/assess its own performance. It should also exhibit reliable and robust performance on specific tasks, for example, robustness with respect to noisy, corrupted, and incomplete data sets (i.e., efficient treatment/processing of incomplete data). AI-EQ should also have the ability to self-assess its own performance depending on the noise level or incompleteness of the sensory data sets.
Remark 1. It should be noted that our AI-EQ should not be confused with emotional AI. Emotional AI systems refer to technologies that use affective computing and AI methods to sense, detect, and classify human emotions and behaviors. Affective computing, in general, is the study and development of AI systems and devices that can recognize, interpret, process, and simulate human affects. However, affective computing aims mostly to enable AI systems to “understand” the emotional states expressed by human subjects (see, e.g., [51, 52]). It should be noted that, in this paper, we consider a more general scenario, where AI-EQ is defined as an AI system, which possesses its own self-awareness, self-assessment, and self-management (self-regulation).
AI with social intelligence abilities (AI-SQ) is an AI system, which has the ability to interact and communicate with human and/or other AI subsystems (e.g., deep neural networks (DNNs), intelligent robots, and multiagents) and exchange information and knowledge and support each other. Moreover, such AI-SQ has the ability to coordinate, cooperate, and even collaborate with other AI subsystems (intelligent agents), for example, for the ensemble of DNNs which have the ability to not only communicate but also cooperate and/or collaborate to perform joint complex tasks in an optimized way.
6.5. AI-CQ and AI-INQ
AI with computational creativity and innovation (AI-CINQ) is an AI system that has the capacity to generate, implement, and evaluate novel products or outputs (e.g., images, music, or videos) which would, if produced by a human, be considered creative, and which have value and purpose or conform to common sense (see also Figure 14).
Creative intelligence (AI-CQ) involves the generation of novel and useful ideas, while innovative intelligence (AI-INQ) is concerned with the work required to make these ideas valuable; in other words, it entails the implementation of these ideas into new products or processes.
Summarizing, AI-CINQ (AI with Creativity and INnovation Quotient CINQ) is defined as an AI system that has the capacity of solving problems and/or generating new products, processes, or outputs by discovering and combining ideas and methods in a new way.
Remark 2. Creative and innovative solutions or ideas can be produced in several ways: (1) novel (nontrivial, unexpected) combinations of familiar ideas; (2) nonlinear or multilinear transformation of original data sets into higher-dimensional spaces, so that new structures can be generated, which could not have arisen before; and (3) generation of novel ideas by the exploration of structured conceptual spaces. Note that computational creativity (also known as artificial creativity, creative computing, or creative computation) is a closely related multidisciplinary endeavor that can be considered as the intersection of the fields of artificial intelligence, cognitive psychology, philosophy, and the arts [23, 26, 32, 43–45].
AI with ethical and moral intelligence (AI-MQ) is defined as an AI system that not only has the ability to judge its own actions and the actions of others (whether agents or humans) from the point of view of ethics but also has the executive power to make responsible decisions to prevent wrongdoing. In other words, AI-MQ should have not only some kind of self-awareness and the executive power to judge or assess what is “right” and “wrong” but also the ability to take action to do what is right. Of all the types of intelligence discussed in this paper, AI-MQ would be the most challenging to implement, yet the most valuable for humanity.
Remark 3. While ethics and morals both relate to “right” and “wrong” behaviors and are, therefore, often used interchangeably, we do differentiate between them in a substantial way: while morality is something normative but usually personal, ethics is the standard of “right and wrong” which is established by a certain community, culture, or social setting (e.g., codes of conduct in workplaces). In other words, ethics refer to rules provided by an external source, whereas morals refer to an individual’s own principles regarding what is “right” and “wrong” [27, 36, 37].
In all our working definitions, we assume that “insufficient knowledge and resources” are the typical working conditions for any real intelligent systems, along with the ability to adapt (according to the definition of intelligence by Pei Wang, see above) [3, 4]. Furthermore, an advanced AGI system may additionally have a metalearning (learning to learn) capability to improve gradually the learning algorithm itself, given the experience of multiple learning episodes [4, 5, 46]. It is interesting to note that, for example, high AI-PQ is necessary for agile robotic and manufacturing systems, while AI-IQ intelligence is needed in all mathematically formulated problem-solving systems.
7. AI with Social Intelligence (AI-SQ)
The main attribute, or characteristic, of AI-SQ is social interaction, which can be represented and realized through communication, coordination, cooperation, collaboration, and cocreative collaboration skills (the 5C skills; see Figure 15 for details).
Remark 4. Although words such as coordination, cooperation, and collaboration are often used interchangeably in the context of social interactions and effective teamwork, we must note substantial differences among them. Using these words interchangeably poses a risk of confusion, as well as diluting their meaning and diminishing the potential for designing desired learning styles by AI researchers and developers [2, 11, 34, 39, 43, 44].
Therefore, we provide here a categorization and working definitions of AI-SQ depending on interaction levels and performed tasks:

(a) AI with communication ability provides an efficient way for the exchange of information and raw data between intelligent agents.

(b) By AI with coordination ability, we understand the ability of multiagents to maintain some harmony and/or alignment among individual agents' efforts toward the accomplishment of specific common goals. Coordination can also be understood as a sequenced plan of actions to be performed by intelligent agents, delineating who will do what, when, and within what time duration.

(c) AI with cooperation intelligence is a network of multiagents or physical smart robots, where each individual agent/robot exchanges relevant information and resources in support of each other's goals, rather than a shared common goal. It is interesting to note that, in the case of cooperation, the result is created by individual/independent agent/robot efforts, rather than through a collective team effort. In such a case, subtasks for each individual agent/robot are separate, but with a well-understood and defined global task for the network of multiagents (see Figure 1(c)).

(d) AI with collaboration intelligence is characterized by the ability of multiagents to exchange not only information but also knowledge, and to work together and/or with humans to produce or create something in support of a shared task. In general, collaboration is the action of working together with someone to produce or create something. Intelligent agents should share a common goal or principle so as to contribute jointly to a specific task.

(e) AI with cocreative intelligence is a network of multiagents which has the ability to work together to produce something new, innovative, and even unexpected, which has value and purpose and follows common sense.
Such cocreative intelligence can be achieved by knowledge/expertise, experience, curiosity, exploration, flexibility, a strong motivation, prototyping, testing, and exchange of ideas via feedback, adaptation, and even wisdom for further improvements (see also Figure 14).
8. AI with Emotional and Social Intelligence
Social intelligence (SQ) can be considered an extension or a superset of emotional intelligence (EQ) since it is a much broader concept than emotional intelligence. In fact, in psychology, both types of intelligence are often integrated as EQ & SQ or briefly as ESI (emotional-social intelligence) [13, 15, 24, 28, 31, 53].
AI with emotional and social intelligence is referred to here as AI with EQ & SQ, comprising five fundamental abilities of an intelligent multiagent: self-awareness, self-management, social awareness, social (interaction) skills, and responsible decision-making skills. These skills would allow an AI multisystem with EQ & SQ not only to understand but also to manage and perform self-regulation and social interactions (see Figure 16 for details).
9. Cognitive Skills and Attentional Intelligence
There are several vital higher-order cognitive abilities for AI, encompassing different aspects of intellectual functions and processes, including perception (visual, auditory, tactile), attention (attending specific information and ignoring others), responsible inhibition (ability to suppress inappropriate responses), inference (i.e., a conclusion or idea reached on the basis of evidence and reasoning and/or the process of reaching such a conclusion), the formation of knowledge, pattern recognition, episodic memory (association of events with place and time), short-term and long-term memory, judgment and evaluation, reasoning and computation, planning, strategic problem solving, continual metalearning, responsible decision-making, and comprehension and generation of language (see Figure 18 for details).
In this battery of important cognitive abilities and skills as regards AI, complex attention, continual metalearning, and self-adaptation to the surrounding environment are the ones that will play key roles [7, 15, 16]. Attention in AI can be interpreted as a neural attention mechanism that, for example, equips an ensemble of deep neural networks with the ability to focus and perform a smart selection on a subset of their inputs (or features) [40–42, 56–58]. For example, an AI system with attention has the ability to automatically select specific inputs or a specific subset of stimuli or input data (e.g., some specific patches of images or specific frequency of audio signals), in order to solve a problem more efficiently and/or more robustly with respect to noise or outliers.
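The selection mechanism just described can be sketched as minimal dot-product attention, in which each input is weighted by its similarity to a query vector; the vectors and dimensions below are invented for illustration, and real systems use learned projections over high-dimensional features:

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attend(query, keys, values):
    """Minimal dot-product attention: weight each value by the similarity
    of its key to the query, so the system 'focuses' on relevant inputs."""
    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    weights = softmax(scores)
    dim = len(values[0])
    # Weighted sum of the value vectors.
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(dim)]

# The query matches the second key best, so the output is pulled
# toward the second value vector (i.e., attention "selects" that input).
query = [0.0, 1.0]
keys = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
values = [[10.0], [20.0], [15.0]]
out = attend(query, keys, values)
print(round(out[0], 2))  # a weighted average dominated by 20.0
```

This soft selection is differentiable, which is what allows attention weights in deep networks to be learned end to end rather than hand-set.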
Drawing on research in cognitive science, we can say that humans use at least four main types of attention in daily life: selective attention, divided attention, sustained attention, and executive attention. All of these “attentions” can in principle be implemented and employed in AGI systems [40–42, 56–58]. Selective attention is the ability to focus or concentrate on a task even when some distractions are present (e.g., noise, outliers, changing environmental conditions) (see Figure 19 for more details). Alertness is a state of being ready to react immediately to a specific stimulus, while the attention mechanism in AI focuses mainly on a certain specific part of the information, stimuli, or training data when processing a large amount of raw information. Spatial attention is a form of visual attention that involves directing attention to a location in 2D or 3D space, while temporal attention is a special case of attention (e.g., auditory attention) that involves directing attention to specific instants of time. The essence of AI with temporal attention is to flexibly focus on time in order to recognize temporal (e.g., rhythmic) patterns. An attention-switching task is a paradigm requiring an AI system to switch between performing multiple different individual tasks; it can be interpreted as a perceptual-cognitive function that involves the ability to unconsciously shift attention between one task and another. Divided attention is a type of simultaneous focus that allows AI systems to process different information sources and efficiently perform multiple tasks simultaneously, while executive attention refers to the ability to control responses, particularly in conflict situations where several responses are possible. Interference suppression is a mechanism for ignoring some salient perceptual information in a bivalent task while attending to the less salient conflicting information.
On the other hand, inhibition involves the ability to avoid further processing of stimuli or information, which could or should be ignored. Supervisory attentional control is a higher-level cognitive mechanism active in nonroutine or novel situations; it requires conscious control in response to specific environmental stimuli and uses flexible strategies to solve a variety of difficult conflicting problems. Meta-attention or metafocus consists of regulation of attention and knowledge of attention (i.e., noticing where AI-system focus is directed and self-awareness of employing specific strategies) so that it keeps its attention focused on the task at hand. Metamemory is awareness of memory strategies that work best for the AI system. Metaperception or metasensing means noticing what the AI system is sensing/measuring or “feeling,” and finally, the most sophisticated metacognition involves self-awareness of the strategy an AI system is using to learn to perform specific tasks and evaluating whether this strategy is sufficiently effective for specific tasks (see Figure 19).
Note that selective attention occurs when awareness (whether visual, auditory, or tactile) is channeled onto something specific or focused on relevant targets, while divided attention occurs when the mental focus is directed toward multiple tasks or ideas at once. On the other hand, sustained attention is the ability of an AI to attend to a task continuously for an extended period, and executive attention refers to the ability to regulate responses or decisions, particularly in situations of conflict or when an AI receives confusing and contradictory stimuli. When utilizing executive attention in such a conflict setting, where several inconsistent responses are possible, a human being or an AI system should have the ability to regulate its responses accordingly. In general, attention can be considered as focused self-awareness, attracted to a selected range of features of specific stimuli such as images, sounds, and words. Attentional intelligence (AI-AQ) (see Figure 19 for details) is closely associated with the efficient processing of information and knowledge, and it plays a key role in human intelligence (cf. [39–42, 56–58]).
Remark 5. It is interesting to note that many features of the proposed PQ, IQ, EQ, SQ, CQ, INQ, AQ, and MQ types of AI intelligence, especially self-control, self-awareness, social emotions, attention, responsible decision-making, ethical awareness, and moral-ethical responsibility, are associated with the Big Five personality traits of humans, also known as the five-factor model (FFM) or the OCEAN model [25, 30, 31] (see Figure 20): openness (tendency toward creativity, curiosity, and imagination), conscientiousness (tendency to be diligent, responsible, and self-aware), extraversion (tendency toward sociability, assertiveness, and emotional expressiveness), agreeableness (tendency toward being collaborative and reliable), and emotional stability (tendency toward robust and stable behavior/performance and emotions, self-regulation, and resilience).
10. Conclusions and Discussion
There are various definitions of human intelligence, machine intelligence, and creativity that have been developed and refined over years of discussion, dispute, rewording, and reworking among psychologists, philosophers, neuroscientists, and cognitive and computer scientists [4, 7].
Since AI research is inspired by human intelligence, we believe that multiple intelligences and corresponding learning styles will play an important role in the research, development, and evaluation of a new generation of distributed AI/AGI systems with "a human face." Furthermore, in many specific applications of AI, for example, in biomedical applications, an extremely vast diversity of knowledge and cognitive skills is required, and therefore, many different forms of cognitive skills and/or intelligences could potentially be useful.
Although current state-of-the-art AI systems already exploit and mimic some types of human intelligence, emotional, social, attentional, and moral-ethical intelligences are still not implemented to their full potential. For example, current AI systems can, to some extent, detect and recognize human emotions, but so far they do not possess the self-awareness, self-management, self-assessment, social awareness, and social skills needed to interact with other agents efficiently. Furthermore, current AI systems still have quite limited cognitive skills in other domains and are not yet able to perform intelligent and responsible decision-making.
The main objective of this paper is to consider AGI systems with a more "human face." Emotional-social intelligence, creative-innovative intelligence, attentional intelligence, and moral-ethical intelligence related to responsible decision-making are the essence of human relationships and are essential for effective teamwork and social coexistence. We therefore expect more research and development of AI systems that can meaningfully interact with users socially, "understand" their behaviors and abilities, and even understand (to some extent) the theory of human minds, including complex cognitive tasks, emotions, and human social interactions.
In this paper, we attempted to categorize various AI systems depending on their abilities, learning styles, and learning algorithms. The essential purpose of our categorization of AI systems (and the corresponding working definitions) is to make them as useful, inspiring, and insightful as possible, for the following reasons:
(a) They explain what kind of features or components each specific AI system should have, and they have some explanatory power, which may lead to progress not only in AI but also in computational neuroscience.
(b) They not only categorize AI systems but also allow us to measure their degree of intelligence. If there are different kinds of intelligence, we need some taxonomy for identifying the kind of intelligence possessed by a system (if any) and quantitatively comparing it to that of other AI systems.
(c) They could serve as a guide to measure progress and/or to demonstrate potential in the development of a new generation of AI systems. Here, a key point is to measure cognitive skills, flexibility, and metalevel learning capability, rather than only concrete problem-solving capability.
(d) Furthermore, they allow us to formulate, explicitly or implicitly, new challenging subproblems in AI, according to the motto "a problem well-stated is half solved."
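As a toy illustration of the idea of quantitatively comparing AI systems across the multiple intelligence dimensions discussed in this paper, one could start from a simple profile structure. The scoring scheme, example systems, and score values below are hypothetical, a minimal sketch rather than a proposed metric:

```python
from dataclasses import dataclass, field

# The eight intelligence dimensions named in the paper; the [0, 1]
# scores and the aggregation rule are hypothetical illustrations.
DIMENSIONS = ("PQ", "IQ", "EQ", "SQ", "CQ", "INQ", "AQ", "MQ")

@dataclass
class IntelligenceProfile:
    name: str
    scores: dict = field(default_factory=dict)  # dimension -> score in [0, 1]

    def overall(self):
        """Mean score over all dimensions; missing dimensions count as 0."""
        return sum(self.scores.get(d, 0.0) for d in DIMENSIONS) / len(DIMENSIONS)

    def dominant(self):
        """Dimension where the system is strongest, if any score is given."""
        return max(self.scores, key=self.scores.get) if self.scores else None

chatbot = IntelligenceProfile("dialogue agent", {"IQ": 0.7, "EQ": 0.3, "SQ": 0.4})
robot = IntelligenceProfile("service robot", {"PQ": 0.8, "IQ": 0.5, "AQ": 0.6})

print(chatbot.name, round(chatbot.overall(), 3), chatbot.dominant())
print(robot.name, round(robot.overall(), 3), robot.dominant())
```

A profile like this keeps the taxonomy's key point visible: two systems are compared dimension by dimension, not collapsed into a single number.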
However, it is neither necessary nor practically useful to attempt to design current, practically oriented AI systems that simulate or mimic exactly all human multiple intelligences; this is neither feasible nor realistic. Rather, it is desirable and expected that the next generation of AI systems will have intelligence that is complementary to and/or augments existing human multiple intelligences. This concept is related to the recently introduced AI-augmented intelligence, in which AI works together with humans to enhance cognitive performance, including learning, decision-making, and forming new experiences. Intelligent augmentation will use and integrate human multiple intelligences, together with more advanced cognitive skills and computational technologies, with the main objective not of replacing humans but of assisting them and enhancing their capacities. For example, an AI multiagent with emotional-social intelligence would be able to analyze social cues and human interactions so as to enhance human team collaboration. Another example could be AI agents that collaborate with human game players in e-sports to complete custom-designed missions.
Data Availability
No data were used to support this study.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Acknowledgments
This research was supported by the Ministry of Education and Science of the Russian Federation (grant 14.756.31.0001).
- C. B. Shearer and J. M. Karanian, “The neuroscience of intelligence: empirical support for the theory of multiple intelligences?” Trends in Neuroscience and Education, vol. 6, pp. 211–223, 2017.
- N.-N. Zheng, Z.-Y. Liu, P.-J. Ren et al., “Hybrid-augmented intelligence: collaboration and cognition,” Frontiers of Information Technology & Electronic Engineering, vol. 18, no. 2, pp. 153–179, 2017.
- P. Wang, “On defining artificial intelligence,” Journal of Artificial General Intelligence, vol. 10, no. 2, pp. 1–37, 2019.
- D. Monett, C. W. Lewis, and K. R. Thórisson, Eds., “Introduction to the JAGI special issue “on defining artificial intelligence”—commentaries and author’s response,” Journal of Artificial General Intelligence, vol. 11, no. 2, pp. 1–100, 2020.
- J. Schmidhuber, “Deep learning in neural networks: an overview,” Neural Networks, vol. 61, pp. 85–117, 2015.
- J. Schmidhuber, “Deep learning: our miraculous year 1990-1991,” 2020, https://arxiv.org/abs/2005.05744.
- H. S. Paik, One Intelligence or Many? Alternative Approaches to Cognitive Abilities, Washington University, St. Louis, MO, USA, 1998.
- H. E. Gardner, Intelligence Reframed: Multiple Intelligences for the 21st Century, Hachette UK, London, UK, 2000.
- H. Gardner and S. Moran, “The science of multiple intelligences theory: a response to Lynn Waterhouse,” Educational Psychologist, vol. 41, no. 4, pp. 227–232, 2006.
- L. S. Almeida, M. D. Prieto, A. I. Ferreira, M. R. Bermejo, M. Ferrando, and C. Ferrándiz, “Intelligence assessment: Gardner multiple intelligence theory as an alternative,” Learning and Individual Differences, vol. 20, no. 3, pp. 225–230, 2010.
- A. L. Guzman and S. C. Lewis, “Artificial intelligence and communication: a Human-Machine Communication research agenda,” New Media & Society, vol. 22, no. 1, pp. 70–86, 2020.
- M. H. Jarrahi, “Artificial intelligence and the future of work: human-AI symbiosis in organizational decision making,” Business Horizons, vol. 61, no. 4, pp. 577–586, 2018.
- W. S. Bainbridge, E. E. Brent, K. M. Carley et al., “Artificial social intelligence,” Annual Review of Sociology, vol. 20, no. 1, pp. 407–436, 1994.
- R. De Berker, “Artificial intelligence: distinguishing between types & definitions,” Nevada Law Journal, vol. 19, no. 3, p. 9, 2019.
- T. J. Wiltshire, S. F. Warta, D. Barber, and S. M. Fiore, “Enabling robotic social intelligence by engineering human social-cognitive mechanisms,” Cognitive Systems Research, vol. 43, pp. 190–207, 2017.
- F. L. D. Silva and A. H. R. Costa, “A survey on transfer learning for multiagent reinforcement learning systems,” Journal of Artificial Intelligence Research, vol. 64, pp. 645–703, 2019.
- I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning, vol. 1, MIT Press, Cambridge, MA, USA, 2016.
- Q. Shi, J. Yin, J. Cai et al., “Block Hankel tensor ARIMA for multiple short time series forecasting,” in Proceedings of the AAAI, pp. 5758–5766, New York, NY, USA, February 2020.
- T. Yokota, H. Hontani, Q. Zhao, and A. Cichocki, “Manifold modeling in embedded space: a perspective for interpreting deep image prior,” IEEE Transactions on Neural Networks and Learning Systems, 2020.
- A. H. Phan and A. Cichocki, “Tensor decompositions for feature extraction and classification of high dimensional datasets,” Nonlinear Theory and its Applications, IEICE, vol. 1, no. 1, pp. 37–68, 2010.
- T. T. Nguyen, N. D. Nguyen, and S. Nahavandi, “Deep reinforcement learning for multiagent systems: a review of challenges, solutions, and applications,” IEEE Transactions on Cybernetics, vol. 50, no. 9, 2020.
- P. Hernandez-Leal, M. Kaisers, T. Baarslag, and E. M. De Cote, “A survey of learning in multiagent environments: dealing with non-stationarity,” 2017, https://arxiv.org/abs/1707.09183.
- W. Duch, R. J. Oentaryo, and M. Pasquier, “Cognitive architectures: where do we go from here?” Artificial General Intelligence–AGI, vol. 171, pp. 122–136, 2008.
- A. V. Samsonovich, “Socially emotional brain-inspired cognitive architecture framework for artificial intelligence,” Cognitive Systems Research, vol. 60, pp. 57–76, 2020.
- D. M. Higgins, J. B. Peterson, R. O. Pihl, and A. G. M. Lee, “Prefrontal cognitive ability, intelligence, Big Five personality, and the prediction of advanced academic and workplace performance,” Journal of Personality and Social Psychology, vol. 93, no. 2, pp. 298–319, 2007.
- W. Duch, “Intuition, insight, imagination and creativity,” IEEE Computational Intelligence Magazine, vol. 2, no. 3, pp. 40–52, 2007.
- D. Lennick and F. Kiel, Moral Intelligence: Enhancing Business Performance and Leadership Success, Pearson Prentice Hall, Upper Saddle River, NJ, USA, 2007.
- F.-Y. Wang, K. M. Carley, D. Zeng, and W. Mao, “Social computing: from social informatics to social intelligence,” IEEE Intelligent Systems, vol. 22, no. 2, pp. 79–83, 2007.
- L. Waterhouse, “Multiple intelligences, the Mozart effect, and emotional intelligence: a critical review,” Educational Psychologist, vol. 41, no. 4, pp. 207–225, 2006.
- V. Swift, K. E. Wilson, and J. B. Peterson, “Zooming in on the attentional foundations of the big five,” Personality and Individual Differences, vol. 164, Article ID 110000, 2020.
- J. M. Caemmerer, T. Z. Keith, and M. R. Reynolds, “Beyond individual intelligence tests: application of Cattell-Horn-Carroll theory,” Intelligence, vol. 79, Article ID 101433, 2020.
- J. Lehman and K. O. Stanley, “Abandoning objectives: evolution through the search for novelty alone,” Evolutionary Computation, vol. 19, no. 2, pp. 189–223, 2011.
- B. Y. Bartholomew, “Why AI will never surpass human intelligence,” Journal of Consciousness Exploration & Research, vol. 11, no. 3, 2020.
- M. Beldoch and J. R. Davitz, The Communication of Emotional Meaning, McGraw-Hill, New York, NY, USA, 1964.
- D. Goleman, Emotional Intelligence, Bantam Books, New York, NY, USA, 2006.
- A. Jobin, M. Ienca, and E. Vayena, “The global landscape of AI ethics guidelines,” Nature Machine Intelligence, vol. 1, no. 9, pp. 389–399, 2019.
- B. Brożek and B. Janik, “Can artificial intelligences be moral agents?” New Ideas in Psychology, vol. 54, pp. 101–106, 2019.
- R. J. Sternberg and R. Kostić, Eds., Social Intelligence and Nonverbal Communication, Palgrave Macmillan, London, UK, 2020.
- A. Zadeh, P. P. Liang, S. Poria, P. Vij, E. Cambria, and L. P. Morency, “Multi-attention recurrent network for human communication comprehension,” in Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, New Orleans, LA, USA, February 2018.
- G. Nayak, R. Ghosh, X. Jia, V. Mithal, and V. Kumar, “Semi-supervised classification using attention-based regularization on coarse-resolution data,” in Proceedings of the 2020 SIAM International Conference on Data Mining, pp. 253–261, Cincinnati, OH, USA, May 2020.
- S. Chaudhari, G. Polatkan, R. Ramanath, and V. Mithal, “An attentive survey of attention models,” 2019, https://arxiv.org/abs/1904.02874.
- A. Galassi, M. Lippi, and P. Torroni, “Attention in natural language processing,” 2019, https://arxiv.org/abs/1902.02181.
- A. Ecoffet, J. Clune, and J. Lehman, “Open questions in creating safe open-ended AI: tensions between control and creativity,” in Proceedings of the Artificial Life Conference, pp. 27–35, Montréal, Canada, July 2020.
- M. A. Boden, “Creativity and artificial intelligence,” Artificial Intelligence, vol. 103, no. 1-2, pp. 347–356, 1998.
- B. M. Lake, T. D. Ullman, J. B. Tenenbaum, and S. J. Gershman, “Building machines that learn and think like people,” Behavioral and Brain Sciences, vol. 40, 2017.
- T. Hospedales, A. Antoniou, P. Micaelli, and A. Storkey, “Meta-learning in neural networks: a survey,” 2020, https://arxiv.org/abs/2004.05439.
- S. Lavelle, “The machine with a human face: from artificial intelligence to artificial sentience,” in Proceedings of the International Conference on Advanced Information Systems Engineering, pp. 63–75, Grenoble, France, June 2020.
- B. A. Richards, T. P. Lillicrap, P. Beaudoin et al., “A deep learning framework for neuroscience,” Nature Neuroscience, vol. 22, no. 11, pp. 1761–1770, 2019.
- Y. Li, F. Wang, Y. Chen, A. Cichocki, and T. Sejnowski, “The effects of audiovisual inputs on solving the cocktail party problem in the human brain: an fMRI study,” Cerebral Cortex, vol. 28, no. 10, pp. 3623–3637, 2018.
- L. Lei, Y. Tan, K. Zheng, S. Liu, K. Zhang, and X. Shen, “Deep reinforcement learning for autonomous internet of things: model, applications and challenges,” IEEE Communications Surveys & Tutorials, vol. 22, no. 3, pp. 1722–1760, 2020.
- W. L. Zheng, W. Liu, Y. Lu, B. L. Lu, and A. Cichocki, “Emotionmeter: a multimodal framework for recognizing human emotions,” IEEE Transactions on Cybernetics, vol. 49, no. 3, pp. 1110–1122, 2018.
- S. Valenzi, T. Islam, P. Jurica, and A. Cichocki, “Individual classification of emotions using EEG,” Journal of Biomedical Science and Engineering, vol. 7, no. 8, pp. 604–620, 2014.
- L. Rosenberg, G. Willcox, D. Askay, L. Metcalf, and E. Harris, “Amplifying the social intelligence of teams through human swarming,” in Proceedings of the 2018 First IEEE International Conference on Artificial Intelligence for Industries (AI4I), pp. 23–26, Laguna Hills, CA, USA, September 2018.
- Y. Bengio, I. Goodfellow, and A. Courville, “Deep learning for AI,” in Proceedings of the Invited Talk at AAAI, New York, NY, USA, February 2020.
- A. Kear and S. L. Folkes, “A solution to the hyper complex, cross domain reality of artificial intelligence: the hierarchy of AI,” International Journal of Advanced Computer Science and Applications, vol. 11, no. 3, pp. 49–59, 2020.
- X. Li, W. Zhang, and Q. Ding, “Understanding and improving deep learning-based rolling bearing fault diagnosis with attention mechanism,” Signal Processing, vol. 161, pp. 136–154, 2019.
- K. Schweizer, H. Moosbrugger, and F. Goldhammer, “The structure of the relationship between attention and intelligence,” Intelligence, vol. 33, no. 6, pp. 589–611, 2005.
- L. Pillette, A. Cichocki, B. N’Kaoua, and F. Lotte, “Toward distinguishing the different types of attention using EEG signals,” 2018, https://hal.inria.fr/hal-01762978/document.
Copyright © 2021 Andrzej Cichocki and Alexander P. Kuleshov. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.