Abstract

Design mimetics is an important method of creation in technology design. Here, we review design mimetics as a plausible approach to address the problem of how to design generally intelligent technology. We argue that design mimetics can be conceptually divided into three levels based on the source of imitation. Biomimetics focuses on the structural similarities between systems in nature and technical solutions for solving design problems. In robotics, the sensory-motor systems of humans and animals are a source of design solutions. At the highest level, we introduce the concept of cognitive mimetics, in which the source for imitation is human information processing. We review and discuss some historical examples of cognitive mimetics, its potential uses, methods, levels, and current applications, and how to test its success. We conclude with a practical example showing how cognitive mimetics can be a highly valuable complementary approach to pattern-matching- and machine-learning-based design of artificial intelligence (AI) for solving specific human-AI interaction design problems.

1. Introduction

Mimetic design is an important design methodology. It refers to technology design in which designers imitate some existing phenomenon or system to generate new technological solutions. The paragons can be anything, but often they are phenomena or systems of nature [1]. Yet, mimicking is not necessarily a simple concept. Design mimetics has often focused on the structural and physical similarity between entities of nature and technical solutions. However, structural and physical similarity between the source and the idea may not be sufficient for getting the best out of mimicking. There are classes of design problems that are neither structural nor physical, for which designers could still find valuable new ideas by studying possible solutions via mimicry. In this paper, our goal is to reanalyze design mimetics at a conceptual level in order to explicate the ways it can serve as an approach for addressing the problem of how to design technological solutions with artificial general intelligence [2].

Designing intelligent systems is becoming a core area in developing modern technologies [3]. Machine translation, image and speech recognition systems, self-driving cars, chatbots, and robot help desks are examples of current technological trends. All of these intelligent (or “smart”) technologies are enabled by artificial intelligence (AI) based on neural networks, pattern recognition, and machine learning. The consequence of these developments is that computers are increasingly replacing or reallocating people in tasks that, until now, have required human involvement to make the systems work. Nevertheless, true progress in this area presupposes an in-depth understanding of the human cognitive processes that are to be replaced by machines. Therefore, it makes sense to rethink the conceptual foundations of design mimetics in this new technological era.

A naïve but intuitive example might help to clarify our position. Consider the way in which design mimetics could be used in designing a cyborg pianist. The first problem is to create the hands that play the piano. They must be like human hands with respect to size and elasticity of movement. They should command the right pressure, timing, and tempo to play like Lang Lang (a well-known Chinese concert pianist). The robot pianist’s coordination should mimic the sensory-motor processes of a human pianist: it should be able to hear the notes and respond accordingly.

A critical question here is whether the imitation of human hands and eye-hand coordination processes is sufficient to express all human skills in piano playing. Skilled pianists often use their hands to play the keyboard in a routine manner, but they also have to solve unique and complex problems. They have to rely on higher cortical processes such as categorization, inference, decision making, problem solving, and constructive thinking in order to create new artistic visions and interpretations [4].

In building intelligent systems such as autonomous robots, it is thus not necessarily sufficient to mimic biological structures; it is also necessary to mimic sensory-motor processes and even different levels of higher intellectual processes. To explain and mimic a human expert’s skills, one has to go beyond the mechanical and sensory-motor levels, even towards emotional modeling, to be able to comprehend and model a creative and skilled pianist. Our common-sense example could have been about any human expert at work. The key message is that, to replace the activities of human experts with machine intelligence, it is necessary to model and mimic many levels of physical and intellectual work.

In this paper, we will argue for a three-level conceptual model of design mimetics and introduce the novel concept of cognitive mimetics to refer to the mimicry of higher cognitive processes in designing intelligent technology. The main goal of the paper is to review and argue for the need for cognitive mimetics in designing artificial general intelligence [2] and intelligent human-AI interactions. In a world dominated by pattern recognition and machine learning approaches to AI, we suggest that mimicking actual cognitive processes in AI design deserves further consideration. Cognitive mimetics may prove to be an increasingly important design approach, as we can expect to encounter novel interaction challenges in our confrontations with ubiquitous AI in the near future [3].

2. Three Levels of Mimetic Design

New design ideas often utilize existing solutions to problems of similar types. The main attribute of mimicry is that the solution to a design problem is found by imitating some existing object or system. The phenomenon that serves as the paragon for the design solution can be called the source, and the resulting design solution the idea. Mimicry has always been a component of engineering thinking [5, 6]. The Wright brothers, as well as Leonardo da Vinci, observed how birds flew and imitated them in order to design flying machines [7]. One can easily find numerous examples of design processes in which some aspect of nature has been mimicked to create new technological solutions. In the 1950s, this kind of design was given its own label; as the source was often nature, the approach was coined biomimetics by Otto Schmitt [8]. Ever since, it has had a solid role in engineering. We argue that there are three main levels of design mimetics that can be utilized when designing intelligent technology. In the following subsections, we will briefly discuss each level of mimetic design.

2.1. Mimicking Structural Similarities

An effective way of approaching the problem of replacing human capacities at work has been biomimetics (also biomimicry or bionics). It is an engineering paradigm based on the imitation of the models, systems, and elements of nature for the purpose of solving complex technological problems [1, 6, 7]. A traditional part of biomimetics is based on imitating the processes of nature on the physical level. For example, bird wings were models and inspiration for the design of airplane wings, which enabled airplanes, and thus people, to fly. The focus of biomimetics is thus more on designing a physical object than on replacing people with machines.

Biomimetics has been a very successful way to ideate new technological solutions, from the nanometric level to large technical structures. The ways evolution has “solved” construction problems can be applied in the technological sphere, though the solutions need not be identical. For example, the wings of man-made flying artifacts are not exactly bird wings, but one can still find many analogical properties. There are numerous examples of working technical solutions whose design processes have been based on biomimetics. Robots may resemble ants or tortoises. Many fabrics have their origins in studies of biological organisms. Connectionist computational models of artificial intelligence have been inspired by neural networks [9]. Even recently, designers have continued to learn from bird wings when designing airplanes and drones [10]. In photonics, engineers were inspired by the reflective properties of butterfly wings when they invented new display technologies [11].

Biomimetics illustrates some important properties of mimicry in design. Firstly, imitation is an important source of ideas in design thinking. Secondly, the source and the idea are not identical; rather, they exist in dialogue with one another. Consequently, much of the information required to generate the final solution is not directly related to the source. Thus, the metals and rivets in airplane wings have little to do with bird wings. Furthermore, the solution does not need to be as efficient as the source; instead, it may improve on the original performance. Caterpillar excavators are far more efficient at working soil than human hands. The point of mimicry is to advance design thinking by finding key solutions.

On closer inspection, it is questionable how well biomimetics suits the purposes of designing intelligent technologies. It concentrates on structural and physical solutions to design problems, such as the structures of robots, architectural solutions, or the properties of materials and molecules. Structural similarities are not necessarily sufficient for innovating modern intelligent technologies.

2.2. Mimicking Sensory-Motor Processes

The story of industrial robotics is, for the most part, very different from that of mimicking structural similarities. The goal of robots is to replace people in tasks where, for instance, human labor is not necessary or the conditions are dangerous for humans. From a mimicry perspective, a substantial amount of industrial robotics models the sensory-motor systems of humans. Today, for instance, dexterity is one of the key problems in robotics, as solving it would give robots new application areas [12].

A welding robot recognizes the metal body of a car, moves to the right welding spot, and finally welds the pieces together. In effect, the robot does everything that a welder would do, and for this reason, it is possible to free people from many routine welding tasks. There are a large number of robots in similar tasks: they can pack things, handle mail parcels, or operate in harbors, to take some examples. People previously carried out these tasks because it was not possible to build sufficiently accurate robots. Computers have made it possible to reach sufficient accuracy in sensory-motor processing and to make these kinds of robots work.

The first robotic arms were created to replace human arms; in this sense, they imitate a human arm. However, there is more to them: they have sensors and programs that control their behavior. Wiener’s [13] theories of cybernetics and control were important in creating the first-generation automation robots in the sixties. These had sensors, and some versions, such as Grey Walter’s tortoises, could even wander around in one’s apartment [14]. From the mimicking point of view, they had a new kind of property: they could process information.

Early industrial robots carried out tasks in which human operators relied on their sensory-motor information processing. In welding cars on conveyor belts, the robots are not supposed to think. They just carefully inspect the parts they need to join and then weld them together. Only fault situations require more complex actions, but these are typically rare and are often handled by people. Thus, one only needs to coordinate sensory information with movements to carry out these kinds of tasks. Nevertheless, in order to construct such robots, one needs to imitate human sensory-motor information processing in addition to biological structures. This kind of mimicry is qualitatively different from traditional biomimetics.

Imitation of sensory-motor processes represents only the lowest level of information processing mimetics. Machine vision is not only about enabling the machine to see things but also about actively recognizing objects. The recent developments in artificial intelligence and the rise of autonomous technologies call attention to yet another kind of information processing mimetics, one based on imitating higher cognitive processes such as thinking.

2.3. Mimicking Higher Cognitive Processes

Human information processing is an interesting source for imitation. It is clearly different from structural biomimetics and also from sensory-motor mimetics, although the notion of embodied cognition [15] somewhat blurs the distinction between these levels of mimetics. There are many possible names for this level of design mimetics. Perhaps the most logical term is cognitive mimetics, as the source is human information processing, which has traditionally been called cognition [16]. Thus, all types of mimetics built on the idea of imitating human information processing can be called cognitive mimetics, in order to distinguish this form of mimetics from the lower levels of design mimetics.

As long as designers are improving the physical and physiological aspects of traditional human work, such as sensory-motor processes on assembly lines, by means of industrial robotics, they can rely on imitating and improving biological sensors and body movements. However, moving the scope of traditional biomimetics to the design of autonomous, intelligent technologies entails an essential change of focus: a shift from mimicking physical structures or sensory-motor processes to mimicking higher cognitive processes. Instead of mimicking the physical movement of body limbs, which is common in assembly-line robots, it is becoming increasingly important to mimic human intelligence and higher cognitive processes. Mental processes such as language comprehension and production, categorization, decision making, inference, problem solving, and constructive thinking will become more important in design mimetics.

Cognitive processes are important for human survival, and they make it possible for people to behave in a flexible and creative manner. People can respond selectively and rationally to situations they have never encountered before (i.e., general intelligence). This distinguishes people from many animals, as people are able to adapt more efficiently and intentionally to new environmental conditions and can invent new ways of meeting environmental demands. Thus, cognitive processes give people much more independence from environmental variation than other animals have.

When the goal of technology is to replace human intellectual performance, understanding cognitive processes and using this knowledge become important in design. In particular, artificial intelligence and robots can benefit from an understanding of human cognition and information processing, as these systems will be cooperating more and more with people in novel sociotechnical systems. Generally intelligent autonomous systems are technical devices that can flexibly and rationally respond to stimuli and environmental situations that they have not met before or that have not been programmed in advance. Thus, the stimulus-independence typical of the human mind should be one of the main criteria for the (human-like) intelligence of autonomous systems (i.e., artificial general intelligence [2]). In contrast to biomimetics, cognitive mimetics concentrates on analyzing human information processes and building intelligent systems on the grounds of modeling how people process information.

The three levels of design mimetics based on the source of imitation are presented in Table 1. Next, we will take a closer look at cognitive mimetics and argue further for its importance in designing intelligent technology.

3. Cognitive Mimetics

3.1. A Brief History of Mimicking Human Cognition

Cognitive mimetics is a unique design conception. However, it is based on the very core knowledge of modern cognitive science. Perhaps the first example of mimicking human information processing is Turing’s [17, 18] model of a mathematician, that is, the Turing machine. The idea led to the birth of computers and information technology. The core of Turing’s idea was to construct a model, or imitation, of how mathematicians solve mathematical problems. Thus, his focus was not on the structural aspects or sensory-motor processes of people but on how they process information. Turing’s insights led to a number of important prototypical ways of thinking, which can be seen as the first examples of cognitive mimetics.

An excellent early example of cognitive mimetics can be found in game-playing algorithms. Game playing became one of the first challenges in designing technical systems that had some resemblance to the human mind [19]. The challenge was set in the early fifties by Turing [17] and Shannon [19]. De Groot [20] collected chess players’ thinking-aloud protocols and noticed how they used pruned tree searches. Similarly, early AI researchers suggested that heuristic tree search might be a solution that machines could use in solving search problems [21]. Consequently, through heuristic search, chess was used for forty years as a context for developing human-like artificial intelligence. In this example, the way human chess players processed information became the model for machine information processing. Thus, it was not any feature of a machine or any biological property of the human brain but the way people process information that became the model for this class of AI systems. Tree search was a process sufficiently similar between chess-playing computers and human chess players, making it possible to use cognitive mimicry to develop machines capable of intellectual tasks that had previously been possible only for people [22]. As is well known, a computer chess program finally beat the chess world champion in 1997. More recent examples can be seen in IBM Watson’s [23] victory over two human experts in the Jeopardy game show in 2011 and in Google DeepMind’s victory over a top professional player in the Chinese game of Go in 2016 [24].
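To make the shared process concrete, the following is a minimal sketch of pruned game-tree search: minimax with alpha-beta pruning, the textbook form of the heuristic tree search discussed above. The game interface assumed here (is_terminal, evaluate, get_moves, apply) is a hypothetical stand-in for any two-player zero-sum game; real chess programs add far more elaborate heuristics and move ordering.

```python
# Minimal minimax search with alpha-beta pruning. The `game` object is a
# hypothetical interface: is_terminal(state), evaluate(state) -> heuristic
# score from the maximizing player's view, get_moves(state), and
# apply(state, move) -> next state.

def alphabeta(state, depth, alpha, beta, maximizing, game):
    if depth == 0 or game.is_terminal(state):
        return game.evaluate(state)
    if maximizing:
        value = float("-inf")
        for move in game.get_moves(state):
            value = max(value, alphabeta(game.apply(state, move),
                                         depth - 1, alpha, beta, False, game))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # prune: the opponent will never allow this branch
        return value
    else:
        value = float("inf")
        for move in game.get_moves(state):
            value = min(value, alphabeta(game.apply(state, move),
                                         depth - 1, alpha, beta, True, game))
            beta = min(beta, value)
            if alpha >= beta:
                break  # prune symmetrically for the minimizing player
        return value
```

The pruning step is the mimetic element: like de Groot’s players, the search commits resources only to branches that could still matter, rather than exhaustively enumerating the whole tree.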

A few years after Turing’s and Shannon’s game-playing programs, in 1956, John McCarthy coined the term artificial intelligence (AI) to describe a new field of engineering [25]. At that time, one could find a number of important systems that to some degree mimicked human information processing. The Logic Theorist, checkers-playing programs, and transformational grammar and the related computational linguistics can be taken as examples [26].

In the early 1940s, another important line of cognitive mimicry began. McCulloch and Pitts developed a formal model of the neuron, which later led to the Perceptron and finally to the fields of neurocomputing and connectionism [9, 27, 28]. These approaches to AI were developed on the grounds of mimicking (at a highly reduced level) how human nerve cells and neural networks operate [27].
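As an illustration of how reduced this level of mimicry is, the sketch below implements a single perceptron, the classic threshold unit descended from the McCulloch-Pitts formal neuron, trained on the logical AND function with the standard perceptron learning rule. The toy data and learning rate are illustrative choices, not drawn from the cited works.

```python
# A single perceptron learning the logical AND function. All values are
# toy illustrations of the perceptron learning rule.

inputs  = [(0, 0), (0, 1), (1, 0), (1, 1)]
targets = [0, 0, 0, 1]                       # AND truth table
w, b, lr = [0.0, 0.0], 0.0, 0.1              # weights, bias, learning rate

def predict(x):
    # Threshold unit: fire (1) if the weighted sum exceeds zero.
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for _ in range(20):                          # a few training epochs suffice
    for x, t in zip(inputs, targets):
        error = t - predict(x)               # perceptron learning rule
        w[0] += lr * error * x[0]
        w[1] += lr * error * x[1]
        b += lr * error

print([predict(x) for x in inputs])          # -> [0, 0, 0, 1]
```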

Later, symbolic production systems known as cognitive architectures, such as SOAR [29, 30] and ACT-R [31], were developed based on the General Problem Solver of Newell and Simon [22]. The cognitive models built on these systems tried to mimic the symbolic (representational) level of human information processing within the constraints given by the general cognitive architecture. In the 1970s, production systems were considered to be the key to modeling cognition, whereas in the 1980s and 1990s, connectionist approaches once again gained popularity in attempts to create AI and expert systems, due to the observed limitations of production systems in creating AI. Since the 1990s, probabilistic models of human cognition based on Bayesian modeling have been replacing both the connectionist and production models among cognitive scientists, due to their ability to significantly increase representational complexity [8]. Recently, the rapid successes of deep learning systems in several application domains (e.g., [2, 23, 24]) have brought neural networks, whose pattern recognition capacity can be superior to that of humans, back into the public spotlight. This is due to advances in computing power, big data, and algorithms. However, these approaches currently operate very differently from human cognition [2, 32].

3.2. Goals

AI is a central concept in designing new intelligent technologies. However, cognitive mimetics and AI are not one and the same concept. Cognitive mimetics is one approach for designing and innovating intelligent technologies: AI can be, but need not be, based on cognitive mimetics. Analogously, not all technology design relies on biomimetics, although biomimetics has proven to be an important aid in design.

Cognitive mimetics presupposes an understanding both of how higher cognitive processes operate in the human mind and of how these processes can be imitated by computers. This problem belongs to the very core of cognitive science. At its heart is the property of multiple realizability [33], which means that cognitive processes can be realized in human minds as well as in animals and technical systems.

However, the goal of cognitive mimetics is not to slavishly imitate human cognitive processes, nor is it merely to construct devices that perform tasks exactly as effectively as people do. The goal should rather be to produce technical systems that surpass human levels of performance so that they can be of real help in improving human life. A pocket calculator that made the same number of errors as people would not serve its purpose as well as calculators do today. Yet a design is based on cognitive mimicry when the system has elements that can be identified and ideated on the grounds of human information processing.

Thus, a chess machine need not search in exactly the same way as people do. Indeed, these machines do not work in a manner similar to people: the machines consider hundreds of millions of moves, while people often generate no more than fifty mental moves. Computer chess programs are similar to people in that they use a very similar tree search process; yet they differ in their inability to distinguish between essential and inessential alternatives. The programs have to replace this selectivity with brute force. Their performance is as effective as that of any human being, and if needed, they could replace people in chess competitions. Current mainstream approaches to the design of intelligent systems rely mostly on the enormous data-crunching capacity of modern computers, machine learning, and pattern recognition in big data [2, 32, 34]. Cognitive mimetics can be a complementary approach to the design of generally intelligent autonomous systems that could better communicate, interact, and cooperate with humans. For instance, Strabala et al. [35] have shown that it is possible to model the way in which people hand over objects to each other (what, when, and where) and to utilize the procedure in improving human-robot handovers. This is an example of a task that is trivial for humans but has proven difficult to implement in robots.

3.3. Methods

Human capabilities exceed machine capabilities in certain tasks and vice versa. Thus, a critical design task for cognitive mimetics is to find the optimal division of work between the AI and the human operator (see, e.g., [36]). One of the general tasks in which the human mind is (still) supreme over the machine is finding situationally relevant information and subgoals in large amounts of dynamic situational data. Other critical tasks for cognitive mimetics include modeling optimal information sharing and control shifts in order to guarantee the highest level of situation awareness, regarding task-relevant information, for both the human cooperator/supervisor and the AI system.

In addition, domain-specific expert behaviors can be modeled in order to find situation-specific goal prioritization and goal selection rules for each task of an AI system in the domain. For instance, Soh and Demiris [37] have developed an expert model that is capable of learning a human expert’s tacit knowledge from demonstration. Their study indicates that it is both possible and useful to use algorithms to learn shared control policies by observing a human expert in the domain of smart wheelchair assistance for the disabled (i.e., how and when to assist).
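As a deliberately simplified illustration of this learning-from-demonstration idea, the sketch below clones an expert’s assist/no-assist decisions from a logged demonstration using plain logistic regression. The feature set, labels, and data are hypothetical stand-ins; Soh and Demiris’s actual method is considerably more sophisticated (e.g., online learning from streaming demonstrations).

```python
# Behavioral cloning of an expert's assist/no-assist decisions with
# logistic regression. Features and data are hypothetical stand-ins
# (e.g., distance to obstacle, heading error, user input magnitude).

import numpy as np
from sklearn.linear_model import LogisticRegression

X_demo = np.array([[0.2, 0.1, 0.9],   # close to obstacle -> expert assisted
                   [1.5, 0.0, 0.4],   # clear path        -> no assistance
                   [0.3, 0.8, 0.7],
                   [2.0, 0.1, 0.2]])
y_demo = np.array([1, 0, 1, 0])       # 1 = assist, 0 = do not assist

policy = LogisticRegression().fit(X_demo, y_demo)

# The cloned policy now decides *when* to assist in unseen situations.
new_situation = np.array([[0.25, 0.5, 0.8]])
print(policy.predict(new_situation))  # e.g., [1] -> assist
```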

The methods used in cognitive mimetics can also be based on computational modeling approaches, such as ACT-R [31], which are capable of modeling the constraints of human information processing. However, the applicable methods are not limited to existing cognitive architectures. Recent computational approaches to interaction design (e.g., [27]) try to optimize user interfaces by modeling users’ behaviors with reinforcement learning and other machine learning algorithms. Compared to the modeling of average human behavior, which has been the main focus of cognitive architectures since the 1980s (e.g., ACT-R [31]), the main goal in cognitive mimetics is to understand and model expert human behavior at such a level of detail that the behavior could be replicated by a computer. This is in line with the original idea behind cognitive modeling as introduced by Newell and Simon [22]. However, this approach necessitates understanding and modeling human error as well, as expertise is typically gained through significant experience with various, even exceptional, trial-and-error situations. Observing only perfect task behavior and imitating it by machine would lead to an unintelligent machine, incapable of adjusting its behavior to unexpected, even minor, changes in situation parameters [32]. Reinforcement learning [38] is a machine learning method that has been found highly useful in teaching machines “bounded rationality” [39] similar (or superior) to that of expert humans in a given task, with given goals, constraints, and rewards, after a large number of task simulations [40].
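The following is a minimal tabular Q-learning sketch of the reinforcement learning idea referenced above: through rewarded trial and error over many simulated episodes, the agent converges on effective behavior for a given task, goals, and rewards. The toy corridor environment and all parameter values are hypothetical.

```python
# Tabular Q-learning in a toy 1-D corridor: the agent starts at state 0
# and is rewarded for reaching the goal state. All parameters are toy values.

import random

N_STATES, GOAL = 6, 5                # states 0..5, reward at state 5
ACTIONS = [-1, +1]                   # step left or right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.1

for episode in range(500):
    s = 0
    while s != GOAL:
        # Epsilon-greedy: mostly exploit current knowledge, sometimes explore.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s_next == GOAL else 0.0
        # Standard Q-learning update toward reward plus discounted best value.
        best_next = max(Q[(s_next, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s_next

# The learned greedy policy steps right toward the goal.
print([max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(GOAL)])
# typically -> [1, 1, 1, 1, 1]
```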

The ultimate key to success in cognitive mimetics would be to create systems that are able to rapidly modify and adjust their behaviors, in a fashion similar to a human expert, upon recognizing a meaningful change in situation parameters [2, 32]. As plausible key solutions, Lake et al. [32] have suggested that generally intelligent artificial systems should have the same capacities as those of a human infant (innate or learned [34]) to

(1) build causal models of the world that support explanation and understanding (rather than mere pattern recognition),
(2) ground learning in intuitive theories of physics (e.g., the persistence and continuity of objects) and psychology (e.g., human agents having intentions, beliefs, and goals), and
(3) harness the construction of new representations through the combination of primitive elements and through learning-to-learn (i.e., learning a new task or concept can be accelerated by previous or parallel learning of other related tasks or concepts).

These capacities enable humans to rapidly acquire and generalize knowledge for novel tasks and situations.

The idea of the brain as an embodied prediction machine (e.g., [41]) may well offer cognitive science a grand unified theory of the mind. When these predictive processing models are coupled with Bayesian models of learning and inference [8], they may constitute the most promising current approach to capacities (1)-(3) above. Other recent candidates for a grand unified theory of cognition (e.g., [42]) have considered analogical thinking the core feature of human cognition (as it likewise is the core of design mimetics). How to implement these kinds of innate capacities, structures, and mechanisms in AI remains an open question, but if it is solved, the consequences for the development of AI could be immense.
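As a minimal worked example of the Bayesian updating at the heart of this view, the sketch below revises a prior belief in light of new evidence via Bayes’ rule. The hypotheses and probabilities are toy values for illustration, not a model taken from the cited works.

```python
# Bayesian belief updating: a prior over two hypotheses is revised by the
# likelihood of new evidence. Hypotheses and probabilities are toy values.

priors = {"object_behind_occluder": 0.2, "no_object": 0.8}
# Likelihood of the observation "object reappears" under each hypothesis:
likelihoods = {"object_behind_occluder": 0.9, "no_object": 0.05}

evidence = sum(priors[h] * likelihoods[h] for h in priors)
posterior = {h: priors[h] * likelihoods[h] / evidence for h in priors}

print(posterior)
# Belief shifts strongly toward "object_behind_occluder" (about 0.82),
# illustrating how a prediction error reweights the internal model.
```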

3.4. Levels

So far, the utilization of cognitive mimetics in the design of AI has been fairly limited. As reviewed earlier, biological neural networks were an early inspiration for artificial neural networks. Tree search similar to human search has been used since the 1950s in a number of AI solutions, ranging from various games to logic, fifth-generation computers, and, most recently, Google’s AlphaGo variants beating the best human experts in the game of Go [24, 40]. Reinforcement learning was key to the superhuman performance of AlphaGo Zero in the same game [40]. Reinforcement learning can be seen as highly similar to the behaviorist view of human learning, which was found significantly deficient as an explanation of human learning in the so-called cognitive revolution of the 1950s [38, 43].

There may well be many other examples of the success of cognitive mimetics in the design of intelligent technology. However, one can ask whether cognitive mimetics has been utilized sufficiently in the design of AI for achieving artificial general intelligence [2, 34]. From this point of view, Marcus [2] has argued for the necessity of hybrid AI systems, which would more closely resemble the organization of human cognition. A hybrid AI system could have various parallel subsystems, perhaps similar to deep learning networks, orchestrated by higher-level mechanisms similar to reinforcement learning, as well as by central executive processes working at an even higher, symbolic level. AlphaGo Zero [40] is a recent example of how a hybrid system (combining deep learning, reinforcement learning, and tree search) can be superior to pure deep learning systems. Yet the symbolic level of processing is still absent from systems such as AlphaGo Zero, which may limit their intelligence to certain domains (e.g., gaming).
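A schematic sketch of this hybrid organization is given below: parallel pattern-recognition subsystems feed an orchestrating controller, with rule-like symbolic decisions on top. It is an architectural illustration only, not a working AI, and all class, method, and rule names are hypothetical.

```python
# Schematic hybrid organization: parallel pattern-recognition subsystems,
# an orchestrating controller, and a symbolic executive on top.

class PatternSubsystem:
    """Stands in for a trained network (vision, audio, ...)."""
    def __init__(self, name):
        self.name = name

    def perceive(self, raw_input):
        return {"source": self.name, "features": raw_input}

class Orchestrator:
    """Stands in for a higher-level controller (e.g., trained with RL)
    that merges subsystem outputs into one world model."""
    def combine(self, percepts):
        return {p["source"]: p["features"] for p in percepts}

class SymbolicExecutive:
    """Stands in for rule-like, goal-directed reasoning at the top level."""
    def __init__(self, rules):
        self.rules = rules  # list of (condition, action) pairs

    def decide(self, world_model):
        for condition, action in self.rules:
            if condition(world_model):
                return action
        return "no-op"

subsystems = [PatternSubsystem("vision"), PatternSubsystem("audio")]
orchestrator = Orchestrator()
executive = SymbolicExecutive(
    rules=[(lambda m: "obstacle" in m.get("vision", ""), "brake")])

percepts = [s.perceive(x) for s, x in
            zip(subsystems, ["obstacle ahead", "horn sounding"])]
world_model = orchestrator.combine(percepts)
print(executive.decide(world_model))  # -> "brake"
```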

3.5. Tests of Success

There are a number of suggested means that could be applied to test the success of cognitive mimetics, of which the most famous is Turing’s test [17]. It can be used to assess whether the performance of an intelligent program is as good as that of a human being. Turing’s test does not evaluate whether a system processes information like people do; it evaluates whether the system can perform as well as people in an intellectual task. This is important when the replacement or reallocation of human work by technical systems is considered [31].

The original goal of Turing’s [17] test was to answer one question: can machines think (i.e., are machines intelligent)? The Turing test is an imitation game. The decisive criterion in these experiments is the capacity of a human interrogator to say whether the answer to a question was given by a human or a machine. If the interrogator cannot do this, the machine has passed the test. Turing argued that if machines can imitate human thinking perfectly, they are intelligent. Therefore, the outcome of the experiment is that machines can think if they can perform human tasks in such a way that it is impossible for a competent observer to tell the difference between human and machine.

Turing’s imitation game gives an explicit (behavioristic) procedure for comparing human and machine behaviors in intelligent tasks. Since the discussion on the intelligence of machines underpins much of modern cognitive science, psychology, and philosophy of mind, and is also essential in developing AI robots and autonomous systems, it makes sense to consider the true value of Turing’s test for both theoretical and practical purposes [44, 45]. Passing this test can be argued to be the ultimate goal for optimal interaction and communication between humans and an AI system in several practical domains, even if a pass would not imply strong AI in the sense of Searle [46].

However, one could ask whether it is enough for a system to pass the Turing test in a particular task in order to be as (generally) intelligent as a human. Does it matter how the system has reached this level of performance, and whether it is able to pass the test in other tasks as well (i.e., the generalizability of its skills)? The question dates back to a long-standing, but unresolved, debate between machine learning researchers (including statisticians) and linguists (including psychologists) [47].

Lake et al. [32] have recently published an extensive literature review comparing the performance of current high-end pattern recognition systems (i.e., neural networks) to human performance. They have also discussed what may be lacking in these systems that prevents them from reaching the level of human skills. They argue that, despite the biological inspiration and performance achievements, deep learning pattern recognition systems differ from human intelligence in crucial ways. They put forward strong arguments for cognitive mimetics without using the concept explicitly. Nowadays, a pattern recognition system may be taught to reach a comparable, or higher, level of performance than a human in a specific task (e.g., a video game). However, the difference in the amount of training required by a human child and by the system to achieve a comparable level of performance can be measured in hundreds or even thousands of hours. Furthermore, a child can learn and handle a small change in game dynamics easily, whereas a pattern recognition system may require full reconfiguration and a significant amount of training before again reaching a high level of performance. These observations suggest (at least) three criteria for a system to be as intelligent as a human cooperator:

(1) passing Turing’s test,
(2) a comparable level of performance with a comparable amount of training, and
(3) generalizability of the acquired skills and knowledge to other tasks.

AlphaGo Zero, as described by Silver et al. [40], may be argued to pass criterion (1) easily, and to an extent also criterion (2), in the game of Go. AlphaGo Zero has demonstrated not only human-level performance but “superhuman proficiency” in the game. In addition, Silver et al. [40] report that it achieved human-level performance without human (move) data, purely by reinforcement learning from self-play, in less than 40 hours of self-training. Inarguably, these amazing results indicate the efficiency of the reinforcement learning approach for achieving superior performance in one domain that is particularly challenging for human cognition. However, as Marcus [34] points out, it remains an open question how well and how easily AlphaGo Zero’s intelligence in this particular board game generalizes beyond gaming, or even to other types of games, such as video games. Marcus [34] further argues that, although Silver et al. [40] claim that AlphaGo Zero achieved superhuman proficiency “tabula rasa,” without any knowledge of human Go games and moves, a critical aspect of its success was the tree search and the reward logic of the reinforcement learning, which were built in by its human creators. These aspects are highly similar to the mechanisms human players utilize in the game of Go. For the generalizability of AI produced by cognitive mimetics, it is not sufficient that the design produces seemingly intelligent behavior (i.e., passes Turing’s test); what matters is the type of processes that produce this behavior and how well the artificial intelligence generalizes across different tasks.

4. Application Example: Interactions with Autonomous Vehicles

The utility of cognitive mimetics for the design of intelligent technology can be illustrated by a practical example. Autonomous vehicles (i.e., self-driving cars) are expected to be one of the megatrends of autonomous AI technologies in the near future. However, according to an analysis by the University of Michigan Transportation Research Institute [48], an autonomous car may, unexpectedly, be statistically more likely to be involved in an accident per million miles traveled than a car with a human driver. The severity of the accidents seems to be lower, the autonomous cars were not considered to be at fault in any of the accidents, and most of the accidents were rear-end crashes (the autonomous car was hit from behind by a human driver). However, the findings seem to suggest that there could be something unexpected in the behavior of an autonomous car that leads human drivers to misinterpret its behaviors.

At intersections and junctions in particular, a common rhythm among the vehicles in a queue is highly important for the flow and safety of traffic. A major advantage of an autonomous vehicle over a human driver is its ability to detect potentially risky situations well before the human does and to react to them at a much higher intensity. The downside can be unexpectedly hard braking in situations involving a false positive detection. For instance, a bicyclist approaching a crossing may give a sign of eye contact and yielding that is readily recognizable by a human driver but not by an autonomous vehicle, which can lead to unexpected behaviors. As long as there are human pedestrians, cyclists, or drivers in traffic among the autonomous vehicles, the problem is real. The issue is even more pronounced in highly unstructured traffic environments, such as crowded city centers. In these environments, a human driver is able to claim their own space, even by aggressive gestures and other forms of human communication with fellow road users, whereas an autonomous vehicle can simply cease to move because it continuously detects the possibility of crossing objects.

A recent study by Brown and Laurier [49] demonstrates how current autopilot systems (the Tesla autopilot and the Google self-driving car) are highly inefficient at detecting the intentions of fellow human drivers and at signaling the car’s own intentions to other road users. They stress the importance of social interactions on the road: human drivers are capable of coping in traffic by communicating with fellow drivers and interpreting the subtle gestures in the movements of cars. Traffic, as long as humans are involved, is a sociotechnical system.

Full automation, replacing all human operators at once, would be the optimal, lowest-risk option in this domain, although it is impossible in practice. Sociotechnical systems in which autonomous systems are introduced to cooperate with humans at a fast pace, or in safety-critical tasks, will be high-risk environments, as imperfect human operators remain involved in the same tasks. In these contexts, turbulence in cooperation following the introduction of autonomous systems into human-operated ecosystems can be expected, partly because human operators tend to satisfice [39], whereas autonomous systems may be designed for optimal performance. Furthermore, autonomous systems are often unaware of the limitations and constraints of human behavior and human information processing [32, 50], which makes it impossible for them to take these into account in their own behavior and communications.

In the automotive context, levels two to four of the SAE J3016 [51] vehicle automation taxonomy are the challenging ones, as the responsibility for driving rests neither fully on the driver (Levels 0-1) nor fully on the vehicle (Level 5). Shared responsibility for the control of a vehicle can lead to greater problems than giving whole responsibility to the human (or machine) driver, if task-relevant information and handovers are not communicated properly, within a timeframe of a few seconds, from the machine to the human and back again. The findings of Itoh et al. [52], in a study on an assistance system for emergency collision avoidance, aptly illustrate that a human driver’s choice of direction for an avoidance maneuver can well differ from the one selected by the system. Problems can be expected if natural human tendencies and decision-making processes are not taken into account in the development of these types of assistance systems. This can happen, for instance, when a system decides to steer the vehicle in an emergency situation on behalf of the driver, while the driver can still override the system by steering in another direction.
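One way to make the few-seconds handover constraint concrete is a simple control-transition state machine: if the driver does not confirm a requested takeover within the time budget, the vehicle falls back to a minimal-risk maneuver instead of remaining in ambiguous shared control. The sketch below is a hypothetical illustration; the state names and the 4-second budget are assumptions, not SAE J3016 requirements.

```python
# A timed takeover request: if the driver does not confirm within the
# budget, fall back to a minimal-risk maneuver. State names and the
# 4-second budget are hypothetical.

from dataclasses import dataclass

@dataclass
class HandoverManager:
    time_budget: float = 4.0      # seconds allowed for the driver takeover
    elapsed: float = 0.0
    state: str = "AUTOMATED"

    def request_takeover(self):
        self.state, self.elapsed = "HANDOVER_REQUESTED", 0.0

    def tick(self, dt, driver_confirmed):
        if self.state != "HANDOVER_REQUESTED":
            return self.state
        if driver_confirmed:
            self.state = "MANUAL"              # driver has taken control
        else:
            self.elapsed += dt
            if self.elapsed >= self.time_budget:
                self.state = "MINIMAL_RISK"    # e.g., slow down, stop safely
        return self.state

manager = HandoverManager()
manager.request_takeover()
for _ in range(10):                            # driver never confirms
    state = manager.tick(0.5, driver_confirmed=False)
print(state)                                   # -> "MINIMAL_RISK"
```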

Cognitive mimetics could be utilized to solve all of these particular problems, among many other similar interaction problems in different application domains. Lake et al. [32] suggest that perfect autonomous vehicles should have an intuitive psychology similar to that of humans and use this psychological reasoning to enable fluent cooperation in traffic with human codrivers. They argue that this kind of reasoning would be especially valuable in unexpected, challenging, and novel driving circumstances for which little relevant training data is available, for instance, when navigating through highly unstructured construction zones. Another great research question for the near future is how much intelligent technologies require mimicry and understanding of human emotions in order to interact fluently with humans.

In a similar fashion, yet on a more technical level, the classification performance of autonomous vehicles’ machine vision systems could be improved by incorporating a form of intuitive physics similar to that of humans (as discussed briefly earlier) for improved object recognition in unexpected conditions. Such conditions include, for instance, poor visibility, or objects disappearing behind other objects and suddenly reappearing. These kinds of anticipation capacities could provide the vehicle with a human-like ability to “see beyond the lead car.” Even if autonomous vehicles already outperform human drivers in a great many ways, people expect an autonomous car to rapidly recognize, for example, a tractor-trailer that is pulling in front of the vehicle. This looming effect is immediately recognizable even to a human infant. Even if autonomous vehicles may statistically decrease the overall accident risk, an autonomous vehicle should not perform worse than a human driver in any safety-critical subtask. The exact mechanisms by which AI could be given intuitive psychology and intuitive physics engines similar to those of humans are still unclear. Yet, as argued, these are important topics of study. There is still a great need to understand and replicate how humans mentally model situations in order to solve various AI-interaction problems.
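A minimal sketch of one such anticipation capacity, object persistence, is given below: a constant-velocity predictor keeps estimating a tracked object’s position while it is occluded instead of forgetting it. Real systems would use, for example, a Kalman filter or learned dynamics; all values here are toy assumptions.

```python
# Object persistence via constant-velocity extrapolation: keep predicting
# an occluded object's position instead of forgetting it.

def track(observations, dt=0.1):
    """observations: positions in meters, None while the object is occluded."""
    estimate, velocity = None, 0.0
    history = []
    for obs in observations:
        if obs is not None:
            if estimate is not None:
                velocity = (obs - estimate) / dt   # refresh velocity estimate
            estimate = obs
        elif estimate is not None:
            estimate += velocity * dt              # occluded: extrapolate
        history.append(estimate)
    return history

# A lead car moving at 20 m/s disappears behind a truck and reappears:
positions = [0.0, 2.0, 4.0, None, None, None, 12.0]
print(track(positions))
# -> [0.0, 2.0, 4.0, 6.0, 8.0, 10.0, 12.0]
```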

Trafton et al. [50] provide various examples of how the ACT-R/E (ACT-R/Embodied) cognitive architecture can be utilized to give robots a better understanding of the constraints on human information processing and behavior and, thus, to enable more fluent interactions with their human cooperators. This is a two-way street: human cooperators could also be made aware of the limitations of machine thinking [53] and perhaps prepared for the unexpected behaviors of robot cooperators. As it is probable that, at least during the early stages of development, an AI system will at most be capable of animal-level communication with humans, Phillips et al. [54] have suggested human-animal interactions as an analogy for designing human-robot interactions. This is yet another example of mimicking cognitive behavior found in nature.

5. Conclusions

Design mimetics can be conceptually divided into three levels based on the source of imitation. Firstly, biomimetics focuses on the physical and structural similarity between the source of imitation and the technical solution. Secondly, sensory-motor mimetics attends to the sensory-motor processes found in nature, enabling technical solutions for perceptual and motoric tasks. Thirdly, the highest level of design mimetics relates to mimicking the higher cognitive processes of human experts in a task, that is, cognitive mimetics. The three-level conceptual model of design mimetics was introduced in order to clarify the difference between designing, for instance, a neural network (structural mimetics), machine vision (sensory-motor mimetics), and higher decision-making processes (cognitive mimetics). All of these can be design goals for an artificially intelligent system, but imitating the structure of the human visual system is far from sufficient for reaching a human level of intelligence in recognizing visual objects.

Current AI systems try to mimic intelligent human behavior in limited application areas, but in order to produce AI systems that can adapt to changes and possess generic solutions to unexpected and untrained situations, the processes behind the seemingly intelligent behaviors should better mimic the higher cognitive processes of human experts. For instance, human-to-human communication is full of unexpected and untrained situations. AI systems should manage these at a level similar to humans in order to make human-to-AI communication as fluent as human-to-human communication. These points are well known in the AI literature, but our critical point is that we should not stop working towards solving these problems if we want to achieve AI with general intelligence similar to that of human experts. Instead, we have suggested that more research should be devoted to what we have here labeled cognitive mimetics.

Here, cognitive mimetics has been introduced as a hypernym for the various design approaches that use higher (human) cognitive processes as the source of imitation in design. The explication is important in order to make this significant approach to designing intelligent systems visible and better known as a viable path to autonomous systems with artificial general intelligence. We have shown that the approach may enable and complement the development of AI solutions that can efficiently and pleasantly understand, communicate, cooperate, and interact with their fellow human cooperators.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

The authors want to thank Rebekah Rousi for proofreading the article. The work was partly funded by the DIMECC D4Value project.