Abstract

Artificial intelligence (AI) is essentially the simulation of human intelligence. Today’s AI can only simulate, replace, extend, or expand part of human intelligence. In the future, the research and development of cutting-edge technologies such as the brain-computer interface (BCI), together with the further development of the human brain, will eventually usher in a strong AI era, in which AI can simulate and replace human imagination, emotion, intuition, potential, tacit knowledge, and other kinds of personalized intelligence. Breakthroughs in algorithms, represented by cognitive computing, are promoting the continuous penetration of AI into fields such as education, commerce, and medical treatment, building up an AI service space. As to the human concern about who controls whom between humankind and intelligent machines, the answer is that AI can only become a service provider for human beings, demonstrating the value rationality of following ethics.

1. Introduction

The term “artificial intelligence” was first used by John McCarthy at the Dartmouth Conference in 1956. Since then, artificial intelligence (AI) has gone through three booms over decades of scientific and technological development. The first boom lasted from 1956 to 1976. Beginning in the 1950s, researchers successively invented the first perceptron neural network software and chat software and proved some mathematical theorems, exclaiming that the “AI era is coming” and that “robots will surpass human beings in 10 years.” During the second boom (1976–2006), the Hopfield neural network [1] and the backpropagation (BP) training algorithm proposed in the 1980s made AI popular again, leading to speech recognition, speech translation projects, and Japan’s fifth-generation computer project. However, these ideas fell through, and once data had accumulated to a certain amount, results tended to plateau, so the second boom faded as well. During the third boom (2006 till now), AI broke out again after Hinton put forward deep learning technology in 2006 and the ImageNet Competition achieved breakthroughs in image recognition in 2012. In 2016, AlphaGo defeated Lee Se-dol, a former world champion of Go, which was regarded as a peak of AI development.

Now, humans have made great progress in various fields such as cognitive psychology, neuroscience, quantum physics, and brain science, and theories related to artificial intelligence have kept emerging. Without the integrated development of computer science with brain science, neuropsychology, linguistics, and other disciplines, the research and development of AI would not have made such great achievements. AI research has also produced a series of research highlights, such as machine learning, neural networks (NN), expert systems, genetic algorithms (GA), fuzzy inference systems (FIS), support vector machines (SVM), and particle swarm optimization (PSO) [2], as shown in Figure 1. AI has been widely used in every aspect of human life, even surpassing human intelligence in some areas. AI can, to some extent, replace humans in the tasks of recognition, decision-making, and control. In terms of recognition, AI can distinguish, classify, and retrieve information. In terms of decision-making, AI can carry out numerical object evaluation and matching. Regarding control, AI can complete performance generation, design and action optimization, and operation automation.
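To make one of the listed techniques concrete, here is a minimal particle swarm optimization (PSO) sketch in Python. It minimizes a toy two-dimensional sphere function; the swarm size, inertia weight, and acceleration coefficients are common textbook defaults, not values taken from this paper.

```python
# Minimal PSO sketch: a swarm of particles searches for the minimum of a
# toy objective, each pulled toward its personal best and the swarm's best.
import random

def f(x, y):
    return x * x + y * y  # toy objective: minimum at the origin

n, w, c1, c2 = 20, 0.7, 1.5, 1.5  # swarm size, inertia, cognitive/social weights
pos = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(n)]
vel = [(0.0, 0.0)] * n
pbest = list(pos)                        # each particle's best position so far
gbest = min(pbest, key=lambda p: f(*p))  # best position found by the whole swarm

for _ in range(100):
    for i in range(n):
        r1, r2 = random.random(), random.random()
        vx = w * vel[i][0] + c1 * r1 * (pbest[i][0] - pos[i][0]) + c2 * r2 * (gbest[0] - pos[i][0])
        vy = w * vel[i][1] + c1 * r1 * (pbest[i][1] - pos[i][1]) + c2 * r2 * (gbest[1] - pos[i][1])
        vel[i] = (vx, vy)
        pos[i] = (pos[i][0] + vx, pos[i][1] + vy)
        if f(*pos[i]) < f(*pbest[i]):
            pbest[i] = pos[i]
            if f(*pos[i]) < f(*gbest):
                gbest = pos[i]

print(f"best found: ({gbest[0]:.4f}, {gbest[1]:.4f}), value: {f(*gbest):.6f}")
```

The same velocity-update rule, with a problem-specific objective in place of the sphere function, is what makes PSO useful for the optimization tasks mentioned above.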

However, scientific, technological, and social problems arising from the development of AI are drawing more attention from the public. While promoting social progress, the widespread application of AI also has some prominent negative effects. For example, the application of robots has led to unemployment [3]; the application of AI has widened the wealth gap; AI algorithms have produced bias; big data has resulted in privacy leakage and the degeneration of humans’ spiritual life. All these are social problems brought about by the application of AI technologies. In his book AI Future, Kai-fu Lee analyzed the employment risks that AI poses to manual and mental labor. For manual workers, structured jobs with weak social interaction and low skill requirements face higher risks, such as truck drivers, fast food cooks, and sewing workers. For mental workers, those engaged in jobs with weak social interaction and low creativity are in greater danger of unemployment, such as radiologists and telemarketers. Therefore, in the AI era, it is inevitable that some occupations or jobs will be replaced. If your job does not require much talent and its skills can be acquired through training, if what you do is largely repetitive work without much thinking, and if you work in a narrow social network and have little communication with others, then you are highly likely to be replaced by AI.

The deconstruction and reconstruction of the occupational structure is just one aspect of what AI has brought about. Some people have even put forward the “machine threat theory.” Understanding AI is the prerequisite for applying it. AI is in essence the simulation of human intelligence, and its development depends on breakthroughs in algorithms. Considering this relationship as well as humans’ confusion and concerns in understanding AI, this paper first analyzes the nature of AI, the relationship between AI and human consciousness, and the features and advantages of cognitive computing, and then forecasts the future development of AI.

2. The Nature and Characteristics of AI

The difference between AI and ordinary computers lies in the new forms and technical means of AI, but they share the same nature: AI is still a human tool [4]. To grasp AI accurately, we can analyze it from two aspects: hardware and software. From the hardware point of view, intelligent machines such as computers and robots are physical and chemical entities separated from human brains. Although they are not physiological structures of the human reflection and control system, they are developed from the hardware of that system. The human brain is also hardware, but it is a physiological entity rather than a merely physical or chemical one. The human brain provides the material base of the reflection and control function; in this sense, the brain entity provides the structure and thinking offers the function. Intelligent machines have physical and chemical structures, in which electrons move physically. This structure is a prerequisite for AI. The physiological structure of the human brain differs from the structure of intelligent machines, so they have no physiological or fixed hardware connections. Therefore, although intelligent machines are developed from the hardware of the human reflection and control system, their development is relatively independent. However, this does not mean that the hardware of intelligent machines can never be coupled with the human brain. On the contrary, it shows that strengthening the hardware connection between the human brain and intelligent machines is an attractive direction for intelligent development.

The intelligence of AI lies in its software. Why is AI possible? How is it related to human brain intelligence? To answer these questions, we need to carry out a software analysis. Thinking is the function of the human brain. The analysis of thinking can be carried out through the functional structure method, the hierarchical method, or the two combined. From the hierarchy perspective, thinking has several layers: the first is the layer of form, which is language; the second is the layer of content, which is the consciousness concept. These two layers constitute an inward analysis of thinking itself. From the perspective of its interaction with its applied object, thinking has another layer, i.e., its outward function layer. When these three layers are divided according to functional structure, language and the consciousness concept are the internal structures, while the functional layer carries the external function of thinking and consciousness, namely, intelligence. The consciousness concept, as the internal structure of thinking, refers to the structure of consciousness. Depending on the internal structure, reflection and control activities have corresponding manifestations at all internal levels of thinking: at the content level, they appear as the activities of various consciousnesses, and at the form level, as corresponding language activities. Without the activities at all internal layers of thinking, there would be no overall intelligence or activity. Activities at the language layer are coding activities inside the human brain, which can be carried out directly by bioelectrical movement. Therefore, when people think of specific words, there are always corresponding specific electrical signals in the brain.

The fact that thinking activities have specific manifestations at the language level shows that thinking can be formalized through language. Language is not a conscious or conceptual thing. Instead, it has a corresponding movement of electrical signals in the brain, showing the characteristics of physical movement. As far as language and electrical signals are concerned, language is the content while electrical signals are the form. Language outside the brain refers to the sounds that can be heard and the words that can be seen. The internalization of language outside the brain is completed by the brain’s thinking language, whose existence depends on the physiological, chemical, and physical movements of the brain. When a human brain is dissected, no language entity is found inside; only the corresponding brain structures and movements can be found. Therefore, the role of language inside the brain is highlighted as the coding rules of electronic movement. Only when there is a specific movement of electrical signals in the brain can there be language movement and, further, consciousness activities. These serial relationships lay the foundation for formalizing thinking activities, first as language and then as electronic movement. This is the most fundamental reason why electronic computers are able to simulate the intelligence of the human brain.

In short, AI is not independent. It falls within the scope of human intelligence and is part of it. The reason why AI belongs to human intelligence is that it is a product of human intelligence developed to a certain historical stage, a tool of the human brain, and an expansion of human intelligence [5]. All AI extends along the direction of human intelligence, and all its functions in various fields of social life remain within the scope of reflection and control. With the development of humankind, all aspects of social life today are increasingly complex, which makes it more and more difficult for the human brain to exercise direct adjustment and control, and increasingly impossible for mankind to meet the requirements in both magnitude and precision. As a result, the activities in many fields of society cannot be regulated and controlled by human intelligence alone. Of course, this limitation of the human brain is not one of quality but of quantity. Therefore, the development of human intelligence can be achieved by expanding the brain in a certain way, so as to meet the requirements of social life.

3. AI and Human Consciousness

In the coming years, machines will get smarter. If we cannot distinguish a machine from a human, then we have reason to think that this machine is intelligent [6]. Therefore, the question we are going to face is as follows: can an intelligent machine be considered to have consciousness? This requires us to understand the relationship between AI and human consciousness.

3.1. AI Promotes the Development of Human Intelligence

As a necessary supplement to human intelligence, AI effectively extends the human brain and enlarges its intelligence. AI and the human brain are correlated and have been supporting each other’s advance. Together, they continuously expand the scope of human cognition toward both the micro and macro poles, enable people to gain a deeper, if indirect, understanding of the essence of things, and greatly enrich the content of consciousness.

AI, which simulates human operational intelligence, is far superior to human beings in computing speed, capacity, and accuracy. It can indeed liberate mental labor. With the support of the Internet and big data technology, AI will help humans in more fields and more profoundly, even conducting rescue operations in extreme environments. In the field of medical practice, brain stimulation is helpful for restoring damaged brain nerves. In terms of transportation, with the application of data, connectivity, real-time sensing, and traffic prediction, humans are experiencing ride sharing and automated driving for the first time. The revolution of the third-generation culture carrier, represented by AI, will promote great changes in human memory and learning styles. AI is our brain-assist device, which stores a large amount of information intact. The undertaker of memory and thinking is gradually separating from the human body and tends to become objectified. Portable computers have replicated what we call cognition, and even human rationality faces challenges. However, human beings can use their dynamic intuition and exercise their innovative abilities [5]. The printing carrier once ushered in a period of flourishing human culture; now, we should keep an open mind about the role of AI in promoting human consciousness. AI technology has put forward new requirements for humans’ data observation and processing abilities. Workers who receive information technology (IT) vocational training will adapt better to the changes, so that we can be one step ahead in the transformation to an intelligence-intensive society. With the help of intelligent machines, humans could become new, creative, and reliable cognitive subjects.

3.2. Human Consciousness Restricts the Development of AI

The nature of consciousness affects the development of AI. Consciousness can be introspective, which reveals what cannot be reached by objective research on consciousness. Human consciousness is not a passive or negative reflection of reality; instead, it is a positive and active one. When determining the behavior of the subject, external experience must be reflected through the inner world as well as the thinking and feeling system of the subject. So-called animal consciousness is an untested claim, because animals cannot distinguish themselves from their activities; they form an integral whole. The same is true of artificial consciousness. Although AI can complete part of human thinking activities, it does not understand the meaning of doing so. It operates mechanically and aimlessly. Even if AI has a purpose, that purpose is instilled by humans to achieve human goals. After 70 years, the movement of logical functionalism ended dismally, while the structuralism of consciousness points out a new direction for AI. Structuralism has successively experienced the semantic network and the neural network. The latter argues that connections among things in the world are all the same and that the differences lie only in their frequencies of occurrence. The neural network cannot distinguish “White” as a name from “white” as a color. This kind of AI is, in essence, a program or function that makes similar reflex responses to specific stimuli. AlphaGo is weak AI, and programming is not an effective way to achieve machine consciousness.

The change and development of consciousness bring about corresponding changes in AI. Under the consciousness theory of a subject-object dichotomy, abstract operational rules promoted the research and development of algorithms. This theory argues that human consciousness can be simply summarized as the brain’s symbolic operation, while the characteristics of consciousness in intuition, common sense, and the external environment are ignored. In fact, the development level of AI is indeed related to the development of human consciousness. Behaviorism and structuralism, which resemble the human brain’s neural network, are simulations of human adaptive mechanisms and are not restricted by forms. Subject-object integration, a philosophical trend focusing on interaction with the real world, provides ideological inspiration for AI. Human consciousness theory criticizes principles of pure form. Deep learning, represented by AlphaGo, has gotten rid of structural restrictions and acquired problem-solving strategies by learning from human experience. In 2016, by adopting Monte-Carlo Tree Search (MCTS) and deep learning (DL) to consider the whole board and make the optimal choice, AlphaGo, a program developed by DeepMind, won the Go match, escaping the restrictions of the brute-force method [7]. When embodiment philosophy arose, AI began to imitate the human body’s movements and gestures, such as simulating the rules of facial movement. To some extent, the fact that AI began to shift its focus to the human body and the external environment is inseparable from the exposition of consciousness by phenomenologists [8]. We can tell that although the philosophical theory of consciousness cannot directly improve the technical essence of AI, its development and change will provide foresight for the exploration of AI.
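To make the MCTS component concrete, the sketch below applies the four classic MCTS phases (selection via the UCT formula, expansion, random simulation, and backpropagation) to a toy one-pile game of Nim. This is only an illustration of the search technique named above, not AlphaGo’s actual implementation; the game, class names, and constants are our own assumptions.

```python
# Toy MCTS on one-pile Nim: players alternately take 1-3 stones, and the
# player who takes the last stone wins.
import math
import random

def legal_moves(stones):
    return [n for n in (1, 2, 3) if n <= stones]

class Node:
    def __init__(self, stones, player, parent=None, move=None):
        self.stones, self.player = stones, player  # `player` is the side to move
        self.parent, self.move = parent, move
        self.children, self.wins, self.visits = [], 0, 0
        self.untried = legal_moves(stones)

    def uct_child(self, c=1.4):
        # Selection policy: trade off win rate (exploitation)
        # against uncertainty (exploration).
        return max(self.children,
                   key=lambda ch: ch.wins / ch.visits
                   + c * math.sqrt(math.log(self.visits) / ch.visits))

def rollout(stones, player):
    # Simulation: random play to the end; whoever takes the last stone wins.
    while True:
        stones -= random.choice(legal_moves(stones))
        if stones == 0:
            return player
        player = 1 - player

def mcts(stones, player, iterations=3000):
    root = Node(stones, player)
    for _ in range(iterations):
        node = root
        while not node.untried and node.children:          # 1. selection
            node = node.uct_child()
        if node.untried:                                   # 2. expansion
            move = node.untried.pop()
            child = Node(node.stones - move, 1 - node.player, parent=node, move=move)
            node.children.append(child)
            node = child
        winner = (node.parent.player if node.stones == 0   # 3. simulation
                  else rollout(node.stones, node.player))
        while node is not None:                            # 4. backpropagation
            node.visits += 1
            if node.parent is not None and node.parent.player == winner:
                node.wins += 1                             # win from the mover's view
            node = node.parent
    return max(root.children, key=lambda ch: ch.visits).move

print(mcts(stones=10, player=0))  # from 10 stones the winning move is to take 2
```

Roughly speaking, AlphaGo replaces the random rollout with evaluations from a trained value network and biases the selection step with policy-network priors, which is what lets the search “consider the whole board” efficiently rather than by brute force.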

4. AI and Cognitive Computing Technology

AI is a broad concept. From the perspective of ultimate goals, cognitive computing is an important way to realize AI. Cognitive computing refers to the cognition and effective expression of the internal meaning of the objective world, as well as of the various pieces of information and data that can currently be observed and measured. It is the expression of AI oriented toward specific problems to be solved.

4.1. The Concept of Cognitive Computing

Cognitive computing is a technique that enables humans to cooperate with machines. The term comes from cognitive science and artificial intelligence. It builds algorithms on theories of cognitive science to simulate humans’ objective cognition and psychological cognition processes, so as to enable machines to reach a certain degree of “brain-like” cognitive intelligence [9]. Cognitive computing uses technology and algorithms to automatically extract concepts and relationships from data, understand their meanings, learn independently from data patterns and prior experience, and ultimately extend what people or machines could do on their own. On this basis, Roma further put forward three main applications of cognitive computing: robotic and cognitive automation, which automates repeatable tasks to improve efficiency, quality, and accuracy; cognitive insights, which uncover hidden patterns and relationships to identify new opportunities for innovation; and cognitive engagement, which drives customer actions by delivering hyper-personalization at scale [10]. Cognitive computing is a synthesis of technologies, each of which contributes a distinct methodology for addressing problems in its domain. Artificial neural networks (ANNs) use the interactions of biological neurons as a model for pattern recognition, decision-making, modeling, and forecasting. Fuzzy logic uses approximate information in a manner similar to the human decision process and is useful in control and decision-making applications. Evolutionary computation adopts natural selection and evolution theory and is useful in optimization. Cognitive computing provides an effective way to analyze technological processes and human activities [11].
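As a concrete illustration of how fuzzy logic works with approximate information, the following minimal sketch implements a hypothetical fan-speed controller: a crisp temperature is fuzzified into overlapping “cold/warm/hot” memberships, three invented rules fire in proportion to those memberships, and a weighted average (a simple Sugeno-style defuzzification) yields a crisp output. All membership ranges and rule outputs are assumptions made for illustration.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fan_speed(temp_c):
    # Fuzzification: degrees to which the temperature is cold/warm/hot.
    cold = tri(temp_c, -10, 0, 18)
    warm = tri(temp_c, 10, 21, 30)
    hot = tri(temp_c, 25, 35, 60)
    # Rule base: cold -> slow (20%), warm -> medium (50%), hot -> fast (90%).
    rules = [(cold, 20.0), (warm, 50.0), (hot, 90.0)]
    # Defuzzification: weighted average of the fired rules.
    total = sum(weight for weight, _ in rules)
    if total == 0:
        return 0.0
    return sum(weight * out for weight, out in rules) / total

print(fan_speed(28))  # partially "warm" and "hot": a speed between 50 and 90
```

The point of the example is that 28°C is neither fully “warm” nor fully “hot,” yet the controller still produces a graded, human-like decision instead of a brittle threshold.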

Based on the above concepts, cognitive computing can be simply understood as a technical field that integrates multiple technologies and aims to use artificial mechanisms based on computing technology to realize the human cognitive function. It is the core technical field of cognitive science. In essence, cognitive computing is expected to understand the internal relationships among various kinds of data and phenomena in the real world through technologies, such as AI, pattern recognition, and machine learning, and further develop tools and systems to improve productivity, protect the environment, and contribute to social governance.

4.2. Characteristics and Advantages of Cognitive Computing

After the eras of tabulating computing and programmable computing, the era of cognitive computing is now coming. Generally speaking, cognitive computing has a wide range of applications, including participation, decision-making, discovery, and other aspects, centering on the improvement of humans’ “cognitive” ability. Leslie G. Valiant of Harvard University held that, compared with other approaches, cognitive computing has three main characteristics: each act of memorization, learning, or recall is an algorithmically simple process executed on a network laden with previously acquired information; the system learns continuously as a background activity; and in more complex cognitive processes, such as analyzing complex scenes or reasoning, the internal computations have an important time domain, and state information needs to be retained [12]. The cognitive computing system has a strong comprehension ability: through natural language comprehension technology and its superior ability to process structured and unstructured data, it can interact with users in various industries and then understand and respond to their problems. It has an intelligent logical thinking ability: it can reveal insights, patterns, and relations from data and hypotheses and connect scattered pieces of knowledge for reasoning, analysis, comparison, induction, summarization, and demonstration, obtaining deep insights and evidence for decision-making. It has an excellent learning ability: through evidence-based learning, it can rapidly extract key information from big data and learn like a human, and it can gain feedback through expert training and experiential learning during interaction to optimize its models and make improvements. In addition, a cognitive computing system has an elaborate personalized analysis ability: using text analysis and psycholinguistic models, it can conduct in-depth analysis of massive social media and business data, grasp users’ personalities, and portray individuals in an all-around way. Such a system is not a simple collection of all these technologies. Instead, it integrates them in an unprecedented way, profoundly changing the means and efficiency of solving business problems.

Compared with previous computing paradigms, cognitive computing has significant characteristics in adaptability, interactivity, iteration, and context sensing. It can perceive the surrounding environment and context and adapt itself accordingly. Cognitive computing requires dynamic programming and must understand, identify, and extract contextual elements such as connotation, grammar, time, location, regulation, user profile, process, tasks, and targets. It may draw on multiple information sources, including structured and unstructured digital information, as well as sensory inputs such as vision, gesture, hearing, or sensor data. Cognitive computing also has a “memory” function and is able to operate iteratively: the system must be able to remember previous interactive information so as to reason rationally and aid decision-making through the superposition of information and semantics. For instance, as a digital medical aid, when a user reports “chest distress or insomnia” at 1:00 a.m., the aid must “recognize” the current time and the user’s situation, make comprehensive judgments combining the user’s previous conditions, and offer a reasonable suggestion.
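A minimal sketch of this “remember, then contextualize” behavior follows, assuming a hypothetical medical-aid assistant. The rule logic, thresholds, and advice strings are invented purely to show how remembered history and a context element (time of day) can be combined; a real system would replace them with learned models.

```python
# Hypothetical medical-aid assistant: keeps a history of past interactions
# and combines it with a context element (time of day) to form advice.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Assistant:
    history: list = field(default_factory=list)  # remembered past interactions

    def respond(self, symptom: str, when: datetime) -> str:
        self.history.append((when, symptom))
        late_night = when.hour < 5  # context element: time of day
        recurring = sum(1 for _, s in self.history if s == symptom) > 1
        if symptom == "chest distress" and late_night:
            return "Persistent night-time chest distress warrants prompt medical attention."
        if recurring:
            return f"'{symptom}' has come up before; consider discussing it with a doctor."
        return "Noted. I will keep this in mind for future advice."

aid = Assistant()
print(aid.respond("insomnia", datetime(2024, 5, 1, 1, 0)))
print(aid.respond("insomnia", datetime(2024, 5, 3, 1, 0)))  # uses remembered history
```

The second call produces different advice from the first only because the assistant remembered the earlier interaction, which is exactly the iterative, context-sensing behavior the paragraph describes.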

At present, researchers recognize four key technologies of cognitive computing: first, at the top layer, machine learning, natural language understanding, and human-computer interaction techniques; second, big data technologies, including how to store, organize, manage, and analyze big data; third, computer architecture (the computing power required by a cognitive system far exceeds what we can provide today, so designing a data-centered system is also a present challenge); and fourth, at the bottom layer, the required breakthroughs in atomic and nanotechnologies [13]. Cognitive computing has two main tasks: one is to study and simulate humans’ understanding of the objective world through computers; the other is to take the cognition of information and data, and the discovery of their value, as the main goal. Compared with AI in general, the research of cognitive computing is deeper and more specific. “Deeper” means that it studies not only the simulation of human brain behavior but also the understanding of the operating laws of the objective world and of the internal laws and external expressions of the data generated in it; “more specific” means that it has more direct expressions in applied business areas and can offer direct decision-making suggestions to corporate leaders.

5. Forecast of Future Development of AI

What kind of role will AI play in the future? Professor Jiang from Stanford University said, “I hope young people can understand the difference between the working mode of AI and that of humans, and then develop the abilities that can distinguish them from AI” [14]. From the perspective of core technologies, breakthroughs at three levels are expected to advance the further development of AI: breakthroughs in platforms, algorithms, and interfaces will enable AI to achieve leapfrog development. Building an intelligent platform that can serve various enterprises and meet different demands will be a major trend of future technological development.

5.1. Making Innovations in Intelligent Technologies and Realizing the Reproduction of Humans’ Personalized Intelligence

At present, AI is still in the “weak AI” phase and can only simulate, extend, and expand low-end human intelligence, namely, humans’ feelings, perceptions, and conventional, programmed logical reasoning. As for humans’ high-end creative intelligence, such as imagination, intuition, and potential, and unconventional, nonprogrammed personalized intelligence, such as the tacit knowledge, experience, and skills that can only be expressed through behavior, “weak AI” is not able to simulate them, let alone extend or expand them. This high-end, unconventional intelligence can only be simulated, extended, and expanded once we enter the “strong AI” phase.

The development of AI is, to varying degrees, based on research in brain science. The inexplicable “black box” character of its operation is closely related to the fact that brain science has not yet fully grasped the operating rules and mechanisms of human brain intelligence; cracking AI’s “black box” therefore depends on the further development of brain science. Will AI surpass man’s biological intelligence in the future? This kind of judgment presupposes that machine intelligence has great development potential, but it ignores the fact that, as brain science has revealed, human brain intelligence is also far from fully exploited and released and likewise enjoys great potential. The proportion of human intelligence that AI can simulate or extend is just the tip of the iceberg above the water, comprising only conventional, logical, explicit, and universal consciousness and intelligence. The vast body of unconventional, illogical, and personalized consciousness hidden below the water is still difficult to simulate or extend. As pointed out by the “iceberg theory” of the psychoanalyst Freud, the human psychological structure is composed of consciousness, preconsciousness, and subconsciousness; consciousness is only the tip of the iceberg above the water, while subconsciousness occupies most of the psychological structure and hides below the surface [15]. This is mainly because humans have little cognition and development of this part of consciousness and intelligence, let alone its artificial simulation. Of course, with the continuous development of the brain, human potential, and AI itself, it is possible for AI to simulate and extend humans’ personalized and fuzzy consciousness and intelligence. The American futurist Ray Kurzweil once predicted that 2045 will be a time of profound and divisive transformation: “abiotic intelligence in this year will be a billion times the wisdom of all human beings today” [16].

How can strong AI be realized? This requires the “brain-computer interface” (BCI), a cutting-edge research area that studies how to establish a direct connection between human or animal brains and external devices, translate consciousness in real time, and ultimately transfer and download thoughts freely among humans or between humans and machines. Astonishingly, Neuralink, a brain-computer interface company founded by Elon Musk, released a groundbreaking “brain-machine interface” technology that uses threads 4 to 6 μm thick, less than a tenth the thickness of a hair, to transmit brain signals fetched by chips. Tiny electronic devices are implanted in the brain so that thoughts can be transmitted through wireless devices and even interact with iPhone apps. Brain-computer interface technology can realize four functions. First, machines can be manipulated through the mind, replacing some functions of the human body and repairing the physical defects of persons with disabilities; anything can be controlled through the mind. Second, brain operation can be improved through BCIs, making us feel as if we have just had a good sleep: energized, focused, and quick-witted, so that we can work soberly and efficiently. Third, through the brain-computer interface, we can acquire a great deal of knowledge and skills in a short time and even acquire superpowers that ordinary humans cannot possess; in 2014, ABM, an American company, trained testers through an EEG brain-computer interface, making novices learn 2.3 times faster than before. Fourth, with a brain-computer interface, human beings can communicate without language, relying only on neural signals in the brain, thus realizing “lossless” brain information transmission. Musk boldly envisions that human-computer combination can help realize faster and more accurate communication and that this nonverbal communication is better. However, we believe that communication is a basic human behavior and the basis of human cooperation; in a virtual digital space, human language will evolve rather than be replaced entirely. The future development of AI is to enhance, not to replace, the overall intelligence of human beings and to promote the complementation of AI and human intelligence, giving play to their respective advantages to realize the “coevolution” of humans and AI machines.

5.2. Making Breakthroughs in Specialized Algorithms and Building Intelligence Service Space

AI is expected to soon obtain an almost infinite information storage space, quantum computing power 100 or even 10,000 times that of humans, and breakthroughs in various specialized algorithms. Yet even an AI system equipped with the most advanced computing platform available today cannot be regarded as truly intelligent without effective algorithms; it would be like a person with well-developed limbs but a feeble mind. The enhancement of algorithmic capability will further promote continuous breakthroughs in AI. In many consumption scenarios, people’s demand for personalized experience is increasing, and personalized, scene-based services will gradually become the main direction of AI-driven innovation. With the help of the Internet, a professional-level knowledge base together with program settings is expected to “answer” most professional or scientific questions in the near future, with professional abilities equivalent to those of senior doctors, architects, engineers, math professors, and so on.

In the field of education, AI can become an important driver of student education. It has unique advantages in creating large-scale personalized learning environments and building smart campuses. With the deep integration of education with cloud computing, big data, VR/AR, and other technologies, the application of AI in education has infinite potential and possibility [17]. At present, BCI technology is applied to measure students’ implicit data, including their learning status, attention level, cognitive load, and learning style [18, 19]. Brain measurement can reveal the brain differences among students and enhance our understanding of learning [20]. In one study, based on attention data collected while different learners watched, listened to, read, wrote on, or operated learning materials, an SVM learning algorithm was used to identify their learning styles; its average recognition rate was 75.8%, and the highest single-run accuracy was 83.3% [21]. In the future, cognitive computing will mainly be used to customize personalized learning assistants for students to improve their overall learning experience. By collecting and analyzing students’ learning data, AI gradually outlines each student’s learning style and characteristics and then automatically adjusts teaching content, methods, and pace, so that students can access the education most suitable for them. Not only can an intelligent learning system easily provide personalized teaching for students, but real-time data and automatic analysis also provide teachers with a wealth of information, giving them a deeper understanding of each student. Meanwhile, data can guide teachers to constantly improve their teaching content and make more accurate teaching plans based on each student’s problems, so that their teaching becomes more targeted. However, it should be noted that everything will become different in the future; only the ability to solve new problems remains unchanged [21]. Ninety-eight percent of our genetic make-up is the same as that of chimpanzees, but our language, values, artistic expression, understanding of science, and research into technologies make us distinctive. This is the result of creativity, which is credited with most of the interesting, important, and humanized things. Creative people tend to be more independent in thinking and action; they tend to be imaginative, curious, and willing to try and take risks. In its matches with the Go champion Ke Jie, AlphaGo was like a martial arts master who could absorb the opponent’s power: each game record and match could be an effective source driving its growth. It was so powerful that even one of the smartest humans felt “cold.” Moreover, robots never get angry or tired, so it is impossible for humans to win the competition with machines. Therefore, “we cannot just instill our children with knowledge, because machines can learn faster” [22].
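The learning-style classification described above can be sketched with a standard SVM pipeline. The example below is not the cited study’s actual pipeline: the three synthetic “attention” features, the three placeholder style labels, and all parameter choices are assumptions made so that the snippet runs self-contained.

```python
# Sketch of SVM-based learning-style classification with scikit-learn.
# Synthetic data stands in for the study's real per-activity attention features.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 120
# Hypothetical features: mean attention while watching, listening, and reading.
X = rng.normal(size=(n, 3))
y = rng.integers(0, 3, size=n)  # placeholder labels: three learning styles
X[np.arange(n), y] += 1.5       # make each style attend more to "its" channel

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean cross-validated accuracy: {scores.mean():.3f}")
```

Cross-validated accuracy, as reported here, is the same kind of figure as the 75.8% average recognition rate quoted above, though the numbers are not comparable because the data here are synthetic.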

In health care, many companies are developing AI technologies for the medical field, with classification and identification capabilities as the primary goals of these systems. AI can be very helpful in identifying carcinogens because it can help inspectors interpret countless documents in less time. In addition, it can evaluate relevant patient information, go through all the medical records thoroughly, find clues to what may be causing a patient’s problems, and assist diagnosis. By analyzing patients’ medical histories against a large body of medical textbooks and related data, AI can provide diagnostic grounds for doctors and may sometimes offer diagnoses that doctors have never considered or do not even know of. In business, combined with big data and algorithms that can meet customers’ demands, AI can play an important role in economic decision-making: companies can use AI to incorporate various risk factors into their decision-making processes and then obtain effective suggestions on investment or on the siting of branches. In finance, AI promotes the rise of humanized fin-tech, which banks and financial institutions can widely adopt to serve growing numbers of customers more effectively; besides automatically performing back-end and administrative tasks, it can also take the initiative in customer-facing activities. In manufacturing, AI can assist designers in completing product designs; ideally, it can largely make up for the shortage of medium and high-end designers, greatly improving the industry’s product design capabilities. Meanwhile, by mining and studying large amounts of production and supply chain data, AI is expected to help optimize the allocation of resources and improve enterprise efficiency. In an ideal situation, AI can provide enterprises with whole-process support covering product design, raw material procurement planning and distribution, production and manufacturing, and the collection and analysis of user feedback data, so as to promote the transformation and upgrading of China’s manufacturing industry.

5.3. Demonstrating Value Rationality and Avoiding Ethical Risks

Understanding the trend of the technological and ethical risks of AI is part of the research on AI’s development trend, and how to effectively avoid these risks is also a very active research topic in the field of AI.

In various AI application scenarios such as data analysis, content recommendation, and face recognition, human identity and behavior are directly involved and affected, so the harm and negative impact of abusing the related technologies will be far greater than those of traditional network and digital technologies [23]. Specifically, there is algorithmic bias in AI applications. An algorithm is essentially an objective mathematical expression, but modeling and data input are completed by humans; in this process, humans’ inherent biases and discrimination will affect the algorithm’s decision-making and thus lead to ethical problems such as algorithmic bias. There are also social-ethical problems with AI. The application of AI in human life makes the relationship between humans and machines increasingly complicated, leading to a series of social-ethical problems, the most basic of which is whether AI can replace human work. From the perspective of operational effectiveness and economic efficiency, a large number of white-collar and blue-collar jobs will be taken over by AI in the future, which will inevitably lead to large-scale technological unemployment and, to a certain extent, to antagonism between humans and AI. There are, further, ethical problems of responsibility and emotion. The widespread application of AI makes it difficult to identify the liable subject in many events or cases. For example, when a driverless automobile brakes suddenly in an emergency, each step taken by the AI is stipulated by algorithms, which are set by human beings; ethical problems are therefore bound to arise in judging the liable subject in such scenarios. If we stick to a position of optimism and pragmatism without introspection, the potential threats of AI to society and individuals will be overlooked or ignored, and the ethical risks will be neither prevented in advance nor corrected afterward. This would not only lead to serious consequences but also destroy the trust of the whole society in AI, bringing huge risks and losses to the institutions and enterprises within the innovation ecosystem.

In order to standardize the development of AI, the UK Government Office for Science published the report “Artificial Intelligence: Opportunities and Implications for the Future of Decision Making” in 2016, which pointed out the serious consequences of algorithmic bias, nontransparency, and improper accountability and stressed that the further development of AI should be premised on enabling innovation, building trust among citizens, establishing a stable environment, and fostering appropriate access to the necessary data [24]. In December 2016, the Institute of Electrical and Electronics Engineers released “Ethically Aligned Design: A Vision for Prioritizing Human Wellbeing with Artificial Intelligence and Autonomous Systems, First Edition,” which suggested that “human benefit,” “responsibility,” “transparency,” and “education and awareness” should be the general principles in the research and development of products [25]. In June 2019, the “G20 AI Principles” were issued in the communique of the G20 Ministerial Meeting on Trade and Digital Economy, making it clear that AI systems should be “stable and secure” throughout their life cycle and putting forward five principles for the responsible stewardship of trustworthy AI [26]. In the same month, the “Governance Principles for the New Generation Artificial Intelligence: Developing Responsible Artificial Intelligence” was released by the National Governance Committee for the New Generation Artificial Intelligence of China, proposing eight principles: “harmony and human-friendly,” “fairness and justice,” “inclusion and sharing,” “respect for privacy,” “safety and controllability,” “shared responsibility,” “openness and collaboration,” and “agile governance” [27].

Most of the above documents consider the possible impact of AI applications on society from the perspective of national development strategy. Science is a powerful instrument; how it is used, and whether it is a blessing or a curse to mankind, depends on mankind and not on the instrument [28]. Generally speaking, the ethical norms of AI aim to equip AI with a “good core” (conscientiousness). This means that, unlike for other technologies, the ethical research of AI should center on the “machine’s core” and “human conscientiousness.” The “machine’s core” mainly refers to the moral algorithms of AI, which aim to instill “good ethics” into AI so as to generate moral AI or machines. “Human conscientiousness” mainly refers to the design and application ethics of AI, which aim to ensure that the developers and users of AI have “conscientiousness,” that the design of AI conforms to morality and avoids malicious design, and that AI is used rationally to benefit human society. Whether AI is a “blessing” or a “curse” to mankind depends on what values humans hold. Only by highlighting value rationality in the design of AI and avoiding the distortion of its development can we realize the sustainable development of human beings and finally achieve human liberation.

6. Conclusion

Based on the above analysis, the following conclusions are drawn:

(1) In the future, human-computer combination can help realize faster and more accurate communication, and this nonverbal communication may be better. However, we believe that communication is a basic human behavior and the basis of human cooperation; in a virtual digital space, human language will evolve rather than be replaced entirely. The future development of AI is to enhance, not to replace, the overall intelligence of human beings and to promote the complementation of AI and human intelligence, giving play to their respective advantages to realize the “coevolution” of humans and AI machines. Breakthroughs in algorithms, represented by cognitive computing, promote the continuous penetration of AI into fields such as education, commerce, and medical treatment, building up an AI service space. As to the human concern about who controls whom between humankind and intelligent machines, the answer is that AI can only become a service provider for human beings, demonstrating the value rationality of following ethics.

(2) If we view the emergence of AI within the historical process of humans inventing, manufacturing, and using tools, we will find that AI is an intelligent tool. Though it differs qualitatively from previous physical tools, they share some characteristics grounded in their common nature as tools. In the competition between man and AI, AlphaGo’s victory was in fact not the victory of intelligent robots but the victory of many present and past Go experts. Therefore, we cannot simply conclude that the intelligence of robots is higher than that of humans or that robots can replace or dominate humans. We should view AI from the history of humans inventing and using tools and from the relationship between humans and tools. From the perspective of tool theory, the earliest tools of mankind were stone implements made by our primitive ancestors. After entering the agricultural civilization, farmers invented, manufactured, and used various farm tools. After entering the industrial civilization, engineers invented and manufactured a variety of automated machines. Stone implements, farm tools, and machines are all replacements, extensions, and expansions of human physical ability, and every qualitative leap of physical tools has been a huge liberation of human physical capacity, improving productivity and promoting social development and progress. In the postindustrial civilization era, most of the work originally done by human physical strength has been transformed and taken over by artificial replacements. The birth of AI is another revolution in the history of human tools: AI is not only automated but also intelligent. It can replace, extend, and expand not only most human physical abilities but also part of human intelligence, freeing humans not only from burdensome, tedious physical labor but also from part of mental labor, once again rapidly improving productivity and promoting human development and social progress.

(3) Intellectualization is the inexorable trend of the future, whether seen from the maturity of the technology or from the development of human intelligence. As Kurzweil said, we can neither prevent the acceleration of change nor prevent AI from surpassing humans in various fields [17]. The most unique feature of AI technology is that it can endow “machines” with “intelligence.” Until now, technologies were invented by “human intelligence”; from now on, technologies can also be invented by “machine intelligence.” An important mark of the coming AI singularity is “producing intelligence with machine intelligence” [29]. By 2050, AI will be infinitely close to human intelligence. However, mankind “still has the right to determine future technologies and life.” In other words, many advanced human abilities are irreplaceable by AI, and AI can only be a service provider for human beings. Although at the partial and individual level AI can replace, expand, and surpass humans, and may even seem to stand as a subject that dominates and controls individuals, with humans serving it, from the perspective of mankind as a whole, AI is an electronic machine, not an animate individual; it has no life form or life activity, and no independent needs, attributes, nature, consciousness, or social behavior like human beings. Therefore, it is impossible for AI to become a subject like a human. On the contrary, it can only be a machine used by people, a tool and accessory in human production and life. If we rely excessively on AI, we will develop a tendency toward dependence, and this tendency (for example, believing that the data provided by advanced instruments are the most perfect and reliable) will not only make people lose their critical thinking skills but may also cause irreversible mistakes or disasters. Therefore, for human beings and society, many key abilities, such as the ability to promote social and human progress, can only rely on ourselves.

In human history, all major technological revolutions have brought shocks to humankind and even to the whole of society. The invention of ironware and the beginning of traditional agricultural society sparked wars in the cold-weapon era; in China, such wars lasted from the Spring and Autumn and Warring States periods to the Han Dynasty. The invention of electricity and the rise of industrial society brought on World War I and World War II, staining the first half of the 20th century with blood. Now, while delivering technological benefits, the advancement of technologies such as AI, nanotechnology, brain-computer interfaces, and biotechnology will also set off a new storm of social change. As Charles Dickens wrote in A Tale of Two Cities, “it was the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness, it was the epoch of belief, it was the epoch of incredulity, it was the season of Light, it was the season of Darkness, it was the spring of hope, it was the winter of despair, we had everything before us, we had nothing before us, we were all going direct to Heaven, we were all going direct the other way” [30].

Data Availability

No data were used to support this study.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.