Abstract

An autonomous household robot passed a self-awareness test in 2015, suggesting that the cognitive capabilities of robots are heading towards those of humans. While this is a milestone in AI, it raises questions about legal implications. If robots are progressively developing cognition, it is important to discuss whether they are entitled to justice pursuant to conventional notions of human rights. This paper offers a comprehensive discussion of this complex question through cross-disciplinary scholarly sources from computer science, ethics, and law. The computer science perspective dissects the hardware and software of robots to determine whether human behavior can be faithfully replicated. The ethics perspective draws on insights from robot ethics scholars to help decide whether robots can act morally enough to be endowed with human rights. The legal perspective provides an in-depth discussion of human rights with an emphasis on eligibility. The article concludes with recommendations, including open research issues.

1. Introduction

As technological advancements in the field of artificial intelligence continue to progress, so do the ethical and practical implications of incorporating such advanced robots into society. A new wave of silicon-based beings, namely robots, could promote countless scientific advances, as their abilities are engineered to extend beyond those of carbon-based beings, that is, humans. For example, during the COVID-19 pandemic, robots in the medical field adopted the duties of medical professionals in an effort to provide medical assistance to those in need while safeguarding humans from the risks associated with COVID-19 exposure. This raises the controversial question of whether such cognitively proficient beings could potentially be endowed with human rights to facilitate their survival and protect them so as to help them function better. Yet, the extent of scientific advancement is not reason enough to delegate human rights to robots: this idea should be examined through a cross-disciplinary approach. Cross-disciplinary studies facilitate the analysis of complex questions using insights from various disciplines [1]. This paper offers a detailed review of the possibility of bestowing human rights upon robots by investigating scholarly sources in three disciplines: computer science, ethics, and law.

Computer science scholars investigate the extent to which robots are capable of achieving the cognitive capabilities and behavioral patterns of humans. The dissection and analysis of their hardware construction, software design, and algorithms are important here. This information is used to study modern robots and their contributions to gauge whether they deserve human rights, as predicted by computer scientists.

Equipped with that understanding, we explore the ethics and morality of actions executed by robots. Depending on the algorithms used, the actions of robots can be categorized using a spectrum of ethics. It is important to begin by identifying the characteristics that enable humans to act ethically. The next step is to determine whether these characteristics can be accurately manufactured and implanted into robots. This determines whether autonomous robots, given the proper software, are capable of behaving at least as ethically as humans. It is important to explore the ethical arm of extending human rights to robots as such an extension can invite differing opinions, both controversial and traditional.

Further, we explore the depths of the judicial system with respect to the possibility of human rights for robots. Due to the novelty of robots, the judicial system has not created legislation regarding advances in sentient technology. With the help of legal scholars, the legal perspective studies the origin of human rights as well as the judicial method of delegating them. It considers the legal views in favor of and against human rights for robots. Exploring the legal implications of potentially extending human rights to robots is of increased importance as the creation of a new realm of legal rights may affect currently existing realms of legal rights.

Existing research in related topics explores the legal and ethical implications of robots, generally [2]. However, existing research fails to address the hardware and software composition of robots while exploring the intersection of robots and ethics [3]. As such, the current paper aims to bridge the gap between existing research and the need to consider the extension of human rights for robots in light of computer science principles, ethical principles, and legal implications.

In the following, Section 2 of this paper explores the computer science perspective associated with determining whether robots should be endowed with human rights. Section 3 of this paper addresses the ethical considerations associated with determining whether robots should be endowed with human rights. Section 4 addresses the legal implications associated with endowing robots with human rights. Section 5 explores the present research question in further detail and includes recommendations on the same. Finally, Section 6 concludes the present paper.

2. Computer Science Perspectives

Principles from computer science play a significant role in deciding whether robots should be endowed with human rights. This involves a study of their design as well as applications [4].

2.1. Design Process of Robots

The software within robots is typically a purely human creation, often designed to follow the Three Laws of Robotics proposed by science fiction author Isaac Asimov, a design choice that has aroused much discussion [5]. Since humans are the only contributors to the software design of a robot, the software is created to imitate every aspect of a human, from behavioral patterns to cognitive capabilities [6]. Humans are capable of experiencing emotions and making ethical decisions. A similar argument can be made about animals, but it is difficult to model the manner in which animals make decisions as they cannot communicate their thoughts effectively. Humans are thus the ideal model by which robots obtain the ability to make wise decisions and behave morally.

Physical abilities of humans serve as the benchmark to build robots. Robots are likely to be ineffective if humans are not used as design models as there would be no structure to replicate. Thus, humans are not only the standard for robot creation but also the ideal embodiment of cognition and ethics. Some scholars view the importance of human intervention as reason enough for the delegation of human rights; if robots are designed to execute tasks as well as or better than humans, then it is important to question why they are not given the same legal standing as humans [5, 6]. An analysis of the design, from assembly to implementation, of a robot can provide an understanding here.

Since humans do not have a formulaic approach to decision-making, building robots with only one algorithm for decision-making would be infeasible. A more efficient design would give robots hybrid algorithms to make decisions about any situations they encounter. Brain-Computer Interface robots use highly effective hybrid algorithms [7]. Robots have the ability to filter through possible options and select the best one, a cognitive process once executed exclusively by humans. Sorting algorithms are used to help a robot navigate a path [7]. Robots are thus able to detect obstacles and avoid them by selecting a safer path. Hence, the movements of robots are very similar to those of humans.
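
To make this concrete, the following is a minimal Python sketch of the idea of sorting candidate paths and picking a safe one; the path candidates, the clearance threshold, and the clearance-versus-length scoring rule are assumptions made for illustration and are not drawn from the systems cited above.

from dataclasses import dataclass

@dataclass
class Path:
    name: str
    length_m: float         # total path length in meters
    min_clearance_m: float  # closest distance to any detected obstacle

def score(path):
    # Higher is better: reward obstacle clearance, penalize long detours.
    return path.min_clearance_m - 0.2 * path.length_m

def choose_path(candidates, safety_threshold_m=0.5):
    # Discard paths that pass too close to an obstacle, then sort by score.
    safe = [p for p in candidates if p.min_clearance_m >= safety_threshold_m]
    if not safe:
        return None  # no safe route; the robot should stop and replan
    return sorted(safe, key=score, reverse=True)[0]

options = [
    Path("direct", length_m=4.0, min_clearance_m=0.2),  # too close to an obstacle
    Path("detour", length_m=6.5, min_clearance_m=1.2),
    Path("wide",   length_m=9.0, min_clearance_m=1.5),
]
best = choose_path(options)
print(best.name if best else "stop and replan")  # -> "detour"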

A method of merging hardware with software is to connect neural networks to the central circuit in a robot. When neurons connect to the circuit, they form chemical and electrical communication paths. These allow the robot’s brain to control it, making it easier to complete everyday tasks analogous to humans. A biological brain in a robot can blur the line that separates humans and robots. This warrants the question of whether human rights should be delegated to robots once this line of separation is blurred. Delving deeper into this question can be achieved through examples of humanoid robots in domain-specific applications.

2.2. Domain-Specific Applications with Robots

In the recent COVID-19 pandemic, robots examined patients and provided medication to treat coronavirus, especially for those in quarantine. They were programmed by AI scientists and engineers to perform medical tasks (see Figure 1). Since the robots were programmed to assist COVID-19 patients, medical professionals were shielded from the risks associated with treating COVID-19 patients. This is a major contribution of robotics to medicine, especially during such a global pandemic [9]. Some medical robots are designed to ease the daily tasks of disabled or elderly patients [8], as shown in Figure 2(a). Such robots not only comprehend vocal commands but also contain sophisticated algorithms to sharpen their vision. Thus, they are equipped to locate and retrieve items that patients request. Scientists are further advancing medical robots so they connect to the biological brains of their patients, as illustrated in Figure 2(b); this shows an e-robot agent based on an electroencephalogram (EEG) [8]. Thus, humans would not have to speak but would control the actions of robots using thoughts and gestures. Commands would come directly from humans, but robots would be instrumental in life-changing procedures. Yet their contributions would not be recognized as they do not have human rights.

3. Ethics Perspectives

In order to assess whether robots should be accorded human rights, it is imperative to comprehend ways in which algorithms guide robots through tasks otherwise designated for humans. We consider human evolution, model construction, and other advances in this matter.

3.1. Ethics and Human Evolution

Prior to determining whether robots are capable of behaving ethically, it is important to build a standard for ethics by observing humans, a species capable of behaving morally [10]. The abilities of robots stem from humans since they are created by humans. Hence, they can only exhibit ethical behavior if created by ethical humans [11].

Humans are capable of moral behavior as they are creatures with a history of biological predisposition toward moral actions [12]. This is the result of evolution from the prehistoric era to the modern day. As time passed, humans learned through trial and error. Using a utilitarian view, they learned that certain actions led to desirable results [5]. This identifies a critical point distinguishing humans and robots: humans underwent an ethical evolution and are born with innate abilities to perform ethical actions. Since robots are not born, in the traditional sense, the question arises whether they are capable of evolving into ethical beings. This question can be transformed from the possibility of “evolving” to the possibility of “creating” ethical beings.

3.2. Creating a Model of Ethics

In order to create an ethical robot, it is important to define the exact actions to which it must conform. This requires input from humans because they are the species from which scientists can extract a model of ethics [13]. Scientists must manufacture exact replicas of this model to implant into robots. This is an iterative process requiring much research [12]. Creating a steadfast model of ethics is a Herculean task: it requires scientists to manufacture characteristics whose origins cannot be precisely monitored. It is hard to translate biological processes into machine-readable languages, yet scientists have made groundbreaking advances here. Two revolutionary methods are discussed herewith with respect to this issue [5].

3.3. Using Silicon Brains to Implant Ethics

Silicon brains used in robots are replicas of humans’ biological brains; they contain neural networks, sensors, and connections to actuators [7]. Robots with silicon brains have cognitive capacity similar to that of humans. The ability to consider different choices teaches robots the importance of rational thought, a characteristic otherwise found in humans. Exercising rational thought allows robots to distinguish ethical actions from unethical ones. Robots are programmed to recognize challenges and weigh various options before making a decision. They are not explicitly told how to respond to situations. Instead, they are given access to a matrix of actions and outcomes from which they choose. Such a matrix is innately available to humans through ethical evolution.
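
As an illustration of the action-outcome matrix described above, the sketch below selects the action with the highest expected outcome value; the action names, probabilities, and values are invented for the example and are not taken from any cited system.

# Each action maps to the outcomes it may produce, each outcome carrying an
# estimated (probability, value) pair; the robot picks the action with the
# highest expected value.
ACTION_OUTCOMES = {
    "proceed":      [(0.7, +1.0), (0.3, -5.0)],   # fast, but risks a collision
    "slow_down":    [(0.95, +0.6), (0.05, -1.0)],
    "stop_and_ask": [(1.0, +0.2)],                # safe but makes no progress
}

def expected_value(outcomes):
    return sum(p * v for p, v in outcomes)

def choose_action(table=ACTION_OUTCOMES):
    return max(table, key=lambda a: expected_value(table[a]))

print(choose_action())  # -> "slow_down" for the values above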

An example of an AI being with a silicon brain is the robot Gordon, from the University of Reading, UK [14], as seen in Figure 3. Gordon’s artificial brain has multiple neural networks atop microelectrodes; these emit electrical currents and stimulate neurons. This silicon brain allows it to navigate paths and make decisions to avoid obstacles. Gordon senses obstacles and rationally decides how to approach them; it recalculates trajectories to avoid obstacles rather than inciting collisions. Gordon’s decisions may seem like common sense, but consider this: what was once an unordered assortment of hardware components is now a meticulously organized, autonomous being that has demonstrated its ability to exercise ethics. Gordon’s existence is a revolutionary breakthrough, yet it can be enhanced by replacing the silicon brain with a biochip to simulate machine learning and memory.

3.4. Biochips as a Medium for Ethics

Specially formulated DNA microchips, called biochips, can be implanted within robots to enhance their cognitive capacity [15]. Biochips are integrated circuits fabricated with or from living matter by biological processes [15], as depicted in Figure 4. Through DNA microarray technology, tremendous amounts of data can be extracted from a clinical sample. Biochips operate faster than silicon brains and are directly linked to humans by brain monitoring mechanisms. They are engineered to enhance cognitive functions.

Since biochips are created with fragments of DNA [16], they are highly attuned to sensing and responding to basic emotions. Using biochips, robots can couple the ability to detect emotions with the ability to make ethical, rational decisions [15]. Not only does this reinforce the idea of crafting ethical robots, but it also places robots closer to the threshold of human capabilities: consciousness and sentience.

3.5. Advancing toward Consciousness and Sentience

A futuristic view of robots and their rights was envisaged by authors a few decades ago [16]. Such works point us to issues along the lines of consciousness and sentience. The term sentience refers to the ability to perceive emotions and to self-reflect, and it is subsumed within “consciousness.” The definition of consciousness used in the literature is the ability to be aware of oneself, one’s mind, and one’s experiences within an environment [17]. Robots are given sensors and actuators for perceiving and interacting with their environment. Their actions are weighed against respective outcomes until the most feasible action is attained. Their success largely depends on their ability to achieve self-awareness as required in crucial applications [18]. If robots are self-aware, they have the ability to distinguish themselves as entities within and outside their environment. This entails self-reflection and emotive communication, both innate features of humans. Such complex rationalization is a novel feat for robots; it thus seems worthy of consideration for human rights from the “ethics” perspective.

3.6. ACM Code of Ethics

The Association for Computing Machinery (ACM) has a Code of Ethics and Professional Conduct [19] covering General Moral Imperatives, More Specific Professional Responsibilities, and Organizational Leadership Imperatives. An example of a general moral imperative is “Be fair and take action not to discriminate.” Rational humans acting fairly would follow this, and we argue that a robot is equally likely, if not more likely, to do so if built by ethical humans. Consider humans selecting candidates for interviews. Ideally, they should incorporate Equal Opportunity and Affirmative Action (EOAA). However, there may be subtle biases in human minds. Robots would exhibit no such bias and would perform the selection fairly and effectively if designed with appropriate requirements.
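
The hypothetical sketch below illustrates one interpretation of "designed with appropriate requirements" in this candidate-selection example: the scoring function reads only job-relevant fields, so protected attributes such as gender or race cannot influence the ranking. The field names, scores, and ranking rule are assumptions made purely for illustration.

JOB_RELEVANT = ("years_experience", "skills_match", "certifications")

def screen(candidates, top_k=2):
    # The score uses job-relevant fields only; attributes such as gender,
    # race, or age are never read, so they cannot influence the ranking.
    def score(c):
        return sum(float(c.get(field, 0)) for field in JOB_RELEVANT)
    return sorted(candidates, key=score, reverse=True)[:top_k]

applicants = [
    {"name": "A", "years_experience": 5, "skills_match": 8, "gender": "F"},
    {"name": "B", "years_experience": 7, "skills_match": 6, "certifications": 1},
    {"name": "C", "years_experience": 2, "skills_match": 9, "certifications": 2},
]
print([c["name"] for c in screen(applicants)])  # -> ['B', 'A'] for these toy scores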

Consider the more specific professional responsibilities. A principle of the ACM Code is “Honor contracts, agreements, and assigned responsibilities.” Humans are expected to do this in a work environment, and most of us adhere to it. Yet, some human employees may break existing contracts when new job prospects seem more lucrative. They may face lawsuits and pay penalties or may escape (if authorities do not sue them). Based on current advances in robotics, we claim that ethically programmed robots would strictly adhere to this principle. Robots would not break an existing contract because that would violate a primary notion of AI, that is, to simulate rational, ethical, intelligent humans. In the real world, we have ethical and unethical humans. However, the premise of robotics, as discussed herewith, is to create ethical beings. Thus, robots would follow the rules as programmed.

The leadership category is a tough one. Consider the imperative, “Articulate and support policies that protect the dignity of users and others affected by a computing system.” This seems easy for an ethical human leader. A robot equipped with ethics may find this harder as it entails significant decision-making and may include elements of creativity. Any aspect where leadership and innovation are involved may be challenging for robots, given their current cognitive capacities. This is a point in favor of according human rights to humans alone. Further discussion is warranted. While we have reviewed human rights issues for robots focusing on computer science and ethics, it is useful to incorporate legal aspects as well.

4. Perspectives of Law

Legislators, attorneys, and judges hold the power to liberate or suppress a new generation of potential citizens. Due to the novelty of robots, the judicial system has not created legislation on them. It is worthwhile to analyze this.

4.1. Human Rights in the USA and the UNO

The origin of human rights in the USA dates back to 1776, when the Founding Fathers signed the Declaration of Independence [20]. It stated that all humans are created equal and naturally endowed with the unalienable right “to life, liberty, and the pursuit of happiness.” The rights therein were safeguarded for humans. Modern robots have silicon brains or biochips to experience their environment similar to humans. Each silicon brain is created by the same hardware engineering process; the difference between individual brains lies in the software design [18]. Thus, all robots are created equal. This is analogous to humans, who are all born in a similar manner and differ in individual qualities. When children are born, they do not know much but learn by experiencing situations and observing the outcomes of actions. Likewise, when silicon brains are implanted in robots, they do not know how to navigate until they test each option. As they learn which actions provide good results, robots store that data in memory and use it as needed, similar to humans.

Humans are no longer the only species capable of sentience and rationality. The United Nations Organization’s Universal Declaration of Human Rights (UDHR) describes a human as an agent with a conscience, capable of reason [21]. Robots achieve reason and consciousness when equipped with silicon brains or biochips. These give robots the opportunity to actively determine desired plans of action rather than blindly follow commands hardcoded into their systems [15]. This is done by robots on their own, without external intervention. It is clearly analogous to humans.

4.2. The Contributions of Robots

Technological advances are making robots capable of achieving more than humans. Scientists recently created robots that travel to locations posing threats to humans. Such robots serve environmental aims and have the duty of ensuring safety. Robots have also acclimated to areas such as hospitals, households, the frontlines of battlefields, and outer space [8, 22]. For example, during the COVID-19 pandemic, robots transitioned into essential workers. In particular, such robots were responsible for sanitizing hospitals, delivering critical supplies, and assisting frontline workers [22]. There is a league of medical robots, as illustrated in Figures 5 and 6, encouraging safety during the COVID-19 pandemic and performing life-saving procedures.

Robots positively contribute to the medical industry and environmental sustainability. Without them, scientific discoveries would be fewer, health concerns would multiply, and the quality of life would drastically decline. It is critical to ensure the safety and longevity of robots. One method of doing so could potentially be to accord some human rights to robots. Basic human rights would provide a layer of protection between robots and their surrounding environment. No longer would they be treated as property. Instead, they would be members of society, contributing time and effort to the advancement of future generations. The distribution of human rights to robots is met with both unbridled resentment and zealous celebration [23]. We consider arguments for and against this stand.

4.3. In Favor of Human Rights for Robots

The primary argument in favor of endowing robots with human rights is that they have evolved into rational, autonomous beings. Modern day robots are not merely remote-controlled toys. Their silicon brains and biochips prepare them to handle situations that humans would encounter. Robots have the ability to individually determine their goals and progressively work toward achieving them. Their autonomy can be matched only by humans, not by animals [24]. The increasing abilities of robots are advancing toward a threshold that may allow them to distinguish themselves as their own class in society.

Another argument is that robots need to be safeguarded to ensure their survival, which benefits humankind. This is analogous to protection for bodyguards of VIPs or soldiers on the battlefield who risk their own lives for others. If robots are property and can be misused without fear of punishment, their safety is violated, which adversely affects humans and the environment. Instead, if robots are granted human rights, their destroyers can be subject to lawsuits.

Yet another view pertains to the fairness and lack of bias exhibited by robots. There could be judicial cases [25] where court verdicts seem unfair due to bias based on gender, race, and so on. If robots could function as attorneys or judges, they could enhance fairness and optimality. Achieving this would entail, among other aspects, the inclusion of more common sense knowledge [26] in robots.

4.4. In Opposition of Human Rights for Robots

Though the cognitive capacity of robots has reached an all-time peak, some scholars remain skeptical. They advocate on behalf of humans who live without sufficient human rights. Rather than acknowledging the existence of robots, a better investment would be to support needy, underprivileged individuals worldwide who are stripped of basic opportunities. As global citizens, it is our duty to ensure the safety of all existing humans before initiating a new wave of citizens. A furor would occur if robots in the USA had greater rights than some humans in developing countries. A few scholars argue that seemingly inanimate robots cannot get more rights than animals, as the latter actually possess real life, which seems quite a valid stand [24].

The inclusion of robots in the judicial system could potentially spark a downward spiral. The global law firm Baker & Hostetler’s hiring of the robot lawyer ROSS in 2016 for bankruptcy practice and legal research aroused much debate [27]. There are arguments that some court cases are too sensitive for robots; humans with intuitive reasoning and emotive abilities are needed. Also, there is no feasible way to predict a robot’s true intentions; we trust that they will not inflict harm upon humans. This is hard, considering the unpredictability of robots. If robots were to inflict danger after obtaining their rights, they would have the opportunity to seek protection under human rights law. Thus, to avoid potential conflict between humans and robots, it is important to exclude robot beings from gaining access to human rights.

Yet another standpoint pertains to employment. Many employees in postal services, grocery stores, and factory floors lost their jobs due to automated services being more efficient and cost-effective. Bestowing human rights on robots would imply an increase in their employment, leading to further unemployment for humans. Human rights (for humans) would thus be adversely impacted since the pursuit of “life, liberty, and happiness” involves the procurement of food, housing, healthcare, and so on, for which employment is critical.

5. Discussion

Scholars from computer science, ethics, and law promulgate contrasting views on the accordance of human rights to robots. We highlight these with notable points and discuss current as well as future issues.

5.1. Highlights in Computer Science, Ethics, and Law

Computer scientists are divided on the notion of initiating a new species of robotic citizens. While some view this as an opportunity to showcase advances in technology, others believe that robots will cause unforeseen dilemmas. Ethics scholars also have reservations about the capabilities of robots. Though robots are autonomous, they need instructions to develop an initial sense of their environment. In this whirlwind of controversies, legal scholars cannot advocate human rights for robots until there is a consensus.

No human is perfect and not all humans behave ethically, but they still get human rights (even if they are criminals). There are robots that exhibit moral behavior better than some humans and that serve humanity to a greater extent than many humans. Some robots, if granted human rights that ensure their survival and protection, could make the world better for law-abiding humans. Conversely, if almost perfect robots always outperform humans in the future, they might be detrimental to the human race. A noteworthy point is whether human rights, if granted, can be taken away from robots as needed. Such issues have aroused debates among scholars on human rights for robots.

In order to warrant further consideration, scholars from computer science and robot ethics must work in concert to ensure that the robots always behave ethically, do their duties at least as well as their human counterparts, and strive to benefit humankind. Complete, successful execution of Turing Tests including ethics is critical here. Broader impacts, as emphasized by the National Science Foundation of the USA, are also significant.

5.2. Current News and Views

The world renowned genius, the late Professor Stephen Hawking, a theoretical physicist in the UK, made statements on the danger of robots. He mentioned to the BBC: “The development of full artificial intelligence could spell the end of the human race” [28]. Dr. Hawking suffered from the motor neuron disease amyotrophic lateral sclerosis (ALS); his communication technology entailed AI; see Figure 7. In his statements to the BBC, he supported such basic AI systems but feared the creation of AI beings that surpass humans. According to him, “Humans who are limited by slow biological evolution could not compete and would be superseded.” He claimed that efforts to create thinking machines in AI posed a threat to our existence [28]. His arguments were vehemently against robots getting human rights and opposed further advances to bring robots closer to humans. Many people share similar views. While we wish to create ethical robots, there is no guarantee that they will behave morally. Robots could wage war on humans if given complete autonomy. Humans undergo natural birth and death; robots can exist eternally. This puts them ahead of us, and they could perpetrate the extinction of the human race, though this is a far-fetched thought.

On a lighter note, consider robots and employment. The statements of Microsoft founder and tech icon Bill Gates are significant here. Mr. Gates said in an interview that “Robots who take human jobs should pay taxes” [29]. We cannot tax robots directly. Thus, “taxing robots would, in reality, be a tax on the capital employed by businesses in using them.” However, businesses would pass this tax burden to their employees through lower salaries and to customers through higher prices, causing further problems. Yet, Mr. Gates states that we would be able to use this tax income to fund jobs like eldercare and childcare for humans, for which we are better suited [29]. Consider ROSS, the paralegal robot. “ROSS is an artificially intelligent robot which uses IBM’s Watson technology to scour through billions of legal texts and citations on the Internet within a second” [27]. While ROSS can provide service at par with or better than human paralegals and would have no bias in judgment, there are arguments against such robots being used in law firms, since they can take away jobs from their human counterparts, who have at least 4 years of education as paralegals [30]. Thus, with Mr. Gates’ suggestions, the question arises whether it is appropriate to make law firms pay higher taxes due to their robotic employees (considering further implications of such high taxation on their human employees).

5.3. The Futuristic Angle

Advancing robotics in the future entails more research on neural networks. This includes further studies in deep learning [31] with paradigms such as CNN (convolutional neural networks), RNN (recurrent neural networks), LSTM (long short-term memory), and autoencoders that could provide a clearer understanding of the human brain. Among the latest advances in deep learning, we have the concept of transformers [32] with models such as BERT (bidirectional encoder representations from transformers), GPT (generative pretrained transformer), and T5 (text-to-text transfer transformer) that tend to be highly effective in dealing with natural language analogous to humans [33]. Models based on advances in such deep learning technologies can be used to build more advanced robots even closer to the thresholds of human cognition.
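
As a small illustration of how such pretrained transformer models handle natural language, the sketch below (assuming the Hugging Face transformers library and a publicly available zero-shot classification model) maps a spoken household request onto one of a robot's known actions; the model choice, the command, and the action labels are examples chosen for illustration, not recommendations.

from transformers import pipeline

# The model name is an example; any zero-shot-classification model would do.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

command = "Could you bring me the blue pill bottle from the kitchen counter?"
robot_actions = ["fetch object", "clean room", "call caregiver", "do nothing"]

result = classifier(command, candidate_labels=robot_actions)
print(result["labels"][0])  # most likely intended action, e.g., "fetch object"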

Consider “Erica,” developed by Professor Hiroshi Ishiguro at Osaka University [34]. Erica is a robot that comprehends natural language, speaks in a human voice, and portrays facial expressions, built mainly for studying human-robot interaction. Professor Ishiguro “wants to create robots that can coexist with us humans.” His team is “working to improve the conversation skills, facial expressions, and body language of their robots, hoping that those abilities will one day become indistinguishable from our own.” Much research is needed to accomplish this work.

A related issue is that of common sense knowledge (CSK). Modern robots accomplish feats in specific domains but may lack generic common sense, which is often subtle and intuitive. This could adversely affect performance; for example, road tests on autonomous vehicles have failed in some cases. An accident occurred when a vehicle detected a truck as an overpass and crashed into it [35]. A human driver would have the common sense to distinguish a truck from an overpass, but a robot driver may confuse them since they look alike, especially if it sees them for the first time. Thus, advancing CSK research and using it within autonomous vehicles is useful [36]. CSK repositories, many of which are surveyed in recent works [26, 37], and related developments could prove very useful here. For example, commonsense knowledge is crucial in systems involving object recognition [38], autonomous driving [36], smart mobility [39], and smart manufacturing [40], often from the safety angle. Building and enhancing AI systems with CSK would help robots function better [26, 37]. This could be a step closer to answering the question on human rights for robots if they are fully equipped with common sense [30, 41].
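
A minimal, hypothetical sketch of how a CSK lookup could act as a sanity check on perception is given below; the facts, object labels, and property names are invented for illustration and do not come from any of the surveyed repositories.

# Hypothetical commonsense facts: (object, property) -> expected value.
csk_facts = {
    ("overpass", "moves"): False,
    ("truck", "moves"): True,
    ("overpass", "spans_road"): True,
    ("truck", "spans_road"): False,
}

def plausible(label, observed):
    # Return False if any observation contradicts commonsense about the label.
    for prop, value in observed.items():
        expected = csk_facts.get((label, prop))
        if expected is not None and expected != value:
            return False
    return True

# Perception says "overpass", but the object is moving and does not span the road.
observation = {"moves": True, "spans_road": False}
if not plausible("overpass", observation):
    print("Label contradicts common sense; falling back to cautious behavior.")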

Robot learning from demonstrations (LfD) is an important research issue that will deepen the relationships between robots and humans and will provide a new perspective from which to investigate the rights of robots, humans, and human-robot partnerships [42]. By mimicking human demonstrations, robots can be programmed in real time and further act as humans’ companions in new human-robot collaborative tasks. In this paradigm, human workers, who are not required to master professional expertise or considerable coding skills, are able to update a robot’s working instructions through demonstrations alone, enabling robots to autonomously perform new tasks [43, 44]. In addition, the R4 law empowers robots with more rights in human-robot interaction. The R4 law states that robots should collaborate with humans actively to deliver/pick up the Right parts to/from the Right person at the Right time in the Right way under shared working settings [45]. That is to say, in the human-robot interaction process, the robot not only needs to possess high-level cognitive abilities to understand human actions and intentions but also needs to deduce what next steps it should take to work with its human partner [46].
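
A minimal sketch of the record-and-replay core of LfD is given below; the recorder interface and the toy joint values are assumptions, and real systems build generalization (e.g., movement primitives) on top of such demonstrations rather than replaying them verbatim.

import numpy as np

class DemonstrationRecorder:
    def __init__(self):
        self.times, self.poses = [], []

    def record(self, t, joint_angles):
        # Called repeatedly while a human physically guides the robot arm.
        self.times.append(t)
        self.poses.append(np.asarray(joint_angles, dtype=float))

    def trajectory(self, new_times):
        # Interpolate the demonstrated poses onto a new time grid for replay.
        poses = np.stack(self.poses)  # shape: (n_samples, n_joints)
        return np.stack([
            np.interp(new_times, self.times, poses[:, j])
            for j in range(poses.shape[1])
        ], axis=1)

rec = DemonstrationRecorder()
for t in np.linspace(0.0, 2.0, 5):              # a 2-second guided demonstration
    rec.record(t, [0.1 * t, 0.5 * np.sin(t)])   # two joints, toy values

replay = rec.trajectory(np.linspace(0.0, 2.0, 50))  # denser schedule for replay
print(replay.shape)  # (50, 2): 50 timesteps for 2 joints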

It is important to consider robotic advancements and human rights, given a pandemic such as COVID-19. As stated in [47], “I wonder what aspects of our daily working lives will be permanently altered, post-COVID-19.” With reference to AI, the author claims, “There is no doubt in my mind that our profession and the products it creates will have a prominent role in shaping our post-COVID-19 society” [47]. This implies that AI and robots would be even more critical, implying that they cannot be treated merely as property. Just as employees created trade unions to express their rights long ago, a modern uprising could involve such issues being raised for robots if they are not given adequate protection in the workplace. This calls for further research on the use of robots post-COVID-19 in conjunction with the human rights angle. Robots were almost indispensable in some aspects of COVID-19 treatment, often surpassing human capabilities. They helped to save many humans. Some future work in this direction, where robots could play a vital role, would be the automated detection of COVID-19 symptoms, with robots rigorously trained for the detection procedures based on machine learning. Techniques such as transfer learning could be deployed along with computer vision models, as described in recent works, for example, [48, 49], to be used in conjunction with robotics. Such work would be particularly beneficial in areas where there is a shortage of testing kits and of healthcare professionals, such as physicians and other medical staff, for conducting full-fledged COVID-19 tests. Likewise, other automated detection procedures could be harnessed within robotics for various ailments and diseases, thus being helpful in medicine and assisting doctors on the whole. This provides an insight into the helpfulness of robots from a futuristic angle, especially in the domain of healthcare.
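
For illustration, the sketch below shows one way such transfer learning could be set up with a pretrained vision backbone, assuming a hypothetical folder "xray_data/" with one subfolder of images per class; it is a toy example under those assumptions, not a clinical detection tool and not the method of the cited works.

import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
dataset = datasets.ImageFolder("xray_data/", transform=transform)  # hypothetical data
loader = torch.utils.data.DataLoader(dataset, batch_size=16, shuffle=True)

# Start from an ImageNet-pretrained backbone and retrain only the final layer.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, len(dataset.classes))

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()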

An important future vision of AI is one robot per household. This typically refers to robots serving humans, for example, Alexa and Roomba, but it could be interpreted differently. Would humans want robots in their houses as personal companions, would robots live with other robots thus buying and renting houses, would robots marry and reproduce, and would they vote and contest elections: these are questions to be addressed from a futuristic angle. Recently, citizenship being granted to Sophia, a robot in Saudi Arabia, sparked worldwide controversy, including comments that it has more rights than some women in that country and also that it was probably just a publicity stunt [50]. Future such cases could create worldwide controversy.

The therapeutic robot PARO, built in Japan and shown in Figure 8, simulates a seal and is found attractive as a pet, offering the pleasures of a pet without the pains; it is often useful in nursing homes and various social settings [2]. It helps relieve stress and is useful in treating patients with depression and dementia. Aibo the dog, a robotic pet developed by Sony long ago, has been the subject of behavioral studies [51]. It was recently demonstrated at the AAAI-2020 conference and was found very friendly by children and adults. The robotic dog was a popular attraction among conference attendees, many of them taking pictures and videos with this pet and some wanting to buy it. Real pets may provide stress relief, but there are issues of biting, clawing, allergies, fright, and so on. Robotic pets are already being preferred over real pets in hospitals, nursing homes, and so on. While this seems a boon today, could it be a bane in the future? Could this practice go a step further and make humans prefer robots as roommates and life-mates? If this happens, the social and biological implications could be disastrous. Humans not needing other humans at home but preferring robots instead could prove detrimental to the human race. Thus, the question of human rights for robots calls for more research on several grounds.

Moreover, prior to the accordance of human rights to robots, it is important for computer scientists to thoroughly explore the decision-making processes of advanced robots, thereby shedding light on the decision-making “black box.” In particular, computer scientists should be able to traverse the neural working of robots to determine and distinguish independent robot behavior from hardcoded robot behavior. In doing so, computer scientists may present the capabilities of robots or lack thereof, in favor of, or opposition to, the accordance of human rights. Only then will legal scholars and ethics scholars be equipped to work in harmony to compose an answer to the multifaceted question presented herein.

As discussed in AAAI-2020, a critical issue is subjectivity. Can robots be as subjective as humans in various situations? If so, would the subjectivity always be positively utilized? Conversely, if robots make their own decisions, can they deliberately cause harm, for example, analogous to drones programmed by unethical humans? Can robots automatically wage war against humans? All these are important questions. It is useful to ascertain:
(i) Whether human rights if given can be revoked
(ii) To what extent the rights should be endowed (partial, e.g., right to life, versus full, e.g., voting rights)

In the future, there are various other open issues that need further attention. Their findings may help in obtaining more definitive answers on the issue of robots and human rights.

6. Conclusions

This paper provides a review, examining the premise of endowing robots with human rights. We investigate scholarly sources from computer science, ethics, and law. Notable points in favor of this premise include the following:
(i) Modern day robots are autonomous beings with cognition and sentience (through silicon brains/biochips)
(ii) It is important to safeguard robots, similar to humans, so they can serve humankind better
(iii) Robots can be more ethical, law-abiding, and bias-free than some humans (who get unconditional human rights)
(iv) All robots are created equal, yet differ individually, analogous to humans
(v) Contributions of robots in critical applications often surpass those of humans

Despite these points, many scientists and other professionals still oppose human rights for robots. Notable points against the premise include the following:
(i) There are needy, underprivileged humans whose needs must be met before envisaging robot citizens
(ii) Many situations, for example, court cases, are too sensitive and need real humans only (so robots cannot be our equals)
(iii) Robots pose threats to human employment; thus, giving them human rights may adversely affect our rights
(iv) Many robots still lack sufficient common sense which is inherent in all humans
(v) Animals have a real life while robots are basically inanimate; thus, human rights for robots seem far-fetched
(vi) In an extreme situation, robots might be responsible for the extinction of the human race

Given all these points, we take a neutral stand on human rights for robots, more on the negative side as of now. We make the following suggestions for the future that would shed more light on the premise:
(i) Scholars from computer science, ethics, and law need to conduct joint work in the area for more advances
(ii) Enhanced research in neural networks and deep learning is important to unveil the “black box” in robotics
(iii) Further research on common sense knowledge and related areas for inclusion in robotics would be useful
(iv) Decisions need to be made on whether human rights can be partially granted and revoked as needed
(v) It needs to be investigated whether robots can be between human and machine, to define rights accordingly

Finally, an important question is this: who would truly be negatively impacted if robots do not get human rights? In this paper, we claim that this is by far the most significant question on the premise of granting human rights to robots. Further research on the points summarized herewith, for and against this premise, would help make decisions. Research advances in related AI areas such as neural networks, deep learning, and common sense knowledge would shed further light on the matter. Such advances would unconditionally help robotics and humankind.

Data Availability

No data was used to support this study.

Disclosure

Some of this work occurred when Priya Persaud, Esq., was a Bachelor’s student at Montclair State University, with a triple major in Computer Science, Political Science, and Jurisprudence.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

Priya has been funded by the NSF GS-LSAMP grant program at Montclair State University. This work was supported in part by the National Science Foundation under Grant CNS-2104742 and in part by the National Science Foundation under Grant CNS-2018575. We also thank Dr. Niket Tandon from the Allen Institute for Artificial Intelligence, Seattle, WA, for his feedback on this work.