Experts predict that in the next 10 to 100 years scientists will succeed in creating human-level artificial general intelligence. While it is most likely that this task will be accomplished by a government agency or a large corporation, the possibility remains that it will be done by a single inventor or a small team of researchers. In this paper, we address the question of safeguarding a discovery which could without hesitation be said to be worth trillions of dollars. Specifically, we propose a method based on the combination of zero knowledge proofs and provably AI-complete CAPTCHA problems to show that a superintelligent system has been constructed without having to reveal the system itself.

1. Introduction and Motivation

Experts predict that in the next 10 to 100 years scientists will succeed in creating human-level artificial general intelligence (AGI) [15]. While it is most likely that AGI will be created by a government agency [6], such as DARPA, or a large corporation such as Google Inc., the possibility remains that it will be done by a single inventor or a small team of “garage inventors.” The history of computer science is the history of such inventors. Steve Jobs (Apple), Bill Gates (Microsoft), Mark Zuckerberg (Facebook), and Page and Brin (Google) to name just a few, all revolutionized the state of technology while they were independent inventors.

What is an inventor to do after successfully constructing an artificially intelligent system? Going public with such an invention may be dangerous, as numerous powerful entities will try to steal it. Worse yet, they will also likely try to restrict the inventor's freedom and safety, either to prevent the leaking of information or to secure necessary assistance in understanding the invention. Potential nemeses include security agencies, government representatives, the military-industrial complex, multinational corporations, competing scientists, foreign governments, and potentially anyone else who understands the value of such an invention.

It has been said that a true AI is the last invention we will ever have to make [7], as it will make all the other inventions for us. The monetary value of a true AI system is hard to overestimate, and it is well known that billions have already been spent on research by governments and industry [8]. Its potential for the military-industrial complex is unprecedented, both in terms of smart weapons and human-free combat [9]. Even if the initial system has only human-level intelligence, such a machine would, among other things, be capable of designing the next generation of even smarter intelligent machines, and it is generally assumed that an intelligence explosion will take place shortly after such a technological self-improvement cycle begins, leading to the creation of superintelligence. Possession of such a system would clearly put its inventor in danger [7].

In this paper, we address the question of safeguarding a true AI, a discovery which could without hesitation be said to be worth trillions of dollars. Without going into details, we assume that the inventor is able, through code obfuscation, encryption, anonymization, and location obscurity, to prevent others from directly accessing the system, but still wishes to prove that it was constructed. For this purpose, we propose a novel method based on a combination of zero knowledge proofs and provably AI-complete CAPTCHA problems to show that a superintelligent system has been constructed without having to reveal the design of the system.

Alternatively, our method could be used to convince a group of skeptics that a true AI system has in fact been invented, without having to resort to time-consuming individual demonstrations. This would be useful if the inventor faces a skeptical reception from the general public and scientific community. In the past, exaggerated claims have been made about some AI systems [8], and so a skeptical reception would not be surprising. The following sections provide an overview of zero knowledge proofs, CAPTCHAs, and the concept of AI-completeness, all of which are necessary to understand the proposed method.

2. Zero Knowledge Proof

Simply stated, a zero knowledge proof (ZKP) is an interactive probabilistic protocol between two parties that gives, with a high degree of certainty, evidence that a theorem is true and that the prover knows a proof, while providing not a single bit of information about the said proof to the verifier [10]. A ZKP works by breaking up the proof into several pieces in such a way that [10]:
(1) the verifier can tell whether any given piece of the proof is properly constructed;
(2) the combination of all the pieces constitutes a valid proof;
(3) revealing any single piece of the proof does not reveal any information about the proof.

To begin, the prover hides each piece of the proof by applying a one-way function to it. After that, the verifier is allowed to request a decryption of any single piece of the proof. Since the verifier can select a specific piece at random, seeing that it is properly constructed provides probabilistic evidence that all pieces of the proof are properly constructed, and so is the proof as a whole [10].
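The commit-then-reveal round described above can be illustrated with a toy sketch (not a cryptographically sound ZKP; the salted hash stands in for the one-way function, and the function names are ours):

```python
import hashlib
import os
import random

def commit(piece: bytes) -> tuple[bytes, bytes]:
    """Hide a proof piece with a salted one-way hash; return (commitment, salt)."""
    salt = os.urandom(16)
    return hashlib.sha256(salt + piece).digest(), salt

def reveal_checks(commitment: bytes, salt: bytes, piece: bytes) -> bool:
    """Verifier confirms a revealed piece matches its earlier commitment."""
    return hashlib.sha256(salt + piece).digest() == commitment

# Prover splits the proof into pieces and publishes commitments only.
pieces = [b"piece-%d" % i for i in range(10)]
committed = [commit(p) for p in pieces]

# Verifier picks one piece at random and asks for its decryption.
i = random.randrange(len(pieces))
c, salt = committed[i]
assert reveal_checks(c, salt, pieces[i])  # one round of probabilistic evidence
```

Repeating the round with fresh random choices drives the probability of an undetected flaw down geometrically, which is the property the proposed method borrows.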


3. CAPTCHA

With the steady increase in popularity of services offered via the Internet, the problem of securing such services from automated attacks became apparent [11]. In order to protect limited computational resources against utilization by a growing number of human-impersonating artificially intelligent systems, a methodology was necessary to discriminate between such systems and people [12]. In 1950, Turing published his best-known paper, "Computing Machinery and Intelligence," in which he proposed evaluating the abilities of an artificially intelligent machine based on how closely it can mimic human behavior [13]. The test, now commonly known as the Turing test, is structured as a conversation and can be used to evaluate multiple behavioral parameters, such as an agent's knowledge, skills, preferences, and strategies [14]. In essence, it is the ultimate multimodal behavioral biometric, which was postulated to make it possible to detect differences between man and machine [11].

The theoretical platform for an automated Turing test (ATT) was developed by Naor in 1996 [15]. The following properties were listed as desirable for the class of problems which can serve as an ATT:
(i) many instances of a problem can be automatically generated together with their solutions;
(ii) humans can solve any instance of a problem quickly and with a low error rate; the answer should be easy to provide, either by a menu selection or by typing a few characters;
(iii) the best known artificial intelligence (AI) programs for solving such problems fail a significant percentage of the time, despite full disclosure of how the test problem is generated;
(iv) the test problem specification needs to be concise in terms of both the description and the area used to present the test to the user.

Since the initial paper by Naor, a great deal of research has been performed in the area, with different researchers frequently inventing new names for the same concept of human/machine disambiguation [16, 17]. In addition to ATT, the developed procedures are known under such names as [11]: reversed Turing test (RTT) [18], human interactive proof (HIP) [19], mandatory human participation (MHP) [20], or Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA) [21, 22].

As ongoing developments in AI research allow some tests to be broken [23–26], research continues on developing more secure and user-friendly ways of telling machines and humans apart [27–32]. Such tests are always based on an as-yet-unsolved problem in AI [33]. Frequent examples include pattern recognition, in particular character recognition [34–40] or image recognition [41–43]; a number of CAPTCHAs are based on recognition of different biometrics such as faces [44–46], voice [47, 48], or handwriting [49, 50]. Additionally, the following types of tests have been experimented with [11, 51]:
(i) Reading: a password displayed as a cluttered image.
(ii) Shape: identification of complex shapes.
(iii) Spatial: a text image is rendered from a 3D model.
(iv) Quiz: a visual or audio puzzle or trivia question.
(v) Match: common theme identification for a set of related images.
(vi) Virtual reality: navigation in a 3D world.
(vii) Natural: uses media files collected from the real world, particularly the web.
(viii) Implicit: the test is incorporated into the web page navigation system [52].

4. AI-Completeness

A somewhat general definition of the term, included in the 1991 Jargon File [53], states:

AI-complete: [MIT, Stanford, by analogy with “NP-complete”] adj. Used to describe problems or subproblems in AI, to indicate that the solution presupposes a solution to the “strong AI problem” (i.e., the synthesis of a human-level intelligence). A problem that is AI-complete is, in other words, just too hard. Examples of AI-complete problems are “The Vision Problem”, building a system that can see as well as a human, and “The Natural Language Problem”, building a system that can understand and speak a natural language as well as a human. These may appear to be modular, but all attempts so far (1991) to solve them have foundered on the amount of context information and “intelligence” they seem to require.

As such, the term “AI-complete” (or sometimes AI-hard) has been a part of the field for many years [54] and has been frequently brought up to express difficulty of a specific problem investigated by researchers (see [5568]).

Recent work has attempted to formalize the intuitive notion of AI-completeness, most notably [54].

In 2003, von Ahn et al. [69] attempted to formalize the notion of an AI-problem and the concept of AI-hardness in the context of computer security. An AI problem was defined as a triple: "P = (S, D, f), where S is a set of problem instances, D is a probability distribution over the problem set S, and f : S → {0, 1}* answers the instances. Let δ ∈ (0, 1]. We require that for an α > 0 fraction of the humans H, Pr_{x←D} [H(x) = f(x)] > δ. An AI problem P is said to be (δ, τ)-solved if there exists a program A, running in time at most τ on any input from S, such that Pr_{x←D, r} [A_r(x) = f(x)] ≥ δ. (A is said to be a (δ, τ) solution to P.) P is said to be a (δ, τ)-hard AI problem if no current program is a (δ, τ) solution to P, and the AI community agrees it is hard to find such a solution." Here f is a function mapping problem instances to set membership; in other words, it determines whether a specific pattern has the property in question. It is necessary that a significant number of humans can compute f. If the same could be accomplished by a program in efficient time, the problem is considered solved. It is interesting to observe that the proposed definition is in terms of democratic consensus by the AI community: if researchers say the problem is hard, it must be so. Also, the time humans take to solve the problem is not taken into account; the definition simply requires that some humans be able to solve the problem [69].
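The (δ, τ) condition can be made concrete with a small empirical sketch. The toy problem, the function names, and the finite sampling are all our assumptions; the actual definition quantifies over the full distribution D, not a sample:

```python
import random
import time

def is_delta_tau_solution(solver, sample_instances, f, delta, tau):
    """Empirically test von Ahn et al.'s condition on a finite sample:
    the solver must agree with f on at least a delta fraction of instances,
    and every call must finish within tau seconds."""
    correct = 0
    for x in sample_instances:
        start = time.perf_counter()
        answer = solver(x)
        if time.perf_counter() - start > tau:
            return False  # exceeded the time bound on some input
        if answer == f(x):
            correct += 1
    return correct / len(sample_instances) >= delta

# Hypothetical toy problem: parity of an integer, which humans solve easily.
f = lambda x: x % 2
instances = [random.randrange(1000) for _ in range(200)]
print(is_delta_tau_solution(lambda x: x % 2, instances, f, delta=0.9, tau=1.0))  # True
```

A (δ, τ)-hard problem is then one for which no current program passes this kind of check, by the consensus of the AI community.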

In 2007, Shahaf and Amir [70] published their work on the theory of AI-completeness. Their paper presents the concept of the human-assisted Turing machine and formalizes the notion of different Human Oracles (see [70] for technical details). The main contribution of the paper is a method for classifying problems in terms of the human-versus-machine effort required to find a solution. For some common problems, such as natural language understanding (NLU), the paper proposes a method of reductions allowing conversion from NLU to the problem of speech understanding via text-to-speech software.

In 2010, Demasi et al. [71] presented their work on problem classification for artificial general intelligence (AGI). The proposed framework groups the problem space into three sectors:
(i) Non-AGI-bound: problems that are of no interest to AGI researchers.
(ii) AGI-bound: problems that require human-level intelligence to be solved.
(iii) AGI-hard: problems that are at least as hard as any AGI-bound problem.

The paper also formalizes the notion of human oracles and provides a number of definitions regarding their properties and valid operations.

In 2011, Yampolskiy [54] proposed the following formalization of AI-completeness.

Definition 1. A problem C is AI-complete if it has two properties:
(1) it is in the set of AI problems (Human Oracle solvable);
(2) any AI problem can be converted into C by some polynomial-time algorithm.

Yampolskiy [54] showed that the Turing test problem is an instance of an AI-complete problem and further showed certain other AI problems to be AI-complete (question answering, speech understanding) or AI-hard (programming) by utilizing polynomial-time reductions.
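The conversion step in such reductions can be illustrated with a toy sketch. The instance formats below are entirely hypothetical; the point is only that rewriting an instance of one AI problem (question answering) into an instance of another (a Turing-test dialogue) takes time polynomial, here linear, in the instance size:

```python
# Toy illustration of the instance conversion used in a polynomial-time
# reduction (Definition 1, property 2). Formats are hypothetical.
def reduce_qa_to_turing_test(question: str) -> list[str]:
    """Rewrite a question-answering instance as a one-turn Turing-test dialogue."""
    return [
        f"Interrogator: {question}",
        "Respondent: <answer to be judged for humanness>",
    ]

dialogue = reduce_qa_to_turing_test("What is the capital of France?")
```

Because the rewrite is efficient, any solver for the target problem yields an efficient solver for the source problem, which is what makes the target at least as hard.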

Furthermore, according to the Encyclopedia of Artificial Intelligence [72], published in 1992, the following problems are all believed to be AI-complete [54, 72]:
(i) Natural language understanding: "Encyclopedic knowledge is required to understand natural language. Therefore, a complete Natural Language system will also be a complete Intelligent system."
(ii) Problem solving: "Since any area investigated by AI researchers may be seen as consisting of problems to be solved, all of AI may be seen as involving Problem Solving and Search."
(iii) Knowledge representation and reasoning: "…the intended use is to use explicitly stored knowledge to produce additional explicit knowledge. This is what reasoning is. Together Knowledge representation and Reasoning can be seen to be both necessary and sufficient for producing general intelligence—it is another AI-complete area."
(iv) Vision or image understanding: "If we take 'interpreting' broadly enough, it is clear that general intelligence may be needed to do this interpretation, and that correct interpretation implies general intelligence, so this is another AI-complete area."


5. SuperCAPTCHA

In this section, we describe our SuperCAPTCHA method, which combines the ideas of ZKP, CAPTCHA, and AI-completeness to create a proof of access to a superintelligent system.

Imagine a CAPTCHA based on a problem which has been proven to be AI-complete, meaning only a computer with human-level intelligence or a real human would be able to solve it. We call such a problem a SuperCAPTCHA. If we knew for a fact that such a test was not solved by real humans, we could conclude that a human-level artificially intelligent system has been constructed and utilized. One simple way to eliminate humans as potential test solvers is to design a test which would require the contribution of all humans many times over to solve in the allotted time; in other words, a test comprised of K instances of a SuperCAPTCHA, for large values of K.

We can estimate the current human population at 7 billion people, which is in fact a generous overestimate, since not all people have the skills to solve even a simple CAPTCHA, much less an AI-complete one. If the developed SuperCAPTCHA test required 50 billion human-effort-hours to be solved and it was solved in 1 hour, we can conclusively state that it was not done by utilizing real people. To arrive at this conclusion, without loss of generality, we assume that any AI software could be run on progressively faster hardware until it exceeds the speed of any human by a desired constant factor.
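The arithmetic behind this argument is a simple back-of-the-envelope check; the variable names and the deliberately generous bounds below mirror the example in the text:

```python
# Back-of-the-envelope check that humans alone could not have produced
# the solution. All quantities are the rough assumptions from the text.
population = 7_000_000_000               # generous upper bound on human solvers
required_effort_hours = 50_000_000_000   # human-effort-hours the test demands
observed_solve_time_hours = 1.0          # how fast the test was actually solved

# Even if every person on Earth worked for the entire observed hour:
max_human_hours = population * observed_solve_time_hours

print(required_effort_hours > max_human_hours)  # True: humans cannot explain it
```

Since 50 billion exceeds 7 billion by roughly a factor of seven, no conspiracy of human solvers, however well organized, could account for the observed solution.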

Utilizing the existing AI-complete problems, we propose a few SuperCAPTCHA tests which, if properly administered, could serve to prove that an artificially intelligent system has been developed without revealing the design of the system. As long as each SuperCAPTCHA is solved an order of magnitude more times than the number of potential human solvers, the conclusion of an artificial origin of the solver will remain valid. Examples of some AI-complete CAPTCHAs are as follows:
(i) Provide a detailed description and explanation of a random image.
(ii) Write a book indistinguishable in quality from those written by human authors.
(iii) Write a computer program to simulate human-level intelligence (currently too hard for people).

So, suppose a SuperCAPTCHA was administered which comprised properly labeling and describing a random set of 100 billion images. Also suppose that it was accomplished in an amount of time in which all humans in the world working together would not be able to complete the task, for example, 2 minutes. The next question is the evaluation of a claimed solution to a SuperCAPTCHA. Evaluating the complete solution is too complicated, so our proposed method relies on human graders who randomly decide on a piece of the total solution they would like to examine and compare the performance of the AI system to that of human users. While the traditional Turing test is based on dialogues, SuperCAPTCHAs are based on random sampling and verification. The verification procedure itself has to be represented by an efficient algorithm running in at most polynomial or probabilistic polynomial time. In our example, if a randomly chosen image's labeling conforms to the labeling a human being would have produced, this adds probabilistic evidence towards the belief that a truly artificially intelligent system has been developed. With each additional inspected piece of the solution, the public's confidence in such an explanation will increase in a probabilistic fashion inspired by the ZKP protocol. Best of all, even partially solved SuperCAPTCHAs, or cheating attempts by humans to pass a SuperCAPTCHA, will result in beneficial labeling of large datasets.

With every additional piece of a SuperCAPTCHA verified, the public's confidence that a true AI has been invented will increase, just as in a classical zero knowledge proof system. As additional problems are proven to be AI-complete, the repertoire of potential SuperCAPTCHAs will grow proportionally. It is also interesting to observe that the inventor of a truly intelligent artificial system may delegate the design of SuperCAPTCHAs to the system itself.

6. Conclusions

In this paper, we addressed the question of safeguarding the invention of a truly artificially intelligent system from public disclosure while allowing the inventor to claim credit for it. Short of simply using the developed AI system covertly and claiming no credit, our approach is the safest route an inventor could take to obtain credit for the invention while keeping its design undisclosed. Our methodology relies on analysis of output from the system as opposed to the system itself. Specifically, we proposed a method based on a combination of zero knowledge proofs and provably AI-complete CAPTCHA problems to show that a superintelligent system has been constructed without having to reveal the system itself. The only way to break a SuperCAPTCHA is to construct a system capable of solving AI-complete problems, that is, an artificial general intelligence.