Mathematical Problems in Engineering
Volume 2015, Article ID 236806, 8 pages
http://dx.doi.org/10.1155/2015/236806
Research Article

Simulation of Turing Machine with uEAC-Computable Functions

1School of Automation, Beijing Institute of Technology, Beijing 100081, China
2Department of Electrical and Computer Engineering, Indiana University-Purdue University Indianapolis, Indianapolis, IN 46202, USA

Received 10 June 2015; Accepted 3 November 2015

Academic Editor: Jean J. Loiseau

Copyright © 2015 Yilin Zhu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

The micro-Extended Analog Computer (uEAC) is an electronic implementation inspired by Rubel’s EAC model. In this study, a fully connected uEACs array is proposed to overcome the limitations of a single uEAC, within which each uEAC unit is connected to all the other units by some weights. Then its computational capabilities are investigated by proving that a Turing machine can be simulated with uEAC-computable functions, even in the presence of bounded noise.

1. Introduction

Analog computers almost disappeared after the rise of digital computers in the second half of the last century. Yet the first “computer” in the world, the Antikythera mechanism [1], which was used to predict astronomical positions and eclipses, was an analog computer. Recently, analog computing has been regaining interest, partly driven by the development of various unconventional computational techniques, such as quantum computation, DNA computation, and cellular automata.

The first significant paradigm of analog computing is the General Purpose Analog Computer (GPAC) [2], introduced by Shannon as a mathematical model of the Differential Analyzer [3]. Shannon proved that the GPAC was able to generate differentially algebraic functions, such as polynomials, the exponential function, the trigonometric functions, and sums, products, and compositions of these. More generally, he claimed that a function could be generated by a GPAC if it satisfied some algebraic differential equation. Rubel showed that the Dirichlet problem on the disc cannot be solved by a GPAC, and he defined the Extended Analog Computer (EAC) [4], which was able to directly compute partial differential equations, solve for inverses of functions, and implement spatial continuity. Mycka pointed out that the set of GPAC-computable functions is a proper subset of the EAC-computable functions [5]. Graça et al. proved that a Turing machine can be robustly simulated by flows defined by polynomial ordinary differential equations (ODEs) [6] and pointed out that the solution of the initial value problem defined by such ODEs is computable by a GPAC; hence, it follows that GPACs can simulate Turing machines. Piekarz compared the computational capabilities of the EAC with the partial recursive functions and proved that the EAC can generate any partial recursive function [7].

In his paper proposing the EAC model, Rubel stressed that the EAC was a conceptual computer and that whether it could be realized by actual physical, chemical, or biological devices was not known. However, research into continuous-valued Lukasiewicz logic as a computational paradigm led to an electronic implementation of the EAC [8]. Mills and colleagues designed and built an electronic implementation inspired by Rubel’s EAC model, called the micro-Extended Analog Computer (uEAC) [9, 10], after a decade’s research [11–13]. Moreover, Mills introduced the -digraph [10], a diagrammatic tool, to demonstrate the relationships among nature, Rubel’s EAC model, and the uEAC; in particular, he related the EAC model to the uEAC by dividing the “black boxes” of the EAC model into explicit functions and implicit functions. The current version of the uEAC was designed in 2005 at Indiana University [10, 14] and has been applied to letter recognition [15, 16], the exclusive-OR (XOR) problem [10, 17], cyclotron beam control [18], biologically derived circuit pattern construction [19], and so forth. It mainly consists of a conductive sheet into which currents can be injected and read at different locations; analog-to-digital and digital-to-analog converters that interface the conductive sheet to the onboard controller; and a microprocessor that controls the input/output array and emulates Lukasiewicz logic array (LLA) functions. The topology of the conductive sheet, the material from which it is constructed, and the boundary-valued LLA functions determine the computation that the uEAC performs. In the present study, a fully connected single-input, single-output uEACs array is proposed, and its computational capabilities are investigated by showing that any Turing machine can be simulated with uEAC-computable functions, even when some noise is added to the initial configuration or during the iteration of the system.
The Turing machine [20] has been the standard paradigm for digital computation since Turing’s work in the 1930s; we prove the main result of this paper by constructing a robust simulation of a Turing machine with uEAC-computable functions.

The rest of the paper is organized as follows. Section 2 provides some basic notions about Turing machines, Rubel’s EAC model, and the uEAC. The fully connected uEACs array is presented and discussed in detail in Section 3. Section 4 states the main result of this paper: a Turing machine can be robustly simulated by uEAC-computable functions, even in the presence of bounded noise. We prove the theorem in Section 5 by constructing a robust Turing machine simulation with uEAC-computable functions. Some conclusions and suggestions are given in Section 6.

2. Preliminaries

2.1. Turing Machine

A Turing machine can be seen as a state machine; at each moment the machine is in one of a finite number of states. It has an infinite one-dimensional tape which is divided into cells and accessed by a read-write head. By infinite one-dimensional tape, we mean that the cells are arranged in a left-right orientation, and the tape has a leftmost cell and stretches infinitely far to the right. Each cell contains one symbol; the read-write head can move left and right along the tape to scan successive cells.

The action of a Turing machine is determined completely by (1) the current state of the machine, (2) the symbol in the cell being scanned by the head, and (3) a table of transition rules. At each step, the machine reads the symbol under the head, checks the transition rules, and executes two operations: writing a new symbol into the current cell and moving the head one position to the left (if there are cells to the left), one position to the right, or not at all. If the machine reaches a situation in which no transition rule can be carried out, it halts.
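The stepping rule just described can be sketched in code. The following is an illustrative sketch, not from the paper: the rule table maps (state, symbol) pairs to (new state, written symbol, head move), and the machine halts when no rule applies. The example rule table, state names, and symbols are hypothetical.

```python
def tm_run(rules, tape, state, head, max_steps=100):
    """Iterate a single-tape machine until no rule applies (halt)."""
    tape = list(tape)
    for _ in range(max_steps):
        symbol = tape[head]
        if (state, symbol) not in rules:   # no applicable rule: the machine halts
            break
        state, write, move = rules[(state, symbol)]
        tape[head] = write                 # write a new symbol into the current cell
        head = max(0, head + move)         # -1 left, 0 stay, +1 right; leftmost cell blocks
        if head == len(tape):              # extend the one-sided tape on demand
            tape.append('B')
    return ''.join(tape), state, head

# Hypothetical machine: overwrite each 1 with 0 moving right, halt on blank 'B'.
rules = {('q0', '1'): ('q0', '0', +1)}
print(tm_run(rules, '111B', 'q0', 0))  # ('000B', 'q0', 3)
```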

Definition 1. A single tape Turing machine is a 4-tuple M = (Q, Σ, δ, q0), where
(i) Q is a nonempty finite set of states;
(ii) Σ is the tape alphabet that describes the contents of the cells of the tape;
(iii) δ: Q × Σ → Q × Σ × {L, N, R} is the transfer function, with L, N, and R denoting the possible head moves;
(iv) q0 ∈ Q is the initial state.

Example 2. Consider an example of a single tape Turing machine with three states, one of which is the initial state. As discussed above, one symbol of the tape alphabet is designated as the blank. The transfer function is given by Table 1.
Given the input, the computation performed by the machine proceeds as shown, where a marker indicates the position of the read-write head.

Table 1: Transfer function of the Turing machine .

In Figure 1, the states of the Turing machine are represented by circles, with the concentric circle being the initial state. A transition is represented as an arrow labeled with a 3-tuple: the first term is the content of the cell under the read-write head, the second is the content of the cell after the transition, and the third is the movement of the read-write head.

Figure 1: The computation performed by the Turing machine .
2.2. Rubel’s EAC Model and the uEAC

In Rubel’s definition, the EAC works on a hierarchy of levels, becoming more versatile as one goes to higher levels in the hierarchy. At the lowest level, it produces and manipulates real polynomials in any finite number of real variables, while, at higher levels, it produces differentially algebraic real-analytic functions. The outputs of the machine at one level can be used as inputs at that level or higher. An important feature of the EAC is that it is “extremely well-posed,” which means that when the inputs at some level are modified by small errors, the outputs differ from the original outputs only by a small amount on each compact set.

Definition 3. The function is generated by the EAC at level , if it is a function such that
(i) , where ;
(ii) , where ;
(iii) , , where ;
(iv) , and is the solution of the equations, where are functions and ;
(v) , where ;
(vi) , for any defined on set , where the set is as specified; moreover, for any function produced at level and for any subset of produced at level , the function can be produced at level ;
(vii) , is a unique analytic continuation of from to all , where and ;
(viii) is a solution of a set of differential equations of the given form on set , subject to certain boundary requirements, where , , and are partial derivatives of ;
(ix) for and , where , , and are subsets of .

The EAC is an extension of Shannon’s GPAC. Rubel proved that every function that could be computed by a GPAC could also be computed by an EAC [4]; moreover, Euler’s gamma function and the Riemann zeta function can also be computed by an EAC [21], while the GPAC cannot solve these problems. Rubel stressed that the EAC was a conceptual computer and that whether it could be realized by actual physical, chemical, or biological devices was not known; most computer scientists also regarded the EAC as a machine that was theoretically impossible to build. However, Mills and his colleagues designed and built an electronic implementation inspired by the EAC, the micro-Extended Analog Computer (uEAC), whose current version was designed in 2005. Readers interested in the hardware of the uEAC are referred to [8, 10, 14] for more details.

Suppose that a current is injected into the conductive sheet at a given location. The distances from that location to two arbitrary points on the same radius are denoted as before, and we consider the voltage between those two points. Without loss of generality, we suppose that the injection point lies outside the segment between the two points and that the segment is divided into resistances of equal length, which yields the first relation. Passing to the limit of small segments gives the second relation, where the constant involves the electrical resistivity of the conductive sheet. We note that the resulting coefficient depends only on the geometry and the resistivity; in other words, once the current input locations and voltage output locations on the conductive material are fixed, it is a positive constant relating the input current to the output voltage. The output of the uEAC is then obtained by applying the LLA basic function to this voltage [22].
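The single-unit input-output relation described above can be sketched as follows: the conductive sheet gives a linear current-to-voltage map, and the unit's output is the image of that voltage under an LLA basic function. The coefficient value and the one-variable piecewise-linear function below are illustrative assumptions, not values from the paper.

```python
def sheet_voltage(current, k=0.5):
    # k is a positive constant once the input/output locations and the
    # resistivity of the sheet are fixed (hypothetical value here)
    return k * current

def lla_clip(v):
    # Lukasiewicz-style one-variable piecewise-linear function: clip to [0, 1]
    return min(1.0, max(0.0, v))

def ueac_output(current, k=0.5):
    # linear sheet response followed by the piecewise-linear LLA stage
    return lla_clip(sheet_voltage(current, k))

print(ueac_output(1.0))  # 0.5
print(ueac_output(4.0))  # 1.0 (saturated)
```

The linearity of the first stage is exactly why a single unit struggles with nonlinear problems, as the next paragraph notes: all the nonlinearity lives in the piecewise-linear clipping.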

The uEAC is based on the resistance property of the conductive sheet, which makes its input-output relation linear; thus a single uEAC unit is very limited when applied to nonlinear problems. In the next section, we present a fully connected uEACs array to expand the computation capabilities of uEAC.

3. The Fully Connected uEACs Array

We present a fully connected single-input, single-output uEACs array (see Figure 2) in this section, within which each uEAC unit is connected to all the other units by some weights; the array has an external input variable, each unit carries a state variable, and one variable serves as the output. In the uEACs array shown in Figure 2, the output of the array is not necessarily restricted to a single designated unit; it can be the output of any uEAC unit or a combination of the outputs of several units. Each uEAC unit weights and sums its inputs and updates its state according to the following function, where the weights are fixed real values and the activation is the LLA function.

It is important to distinguish this uEACs array model from the Analog Recurrent Neural Network (ARNN). In the ARNN, the activation function of the neurons is a saturated-linear function [23, 24], while, in the uEACs array, the activation is a piecewise linear function implemented by LLA basic functions. Another significant feature that distinguishes the uEACs array from the ARNN is that the uEAC is an actual programmable physical implementation inspired by Rubel’s EAC model; in this paper, the computational capability of such a fully connected uEACs array is studied. We can rewrite the mathematical model in vector form, where the state and input are vectors whose size is the number of uEAC units and the weights form a square real matrix of that size.
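The update rule above can be sketched in discrete time, assuming a saturating piecewise-linear stand-in for the LLA activation and hypothetical weights: each unit sums the weighted outputs of all units plus a weighted external input, then applies the activation.

```python
def f(v):
    # piecewise-linear LLA-style activation (illustrative), saturating on [0, 1]
    return min(1.0, max(0.0, v))

def step(x, W, u_in, b):
    # x: current unit states, W: n-by-n weight matrix (fully connected),
    # u_in: external input, b: per-unit weights on the external input
    n = len(x)
    return [f(sum(W[i][j] * x[j] for j in range(n)) + b[i] * u_in)
            for i in range(n)]

W = [[0.2, 0.5], [0.4, 0.1]]   # every pair of units is weighted
b = [1.0, 0.0]                 # only unit 0 sees the external input
x = [0.0, 0.0]
for _ in range(3):
    x = step(x, W, 0.3, b)
print(x)
```

Setting an entry of W to zero removes the corresponding connection, which is how the "fully connected" model subsumes sparser array topologies, as discussed below.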

Figure 2: Structure of the fully connected uEACs array.

The state of this dynamic system at each instant is a real vector, whose th coordinate represents the value of the th uEAC unit at that time. In particular, in the physical structure of the uEAC, the term “real” corresponds to values of resistances, capacitances, and electrical fields, which may not be directly measurable but which affect the dynamic behavior dramatically. For instance, everything on Earth obeys the exact value of the gravitational acceleration even though we cannot measure it exactly. Likewise, when modeling this uEACs array, some real values are involved; over a finite time interval one may replace these real values by rational values and observe the same qualitative behavior, while the long-term characteristics depend on the true values.

The array is said to be fully connected because there is a weight between every two uEAC units. The weights can be seen either as unknown parameters to be estimated or as constants after being optimized. This prompts two different views of the uEACs array. When the weights are considered unknown parameters, the uEACs array is an adaptive topology that approximates some input-output mapping by means of parameter optimization. When the weights are considered constants, the uEACs array performs exact computation. We should note that a weight may equal zero, which means that there is no connection between the corresponding pair of units. Thus this fully connected array can be seen as a general model of a variety of uEACs arrays, including those in which only a subset of the uEAC units is used. Moreover, all the units in this fully connected structure are in the same layer and compute in parallel, and the number of uEAC units in the network is countable. We assume that the structure of the array, including the interconnection relationships of the uEAC units and the values of the interconnection weights, remains constant. What changes in time are the state values, that is, the outputs of every uEAC unit, which are used in the next iteration.

4. Main Results

Before stating the main results of this paper, we introduce notation for iteration. For a map f from a set to itself, f^[n] denotes its nth iteration; that is, f^[0] is the identity and f^[n+1] = f ∘ f^[n].
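The iteration notation can be written out directly as repeated composition:

```python
def iterate(f, n, x):
    # computes the nth iterate f^[n](x): apply f to x exactly n times
    for _ in range(n):
        x = f(x)
    return x

double = lambda v: 2 * v
print(iterate(double, 3, 1))  # 8, since double(double(double(1))) = 8
```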

Definition 4. A function is uEAC-computable if, for an arbitrary element in its domain and any given output precision, there is a uEACs array that is able to compute a rational approximation of the function’s value at that element to within the given precision.

We should note that the exponential function, the trigonometric functions, and their compositions are uEAC-computable as basic analytic functions. We may now present the main result of this paper, which states the computational capabilities of the uEACs array.

Theorem 5. For a Turing machine , there is a uEACs array that can robustly simulate it.

Actually, we will prove the following theorem.

Theorem 6. Let δ be the transfer function of a Turing machine M; then there is a uEACs array that is able to simulate M robustly in the following sense: for bounded noise added to the initial configuration of M, there is a uEAC-computable function whose iterates, applied to any perturbed input within the noise bound, remain within that bound of the corresponding iterates of δ applied to the initial configuration of M.

If the argument is a halting configuration of M, the transfer function maps it to itself. We will prove this theorem by showing that the simulating function can be obtained by composing uEAC-computable functions, such as the exponential function and trigonometric functions.

5. Robust Simulation of Turing Machine with uEAC-Computable Functions

For a Turing machine with a finite alphabet of symbols and a finite set of states, the tape contents can be represented as a sequence of symbols in which all but finitely many cells hold the blank symbol. Encoding functions are defined to map the tape contents and the current state to numbers; together, the encoded tape contents and the current state form the current configuration of the machine. We use a periodic function to read the symbols written on the tape; by trigonometric interpolation we may take a finite trigonometric sum whose coefficients can be obtained by solving a system of linear equations. From its form we see that this reading function is uEAC-computable as a composition of trigonometric functions. Note that it is continuous and takes the prescribed symbol values at the integers.
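One way to build such a periodic reading function is trigonometric interpolation via the discrete Fourier transform: a function with period m that takes prescribed values at the integers 0, …, m−1. The sketch below solves for the coefficients in closed form rather than via a linear system; the symbol encoding (0, 1, 2) is a hypothetical example.

```python
import cmath

def trig_interpolant(values):
    m = len(values)
    # inverse-DFT coefficients: omega(j) = values[j] for j = 0..m-1
    coeffs = [sum(values[j] * cmath.exp(-2j * cmath.pi * k * j / m)
                  for j in range(m)) / m
              for k in range(m)]
    def omega(x):
        # finite trigonometric sum with period m, evaluated at real x
        return sum(c * cmath.exp(2j * cmath.pi * k * x / m)
                   for k, c in enumerate(coeffs)).real
    return omega

omega = trig_interpolant([0, 1, 2])   # three symbols encoded as 0, 1, 2
print(round(omega(1), 6))  # 1.0
print(round(omega(4), 6))  # 1.0 (period 3: omega(4) = omega(1))
```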

Actually, when simulating the machine, we deal with a sequence of approximations of the configuration, and thus we need to keep the error under control during the iterations. An error control function can be defined to keep the error under control when reading the symbols and states of the Turing machine; it is a uniform contraction in a neighborhood of the integers.

Proposition 7 (see [6]). There is some contracting factor λ ∈ (0, 1) such that, for every integer n and every sufficiently small ε, the error control function maps n + ε to a point whose distance from n is at most λ|ε|.
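A concrete error-contracting map of this kind used in [6] is σ(x) = x − 0.2 sin(2πx); the sketch below illustrates the contraction toward the nearest integer (the starting error of 0.4 is an arbitrary choice for illustration).

```python
import math

def sigma(x):
    # contracts points in a neighborhood of an integer toward that integer
    return x - 0.2 * math.sin(2 * math.pi * x)

x = 3.0 + 0.4            # start at distance 0.4 from the integer 3
for _ in range(5):
    x = sigma(x)
print(abs(x - 3.0))      # the error shrinks under repeated application
```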

Two further uEAC-computable error control functions are defined as follows.

Lemma 8 (see [6]). For any arguments satisfying the stated bounds, there is a function, given by the indicated formula, such that the required approximation holds.

Lemma 9 (see [6]). For any arguments satisfying the stated bounds, there is a function, given by the indicated formula, such that the required approximation holds, where

Without loss of generality, suppose the symbol being read by the machine is given. Applying the reading function above, we obtain an approximation of the symbol currently being read, with error bounded as stated, and this approximation is uEAC-computable. With the approximation of the current symbol, we can determine the next state by polynomial interpolation. Recall that the machine has finitely many states and symbols; given the symbol currently being read and the current state, the next state can be represented by an interpolating polynomial evaluated at approximations of the symbol and the state. This map returns the next state of the machine and is also uEAC-computable. Using a similar construction, the symbol to be written on the tape and the direction of the move of the head can also be approximated with the same precision.
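The next-state map by polynomial interpolation can be sketched with nested one-dimensional Lagrange interpolation: build a polynomial that agrees with the transition table at the integer (symbol, state) pairs. The two-symbol, two-state table below is a hypothetical example, not the machine of Example 2.

```python
def lagrange_1d(points, x):
    # points: list of (node, value); evaluates the interpolating polynomial at x
    total = 0.0
    for i, (xi, yi) in enumerate(points):
        term = float(yi)
        for j, (xj, _) in enumerate(points):
            if i != j:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# hypothetical transition table: next_state for symbols {0, 1} and states {0, 1}
table = {(0, 0): 1, (0, 1): 0, (1, 0): 0, (1, 1): 1}

def next_state(sym, st):
    # interpolate along the symbol axis for each fixed state, then along states
    per_state = [(s, lagrange_1d([(0, table[(0, s)]), (1, table[(1, s)])], sym))
                 for s in (0, 1)]
    return lagrange_1d(per_state, st)

print(round(next_state(1, 1)))  # 1, matching the table entry for (1, 1)
```

Because the interpolant is a polynomial, feeding it slightly perturbed (approximate) symbol and state values yields a value close to the exact next state, which is what the error analysis in the text exploits.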

Let one value of the direction variable denote a move of the head to the left, another denote no move, and a third denote a move to the right. In the absence of error, the next value of the tape encoding is a function of the current encodings and the direction of the move. Considering the error, we introduce an additional approximation, to be determined later, and define three functions that approximate the tape contents after the head moves left, stays, and moves right, respectively; the next tape encoding can then be approximated by combining them. If we took the raw approximation directly, its error would be amplified when multiplied by the encoding terms. To obtain a sufficiently good approximation, we have to guarantee that the error is bounded; this can be achieved with the following definition:

By this definition, the error of the combined approximation is bounded, and we can obtain a sufficiently good approximation of the next tape encoding. We should note that these maps are defined as compositions of uEAC-computable functions, so they are also uEAC-computable. Similarly, we can obtain a sufficiently good approximation of the other tape encoding. Putting together the maps defined above, we can define a uEAC-computable function that maps an approximation of the current configuration to an approximation of the next configuration, such that

Let the perturbed configuration satisfy the stated bound. We can define a map on the 3-dimensional configuration space such that, if the argument is an initial configuration of the machine and the perturbation is within the bound, the stated estimate holds. By the triangle inequality, if the bound holds, then, for a uEAC-computable function satisfying the estimate, we have

From the discussion above, we conclude that one can construct a robust Turing machine simulation with uEAC-computable functions.

6. Conclusion

The uEAC is a novel electronic implementation inspired by Rubel’s EAC model. Based on the mathematical model of the uEAC, we propose a fully connected uEACs array and investigate its computational capabilities. By proving that any Turing machine can be simulated with uEAC-computable functions, we conclude that the proposed uEACs array is at least as powerful as a Turing machine.

This work can be extended in several directions. The structure of the uEACs array, including the interconnection relationships of the uEAC units and the values of the interconnection weights, need not remain constant during the iteration. Ainslie et al. used a genetic algorithm (GA) to evolve a uEAC to solve the XOR problem [25], but their research is restricted to a single uEAC unit, and it is hard to say that GA is the most suitable evolutionary algorithm for the uEAC. The proposed uEACs array shows clear advantages, but its optimization is much more difficult, since we have to optimize the topology of the array and the structure of each single uEAC unit simultaneously. From the mathematical point of view, heuristic algorithms such as particle swarm optimization (PSO), ant colony algorithms, and simulated annealing (SA) can all be used to optimize the uEACs array, but we must consider their computational complexity and efficiency. Zhu et al. proposed a comprehensive uEACs array optimization strategy based on PSO [22], and the simulation results are promising.

Moreover, in the definition of the fully connected uEACs array, the number of uEAC units is left unspecified, and it remains an open question how many units are needed to guarantee the array's computational capabilities. A comprehensive analysis to determine the minimum number of units that guarantees the robust simulation of a Turing machine is of great significance and is an emphasis of future research. The conclusion of this paper can be restated as “a robust simulation of the transfer function of a Turing machine can be constructed with uEAC-computable functions.” Actually, when studying the computational capability of the uEACs array, two questions arise: (1) can a Turing machine compute any uEAC-computable function? (2) Is any function generated by a Turing machine also uEAC-computable? If the answers to both questions are yes, we obtain the stronger conclusion that the uEACs array and the Turing machine are equivalent. This paper focuses on the second question; the first can be seen as another valuable direction for future research.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work is supported by the National Natural Science Foundation of China (nos. 61433003 and 61273150), the Beijing Higher Education Young Elite Teacher Project, and the National Basic Research Program of China (973 Program, 2012CB821206).

References

1. T. Freeth, Y. Bitsakis, X. Moussas et al., “Decoding the ancient Greek astronomical calculator known as the Antikythera mechanism,” Nature, vol. 444, no. 7119, pp. 587–591, 2006.
2. C. E. Shannon, “Mathematical theory of the differential analyzer,” Journal of Mathematics and Physics, vol. 20, pp. 337–354, 1941.
3. V. Bush, “The differential analyzer. A new machine for solving differential equations,” Journal of the Franklin Institute, vol. 212, no. 4, pp. 447–488, 1931.
4. L. A. Rubel, “The extended analog computer,” Advances in Applied Mathematics, vol. 14, no. 1, pp. 39–50, 1993.
5. J. Mycka, “Analog computation beyond the Turing limit,” Applied Mathematics and Computation, vol. 178, no. 1, pp. 103–117, 2006.
6. D. S. Graça, M. L. Campagnolo, and J. Buescu, “Computability with polynomial differential equations,” Advances in Applied Mathematics, vol. 40, no. 3, pp. 330–349, 2008.
7. M. Piekarz, “The extended analog computer and functions computable in a digital sense,” Acta Cybernetica, vol. 19, no. 4, pp. 749–764, 2010.
8. J. W. Mills, B. Himebaugh, A. Allred et al., “Extended analog computers: a unifying paradigm for VLSI, plastic and colloidal computing systems,” in Proceedings of the Workshop on Unique Chips and Systems (UCAS-1), held in conjunction with the IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS '05), Austin, Tex, USA, 2005.
9. J. W. Mills, M. Parker, B. Himebaugh, C. Shue, B. Kopecky, and C. Weilemann, “‘Empty space’ computes: the evolution of an unconventional supercomputer,” in Proceedings of the 3rd Conference on Computing Frontiers (CF '06), pp. 115–126, Como, Italy, May 2006.
10. J. W. Mills, “The nature of the extended analog computer,” Physica D, vol. 237, no. 9, pp. 1235–1256, 2008.
11. J. W. Mills, M. G. Beavers, and C. A. Daffinger, “Lukasiewicz logic arrays,” in Proceedings of the 20th International Symposium on Multiple-Valued Logic, pp. 4–10, Charlotte, NC, USA, May 1990.
12. J. W. Mills and C. A. Daffinger, “CMOS VLSI Lukasiewicz logic arrays,” in Proceedings of the International Conference on Application Specific Array Processors, pp. 469–480, Princeton, NJ, USA, September 1990.
13. J. W. Mills, T. Walker, and B. Himebaugh, “Lukasiewicz’ insect: continuous-valued robotic control after ten years,” Journal of Multiple-Valued Logic and Soft Computing, vol. 9, no. 2, pp. 131–146, 2003.
14. B. Himebaugh, “Design of EAC,” 2005, http://www.cs.indiana.edu/~bhimebau/.
15. J. W. Mills, “The continuous retina: image processing with a single-sensor artificial neural field network,” in Proceedings of the IEEE International Conference on Neural Networks, vol. 2, pp. 886–891, IEEE, Washington, DC, USA, June 1996.
16. M. Parker, C. Zhang, J. W. Mills, and B. Himebaugh, “Evolving letter recognition with an extended analog computer,” in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '06), pp. 609–614, IEEE, July 2006.
17. F. Pan, R. Zhang, T. Long, and Z. Li, “The research on the application of uEAC in XOR problems,” in Proceedings of the International Conference on Transportation, Mechanical, and Electrical Engineering (TMEE '11), pp. 109–112, IEEE, Changchun, China, December 2011.
18. S. Tsuda, J. Jones, A. Adamatzky, and J. Mills, “Routing physarum with electrical flow/current,” International Journal of Nanotechnology and Molecular Computation, vol. 3, no. 2, pp. 56–70, 2011.
19. J. W. Mills, “Programmable VLSI extended analog computer for cyclotron beam control,” Tech. Rep., Indiana University, 1995.
20. A. M. Turing, “On computable numbers, with an application to the Entscheidungsproblem,” Proceedings of the London Mathematical Society, vol. s2-42, no. 1, pp. 230–265, 1937.
21. L. A. Rubel, “Some mathematical limitations of the general-purpose analog computer,” Advances in Applied Mathematics, vol. 9, no. 1, pp. 22–34, 1988.
22. Y. Zhu, F. Pan, W. Li, Q. Gao, and X. Ren, “Optimization of multi-micro extended analog computer array and its applications to data mining,” International Journal of Unconventional Computing, vol. 10, no. 5-6, pp. 455–471, 2014.
23. H. T. Siegelmann and E. D. Sontag, “Analog computation via neural networks,” Theoretical Computer Science, vol. 131, no. 2, pp. 331–360, 1994.
24. H. T. Siegelmann and E. D. Sontag, “On the computational power of neural nets,” Journal of Computer and System Sciences, vol. 50, no. 1, pp. 132–150, 1995.
25. N. Ainslie, R. Baula, N. Deckard et al., “Toward the evolution of analog computers for control of data networks,” Tech. Rep., Indiana University, Indianapolis, Ind, USA, 2002.