Abstract

Optical computing is a fascinating 60-year-old field of research. This paper gives a brief historical review of the life of optical computing from the early days until today. Optical computing generated a lot of enthusiasm in the sixties, with major breakthroughs opening a large number of perspectives. The period between 1980 and 2000 could be called the golden age, with numerous new technologies and innovative optical processors designed and constructed for real applications. Today the field of optical computing is not ready to die; it has evolved, and its results benefit new research topics such as nanooptics, biophotonics, and communication systems.

1. Introduction

Some knowledge of the history of science is useful for understanding the evolution of a research domain, its successes and failures, and optical computing is an interesting candidate for a historical review. This research field is also named optical information processing, and the terms information optics or information photonics are now frequently used, reflecting the evolution of the domain. Optical computing is approximately 60 years old, and it is a well-defined domain with its own specialized conferences, sections in the scientific journals, and its own research programs and funding. It was also very active worldwide, and therefore it is impossible within the scope of a single paper to describe all the research results. Numerous books have been written on the subject; for example, the following books describe the state of the art of optical computing at the time of their publication: in 1972 [1], in 1981–82 [2, 3], in 1989 [4], and in 1998–99 [5, 6].

Since optical computing has been such a well-defined field over such a long period of time, it is interesting to study its evolution, and this study can help to understand why some research domains were very successful during only a limited period of time while others have generated numerous applications that are still in use. From the beginning there was a lot of questioning about the potential of optics for computing, whereas there was no doubt about the potential and the future of electronics. In 1998, Caulfield wrote an interesting and enlightening paper on the perspectives in optical computing [7] where he discusses this competition between optics and electronics and shows that there were three phases: first "ignorance and underestimation" of electronics, then "awakening and fear of inferiority", and now "realistic acceptance that optical computing and electronics are eternal partners".

The purpose of this paper is to present a short history of optical computing from its origin until today. This historical overview will show that the first years generated a lot of enthusiasm regarding the potential of optics for information processing; this period was followed by a small slowdown before the golden age that started around 1980 and lasted until the beginning of the new century.

Section 2 presents the basic principles of optical information processing, Section 3 gives a historical review of the research from the first years until 1980, Section 4 describes the research activity from 1980 to 2004, and Section 5 shows the evolution of the domain until today.

2. Fundamentals of Optical Information Processing

Optical information processing is based on the idea of using the speed and parallelism of light in order to process information at a high data rate. The information is in the form of an optical signal or image. The inherent parallelism of optical processing was often highlighted as one of its key advantages over electronic processing with computers, which are mostly serial. Therefore, optics has an important potential for processing large amounts of data in real time.

The Fourier transform property of a lens is the basis of optical computing. When using coherent light, a lens performs in its back focal plane the Fourier transform of a 2D transparency located in its front focal plane. The exact Fourier transform, with both amplitude and phase, is computed in an analog way by the lens. All the demonstrations can be found in the book published in 1968 by Goodman [8], which is still a reference in the field. The well-known generic architecture of optical processors and the architectures of the optical correlators will be presented successively.
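Following the standard treatment in [8], the field in the back focal plane of a lens of focal length $f$, illuminated with coherent light of wavelength $\lambda$ and with a transparency $t(x, y)$ in its front focal plane, is (up to a constant factor) the scaled 2D Fourier transform of the input:

$$U(u, v) = \frac{1}{i\lambda f} \iint t(x, y)\, \exp\!\left[-\frac{i 2\pi}{\lambda f}\,(xu + yv)\right] dx\, dy,$$

so that the spatial frequencies of the input appear at the coordinates $(u, v)$ of the focal plane.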

2.1. Optical Processor Architecture

The architecture of a generic optical processor for information processing is given in Figure 1.

The processor is composed of three planes: the input plane, the processing plane, and the output plane.

The data to be processed are displayed in the input plane; most of the time this plane implements an electrical-to-optical conversion, performed by a Spatial Light Modulator (SLM). The input signal can be 1D or 2D. An acousto-optic cell is often used in the case of a 1D input signal, and 2D SLMs are used for 2D signals. The different types of 2D SLMs will be described later. In the early years, due to the absence of SLMs, the input plane consisted of a fixed slide. The principles and the potential of optical processors could therefore be demonstrated, but no real-time applications were possible, making the processor most of the time useless for real-life applications.

The processing plane can be composed of lenses, holograms (optically recorded or computer generated), or nonlinear components. This is the heart of the processing, and in most optical processors this part can be performed at the speed of light.

The output plane, where the results of the processing are detected, consists of a photodetector, a photodetector array, or a camera.

Figure 1 shows clearly that the speed of the whole process is limited by the speed of its slowest component, which is most of the time the input-plane SLM, since the majority of them operate at video rate. The SLM is a key component for the development of practical optical processors, but unfortunately also one of their weakest. Indeed, the poor performance and high cost of SLMs have delayed the fabrication of optical processors for real-time applications.

2.2. Classical Architectures of Optical Processors

At the beginning, real-time pattern recognition was seen as one of the most promising applications of optical processors, and therefore the following two architectures of optical correlators were proposed.

Figure 2(a) shows the basic correlator, called 4-f since the distance between the input plane and the output plane is four times the focal length of the lenses. This very simple architecture is based on the work of Maréchal and Croce [9] in 1953 on spatial filtering and was developed during the following years by several authors [10, 11].

The input scene is displayed in the input plane, whose Fourier transform is performed by Lens 1. The complex conjugate of the Fourier transform of the reference is placed in the Fourier plane and is therefore multiplied by the Fourier transform of the input scene. Lens 2 performs a second Fourier transform that gives, in the output plane, the correlation between the input scene and the reference. Implementing a complex filter with the Fourier transform of the reference was the main challenge of this setup, and in 1964 Vander Lugt proposed to use a Fourier hologram of the reference as a filter [12]. Figures 2(b) and 2(c) show, respectively, the output correlation peak for an autocorrelation when the correlation filter is a matched filter and when it is a phase-only filter [13].
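The operation performed optically by the 4-f correlator is easy to model digitally. The following minimal NumPy sketch (our own illustration, not a description of any historical system) treats each lens as a Fourier transform and the filter plane as a multiplication, using an inverse FFT for the second transform so that the output is not coordinate-inverted:

```python
import numpy as np

def correlate_4f(scene, reference, phase_only=False):
    """Digital model of the 4-f correlator: Lens 1 ~ FFT, the Fourier plane
    holds the conjugate reference spectrum (matched filter), Lens 2 ~ FFT."""
    S = np.fft.fft2(scene)               # Lens 1: spectrum of the input scene
    H = np.conj(np.fft.fft2(reference))  # matched filter in the Fourier plane
    if phase_only:
        H = H / (np.abs(H) + 1e-12)      # phase-only filter variant [13]
    return np.fft.ifft2(S * H)           # Lens 2: correlation in the output plane

# Autocorrelation test: the correlation peak appears at the origin.
rng = np.random.default_rng(0)
img = rng.random((256, 256))
out = np.abs(correlate_4f(img, img))
print(np.unravel_index(out.argmax(), out.shape))  # prints (0, 0)
```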

In 1966, Weaver and Goodman [14] presented another optical correlator architecture, the joint transform correlator (JTC), represented in Figure 3(a). The two images, the reference and the scene, are placed side by side in the input plane, which is Fourier transformed by the first lens. The intensity of the joint spectrum is detected and then its Fourier transform is performed. This second Fourier transform is composed of several terms, including the cross-correlations between the scene and the reference. Using an SLM, this second Fourier transform can be implemented optically, as shown in Figure 3(a). Figure 3(b) shows the output plane of the JTC when the reference and the scene are identical [13]. Only the two cross-correlation peaks are of interest. To have a purely optical processor, the CCD camera can be replaced by an optical component such as an optically addressed SLM or a photorefractive crystal. One of the advantages of the JTC is that no correlation filter has to be computed; the JTC is therefore the ideal architecture for real-time applications such as target tracking, where the reference has to be updated at a high data rate.
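A digital sketch of the JTC principle, under the same illustrative assumptions as above, makes the role of the detected joint-spectrum intensity explicit:

```python
import numpy as np

def jtc(scene, reference):
    """Digital model of the JTC [14]: reference and scene side by side in the
    input plane, first Fourier transform, square-law detection of the joint
    spectrum, then a second Fourier transform of the detected intensity."""
    joint_input = np.hstack([reference, scene])           # side-by-side input plane
    joint_spectrum = np.abs(np.fft.fft2(joint_input))**2  # intensity on the camera
    # The transform of the intensity contains on-axis autocorrelation terms plus
    # the two cross-correlation peaks, offset by the reference-to-scene distance.
    return np.abs(np.fft.fft2(joint_spectrum))
```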

Figures 2 and 3 represent coherent optical processors. Incoherent optical processors were also proposed: the information is not carried by complex wave amplitudes but by wave intensities. Incoherent processors are not sensitive to phase variations in the input plane and exhibit no coherent noise. However, the nonnegative real value of the information imposes the use of various tricks for the implementation of some signal processing applications [15, 16].

Linear optical processing can be decomposed into space-invariant operations, such as correlation and convolution, and space-variant operations, such as coordinate transforms [17] and the Hough transform [18]. Nonlinear processing, such as logarithmic transformation, thresholding, or analog-to-digital conversion, can also be implemented optically [19].

3. A New Start for Optics: The Rise of Optical Computing (1945–1980)

Information optics has been a recognized branch of optics since the fifties. Historically, however, optical information processing can be traced back to the knife-edge test by Foucault in 1859 [20]. Other contributors can be noted, such as Abbe, who developed the theory of image formation in the microscope in 1873 [21], or Zernike, who presented the phase contrast filter in 1934 [22]. In 1946, Duffieux made a major contribution with the publication of a book on the use of Fourier methods in optics [23]. This book was written in French and was translated into English by Arsenault [24]. The work by Maréchal is another major contribution; in 1953, he demonstrated spatial frequency filtering under coherent illumination [9].

Optical computing is based on a new way of analyzing optical problems; indeed, the concepts of communications and information theory constitute the basis of optical information processing. In 1952, Elias proposed to analyze optical systems with the tools of communication theory [25, 26]. In a historical paper [27], Lohmann, the inventor of computer-generated holography, wrote: "In my view, Gabor’s papers were examples of physical optics, and the tools he used in his unsuccessful attempt to kill the twin image were physical tools, such as beam splitters. By contrast, Emmett (Leith) and I considered holography to be an enterprise in optical information processing. In our work, we considered images as information, and we applied notions about carriers from communications and information theory to separate the twin image from the desired one. In other words, our approach represented a paradigm shift from physical optics to optical information processing."

Holography can be included in the domain of information optics, since the two fields are closely related and progressed together. Like holography, invented by Gabor in 1948 [28], optical image processing developed slowly until the invention of a coherent source of light, the laser, in 1960 [29, 30].

The history of the early years of optical computing was published by major actors of the field, for example, in 1974 by Vander Lugt [31] and in 2000 by Leith [32]. Vander Lugt presents in his paper a complete state of the art for coherent optical processing, with 151 references. Several books also give a good overview of the state of the art at the time of their publication [13, 33].

It is possible to distinguish two periods of time. First, until the early seventies, there was a lot of enthusiasm for this new field of optics and information processing: all the basic inventions were made during this period, and the potential of optics looked very promising for real-time data processing. However, real-life optical processors were rare, due to technological problems, particularly with the SLMs. During the seventies, the research was more realistic, with several attempts to build optical processors for real applications [2, 3], and the competition with computers was also much harder due to their progress [32].

3.1. The Founding Inventions

As soon as the laser was invented in 1960, optical information processing expanded rapidly, and all the major inventions of the field were made before 1970. In the Gabor hologram, the different terms of the reconstruction were all on the same axis, and this was a major drawback for display. In 1962, Leith and Upatnieks introduced the off-axis hologram that allowed the separation of the different terms of the reconstruction, giving remarkable 3D reconstructions [34, 35]. This was the start of the adventure of practical holography.

In 1963, Vander Lugt proposed and demonstrated a technique for synthesizing the complex filter of a coherent optical processor using a Fourier hologram technique [12]. This invention gave the 4-f correlator, with the matched filter or other types of filters, all its power and generated all the research in the domain. In 1966, Weaver and Goodman presented the Joint Transform Correlator (JTC) architecture [14], which would be widely used as an alternative to the 4-f correlator for pattern recognition.

In 1966, Lohmann revolutionized holography by introducing the first computer-generated hologram (CGH) [36, 37], using a cell-oriented encoding adapted to the limited power of the computers available at the time. This was the start of a new field, and different cell-oriented encodings were developed in the seventies. In 1969, a pure-phase encoded CGH, the kinoform, was also proposed [38], opening the way to the modern diffractive optical elements with high diffraction efficiency. A review of the state of the art of CGHs in 1978 was written by Lee [39]. Until 1980, the CGH encoding methods were limited by the power of the computers.

Character recognition by incoherent spatial filtering was introduced in 1965 by Armitage and Lohmann [40] and Lohmann [41]. A review of incoherent optical processing was made by Rhodes and Sawchuk in 1982 [15].

3.2. The Early Spatial Light Modulators

As already emphasized, a practical real-time optical processor can be constructed only if the SLM can control the amplitude and phase modulation of the input plane or of the filter plane (in the case of a 4-f correlator). A very large number of SLMs have been investigated over the years, using almost all possible physical properties of matter to modulate light, but only a few of them have survived and are now industrial products. A review of the SLMs available in 1974 can be found in Chapter 5 of the book written by Preston [1].

From the beginning, SLMs were primarily developed for large-screen display. Some of the early SLMs were based on the Pockels effect, and two devices were very promising. The Phototitus optical converter with a DKDP crystal was used for large-screen color display and also for optical pattern recognition; however, the crystal had to work at the Curie point (about −50°C) [42, 43]. The Pockels Readout Optical Modulator (PROM) was used for optical processing [44, 45], but it was limited, as it required blue light for writing and red light for readout. All these devices have disappeared, killed by their limitations.

The liquid crystal technology has survived and is today the most widely used technology for SLMs. The first liquid crystal SLMs were developed in the late sixties, for example, an electrically addressed matrix of liquid crystal elements [46]. The matrix size then increased, and in 1975 a matrix-addressed SLM was constructed by LETI in France [47]. This device used a nematic liquid crystal and had a pixel pitch of 300 μm and a cell thickness of 8 μm, and the addressing signals were 120 V root mean square (RMS) and 10 V RMS (Figure 4). In 1978, Hughes Corp. developed a very successful optically addressed liquid crystal SLM, or light valve, that was used in optical processors for more than 10 years [48–50]. However, this SLM was very expensive, it had a resolution of 40 line pairs per millimeter, its speed was limited to the video rate, and since it was optically addressed it had to be coupled to a cathode ray tube (CRT), making it particularly bulky if the input signal was electrical (Figure 5).

Other types of SLMs were developed in the seventies, such as deformable membrane SLMs, which were presented by Knight [51] in a critical review of the SLMs existing in 1980. He pointed out that, due to their limitations in 1980, they had only a limited use in coherent optical processing applications. However, this SLM technology is still used today for adaptive optics.

In conclusion, at the end of the seventies, despite all the research effort, no SLM really suited for real-time optical information processing was available.

3.3. A Successful Application: Radar Signal Optical Processing

The first coherent optical processor was dedicated to the processing of synthetic aperture radar (SAR) data. This first major research effort in coherent optical processing was initiated in 1953 at the University of Michigan under contracts with the US Army and the US Air Force [10, 11]. The processor evolved successfully, with different improved versions, until the beginning of the seventies. The book by Preston [1] as well as the reference paper by Leith [32] give the reader an overview of the subject. Since the first processor was designed before the invention of the laser, it used light generated by a mercury lamp. The system was not operating in real time: the input consisted of a film transport that carried a photographic recording of incoming radar signals, the processing was carried out by a conical lens, and the output was a film. At the beginning of the seventies, digital computers became able to compete successfully with the coherent optical computers for SAR applications; they finally won, and this was sadly the end of radar signal optical processing.

3.4. The Future of Optical Information Processing as Seen in 1962

In order to understand the evolution of optical computing, it is enlightening to look at the topics of discussion in the early sixties. For example, in October 1962, a “Symposium on Optical Processing of Information” was held in Washington DC, cosponsored by the Information System Branch of the Office of Naval Research and the American Optical Company. About 425 scientists attended this meeting, and proceedings were published [52]. The preface of the proceedings shows that the purpose of this symposium was to bring together researchers from the fields of optics and information processing. The authors of the preface recognize that optics can be used for special-purpose optical processors in the fields of pattern recognition, character recognition, and information retrieval, since optical systems offer in these cases the ability to process many items in parallel. The authors continue with the question of a general-purpose computer. They write: “Until recently, however, serious consideration has not been given to the possibility of developing a general-purpose optical computer. With the discovery and application of new optical effects and phenomena such as laser research and fiber optics, it became apparent that optics might contribute significantly to the development of a new class of high-speed general-purpose digital computers”. It is interesting to list the topics of the symposium: optical effects (spatial filtering, laser, fiber optics, modulation and control, detection, electroluminescent, and photoconductive) and data processing (needs, biological systems, bionic systems, photographic, logical systems, optical storage systems, and pattern recognition). It can be noted that one of the speakers, Teager [53] from MIT, pointed out that for him the development of an optical general-purpose computer was highly premature, because the optical technology was not ready to compete with the electronic computers. For him, optical computers would have a different form than electronic computers: they would be more parallel. It is interesting to see that this debate is now almost closed; today, 47 years after this meeting, it is widely accepted that a general-purpose purely optical processor will not exist and that the solution is to associate electronics and optics and to use optics only if it can bring something that electronics cannot do.

3.5. Optical Memory and the Memory of the Electronic Computers

It is useful to place the research on optical memories in the context of the memories available in the sixties. At that time the central memory of computers was core memory, and compact memory cells were available using this technology. However, the possible evolution of this technology was limited, and the memory capacity was low. For example, the Apollo Guidance Computer (AGC), introduced in 1966 and used in the lunar module that landed on the moon, had a memory of 2048 words of RAM (magnetic core memory) and 36864 words of ROM (core rope memory), with 16-bit words. In October 1970, Intel launched the first DRAM (Dynamic Random Access Memory), the Intel 1103 circuit. This chip had a capacity of 1 Kbit, used P-channel silicon-gate MOS technology, and had a maximum access time of 300 ns and a minimum write time of 580 ns. This chip killed the core memory.

Compared to the memories that were available, optical memories had two attractive features: a potentially high density and the possibility of parallel access. Already in 1963, van Heerden developed the theory of optical storage in 3D materials [54]. In 1968, Bell Labs constructed the first holographic memory [55], with a holographic matrix of 32 by 32 pages of 64 by 64 bits each. In 1974, d'Auria et al. from Thomson-CSF [56] constructed the first complete 3D optical memory system, storing the information in a photorefractive crystal with angular multiplexing and achieving the storage of 10 pages of 10^4 bits. Holographic memories using films were also developed in the US. Synthetic holography was applied to the recording and storage of digital data in the Human Read/Machine Read (HRMR) system developed by Harris Corporation in 1973 for Rome Air Development Center [57].

3.6. Optical Fourier Transform Processors, Optical Pattern Recognition

Optical pattern recognition was from the beginning a prime choice for optical processing, since it fully exploits the parallelism of optics and the Fourier transform properties. The book edited by Stark [3] in 1982 gives a complete overview of the state of the art of the applications of optical Fourier transforms. Coherent and incoherent, space-invariant and space-variant, linear and nonlinear architectures were used for different applications. Hybrid optical/digital processors emerged as a solution for practical implementation and for solving real problems of data processing and pattern recognition. For example, Casasent, who was very active in this field, wrote a detailed book chapter [58] with a complete review of hybrid processors at the beginning of the eighties. All these processors gave very promising results. Almost all the proposed processors remained laboratory prototypes and never had a chance to replace electronic processors, even if, at that time, electronics and computers were much less powerful. There are many reasons for this; the number of applications that could benefit from the speed of optical processors was perhaps not large enough, but the main reason was the absence of powerful, high-speed, high-quality, and also affordable SLMs, for the input plane of the processor as well as for the filter plane. Optical processors are also less flexible than digital computers, which allow a larger number of data manipulations very easily.

4. Optical Computing Golden Age (1980–2004)

This period of time could be called the optical computing golden age. There was a lot of enthusiasm in the field, the future looked very bright, there was funding for programs on the topic, and the research effort was very intensive worldwide. Every year, several international conferences were organized by different international societies on subjects related to optical computing. Journals frequently had special issues on these topics, and Applied Optics had, on the 10th of every month, an issue entitled “Information Processing”. The research was very fruitful in all the domains of optical information processing, including theoretical work on algorithms, analog and digital computing, and linear and nonlinear computing. Optical correlators for real applications were even commercialized. However, around 2000, one could feel that interest in the subject was starting to decline. The reasons are multiple, but the evolution of digital computers in terms of performance, power, and also flexibility can be pointed out. They are also very easy to use, even for a nonspecialist.

It is impossible to list here all the work carried out in the domain from 1980 to 2004. Several books give the state of the art of the domain at the time of their publication [4, 6, 59].

In the following, we will describe only some aspects of the research during this period, and we apologize for some important results that may be missing. The purpose is to give the reader an idea of the evolution of the domain during this quarter century.

4.1. From Computer Generated Holograms to Diffractive Optical Elements

CGHs are important components for optical processing since they can process the information. The first CGHs were mostly cell-oriented, since these methods were well adapted to the power of the computers of the time, with their small memory capacity, and to the technology of the printers. In the eighties, the technological landscape changed: more powerful computers with a larger memory capacity were available, and e-beam writers were more commonly used. Therefore new encoding methods, the point-oriented methods, were developed in order to achieve high-quality and high-diffraction-efficiency optical reconstructions of the CGHs. First, the error diffusion algorithm, used for printing applications, was adapted to encode CGHs, making it possible to separate the noise from the desired pattern in the reconstruction plane [60]. Then, iterative algorithms were proposed; the best known are the Direct Binary Search (DBS) algorithm proposed by Seldowitz et al. in 1987 [61] and the Iterative Fourier Transform Algorithm (IFTA) proposed by Wyrowski and Bryngdahl in 1988 [62]. The CGHs encoded with these algorithms produce a reconstruction with a high signal-to-noise ratio and a high diffraction efficiency, especially in the case of pure-phase CGHs. Later, some refinements were proposed, for example, the introduction of an optimal multicriteria approach [63, 64]. It should be noted that these iterative methods are still used.
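As an illustration of the iterative principle behind the IFTA [62], here is a minimal sketch for a pure-phase Fourier CGH; it is our own simplified rendering (real implementations add amplitude freedom, quantization schedules, and other refinements):

```python
import numpy as np

def ifta_phase_cgh(target_amplitude, iterations=50, seed=0):
    """Iterate between the hologram plane (pure-phase constraint) and the
    reconstruction plane (desired amplitude constraint), keeping the phase
    of the reconstruction as the free parameter."""
    rng = np.random.default_rng(seed)
    field = np.exp(2j * np.pi * rng.random(target_amplitude.shape))
    for _ in range(iterations):
        recon = np.fft.fft2(field)
        recon = target_amplitude * np.exp(1j * np.angle(recon))  # impose target
        field = np.fft.ifft2(recon)
        field = np.exp(1j * np.angle(field))   # impose unit-amplitude, pure phase
    return np.angle(field)  # phase profile to display on an SLM or etch as a DOE
```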

In the nineties, the main progress concerned the fabrication methods, with the use of lithographic techniques allowing the fabrication of high-precision phase-only components etched into quartz. The name Diffractive Optical Elements (DOEs), which includes the CGHs, is now used and reflects this evolution.

Thanks to the progress in lithography, submicron DOEs can be fabricated, such as a polarization-selective CGH [65], artificial dielectrics [66], and a spot generator [67]. The state of the art of digital nano-optics can be found in Chapter 10 of the book written by Kress and Meyrueis [68]. The fabrication of nanostructures required new studies of diffraction, based on the rigorous theory of diffraction instead of the scalar theory [69].

Several books give a complete overview of the field of DOEs and their applications [68, 70] and a very complete paper on the evolution of diffractive optics was published in 2001 by Mait [71].

4.2. The Maturity of Spatial Light Modulators

Since the availability of SLMs was an important issue for the success of optical information processing, a lot of effort was invested after 1980 in the development of SLMs fulfilling the optical processors' requirements in terms of speed, resolution, size, and modulation capability. A paper written by Fisher and Lee gives the status of the 2D SLM technology in 1987 [72] and shows that, at that time, the best feasible SLM performance values included about 10^6 resolution elements, 10-Hz framing rates, 1-s storage, less than 50 μJ/cm² sensitivity, a five-level dynamic range, and 10-percent spatial uniformity. Updated reviews of the state of the art of SLMs are given in a book edited by Efron [73] in 1995 and in several special issues of Applied Optics [74–77].

More than 50 types of SLM were introduced in the eighties and nineties [78]. Many different SLMs were proposed and many prototypes fabricated, for example, besides liquid crystal SLMs, magneto-optic SLMs [79, 80], multiple quantum well (MQW) devices [81], Si PLZT SLMs [82], and Deformable Mirror Devices [83, 84]. However, very few of these SLMs have survived. Today, among the SLMs commercially available, mostly for display purposes, two technologies prevail: liquid crystal technology and Digital Micromirror Devices (DMD, a MEMS-based technology).

There are different types of liquid crystal SLMs. Twisted nematic liquid crystal SLMs are commonly used; their theory and experimental characterization show a coupled amplitude and phase modulation [85] as well as an operating speed limited to the video rate. Ferroelectric liquid crystal SLMs can reach a speed of several kilohertz, but most of the devices on the market are binary bistable devices, which consequently limits the applications. Although it is not so commonly known, analog amplitude-only modulation is possible with specific ferroelectric SLMs [86]. Nematic liquid crystal or Parallel Aligned Liquid crystal (PAL) SLMs produce a pure phase modulation that can exceed 2π. They are particularly attractive for applications requiring a high light efficiency, such as dynamic diffractive optical elements. Their speed can reach 500 Hz [86]. The matrix electrically addressed SLMs using twisted nematic liquid crystal have progressed considerably. Around 1985, the small LC TV screens were extensively evaluated [87, 88], but their poor performance (phase nonuniformity, limited resolution…) limited their use for optical computing; then VGA, SVGA, and XGA resolution SLMs were introduced in video projectors, and these SLMs, extracted from the video projectors, were widely characterized [89] and integrated into optical processors. During the same period, high-performance optically addressed SLMs were fabricated, for example, the PAL-SLM from Hamamatsu [90]. Now high-resolution Liquid Crystal on Silicon (LCoS) SLMs are commercially available; for example, a pure-phase LCoS SLM with high pixel resolution is commercialized [91]. All these SLMs must be characterized very precisely, and numerous papers were published on the subject [92–95].

In conclusion, today, for the first time since the origin of optical processors, commercially available SLMs fulfill the requirements in terms of speed, modulation capability, and resolution. The applications of SLMs are numerous; for example, recent papers have reported different applications of LCoS SLMs, such as pulse shaping [96], quantum key distribution [97], hologram reconstruction [98], computer-generated holograms [99], DOEs [100], optical tweezers [101], and optical metrology [102].

4.3. Optical Memories

In a parallel optical computer, a parallel-access optical memory is required in order to avoid a bottleneck between the parallel processor and the memory. Therefore, the research for developing a 3D parallel-access optical memory was very active in the last two decades of the last century, and different architectures using different technologies were proposed. For example, Marchand et al. constructed in 1992 a motionless-head parallel-readout optical disk system [103], achieving a maximum data rate of 1.2 Gbyte/s. Psaltis at Caltech developed a complete program of research on 3D optical holographic memories using different materials such as photorefractive crystals. In the frame of this program, Mok et al. succeeded in storing 10,000 holograms of 440 by 480 pixels [104] in a photorefractive crystal of 3 cm³. IBM was also very active in the field of holographic memory [105], and two important DARPA programs were carried out in the nineties: project PRISM (Photorefractive Information Storage Material) and project HDSS (Holographic Data Storage Systems). All the information on these holographic memories can be found in a book [106]. Several start-up companies were created to develop holographic memories, and most of them have disappeared. However, one of them, InPhase Technologies, is now commercializing a holographic WORM disk memory system using a photopolymer material [107]. Other types of optical memories were investigated, such as a two-photon memory [108] and spectral hole burning [109].

Today, holographic memory is still seen as a candidate for the memory of the future; however, the problem of the recording material is not yet solved. In particular, there is no easy-to-use and cheap rewritable material: photopolymers can only be written once, and, despite all the research effort, photorefractive crystals are still very difficult to use and expensive.

4.4. Optical Information Processing, Optical Pattern Recognition

The last two decades of the last century were a very intensive period for the research in optical processing and optical pattern recognition. All aspects of these processors were investigated, and the research progressed remarkably.

One key element of an optical correlator is the reference filter, and an important part of the research concentrated on it. The correlation is shift-invariant but is scale-variant and orientation-variant; therefore, several solutions, using, for example, Synthetic Discriminant Functions (SDF), were proposed to overcome this drawback [110–112]. Besides the classical matched filter, several other improvements have been presented [113–117]. A large amount of work has also been carried out to enhance the discrimination of the target in a complex scene [118].
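In Fourier-plane notation, with $R(u,v)$ the spectrum of the reference, the filters discussed here can be summarized as follows (a standard textbook formulation, not tied to any particular cited design):

$$H_{\mathrm{matched}}(u,v) = R^*(u,v), \qquad H_{\mathrm{POF}}(u,v) = \frac{R^*(u,v)}{|R(u,v)|},$$

while an SDF filter is a linear combination $h = \sum_n a_n r_n$ of training images $r_n$ (for example, rotated or scaled versions of the target), with the coefficients $a_n$ chosen so that the correlation value at the origin takes a prescribed value for every training image: $(r_m \star h)(0,0) = c_m$.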

The architecture of the JTC was also studied extensively, particularly by Javidi, who proposed several improvements such as the nonlinear JTC [119–122].

A very large number of processors were constructed, taking advantage of the progress of SLMs and of the theoretical work on filters and architectures. Some of these processors stayed in the laboratories, while others were tested for real applications. Given the large number of optical processors constructed during this period of time, it is impossible to list them all in the frame of this paper. A book written in French by Tribillon gives a very complete state of the art of optical pattern recognition in 1998 [5]. The book edited in 1999 by Yu and Yin also gives a complete overview of the topic [123]. Therefore, only some examples of the optical processors developed between 1980 and 2004 are given here.

In 1982, Cleland et al. constructed an optical processor for detecting tracks in a high-energy physics experiment. This incoherent processor used a matrix of LEDs as the input plane and a matrix of kinoforms as the processing plane. It was used successfully in a real high-energy physics experiment at Brookhaven [124, 125].

The Hough transform is a space-variant operation for detecting the parameters of curves [126]. This transform can take full advantage of the parallelism of an optical implementation. In 1986, Ambs et al. constructed an optical processor based on a matrix of optically recorded holograms [18, 50]. This implementation was improved ten years later with the use of a large-scale DOE composed of a matrix of 64 by 64 CGHs with 4 phase levels, fabricated by lithographic techniques [127]. Several other optical implementations of the Hough transform were published. Casasent proposed several different optical implementations, for example, one using an acousto-optic cell [128]. A coherent optical implementation of the Hough transform was discussed by Eichmann and Dong [129], where the 2D space-variant transfer function is implemented by successively performing 1D space-invariant transforms, rotating the input image around its center point and translating a film plane for recording. Another implementation, for coherent or incoherent light, was proposed by Steier and Shori [130], who used a rotating Dove prism to rotate the input image, with detection by a linear detector array. Today the Hough transform is widely used in image processing for detecting parametric curves, but the implementation is electronic.
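For reference, this is the operation all of these processors compute, in the now-common digital form; a minimal sketch of the straight-line Hough transform with the usual $\rho = x\cos\theta + y\sin\theta$ parametrization (our own illustrative code):

```python
import numpy as np

def hough_transform(edge_image, n_theta=180):
    """Each edge pixel votes for all (rho, theta) lines passing through it;
    peaks in the accumulator correspond to straight lines in the image."""
    ys, xs = np.nonzero(edge_image)              # coordinates of edge pixels
    diag = int(np.ceil(np.hypot(*edge_image.shape)))
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    accumulator = np.zeros((2 * diag, n_theta), dtype=np.int64)
    for i, theta in enumerate(thetas):
        rho = np.round(xs * np.cos(theta) + ys * np.sin(theta)).astype(int)
        np.add.at(accumulator, (rho + diag, i), 1)  # offset: rho can be negative
    return accumulator, thetas
```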

Yu et al. proposed several optical processors for pattern recognition using different types of input SLMs [131], for example, an adaptive joint transform correlator for autonomous real-time object tracking [132] and an optical-disk-based JTC [133].

Pu et al. constructed a robot that achieved real-time navigation using an optoelectronic processor based on a holographic memory [134].

Thomson-CSF in France, in the frame of a European project, constructed and tested successfully a compact photorefractive correlator for robotic applications. The demonstrator was compact, roughly 600 mm long, and was composed of a mini-YAG laser, a liquid crystal SLM, and an updatable holographic BSO crystal [135]. This correlator was also used for fingerprint identification [136].

Guibert et al. constructed an onboard optical JTC for real-time road sign recognition that used a nonlinear optically addressed ferroelectric liquid crystal SLM in the Fourier plane [137].

A miniature Vander Lugt optical correlator was built around 1990 by OCA (formerly Perkin-Elmer). This correlator was composed of a Hughes liquid crystal light valve, a set of cemented Porro prisms, and a holographic filter. The purpose of this processor was to demonstrate this technology for autonomous missile guidance and navigation. The system correlated on aerial imagery and guided the missile to its preselected ground target. The processor was remarkable for its rugged assembly; it was 105 mm in diameter and 90 mm long, and it weighed 2.3 kg [138].

In 1995, OCA constructed a prototype of an optical correlator that fit in the PCI slot of a personal computer and was able to process up to 65 Mbyte of image data per second [139]. This processor was intended to be commercialized.

In 1992, DARPA in the USA launched a project named TOPS (Transitioning of Optical Processing into Systems), associating some universities and about ten important companies, potential users and developers of the technology.

BNS presented in 2004 an optical correlator using four-kilohertz analog spatial light modulators. The processor was limited to 979 frames per second by the detection camera; however, the rest of the correlator was capable of 4,000 frames per second [140].

The Jet Propulsion Laboratory (JPL) developed several optical processors for real-time automatic target recognition [141]. The University of Sussex also constructed an all-optical correlator and a hybrid digital-optical correlator [142].

It should also be noted that several optical correlators were available commercially, but it is not certain that they were a commercial success, since most of them are no longer commercialized. For example, in 2000, optical correlators were commercialized by INO [143] and BNS [86]. In 2001, Parrein listed in her PhD thesis 10 optical correlators that were available [144].

Optical processors were also designed for many other operations, such as matrix operations [145], systolic array processing [146], and neural network processors [147].

4.5. Digital Optical Computing

The optical processors described in the previous sections were analog. However, in order to compete more efficiently with digital electronic computers, a very important research effort was directed toward digital optical computing. Again, the field of digital optical computing is extremely broad, and the results obtained are too numerous to be described in the frame of this paper. The interested reader will find several books on the subject, for example, [148, 149]. The proceedings of the numerous conferences dedicated to optical computing are also very instructive. For example, the proceedings of the ICO conference "Optical Computing" held in Edinburgh in 1994 show very well the situation of digital and analog optical computing at that time [150]. Novel optical components such as vertical-cavity surface-emitting lasers (VCSELs) or symmetric self-electro-optic-effect devices (SEED) [151] were studied and constructed. Several digital optical computers were proposed; for example, Guilfoyle and Stone constructed a 32-bit, fully programmable digital optical computer (DOC II) designed to operate in a UNIX environment running RISC microcode [152].

4.6. Optical Interconnects

Optical interconnects are a field where optics has great potential; these interconnects can be guided or in free space. All aspects of optical interconnections were studied: components (switches, sources, detectors, etc.), architectures, routing algorithms, and so forth. In 1989, Goodman wrote a complete analysis of optics as an interconnect technology [153], and a brief historical summary of the development of the field of optical interconnects to silicon integrated circuits can be found in a paper written by Miller in 2000 [154]. A very large number of papers were published in the nineties on the subject, for example, on the optical perfect shuffle [155], on hypercube-based optical interconnects [156], on crossbar networks [157, 158], on the use of liquid crystal SLMs for optical interconnects [159], on diffractive optics for optical interconnects [160], on holographic interconnection networks and their limitations [161], and on board-level interconnects [162].

Today, optics has no challenger in the domain of telecommunications, with optical fibers and optical cables, wavelength-division multiplexing (WDM), optical amplifiers, and switches based on MEMS.

In 2009, Intel was still studying the possibility of replacing electrical interconnects between chips by optical interconnects, with their terahertz bandwidth, low loss, and low cross-talk [163]. Miller also published in 2009 a paper on the device requirements for optical interconnects to silicon chips, where he pointed out the need for very-low-energy optoelectronic devices and novel compact optics [164].

5. Optical Computing Today

The traditional field of optical computing is no longer so active; it is not dead, but it has evolved. Today, numerous research topics benefit from the results of the research in optical computing, and therefore the field is perhaps no longer so well defined. Several signs show that the activity has changed. Applied Optics no longer devotes an issue per month to the subject, but each issue has a section "Information Processing" with an average of only 4 papers per issue. There are no longer large specialized international conferences named "Optical Computing". However, it should be noted that there are still two conferences organized by the SPIE on the subject: "Optical Pattern Recognition", held for 20 years in Orlando in the frame of the SPIE conference "Defense, Security, Sensing", and "Optics and Photonics for Information Processing", held in San Diego in the frame of the SPIE conference "Optics and Photonics". In August 2009, a special section on "Optical High-Performance Computing" was published in Applied Optics and JOSA A [165].

The research on optical correlators is continued by fewer research teams; however, it should be noted that the Jet Propulsion Laboratory (JPL) is still working on optical correlators for real-time automatic target recognition [141].

Some of the algorithms initially developed for optical pattern recognition are now used successfully in digital computers. DOEs are now mature and are part of numerous industrial products. All the research on the fabrication of DOEs made possible the fabrication of nanostructures and opened very exciting new fields of research such as nanophotonics [166, 167], nanofluidics [168], and optofluidics [169]. The list of papers presented in 2009 at the SPIE conference "Optics and Photonics" reflects the growing interest in all the research related to nanoscience and nanooptics.

Biophotonics is an exponentially growing field that benefits largely from the past research in optical processing. Typical examples are optical tweezers and optical trapping [170, 171].

Thanks to digital holography, where the holographic plate is replaced by a camera, holography is again finding industrial applications, particularly for the quality control of manufactured products [172–174] and for digital holographic microscopy [175], opening completely new fields of application for optical microscopy.

For information processing, optics is also finding a place where it has a unique feature, such as polarimetric imaging [176–178] or multispectral imaging [179]. Security applications are also a promising field for optical information processing [180]. And, of course, optics is commonly used in communication systems.

This list is too short to reflect the full evolution of optical processing and its implications for the research of the future.

6. Conclusions

The history of the development of the research in the field of optical computing reveals an extraordinary scientific adventure. It started with the processing power of coherent light and particularly its Fourier transform capability. The history shows that considerable efforts were dedicated to the construction of optical processors that could process a large amount of data in real time. Today, we see that optics is very successful in information systems such as communications and memories, compared to its relative failure in computing. This could have changed if, in the seventies, when electronic computers were slow and of limited power, today's components such as efficient SLMs, laser diodes, or high-speed and high-resolution detectors had been available. Nevertheless, all the research results in optical computing contribute strongly to the development of new research topics such as biophotonics, nanophotonics, optofluidics, and femtosecond nonlinear optics. But the dream of an all-optical computer overtaking the digital computer never became reality, and optical correlators for pattern recognition have almost disappeared. The reasons are multiple. The speed of the optical processor was always limited by the speed of the input and output devices. Digital computers have progressed very rapidly: Moore's law is still valid, multicore processors are more powerful, and it is clear that digital computers are easier to use and offer more flexibility. Digital computers have progressed faster than optical processors. Optical computing is mostly analog, whereas electronic computing is digital, and digital optical computers were not able to compete with electronic ones due to the lack of appropriate optical components. It appears clearly that the solution is to associate optics and electronics and to use optics only when it can bring something that electronics cannot do. Optical processing is useful when the information is optical and no electronics-to-optics transducers are needed.

The potential of optics for parallel real-time processing remains, and the future will tell whether optical computing will come back, for example, by using nanotechnologies.