Abstract

To detect and investigate short stochastic optical flares from variable astrophysical objects of unknown localization (GRBs, SNe, flare stars, CVs, X-ray binaries), as well as from near-Earth objects (NEOs), both natural and artificial, it is necessary to systematically monitor large regions of the sky with high temporal resolution. Here we discuss the criteria for a system able to perform such a task and describe two cameras we created for wide-field monitoring with high temporal resolution, FAVOR and TORTORA. We also describe the basic principles of real-time data processing at the high frame rates needed to achieve subsecond temporal resolution on typical hardware.

1. Introduction

The systematic study of night-sky variability on subsecond time scales remains an important but practically unsolved problem. Its necessity for the search for non-stationary objects of unknown localization was noted by Bondi [1]. Such studies have been performed [2, 3], but due to technical limitations it has only been possible either to reach a high temporal resolution of tens of milliseconds while monitoring 5–10 fields, or to use a 5–10 second time resolution over wider fields. The wide-field monitoring systems currently in operation, such as WIDGET [4], RAPTOR [5], BOOTES [6] and Pi of the Sky [7], while having good sky coverage and limiting magnitude, lack temporal resolution, which significantly lowers their performance in the study of transient events of subsecond duration.

Optical transients of unknown localization may be very short. For example, the rise times of flashes of some UV Cet-like stars may be as short as 0.2–0.5 seconds [8], 30% of GRBs have durations shorter than 2 seconds, and details of their light curves may be seen on time scales shorter than 1 ms [9]. Also of great interest are observations of very fast meteors, which may originate outside the Solar system [10].

One more task requiring wide-field observations with high temporal resolution is the monitoring of near-Earth space. There are a number of satellites, as well as a vast number of small space-debris fragments, whose trajectories evolve rapidly and which are therefore difficult to observe by typical satellite-tracking methods. High temporal resolution is needed here due to the fast motion of such objects across the sky.

To study the variability of large sky areas on such time scales, it has been proposed [11, 12] to use the large, low-quality mosaic mirrors of air Cerenkov telescopes. However, in [13, 14] we demonstrated that it is possible to achieve subsecond temporal resolution in a reasonably wide field with small telescopes equipped with fast CCDs, and to perform fully automatic detection and classification of fast optical transients. Moreover, a two-telescope scheme [15, 16], able to study such transients within a very short time after detection, has been proposed. Following these ideas, we created the prototype fast wide-field camera FAVOR [14] and the TORTORA camera, part of the TORTOREM [17] two-telescope complex, and operated them over several years.

The recent discovery of the brightest GRB ever observed, GRB 080319B (the Naked-Eye Burst [18]), by several wide-field monitoring systems (TORTORA [19], RAPTOR [20] and Pi of the Sky [21]), and the subsequent discovery of its fast optical variability [22] on time scales from several seconds down to subseconds [23], demonstrated that the ideas behind our efforts in high temporal resolution wide-field monitoring are correct.

In this article we describe these basic ideas, covering both the selection of optimal hardware parameters and the principles of real-time data processing for such high temporal resolution observations.

2. General Requirements for Wide-Field Monitoring

Typical follow-up observations, performed for the detailed study of newly discovered transients, require no more than a good robotic telescope with fast repointing. Such instruments, however, inevitably begin to capture data only after the first few seconds or tens of seconds of the event. To get information from the start of the event, which is essential for understanding the nature and properties of transients, one needs to observe the position of the transient before it appears. And, as transients occur in unpredictable places, the systematic monitoring of large sky regions becomes an important task.

For such monitoring, one needs to select the optimal set of mutually exclusive parameters: the angular size of the field of view, the limiting magnitude and the temporal resolution. Indeed, the solid angle Ω of the sky covered by an objective with diameter D and focal length F, equipped with an N × N pixel CCD with pixel size d and exposure time τ, is

Ω ∝ (N d / F)²,

while the faintest detectable object flux, for a sky background B dominating over the CCD read-out noise, is

F_min ∝ (d / (D F)) (B / τ)^(1/2).

For the case of CCD read-out noise σ domination, the limit is

F_min ∝ σ / (D² τ).

The number of detectable events from a population uniformly distributed in Euclidean space is

N ∝ Ω F_min^(−3/2)

when the duration t of the event is longer than the exposure, and

N ∝ Ω (F_min τ / t)^(−3/2)

when it is shorter: as τ decreases, one can detect a larger number of events in a greater volume. High temporal resolution is thus essential for the detection and analysis of short optical transients. On the other hand, it requires fast CCD matrices, which usually have large read-out noise that limits the detection of faint objects.
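These scalings can be illustrated numerically. The following sketch uses made-up parameter values (not those of FAVOR or TORTORA) and keeps only the τ-dependence of the background-limited case:

```python
# Illustrative scaling of field of view and event rate with exposure time.
# All parameter values below are invented for demonstration only.
import math

def field_of_view_sr(n_pix, pix_size_m, focal_m):
    """Solid angle covered by an N x N pixel CCD: Omega ~ (N*d/F)^2."""
    return (n_pix * pix_size_m / focal_m) ** 2

def relative_event_rate(exposure_s, event_duration_s):
    """Relative number of detectable events, background-limited case.
    N ~ F_min^(-3/2) with F_min ~ tau^(-1/2) for events longer than the
    exposure; for shorter events the peak flux is diluted by t/tau."""
    f_min = exposure_s ** -0.5          # background-limited detection flux
    if event_duration_s >= exposure_s:
        return f_min ** -1.5
    return (f_min * exposure_s / event_duration_s) ** -1.5

omega = field_of_view_sr(1000, 16e-6, 0.15)   # ~0.011 sr for these numbers
# For a 0.3 s flare, shortening the exposure from 5 s to 0.3 s
# increases the number of accessible events several-fold:
gain = relative_event_rate(0.3, 0.3) / relative_event_rate(5.0, 0.3)
```

The `gain` factor shows why matching the exposure to the expected event duration pays off for short transients.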

Most general-purpose wide-field monitoring systems currently in operation, listed in Table 1, chose a large field of view while sacrificing temporal resolution to achieve a decent detection limit. Our cameras, FAVOR and TORTORA, in contrast, chose high temporal resolution as the key parameter.

Several examples of various hardware configurations leading to different sizes of field of view for a fixed temporal resolution are shown in Figure 1.

3. FAVOR and TORTORA Cameras

It is obvious that the realization of an ideal wide-field system, combining a wide field, high temporal resolution and a good detection limit, is very difficult. One possible variant of such a project, Mega TORTORA, based on a multi-objective design and fast, low-noise EM-CCDs, is presented elsewhere [24]. For our cameras, we used a simpler and cheaper approach, described in [14]. Its main idea is the use of an image intensifier both to effectively reduce the focal length by downscaling the image and to overcome the read-out noise of a fast TV-CCD through the high amplification factor of the intensifier. Technical parameters of the cameras are listed in Table 2, and their images are shown in Figure 2. Compared to FAVOR, TORTORA has a somewhat smaller aperture and focal length, as well as a coarser pixel scale, but a significantly larger field of view and a slightly sharper PSF FWHM.

Both cameras use custom f/1.2 objectives, refractive in FAVOR and reflective in TORTORA, and are equipped with image intensifiers and fast CCDs based on the Sony ICX285AL chip. The TORTORA camera is also equipped with two motors for automatic focusing of the main objective and the transmission optics, while FAVOR is focused manually. The FAVOR camera is placed on a dedicated equatorial mount, while TORTORA is mounted on top of the Italian REM telescope, whose alt-azimuthal mount design leads to field rotation during observations. When possible, the cameras are pointed at the sky regions observed by the Swift telescope, according to its real-time pointing information distributed through the GCN network [25].

4. Data Acquisition

The cameras are operated by a cluster of three PCs connected via a dedicated Gigabit Ethernet network. The first one is equipped with a TV frame-grabber; it acquires data from the TV-CCD at 7.5 frames per second, attaches a timestamp to each frame and supplies the frames to the network as a set of broadcast UDP packets. Any number of clients on this network can collect and process these data. In a typical scheme, one of the clients is a RAID array which continuously stores all broadcast data for possible detailed future investigation. The size of the RAID array has to be chosen to hold at least the amount of data collected during one or two nights; an 8-hour acquisition corresponds to about 580 GB of data. The RAID PC also hosts a network service able to extract any given frame from the RAID array and send it to a client.
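The broadcast scheme can be sketched as follows; the packet layout, port number and chunk size are our illustrative assumptions, not the actual FAVOR/TORTORA protocol. The storage estimate at the end reproduces the figures quoted in the text.

```python
# Sketch of the frame-distribution scheme: the grabber PC timestamps each
# frame and broadcasts it as UDP packets that any client on the local
# network may consume. Header layout, port and chunk size are illustrative.
import socket
import struct
import time

PORT = 5005          # hypothetical port number
CHUNK = 1400         # payload bytes per packet, below a typical MTU

def packetize(frame_id, frame_bytes, timestamp):
    """Split one frame into (header + payload) packets.
    Header: frame id, packet index, packet count, timestamp in seconds."""
    n = (len(frame_bytes) + CHUNK - 1) // CHUNK
    for i in range(n):
        header = struct.pack("!IIId", frame_id, i, n, timestamp)
        yield header + frame_bytes[i * CHUNK:(i + 1) * CHUNK]

def send_frame(sock, addr, frame_id, frame_bytes):
    """Broadcast a single timestamped frame."""
    for pkt in packetize(frame_id, frame_bytes, time.time()):
        sock.sendto(pkt, addr)

# Storage budget for the archiving client: a 1388 x 1036 16-bit camera
# at 7.5 frames/s produces ~20.5 MiB/s, i.e. roughly 580 GiB in 8 hours.
rate_mib_s = 1388 * 1036 * 2 * 7.5 / 2**20
night_gib = rate_mib_s * 8 * 3600 / 1024
```

The arithmetic at the bottom shows that the 20.5 MB/s and 580 GB figures in the text are consistent with the stated frame size and rate.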

One more client connected to the local network performs real-time data processing, extracting information on transients and performing their basic classification during the observations. During the daytime it then performs a detailed analysis of these events. This PC also hosts the software controlling the whole cluster and, in the case of the TORTORA system, communicating with the REM telescope. A schematic view of the complete set of interacting network services inside the TORTORA system is presented in Figure 3.

5. Real-Time Data Processing

The fast TV-CCD camera with 1388 × 1036 pixels operating at a 7.5 Hz frame rate produces 20.5 megabytes of data each second. Processing such an enormous data flow is complicated. A typical data-processing pipeline, for example a SExtractor-based one [26], requires at least one second on a modern PC to extract all objects from a single frame and compare them with a catalogue. As this is an order of magnitude slower than our task requires, we decided to use a simpler processing scheme aimed at the detection of variable objects only. This approach, often called the “differential imaging” method, is based on a statistical analysis of pixel values over a sequence of images. During data acquisition, an “average frame” is constantly updated, representing the mean value of each pixel over some number of previous frames (100 frames seems to be optimal, since on longer scales atmospheric variations become significant). A “dispersion frame” is formed in a similar way, representing the estimated standard deviation of each pixel over the same number of previous frames.
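A minimal sketch of these running statistics, assuming numpy and a naive ring buffer (a production pipeline can update the estimates incrementally instead of recomputing over all 100 frames):

```python
# Running "average" and "dispersion" frames over a sliding window of the
# last N frames, as used in differential imaging. Window length and frame
# shape are illustrative.
import numpy as np

class RunningStats:
    def __init__(self, window=100):
        self.window = window
        self.frames = []          # ring buffer of the most recent frames

    def update(self, frame):
        """Add a new frame, dropping the oldest one beyond the window."""
        self.frames.append(np.asarray(frame, dtype=float))
        if len(self.frames) > self.window:
            self.frames.pop(0)

    def mean(self):
        """The "average frame": per-pixel mean over the window."""
        return np.mean(self.frames, axis=0)

    def std(self):
        """The "dispersion frame": per-pixel standard deviation, with a
        small floor to avoid division by zero in later steps."""
        return np.std(self.frames, axis=0) + 1e-9
```

The 100-frame window mirrors the choice above: long enough for stable statistics, short enough that slow atmospheric variations do not bias them.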

Then, from each consecutive frame we subtract the “average frame” to cancel the spatial variations across the frame related to inhomogeneities of both the CCD and image-intensifier sensitivity, and of the sky due to bright stars and the Milky Way. The resulting residual frame is divided by the “dispersion frame” to compensate for different levels of stationary variability: star PSF wings are more variable than the background, while CCD defects are mostly not variable. Finally, thresholding is performed to locate pixels deviating significantly (i.e., by more than 3σ) from the mean level. The process is illustrated in Figure 4. A fast clustering algorithm is then used to locate contiguous areas of interest on the frame, the so-called “events”. A 4σ pixel connected with a 3σ one is regarded as the minimal acceptable event. Assuming Gaussian statistics, this leads to about 0.5 purely stochastic false events per frame; in real operation, the number of such events is found to be 2–4 due to the non-Gaussian nature of the image-intensifier noise, caused by the stochastic nature of the electron multiplication process as well as by thermal and ion events.
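The per-frame detection step can be sketched as follows; the flood-fill clustering here is a simple stand-in for the fast clustering algorithm mentioned above, and the thresholds follow the 3σ/4σ criterion from the text:

```python
# Per-frame transient detection: subtract the average frame, divide by the
# dispersion frame, threshold at 3 sigma, and keep only clusters that also
# contain at least one 4-sigma pixel.
import numpy as np
from collections import deque

def detect_events(frame, avg, disp, lo=3.0, hi=4.0):
    """Return a list of events, each a list of (y, x) pixel coordinates."""
    snr = (frame - avg) / disp           # per-pixel significance
    mask = snr > lo                      # candidate pixels above 3 sigma
    seen = np.zeros_like(mask, dtype=bool)
    events = []
    for y, x in zip(*np.nonzero(mask)):
        if seen[y, x]:
            continue
        # flood-fill the 8-connected cluster containing (y, x)
        cluster, queue = [], deque([(y, x)])
        seen[y, x] = True
        while queue:
            cy, cx = queue.popleft()
            cluster.append((cy, cx))
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = cy + dy, cx + dx
                    if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                            and mask[ny, nx] and not seen[ny, nx]):
                        seen[ny, nx] = True
                        queue.append((ny, nx))
        # minimal acceptable event: a 4-sigma pixel connected to a 3-sigma one
        if len(cluster) >= 2 and any(snr[p] > hi for p in cluster):
            events.append(cluster)
    return events
```

Isolated single pixels above threshold are rejected, which is what suppresses most of the purely stochastic false events.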

Thus, the marginally detected event has a 7σ flux; the corresponding limiting magnitudes of FAVOR and TORTORA depend on the sky conditions.

It is worth mentioning that the dispersion estimate used in differential imaging corresponds to the temporal variations of a pixel value, not the spatial variations of values across a single frame. The noise behaviour is generally non-ergodic: temporal and spatial dispersions may differ significantly, and the differential-imaging limit may deviate from the single-frame “classical” detection limit in both directions.

The differential imaging method is not universal. The estimation of the “null hypothesis” statistical properties of each pixel over previous frames becomes biased when the transient is sufficiently long and bright: its flux in the first frames where it appears influences the mean and dispersion values for the subsequent frames. This is demonstrated in Figure 5 for the 60-second-long Naked-Eye Burst as well as for a 2-second-long artificial event with the same temporal structure and peak brightness. Differential imaging is clearly ineffective for long and slowly-changing transients (which may be studied well by other methods on longer time scales); it is, however, extremely powerful in the detection of highly variable or fast-moving ones (as fast motion implies rapid changes in the pixels at the object edges). Due to this feature, differential imaging is also insensitive to slow variations of atmospheric conditions.

Despite these limitations, the differential imaging method solves the “bottleneck” problem of real-time data processing at high frame rates: extracting all events on a single frame that deviate significantly from the average one requires only about 10 ms on a typical PC.

6. Detection and Classification of Fast Optical Transients

Detection of a contiguous region significantly deviating from the average-frame level is not enough to draw a conclusion about its nature: it may be a meteor, a satellite, a flash of a star-like object or just a noise event. Therefore, additional steps are needed to check whether the object is real and to determine its type.

In our software, we use a simple three-stage algorithm based on determining the parameters of object motion, its direction and speed. As a first step, after the initial detection of an event on the first frame, we form a circular region of expectation for its appearance on the next one, assuming its speed is below some reasonable value (for FAVOR and TORTORA, we chose 1 degree per second as the upper limit). Then, on the second frame, for all events inside this region, we form a set of hypotheses about the possible directions and speeds of the object; for each hypothesis, the coordinate errors of the first and second events define a new, smaller elliptical region of expectation for the next frame. Finally, on the third frame, we check whether the object is detected at the positions predicted by each hypothesis, and usually only one hypothesis survives. This three-stage process is illustrated in Figure 6. An object for which no suitable event appears on the second frame is assumed to be a noise event and is rejected. Objects with no suitable events on the third frame may optionally be checked on the fourth one. Objects with detected and confirmed motion may be tracked while they remain inside the field of view, with events from new frames constantly added to their trajectories and light curves, which allows their motion parameters to be determined with increasing accuracy; a typical satellite passing over the FAVOR site may be seen in up to 500 consecutive frames.
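The three-frame logic can be sketched as follows; coordinates are in degrees, the 1 deg/s speed cap follows the text, and the positional error and circular (rather than elliptical) matching regions are our illustrative simplifications:

```python
# Sketch of the three-frame motion test: a first-frame detection defines a
# circular search region on frame 2; each (frame1, frame2) pair defines a
# velocity hypothesis and a predicted position on frame 3.
import math

MAX_SPEED = 1.0        # deg/s, assumed upper limit on object motion
FRAME_DT = 1 / 7.5     # s, one TV frame at 7.5 frames per second

def motion_hypotheses(p1, detections2, pos_err=0.01):
    """Pair a first-frame detection with second-frame detections inside
    the expectation circle; return (velocity, predicted position) pairs."""
    out = []
    r_max = MAX_SPEED * FRAME_DT + pos_err
    for p2 in detections2:
        dx, dy = p2[0] - p1[0], p2[1] - p1[1]
        if math.hypot(dx, dy) <= r_max:
            vel = (dx / FRAME_DT, dy / FRAME_DT)
            pred = (p2[0] + dx, p2[1] + dy)   # constant-velocity prediction
            out.append((vel, pred))
    return out

def confirm(hypotheses, detections3, pos_err=0.01):
    """Keep hypotheses whose prediction matches a third-frame detection."""
    ok = []
    for vel, pred in hypotheses:
        for p3 in detections3:
            if math.hypot(p3[0] - pred[0], p3[1] - pred[1]) <= 2 * pos_err:
                ok.append((vel, p3))
    return ok
```

A first frame with no match on frame 2 leaves `motion_hypotheses` empty (a noise event); a surviving hypothesis on frame 3 yields both a confirmation and a velocity estimate.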

Immobility is simply a limiting case of motion, one with zero speed. So, if an object is detected in three consecutive frames and its estimated speed does not exceed its statistical error, it is assumed to be an immobile transient, probably at a galactic or cosmological distance. Slowly moving satellites, however, may produce flares due to rotation, and these flares may be too short to measure the speed. An example of such a flare is shown in Figure 7. Such events may be identified with known satellites by examining the catalogue of satellite orbital elements [27] and pre-computing their trajectories over the observatory field of view. Unfortunately, it is not always possible to identify such flashes, as not all satellites (and large enough pieces of space debris) are in the database. They may, however, be identified if their flashes repeat while they pass through the camera field of view.
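The immobility criterion can be sketched as a linear-motion fit to the measured positions, calling the object immobile when the fitted speed is within its statistical error; the least-squares approach below is our illustrative choice, not necessarily the method used in the actual pipeline:

```python
# Immobility test: fit linear motion to the measured positions and treat
# the object as immobile when both fitted velocity components are
# consistent with zero within their errors.
import numpy as np

def is_immobile(times, xs, ys):
    """True if the fitted speed in both coordinates is within its error.
    Needs at least 4 points for numpy's covariance estimate."""
    for coords in (xs, ys):
        (speed, _), cov = np.polyfit(times, coords, 1, cov=True)
        if abs(speed) > np.sqrt(cov[0, 0]):   # speed significantly nonzero
            return False
    return True
```

The same fit, continued over further frames, is what refines the motion parameters of tracked objects with increasing accuracy.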

Meteors cannot be reliably detected by this algorithm, as they move too fast, form significantly non-point-like tracks on the frames, and are often visible for only one or two frames. However, bright ones can be extracted by purely geometrical criteria on each frame and then combined at a post-processing stage. Classical methods like the Hough transform may then be used to estimate the direction of a meteor track, to obtain its intensity profile and, for events lasting several frames, to measure its velocity. An example of a fast meteor detected by the FAVOR camera is shown in Figure 8.
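A minimal Hough-transform sketch for recovering the track orientation from a binary detection mask; the angle sampling and one-pixel bin width are illustrative choices:

```python
# Minimal Hough transform for a linear track: for each trial angle,
# project the track pixels onto the normal direction and look for the
# angle at which they pile up into a single distance bin.
import numpy as np

def track_angle(mask, n_angles=180):
    """Return the normal angle (radians) with the strongest Hough peak."""
    ys, xs = np.nonzero(mask)
    thetas = np.linspace(0.0, np.pi, n_angles, endpoint=False)
    best_theta, best_count = 0.0, -1
    for theta in thetas:
        # signed distance of each pixel from the origin along the normal
        rho = xs * np.cos(theta) + ys * np.sin(theta)
        counts = np.bincount(np.round(rho - rho.min()).astype(int))
        if counts.max() > best_count:
            best_count, best_theta = counts.max(), theta
    return best_theta
```

The returned angle is the direction of the track normal; the track itself runs perpendicular to it, and the pixel intensities ordered along that direction give the intensity profile.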

The algorithm described above can detect a transient and measure its motion parameters within three consecutive frames, which for the FAVOR and TORTORA cameras corresponds to less than 0.4 seconds. The nature of the transient may then be guessed from this information and by comparing its position and time of appearance with the catalogue of satellite passes over the site. At that point, any event showing no signatures of motion and not present in the satellite database may become the target of follow-up and detailed investigation by robotic telescopes [17].

7. Conclusions

The importance of wide-field monitoring is obvious. High temporal resolution must be an essential part of such monitoring, as demonstrated by the discovery of the fast optical variability of the Naked-Eye Burst. Real-time data processing is equally important, as it allows information on detected transients to be distributed to robotic telescopes or, in the case of a Mega TORTORA-like complex [24], the system to be reconfigured for a detailed investigation of the event. We have demonstrated that it is not too difficult both to construct cameras able to perform such monitoring and to organize the real-time detection and classification of fast optical transients.

Acknowledgments

This work was supported by the Bologna University Progetti Pluriennali 2003, by grants of CRDF (no. RP1-2394-MO-02), RFBR (nos. 04-02-17555, 06-02-08313 and 09-02-12053), INTAS (04-78-7366), and by the Presidium of the Russian Academy of Sciences Program. The first author has also been supported by a grant of the President of the Russian Federation for the federal support of young scientists.