Abstract

The Montsec Astronomical Observatory (OAdM) is a small-class observatory working in fully unattended mode, owing to the isolation of the site. Robotic operation is therefore mandatory for its routine use. We present a description of the general control software and of several software packages developed for it. The general control software protects the system, especially at the identified single points of failure, and implements distributed control of every subsystem.

1. Introduction

During the coming months, the OAdM will begin routine operations in fully unattended mode. The telescope and instrumentation have finished their commissioning phases and are now being operated in a supervised robotic manner (astronomers are present at the observatory in case human intervention is required) for a six-month testing period. The primary guideline of the project is to guarantee the robotic operation of the observatory in order to carry out astronomical observations. The design and development processes were focused on ensuring a reliable and efficient system while working in this operational mode.

2. Observatory Description

2.1. Site

The OAdM is located at an altitude of 1570 m on the Montsec mountain, 50 km south of the central Pyrenees and 50 km north of the city of Lleida (Catalonia, northeastern Spain).

Site testing campaigns were conducted over three years, with measurements of air stability, transparency, and weather conditions, and showed the high quality of the night sky over the Montsec mountain. Light pollution at the OAdM was found to be very low, with a night sky brightness at the zenith as low as 22.0 mag arcsec⁻² in the Johnson visual band, and between 21.5 and 20.6 mag arcsec⁻² at lower elevations above the horizon. Concerning the weather conditions, humidity is below 80% (90%) during 64% (77%) of the time (resulting in 81% of useful astronomical nights). Night temperatures are in the range from ()–()°C in winter to 20–25°C in summer. The mean wind speed is usually below 15 m s⁻¹ and the sky is cloudless 84% of the time. Recent measurements using a Robotic Differential Image Motion Monitor (DIMM, [1]) have confirmed these results: a median seeing of was obtained from five months of data, better than during 32% of the time.

2.2. Instrumentation

The OAdM hosts a 0.8 m telescope, the Joan Oró Telescope (TJO). It was supplied by Optical Mechanics Inc. (OMI) and installed in March 2004. The 6.15 m dome was manufactured by Baader Planetarium GmbH and is fully automatic. The only current instrument is a Finger Lakes Instrumentation ProLine CCD camera, with a back-illuminated Marconi CCD42-40 chip with a pixel size of 13.5 μm, giving a field of view at the Cassegrain focus of 12.3 × 12.3 arcmin (0.36 arcsec pixel⁻¹). The telescope is also equipped with an automatically controlled 12-position filter wheel holding the five filters of the Johnson-Cousins photometric system (U, B, V, R, I). Several instruments for environment monitoring acquire data continuously: two weather stations, a GPS antenna, a storm detector, and so forth. The external communication necessary for remote supervision and control is provided by a radio link antenna with 10 Mbps bandwidth.

A complex software (SW) architecture, with several newly developed applications, manages all observatory operations. Basic telescope and dome control is carried out through the TALON SW (described in Section 5.1).

As complementary instrumentation, an all-sky CCD camera is in operation to detect bolides and meteors. As a mid-term objective, its images will be used to generate cloud maps that will allow the selection of clear areas of the sky.

2.3. Science Case

The OAdM scientific program will be comprehensive, embracing the observation of stellar variability, solar-type stellar activity, exoplanets, follow-up of novae and supernovae, Solar System bodies, and transient phenomena. The observatory will offer time to the astronomical community via a Time Allocation Committee (TAC). Moreover, the robotic design provides considerable flexibility in the night scheduling, which allows a rapid reaction to observational alerts related to GRBs, new supernovae, and so forth. Participation in networks of robotic observatories will be considered in order to carry out, for example, observations that require dense time coverage.

2.4. Project Technological Scope

There are numerous advantages in operating an astronomical observatory under robotic control instead of human attendance. These have been widely described in publications [2] over the last decade and will not be discussed here. The OAdM was designed from the beginning with one concept in mind: ensuring a reliable, secure, and efficient system while working in robotic mode. This mode of operation implies complex technology that is not commonly used in classical ground-based observatories, but rather in space observatories or some industrial applications. Its use has since been extended to ground-based observatories thanks to the evolution of hardware (HW) and SW capabilities, motivated by the clear advantages it offers in terms of time optimization to maximize the scientific return. However, achieving reliable operation is still an issue for most of these facilities, which must be adapted to work at remote sites and under extreme environmental conditions.

The robotization level of an observatory is determined by the confidence reached in responding to environmental changes and by the human interaction required to handle possible alarms. These two points establish the level of human attendance needed to ensure low risk at any time. Human attendance diminishes as the robotization level increases, and this covers all the processes involved in observatory operations. Many situations should be solved automatically by a highly robotized observatory: reacting correctly to critical errors or bad environmental conditions, responding to external or internal alarms, evaluating the acquired images in real time, adapting the scheduled observations to the current conditions, and so forth. The OAdM has been designed to achieve this high level of robotic control and to require on-site service only about once per month.

Single Points of Failure and Redundancies
Unattended observatories have to emulate human response to all those events that perturb the normal workflow. Any decision must be planned at the design phase. The malfunctioning of any HW or SW element must be detectable and a suitable response must be provided to ensure reliability and safety. Special attention must be paid to those points that could cause damage to critical parts of the system.

The dome shutter has been identified as the most critical point for the safety of the telescope, which is the core element of the observatory. Any error in its response would be fatal if it happened during a storm, so special care has been put into dome control. See Section 4 for more details.

The specific environment at each site also introduces particular requirements. An observatory must have a planned response to any weather variable that could affect the correct performance of the system or the quality of the acquired data. Instrument safety and working efficiency depend on a good response to environmental conditions. The reliability of weather data is, then, the second critical point, and this is why there are several weather sensors with redundancies (see Section 6). All sensors are checked and calibrated periodically.

3. Initial Setup

The core element for the robotic operation of the OAdM was the OMI telescope and its control SW (TALON), which at the time of purchase guaranteed reliable unattended control. After a poor installation by the manufacturer, which required subsequent extensive efforts to reach working condition, we realized that a number of aspects had to be improved and new features added to achieve reliable, safe, and efficient robotic control. The system provided by the manufacturer, for example, had no application to control a dome, nor any redundancy in case the main dome control channel failed; only a small predefined number of environment sensors could be connected, without the possibility of adding a redundant weather station, a storm detector, and so forth. In addition, a simple automatic queue scheduler was used to sort the observations before the beginning of the night, without any real-time response capability to alarms. Apart from the poor initial installation, the original telescope and SW setup enabled low-level robotic observatory control, but in a much more limited manner than anticipated in the project.

In the following sections, we describe the HW and SW elements, organized in work packages, focusing on the unsolved aspects found during the implementation phase.

4. WP Dome

The dome was manufactured by the company Baader Planetarium GmbH. It is an automatic dome of 6.15 m diameter that receives control commands, based on TTL signals, from the telescope electronics to synchronize their movements and to open and close the shutter.

A redundant control of shutter closing (developed by Insercad Electronica S.L.) was installed in order to reduce the risk at this single point of failure. It uses a dedicated, extremely robust computer to run this parallel control in case the main control subsystem crashes. In that event, a UNIX-based application sends a shutter-close order to the dome control electronics through an electronic board specially designed to manage signals coming from both the telescope and the redundant system.
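
The logic of this redundant channel amounts to a watchdog that closes the shutter when the main control computer stops responding. The following is a minimal sketch of that idea, not the actual Insercad application: the host name, check period, number of tolerated failures, and the close_shutter() routine are assumptions for illustration.

```python
# Minimal watchdog sketch for a redundant shutter-close channel.
# Host name, timing constants, and close_shutter() are illustrative
# assumptions; the real system drives a dedicated electronic board.
import socket
import time

MAIN_CONTROL_HOST = "sub-estall.oadm.local"  # hypothetical main control computer
CHECK_PERIOD_S = 30                          # seconds between liveness checks
MAX_FAILURES = 4                             # consecutive failures before acting


def main_control_alive(host, port=22, timeout=5.0):
    """Return True if the main control computer answers on a known TCP port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def close_shutter():
    """Placeholder for the shutter-close order sent through the dedicated board."""
    print("ALARM: main control unreachable -- sending shutter-close order")


def watchdog():
    failures = 0
    while True:
        if main_control_alive(MAIN_CONTROL_HOST):
            failures = 0
        else:
            failures += 1
            if failures >= MAX_FAILURES:
                close_shutter()
                failures = 0  # keep watching after acting
        time.sleep(CHECK_PERIOD_S)


if __name__ == "__main__":
    watchdog()
```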

Power is supplied using a dedicated UPS unit, whose load and battery charge values are automatically monitored.

5. WP Telescope

The TJO telescope has an equatorial fork mount. The instrumentation is placed at the Cassegrain focus. A segmented mirror cover protects the primary mirror from dust when the telescope is stowed. All these elements are automatically controlled, and a standard pointing accuracy of about 10 arcsec over the entire sky is achieved. The electronics used to handle each axis of motion (Right Ascension (RA), declination, focus, and filter wheel) are the Clear Sky Institute Motion Controller (CSIMC) boards, distributed by OMI. A network of five CSIMC boards (four of them placed on the telescope mount) connects the axis-control devices to the computer through an RS232 link to only one of the boards, which serves as a gateway node. These standalone boards provide communication with motor controllers, reception of encoder data for closed-loop stepper axes, control of open-loop stepper axes, and handling of two limit switches, one home switch, and several user-definable opto-isolated I/O lines. The RA, declination, dome rotation, and focus linear axes are closed loops, while the filter wheel is an open-loop stepper axis. The aperture of the mirror covers, the aperture of the dome shutter, and the azimuth slew movements of the dome are also computer controlled through the CSIMC boards. The CCD camera uses a USB connection to the computer for SW control of cooling, shutter aperture, and image acquisition and processing.

5.1. TALON

The basic equipment supplied by the telescope manufacturer included TALON, a package based on C and shell scripting that enables the control of different elements to automatically manage astronomical observations, and which is now released under a GNU license. The HW elements controlled by TALON are the following: telescope, roof/dome, CCD camera, weather station, and UPS unit. Two of its applications (MKSCH and TELSCHED) can be used to create a list of objects to observe and to generate an observation schedule for the following night, based on simple queue scheduling. Finally, it also has applications for image reduction and for photometric and astrometric analysis. Robotic operations could be carried out using this SW alone, but the risk level would be high and continuous human supervision would be required. Solutions have therefore been developed to integrate TALON into a more general structure that provides reliable and safe global control.

6. WP Housekeeping

Housekeeping is the most critical aspect when considering the safety of robotic operations. It involves real-time knowledge of the environment and HW reliability, aspects that have been addressed at the OAdM with the appropriate equipment.

The use of a single weather station would be suitable for most observatories, but it is insufficient for a robotic one. A special effort has been made to have, at least, as many sensors as the number of environment variables to monitor, and redundancies are mandatory for the variables most critical to observatory safety. The available sensors are: a Davis weather station, used as the main weather data supplier; a second weather station (Campbell Scientific), used for data validation; a rain detector (Eigenbrodt IRSS 88), an opto-electronic sensor that establishes the start and end of precipitation; a storm detector (INGESCO Previstorm), which measures the variation of the electrostatic field and has four switches that can automatically open electric lines to disconnect instrumentation in case of a high risk of electric storms; and a cloud sensor (Boltwood Cloud Sensor II), which derives the fraction of cloud cover by comparing the temperature of the sky with the ambient ground-level temperature. The information obtained with these sensors increases safety, but a tool to manage the data and the generated alarms is required (see Section 8).
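
The cloud sensor works on the principle that a clear sky radiates at a much lower effective temperature than the ground, so the sky-minus-ambient temperature difference is a cloudiness proxy. The sketch below only illustrates that test; the thresholds are assumptions, not the values configured at the OAdM.

```python
# Illustrative sky-minus-ambient temperature test for cloud detection.
# The thresholds are assumed values, not OAdM settings.

CLEAR_THRESHOLD_C = -25.0   # sky much colder than the ground: likely clear
CLOUDY_THRESHOLD_C = -10.0  # sky close to ambient temperature: likely overcast


def cloud_condition(sky_temp_c, ambient_temp_c):
    """Classify the sky from the sky-to-ambient temperature difference."""
    delta = sky_temp_c - ambient_temp_c
    if delta <= CLEAR_THRESHOLD_C:
        return "clear"
    if delta >= CLOUDY_THRESHOLD_C:
        return "cloudy"
    return "partly cloudy"


print(cloud_condition(sky_temp_c=-32.0, ambient_temp_c=5.0))  # clear
print(cloud_condition(sky_temp_c=-2.0, ambient_temp_c=6.0))   # cloudy
```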

Internet communications and power supply are two more elements to take into account for the safety of the observatory. A radio link antenna with 10 Mbps bandwidth is used for external communication. It connects the observatory with the Anella Científica (Scientific Ring), a high-speed communications network (2 Gbps bandwidth) managed by CESCA (the Supercomputing Center of Catalonia) that connects universities and research centers in Catalonia. This network is connected to RedIRIS, the Spanish research network, and through RedIRIS it reaches the most advanced international research networks. Continuous monitoring of the internet connection is done by CESCA, which activates an alarm protocol when the connection breaks. Network monitoring is also done locally at the observatory, and an alarm manager is in charge of executing an alarm protocol if necessary.

6.1. Power and Electronics

The reliability of the power supply and the protection of signals against induced currents and perturbations were also a matter of concern. The use of several UPS units, SW-controlled switches, and electrical insulation (optocouplers) increased the security, stability, and reliability of the power supply. The observatory is equipped with instruments to ensure a continuous supply of energy and to avoid any unforeseen distortion or power outage. They are mandatory due to the isolation of the site and the characteristics of the local weather, with frequent lightning during storms, especially in summer. The initial design of the building included two lightning rods, a deep connection to ground, and the above-mentioned storm detector for lightning protection. However, some events showed that more protection and filtering were needed: voltage peaks produced by induced currents in the communication wires, perturbation of the communication signal due to the proximity of power lines, random power outages, and so forth.

Three UPS units using double-conversion online technology (level 9, the maximum level of protection, with the batteries always connected to an inverter so that no power transfer switches are necessary) were installed to supply the equipment that is most sensitive to power fluctuations and requires electrical isolation: astronomical instrumentation, sensors, computers, and dome. These UPSs provide a typical backup time of 15–30 minutes and support a computer-controlled switch-off.

Protection elements were installed at all electronic connections where the communication between components required a cable length above 3–5 m. Commercial solutions were used for standard serial ports and network connectors, and optocoupler elements were implemented for the rest.

Noise reduction in the communication wiring was achieved by using optocouplers or fiber optic cables. Electrical noise was a major concern for the dome encoder signals, because the cable transmitting them ran very close to the dome azimuth and shutter motors. These three-phase AC motors produce voltage peaks when movement starts, which caused false signals and frequent encoder damage. The use of a new encoder with high immunity to noise interference, designed for industrial applications (Hengstler Incremental Shaft Encoder, type RI30), in combination with fiber optic wiring drastically reduced the noise and corrected the position detection problem.

Finally, different grounding of physically linked components caused distortion of electric signals; this could only be solved by electrical insulation.

7. WP Data Storage and Backup

A data storage and backup policy was designed and implemented to handle the data and ensure safe storage of raw images, extracted data, and logging information.

The maximum data rate produced by the TJO is 8 GB per day. On-site and off-site backups are conducted daily, using Redundant Arrays of Independent Disks (RAID) that provide redundancy and high reliability.

Stored images are compressed using the Rice algorithm. It is 2-3 times faster than GZIP or Hcompress, compresses an image by a factor of about 3 (1.4 times better than GZIP), and keeps the headers of all extensions readable in the compressed files [3].
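
As an illustration of the file format in use, the sketch below writes a FITS tile-compressed image with the Rice algorithm using astropy (assumed to be available); ICAT itself relies on CFITSIO, and the file names here are placeholders.

```python
# Sketch: Rice tile compression of a FITS image with astropy.
# File names are placeholders; the OAdM pipeline uses CFITSIO.
from astropy.io import fits

with fits.open("raw_image.fits") as raw:           # hypothetical input file
    data = raw[0].data                             # integer CCD data compress losslessly
    header = raw[0].header
    compressed = fits.CompImageHDU(data=data, header=header,
                                   compression_type="RICE_1")
    # A compressed image HDU cannot be the primary HDU, hence the empty primary.
    fits.HDUList([fits.PrimaryHDU(), compressed]).writeto(
        "raw_image.fits.fz", overwrite=True)
```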

8. WP Systems Control

Observatory control is based on a distributed task scheme executed on several computers. The distribution of applications in four subsystems (see Figure 1) implements a top-down control structure that manages the workflow of the observatory. Almost all the SW acquired and developed runs on a UNIX platform, chosen for its stability and reliability.

8.1. Environment Control System (ECS)

The main purpose of this system is to provide a set of tools to monitor the conditions (environment, HW, SW applications, power supply, etc.) of the observatory and to generate and manage alarms according to these conditions.

The ECS provides a reliable control mechanism ensuring that, when conditions become suitable or unsuitable for observation, all the systems involved are able to act accordingly. The system provides the following features.

(i) Constant monitoring of weather conditions: temperature, wind, relative humidity, atmospheric pressure, cloud cover, approaching storms, rain, and so forth.
(ii) Information about the monitored elements: these data are served to clients.
(iii) Alarm generation and management: alarms are generated and managed according to defined criteria. These alarms are designed per observation setup (different location or different configuration), since each one may have different observation requirements. The same ECS can be used for several observatories at the same site, but each one needs a different alarm manager; an infrastructure for more than one alarm generator is therefore provided. Alarms are generated when an observation should start or should not continue, when the system should shut down, and so forth.
(iv) Reliability: the ECS is critical, so the SW has been designed to execute a reliable response when an alarm is triggered. The ECS is self-controlled, thanks to the distributed task scheme.

As can be seen in Figure 2, the ECS is composed of two main internal elements and many external elements. The two internal blocks are as follows.

(i) Environment monitor (EM): all the environment sensors are connected to this component, which serves all the environmental data to the other elements of the system using a server-client architecture. It reads the collected data and sends them to the clients connected over the network. Its main feature is to provide a “generic’’ format and structure for the weather and environmental data collected from many different sensors (a minimal sketch of this idea is given below). Each sensor usually has its own protocol or data format, which increases the complexity of a system managing data from several sensors; with the EM, every connected client receives a common data structure containing the different values collected. This approach reduces the overhead for these clients and simplifies their implementation. In terms of portability, once this package is installed at a new observatory, only one part has to be adapted: the lowest-level part, the driver framework. It has to be adapted only when different kinds of sensors are used, because each sensor normally has its own application programming interface (API) and/or data stream. This application is implemented using Mono, C#, and C/C++.
(ii) Alarm manager (AM): this element generates and manages alarms according to its status checks of HW and SW elements and the data values received from the EM. There can be as many AMs as needed, and there should be at least one per facility. In the case of the OAdM, there are several AMs, as described next (Section 8.2). AMs are coded in Java and use XML configuration files.
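
The common data structure idea can be illustrated with a minimal server sketch: per-sensor drivers, each speaking its own protocol, are merged into one generic record that every client receives. The field names, port, and JSON-over-TCP transport below are assumptions for illustration; the real EM is written in Mono/C# and C/C++.

```python
# Minimal sketch of the Environment Monitor idea: normalize per-sensor
# readings into one common record and serve it to any connected client.
# Field names, port, and the JSON transport are illustrative assumptions.
import json
import socketserver
import time


def read_davis():
    """Driver stub: the real driver speaks the station's own protocol."""
    return {"temperature_c": 7.4, "humidity_pct": 62.0, "wind_ms": 3.1}


def read_boltwood():
    """Driver stub for the cloud sensor."""
    return {"sky_minus_ambient_c": -28.0}


def collect():
    """Merge all drivers into the generic record served to every client."""
    record = {"timestamp": time.time(), "station": "OAdM"}
    record.update(read_davis())
    record.update(read_boltwood())
    return record


class EMHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # Every client gets the same common data structure, whatever the sensors.
        self.wfile.write((json.dumps(collect()) + "\n").encode())


if __name__ == "__main__":
    with socketserver.TCPServer(("0.0.0.0", 5050), EMHandler) as server:
        server.serve_forever()
```
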
8.2. Alarm Managers

This block is part of the ECS. It is described in a separate section because of its critical relevance to the safety of observatory management.

Like the EM, it has a server-client architecture, with one or more AM servers and AM clients running per facility located at the same site. At the OAdM, the AMs are distributed so that each one runs on a different computer, which identifies a subsystem. They all share the same generic routines, and a configuration file defines which routines apply to a particular subsystem and must be activated.

The subsystems and the routines running in their associated AMs are listed below.

(i) SUB-ESTALL: this subsystem includes the control of all the HW involved in the observation process: telescope, CCD camera, and dome. The AM responds to incorrect behavior of any of these instruments.
(ii) SUB-ALIS: subsystem in charge of data management and backup, the pipeline, and the scheduling routines. It is installed on the server.
(iii) SUB-REBEI: the EM runs on this computer. The AM checks that this application is running correctly.
(iv) SUB-SARGA: subsystem in charge of the redundant dome control application. Environment alarms, or any problem detected in the SUB-ESTALL subsystem, trigger a shutter-close order sent by the AM.

SUB-REBEI and SUB-SARGA are the most critical subsystems and must work with a high level of confidence, as justified in Section 2.4. Two extremely robust computers (low-dissipation CPU, a modest amount of RAM, flash memory holding a Linux distribution, no hard disk, etc.) are used for that purpose.

Finally, the routines common to all the AMs are described below.

(i) Environment: every subsystem continuously checks the environment conditions and activates a protocol when critical values are detected. For weather data registered by redundant sensors, a validation routine decides whether any sensor is not working properly or has lost calibration (a sketch of such a routine is given below). The response of the AM to a weather alarm depends on the HW each subsystem controls.
(ii) Network control: every subsystem monitors both local and remote network services. When an error is detected in any subsystem, the other AMs evaluate the risk and decide which protocol must be executed.
(iii) Power: each UPS is constantly monitored by the AMs. When an error in the power supply network is detected, every subsystem AM responds by shutting down all the HW it controls, if necessary. An automatic restart is also defined to ensure the correct sequence when switching the subsystems back on.

An AM uses broadcasting to inform the AMs of the other subsystems when an alarm appears.
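
The environment routine can be illustrated with a small validation sketch for a pair of redundant humidity sensors: readings that disagree beyond a tolerance flag a possible sensor fault, while readings above a critical limit raise a weather alarm. The tolerance and limit below are assumptions, not the OAdM settings.

```python
# Sketch of an AM validation routine for redundant humidity sensors.
# The tolerance and critical limit are assumed values for illustration.

HUMIDITY_LIMIT_PCT = 90.0       # critical value that triggers a close-down
AGREEMENT_TOLERANCE_PCT = 10.0  # maximum disagreement before suspecting a sensor


def check_humidity(main_pct, redundant_pct):
    """Return the alarm decision derived from the two humidity readings."""
    if abs(main_pct - redundant_pct) > AGREEMENT_TOLERANCE_PCT:
        # Sensors disagree: take the worst reading and flag a possible fault.
        decision, worst = "sensor_fault", max(main_pct, redundant_pct)
    else:
        decision, worst = "ok", (main_pct + redundant_pct) / 2.0
    if worst >= HUMIDITY_LIMIT_PCT:
        return "weather_alarm"  # would be broadcast to the other subsystem AMs
    return decision


print(check_humidity(62.0, 64.0))  # ok
print(check_humidity(83.0, 96.0))  # weather_alarm (conservative choice)
```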

8.3. Interfaces

Software interfaces between all the applications have been developed to integrate new packages with the commercial ones, in order to have a software suite that manages the whole observatory in a consistent manner.

A GUI to remotely control and monitor the observatory status is currently in the design phase.

Proposal Handling and Data Management
A new proposal is submitted in two phases. In Phase 1, proposers submit a scientific justification and an observation summary for review. The TAC recommends a list of programs with associated priorities to the TJO for preliminary approval and implementation. A technical assessment is the last step in this phase, determining the feasibility of the proposal and its final approval. In Phase 2, investigators with approved Phase 1 proposals provide the complete details of the observations in their proposed observing program. This allows the TJO to conduct a technical feasibility review, and to schedule and carry out the observations.

A web-based proposal submission tool stores all the information in the database. The information is then accessible to the TAC for evaluating the proposals and to the automatic scheduler.

The images and the extracted data stored in the observatory archive are easily accessed using a web interface. Users can download the data identified with their proposal code after logging into a private area of the observatory web page.

9. WP Database

The OAdM database (DB) follows a relational model, characterized by the use of a common attribute as a key identifier to link information about the same entry stored in different tables. The management system used is MySQL, which is widely deployed and regarded as fast. The database is updated by different applications using an ORM (Object-Relational Mapping) and SQL queries.

The optimal structure of a DB depends on the natural organization of the application data and on the application requirements, which include transaction rate (speed), reliability, maintainability, scalability, and cost. For the OAdM, different levels are identified following these criteria (see Figure 3). The levels are detailed below, according to the global control design and the specific applications that interact with each one, sorted from high (Proposals and Projects) to low (Selection and Observation) levels of human interaction.

(i) Proposals: general information included in the Phase 1 proposal submission (proposal identifier, investigator data, target list and scientific justification, date of submission, etc.). New entries are generated through a web user interface; the acceptance date is updated manually after submission to the observatory TAC, the proposal identifier is created automatically, and the date of completion and the status are updated automatically.
(ii) Projects: specific information from the submitted proposal (Phase 2 submission, as described in Section 8.3), including observational data and scheduling constraints (detector to use, filters, exposure times, number of iterations, period between iterations, environment conditions, etc.). It also includes the current number of successful observations, which is updated automatically. The entries of this table are accessed by the scheduler after validation by the observatory staff.
(iii) Selection: temporary list of targets extracted from the Projects table according to their possibility of observation, following defined criteria. This list is generated by the scheduler application and is used as input for the dispatch scheduler (Section 10).
(iv) Observation: entries generated by the application that manages image acquisition when an observation is executed. There is one entry per observation, which includes: proposal and target identifiers, date and time of observation, calibration and analysis status, storage directory and file names (raw image and analysis results), image quality report (seeing, etc.), error flags, and a quality average (used to decide whether an image fulfills the user requirements). This table is used by the calibration and analysis tool (Section 11) to decide which images have to be processed and to record where the results of that process are stored.

A project is finished when the number of successful observations equals the number of required iterations. When all the projects included in the same proposal have been executed, the proposal is considered complete and a notification is automatically sent to the proposal PI.
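
A simplified, runnable sketch of this relational layout is given below, using the standard-library sqlite3 module to keep the example self-contained (the OAdM database itself runs on MySQL); the table and column names are illustrative, not the real schema.

```python
# Simplified relational sketch of the Proposals/Projects/Observation levels.
# sqlite3 is used here only to keep the example self-contained; the real DB
# is MySQL, and the column names are illustrative assumptions.
import sqlite3

SCHEMA = """
CREATE TABLE proposals (
    proposal_id  INTEGER PRIMARY KEY,
    pi_name      TEXT,
    submitted_on TEXT,
    accepted_on  TEXT,
    status       TEXT DEFAULT 'pending'
);
CREATE TABLE projects (
    project_id    INTEGER PRIMARY KEY,
    proposal_id   INTEGER REFERENCES proposals(proposal_id),
    target_ra_deg REAL, target_dec_deg REAL,
    filter        TEXT, exptime_s REAL,
    n_iterations  INTEGER, n_successful INTEGER DEFAULT 0
);
CREATE TABLE observations (
    obs_id        INTEGER PRIMARY KEY,
    project_id    INTEGER REFERENCES projects(project_id),
    obs_datetime  TEXT, raw_filename TEXT,
    seeing_arcsec REAL, quality_ok INTEGER
);
"""

con = sqlite3.connect(":memory:")
con.executescript(SCHEMA)

# A project is finished when the successful observations reach the requested iterations.
finished = con.execute(
    "SELECT project_id FROM projects WHERE n_successful >= n_iterations"
).fetchall()
print(finished)
```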

10. WP Scheduler

The scheduler's main task is time optimization, which has a direct effect on the scientific return achieved by an observatory. It is therefore the core SW application of a robotic facility and must be fast and reliable.

The OAdM scheduling is separated into two parts.

(i) Pre-scheduling: makes a temporary selection of objects, according to their possibility of observation, from the projects approved by the TAC. The selection is done before the beginning of the night following these criteria: the height above the horizon is positive at some moment during the night; the night date is included in the defined observation window; the moon brightness and distance fulfill the user requirements; and so forth.
(ii) Dispatch-scheduling: a dispatch scheduling is executed every time a target observation finishes and a new one must be scheduled. It is done in real time according to the current environment conditions and the set of priorities. The application uses the objects selected by the pre-scheduling process as input and executes an algorithm that computes a figure of merit for each object; the object with the highest merit value is then scheduled. A general definition of the merit function is

    M = Σ_i w_i f_i,

where the functions f_i and their associated weights w_i describe the different effects that increase or reduce the global merit of each object (a minimal sketch of this approach is given at the end of this section). Among these effects, we consider (a) observing conditions: height above the horizon, moon brightness and distance from the moon, best time in case the observation is not required at the meridian, period between iterations, distance from the previous target, and proposal priority; (b) environment conditions: seeing and photometric conditions; (c) proposal history: a term to ensure a balance between the observation time given to different proposals that have the same priority.

The precise definition of the dispatch-scheduling algorithm is still under study. Other works with a similar approach to this problem can be found in the literature [4, 5].
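
A minimal sketch of the dispatch step, under the assumption of a plain weighted sum M = Σ_i w_i f_i with three normalized terms, is given below; the terms, weights, and normalizations are illustrative placeholders and not the algorithm finally adopted.

```python
# Sketch of a dispatch scheduler: compute M = sum_i(w_i * f_i) for each
# candidate and pick the maximum. Terms, weights, and normalizations are
# illustrative assumptions, since the OAdM algorithm is still under study.
from dataclasses import dataclass


@dataclass
class Target:
    name: str
    altitude_deg: float       # current height above the horizon
    moon_distance_deg: float  # angular distance from the Moon
    priority: int             # TAC priority, 1 (highest) to 5 (lowest)


def merit(t, weights=(0.5, 0.3, 0.2)):
    """Weighted sum of normalized merit terms for one candidate target."""
    f_altitude = max(t.altitude_deg, 0.0) / 90.0    # favour high targets
    f_moon = min(t.moon_distance_deg, 90.0) / 90.0  # favour fields far from the Moon
    f_priority = (6 - t.priority) / 5.0             # favour high-priority proposals
    w1, w2, w3 = weights
    return w1 * f_altitude + w2 * f_moon + w3 * f_priority


def dispatch(candidates):
    """Pick the candidate with the highest merit value."""
    return max(candidates, key=merit)


queue = [Target("target A", 62.0, 75.0, 2), Target("target B", 35.0, 120.0, 1)]
print(dispatch(queue).name)  # prints the target with the highest combined merit
```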

11. WP Data Processing—The IEEC Calibration and Analysis Tool (ICAT)

Reliable and fast data calibration and analysis software is crucial for automatically processing the large number of images obtained at a robotic observatory. The ICAT software has been developed as a tool for robotic observatories with the objective of managing astronomical images and extracting the relevant scientific information in real time.

ICAT is based on Perl scripting to control the program flow and manage the image files, and it is executed together with UNIX shell scripts, NOAO-IRAF scripts, and CFITSIO library routines. During installation, a few parameters must be set according to the system: the minimum number of calibration images to use, image header keywords, analysis parameters (crowding boundary), and so forth. After that, the reduction is automatic. ICAT has been designed to be easily adapted to other observatory setups.

Its general characteristics are: automatic management and treatment of FITS images according to database input information (Section 9), high-accuracy photometric and astrometric data extraction, and real-time execution. Although automatic execution is its main functionality, user-controlled execution is also available through a web interface (written in PHP), which allows the operator to modify a few parameters to adjust the reduction process to different criteria. Two kinds of reduction are executed automatically.

(i) On-the-fly reduction: done after each observation in order to determine whether the image has the required quality; the information obtained is then used by the scheduler to decide which object to observe next (a sketch of the underlying calibration arithmetic is given after this list).
(ii) Final reduction: done during the day; it includes obtaining the final master calibration images before calibrating and analyzing the astronomical images, in order to improve the quality of the extracted data.
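
The calibration arithmetic behind these reductions can be sketched in a few lines; the version below uses numpy/astropy instead of IRAF purely for illustration, and the file names and the quality criterion are assumptions.

```python
# Sketch of the basic calibration arithmetic (bias/dark subtraction and
# flat-field division) plus a crude on-the-fly quality check. Written with
# numpy/astropy for illustration; ICAT itself drives IRAF and CFITSIO.
import numpy as np
from astropy.io import fits


def calibrate(raw_file, bias_file, dark_file, flat_file):
    raw = fits.getdata(raw_file).astype(float)
    bias = fits.getdata(bias_file).astype(float)
    dark = fits.getdata(dark_file).astype(float)  # assumed scaled to the exposure time
    flat = fits.getdata(flat_file).astype(float)
    flat /= np.median(flat)                        # normalized flat field
    return (raw - bias - dark) / flat


def quick_quality_ok(image, max_background=5000.0):
    """Crude on-the-fly check: reject frames with an abnormally bright sky."""
    return float(np.median(image)) < max_background


calibrated = calibrate("object.fits", "masterbias.fits",
                       "masterdark.fits", "masterflat.fits")
print("usable for scheduling feedback:", quick_quality_ok(calibrated))
```
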
11.1. Reliable and Fast Data Calibration and Analysis SW

Three packages are used for different purposes according to their capabilities: NOAO-IRAF, a calibration and analysis SW suite; DAOPHOT [6], a reliable PSF photometry package; and SExtractor [7], a fast and precise analysis SW based on flexible aperture photometry.

(i) Calibration: NOAO-IRAF packages are used to combine the raw astronomical images with the processed bias, dark, and flat images.
(ii) Analysis: two different SW packages are used depending on the image crowding, in order to obtain good photometric quality with a minimum execution time. After a fast analysis with SExtractor, ICAT defines a density function on each image and identifies the crowded areas (see the sketch after this list). The DAOPHOT package is then applied to those areas to obtain PSF photometry, using an improved selection of stars for the computation of the PSF matching function. Non-crowded areas are analyzed with SExtractor. The astrometric and photometric data for all the objects in the entire image are then a combination of the data obtained with SExtractor and DAOPHOT.
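
The crowding test can be sketched as a source-density map built from a fast catalog pass: cells whose density exceeds a threshold are routed to PSF photometry, while the rest stay with aperture photometry. The grid size and threshold below are assumptions for illustration.

```python
# Sketch of the crowding test that routes image regions either to DAOPHOT
# (PSF photometry) or to SExtractor (aperture photometry). The cell size
# and density threshold are illustrative assumptions.
import numpy as np


def crowded_cells(x, y, image_shape, cell_px=256, max_sources_per_cell=50):
    """Return a boolean grid marking cells whose source density is too high."""
    ny, nx = image_shape
    counts = np.zeros((ny // cell_px + 1, nx // cell_px + 1), dtype=int)
    for xi, yi in zip(x, y):
        counts[int(yi) // cell_px, int(xi) // cell_px] += 1
    return counts > max_sources_per_cell


# x, y would normally come from the fast SExtractor pass over the image.
rng = np.random.default_rng(0)
x = rng.uniform(0, 2048, 4000)
y = rng.uniform(0, 2048, 4000)
mask = crowded_cells(x, y, image_shape=(2048, 2048))
print("cells sent to PSF photometry:", int(mask.sum()))
```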

12. Conclusions

The OAdM is designed to be an unattended small-class observatory, and its HW and SW features respond to this aim. Early results obtained during the commissioning phase have shown that robotic operations can already be carried out with proper management of the risks. No human intervention is required during an entire night of operations, but the reliability of the system still has to be improved. The packages in charge of enhanced safety, those included in the ECS, are currently being developed. The other tools that complement robotic operations and enable the reduction of human attendance are also under development, except for the ICAT pipeline, which has already been finished and tested with the calibration and analysis of several photometric exoplanet transit observations (see Figure 4). Its capabilities for efficient and reliable extraction of scientific data have been proven. The OAdM is expected to start high-level robotic operations in a few months.

Acknowledgments

The authors wish to thank all who have contributed to the development of this project. The OAdM project is supported by the Catalan government and led by its local official institution, the Consorci del Montsec. Research institutes and universities have actively contributed scientific and technical supervision and manpower to the development of the project: the Institute for Space Studies of Catalonia (IEEC), the University of Barcelona (UB), the Technical University of Catalonia (UPC), the Spanish Research Council (CSIC), and the Joan Oró Foundation (FJO).