The Large Hadron Collider (LHC) has entered its second operational period, reaching center-of-mass energies of 13 TeV for proton-proton collisions. This is almost twice the energy achieved during the previous run, in which protons were accelerated to 4 TeV. During Long Shutdown 1 (LS1), the machine protection systems were upgraded accordingly, allowing these record energies and beam intensities to be achieved safely in the LHC. While powerful supervisory systems exist for the majority of the LHC equipment systems, the sheer number of diagnostics and monitoring signals does not allow for continuous screening and surveillance of all available machine data. Today, the LHC control system alone monitors more than 4 million signals, with update rates ranging from 1 Hz to ~10 Hz. Of these, around 1 million are permanently stored in the CERN Accelerator Logging System (CALS).

A continuous, yet configurable, monitoring and analysis of a subset of the available data would allow machine protection and machine availability to be further enhanced by detecting early signs of potential failures and degraded machine performance. The monitoring framework therefore has to be capable of coping with complex signal analysis requirements such as:
- signals coming from multiple sources (such as the CERN Accelerator Logging System for low-frequency acquisitions, the LHC Post Mortem system storing transient data recordings, real-time data from equipment systems, etc.) must be correlated.
- signals must be monitored over long periods, typically hours to days, or even years.
- the monitoring must be performed selectively, as a function of the operational state of the machine, the beam mode, or other signals.
The vast amount of available data, and the fact that signals are mostly of interest during short periods of their acquisition time only (e.g. to observe the state of certain devices after significant events), calls for a framework that allows users to dynamically configure (and subsequently modify) signal acquisitions, conditions, and analyses, and to trigger corresponding actions if deemed necessary.
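Such a dynamically configurable rule could be modelled as a condition paired with an action, registered and modified at runtime. The sketch below illustrates this idea only; all class, signal, and parameter names are hypothetical and do not correspond to the actual CERN framework API.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical sketch: a monitoring rule couples a signal name with a
# condition (predicate) and an action to trigger when the condition holds.
@dataclass
class MonitoringRule:
    signal: str                           # signal name (illustrative)
    condition: Callable[[float], bool]    # evaluated on each new value
    action: Callable[[str, float], None]  # triggered when the condition holds
    enabled: bool = True                  # rules can be (de)activated at runtime

class Monitor:
    def __init__(self) -> None:
        self.rules: list[MonitoringRule] = []

    def add_rule(self, rule: MonitoringRule) -> None:
        # Rules can be added (and later modified or disabled) dynamically.
        self.rules.append(rule)

    def on_value(self, signal: str, value: float) -> None:
        # Evaluate every enabled rule registered for this signal.
        for rule in self.rules:
            if rule.enabled and rule.signal == signal and rule.condition(value):
                rule.action(signal, value)

# Example: flag a (hypothetical) magnet temperature signal above a threshold.
alerts: list[tuple[str, float]] = []
monitor = Monitor()
monitor.add_rule(MonitoringRule(
    signal="MAGNET.TEMP",                     # illustrative signal name
    condition=lambda v: v > 4.5,
    action=lambda s, v: alerts.append((s, v)),
))
monitor.on_value("MAGNET.TEMP", 4.7)  # condition holds, action triggered
monitor.on_value("MAGNET.TEMP", 4.2)  # condition does not hold
```

Separating the condition from the action is what makes the configuration modifiable at runtime: a rule can be disabled, or its threshold replaced, without touching the acquisition side.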
The monitoring of signals calls for a wide range of features allowing a first pre-processing and analysis of the data, ranging from simple assertions (i.e. the validation of a simple analysis criterion) to more complex calculations. Equipment and domain experts defining the required analysis logic should ideally not have to deal with the complexity of the signal data representation, nor with the complexity of the APIs of the data sources, such as online data acquisition in CALS, LSA, Post Mortem, etc. The proposed approach is based on re-using an embedded Domain Specific Language (eDSL), originally developed to achieve the same objective in the framework of the LHC Powering Test campaigns.
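To illustrate the flavour of such an assertion, the sketch below shows a fluent, eDSL-like builder that hides the data representation behind readable method names. This is an illustrative mock-up, not the actual LHC Powering Tests eDSL; all builder methods and signal names are assumptions.

```python
# Hypothetical sketch of an eDSL-style assertion: a fluent builder lets a
# domain expert state a criterion without touching the data-source APIs.
class AssertionBuilder:
    def __init__(self) -> None:
        self._signal = None
        self._lo = None
        self._hi = None
        self._when = None

    def assert_that(self, signal: str) -> "AssertionBuilder":
        self._signal = signal
        return self

    def is_within(self, lo: float, hi: float) -> "AssertionBuilder":
        self._lo, self._hi = lo, hi
        return self

    def when(self, predicate) -> "AssertionBuilder":
        # Restrict the assertion to a given machine state, e.g. a beam mode.
        self._when = predicate
        return self

    def check(self, data: dict) -> bool:
        # 'data' maps signal names to their latest values.
        if self._when is not None and not self._when(data):
            return True  # assertion not applicable in the current state
        value = data[self._signal]
        return self._lo <= value <= self._hi

# Usage: validate a (hypothetical) current signal only during STABLE BEAMS.
assertion = (AssertionBuilder()
             .assert_that("RB.CURRENT")       # illustrative signal name
             .is_within(0.0, 12000.0)
             .when(lambda d: d["BEAM_MODE"] == "STABLE BEAMS"))

assertion.check({"RB.CURRENT": 11000.0, "BEAM_MODE": "STABLE BEAMS"})  # True
assertion.check({"RB.CURRENT": 13000.0, "BEAM_MODE": "STABLE BEAMS"})  # False
```

The fluent chain reads close to the natural-language statement of the criterion, which is the point of embedding a DSL: the expert expresses *what* to validate, while the framework maps signal names to the underlying acquisition systems.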