Introduction

== What is Midas ==
"MIDAS" is an acronym for Maximum Integrated Data Acquisition System.  
"MIDAS" is an acronym for Maximum Integrated Data Acquisition System.  


MIDAS is a general-purpose system for event-based data acquisition in small and medium scale physics experiments. It has been under ongoing development at the Paul Scherrer Institute (Switzerland) and at TRIUMF (Canada) since 1993. Presently, development is focused on the interfacing capability of the MIDAS package to external applications such as ROOT for data analysis (see Data Analyzers).


''' It is not to be confused with MIDAS (Multi Instance Data Acquisition System) from the UK, any of the MIDAS (Mobile Instrumentation Data Acquisition System) products, MIDAS digital and analogue consoles, or MIDAS Brakes and Mufflers! '''


MIDAS is based on a modular networking capability and a central database system. It consists of a C library and several applications which can run on many different platforms (i.e. operating systems) such as UNIX-like systems, Windows NT, VxWorks etc. While the system is already in use in many laboratories, development continues with the addition of new features and tools. Recent developments involve multi-threading, FPGA/Linux support and the MSCB extension. For the latest status, check the MIDAS home pages in Switzerland and Canada.


MIDAS has been designed for small and medium-sized experiments. It can be used in distributed environments where one or more frontends (applications acquiring data from the hardware) are connected to the backend (the application that gathers the data from the frontends and manages the run sequence) via the network (e.g. Ethernet).


==What Midas can do for you ==
In a few words, Midas can:
* '''Collect data from local and/or remote hardware sources within your defined experiment.'''
* Provide the means to configure your hardware.
* Manage the data flow and the control flow.
* Provide tools for data and data-flow monitoring (console and web applications).
* Implement experiment access security.
* Record the data to common storage media (disk, tape, FTP).
* Include a programming layer to interface the data stream to your favorite data analysis package.


==History==
In the early '90s, building on a previous data acquisition system with network capability running under MS-DOS (HIX), Dr. Stefan Ritt at the Paul Scherrer Institute (PSI, Switzerland) started coding a new set of applications which would be OS independent. At that time, operating systems such as VMS, ULTRIX, VxWorks and Windows were available.
The first deployment of Midas was for the "Canadian High Acceptance Orbital Spectrometer" (CHAOS) experimental facility at TRIUMF (Canada). The system was network based: data were collected by a VME processor running VxWorks (reading CAMAC, FastBus and VME hardware) and sent to a backend computer running VMS and later ULTRIX.


Since then, Midas has been deployed on all major experiments at TRIUMF and PSI, and it is used around the world in over 80 locations, from simple workbench test setups to world-class experiments: PiBeta and the muon decay experiment [http://meg.icepp.s.u-tokyo.ac.jp/ '''MEG'''] (PSI, Switzerland), antihydrogen trapping at [http://alpha.web.cern.ch/ '''ALPHA'''] (CERN, Switzerland), neutrino oscillation at [http://t2k-experiment.org/ '''T2K'''] (J-PARC, Japan), the PiENu decay study and the Twist precision muon measurement (TRIUMF, Canada), the dark matter search [http://deap.phy.queensu.ca/ '''DEAP'''] (Sudbury, Canada), and neutron capture at DANCE (Los Alamos, USA).


Midas has demonstrated its versatile capabilities and has proven to be a mature and modern data acquisition software package.
A non-exhaustive list of experiments can be found [https://midas.triumf.ca/MidasWiki/index.php/MIDAS_around_the_world here].


=== Recent Developments ===
* July 2013 : Stefan Ritt visit at TRIUMF
** Midas code moved from the local SVN repository to the cloud-based Git repository on Bitbucket, [https://bitbucket.org/tmidas/midas Tmidas]
** Implementation of multi-threaded transitions
** New Midas web page
** JSON, JSON-P support for custom Midas web pages
* Jan 2014 : New "FILE" history logging scheme
* July 2015 : Stefan Ritt visit at TRIUMF
* August 2015 : Important network security upgrades:
** Default MIDAS is now secure (clients must run on localhost only)
** Network connections must be explicitly allowed (see [[Security]])
** [[mhttpd]] uses secure HTTPS connections
* January 2016 : mjson-rpc functions added to the Javascript library [[mhttpd.js]] - see [[mjsonrpc]]
* July 2017
** New web page layout with menu buttons on the left side and alarm and message display on every page
** New custom pages based on the mjson-rpc layer - see [[Custom_Page|Custom Pages 2017]]


==[[Overall Midas system diagram]]==
A graphical view of the generic acquisition components forming the MIDAS DAQ.


==[[Midas Core]]==
A general description of the different components composing the Midas data acquisition package.

===Midas components===
The main elements of the MIDAS package are listed below with a short description of their function. Please refer to the [[Overall Midas system diagram|diagram of the MIDAS system]] to see how these elements interact to form the MIDAS system.

* Buffer Manager : event collection and distribution
* Message System : status and error message handling
* Online Database (ODB) : central experiment configuration and status database
* Frontend : acquisition code
* MIDAS Server : remote access server (RPC server)
* Data Logger : data storage
* Analyzer : data analysis
* Run Control : data flow control
* Slow Control system : device monitoring and control
* History system : event history storage and retrieval
* Alarm System : overall system and user alarms
* Electronic Logbook : online user logbook
* Run Sequencer : automated run sequencing

===The Buffer Manager===
The "buffer manager" consists of a set of library functions for event collection and distribution. A buffer is a shared memory region in RAM which can be accessed by several processes, called "clients". Processes sending events to a buffer are called "producers"; processes reading events from the buffer are called "consumers".

A buffer is organized as a FIFO (First-In-First-Out) memory. Consumers can specify which type of events they want to receive from a buffer. For this purpose each event contains a MIDAS header with an event ID and other pertinent information.

Buffers can be accessed locally through the shared memory or remotely via the MIDAS server acting as an interface to that same shared memory.

A common problem in DAQ systems is the possible crash of a client, such as a user analyzer. This can cause the whole system to hang up, and may require a restart of the DAQ, causing a loss of time and possibly of precious data. In order to address this problem, a special watchdog scheme has been implemented. Each client attached to the buffer manager signals its presence periodically by storing a time-stamp in the shared memory. Every other client connected to the same buffer manager can then check if the other parties are still alive. If not, proper action is taken, consisting of removing the dead client's hooks from the system and leaving the system in a working condition.
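
The sketch below shows how a simple consumer might attach to a buffer and register an event request using the MIDAS C library. The buffer name "SYSTEM", the experiment name and the callback are placeholders, and the exact prototypes may differ slightly between MIDAS versions, so treat this as an illustration rather than a reference:

<pre>
#include <stdio.h>
#include "midas.h"      /* MIDAS C library */

/* Called by the buffer manager for every event matching our request. */
void process_event(HNDLE hBuf, HNDLE request_id, EVENT_HEADER *pheader, void *pevent)
{
   printf("Received event ID %d, serial %d, size %d bytes\n",
          pheader->event_id, (int) pheader->serial_number, (int) pheader->data_size);
}

int main(void)
{
   HNDLE hBuf;
   INT request_id, status;

   /* Empty host name: attach to the local shared memory directly. */
   status = cm_connect_experiment("", "sample", "consumer_example", NULL);
   if (status != CM_SUCCESS)
      return 1;

   /* Open the default "SYSTEM" buffer and request all event IDs. */
   bm_open_buffer("SYSTEM", DEFAULT_BUFFER_SIZE, &hBuf);
   bm_request_event(hBuf, EVENTID_ALL, TRIGGER_ALL, GET_ALL, &request_id, process_event);

   /* Let the MIDAS library dispatch events and watchdog checks. */
   do {
      status = cm_yield(1000);
   } while (status != RPC_SHUTDOWN && status != SS_ABORT);

   cm_disconnect_experiment();
   return 0;
}
</pre>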
 
 
===Message System===
 
Any client can produce status or error messages with a single call using the MIDAS library. These messages are then forwarded to any other clients who may be available to receive these messages, as well as to a central log file system. The message system is based on the buffer manager scheme, but with a dedicated header to identify the type of message. A dedicated buffer (i.e. shared memory) is used to receive and distribute messages. Predefined message types contained in the MIDAS library cover most of the message requirements. See Logging in MIDAS and Customizing the MIDAS data logging for more details.
 
 
===Online Database (ODB)===
 
In a distributed DAQ environment, configuration data is usually stored in several files on different computers. MIDAS, however, uses a different approach: all relevant data for a given experiment are stored in a central database called the "Online DataBase" (ODB). This database contains run parameters; logging channel information; condition parameters for front-ends and analyzers; slow control values; status and performance data; and any information defined by the user.
 
The main advantage of this concept is that all programs participating in an experiment have full access to these data without having to contact different computers. A possible disadvantage is the extra load put on the particular host serving the ODB. Since access to the database can be remote, the connection is performed through an RPC layer. MIDAS includes its own RPC, which has been optimized for speed. Byte ordering (i.e. big/little endian) is taken care of, so that cross-platform database access is possible: the RPC does not impose a fixed byte ordering but transmits data in the sender's native ordering and converts on the receiving side only when needed. Measurements show that up to 50,000 accesses per second can be obtained over a local connection, and around 500 accesses per second remotely through the MIDAS server (numbers from 1990).
 
The ODB is hierarchically structured, similar to a file system, with directories and sub-directories (see ODB Structure). The data are stored in key/data pairs, similar to the Windows NT registry. Keys can be dynamically created and deleted. The data associated with a key can be of different types such as bytes, words, double words, floats, strings, etc., or arrays of any of those. A key can also be a directory or a symbolic link (cf. Unix).
 
The MIDAS library provides a complete set of functions to manage and operate on these keys.
Any ODB client can register a "hot-link" between a local C-structure and any element of the ODB. The hot-link mechanism ensures that whenever a client (program) changes a value in this ODB sub-tree, the local C-structure automatically receives an update of the changed data. Additionally, a client can register a callback function which will be executed as soon as the hot-link's update has been received. For more information see Event Notification (Hot-Link) .
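
As an illustration, the fragment below reads a single ODB value and then registers a hot-link on it, so that the local variable and a callback are updated whenever any client changes the key. The ODB path "/Experiment/Trigger Threshold" is invented for this example and the prototypes are only approximate, so check the MIDAS library documentation for the exact signatures:

<pre>
#include <stdio.h>
#include "midas.h"

static float trigger_threshold;    /* local copy kept in sync by the hot-link */

/* Callback invoked after the hot-linked ODB value has been updated. */
void threshold_changed(HNDLE hDB, HNDLE hKey, void *info)
{
   printf("Trigger threshold is now %f\n", trigger_threshold);
}

int main(void)
{
   HNDLE hDB, hKey;
   INT size = sizeof(trigger_threshold);

   cm_connect_experiment("", "sample", "odb_example", NULL);
   cm_get_experiment_database(&hDB, NULL);

   /* Read the value once (TRUE: create the key if it does not exist yet). */
   db_get_value(hDB, 0, "/Experiment/Trigger Threshold", &trigger_threshold,
                &size, TID_FLOAT, TRUE);

   /* Register the hot-link between the ODB key and the local variable. */
   db_find_key(hDB, 0, "/Experiment/Trigger Threshold", &hKey);
   db_open_record(hDB, hKey, &trigger_threshold, sizeof(trigger_threshold),
                  MODE_READ, threshold_changed, NULL);

   /* cm_yield() lets the library deliver ODB updates and messages. */
   while (cm_yield(1000) != RPC_SHUTDOWN)
      ;

   cm_disconnect_experiment();
   return 0;
}
</pre>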
 
 
===MIDAS Server===
 
For remote access to a MIDAS experiment, a remote procedure call (RPC) server is available (mserver). It uses an optimized MIDAS RPC scheme for improved access speed. The server can be started manually, via inetd (UNIX), or as a service under Windows NT. For each incoming connection it creates a new sub-process which serves this connection over a TCP link. The MIDAS server not only serves client connections to a given experiment: it takes the experiment's name as a parameter, so a single MIDAS server can manage several experiments on the same node.
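
For example, a client running on another machine can connect through the mserver simply by giving a host name (instead of an empty string) when connecting to the experiment; the host and experiment names below are placeholders:

<pre>
#include "midas.h"

int main(void)
{
   /* A non-empty host name routes all ODB and buffer-manager calls
      through the mserver on that node instead of local shared memory. */
   INT status = cm_connect_experiment("daqhost.example.org", "sample",
                                      "remote_client", NULL);
   if (status != CM_SUCCESS)
      return 1;

   /* ... the same ODB and buffer calls as for a local client ... */

   cm_disconnect_experiment();
   return 0;
}
</pre>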
 
 
===Frontend===
 
The frontend program refers to a task running on a particular computer which has access to hardware equipment. Several frontends can be attached simultaneously to a given experiment. Each frontend can be composed of multiple Equipments. The term "Equipment" refers to a single or a collection of sub-task(s) meant to collect and regroup logical or physical data under a single and uniquely identified event.
 
The frontend program is composed of a general framework, which is experiment-independent, and a set of template routines for the user to fill in. This program will:

* Register the given Equipment(s) list to a specific MIDAS experiment.
* Provide the means of collecting data from the hardware sources defined by each Equipment read function.
* Gather these data in a known format (e.g. Fixed, MIDAS) for each equipment.
* Send these data to the buffer manager, either locally or remotely.
* Periodically collect statistics of the acquisition task and send them to the Online Database.

The frontend framework sends events to the buffer manager and optionally a copy to the ODB. A "data cache" in the frontend and on the server side reduces the number of network operations, pushing the transfer speed closer to the physical limit of the network configuration.
 
The data collection in the frontend framework can be triggered by several mechanisms. Currently the frontend supports five different kinds of event trigger:

* Periodic events: scheduled events based on a fixed time interval. They can be used to read information such as scaler values, temperatures etc.
* Polled events: the hardware trigger information is read continuously; whenever the signal is asserted, the equipment readout is triggered.
* LAM events: generated only when a pre-defined LAM is asserted (CAMAC).
* Interrupt events: generated by a particular hardware device supporting interrupt mode.
* Slow Control events: a special class of events used in the slow control system.

Each of these trigger types can be enabled/activated for a particular experimental State, Transition State, or a combination of any of them. Examples such as "read scaler event only when running" or "read periodic event if the run state is not paused and on all transitions" are possible.

Dedicated header and library files for hardware access to CAMAC, VME, Fastbus, GPIB and RS232 are part of the MIDAS distribution set.
For full details see the Frontend Operation documentation.
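
As an illustration, a readout routine for a periodic equipment typically fills a MIDAS bank as sketched below. The bank name "PERI" and the dummy values are invented for the example, the surrounding equipment definition and frontend boilerplate are omitted, and the bank-function prototypes may differ slightly between MIDAS versions:

<pre>
#include "midas.h"

/* Example readout routine for a periodic equipment.  The frontend
   framework calls it at the configured interval and ships the
   resulting event to the buffer manager. */
INT read_periodic_event(char *pevent, INT off)
{
   DWORD *pdata;

   bk_init(pevent);                                        /* initialise bank structure */
   bk_create(pevent, "PERI", TID_DWORD, (void **) &pdata); /* open a DWORD bank         */

   *pdata++ = 0;   /* in a real frontend: read scalers, temperatures, ... */
   *pdata++ = 1;

   bk_close(pevent, pdata);                                /* close the bank            */
   return bk_size(pevent);                                 /* event size for framework  */
}
</pre>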
 
 
===Data Logger===
 
The data logger is a client running on the backend computer, receiving events from the buffer manager and saving them onto disk, tape or, via FTP, to a remote computer. It supports several parallel logging channels with individual event selection criteria. Data can currently be written in several formats: MIDAS binary, ASCII, ROOT and DUMP (see MIDAS format).
 
Basic functionality of the logger includes:

* Run control based on:
** event limit not reached yet,
** recorded byte limit not reached yet,
** logging device not full.
* Logging selection of particular events based on the Event Identifier.
* Auto-restart feature allowing logging of several runs of a given size or duration without user intervention.
* Recording of ODB values to the MIDAS History System.
* Recording of the ODB to all or individual logging channels at the begin-of-run and end-of-run states, as well as to a separate disk file in XML or ASCII format.

For more information see Logging in MIDAS.
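
Because the logger is configured entirely through the ODB, any client (or odbedit) can steer it by writing the corresponding keys. The fragment below is a hedged sketch: the "/Logger/Data dir" and "/Logger/Write data" paths follow the usual default logger tree, but the key names and types should be checked against the ODB of your own installation:

<pre>
#include "midas.h"

int main(void)
{
   HNDLE hDB;
   char  data_dir[256] = "/data/current";   /* example output directory      */
   BOOL  write_data    = TRUE;              /* enable writing events to disk */

   cm_connect_experiment("", "sample", "logger_setup", NULL);
   cm_get_experiment_database(&hDB, NULL);

   /* Write the logger settings into the ODB; the data logger picks them up. */
   db_set_value(hDB, 0, "/Logger/Data dir", data_dir, sizeof(data_dir), 1, TID_STRING);
   db_set_value(hDB, 0, "/Logger/Write data", &write_data, sizeof(write_data), 1, TID_BOOL);

   cm_disconnect_experiment();
   return 0;
}
</pre>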
 
===Analyzer===
 
The Analyzer is a backend task (as opposed to the frontend). As in the frontend case, the analyzer provided by MIDAS is a framework on which users can develop their own applications. This framework can be built for private analysis (no external analyzer hooks) or against specific analysis packages such as HBOOK or ROOT from CERN (neither library is included in the MIDAS distribution). See the Data Analysis documentation for more information.
 
The analyzer takes care of receiving events (a few lines of code are necessary to receive events from the buffer manager); initializing the HBOOK or ROOT system; and automatically booking N-tuples/TTree for all events. Interface to user routines for event analysis is provided.
 
The analyzer is structured into "stages", where each stage analyses a subset of the event data. Low level stages can perform ADC and TDC calibration, while high level stages can calculate "physics" results. The same analyzer executable can be used to run online (where events are received from the buffer manager) and off-line (where events are read from file). When running online, generated N-tuples/TTree are stored in a ring-buffer in shared memory. They can be analysed with PAW without stopping the run.
 
When running off-line, the analyzer can read MIDAS binary files, analyse the events, add calculated data for each event and produce an HBOOK RZ output file which can be read by PAW later. The analyzer framework also supports analyzer parameters. It automatically maps C-structures used in the analyzer to ODB records via Event Notification (Hot-Link). To control the analyzer, only the values in the ODB have to be changed; they are automatically propagated to the analyzer parameters. If analysis software has already been developed, MIDAS provides the functionality necessary to interface the analyzer code to the MIDAS data channel. Support for languages such as C and C++ is available.
 
 
===Run Control===
 
As mentioned earlier, the Online Database (ODB) contains all the pertinent information regarding an experiment. For that reason a run control program only requires access to the ODB. A basic program supplied in the package, called ODBEdit, provides a simple and safe means of interacting with the ODB. However, to access all the MIDAS capabilities, mhttpd, the MIDAS web-based run control utility, should be used.
 
Three "Run States" define the state of the MIDAS data acquisition system: Stopped, Paused, and Running. In order to change from one state to another, MIDAS provides four basic "Transition" functions: TR_START, TR_PAUSE, TR_RESUME, and TR_STOP. During these transition periods, any MIDAS client registered to receive notification of such a transition will be able to perform dedicated tasks in either synchronized or asynchronized mode, within the overall run control of the experiment.
 
In order to provide more flexibility to the transition sequence of all the MIDAS clients connected to a given experiment, each transition function has a transition sequence number attached to it. This transition sequence is used to establish within a given transition the order of the invocation of the MIDAS clients (from the lowest sequence number to the highest). See Run Transition Priority for details.
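
For example, a client that needs to act at the start of every run might register a start-transition callback with a chosen sequence number, along the lines of the sketch below (the sequence number 500 is arbitrary and the prototypes are approximate; see the MIDAS library documentation for the exact forms):

<pre>
#include <stdio.h>
#include "midas.h"

/* Called by the run control during the TR_START transition. */
INT tr_start(INT run_number, char *error)
{
   printf("Run %d is starting\n", run_number);
   /* On failure, copy a message into 'error' and return an error status
      to abort the transition. */
   return CM_SUCCESS;
}

int main(void)
{
   cm_connect_experiment("", "sample", "transition_client", NULL);

   /* Sequence number 500: clients with lower numbers are called first. */
   cm_register_transition(TR_START, tr_start, 500);

   while (cm_yield(1000) != RPC_SHUTDOWN)
      ;

   cm_disconnect_experiment();
   return 0;
}
</pre>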
 
