  7   16 Jan 2004 Razvan Stefan Gornea Access to hardware in the MIDAS framework
The multimeter device is indeed too simple to need MIDAS, but I am just trying
it as a learning experience. The DAQ system to be developed involves VME crates
and general-purpose I/O boards. The slow control part, especially accessing the
I/O boards, seems to me more complex than the VME access. I want to understand
very well the "correct" way of using the MIDAS slow control framework before
starting the project.

I chose the second method and created a meterdev.c driver (essentially a copy
of nulldev.c) where I changed the init function and the get function. I am not
sending an "INIT ..." string because for this device it is useless. In the get
function I send a "D" and read back my string. I changed the frontend of the
example to have a new driver list (in my first try I eliminated the Output
device, but the ODB got corrupted; I guess the multi class needs to have output
channels defined). The output channel is linked with nulldev and null (I guess
this is as if it were not present).

The result is strange: the get function is called all the time, very fast
(much faster than the 9 seconds set in the equipment), and even before the run
is started (I only set the flag RO_RUNNING).

Thanks for any help
Attachment 1: frontend.c
//********************************************************************************************
//
// Name:		frontend.c
// Created by:	Razvan Stefan Gornea
//
// Contents:	Slow Control frontend for a portable multimeter
//
// Log:			2004-01-15 14:22
//				Writing down initial code.
//
//********************************************************************************************

#include <stdio.h>
#include "midas.h"
#include "class/multi.h"
#include "device/nulldev.h"
#include "meterdev.h"
#include "bus/null.h"
#include "bus/rs232.h"

// global variables

// frontend name
char *frontend_name = "Slow Control";
// frontend file name
char *frontend_file_name = __FILE__;
// frontend loop
BOOL frontend_call_loop = FALSE;
// frontend display refresh
INT display_period = 1000;
// maximum event size in bytes
INT max_event_size = 10000;
// maximum event size for fragments in bytes
INT max_event_size_frag = 5*1024*1024;
// buffer size in bytes
INT event_buffer_size = 10*10000;

// equipment list

// device drivers
DEVICE_DRIVER multi_driver[] = {
	{"Input", meterdev, 1, rs232, DF_INPUT},
	{"Output", nulldev, 1, null, DF_OUTPUT},
	{""}
};
// equipment list
EQUIPMENT equipment[] = {
  { "Multimeter",         /* equipment name */
    11, 0,                /* event ID, trigger mask */
    "SYSTEM",             /* event buffer */
    EQ_SLOW,              /* equipment type */
    0,                    /* event source */
    "FIXED",              /* format */
    TRUE,                 /* enabled */
    RO_RUNNING,           /* read when running */
    9000,                 /* read every 9 sec */
    0,                    /* stop run after this event limit */
    0,                    /* number of sub events */
    1,                    /* log history every event */
    "", "", "",
    cd_multi_read,        /* readout routine */
    cd_multi,             /* class driver main routine */
    multi_driver,         /* device driver list */
    NULL,                 /* init string */
  },
  { "" }
};

// routines

INT  poll_event(INT source[], INT count, BOOL test) {return 1;};
INT  interrupt_configure(INT cmd, INT source[], PTYPE adr) {return 1;};
// frontend initialization
INT frontend_init()
{
  return CM_SUCCESS;
}
// frontend exit
INT frontend_exit()
{
  return CM_SUCCESS;
}
// frontend loop
INT frontend_loop()
{
  return CM_SUCCESS;
}
// begin of run
INT begin_of_run(INT run_number, char *error)
{
  return CM_SUCCESS;
}
// end of run
INT end_of_run(INT run_number, char *error)
{
  return CM_SUCCESS;
}
// pause run
INT pause_run(INT run_number, char *error)
{
  return CM_SUCCESS;
}
// resume run
INT resume_run(INT run_number, char *error)
{
  return CM_SUCCESS;
}
Attachment 2: meterdev.c
//********************************************************************************************
//
// Name:		meterdev.c
// Created by:	Razvan Stefan Gornea
//
// Contents:	Device driver for a portable multimeter
//
// Log:			2004-01-15 14:22
//				Writing down initial code.
//
//********************************************************************************************

#include <stdio.h>
#include <stdlib.h>
#include <stdarg.h>
#include "midas.h"

// global variables

#define DEFAULT_TIMEOUT 10000 // 10 seconds

typedef struct {
  int  address;
} METERDEV_SETTINGS;

#define METERDEV_SETTINGS_STR "\
Address = INT : 1\n\
"
typedef struct {
  METERDEV_SETTINGS meterdev_settings; // device settings
  float         *array; // data array
  INT           num_channels; // number of channels associated with this device
  INT (*bd)(INT cmd, ...); // bus driver entry function
  void *bd_info; // private settings and data related to the bus driver
  HNDLE hkey; // ODB key for bd_info structure
} METERDEV_INFO;

// routines

// initialization function: accesses the ODB to create the settings, initializes the variables
// and calls the initialization function of the bus driver
INT meterdev_init(HNDLE hkey, void **pinfo, INT channels, INT (*bd)(INT cmd, ...))
{
	int status, size;
	HNDLE hDB, hkeydd;
	METERDEV_INFO *info;

	// allocate info structure
	info = calloc(1, sizeof(METERDEV_INFO));
	*pinfo = info;
	// get handle on current experiment ODB
	cm_get_experiment_database(&hDB, NULL);
	// create METERDEV settings record
	status = db_create_record(hDB, hkey, "DD", METERDEV_SETTINGS_STR); // force the ODB structure to match the METERDEV_SETTINGS C structure
	if (status != DB_SUCCESS) {
		return FE_ERR_ODB;
	}
	db_find_key(hDB, hkey, "DD", &hkeydd); // get handle on the DD key in the ODB associated with a certain equipment and device as pointed by the "hkey" handle
	size = sizeof(info->meterdev_settings); // get the size of the device settings structure
	db_get_record(hDB, hkeydd, &info->meterdev_settings, &size, 0); // load the device settings from the ODB, i.e. content of the DD key
	// initialize the driver
	info->num_channels = channels; // define the number of channels
	info->array = calloc(channels, sizeof(float)); // allocate space for data
	info->bd = bd; // set handle on bus driver
	info->hkey = hkey; // set handle on the ODB key for the device driver
	if (!bd) { // if handle invalid return error
		return FE_ERR_ODB;
	}
	// call the bus driver initialization routine
	status = info->bd(CMD_INIT, info->hkey, &info->bd_info);
	if (status != SUCCESS) {
		return status;
	}
	// initialization of device, something like ... 
	//BD_PUTS("init");
	// for this device no initialization string is needed ...

  return FE_SUCCESS;
}

// decommission function: free memory allocation(s) and close device(s)
INT meterdev_exit(METERDEV_INFO *info)
{
	// call EXIT function of bus driver, usually closes device
	info->bd(CMD_EXIT, info->bd_info);
	// free local variables
	if (info->array) {
		free(info->array);
	}
	free(info);
	
	return FE_SUCCESS;
}

// set channel value
INT meterdev_set(METERDEV_INFO *info, INT channel, float value)
{
	char str[80];
	
	// set channel to a specific value, something like ...
	sprintf(str, "SET %d %lf", channel, value);
	BD_PUTS(str);
	BD_GETS(str, sizeof(str), ">", DEFAULT_TIMEOUT);
	// simulate writing by storing value in local array, has to be removed in a real driver
	if (channel < info->num_channels) {
		info->array[channel] = value;
	}
	
	return FE_SUCCESS;
}

// set all channels values
INT meterdev_set_all(METERDEV_INFO *info, INT channels, float *value)
{
	int  i;
	char str[1000];

	// put here some optimized form of setting all channels simultaneously like ...
	strcpy(str, "SETALL ");
	for (i=0 ; i<min(info->num_channels, channels) ; i++) {
		sprintf(str+strlen(str), "%lf ", value[i]);
	}
	BD_PUTS(str);
	BD_GETS(str, sizeof(str), ">", DEFAULT_TIMEOUT);
	// simulate writing by storing values in local array
	for (i=0 ; i<min(info->num_channels, channels) ; i++) {
		info->array[i] = value[i];
	}
	
	return FE_SUCCESS;
}

// get channel value
INT meterdev_get(METERDEV_INFO *info, INT channel, float *pvalue)
{
	int  status, i;
	char str[80];
	char ascii_number[5];

	// read value from channel, something like ...
	//sprintf(str, "GET %d", channel);
	sprintf(str, "D"); // request data
	BD_PUTS(str);
	status = BD_GETS(str, sizeof(str), "\n", DEFAULT_TIMEOUT); // read until a newline is received or exit on timeout
	for (i = 0; i < 4; i++) { // transfer the number
		ascii_number[i] = str[i+4];
	}
	ascii_number[4] = '\0'; // end of string
	*pvalue = (float) atof(ascii_number); // convert from ASCII to float
	// simulate reading by copying set data from local array
	//if (channel < info->num_channels) {
	//	*pvalue = info->array[channel];
	//}
	//else {
	//	*pvalue = 0.f;
	//}
	
	return FE_SUCCESS;
}

// get all channels values
INT meterdev_get_all(METERDEV_INFO *info, INT channels, float *pvalue)
{
	// int  i;
	
	/* put here some optimized form of reading all channels. If the device
	does not support such a function, one can call meterdev_get() in a loop 
	strcpy(str, "GETALL");
	BD_PUTS(str);
	BD_GETS(str, sizeof(str), ">", DEFAULT_TIMEOUT);
	for (i=0 ; i<min(info->num_channels, channels) ; i++)
	pvalue[i] = atof(str+i*5); // extract individual values from reply
	*/
	/* simulate reading by copying set data from local array */
	//for (i=0 ; i<min(info->num_channels, channels) ; i++)
	//	pvalue[i] = info->array[i];
	
	return FE_SUCCESS;
}

// device driver entry point
INT meterdev(INT cmd, ...)
{
	va_list argptr;
	HNDLE   hKey;
	INT     channel, status;
	DWORD   flags;
	float   value, *pvalue;
	void    *info, *bd;
	va_start(argptr, cmd);
	status = FE_SUCCESS;
	
	switch (cmd) {
	case CMD_INIT:
		hKey = va_arg(argptr, HNDLE);
		info = va_arg(argptr, void *);
		channel = va_arg(argptr, INT);
		flags = va_arg(argptr, DWORD);
		bd = va_arg(argptr, void *);
		status = meterdev_init(hKey, info, channel, bd);
		break;
	case CMD_EXIT:
		info = va_arg(argptr, void *);
		status = meterdev_exit(info);
		break;
	case CMD_SET:
		info = va_arg(argptr, void *);
		channel = va_arg(argptr, INT);
		value = (float) va_arg(argptr, double); // floats are passed as double
		status = meterdev_set(info, channel, value);
		break;
	case CMD_SET_ALL:
		info = va_arg(argptr, void *);
		channel = va_arg(argptr, INT);
		pvalue   = (float *) va_arg(argptr, float *);
		status = meterdev_set_all(info, channel, pvalue);
		break;
	case CMD_GET:
		info = va_arg(argptr, void *);
		channel = va_arg(argptr, INT);
		pvalue  = va_arg(argptr, float*);
		status = meterdev_get(info, channel, pvalue);
		break;
	case CMD_GET_ALL:
		info = va_arg(argptr, void *);
		channel = va_arg(argptr, INT);
		pvalue  = va_arg(argptr, float*);
		status = meterdev_get_all(info, channel, pvalue);
		break;
	default:
		break;
	}
	va_end(argptr);
	
	return status;
}
  8   17 Jan 2004 Stefan Ritt Access to hardware in the MIDAS framework
> The result is strange because the get function is called all the time very 
> fast (much faster then the 9 seconds as set in the equipment) and even 
> before starting the run (I just put the flag RO_RUNNING).

This is on purpose. When the frontend is idle, it loops over the slow control 
equipment as fast as possible. This way, you see changes in your hardware very 
quickly. I see no reason to waste CPU cycles in the frontend when there are 
better things to do, like reading the slow control equipment. Suppose you have 
the alarm system running, which turns off some equipment in case of an 
overcurrent. You'd better do this as quickly as possible, rather than wasting 
up to 9 seconds each time.

The 9 seconds you mention are for reading *EVENTS*. You have dual 
functionality: first, reading the slow control system and writing updated 
values to the ODB, where someone else can display or evaluate them (in the 
alarm system, for example); second, assembling events and sending them with 
the other data to disk or tape. Only the second one is controlled by 
RO_RUNNING and the 9 seconds. You can see this in the event statistics on your 
frontend display, which increment only when running, and then every 9 seconds.
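
To make this concrete, here are the two relevant fields of the equipment entry from 
Attachment 1, with comments added to spell out which behaviour each one controls:

    RO_RUNNING,           /* controls only when *events* are assembled and sent to the buffer */
    9000,                 /* event readout period in ms; the idle-loop calls to the
                             device driver's get function are not limited by this value */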
  2004   13 Oct 2020 Soichiro Kuribayashi Info About remote control of front end part of MIDAS on chip
Hello!

My name is Soichiro Kuribayashi and I am a Ph.D. student at Kyoto University. 
I'm a T2K collaborator working on Super FGD, which is a new detector in ND280.

I'm a beginner with MIDAS and I've just started to develop the DAQ software with 
MIDAS for Super FGD.
For the DAQ of Super FGD, we will remotely run the front-end part of MIDAS on a ZYNQ, 
which is a system on chip.

For this remote control of the front-end part with mserver, we have to mount the home 
directory of the DAQ PC (CentOS 8) on that of the Linux system on the ZYNQ.
So I wonder if we should use NFS (Network File System) + NIS (Network Information 
Service) + autofs for the mounting. Is this correct?

If you have any information or any suggestion for the remote control on chip, 
please let me know.

Best regards,
Soichiro 
  2005   13 Oct 2020 Konstantin Olchanski Info About remote control of front end part of MIDAS on chip
> My name is Soichiro Kuribayashi and I am a Ph.D. student at Kyoto University. 
> I'm a T2K collaborator and working for Super FGD which is new detector in ND280.

Hi! I did much of the DAQ software for the original FGD. I hope I can help.

> For the DAQ of Super FGD, we will run remotely front end part of MIDAS on ZYNQ 
> which is system on chip.

This would be the same as the existing FGD. Inside the FGD DCC is a Virtex4 FPGA
with a 300MHz PPC CPU running Linux from a CompactFlash card (Kentaro-san did this 
part). On this linux system runs the FGD DCC midas frontend. It connects
to the FGD midas instance using the mserver. This frontend executable is
copied to the DCC using "scp"; there is no common nfs-mounted home directory.

> For this remote control of front end part with mserver, we have to mount home 
> directory of DAQ PC(Cent OS8) on that of Linux on ZYNQ.
> So I wonder if we should use NFS(Network file system) + NIS(Network information 
> service) + autofs for the mounting. Is it correct?

Since you have a bigger SOC and you can run pretty much a complete linux,
I do recommend that you go this route. During development it is very convenient
to have common home directories on the main machine and on the frontend fpga
machines.

But this is not necessary. The midas mserver connection does not require a
common (nfs-mounted) home directory; you can copy the files to the frontend
fpga using scp and rsync, and you can use the gdb "remote debugger" function.

I can also suggest that on your frontend SOC/FPGA machine, you boot linux
using the "nfs-root" method. This way, the local flash memory only
contains a boot loader (and maybe the linux kernel image, depending on
bootloader limitations). The rest of the linux rootfs can be on your
central development machine. This way, the management of flash cards, the
confusion over differing contents of local flash, and the need to make backups
of frontend machines are all much reduced.
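
As a rough sketch (the server address and export path below are placeholders, and the
exact options depend on your bootloader and kernel configuration), an nfs-root boot
comes down to passing kernel arguments along these lines:

    root=/dev/nfs rw nfsroot=192.168.1.1:/export/zynq-rootfs,tcp,vers=3 ip=dhcp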

If you use a fast SSD and ZFS with deduplication, you will also see a good
performance gain (NFS over a 1gige network to a server with a fast SSD works
much better than the very slow SD/MMC/NAND flash).

I can point you to some of my documentation on how we do this.

>
> If you have any information or any suggestion for the remote control on chip, 
> please let me know.
> 

I would say you are on a good track. For early development on just one board,
pretty much any way you do it will work, but once you start scaling up
beyond 3-4-5 frontends, you will start seeing benefits from common NFS-mounted
home directories, NFS-root booted linux, etc.

And of course you may want to study the existing ND280/FGD DAQ. I hope you
have access to the running system at Jparc. If not, I have a copy of
pretty much everything (except for running hardware, it is stored in the basement, 
dead) and I can give you access.

P.S. This reminds me that the cascade software from ND280 (the key part
for connecting the FGD, the TPC, the slow controls, etc. into one experiment)
was never merged into the midas repository. I opened a ticket for this,
so now we will not forget again:

https://bitbucket.org/tmidas/midas/issues/291/import-cascase-frontend-from-t2k-nd280-fgd

K.O.
  2006   13 Oct 2020 Soichiro Kuribayashi Info About remote control of front end part of MIDAS on chip
Dear Konstantin,

Thank you very much for your reply and detailed information.
I would appreciate it if you could help us.

> I can also suggest that on your frontend SOC/FPGA machine, you boot linux
> using the "nfs-root" method. This way, the local flash memory only
> contains a boot loader (and maybe the linux kernel image, depending on
> bootloader limitations). The rest of the linux rootfs can be on your
> central development machine. This way management of flash cards,
> confusion with different contents of local flash and need to make backups
> of frontend machines is much reduced.

As you said, we can run a complete Linux (Ubuntu 16) on the ZYNQ, and I'm using a common NFS 
system now. However, I didn't know about the "nfs-root" method you mentioned, and it seems to be 
a reasonable way to share just the linux rootfs.
First of all, I will try this method on a simpler system.

> If you use a fast SSD and ZFS with deduplication, you will also have good
> performance gain (NFS over 1gige network to server with fast SSD works
> so much better compared to the very slow SD/MMC/NAND flash).
>
> I can point you to some of my documentation how we do this.

I'm concerned about such performance, and I have roughly checked the performance of common NFS 
over a gige network to my DAQ PC (data transfer rate ~ O(10) MByte/sec). However, I 
didn't know about ZFS, or how we can gain performance with a fast SSD and ZFS.
Please point me to your documentation on how to do it, if possible.

> I would say you are on a good track. For early development on just one board,
> pretty much any way you do it will work, but once you start scaling up
> beyound 3-4-5 frontends, you will start seeing benefits from common NFS-mounted
> home directories, NFS-root booted linux, etc.

I'm developing with just one board and a common NFS mount now. I'm looking forward to 
seeing such benefits when I use multiple frontends.
 
> And of course you may want to study the existing ND280/FGD DAQ. I hope you
> have access to the running system at Jparc. If not, I have a copy of
> pretty much everything (except for running hardware, it is stored in the basement, 
> dead) and I can give you access.

I don't have access to the system at Jparc, but Nick has told us where the FGD DAQ code is.
Does the URL below contain all of the FGD DAQ code?
https://git.t2k.org/hastings/fgddaq/-/tree/master

Best regards,
Soichiro
  2007   20 Oct 2020 Stefan Ritt Info About remote control of front end part of MIDAS on chip
We also use a Zynq chip and boot in the following order:

1. SD Card
   a. First Stage Bootloader
   b. PL Firmware
   c. UBOOT
2. NFS over Ethernet
   a. Linux kernel
   b. RootFS
   c. Mounting home directories


If you need details, I can put you in contact with the person who actually implemented that.

Best,
Stefan
  2008   21 Oct 2020 Soichiro Kuribayashi Info About remote control of front end part of MIDAS on chip
Dear Stefan,

Thank you very much for your help.

I have already contacted someone who has used the ZYNQ in that order, and it's working fine for now.
But I'll let you know if something goes wrong.

Best regards,
Soichiro 
  614   04 Aug 2009 Exaos Lee Forum About python interface
Coding in Python is faster than in C (though it runs slower), so Python interfaces are useful for testing purposes. I hope you like the PyMVME module for VME bus testing.
  943   16 Dec 2013 Konstantin Olchanski Bug Fix Abolished SYNC and ASYNC defines
A few months ago, the definitions of SYNC and ASYNC in midas.h were changed away from "0" and "1", 
and this caused problems with some of the event buffer management functions bm_xxx().

For example, when event buffers are getting full, bm_send_event(SYNC) unexpectedly started returning 
BM_ASYNC_RETURN instead of waiting for free space, causing unexpected crashes of frontend programs.

Part of the problem was confusion between the SYNC/ASYNC flags used by the buffer management (bm_xxx) 
and run transition (cm_transition()) functions. Adding to the confusion, the documentation of 
bm_send_event() & co used FALSE/TRUE while most actual calls used SYNC/ASYNC.

To sort this out, an executive decision was made to abolish the SYNC/ASYNC defines:

For buffer management calls bm_send_event(), bm_receive_event(), etc, please use:
SYNC -> BM_WAIT
ASYNC -> BM_NO_WAIT

For run transitions, please use:
SYNC -> TR_SYNC
ASYNC -> TR_ASYNC
MTHREAD -> TR_MTHREAD
DETACH -> TR_DETACH
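
For example, a typical frontend call changes like this (illustrative only; the surrounding 
arguments stay the same):

  status = bm_send_event(hBuf, pevent, size, SYNC);      /* old */
  status = bm_send_event(hBuf, pevent, size, BM_WAIT);   /* new */

  cm_transition(TR_START, 0, NULL, 0, ASYNC, FALSE);     /* old */
  cm_transition(TR_START, 0, NULL, 0, TR_ASYNC, FALSE);  /* new */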

K.O.
  1878   24 Apr 2020 Pintaudi Giorgio Forum API to read MIDAS format file
Dear MIDAS people,
I need to borrow your wisdom for a bit.
I am developing a piece of software that should read the history data stored in a
.midas file (MIDAS format) and integrate it into the WAGASCI data quality output.
In other words, I need to read some temperature values stored in a .midas file and
compare them with the MPPC gains and check for temperature/gain dependence.
I see three possibilities:
  • write a custom parser in C++ using the instructions contained in the Mhformat page;
  • call the mhist program from within my application;
  • call the mhdump program from within my application;
Which solution do you think is the best?
Because there is no need for raw performance, I would, if possible, like to write my application in Python3, but C++ is also an option.
  1880   24 Apr 2020 Stefan Ritt Forum API to read MIDAS format file
I guess all three options would work. I just tried mhist and it still works with the "FILE" history:

mhist -e <equipment name> -v <variable name> -h 10

for dumping a variable for the last 10 hours.

I could not get mhdump to work with current history files; maybe it only works with "MIDAS" history and not "FILE" history (see https://midas.triumf.ca/MidasWiki/index.php/History_System#History_drivers). Maybe Konstantin, who wrote mhdump, has some idea.

Writing your own parser is certainly possible (even in Python), but of course more work.

Stefan
  1882   24 Apr 2020 Pintaudi Giorgio Forum API to read MIDAS format file

Stefan Ritt wrote:
I guess all three options would work. I just tried mhist and it still works with the "FILE" history

mhist -e <equipment name> -v <variable name> -h 10

for dumping a variable for the last 10 hours.

I could not get mhdump to work with current history files, maybe it only works with "MIDAS" history and not "FILE" history (see https://midas.triumf.ca/MidasWiki/index.php/History_System#History_drivers). Maybe Konstantin who wrote mhdump has some idea.

Writing your own parser is certainly possible (even in Python), but of course more work.

Stefan


Dear Stefan,
thank you very much for the quick reply. Sorry if my message was not very clear; we are actually using the "MIDAS" history format and not the "FILE" one, so both mhist and mhdump should be OK (however, I have only tested mhist).
Hypothetically, which of the two lends itself better to being "batched", i.e. read and controlled by a program/routine? For example, some programs offer the option of JSON-formatted output, etc.
  1883   24 Apr 2020 Stefan Ritt Forum API to read MIDAS format file

Pintaudi Giorgio wrote:

Hypothetically which one between the two lends itself the better to being "batched"? I mean to be read and controlled by a program/routine. For example, some programs give the option to have the output formatted in json, etc...


Can't say off the top of my head. Both programs are pretty old (written well before JSON was invented, so neither supports it). mhist was written by me, mhdump was written by Konstantin. I would give both a try and see which you like more.

Stefan
  1884   25 Apr 2020 Konstantin Olchanski Forum API to read MIDAS format file
> Dear MIDAS people, I need to borrow your wisdom for a bit. I am developing a piece of
> software that should read the history data stored in a .midas file (MIDAS format) and
> integrate it into the WAGASCI data quality output. In other words, I need to read some
> temperature values stored in a .midas file and compare them with the MPPC gains and
> check for temperature/gain dependence. I see three possibilities:
> - write a custom parser in C++ using the instructions contained in the Mhformat page
>   (https://midas.triumf.ca/MidasWiki/index.php/Mhformat);
> - call the mhist program from within my application;
> - call the mhdump program from within my application;
> Which solution do you think is the best? Because there is no need for raw performance,
> if possible, I would like to write my application in Python3 but C++ is also an option.

(Please write messages in plain text format, thank you)

The format of .hst midas history files is pretty simple, and mhdump.cxx is an easy-to-read 
illustration of how to read it from first principles (without going through the midas library, 
which can be somewhat complicated). The newer "FILE" history format is even simpler 
to read because it is just fixed-record-size binary data prepended by a text header.

You can also use the mh2sql program to import history data into an sql database (mysql 
and sqlite should work) or to convert .hst files to "FILE" format files. This works well
for "archiving" history data, because the "FILE" format works better for looking at old data,
and for looking at data on the "months" or "years" timescale.

Back to your question, you can certainly use "mhdump" as is, using a pipe (popen()), or 
you can package mhdump.cxx as a c++ class and use it in your application. If you go this 
route, your contribution of such a c++ class back to midas would be very welcome.
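
For instance, a minimal sketch of the popen() route (the history file name is a placeholder, 
and the parsing of mhdump's text output is left to you):

  #include <stdio.h>

  /* run mhdump on one .hst file and process its text output line by line */
  int read_history_via_mhdump(const char *hst_file)
  {
     char cmd[256], line[1024];
     FILE *fp;

     snprintf(cmd, sizeof(cmd), "mhdump %s", hst_file);
     fp = popen(cmd, "r");                /* start mhdump, read its stdout */
     if (fp == NULL)
        return -1;
     while (fgets(line, sizeof(line), fp)) {
        /* parse the record printed by mhdump here */
        printf("%s", line);
     }
     return pclose(fp);
  }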

You can also use mhist, but the mhist code cannot be trivially packaged as a c++ class
to use in your application.

You can also suggest that we write an easier-to-use history utility; we are always open to 
suggested improvements.

Let us know how it works out for you. Good luck!

K.O.
  1901   03 May 2020 Pintaudi Giorgio Forum API to read MIDAS format file
> The format of .hst midas history files is pretty simple and mhdump.cxx is an easy to read 
> illustration on how to read it from basic principles (without going through the midas library, 
> which can be somewhat complicated). The newer "FILE" format for history is even simpler 
> to read because it is just fixed-record-size binary data prepended by a text header.
> 
> You can also use the mh2sql program to import history data into an sql database (mysql 
> and sqlite should work) or to convert .hst files to "FILE" format files. This works well
> for "archiving" history data, because the "FILE" format works better for looking at old data,
> and for looking at data in "months" or "years" timescale.
> 
> Back to your question, you can certainly use "mhdump" as is, using a pipe (popen()), or 
> you can package mhdump.cxx as a c++ class and use it in your application. If you go this 
> route, your contribution of such a c++ class back to midas would be very welcome.
> 
> You can also use mhist, but the mhist code cannot be trivially packaged as a c++ class
> to use in your application.
> 
> You can also suggest that we write an easier to use history utility, we are always open to 
> suggested improvements.
> 
> Let us know how it works out for you. Good luck!
> 
> K.O.

Dear Konstantin,
thank you very much for the wealth of information you provided.
I have thought about it and I see two options:

- One is to convert to SQL format and then use a SQLite library to import the data into my 
application.

- The other is to encapsulate the mhdump.cxx code into a C++ class, as you say.

I am leaning towards the first option for three reasons.
1. I have never used a SQLite database, so it is a good learning opportunity for me.
2. The SQLite database format is very well known and widespread, so there are tons of tools to 
handle it.
3. I have taken a look at the mhdump.cxx source code and I think it is a beautiful piece of code, 
but it has a very "functional" taste with little encapsulation. Basically, all the fun happens 
inside the readHstFile function and there is no trivial way to get the data out of it. I don't mean 
that it would be difficult to wrap it in a C++ class, but I feel that I can learn more by going 
the SQL way.

PS: Some time ago, you or Stefan (I don't remember which) recommended CLion as a C++ IDE. I have tried it 
(together with PyCharm) and I must admit that it is really good. It took me years to configure Emacs 
as an IDE, while it took me minutes to get much better results in CLion. Thank you very much for 
your recommendation.
  1902   03 May 2020 Konstantin Olchanski Forum API to read MIDAS format file
>
> - One is to convert to SQL format and then use a SQLite library to import the data in my 
> application.
> 

You can also configure midas to write history directly to an SQLITE database. I have not used
it recently, but it should still work. In terms of efficiency, the sqlite file size is about the same
as for .hst files. The sqlite file and table naming is similar to the SQL and FILE implementations.

(But note that back when I implemented the SQLITE history writer, the sqlite database corruption
recovery instructions were "delete the file, restore from backup". And indeed, in every test
experiment I tried, the sqlite history databases eventually corrupted themselves. You see the
same thing with google-chrome: lots of sqlite errors (bad locking, corrupted tables, etc.)
in its terminal output.)

>
> - The other is to encapsulate the mhdump.cxx code into a C++ class, as you say.
> 

If I were to write this today, there would be a c++ class that takes a history file,
iterates over all records and calls "callback" classlets. You can see this in the history.h
(HistoryBufferInterface) and in the tmfe.h (RpcHandlerInterface, etc).

I think this style of OO programming originally comes from java. If you so desire,
an "mhdump" class could be a nice way to learn it.
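
A minimal sketch of that style (the class and method names here are made up for illustration, 
they are not the actual midas interfaces):

  #include <ctime>

  /* handler interface: the reader calls back once per history record */
  class HistoryRecordHandler {
  public:
     virtual ~HistoryRecordHandler() {}
     virtual void HandleRecord(int event_id, time_t timestamp,
                               const char *data, int data_size) = 0;
  };

  /* hypothetical reader: walks one .hst file and calls the handler for each record */
  class HstFileReader {
  public:
     explicit HstFileReader(HistoryRecordHandler *handler) : fHandler(handler) {}
     int ReadFile(const char *filename); /* loop over the records, call fHandler->HandleRecord() */
  private:
     HistoryRecordHandler *fHandler;
  };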

> 
> PS some time ago, I don't remember if you or Stefan, recommended CLion as C++ IDE. I have tried it 
> (together with PyCharm) and I must admit that it is really good. It took me years to configure Emacs 
> as a IDE, while it took me minutes to have much better results in CLion. Thank you very much for 
> your recommendation.
>

I remember, years ago, the Borland TurboC IDE was like a gift from Gods. But today, I think IDEs have
declined in quality and usefulness. They clog the screen with too much eye candy and fluff, use hard
to read fonts and silly colours, insist on using tabs where I want spaces, reformat the text even as I type it,
and detract from productive work with distracting popups ("try this new function!", "let's upgrade now!").

For serious programming, I use emacs with minimal decorations. I can easily open 3 or 4 windows at the same
time and still have enough screen space left for a terminal to run "make". And it is the only editor that can
edit the same file in two or more windows at the same time. You do not know you need this until
you work on odb.cxx.

K.O.
  1903   03 May 2020 Stefan Ritt Forum API to read MIDAS format file
> PS some time ago, I don't remember if you or Stefan, recommended CLion as C++ IDE. I have tried it 
> (together with PyCharm) and I must admit that it is really good. It took me years to configure Emacs 
> as a IDE, while it took me minutes to have much better results in CLion. Thank you very much for 
> your recommendation.

Was probably me. I use it as my standard IDE and am quite happy with it. All the things KO likes about emacs, plus much 
more. Especially the CMake integration is nice, since you don't have to leave the IDE for editing, compiling and debugging. 
The tooltips the IDE has given me in the past months have made me write much better code. So quite the opposite opinion compared 
with KO, but luckily this planet has space for all kinds of opinions. I made myself the attached cheat sheet, which lets me 
do things much faster. Maybe you can use it.

Stefan
Attachment 1: ReferenceCardForMac.pdf
  1904   04 May 2020 Pintaudi Giorgio Forum API to read MIDAS format file
> (But note that back when I implemented the SQLITE history writer, sqlite database corruption
> recovery instructions were "delete the file, restore from backup". And indeed in every test
> experiment I tried, the sqlite history databases eventually corrupted themselves. You see
> same thing with google-chrome, lots of sqlite errors (bad locking, corrupted table, etc)
> in it's terminal output).

Thank you for the info. But I do not quite understand the comment above.
Do you mean that there is something wrong with the SQLite library itself or with the way that MIDAS creates the SQLite 
database?