Entry  07 Jun 2007, Konstantin Olchanski, Suggestion, RFC- ACLs for midas rpc, mserver, mhttpd access 
Running MIDAS at CERN is proving more challenging than I expected. The network environment is not 
as benign as I am used to (i.e. at TRIUMF) and our machines are being constantly probed by something or 
somebody.

This already caused failures in the mserver (fixed in midas svn) and I would like to resolve this problem 
once and for all. The age of "nice networks" is over.

The case of the mserver and of the midas rpc servers (every midas application listens for midas rpc 
requests, i.e. run transitions) is simple. The list of machines running midas applications is known ahead 
of time, so we can put them all into a list of permitted machines and deny rpc connections to anybody 
else. I propose we keep this list of permitted mserver clients in "/experiment/security/mserver hosts".

(The already existing "/experiment/security/allowed hosts" mechanism is insufficient: it does not 
prevent the mserver from accepting connections from hostile machines, and talking to them, for 
example giving them the list of available experiments. There is a fair amount of code involved and I do 
not presume to certify any of it as hack-proof or even as crash-proof.)

For mhttpd http:// access control, I thought of using tcp_wrappers, but C-API documentation does not 
exist (I looked), the example code in tcpd.c is way too complicated, editing the ACL in /etc/hosts.allow 
unnecessarily requires root privileges, and none of it would work on Windows.

So I am favouring a home-made hostname or ip-address filter, similar to /etc/hosts.allow, with ACL 
stored, for example, in "/experiment/security/mhttpd hosts".

Any thoughts?

K.O.
    Reply  07 Jun 2007, John M O'Donnell, Suggestion, RFC- ACLs for midas rpc, mserver, mhttpd access (attachments: hosts_access.3-nroffed, hosts_access.3)
I am in favor of tcp_wrappers.

tcp_wrappers is well understood.

It works well in combination with a firewall.

mhttpd hangs when our security folks scan us.  We are not allowed to block them
with a firewall, but we can use tcpwrappers.

Would it make sense to put the same mechanism on mserver?

The man page for libtcpwrappers.a (taken from the tcp_wrappers 7.6 tar ball) is
attached, along with the output after running it through nroff -man.

The odb is too fragile for security.  It is not understood well enough by many
experimenters.

As you can see I am in favor of tcp_wrappers.  This is mainly because it is part
of an existing and tested security model.  I don't know about the Windows
world, but there too I would vote for using something that is already part
of the Windows security model.  Here's an example of how well the integrated
security model works:

    if an person is part of an experiment I make sure they can ssh to the
    experiment's computer

    the same rules could provide them with web access

Second, when a change is needed to the security model, it is easy to
keep it current.  What if somebody restores an old ODB?  What if they set up a
small test with a new ODB?

If mhttpd used tcp_wrappers, then all our machines here at LANL would already be
configured!  No need for
users to do any root access (though those that need it have it anyway).

John.
    Reply  08 Jun 2007, Stefan Ritt, Suggestion, RFC- ACLs for midas rpc, mserver, mhttpd access 
First I have a general question: mserver is started through xinetd, and xinetd has
the options "only_from" and "no_access". This is equivalent to the tcp_wrapper
functionality. Why not use this? It's possible without changing anything in midas.
Or am I missing something?

If that does not work for some reason, here are some thought from my side:

- We don't have much of a problem with malicious hackers, but with institute-wide
security checking. Hackers are only interested in mechanisms where they can obtain
control over thousands of machines (like breaking ssh etc.). The few midas machines
are not a good target for them. But even at PSI there are security scans, which try
to connect to various ports and can crash systems, so I agree that something needs
to be done.

- Whatever we do, it should be consistent on linux and windows and should not rely
on external packages, since I don't want to get into dependencies there.

- I see that both having the security information in the ODB and having it in
external files can be advantageous. There is certainly the aspect of restoring old
ODBs, or keeping several experiments (ODBs) on one machine consistent. On the other
hand, storing the data in the ODB might be liked by people who are familiar with this
concept and want to change things through mhttpd, for example.

- Having said all that, it would make sense to me to write a simple central routine
access_allowed(), which takes the IP address of a remote client wanting to connect
and returns true or false. This routine should read /etc/hosts.allow and /etc/hosts.deny
and interpret them, but only the section for midas, and maybe only a subset of the
functionality there (we probably don't need NIS netgroup names, external files and
spawn commands). If the files /etc/hosts.x do not contain anything about midas
or are not present (Windows!), the routine should look in the ODB under
/experiment/security/mserver/hosts.allow and /experiment/security/mserver/hosts.deny
and use that information instead of the files.
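A rough sketch of what such an access_allowed() routine could look like (this is only
illustrative, not actual MIDAS code; the hard-coded pattern list stands in for whatever
would be read from /etc/hosts.allow or from the ODB keys mentioned above):

#include <stdio.h>
#include <string.h>
#include <strings.h>

/* return 1 if "host" matches "pattern"; "*" matches everything,
   "*.cern.ch" matches any name ending in ".cern.ch" */
static int match_host(const char *pattern, const char *host)
{
   if (strcmp(pattern, "*") == 0)
      return 1;
   if (pattern[0] == '*') {            /* suffix match for "*.domain" */
      size_t lp = strlen(pattern + 1);
      size_t lh = strlen(host);
      return lh >= lp && strcasecmp(host + lh - lp, pattern + 1) == 0;
   }
   return strcasecmp(pattern, host) == 0;
}

/* "allow" is a NULL-terminated list of patterns; everything else is denied */
int access_allowed(const char *remote_host, const char *allow[])
{
   for (int i = 0; allow[i] != NULL; i++)
      if (match_host(allow[i], remote_host))
         return 1;
   return 0;
}

int main(void)
{
   const char *allow[] = { "localhost", "*.cern.ch", NULL };
   printf("%d\n", access_allowed("pcco123.cern.ch", allow));   /* prints 1 */
   printf("%d\n", access_allowed("badguy.example.org", allow)); /* prints 0 */
   return 0;
}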

- We probably need different mechanisms for the mserver and for mhttpd. The mserver
clients are usually only a few programs like the front-ends, while one may want to
control an experiment over mhttpd from many more machines. So we should establish a
second ACL for mhttpd. The already present "/experiment/security/allowed hosts" for
mhttpd should be converted into "/Experiment/Security/mhttpd/hosts.allow" and the
function access_allowed() should be used to interpret it, so that we only need to
write it once.
    Reply  07 Mar 2008, Konstantin Olchanski, Suggestion, RFC- ACLs for midas rpc, mserver, mhttpd access 
The mhttpd host-based access control list as used by ALPHA at CERN is now committed to
SVN (revision 4135).

When accepting a connection from a remote host, the remote IP address is converted to a
hostname using gethostbyaddr(). If the ODB directory "/experiment/security/mhttpd hosts"
exists, access is permitted if there is an entry for this hostname. "localhost" is
always permitted.

In other words:

1) To enable the mhttpd access control list, create an ODB directory
"/experiment/security/mhttpd hosts".

2) From this moment, only access from "localhost" is permitted.

3) All connections from remote hosts are rejected with an error written into the midas
log file: Rejecting http connection from 'ladd05.triumf.ca'.

4) To permit access from remote hosts, take the hostname from this error message and
create an entry in "mhttpd hosts": odbedit -> cd "/Experiment/Security/mhttpd hosts" ->
create INT ladd05.triumf.ca
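
For concreteness, the four steps above as an odbedit session (the prompt is shown
schematically; the hostname is the one from the example error message):

odbedit
[local:expt:S]/> mkdir "/Experiment/Security/mhttpd hosts"
[local:expt:S]/> cd "/Experiment/Security/mhttpd hosts"
[local:expt:S]mhttpd hosts> create INT ladd05.triumf.ca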

The idea behind this is that mhttpd is running behind an SSL proxy (or an SSH tunnel)
and only accepts connections from this proxy and perhaps from selected machines in the
experiment counting room.
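
For example, the SSH-tunnel variant needs nothing beyond ssh itself (the host names here
are made up, and this assumes mhttpd listens on its default port 8080):

ssh -N -L 8080:localhost:8080 daquser@daqhost.triumf.ca
# now point the browser at http://localhost:8080 - mhttpd only ever
# sees a connection from "localhost"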

P.S. I considered using tcp_wrappers, but this package does not seem to contain any
simple-to-use function "bool areTheyPermitted(const char* remoteHostname);".

P.P.S. The ODB path name is at variance with Stefan's email. I committed this code
before rereading it, please let me know if I should change the ODB paths.

P.P.P.S. I will now proceed with implementing similar code for the mserver/midas rpc.
Again, the use case is very simple: all machines permitted access to the mserver are
known in advance and can be listed in the access list. All unknown machines should be
rejected.

K.O.
    Reply  10 Mar 2008, Stefan Ritt, Suggestion, RFC- ACLs for midas rpc, mserver, mhttpd access 
> When accepting connection from a remote host, the remote IP address is converted to a
> hostname using gethostbyaddr(). If ODB directory "/experiment/security/mhttpd hosts",
> exists, access is permitted if there is an entry for the this hostname. "localhost" is
> always permitted.

While your "positive list" will certainly work, it is much more inflexible than a more
general hosts.allow/hosts.deny with wildcards. Assume some experiment decides it wants to
be controlled from all inside CERN. With hosts.allow/deny you could do

host.deny *
host.allow *.cern.ch

to have everything ending with "cern.ch" allowed. Otherwise it would be a nightmare finding
all possible terminals at CERN and add them manually. If you are considering modifying your
committed code to this scheme, you could have a look at my elog package, where exactly this
is implemented. You could copy/paste it from there.

After you finished, also talk to Pierre about documenting this in doxygen (or do it yourself).
    Reply  10 Mar 2008, Konstantin Olchanski, Suggestion, RFC- ACLs for midas rpc, mserver, mhttpd access 
> While your "positive list" will certainly work, it is much more inflexible than a more
> general hosts.allow/hosts.deny with wildcards. Assume some experiment decides it wants to
> be controlled from all inside CERN. With hosts.allow/deny you could do

I was going to bring this up later, but since mhttpd does not pass security audits, I believe
the only way it should be run in the modern computing environment is behind
a password-protected SSL proxy. In this case, the allow/deny list is very simple: deny all,
allow localhost (assuming httpd runs on the same host as mhttpd).

Speaking about CERN, "deny all; allow *.cern.ch" is the "default" setting, enforced by the CERN firewall. Our problem is with 
random "*.cern.ch" computers poking at our DAQ and crashing the mserver. Plus we do not want our competition to access our 
DAQ system, so "allow *.cern.ch" is a no go.

But since hosts.allow/hosts.deny is a superset of what I want, and since we can reuse existing code from elogd, I guess I have 
no grounds to object to your suggestion.

I will do the mserver/mrpc this way, then retrofit it into mhttpd. (But have to commit mlogger history changes first!!!).

K.O.
    Reply  10 Mar 2008, Stefan Ritt, Suggestion, RFC- ACLs for midas rpc, mserver, mhttpd access 
> I was going to bring this up later, but since mhttpd does not pass security audits, I believe
> the only way it should be run in the modern computing environement is behind
> a password-protected SSL proxy.

I recently built native SSL into elogd.c and found it was very simple. We could do the same for mhttpd.

> Speaking about CERN, "deny all; allow *.cern.ch" is the "default" setting, enforced by the CERN firewall. Our problem is with 
> random "*.cern.ch" computers poking at our DAQ and crashing the mserver. Plus we do not want our competition to access our 
> DAQ system, so "allow *.cern.ch" is a no go.

I understand your point. But I want to tell you that there are other experiments which want domain-based access. For example, at
PSI some experiments want to allow access from the experimental hall, which is the subdomain 129.129.140.* (there is not so much
competition here ;-), but not from other PSI subdomains. So you would need "deny all; allow 129.129.140.*; allow 129.129.228.*",
for example.

> I will do the mserver/mrpc this way, then retrofit it into mhttpd. (But have to commit mlogger history changes first!!!).

Agree.
Entry  26 Apr 2020, Yu Chen (SYSU), Forum, Questions and discussions on the Frontend ODB tree structure. 
Dear MIDAS developers and colleagues,

    This is Yu CHEN of the School of Physics, Sun Yat-sen University, China, working in the PandaX-III collaboration, an experiment under development to search for neutrinoless double 
beta decay. We are working on the DAQ and slow control systems and would like to use the Midas framework to communicate with our custom hardware systems, generally via Ethernet 
interfaces. So currently we are focusing on the development of the FRONTEND program of Midas and have some questions and discussions on its ODB structure. Since I’m still not 
experienced in the framework, we would appreciate any suggestions you can provide on the topic.
    The current structure of the frontend ODB tree we have designed, together with our understanding of it, is as follows:
      /Equipment/<frontend_name>/
        ->  /Common/: Basic controls, left unchanged.
        ->  /Variables/: (ODB records with MODE_READ) Monitored values that are AUTOMATICALLY updated from the bank data within each packed event. It is done by the update_odb() 
function in mfe.cxx.
        ->  /Statistics/: (ODB records with MODE_WRITE) Default status values that are AUTOMATICALLY updated by the calling of db_send_changed_records() within the mfe.cxx.
        ->  /Settings/: All the user defined data can be located here. 
            -> /stat/: (ODB records with MODE_WRITE) All the monitored values as well as program internal status. The update operation is done simultaneously when 
db_send_changed_records() is called within the mfe.cxx.
            -> /set/: (ODB records with MODE_READ) All the “Control” data used to configure the custom program and custom hardware.

    For our application, some of our detector equipment outputs a small amount of status and monitoring data together with the event data, so we currently choose not to use EQ_SLOW 
and the 3-layer drivers for the readout. Our solution is to create two ODB sub-trees in /Settings/, similar to what the device_driver does. However, this introduces two 
problems:
    1) For /Settings/stat/: To avoid potentially breaking the hot-links of the /Variables/ and /Statistics/ sub-trees, all our status and monitored data are stored separately in 
the /Settings/stat/ sub-tree. Another consideration is that some monitored data are not derived directly from the raw data, so packaging them into the Bank for later display in /Variables/ 
could lead to a complicated readout() function. However, this solution may be complicated for history logging. I have found that the ANALYZER modules could provide some 
processes for writing to the /Variables/ sub-tree, so I would like to know whether an analyzer should be used in this case.
    2) For /Settings/set/: The “control” data (similar to the “demand” data in EQ_SLOW equipment) are currently put in several /Settings/set/ sub-trees where each key is 
hot-linked to a pre-defined hardware configuration function. However, some control operations are not related to a single ODB key, or are related to several ODB keys (e.g. 
configuring the Ethernet sockets), so the dispatcher function has to be assigned to the whole sub-tree, which I think can slow the response of the software. What we 
currently do is set up a dedicated “control key”, where different input values trigger different operations (e.g. 1 means opening the socket, 2 means sending the UDP 
packets to the target hardware, etc.). This “control key” is also used for the buttons shown on the Status/Custom webpage. However, we would like to have your 
suggestions or better solutions on this, considering the stability and fast response of the control.
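
(For reference, a bare sketch of the single-control-key hot-link we describe, based on
db_open_record(); the enum values and the ODB path are just examples, and the details
should be checked against midas.h:)

#include "midas.h"

/* local mirror of .../Settings/set/control, kept up to date by the hot-link */
static INT control = 0;

enum { CTL_NONE = 0, CTL_OPEN_SOCKET = 1, CTL_SEND_UDP = 2 };

/* called by MIDAS whenever the hot-linked key changes */
static void control_dispatcher(INT hDB, INT hKey, void *info)
{
   switch (control) {
   case CTL_OPEN_SOCKET:
      /* open the Ethernet socket to the hardware here */
      break;
   case CTL_SEND_UDP:
      /* send the configuration UDP packet to the target hardware here */
      break;
   default:
      break;
   }
   /* to re-arm the key one could write CTL_NONE back with db_set_value(),
      omitted here for brevity */
}

INT frontend_init()
{
   HNDLE hDB, hKey;
   cm_get_experiment_database(&hDB, NULL);
   db_find_key(hDB, 0, "/Equipment/MyFrontend/Settings/set/control", &hKey);
   db_open_record(hDB, hKey, &control, sizeof(control), MODE_READ,
                  control_dispatcher, NULL);
   return SUCCESS;
}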

    We are not sure whether the above understanding and problems with the Midas framework are correct, or whether they are just due to the limits of our knowledge of the framework, so we 
really appreciate your knowledge and help towards a better use of Midas. Thank you so much!
    Reply  26 Apr 2020, Stefan Ritt, Forum, Questions and discussions on the Frontend ODB tree structure. 
Dear Yu Chen,

in my opinion, you can follow two strategies:

1) Follow the EQ_SLOW example from the distribution, and write a device driver for the hardware you want to control. There are dozens of experiments worldwide which use that scheme and it works ok for lots of 
different devices. By doing that, you automatically get things like a write cache (only write a value to the actual device if it really has changed) and a dead band (only write measured data to the ODB if it changes by more 
than a certain value, to suppress noise). Furthermore, you get events created automatically and history for all your measured variables. This scheme might look complicated at first glance, and it's quite old-
fashioned (no C++ yet), but it has proven stable over the last ~20 years or so. Maybe the biggest advantage of this system is that each device gets its own readout and control thread, so one slow device cannot 
block other devices. When this was introduced more than 10 years ago, we saw a big improvement in all experiments.
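
As an illustration (an outline only, modelled on the nulldev.c/slowcont example in the distribution; the exact command list and argument order should be checked against the real examples), a device driver for 
strategy 1) looks roughly like this:

#include <stdarg.h>
#include "midas.h"

INT mydev(INT cmd, ...)
{
   va_list argptr;
   INT channel, status = FE_SUCCESS;
   float value, *pvalue;
   void *info;

   va_start(argptr, cmd);
   switch (cmd) {
   case CMD_INIT:
      /* open the Ethernet connection to the hardware here */
      break;
   case CMD_SET:                        /* demand value -> hardware */
      info    = va_arg(argptr, void *);
      channel = va_arg(argptr, INT);
      value   = (float) va_arg(argptr, double);
      /* send "set channel to value" to the device */
      break;
   case CMD_GET:                        /* hardware -> measured value */
      info    = va_arg(argptr, void *);
      channel = va_arg(argptr, INT);
      pvalue  = va_arg(argptr, float *);
      /* read the channel from the device */
      *pvalue = 0.f;
      break;
   case CMD_EXIT:
      break;
   }
   va_end(argptr);
   return status;
}

The driver is then listed in a DEVICE_DRIVER array and attached to an EQ_SLOW equipment, exactly as in the slowcont example.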

2) Do everything by yourself. This is the way you describe below. Of course you are free to do whatever you like, and you will find "special" solutions for all your problems. But then you move away from the "standard 
scheme", you lose all the benefits I described under 1), and of course you are on your own. If something does not work, it will be in your code and nobody from the community can help you.

So choose carefully between 1) and 2).

Best regards,
Stefan

> Dear MIDAS developers and colleagues,
> 
>     This is Yu CHEN of School of Physics, Sun Yat-sen University, China, working in the PandaX-III collaboration, an experiment under development to search the neutrinoless double 
> beta decay. We are working on the DAQ and slow control systems and would like to use Midas framework to communicate with the custom hardware systems, generally via Ethernet 
> interfaces. So currently we are focusing on the development of the FRONTEND program of Midas and have some questions and discussions on its ODB structure. Since I’m still not 
> experienced in the framework, it would be precious that you can provide some suggestions on the topic.
>     The current structure of the frontend ODB tree we have designed, together with our understanding on them, is as follows:
>       /Equipment/<frontend_name>/
>         ->  /Common/: Basic controls and left unchanged.
>         ->  /Variables/: (ODB records with MODE_READ) Monitored values that are AUTOMATICALLY updated from the bank data within each packed event. It is done by the update_odb() 
> function in mfe.cxx.
>         ->  /Statistics/: (ODB records with MODE_WRITE) Default status values that are AUTOMATICALLY updated by the calling of db_send_changed_records() within the mfe.cxx.
>         ->  /Settings/: All the user defined data can be located here. 
>             -> /stat/: (ODB records with MODE_WRITE) All the monitored values as well as program internal status. The update operation is done simultaneously when 
> db_send_changed_records() is called within the mfe.cxx.
>             -> /set/: (ODB records with MODE_READ) All the “Control” data used to configure the custom program and custom hardware.
> 
>     For our application, some of the our detector equipment outputs small amount of status and monitored data together with the event data, so we currently choose not to use EQ_SLOW 
> and 3-layer drivers for the readout. Our solution is to create two ODB sub-trees in the /Settings/ similar to what the device_driver does. However, this could introduce two 
> troubles:
>     1) For /Settings/stat/: To prevent the potential destroy on the hot-links of /Variables/ and /Statistics/ sub-trees, all our status and monitored data are stored separately in 
> the /Settings/stat/ sub-tree. Another consideration is that some monitored data are not directly from the raw data, so packaging them into the Bank for later showing in /Variables/ 
> could somehow lead to a complicated readout() function. However, this solution may be complicated for history loggings. I have find that the ANALYZER modules could provide some 
> processes for writing to the /Variables/ sub-tree, so I would like to know whether an analyzer should be used in this case.
>     2) For /Settings/set/: The “control” data (similar to the “demand” data in the EQ_SLOW equipment) are currently put in several /Settings/set/ sub-trees where each key in 
> them is hot-linked to a pre-defined hardware configuration function. However, some control operations are not related to a certain ODB key or related to several ODB keys (e.g. 
> configuration the Ethernet sockets), so the dispatcher function should be assigned to the whole sub-tree, which I think can slow the response speed of the software. What we are 
> currently using is to setup a dedicated “control key”, and then the input of different value means different operations (e.g. 1 means socket opening, 2 means sending the UDP 
> packets to the target hardware, et al.). This “control key” is also used to develop the buttons to be shown on the Status/Custom webpage. However, we would like to have your 
> suggestions or better solutions on that, considering the stability and fast response of the control.
> 
>     We are not sure whether the above understanding and troubles on the Midas framework are correct or they are just due to our limits on the knowledge of the framework, so we 
> really appreciate your knowledge and help for a better using on Midas. Thank you so much!
Entry  18 May 2009, Exaos Lee, Suggestion, Question about using mvmestd.h 
The "mvmestd.h" uses the following function to open a VME device:
int mvme_open(MVME_INTERFACE **vme, int idx)
I found that the "driver/vme/sis3100/sis3100.c" uses the implementation as:
   /* open VME */
   sprintf(str, "/dev/sis1100_%02dremote", idx);
   (*vme)->handle = open(str, O_RDWR, 0);
   if ((*vme)->handle < 0)
      return MVME_NO_INTERFACE;
   }

The problem is: I renamed my SIS1100 devices as /dev/sis1100/xxxxx. So I have to hack the "sis3100.c".
Shall we have a smarter way? :-)
    Reply  18 May 2009, Stefan Ritt, Suggestion, Question about using mvmestd.h 

Exaos Lee wrote:
The "mvmestd.h" uses the following function to open a VME device:
int mvme_open(MVME_INTERFACE **vme, int idx)
I found that the "driver/vme/sis3100/sis3100.c" uses the implementation as:
   /* open VME */
   sprintf(str, "/dev/sis1100_%02dremote", idx);
   (*vme)->handle = open(str, O_RDWR, 0);
   if ((*vme)->handle < 0)
      return MVME_NO_INTERFACE;
   }

The problem is: I renamed my SIS1100 devices as /dev/sis1100/xxxxx. So I have to hack the "sis3100.c".
Shall we have a smarter way? :-)


In principle one could pass the device name to the user level. But I would like to keep the same code for Windows and Linux, and Windows does not need a device name. So you can either hack the file (I'm pretty sure it won't change in the next few years), or do what I do and make a symbolic link

/dev/sis1100/xxxx -> /dev/sis1100_00remote
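
Concretely (assuming /dev/sis1100/xxxx is the real device node created by the driver), one would create the name that sis3100.c opens as a link pointing to it:

ln -s /dev/sis1100/xxxx /dev/sis1100_00remote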

Best regards,

Stefan
    Reply  19 May 2009, Konstantin Olchanski, Suggestion, Question about using mvmestd.h 
> The problem is: I renamed my SIS1100 devices as /dev/sis1100/xxxxx. So I have to hack the 
"sis3100.c".

As in the old joke, "Doctor, it hurts when I do *this*; Doctor answers: then don't do it!"

But I am curious why you want to change the "manufacturer-default" device names. For the vmivme.c and 
gefvme.c drivers that we use at TRIUMF, there is no obvious reason or gain from changing device names.

K.O.
    Reply  20 May 2009, Exaos Lee, Suggestion, Question about using mvmestd.h 
> > The problem is: I renamed my SIS1100 devices as /dev/sis1100/xxxxx. So I have to hack the 
> "sis3100.c".
> 
> As in the old joke, "Doctor, it hurts when I do *this*; Doctor answers: then don't do it!"
> 
> But I am curious why you want to change the "manufacturer-default" device names. For the vmivme.c and 
> gefvme.c drivers that we use at TRIUMF, there is no obvious reason or gain from changing device names.
> 
> K.O.

I used the old V2.04 driver for the SIS1100/SIS3100. The old package contains a script which creates devices
as /tmp/sis1100_XXXX. So I created another script and installed it into /etc/init.d/. That script can be
invoked using standard rc.d tools. In order to keep the /dev directory tidy, it creates the device files
in just one directory, /dev/sis1100/. That's the story.

Now I found that the new sis1100.ko of version 2.12 can create devices automatically as /dev/sis1100_xxxx.
So my script can be retired, and I no longer need to hack "sis3100.c".
Entry  21 Jul 2018, Hiroaki Natori, Forum, Question about distributing event builder function on remote PC (attachment: MIDAS_tmp.pdf)
Dear expert,

I'm going to develop the MIDAS DAQ for the COMET experiment.
I'm thinking of distributing the load of event building to different PCs.
I attach a schematic of one example of the design.
Please tell me how I can accomplish a kind of "sub-EventBuilder".
I'm reading the midas code to understand the scheme of MIDAS, but
it is a lot of code and I want to know which part to focus on.
Can I do it by writing user functions based on either "mfe.c" or "mevb.c"?
Is a frontend program with multithreaded equipment the way to do it?
Or should I modify the original midas files?


Best regards,
Hiroaki Natori
    Reply  23 Jul 2018, Konstantin Olchanski, Forum, Question about distributing event builder function on remote PC 
> I'm going to develop MIDAS DAQ for COMET experiment.
> I'm thinking to distribute the load of event building to different PCs.
> I attach a schematics of one of the examples of the design.

Your schematic is reminiscent of the T2K/ND280 structure where the MIDAS DAQ
was split into several separate MIDAS instances (separate "experiments": the FGD, the TPC, 
the slow controls, etc).

They were joined together by the "cascade" equipment which provided a path
for the data events to flow from subsidiary midas instances to the main system (the one
with the final mlogger). It also provided a reverse path for run control, where starting
a run in the main experiment also started the run in all the subsidiary experiments.

This cascade frontend was never included in the midas distribution (an oversight),
but I still have the code for it somewhere.

How many "frontend PC" components do you envision? (10, 100, 1000?).

In T2K/ND280, each subsidiary experiment had its own ODB, which made sense
because e.g. the FGD and the TPC were quite different and were managed by different
groups.

But for you it probably makes sense to have one common ODB. This means a MIDAS
structure where ODB is located on the main computer ("event builder PC"),
all others connect to it via the mserver and midas rpc.

But you will need the MIDAS shared event buffers on each "frontend PC" to be local,
which means the bm_xxx() functions have to run locally instead of through the mserver rpc.
This is not how midas works right now, but it could be modified to do this.

On the other hand, you do not have to use midas to write the "frontend pc" code. Today's
C++ provides enough features - threads, locks, mutexes, shared memories, event queues,
etc. - so you can write the whole sub-event builder as one monolithic C++ program
and use midas only to send the data to the main event builder (plus midas rpc to handle
run control). In this scheme, technically, this "frontend pc" program would
be a multithreaded midas frontend.
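
To make that concrete, here is a very rough sketch (the names and structure are mine,
not existing midas code) of the kind of thread-safe fragment queue plus builder thread
such a monolithic program would be built around:

#include <condition_variable>
#include <cstdint>
#include <map>
#include <mutex>
#include <queue>
#include <vector>

struct Fragment {
   uint32_t trigger;             // trigger/serial number used for matching
   int board;                    // which readout board produced it
   std::vector<uint8_t> data;    // raw payload
};

class FragmentQueue {
public:
   void push(Fragment f) {
      std::lock_guard<std::mutex> lock(m);
      q.push(std::move(f));
      cv.notify_one();
   }
   Fragment pop() {              // blocks until a fragment is available
      std::unique_lock<std::mutex> lock(m);
      cv.wait(lock, [this] { return !q.empty(); });
      Fragment f = std::move(q.front());
      q.pop();
      return f;
   }
private:
   std::queue<Fragment> q;
   std::mutex m;
   std::condition_variable cv;
};

// builder thread: the readout threads push fragments, this thread collects
// nboards fragments per trigger number and ships the assembled sub-event
void builder(FragmentQueue &queue, int nboards)
{
   std::map<uint32_t, std::vector<Fragment>> pending;
   for (;;) {
      Fragment f = queue.pop();
      uint32_t trig = f.trigger;
      auto &v = pending[trig];
      v.push_back(std::move(f));
      if ((int) v.size() == nboards) {
         // pack v into banks and hand the built sub-event to the main
         // event builder here (e.g. via the normal frontend path)
         pending.erase(trig);
      }
   }
}

Each readout board gets its own thread that calls push(), and only the builder output
ever leaves the machine.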

K.O.
    Reply  28 Jul 2018, Hiroaki Natori, Forum, Question about distributing event builder function on remote PC 
Dear Mr. Olchanski

Thank you for your comment.
We expect the number of readout channels to be ~1000, boards ~100 and frontend PCs <10.
We expect that the trigger rate is a few kHz.

Writing monolithic C++ code may require a complete understanding of midas,
and I will think more about whether to write from scratch or modify the midas code.

Best regards
Hiroaki Natori

> > I'm going to develop MIDAS DAQ for COMET experiment.
> > I'm thinking to distribute the load of event building to different PCs.
> > I attach a schematics of one of the examples of the design.
> 
> Your schematic is reminiscent of the T2K/ND280 structure where the MIDAS DAQ
> was split into several separate MIDAS instances (separate "experiments": the FGD, the TPC, 
> the slow controls, etc).
> 
> They were joined together by the "cascade" equipment which provided a path
> for the data events to flow from subsidiary midas instances to the main system (the one
> with the final mlogger). It also provided a reverse path for run control, where starting
> a run in the main experiment also started the run in all the subsidiary experiments.
> 
> This cascade frontend was never included in the midas distribution (an oversight),
> but I still have the code for it somewhere.
> 
> How many "frontend PC" components do you envision? (10, 100, 1000?).
> 
> In T2K/ND280, each subsidiary experiment had it's own ODB which made sense
> because e.g. the FGD and the TPC were quite different and were managed by different
> groups.
> 
> But for you it probably makes sense to have one common ODB. This means a MIDAS
> structure where ODB is located on the main computer ("event builder PC"),
> all others connect to it via the mserver and midas rpc.
> 
> But you will need to have the MIDAS shared event buffers on each "frontend PC" to be local,
> which means the bm_xxx() functions have run locally instead of throuhg the mserver rpc.
> This is not how midas works right now, but it could be modified to do this.
> 
> On the other hand, you do not have to use midas to write the "frontend pc" code. Today's
> C++ provides enough features - threads, locks, mutexes, shared memories, event queues,
> etc so you can write the whole sub-event builder as one monolithic c++ program
> and use midas only to send the data to the main event builder. (plus midas rpc to handle
> run control). In this scheme, technically, this "frontend pc" program would
> be a multithreaded midas frontend.
> 
> K.O.
Entry  26 Jan 2009, Derek Escontrias, Forum, Question - ODB access from a custom page 
Hi, I am looking for a way to mutate ODB values from a custom page. I have been
using the edit attribute for the 'odb' tag, but for some things it would be nice
if a form can handle the change. I have seen references to ODBSet on the forums,
but I haven't been able to find documentation on it. Is there an available
Javascript library for Midas and/or are there more tags than I am aware of (I am
only aware of the 'odb' tag)?
    Reply  27 Jan 2009, Suzannah Daviel, Forum, Question - ODB access from a custom page 
At present the only documentation on the Javascript library is in this elog,
e.g. Message 496, 31 Jul 08.

The Javascript library, which you can view at
http://<your mhttpd host>/mhttpd.js,
now supports ODBEdit as well as ODBGet and ODBSet.

I advise you to get the latest version of mhttpd.c so you can use ODBEdit, which changes
the ODB value directly via ODBSet.

You use it like this:
document.write('<a href="#" onclick="ODBEdit(\'/Equipment/test/Variables/Demand[0]\')">');
document.write('<odb src="/Equipment/test/Variables/Demand[0]">');
document.write('</a>');

You can also use HTML to edit the variables, but the advantage of Javascript is that
you can use variable ODB paths, so it is more powerful.

Here is an example of using a form on a custom page to edit a variable (in the
example, the run number) using Javascript (ODBEdit) and HTML. 

To try this example, in ODB, create key (STRING)
/custom/try& 
 and set it to "/home/user/try.html"

where the path of the example code on the disk is  /home/user/try.html

This will put an alias link on the Main Status page called "try" which you click on
to see the custom page.

Code of try.html:

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 TRANSITIONAL//EN">
<html><head>
<title> ODBEdit test</title>
<script src="/js/mhttpd.js" type="text/javascript"></script>

<script type="text/javascript">
var my_action = '"/CS/try&"'
var rn
var path

document.write('</head><body>')
document.write('<form method="get" name="form2" action='+my_action+'> ') 
document.write('<input name="exp" value="'+my_expt+'" type="hidden">');

document.write('Using Javascript and ODBEdit: <br>')
path='/runinfo/run number'
rn = ODBGet(path)
document.write('Run Number: '+rn+'<br>')
document.write('Edit Run Number:')
document.write('<a href="#" onclick="ODBEdit(path)" >')
document.write(rn)
document.write('</a>');
document.write('<br>');
</script>
Using HTML :
Using edit=2 ... Run Number: <odb src="/runinfo/run number" edit=2>
Using edit=1 ... Run Number: <odb src="/runinfo/run number" edit=1>
</form>
</html>

Note the "edit=2" feature is handy so that you can use Javascript or HTML on your page and the user sees no difference.

> Hi, I am looking for a way to mutate ODB values from a custom page. I have been
> using the edit attribute for the 'odb' tag, but for some things it would be nice
> if a form can handle the change. I have seen references to ODBSet on the forums,
> but I haven't been able to find documentation on it. Is there an available
> Javascript library for Midas and/or are there more tags than I am aware of (I am
> only aware of the 'odb' tag)?
Entry  03 Oct 2023, Gennaro Tortone, Bug Report, Python midas.file_reader get_eor_odb_dump() 
Hi,

the method get_eor_odb_dump() of midas.file_reader does not contain an
initial jump_to_start() and this is a problem if the following access
pattern is used:

---

mfile = midas.file_reader.MidasFile("run00008.mid.lz4")
begin_odb = mfile.get_bor_odb_dump().data

# loop on data events
...

end_odb = mfile.get_eor_odb_dump().data

---

in this case the script ends with a RuntimeError ("Unable to find EOR event") and
forces the user to do a manual mfile.jump_to_start() before mfile.get_eor_odb_dump().
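
For reference, the workaround looks like this (only the calls already shown above are used):

import midas.file_reader

mfile = midas.file_reader.MidasFile("run00008.mid.lz4")
begin_odb = mfile.get_bor_odb_dump().data

# ... loop on data events as usual ...

mfile.jump_to_start()   # rewind so the EOR event can be found
end_odb = mfile.get_eor_odb_dump().data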

Thanks,
Gennaro
    Reply  16 Oct 2023, Ben Smith, Bug Report, Python midas.file_reader get_eor_odb_dump() 
Thanks for the bug report Gennaro! 

I've fixed the code so that we'll now find the end-of-run ODB dump even if the user is already at the end of the file when they call get_eor_odb_dump().

Ben