  1499   18 Mar 2019   Andreas Suter   Bug Report   mhttpd - slowcontrol frontend - multi class driver
When using a slow control frontend which operates a device through the multi class
driver, the current midas version (ec3225902d6) has the following issue:

There is a row labeled with: All Input Output

So far, clicking e.g. on Input displayed only the Input-related part, etc.

Currently this leads to the following error message: 

Error: cannot find key LS336/Variables/Input

where LS336 is my DD (device driver).
  1498   16 Mar 2019   Gennaro Tortone   Forum   assertion failed
Hi,
I'm developing a Slow Control equipment on a Linux board that sends data to a remote server
running 'mserver'; the build goes fine, but when I run the executable it seems that an assertion in
midas.c fails:

[dfe01,INFO] Slow control equipment initialized
dfe: src/midas.c:838: cm_msg_flush_buffer: Assertion `rp[3]=='_'' failed.

If I remove line 838 from midas.c (fixing the message length) the problem disappears...

Regards,
Gennaro
  1497   14 Mar 2019   Konstantin Olchanski   Info   Gyrations of custom pages and ODB /Custom/Path
> > In my mind only one issue remains - when we say "we will serve files from directory X", how
> > do we prevent mhttpd from serving files outside this directory by using trick URLs containing ".."
> > and/or other gimmicks.
> > 
> > Disallow multi-level URL path names (by rejecting names that contain the directory separator "/").
> > Replace the URL check "/ is not permitted" with ".. is not permitted".
>
> https://en.wikipedia.org/wiki/Directory_traversal_attack

and, from Mitre's "Common Weakness Enumeration", with examples:

http://cwe.mitre.org/data/definitions/22.html

K.O.
  1496   14 Mar 2019   Konstantin Olchanski   Info   Gyrations of custom pages and ODB /Custom/Path
> In my mind only one issue remains - when we say "we will serve files from directory X", how
> do we prevent mhttpd from serving files outside this directory by using trick URLs containing ".."
> and/or other gimmicks.
> 
> Disallow multi-level URL path names (by rejecting names that contain the directory separator "/").
> Replace the URL check "/ is not permitted" with ".. is not permitted".
> 

https://en.wikipedia.org/wiki/Directory_traversal_attack

K.O.
  1495   14 Mar 2019   Konstantin Olchanski   Info   Gyrations of custom pages and ODB /Custom/Path
I now understand Stefan's and Thomas's proposal a little bit better.

In my mind only one issue remains - when we say "we will serve files from directory X", how
do we prevent mhttpd from serving files outside this directory by using trick URLs containing ".."
and/or other gimmicks.

The code I currently put in mhttpd disallows multi-level URL path names (by rejecting names that contain the directory separator "/").

This has the effect of keeping the mhttpd URL space flat (without subdirectories).

http://localhost:8080/Status.html    <--- URLs stay at the top level like this; no multi-level URLs like we used to have:
http://localhost:8080/CS/Custom.html <--- no more of these

Keeping the URL space restricted to one level is important if we do not want to defeat
the recent change to the mhttpd URL scheme. If mhttpd runs behind a proxy, without
a "Base URL" (which we just removed), we can only use relative URLs to navigate
between midas pages, and if we permit multi-level URLs, it becomes hard to get
back to the status page without the ugly counting of ".." URL elements (ugly and
brittle code that we also removed recently). (N.B. to navigate from CS/Custom.html
to the status page, one must redirect to "../Status.html".)

But this whole beautiful cathedral falls apart on one valid use case: we want to serve "jsroot"
from a subdirectory called "jsroot" - this is how that package is packaged, and we do not want
to mess with it just to make midas happy.

So, at the very least, we must enable serving of multi-level URL path names in order to serve 3rd-party packages.

The most trivial way out is to replace the URL check "/ is not permitted" with ".. is not permitted".

(One could also have a list of all permitted subdirectories in ODB, but this would be hard to use and 
difficult to implement. Not my favourite solution.)

This will break the flatness of the mhttpd URLs (no subdirectories). But maybe it is sufficient
to write down "do not do this!" and close as "wontfix" all bug reports along the lines of "my custom page
is at http://localhost:8080/mycustomdir/new/verynew/custom.html, how come the [status] button
does not take me back to the status page?".


K.O.

> Parsing all URLs in mhttpd to prevent /etc/passwd etc. from being returned is tricky, because people can use escape sequences etc. Therefore I think it is much better to restrict file access 
> on the file system level when opening a file. The only escape one could have there is "..", which can be tested for easily. 
> 
> Therefore, I propose to restrict file access to two well-defined directories, which is one system directory and one user directory. The system directory should be defined via 
> $MIDASSYS/resources, and the user directory should be the experiment directory (as defined in exptab) followed by "resources". So if MIDASSYS equals to /usr/local/midas and the 
> experiment directory equals to /home/users/exp for example, we would only have these two directories (and of course the subdirectories within these) served by mhttpd:
> 
> $MIDASSYS/resources -> /usr/local/midas/resources
> <exptab>/resources -> /home/users/exp/resources
> 
> These directories should be hard-wired into mhttpd, and not go through an ODB entry, since otherwise one could manipulate the ODB entries (knowingly or unknowingly) and open a 
> back-door. 
> 
> If users need a more complex structure, they can put soft links into these directories.
> 
> The code which opens a resource file should then first evaluate $MIDASSYS, then add "/resources/", then add the requested file name, make sure that there is no ".." in the file name, 
> then open the file. If not existing, do the same for the <exptab>/resources/ directory.
> 
> This change will break most experiments and force people to move their custom pages to different directories, but I think it's the only clean solution and we just have to bite the 
> bullet.
> 
> Comments are welcome.
> 
> Stefan
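
A minimal sketch (not code from this thread) of the scheme discussed above - refuse any file name containing "..", then look the file up first under $MIDASSYS/resources and then under <exptab>/resources. The function and variable names here are purely illustrative, not actual mhttpd code:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Illustrative only: serve files from two fixed "resources" directories,
   refusing directory-traversal attempts via "..". */
FILE *open_resource_file(const char *filename, const char *exp_dir)
{
   char path[1024];
   FILE *fp = NULL;
   const char *midassys = getenv("MIDASSYS");

   if (strstr(filename, "..") != NULL)
      return NULL;                       /* ".." is not permitted */

   if (midassys) {
      snprintf(path, sizeof(path), "%s/resources/%s", midassys, filename);
      fp = fopen(path, "r");
   }
   if (fp == NULL && exp_dir != NULL) {
      snprintf(path, sizeof(path), "%s/resources/%s", exp_dir, filename);
      fp = fopen(path, "r");
   }
   return fp;
}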
  1494   14 Mar 2019   Konstantin Olchanski   Info   switch to json odb dump format
Regarding odb dumps saved into the midas output files, there are several
requests to make it possible to save odb in the json format.

Since 2019-02-12 commits
https://bitbucket.org/tmidas/midas/commits/ccaeeef4d6a1f0d91a67f2dc0a68105b7f6b77ae
and
https://bitbucket.org/tmidas/midas/commits/b3251587a68ef577919c81a221559e9af4dc8ae6

The format of the odb dump is specified by ODB
/Logger/Channels/0/Settings/ODB dump format (default is "json", was hardcoded as "xml").

The filename of the odb dump file saved at the end of run is specified by ODB
/Logger/ODB Last Dump File (default is "last.json", was hardcoded as "last.xml").

In addition, the following defaults have been changed, enabling LZ4 compression and CRC32C checksums by default:

/Logger/ODB dump file (new default is "run%05d.json", was "run%05d.odb")
/Logger/Channels/0/Settings/
   Output "FILE"
   Compress "lz4"
   Checksum "CRC32C"

This brings mlogger default settings closer to what one would expect in the 21st century - "free" compression (LZ4)
and output file data integrity protected by checksums (CRC32C). These defaults are "free" in the sense that turning
them off would not noticeably improve the system performance. (Some users may want to enable better compression,
such as BZIP2 or PBZIP2 and better checksums, such as SHA256 or SHA512 - both choices that have significant CPU-use cost).

The default ODB dump format is now consistently "json" (vs the previous mix of XML and ODB formats).

Why "json" as the default? There has to be some default. The old ODB dump format is right out. Compared to XML,
JSON dump file size is smaller and the encoder is faster (important for reducing the time to start a run - where ODB dump is done twice). Tools for parsing 
JSON data are more widely available and "JSON has won in the marketplace". See also https://www.w3schools.com/js/js_json_xml.asp

Those who want to continue to use XML ODB dumps can trivially continue to do so by changing the settings listed above.

In the rootana package, we have the XmlOdb class for parsing XML ODB dumps, but no matching JsonOdb class for parsing
JSON ODB data. This reminds me of difficulties working with XML data. I originally wrote the XmlOdb class using
the "libxml2" package. After discovering that many Linux distributions do not install this package by default, I switched
to the DOM and XML parsers from ROOT (since rootana is not very useful without ROOT anyway). Only to discover that
some binary ROOT distributions distributed by CERN exclude the XML parser from their default build.

I do intend to write the JsonOdb class for rootana, and I will probably do it using the mjson.{h,cxx} parser that I wrote
for MIDAS. (Lack of the JsonOdb class is the reason I did not switch MIDAS ODB dumps to JSON much earlier
than now. The camel's back was broken by the extra-slow run start time in the ALPHA-g DAQ system.)

K.O.
  1493   14 Mar 2019   Konstantin Olchanski   Info   bitbucket issue tracker "feature"
> It turns out the bitbucket issue tracker has a feature - I cannot make it automatically add me 
> to the watcher list of all new issues.
> 
> So when you create a new issue, I think I get one message about it, and no more.
> 

I retested this and I confirm that for new issues I am not a watcher "by default". If I reply to an issue,
the system enters an inconsistent state - it thinks I am a watcher, as best I can tell, but it also
says "0 watchers".

K.O.
  1492   14 Mar 2019   Konstantin Olchanski   Forum   systemd unit file for mhttpd
> As far as NIS is concerned, I am sorry but I don't know how it is used by MIDAS.

NIS is traditionally used together with autofs to form clusters of UNIX/Linux machines. NIS is a database
of user names, passwords, home directories, NFS (autofs) mount points and NFS exports. autofs reads
all of its configuration from the NIS database.

So the correct boot sequence in most cases would be like this:
hardware init -> network init -> nis -> autofs -> users can login -> start midas (mhttpd) from cron @reboot or similar.

(In organizations that have a dedicated staff of IT sysadmins, you would see LDAP instead of NIS).

K.O.
  1491   13 Mar 2019   Pintaudi Giorgio   Forum   systemd unit file for mhttpd
> Note: user name "neo" and home directory is hardwired into the unit file. Also
> it runs after "network.target", this may be too early, it should run after nis and autofs
> have started (and made home directories accessible). (not sure what systemd target
> that is).

Thank you very much for the comments!

My home directory is hardwired because it is not straightforward to add environment variables into 
systemd units. Actually, I install MIDAS through a bash shell script that automatically generates 
the unit file during installation. So, for another user who would use my script, the correct path 
in the unit file would be generated at installation time. Another option would be to create an 
environment file and then feed it to the unit file (EnvironmentFile directive) as explained here: 
https://coreos.com/os/docs/latest/using-environment-variables-in-systemd-units.html

For the autofs, thank you for the hint. I have modified the unit file accordingly.

As far as NIS is concerned, I am sorry but I don't know how it is used by MIDAS. Actually, I don't 
even have it installed on my machine. Anyway, I have modified the unit file accordingly (but I 
haven't tested with NIS installed).

The modified unit file is this:

[Unit]
Description=MIDAS data acquisition system
After=network.target rpcbind.target ypbind.target
StartLimitIntervalSec=0
RequiresMountsFor=%h

[Service]
Type=simple
Restart=always
RestartSec=3
User=neo
ExecStart=/opt/midas/bin/mhttpd -e <nameofyourexperiment> --http <yourhttpport> --https <yourhttpsport>
Environment="MIDASSYS=/opt/midas" "MIDAS_EXPTAB=<path/to/your/exptab>" "MIDAS_EXPT_NAME=<nameofyourexperiment>" "SVN_EDITOR=emacs -nw" "GIT_EDITOR=emacs -nw"
PassEnvironment=MIDASSYS MIDAS_EXPTAB MIDAS_EXPT_NAME SVN_EDITOR GIT_EDITOR

[Install]
WantedBy=multi-user.target
  1490   13 Mar 2019   Konstantin Olchanski   Forum   Run length
I did not quite understand your desired sequence, is this what you want:

- at 1pm
- start a run
- record 100 events
- end the run
- (this will be, say, 1:15pm)
- start a run
- at 2pm
- end the run
- start a run
- record 100 events
- ad infinitum

There are 2 difficulties with this:

1) If you want your cycle to be exactly 1 hour, you need to use cron or something similar - if you just "start; sleep 3600; stop", 
your cycle will be slightly longer than 1 hour because starting and stopping runs takes some time to complete.

2) if you want your "100 event" run to start exactly precisely on the hour, you need to stop the previous run a few 
minutes/seconds before the hour to avoid the "run stop" delay.

Instead of using the sequencer, I would use a shell script (run it from crontab to avoid problem (1))

#!/bin/sh
mtransition stop     # stop the previous long run
odbedit -c 'set "/logger/channels/0/settings/event limit" 100'
odbedit -c 'set "/logger/auto restart" y'
mtransition start    # start the short run
# end

In your frontend end_of_run() function, set "/logger/channels/0/settings/event limit" back to 0.
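
A hedged sketch (not from the original message) of what that ODB write could look like in a standard mfe.c frontend, using the midas ODB API. Adjust the C type and TID to match the actual type of the "Event limit" key in your ODB:

#include "midas.h"

INT end_of_run(INT run_number, char *error)
{
   HNDLE hDB;
   INT limit = 0;   /* match the ODB key type (INT here; may differ in your MIDAS version) */

   cm_get_experiment_database(&hDB, NULL);
   db_set_value(hDB, 0, "/Logger/Channels/0/Settings/Event limit",
                &limit, sizeof(limit), 1, TID_INT);
   return SUCCESS;
}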

This will produce the following sequence:

- script will stop previous long run
- set event limit to 100, start the "100 events" run
- logger will stop at 100 events, call your frontend end_of_run(), set event limit to 0
- logger auto restart will start a new run, event limit is now 0, this is your long run
- on the hour, cron runs your script, cycle repeats from the top.

Instead of cron, you can use a looper script. Note that you must run
the main script in the background (note the "&") to avoid problem (1).

#!/bin/sh
# looper script
while true; do
   main_script &
   sleep 3600
done
# end

To stop the sequence, kill the looper script.

K.O.


> Dear all,
>         I need to implement a DAQ sequence where a short run (100 events, which takes a couple of 
> minutes) is taken every hour, with a long run in between two short runs. In the sequencer, I can do:
> 
> LOOP infinite
> 
> .... some ODB settings ....
>      TRANSITION START
>      WAIT events 100
>      TRANSITION STOP
> 
> .... some ODB settings ....
>      TRANSITION START
>      WAIT seconds 3600
>      TRANSITION STOP
> 
> ENDLOOP
> 
> 
> I have two questions: 
> 
> - for the long run, I want to write on disk only a maximum number of events. I think I can suppress 
> the event polling in the frontend, with an ODB query of the number of collected events. I'm 
> wondering if there is a smarter way to do that. It is also ok if the run is stopped after a maximum 
> number of events, but the subsequent short run should still start exactly after 1h from the previous 
> short run. 
> 
> - with the script above, the real time lapse between the start of two short runs would depend on 
> the duration of the short run itself. Is there a way to start the short run exactly 1 h after the starting 
> of the previous short run?
> 
> Thank you in advance for your help,
>               Francesco
  1489   13 Mar 2019   Konstantin Olchanski   Forum   Problem stopping every second run
>          I'm running a DAQ frontend and it works well if one single run is
> taken. If I try to take a second run right after, the run is performed but, when
> stopping it, I get the error messages below. Any hint?

Sure. I will read the error messages for you: note that they come in reverse time order - oldest message is at the bottom. I 
will reverse them in my reading:

> 11:42:23.994 2019/03/12 [cygnus_daq,ERROR] [midas.c:1951:,ERROR]
> cm_disconnect_experiment not called at end of program

your frontend has exited (this error message is printed by the atexit() code, so you did not crash but called exit(), 
somehow).

> 11:42:24.011 2019/03/12 [mhttpd,ERROR] [system.c:4661:recv_tcp2,ERROR]
> unexpected connection closure

mhttpd reports that the socket connection to your frontend has closed (because the frontend stopped, closing all its
sockets)

> 11:42:24.012 2019/03/12 [mhttpd,ERROR] [system.c:4715:ss_recv_net_command,ERROR]
> error receiving network command header, see messages

the next layer of mhttpd network code reports the same error again

> 11:42:24.012 2019/03/12 [mhttpd,ERROR] [midas.c:10821:rpc_client_call,ERROR]
> call to "cygnus_daq" on "localhost" RPC "rc_transition": error,
> ss_recv_net_command() status 411

the next layer of mhttpd network code reports the same error again; now we see that it was trying
to execute a "run start" (or "stop") RPC call to your frontend. But your frontend unexpectedly
shut down (instead of replying to the RPC).

The next messages show that mhttpd decided your frontend is faulty (does not respond to RPC correctly) and tried to 
shut it down, but failed (cannot connect, etc). The last message shows mhttpd cleaning up your frontend from ODB (because the 
frontend did not clean up after itself - it did not call cm_disconnect_experiment(), per the very first error message).


So this is what we see from the midas messages - your frontend unexpectedly exited during the run transition - if as you 
say the run was stopping at the time, it would be in your end_of_run() function.

To debug this, I would do:

a) put some printf() statements in end_of_run() and see what they say during the crash
b) run the frontend inside the debugger, you may need to set a breakpoint on exit() or something like that.
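
If running inside a debugger is inconvenient, a hedged alternative (not suggested in the thread) is to register an atexit() handler in the frontend that prints a stack trace, so the stray exit() call shows where it came from. This sketch uses the glibc backtrace functions; the handler name and registration point are illustrative:

#include <execinfo.h>   /* backtrace(), backtrace_symbols_fd() - glibc */
#include <stdlib.h>
#include <unistd.h>

/* Print the call stack to stderr whenever exit() is called. */
static void report_exit(void)
{
   void *frames[32];
   int n = backtrace(frames, 32);
   backtrace_symbols_fd(frames, n, STDERR_FILENO);
}

/* Register it once, e.g. in frontend_init():  atexit(report_exit); */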


Good luck,
K.O.





> 
> Thank you for your help,
>       Francesco
> 
> 
> 11:42:24.012 2019/03/12 [mhttpd,ERROR] [midas.c:6022:cm_shutdown,ERROR] Killing
> and Deleting client 'cygnus_daq' pid 12472
> 
> 11:42:24.012 2019/03/12 [mhttpd,ERROR] [midas.c:6019:cm_shutdown,ERROR] Cannot
> connect to client 'cygnus_daq' on host 'localhost', port 46341
> 
> 11:42:24.012 2019/03/12 [mhttpd,ERROR] [midas.c:9539:rpc_client_connect,ERROR]
> cannot connect to host "localhost", port 46341: connect() returned -1, errno 111
> (Connection refused)
> 
> 11:42:24.012 2019/03/12 [mhttpd,ERROR] [midas.c:10821:rpc_client_call,ERROR]
> call to "cygnus_daq" on "localhost" RPC "rc_transition": error,
> ss_recv_net_command() status 411
> 
> 11:42:24.012 2019/03/12 [mhttpd,ERROR] [system.c:4715:ss_recv_net_command,ERROR]
> error receiving network command header, see messages
> 
> 11:42:24.011 2019/03/12 [mhttpd,ERROR] [system.c:4661:recv_tcp2,ERROR]
> unexpected connection closure
> 
> 11:42:23.994 2019/03/12 [cygnus_daq,ERROR] [midas.c:1951:,ERROR]
> cm_disconnect_experiment not called at end of program
  1488   13 Mar 2019   Konstantin Olchanski   Forum   systemd unit file for mhttpd
> > Can you post your systemd unit file to this elog, others may find it useful.

Thank you very much!

Note: the user name "neo" and home directory are hardwired into the unit file. Also,
it runs after "network.target"; this may be too early - it should run after NIS and autofs
have started (and made home directories accessible). (Not sure what systemd target
that is.)

K.O.

> 
> [Unit]
> Description=MIDAS data acquisition system
> After=network.target
> StartLimitIntervalSec=0
> 
> [Service]
> Type=simple
> Restart=always
> RestartSec=3
> User=neo
> ExecStart=/opt/midas/bin/mhttpd -e WAGASCI --http 8081 --https 8444
> Environment="MIDASSYS=/opt/midas" "MIDAS_EXPTAB=/home/neo/Code/WAGASCI/MIDAS/online/exptab" "MIDAS_EXPT_NAME=WAGASCI" "SVN_EDITOR=emacs -nw" "GIT_EDITOR=emacs -nw"
> PassEnvironment=MIDASSYS MIDAS_EXPTAB MIDAS_EXPT_NAME SVN_EDITOR GIT_EDITOR
> 
> [Install]
> WantedBy=multi-user.target
  1487   12 Mar 2019   Pierre Gorel   Forum   Run length
> 
> .... some ODB settings ....
>      TRANSITION START
>      WAIT events 100
>      TRANSITION STOP
> I have two questions: 
> 
> - for the long run, I want to write on disk only a maximum number of events. I think I can suppress 
> the event polling in the frontend, with an ODB query of the number of collected events. I'm 
> wondering if there is a smarter way to do that. It is also ok if the run is stopped after a maximum 
> number of events, but the subsequent short run should still start exactly after 1h from the previous 
> short run. 

I don't know about a way to give you an exact number of events (maybe /Logger/Run duration). 

I personally use 
    WAIT ODBValue,"/Equipment/DTM/Statistics/Events sent",>,100

Where DTM is the frontend of my trigger. Because of the lag in the run stop, the run will always exceed the limit by a few
seconds times the event rate.

Hope it helps
  1486   12 Mar 2019   Stefan Ritt   Forum   Run length
> Is there a way to start the short run exactly 1 h after the starting 
> of the previous short run?

This is not possible with the current sequencer.
  1485   12 Mar 2019   Francesco Renga   Forum   Problem stopping every second run
Dear all,
         I'm running a DAQ frontend and it works well if one single run is
taken. If I try to take a second run right after, the run is performed but, when
stopping it, I get the error messages below. Any hint?

Thank you for your help,
      Francesco


11:42:24.012 2019/03/12 [mhttpd,ERROR] [midas.c:6022:cm_shutdown,ERROR] Killing
and Deleting client 'cygnus_daq' pid 12472

11:42:24.012 2019/03/12 [mhttpd,ERROR] [midas.c:6019:cm_shutdown,ERROR] Cannot
connect to client 'cygnus_daq' on host 'localhost', port 46341

11:42:24.012 2019/03/12 [mhttpd,ERROR] [midas.c:9539:rpc_client_connect,ERROR]
cannot connect to host "localhost", port 46341: connect() returned -1, errno 111
(Connection refused)

11:42:24.012 2019/03/12 [mhttpd,ERROR] [midas.c:10821:rpc_client_call,ERROR]
call to "cygnus_daq" on "localhost" RPC "rc_transition": error,
ss_recv_net_command() status 411

11:42:24.012 2019/03/12 [mhttpd,ERROR] [system.c:4715:ss_recv_net_command,ERROR]
error receiving network command header, see messages

11:42:24.011 2019/03/12 [mhttpd,ERROR] [system.c:4661:recv_tcp2,ERROR]
unexpected connection closure

11:42:23.994 2019/03/12 [cygnus_daq,ERROR] [midas.c:1951:,ERROR]
cm_disconnect_experiment not called at end of program
  1484   11 Mar 2019   Francesco Renga   Forum   Run length
Dear all,
        I need to implement a DAQ sequence where a short run (100 events, which takes a couple of 
minutes) is taken every hour, with a long run in between two short runs. In the sequencer, I can do:

LOOP infinite

.... some ODB settings ....
     TRANSITION START
     WAIT events 100
     TRANSITION STOP

.... some ODB settings ....
     TRANSITION START
     WAIT seconds 3600
     TRANSITION STOP

ENDLOOP


I have two questions: 

- for the long run, I want to write on disk only a maximum number of events. I think I can suppress 
the event polling in the frontend, with an ODB query of the number of collected events. I'm 
wondering if there is a smarter way to do that. It is also ok if the run is stopped after a maximum 
number of events, but the subsequent short run should still start exactly after 1h from the previous 
short run. 

- with the script above, the real time lapse between the start of two short runs would depend on 
the duration of the short run itself. Is there a way to start the short run exactly 1 h after the starting 
of the previous short run?

Thank you in advance for your help,
              Francesco
  1483   06 Mar 2019   Pintaudi Giorgio   Forum   Best MIDAS branch/version for "production"
> I see. Would this work as well? Instead of "make install" do this:
> su - root
> cd /opt
> git pull midas
> cd midas
> make
> add /opt/midas/linux/bin to your PATH. (is it time to get rid of the "linux" part from the default build path?!?)

Got it. I will do that in the future.

> Can you post your systemd unit file to this elog, others may find it useful.

[Unit]
Description=MIDAS data acquisition system
After=network.target
StartLimitIntervalSec=0

[Service]
Type=simple
Restart=always
RestartSec=3
User=neo
ExecStart=/opt/midas/bin/mhttpd -e WAGASCI --http 8081 --https 8444
Environment="MIDASSYS=/opt/midas" "MIDAS_EXPTAB=/home/neo/Code/WAGASCI/MIDAS/online/exptab" "MIDAS_EXPT_NAME=WAGASCI" 
"SVN_EDITOR=emacs -nw" "GIT_EDITOR=emacs -nw"
PassEnvironment=MIDASSYS MIDAS_EXPTAB MIDAS_EXPT_NAME SVN_EDITOR GIT_EDITOR

[Install]
WantedBy=multi-user.target
  1482   06 Mar 2019   Konstantin Olchanski   Info   mhttpd magic urls
> > Here is the list of mhttpd magic URLs.
> See additional magic URLs at the very bottom:

added redirect for ODB top level "root"

> > 
> > http "get" path:
> > 
> > handle_http_message()
> > handle_http_get()
> > ?mjsonrpc_schema -> serve mjsonrpc_get_schema() // JSON RPC Schema in JSON format
> > ?mjsonrpc_schema_text -> serve mjsonrpc_schema_to_text() // same, but human-readable
> > handle_decode_get()
> > decode_get()
> > interprete()
> > 
> > http "post" path:
> > 
> > handle_http_message()
> > handle_http_post()
> > ?mjsonrpc -> serve mjsonrpc_decode_post_data() // process RPC request
> > handle_decode_post()
> > decode_post()
> > - maybe decode file attachment
> > interprete()
> > 
> > interprete() path:
> > 
> > url contains favicon.{ico,png} -> send_icon()
> > url contains mhttpd.css -> send_css() (see ODB /Experiment/CSS File) // obsolete? see midas.css below
> > url ends with "mp3" -> send_resource(url) // alarm sound
> > url contains midas.js -> send_resource("midas.js")
> > url contains midas.css -> send_resource("midas.css")
> > url ... ditto mhttpd.js
> > url ... ditto obsolete.js
> > url ... ditto controls.js
> > cmd is "example" -> send_resource("example.html")
> > ?script -> cm_exec_script(), see ODB /Script/...
> > ?customscript -> same, see ODB /CustomScript/...
> > cmd is "start" -> send resource start.html
> > cmd is blank -> send resource status.html
> > cmd is "status" -> send resource status.html
> > cmd is "newODB" -> send resource "odb.html" // not used at the moment
> > cmd is "programs" -> programs.html
> > cmd is "alarms" -> alarms.html
> > cmd is "transition" -> transition.html
> > cmd is "messages" -> messages.html
> > cmd is "config" and url is not "HS/" -> config.html
> > cmd is "chat" -> chat.html
> > cmd is "buffers" -> buffers.html
> > // elog section
> > cmd is "Show elog" -> elog
> > cmd is "Query elog" -> elog
> > cmd is "New elog" -> elog
> > cmd is "Edit elog" -> elog
> > cmd is "Reply elog" -> elog
> > cmd is "Last elog" -> elog
> > cmd is "Submit Query" -> elog
> > // end of elog section
> > url is "spinning-wheel.gif" -> send_resource("spinning-wheel.gif")
> 
> // "new custom pages" moved to the bottom
> 
> > // section for old AJAX requests
> > cmd is "jset", "jget", etc -> javascript_commands()
> > // commented out: send_resource(command+".html") // if cmd is "start" will send start.html
> > cmd is "mscb" -> show_mscb_page()
> > cmd is "help" -> show_help_page()
> > cmd is "trigger" -> send RPC RPC_MANUAL_TRIG
> > cmd is "Next subrun" -> set ODB "/Logger/Next subrun" to TRUE
> > cmd is "cancel" -> redirect to getparam("redir")
> > cmd is "set" -> show_set_page() // set ODB value
> > cmd is "find" -> show_find_page()
> > cmd is "CNAF" or url is "CNAF" -> show_cnaf_page()
> > cmd is "elog" -> redirect to external ELOG or send_resource("elog_show.html")
> > cmd starts with "Elog last" -> send_resource("elog_query.html") // Elog last N days & co
> > cmd is "Create Elog from this page" -> redirect to "?cmd=new elog" // called from ODB editor
> > cmd is "Submit elog" -> submit_elog() // usually a POST request from the "elog_edit.html"
> > cmd is "elog_att" -> show_elog_attachment()
> > cmd is "accept" -> what does this do?!?
> > cmd is "eqtable" -> show_eqtable_path() // page showing equipment variables as a table ("slow control page")
> > // section for the sequencer
> > cmd is "sequencer" -> show_seq_page()
> > cmd is "start script" -> seq
> > cmd is "cancel script" -> seq
> > cmd is "load script" -> ...
> > cmd is "new script" -> ...
> > cmd is "save script" -> ...
> > cmd is "edit script" -> ...
> > cmd is "spause" -> ...
> > cmd is "sresume" -> ...
> > cmd is "stop immeditely" -> ...
> > cmd is "stop after current run" -> ...
> > cmd is "cancel stop after current run" -> ...
> > // end of sequencer
> > cmd is "odb" -> show_odb_page()
> 

if URL is "root", redirect to odb editor at the odb top level

> if ODB path URL exists, redirect to the odb editor with odb_path=URL // this restores the old URL scheme for the ODB editor
> 
> > cmd is "custom" -> show_custom_page()
> 
> odb entry exists "/Custom/Images/URL/Background" -> show_custom_gif(URL)
> odb entry exists "/Custom/URL" or "/Custom/URL&" or "/Custom/URL!" -> show_custom_page(URL)
> -- inside show_custom_page(URL):
> -- if URL contains ".gif" -> show_custom_gif(URL)
> -- if URL contains "." (i.e. "bnmr.css") -> show_custom_file(URL) -> send_file()
> -- otherwise process custom page (substitute <odb> tags, etc)
> 
> // section "new custom pages"
> if ODB /Custom exists,
> create blank ODB /Custom/Path if it does not exist yet
> if URL contains "/" or DIR_SEPARATOR, reject it with an error (prevent escape from file jail)
> if ODB /Custom/Path is not blank, concatenate the value of ODB /Custom/Path and the URL
> if this file exists, send_file() it.
> // end of "new custom pages" section
> 
> try send_resource(URL) // this serves "status.html" & co
> 
> > show_error()
> > 
> > K.O.
> 
> K.O.
  1481   06 Mar 2019   Konstantin Olchanski   Info   Gyrations of custom pages and ODB /Custom/Path
> > The biggest problem so far we have seen is with some pages having incorrect form submission
> > settings - some forms use the wrong form "action" attribute, which worked before, we do not know
> > why, and definitely does not work now. This is not something that we can fix on the midas side.
> 
> Make sure you check any page which has a GIF image with bars and labels. I believe the new URL system has an issue there (maybe still an explicit /CS/... somewhere).
> 

Yes, we tried it with Suzannah and to my amazement it worked from the 1st try in her test experiment.

However, the example in midas/examples/custom does not quite work; the gif file seems to be broken and displays as gibberish for me.

Also, in show_custom_page() I see code for "toggle" and "edit", but I have no example to test them.

K.O.
  1480   06 Mar 2019   Konstantin Olchanski   Forum   Best MIDAS branch/version for "production"
> > > Hmm... for most experiments, we do not "install" midas. I should probably remove the "install" target from the Makefile.
>
> install MIDAS in the /opt/midas folder to remain consistent with the other programs that we are using for 
> our experiment (Pyrame and Calicoes from LLR).

I see. Would this work as well? Instead of "make install" do this:
su - root
cd /opt/midas
git pull
make
add /opt/midas/linux/bin to your PATH. (is it time to get rid of the "linux" part from the default build path?!?)

> I am also using Linux systemd to enable mhttpd on startup

We use cron @reboot to start mhttpd.

But: the CentOS7 systemd cron unit file has a bug and runs @reboot cron jobs before NIS and autofs are ready, see
http://www.triumf.info/wiki/DAQwiki/index.php/SLinstall#Enable_crontab_.40reboot_for_MIDAS_.28CentOS7.29

Can you post your systemd unit file to this elog, others may find it useful.

K.O.