  1461   04 Mar 2019   Konstantin Olchanski   Info   Gyrations of custom pages and ODB /Custom/Path
Hi, guys, as I was exploring the code and the commit history on Thursday (git rules!) and
as I worked on getting the old custom files to work with Suzannah on Friday, I think
I know how I want this code to work. I think there is no need to break with the old
way of doing things and force every experiment to move things around if they want
to use the latest midas.

I will write more about this, but in a nutshell, I am happy with the current code:
- the old custom pages work (filename is taken from ODB, with /Custom/Path prepended if it exists; this is what we had for a very long time)
- the serving of new custom pages also works - via Stefan's code in interprete() where /Custom/File is added to the URL
  and if the file exists, we serve it. The only problem in that code was the missing check for absent or empty ODB /Custom/Path.
- the serving of new custom pages through the normal resource path (show_resource()) also works now,
   this serves files from $MIDASSYS/resource, from ODB /Experiment/Resource and a few other locations. (this code was there
   for a long time, but was disabled because, for a number of reasons, things like http://localhost:8080/Status.html did not work right;
   after fixing a few buglets, they do now).

I killed the serving of /etc/passwd by forbidding "/" (the directory separator) in resource file names. I think this is a safer
way to enforce a file jail than checking for "..".
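
To illustrate, here is a minimal standalone sketch of this kind of check (not the actual mhttpd code):
a resource name is accepted only if it contains no directory separator at all, so it can never escape
the configured directory.

#include <string>
#include <cstdio>

// Sketch of a "flat file jail" check: accept a resource name only if it
// contains no directory separator, so it can never point outside the
// configured directory. Illustration only, not the mhttpd implementation.
static bool is_safe_resource_name(const std::string &name)
{
   if (name.empty())
      return false;
   if (name.find('/') != std::string::npos)  // forbid the UNIX separator
      return false;
   if (name.find('\\') != std::string::npos) // forbid the Windows separator, too
      return false;
   return true;
}

int main()
{
   printf("%d\n", is_safe_resource_name("bnmr.css"));      // 1: allowed
   printf("%d\n", is_safe_resource_name("../etc/passwd")); // 0: rejected
   printf("%d\n", is_safe_resource_name("/etc/passwd"));   // 0: rejected
   return 0;
}

With a check like this there is nothing to normalize and nothing to escape, which is the whole point
compared to trying to recognize ".." in all its encodings.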

I think the current code fixes all the reported problems (in conjunction with the change of the URL scheme) -
- /Custom/Path set to "" now works and provides the old way of doing custom pages
- /Custom/Path set to a directory name works, so all of Thomas's experiments should be good. The old custom-file way *also* works, as long as the filenames in ODB are adjusted.
- URL /root and /System no longer try to serve system directories, plus /Custom/Path set to "/" is explicitly forbidden.
- mhttpd cannot serve /etc/passwd by default as "/" is forbidden in file names added to /Custom/Path. (I discount the case where mhttpd is running in /etc or /Custom/Path is set to /etc or symlinked to /etc)

But not all is good. The change of the custom page URL scheme from /CS/... to top level or ?cmd=custom&page=...
cannot be swept under the rug - the user will have to make changes to their custom pages to adjust for it;
I see no way to avoid that. The current code catches and redirects/serves/helps with some of that,
e.g. pages loading custom files like "bnmr.css" still work even though bnmr.css is no longer under /CS/bnmr.css.

Again, apologies that things are moving faster than I can write them up. I am trying.


K.O.


> I see two separate issues here. 
> 
> One is restricting the custom pages to ONE directory such as 
> <exptab>/resources -> /home/users/exp/resources
> and its subdirectories which seems like a good solution for all the
> reasons you've mentioned. 
> 
> The other issue is the use of the "Path" key in /Custom, which is used to differentiate
> between the "new" way (all resources served from the Path directory) 
> and the original way where all the custom keys are specified with their full directories.
> 
> Recent versions of Midas had broken the original behaviour by insisting on the presence of the
> "Path" key.  Konstantin fixed this by allowing the "Path" key to take the value "".  It is true
> that some experiments currently may be serving resources from more than one directory tree, but changing
> to storage of all custom pages in one directory (and its subdirectories) does not necessarily mean that
> the original way of serving resources must be made obsolete.
>  
> I actually like the original way of specifying the custom keys for the pages and resources under /Custom, which
> is presently selected  without the /Custom/Path key present at all (older versions) or with the
> /Custom/Path key set to "" (latest versions). I like it for debugging, and I like to be able to see 
> at a glance what resource files are in use from /Custom. 
> 
> I have a suggestion:
> 
> The resources could still be served from the /Custom directory if desired, except now mhttpd will ALWAYS add the 
> fixed path in front of the given paths in /Custom. This would mean a fixed path and a minimal disruption to older pages 
> (the <script> and <link> statements in the HTML code to include the resources would not need to be changed).
> The "/Path" key is no longer be useful, since the resource path is now fixed. Instead a key e.g. "FlagRS"  could 
> be used to select the desired behaviour,  with the default being the "new" (no key present).
> 
> For example, the full directory paths in /custom
> ScanParams&                     /home/users/online/custom/scan/scan_select_popup.html
> mpet.css!                       /home/users/online/custom/rs/mpet.css
> scanvoltages!                   /home/users/online/custom/scan/scan_voltages.js
> 
> would become subdirectory path(s)
> ScanParams&                     custom/scan/scan_select_popup.html
> mpet.css!                       custom/rs/mpet.css
> scanvoltages!                   custom/scan/scan_voltages.js
> FlagRS                          y
> 
> The pages would be served from /home/users/exp/resources/custom/...
> 
> Suzannah
> 
>  
> > > Hi Stefan and Konstantin,
> > > 
> > > I think that this proposal sounds fairly reasonable.  I agree that we might as well move to a secure final solution at this point.  
> > > 
> > > One comment: since this change would break almost every experiment I have worked on for the last 4 years, it would be nice to add a command-line option to mhttpd that preserves the old /Custom/Path behavior.  This would allow experiments a transition 
> > > period, so that they didn't immediately need to fix their setup.  The command-line option could be clearly marked as obsolete behaviour and could be removed within a year.
> > > 
> > > Cheers,
> > > Thomas
> > > 
> > > 
> > > 
> > > > Parsing all URL in mhttpd to prevent /etc/passwd etc. to be returned is tricky, because people can use escape sequences etc. Therefore I think it is much better to restrict file access 
> > > > on the file system level when opening a file. The only escape there one could have is "..", which can be tested easily. 
> > > > 
> > > > Therefore, I propose to restrict file access to two well-defined directories, which is one system directory and one user directory. The system directory should be defined via 
> > > > $MIDASSYS/resources, and the user directory should be the experiment directory (as defined in exptab) followed by "resources". So if MIDASSYS equals to /usr/local/midas and the 
> > > > experiment directory equals to /home/users/exp for example, we would only have these two directories (and of course the subdirectories within these) served by mhttpd:
> > > > 
> > > > $MIDASSYS/resource -> /usr/local/midas/resources
> > > > <exptab>/resources -> /home/users/exp/resources
> > > > 
> > > > These directories should be hard-wired into mhttpd, and not go through an ODB entry, since otherwise one could manipulate the ODB entries (knowingly or unknowingly) and open a 
> > > > back-door. 
> > > > 
> > > > If users need a more complex structure, they can put soft links into these directories.
> > > > 
> > > > The code which opens a resource file should then first evaluate $MIDASSYS, then add "/resources/", then add the requested file name, make sure that there is no ".." in the file name, 
> > > > then open the file. If not existing, do the same for the <exptab>/resources/ directory.
> > > > 
> > > > This change will break most experiments, and forces people to move their custom pages to different directories, but I think it's the only clean solution and we just have to bite the 
> > > > bullet.
> > > > 
> > > > Comments are welcome.
> > > > 
> > > > Stefan
  1462   04 Mar 2019   Konstantin Olchanski   Forum   Best MIDAS branch/version for "production"
Hi, Giorgio - you are asking excellent questions. I will try to answer them, but as ever, there are no 
easy answers.

In general, the top of the midas "develop" branch is "the best midas there is".

So for a new experiment, it is a reasonable place to start. Of course you can see
that there is quite a bit of commit activity going on, however, most substantial
changes are done on separate branches, where we try the new code, debug
it and test it. Only when the new code is "ready", we commit it to the "develop"
branch. Then, most often, we find some more last minute post-merge bugs,
and fix them right then and there on the head of the "develop" branch. Eventually
the dust settles and we have stable code that stays stable for a long time.

For example, right now we are waiting for the dust to settle on the change
of the MIDAS URL scheme, which was necessitated by needs of several experiments
that have more-complex-than-usual https proxy configurations. Unusual today,
but more common as we move forward, I think.

So if the head of the "develop" branch does not work for you, we encourage
that you file a bug report (here on this forum or on the bitbucket issue tracker).

While we try to sort out your problem, you can fall back to a previous version
of midas:

a) go back to one of the older "midas-YYYY-MM-X" tags
b) go back to one of the release candidate branches "feature/midas-YYYY-MM".

But if you have an already running system and you already have a working
instance of MIDAS, you do not have to update it to the latest version
unless you need some newest feature or you suffer from a bug
that has been fixed in a newer version.

In general, I find that it is fairly safe to update a working instance of MIDAS to
the latest code. But do keep your old working copy, if there is trouble, you can
always go back to something that works.

Now to your questions:

> For a running experiment that needs software stability what branch of the MIDAS 
> repository is better suited?

easy to answer. the most stable is the version you are using right now (doh!). each time
you upgrade, there is a risk that something will go wrong.

if you start from scratch, use the head of the "develop" branch (the latest and greatest),
if you run into trouble, report the trouble and update to a newer version with your
trouble fixed, or go back in time to previous tags and release candidate branches (as I described 
above).

> The master branch or the develop branch?

the "develop" branch.

> Moreover, what point in time do you think is more stable?

I find that it is impossible to have a stable-stable-stable version of midas
because the rest of the world flows forward in time. Old versions of midas
stop compiling because of OS and compiler changes; they stop running
because of OS or web browser or hardware changes. Then somebody
always asks for new features and new or old bugs surface constantly.

But we try to mark some "good" spots using git tags and release branches;
however, the further you go back in time, the more likely the code will
not even compile anymore.


K.O.



> Hello!
> My name is Giorgio Pintaudi and I am a Ph.D. student at Yokohama National 
> University (Japan). I also happen to be a T2K collaborator.
> I am currently developing the DAQ software for a T2K near detector called WAGASCI 
> (different from ND280) and we recently decided to adopt MIDAS as a user 
> interface.
> Now I am using the "develop" branch of the MIDAS BitBucket repository: I merge 
> the remote repository every now and then with my local copy and that is fine ... 
> but on the 25th of April our experiment is officially starting and I was 
> wondering which version of MIDAS should I use for "production".
> So my question is:
> For a running experiment that needs software stability what branch of the MIDAS 
> repository is better suited? The master branch or the develop branch? Moreover, 
> what point in time do you think is more stable?
> Best regards
> Giorgio
  1466   05 Mar 2019   Konstantin Olchanski   Forum   Best MIDAS branch/version for "production"
> 
> PS other than the code peculiar to our experiment, I have made a little modification to the MIDAS install 
> Makefile (I noticed that there is an "install" target but not an "uninstall" target so I wrote it).
> If you are interested, I could make a merge request on BitBucket, just let me know.
> 

Hmm... for most experiments, we do not "install" midas. I should probably remove the "install" target from the Makefile.

K.O.



> Bests
> Giorgio
> 
> > Hi, Giorgio - you are asking excellent questions. I will try to answer them, but as ever, there are no 
> > easy answers.
> > 
> > In general, the top of the midas "develop" branch is "the best midas there is".
> > 
> > So for a new experiment, it is a reasonable place to start. Of course you can see
> > that there is quite a bit of commit activity going on, however, most substantial
> > changes are done on separate branches, where we try the new code, debug
> > it and test it. Only when the new code is "ready", we commit it to the "develop"
> > branch. Then, most often, we find some more last minute post-merge bugs,
> > and fix them right then and there on the head of the "develop" branch. Eventually
> > the dust settles and we have stable code that stays stable for a long time.
> > 
> > For example, right now we are waiting for the dust to settle on the change
> > of the MIDAS URL scheme, which was necessitated by needs of several experiments
> > that have more-complex-than-usual https proxy configurations. Unusual today,
> > but more common as we move forward, I think.
> > 
> > So if the head of the "develop" branch does not work for you, we encourage
> > that you file a bug report (here on this forum or on the bitbucket issue tracker).
> > 
> > While we try to sort out your problem, you can fall back to a previous version
> > of midas:
> > 
> > a) go back to one of the older "midas-YYYY-MM-X" tags
> > b) go back to one of the release candidate branches "feature/midas-YYYY-MM".
> > 
> > But if you have an already running system and you already have a working
> > instance of MIDAS, you do not have to update it to the latest version
> > unless you need some newest feature or you suffer from a bug
> > that has been fixed in a newer version.
> > 
> > In general, I find that it is fairly safe to update a working instance of MIDAS to
> > the latest code. But do keep your old working copy, if there is trouble, you can
> > always go back to something that works.
> > 
> > Now to your questions:
> > 
> > > For a running experiment that needs software stability what branch of the MIDAS 
> > > repository is better suited?
> > 
> > easy to answer. the most stable is the version you are using right now (doh!). each time
> > you upgrade, there is a risk that something will go wrong.
> > 
> > if you start from scratch, use the head of the "develop" branch (the latest and greatest),
> > if you run into trouble, report the trouble and update to a newer version with your
> > trouble fixed, or go back in time to previous tags and release candidate branches (as I described 
> > above).
> > 
> > > The master branch or the develop branch?
> > 
> > the "develop" branch.
> > 
> > > Moreover, what point in time do you think is more stable?
> > 
> > I find that it is impossible to have a stable-stable-stable version of midas
> > because the rest of the world flows forward in time. Old versions of midas
> > stop compiling because of OS and compiler changes; they stop running
> > because of OS or web browser or hardware changes. Then somebody
> > always asks for new features and new or old bugs surface constantly.
> > 
> > But we try to mark some "good" spots using git tags and release branches;
> > however, the further you go back in time, the more likely the code will
> > not even compile anymore.
> > 
> > 
> > K.O.
> > 
> > 
> > 
> > > Hello!
> > > My name is Giorgio Pintaudi and I am a Ph.D. student at Yokohama National 
> > > University (Japan). I also happen to be a T2K collaborator.
> > > I am currently developing the DAQ software for a T2K near detector called WAGASCI 
> > > (different from ND280) and we recently decided to adopt MIDAS as a user 
> > > interface.
> > > Now I am using the "develop" branch of the MIDAS BitBucket repository: I merge 
> > > the remote repository every now and then with my local copy and that is fine ... 
> > > but on the 25th of April our experiment is officially starting and I was 
> > > wondering which version of MIDAS should I use for "production".
> > > So my question is:
> > > For a running experiment that needs software stability what branch of the MIDAS 
> > > repository is better suited? The master branch or the develop branch? Moreover, 
> > > what point in time do you think is more stable?
> > > Best regards
> > > Giorgio
  1467   05 Mar 2019   Konstantin Olchanski   Info   Gyrations of custom pages and ODB /Custom/Path
> > - mhttpd cannot serve /etc/passwd by default as "/" is forbidden in file names added to /Custom/Path.
> You do this with a simple
> if (custom_path == "/")
> which does work but does not cover cases such as
> "/./"

Hmm... and this is just fine. Since I do not allow "/" in the file name, they can
set the resource path to any alias for the root filesystem, but they cannot
get to "/etc/passwd" unless they run mhttpd in /etc or set /Custom/Path to "/etc".

All these cases are not normal use of mhttpd, not "oops, I made a mistake"
and not "I will kludge my paths just for today just for this one experiment". They
have to make an explicit decision to break the security.

These days, I am thinking that we should not try to prevent all insecure uses of midas,
but at least we should make the default configuration secure and disallow some of the more
obviously insecure configurations (e.g. do not permit password protection without https).

Take the root password as an example. Empty root passwd is not permitted, but
root password set to "root" is allowed (some password tools may throw a warning).

>
> Still, in my opinion we should not have a path in the ODB. The custom path should be hard-wired and combined with symbolic links if necessary. The custom HTML pages under /Custom in the ODB have to be scanned for ".."s.
> 

Stefan, we already allow execution of arbitrary commands via ODB "/Programs/xxx/Start Command".

So for all practical purposes, somebody with access to the mhttpd web pages also has shell access
to the user account running mhttpd.

K.O.
  1469   05 Mar 2019   Konstantin Olchanski   Info   Gyrations of custom pages and ODB /Custom/Path
> First, I did not propose to give up the /Custom tree in the ODB, sorry for the misunderstanding.
> We still need it in order to display the menu with the custom pages at the left side navigation bar.
> In principle all can stay like it is, except we remove /Custom/Path and rewrite the file server to restrict it only 
> to the two mentioned directories.
> 

We have several large installations at TRIUMF that use the old-style custom pages - MUSR, BNMR/BNQR, TITAN (and more?) -
none of these experiments are going away any time soon and none of these custom pages are rewriting themselves.

So, I think we are stuck with supporting old-style custom pages in the short and in the long term, unless we decide to abandon these existing users of MIDAS.

And the old-style custom pages come with the old-style scheme of keeping file names in ODB. Removing
it will require experiments to reorganize their filesystems (different custom pages may come from different
git repositories on different disks - not trivially placeable all in the same directory).

Plus some people (including myself) like to have some things explicitly specified - I want to know that
this page loads this file without having to guess where in a search path it came from today.

>
> Thomas proposed a "deprecated" mhttpd command line option.
>

"deprecated" does not work.

"we removed it because it stopped working and we cannot fix it" works,
"do not use it for new experiments" works (until a user comes back with "but I really like this feature!")
"we will remove it at an unknown future date because we think nobody should use it" does not work.

>
> As an alternative, I propose to make a symbolic link from <exptab>/resources to where the old /Custom/Path was pointing 
> to. This should work as well, and we don't have to implement a parameter in mhttpd.
> 

All experiments at TRIUMF that use old-style custom pages do not use /Custom/Path. In the new scheme
of things, setting /Custom/Path to anything other than blank will break them.

>
> Third, the /Custom/Path should really go away.
>

For once, I agree. /Custom/Path did not exist and should not have been added.

> We are all concerned that people can read security critical files from the whole file system.

Only in the default configuration and in configurations that would realistically be used by an experiment. This
excludes the case of "/Custom/Path" set to "/etc" - not the default and unlikely to be used by any experiment.

> To read those files, people have to access to mhttpd, so they have to know at least the authentication credentials to pass the Apache firewall in front of mhttpd or 
> whatever. This means they have access to the ODB, and then they can simply change /Custom/Path to "/" and voila - they have again access to /etc/passwd.

Nope. They have to set /Custom/Path to "/etc" or start mhttpd under "/etc" or point /Custom/Path to a symlink to /etc. (mhttpd will not serve "etc/passwd", "/" in the filename is not permitted).

> This is why I propose symbolic links on the file system.

Some people dislike symlinks.
Some filesystems do not implement symlinks. (should we prevent mhttpd from running on the ms-dos fat filesystem "just because"?)

> This area is much better protected than the ODB, since people have to physically log into 
> a machine to change it.

Nope. You can create symlinks from mhttpd by running the "ln -s" command from ODB "/Programs/xxx/Start Command".

K.O.
  1472   05 Mar 2019   Konstantin Olchanski   Info   Gyrations of custom pages and ODB /Custom/Path
> Just set 
> /Custom/Path = /./ 
> which is allowed right now and then access etc/passwd, which translates to /./etc/passwd and then you get the password file.

Nope. http://localhost:8080/etc/passwd yields this error regardless of the value of /Custom/Path: MIDAS error: Invalid custom file name 'etc/passwd' contains '/' or '/'

> We should make up our mind:
> 1) We trust each user who has access to mhttpd. Then accessing /etc/passwd is not a problem and I don't understand all the fuss we had recently. Why all the recent work?

Stefan, "/etc/passwd" is a stand-in for "$HOME/.ssh/id_rsa". Surely stealing the ssh keys is a very bad thing.

> 2) We do not trust users connected via mhttpd, but we trust users who can log in to the online machine.
> If we do not trust users having access to mhttpd, then it does not make sense in my mind to fix one hole and keep a few others open.
> You correctly mentioned the /Programs/xxx/Start command, and there are a few others, like executing scripts directly. Either we fix all (known) holes or we don't bother.

I see a distinction between accessing a magic URL (a read-only operation) and a targeted attack (editing odb, etc).

In the first case, it's a script-kiddie attack ("let's try http:blah/etc/passwd, http:blah/../etc/passwd, etc. - bingo!").

In the second case, if it's a targeted attack, forget about it. If an attacker wants in, they will get in. Have secure backups, etc.

Plus eventually we will restore the user access controls - read-only access, operator access (can start runs, edit history plots) and Q access. Only
this very last one would allow editing of ODB.

> 3) We do not trust users who can log in to the online machine, since they can just cat /etc/passwd. But then why give them access to the online machine?

ssh foo cat /etc/passwd is normal
http://foo/etc/passwd is not normal

> So which of the three options would you prefer?

I try to think simple:
if something worked yesterday and works today, let it be
if we add something new, it should not be obviously insecure

> 
> Thanks to the nice public discussion here on the forum (and I still think this is the correct way to discuss these things), all forum subscribers are now aware of several security holes.
> So either they are evil, then we have to fix all (known) holes. Or we trust  them, then we don't care.
> 

Here is the elephant in the room. mhttpd has never gone through a security audit. It is easy to find
security holes: buffer overflows can be found simply by running "grep sprintf".

To me, this means that access to mhttpd always must be password-protected.

Password protection requires https.

The built-in mongoose https server is as safe as the mongoose library (probably yes) and the openssl library (yes-ish, see openbsd libressl)

The external apache https proxy takes a few minutes to set up; as the "industry standard", it is trusted to be safe.

>
> > Stefan, we already allow execution of arbitrary commands via ODB "/Programs/xxx/Start Command".
> This is on the same level as accessing /etc/passwd. So either we allow all of them or none of them. Something in between absolutely does not make sense to me.
>

Ok, we have identified our difference of opinion:

I think serving arbitrary files over http is a bad idea (on general principles),
and you think it is okay because there are other ways to get to those files.

Fair enough.

K.O.

P.S. Hmm... not fair enough.

I am now thinking about web-browser security -
suppose some kind of cross-site or cross-tab exploit comes out and suddenly
arbitrary web pages loaded from facebook.com (or worse) gain access
to the mhttpd web pages if both are open at the same time.

We may still be protected by obscurity and they presumably do not know how to change
things in odb, but I think it is a good idea to protect ourselves at least
against drive-by attacks (try http:blah/etc/passwd, try http:blah/../etc/passwd, etc.,
replace /etc/passwd by some other secret file, try again - think ssh keys, etc.).

K.O.
  1474   05 Mar 2019   Konstantin Olchanski   Info   Gyrations of custom pages and ODB /Custom/Path
> > We have several large installations at TRIUMF that use the old-style custom pages - MUSR, BNMR/BNQR, TITAN (and more?) -
> > none of these experiments are going away any time soon and none of these custom pages are rewriting themselves.
> 
> Then you have a problem. Last time I told you that the new URL scheme breaks parts of the custom pages, especially the ones containing GIF images with labels on them. You then said "these experiments have to bite the bullet and 
> change it", and I proceeded. Now you tell me that this will not happen. So please be aware that these experiments do have a problem and probably are stuck with an older midas version.
> 

Yes, there is a problem and some adjustment is needed.

I do not think the new URL scheme requires us to abandon existing experiments.

At this point I am trying to minimize the number of adjustments required, for example:

If we do not need to move files around (and create symlinks), it is good - so the old scheme of saving file names in ODB lives on
If we do not need to edit every file to adjust every URL, it is good - so now that http://blah/CS/custom_page is gone (no more "CS/"), http://blah/custom_page had to be implemented
If we do not need to replace all the ODB /Alias entries, http://blah/ODB_PATH now redirects to http://blah/?cmd=odb&odb_path=ODB_PATH and all the old aliases to ODB still work

I am going through these things as I discover them with Suzannah.

The biggest problem so far we have seen is with some pages having incorrect form submission
settings - some forms use the wrong form "action" attribute, which worked before (we do not know
why) and definitely does not work now. This is not something that we can fix on the midas side.

K.O.
  1475   05 Mar 2019   Konstantin Olchanski   Info   Gyrations of custom pages and ODB /Custom/Path
> 
> That sounds fine, as long as it is clearly documented.
> 

I am a true believer in the two-man rule. One person writes the code, another person
documents it - this keeps everything honest and ensures that at least one
person (the documenter) understands what is going on. (as the coder, I can easily
write documentation that nobody understands, or that is completely wrong).

> > Third, the /Custom/Path should really go away.
> Yes, I agree that /Custom/Path should go away.

/Custom/Path as a resource search path or
/Custom/Path that is prepended to file names of old-style custom pages?

The second use is safe and does not need to be removed.

The first use is now redundant - files are now also served via send_resource()
through the normal resource path, which includes ODB /Experiment/Resource
(something that has been there for a long time).

K.O.
  1477   05 Mar 2019   Konstantin Olchanski   Info   mhttpd magic urls
> Here is the list of mhttpd magic URLs.

See additional magic URLs at the very bottom:

> 
> http "get" path:
> 
> handle_http_message()
> handle_http_get()
> ?mjsonrpc_schema -> serve mjsonrpc_get_schema() // JSON RPC Schema in JSON format
> ?mjsonrpc_schema_text -> serve mjsonrpc_schema_to_text() // same, but human-readable
> handle_decode_get()
> decode_get()
> interprete()
> 
> http "post" path:
> 
> handle_http_message()
> handle_http_post()
> ?mjsonrpc -> serve mjsonrpc_decode_post_data() // process RPC request
> handle_decode_post()
> decode_post()
> - maybe decode file attachment
> interprete()
> 
> interprete() path:
> 
> url contains favicon.{ico,png} -> send_icon()
> url contains mhttpd.css -> send_css() (see ODB /Experiment/CSS File) // obsolete? see midas.css below
> url ends with "mp3" -> send_resource(url) // alarm sound
> url contains midas.js -> send_resource("midas.js")
> url contains midas.css -> send_resource("midas.css")
> url ... ditto mhttpd.js
> url ... ditto obsolete.js
> url ... ditto controls.js
> cmd is "example" -> send_resource("example.html")
> ?script -> cm_exec_script(), see ODB /Script/...
> ?customscript -> same, see ODB /CustomScript/...
> cmd is "start" -> send resource start.html
> cmd is blank -> send resource status.html
> cmd is "status" -> send resource status.html
> cmd is "newODB" -> send resource "odb.html" // not used at the moment
> cmd is "programs" -> programs.html
> cmd is "alarms" -> alarms.html
> cmd is "transition" -> transition.html
> cmd is "messages" -> messages.html
> cmd is "config" and url is not "HS/" -> config.html
> cmd is "chat" -> chat.html
> cmd is "buffers" -> buffers.html
> // elog section
> cmd is "Show elog" -> elog
> cmd is "Query elog" -> elog
> cmd is "New elog" -> elog
> cmd is "Edit elog" -> elog
> cmd is "Reply elog" -> elog
> cmd is "Last elog" -> elog
> cmd is "Submit Query" -> elog
> // end of elog section
> url is "spinning-wheel.gif" -> send_resource("spinning-wheel.gif")

// "new custom pages" moved to the bottom

> // section for old AJAX requests
> cmd is "jset", "jget", etc -> javascript_commands()
> // commented out: send_resource(command+".html") // if cmd is "start" will send start.html
> cmd is "mscb" -> show_mscb_page()
> cmd is "help" -> show_help_page()
> cmd is "trigger" -> send RPC RPC_MANUAL_TRIG
> cmd is "Next subrun" -> set ODB "/Logger/Next subrun" to TRUE
> cmd is "cancel" -> redirect to getparam("redir")
> cmd is "set" -> show_set_page() // set ODB value
> cmd is "find" -> show_find_page()
> cmd is "CNAF" or url is "CNAF" -> show_cnaf_page()
> cmd is "elog" -> redirect to external ELOG or send_resource("elog_show.html")
> cmd starts with "Elog last" -> send_resource("elog_query.html") // Elog last N days & co
> cmd is "Create Elog from this page" -> redirect to "?cmd=new elog" // called from ODB editor
> cmd is "Submit elog" -> submit_elog() // usually a POST request from the "elog_edit.html"
> cmd is "elog_att" -> show_elog_attachment()
> cmd is "accept" -> what does this do?!?
> cmd is "eqtable" -> show_eqtable_path() // page showing equipment variables as a table ("slow control page")
> // section for the sequencer
> cmd is "sequencer" -> show_seq_page()
> cmd is "start script" -> seq
> cmd is "cancel script" -> seq
> cmd is "load script" -> ...
> cmd is "new script" -> ...
> cmd is "save script" -> ...
> cmd is "edit script" -> ...
> cmd is "spause" -> ...
> cmd is "sresume" -> ...
> cmd is "stop immeditely" -> ...
> cmd is "stop after current run" -> ...
> cmd is "cancel stop after current run" -> ...
> // end of sequencer
> cmd is "odb" -> show_odb_page()

if ODB path URL exists, redirect to the odb editor with odb_path=URL // this restores the old URL scheme for the ODB editor

> cmd is "custom" -> show_custom_page()

odb entry exists "/Custom/Images/URL/Background" -> show_custom_gif(URL)
odb entry exists "/Custom/URL" or "/Custom/URL&" or "/Custom/URL!" -> show_custom_page(URL)
-- inside show_custom_page(URL):
-- if URL contains ".gif" -> show_custom_gif(URL)
-- if URL contains "." (i.e. "bnmr.css") -> show_custom_file(URL) -> send_file()
-- otherwise process custom page (substitute <odb> tags, etc)

// section "new custom pages"
if ODB /Custom exists,
create blank ODB /Custom/Path if it does not exist yet
if URL contains "/" or DIR_SEPARATOR, reject it with an error (prevent escape from file jail)
if ODB /Custom/Path is not blank, concatenate the value of ODB /Custom/Path and the URL
if this file exists, send_file() it.
// end of "new custom pages" section

try send_resource(URL) // this serves "status.html" & co
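
For clarity, here is a rough standalone sketch of the "new custom pages" branch described above
(simplified, with the ODB value faked by a plain variable - not the actual interprete() code):

#include <string>
#include <cstdio>
#include <sys/stat.h>

// Stand-in for ODB "/Custom/Path"; in mhttpd this value comes from the ODB.
static std::string custom_path = "/home/users/exp/resources";

// Returns the file to serve, or "" if this URL is not handled by the
// "new custom pages" branch (the caller then falls through to send_resource()).
static std::string resolve_custom_file(const std::string &url)
{
   if (custom_path.empty())
      return "";                  // blank path: only old-style custom pages apply
   if (url.find('/') != std::string::npos)
      return "";                  // reject (mhttpd reports an error here): keeps the file jail flat
   std::string filename = custom_path + "/" + url;
   struct stat st;
   if (stat(filename.c_str(), &st) != 0)
      return "";                  // no such file: fall through to send_resource()
   return filename;               // mhttpd would send_file() this
}

int main()
{
   printf("'%s'\n", resolve_custom_file("bnmr.css").c_str());      // served if the file exists
   printf("'%s'\n", resolve_custom_file("../etc/passwd").c_str()); // always rejected
   return 0;
}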

> show_error()
> 
> K.O.

K.O.
  1480   06 Mar 2019   Konstantin Olchanski   Forum   Best MIDAS branch/version for "production"
> > > Hmm... for most experiments, we do not "install" midas. I should probably remove the "install" target from the Makefile.
>
> install MIDAS in the /opt/midas folder to remain consistent with the other programs that we are using for 
> our experiment (Pyrame and Calicoes from LLR).

I see. Would this work as well? Instead of "make install" do this:
su - root
cd /opt
git clone https://bitbucket.org/tmidas/midas
cd midas
make
then add /opt/midas/linux/bin to your PATH. (is it time to get rid of the "linux" part from the default build path?!?)

> I am also using Linux systemd to enable mhttpd on startup

We use cron @reboot to start mhttpd.

But the CentOS7 systemd cron unit file has a bug: @reboot cron jobs run before NIS and autofs are ready, see
http://www.triumf.info/wiki/DAQwiki/index.php/SLinstall#Enable_crontab_.40reboot_for_MIDAS_.28CentOS7.29

Can you post your systemd unit file to this elog? Others may find it useful.

K.O.
  1481   06 Mar 2019   Konstantin Olchanski   Info   Gyrations of custom pages and ODB /Custom/Path
> > The biggest problem so far we have seen is with some pages having incorrect form submission
> > settings - some forms use the wrong form "action" attribute, which worked before, we do not know
> > why, and definitely does not work now. This is not something that we can fix on the midas side.
> 
> Make sure you check any page which has a GIF image with bars and labels. I believe the new URL system has an issue there (maybe still an explicit /CS/... somewhere).
> 

Yes, we tried it with Suzannah and to my amazement it worked on the first try in her test experiment.

However, the example in midas/examples/custom does not quite work; the gif file seems to be broken and displays as gibberish for me.

Also, in show_custom_page() I see code for "toggle" and "edit", but I have no example to test them.

K.O.
  1482   06 Mar 2019   Konstantin Olchanski   Info   mhttpd magic urls
> > Here is the list of mhttpd magic URLs.
> See additional magic URLs at the very bottom:

added redirect for ODB top level "root"

> > 
> > http "get" path:
> > 
> > handle_http_message()
> > handle_http_get()
> > ?mjsonrpc_schema -> serve mjsonrpc_get_schema() // JSON RPC Schema in JSON format
> > ?mjsonrpc_schema_text -> serve mjsonrpc_schema_to_text() // same, but human-readable
> > handle_decode_get()
> > decode_get()
> > interprete()
> > 
> > http "post" path:
> > 
> > handle_http_message()
> > handle_http_post()
> > ?mjsonrpc -> serve mjsonrpc_decode_post_data() // process RPC request
> > handle_decode_post()
> > decode_post()
> > - maybe decode file attachment
> > interprete()
> > 
> > interprete() path:
> > 
> > url contains favicon.{ico,png} -> send_icon()
> > url contains mhttpd.css -> send_css() (see ODB /Experiment/CSS File) // obsolete? see midas.css below
> > url ends with "mp3" -> send_resource(url) // alarm sound
> > url contains midas.js -> send_resource("midas.js")
> > url contains midas.css -> send_resource("midas.css")
> > url ... ditto mhttpd.js
> > url ... ditto obsolete.js
> > url ... ditto controls.js
> > cmd is "example" -> send_resource("example.html")
> > ?script -> cm_exec_script(), see ODB /Script/...
> > ?customscript -> same, see ODB /CustomScript/...
> > cmd is "start" -> send resource start.html
> > cmd is blank -> send resource status.html
> > cmd is "status" -> send resource status.html
> > cmd is "newODB" -> send resource "odb.html" // not used at the moment
> > cmd is "programs" -> programs.html
> > cmd is "alarms" -> alarms.html
> > cmd is "transition" -> transition.html
> > cmd is "messages" -> messages.html
> > cmd is "config" and url is not "HS/" -> config.html
> > cmd is "chat" -> chat.html
> > cmd is "buffers" -> buffers.html
> > // elog section
> > cmd is "Show elog" -> elog
> > cmd is "Query elog" -> elog
> > cmd is "New elog" -> elog
> > cmd is "Edit elog" -> elog
> > cmd is "Reply elog" -> elog
> > cmd is "Last elog" -> elog
> > cmd is "Submit Query" -> elog
> > // end of elog section
> > url is "spinning-wheel.gif" -> send_resource("spinning-wheel.gif")
> 
> // "new custom pages" moved to the bottom
> 
> > // section for old AJAX requests
> > cmd is "jset", "jget", etc -> javascript_commands()
> > // commented out: send_resource(command+".html") // if cmd is "start" will send start.html
> > cmd is "mscb" -> show_mscb_page()
> > cmd is "help" -> show_help_page()
> > cmd is "trigger" -> send RPC RPC_MANUAL_TRIG
> > cmd is "Next subrun" -> set ODB "/Logger/Next subrun" to TRUE
> > cmd is "cancel" -> redirect to getparam("redir")
> > cmd is "set" -> show_set_page() // set ODB value
> > cmd is "find" -> show_find_page()
> > cmd is "CNAF" or url is "CNAF" -> show_cnaf_page()
> > cmd is "elog" -> redirect to external ELOG or send_resource("elog_show.html")
> > cmd starts with "Elog last" -> send_resource("elog_query.html") // Elog last N days & co
> > cmd is "Create Elog from this page" -> redirect to "?cmd=new elog" // called from ODB editor
> > cmd is "Submit elog" -> submit_elog() // usually a POST request from the "elog_edit.html"
> > cmd is "elog_att" -> show_elog_attachment()
> > cmd is "accept" -> what does this do?!?
> > cmd is "eqtable" -> show_eqtable_path() // page showing equipment variables as a table ("slow control page")
> > // section for the sequencer
> > cmd is "sequencer" -> show_seq_page()
> > cmd is "start script" -> seq
> > cmd is "cancel script" -> seq
> > cmd is "load script" -> ...
> > cmd is "new script" -> ...
> > cmd is "save script" -> ...
> > cmd is "edit script" -> ...
> > cmd is "spause" -> ...
> > cmd is "sresume" -> ...
> > cmd is "stop immeditely" -> ...
> > cmd is "stop after current run" -> ...
> > cmd is "cancel stop after current run" -> ...
> > // end of sequencer
> > cmd is "odb" -> show_odb_page()
> 

if URL is "root", redirect to odb editor at the odb top level

> if ODB path URL exists, redirect to the odb editor with odb_path=URL // this restores the old URL scheme for the ODB editor
> 
> > cmd is "custom" -> show_custom_page()
> 
> odb entry exists "/Custom/Images/URL/Background" -> show_custom_gif(URL)
> odb entry exists "/Custom/URL" or "/Custom/URL&" or "/Custom/URL!" -> show_custom_page(URL)
> -- inside show_custom_page(URL):
> -- if URL contains ".gif" -> show_custom_gif(URL)
> -- if URL contains "." (i.e. "bnmr.css") -> show_custom_file(URL) -> send_file()
> -- otherwise process custom page (substitute <odb> tags, etc)
> 
> // section "new custom pages"
> if ODB /Custom exists,
> create blank ODB /Custom/Path if it does not exist yet
> if URL contains "/" or DIR_SEPARATOR, reject it with an error (prevent escape from file jail)
> if ODB /Custom/Path is not blank, concatenate the value of ODB /Custom/Path and the URL
> if this file exists, send_file() it.
> // end of "new custom pages" section
> 
> try send_resource(URL) // this serves "status.html" & co
> 
> > show_error()
> > 
> > K.O.
> 
> K.O.
  1488   13 Mar 2019   Konstantin Olchanski   Forum   systemd unit file for mhttpd
> > Can you post your systemd unit file to this elog, others may find it useful.

Thank you very much!

Note: user name "neo" and home directory is hardwired into the unit file. Also
it runs after "network.target", this may be too early, it should run after nis and autofs
have started (and made home directories accessible). (not sure what systemd target
that is).

K.O.

> 
> [Unit]
> Description=MIDAS data acquisition system
> After=network.target
> StartLimitIntervalSec=0
> 
> [Service]
> Type=simple
> Restart=always
> RestartSec=3
> User=neo
> ExecStart=/opt/midas/bin/mhttpd -e WAGASCI --http 8081 --https 8444
> Environment="MIDASSYS=/opt/midas" "MIDAS_EXPTAB=/home/neo/Code/WAGASCI/MIDAS/online/exptab" 
"MIDAS_EXPT_NAME=WAGASCI" 
> "SVN_EDITOR=emacs -nw" "GIT_EDITOR=emacs -nw"
> PassEnvironment=MIDASSYS MIDAS_EXPTAB MIDAS_EXPT_NAME SVN_EDITOR GIT_EDITOR
> 
> [Install]
> WantedBy=multi-user.target
  1489   13 Mar 2019   Konstantin Olchanski   Forum   Problem stopping every second run
>          I'm running a DAQ frontend and it works well if one single run is
> taken. If I try to take a second run right after, the run is performed but, when
> stopping it, I get the error messages below. Any hint?

Sure. I will read the error messages for you: note that they come in reverse time order - oldest message is at the bottom. I 
will reverse them in my reading:

> 11:42:23.994 2019/03/12 [cygnus_daq,ERROR] [midas.c:1951:,ERROR]
> cm_disconnect_experiment not called at end of program

your frontend has exited (this error message is printed by the atexit() code, so you did not crash but called exit(), 
somehow).

> 11:42:24.011 2019/03/12 [mhttpd,ERROR] [system.c:4661:recv_tcp2,ERROR]
> unexpected connection closure

mhttpd reports that the socket connection to your frontend has closed (because the frontend stopped, closing all of its 
sockets)

> 11:42:24.012 2019/03/12 [mhttpd,ERROR] [system.c:4715:ss_recv_net_command,ERROR]
> error receiving network command header, see messages

the next layer of the mhttpd network code reports the same error again

> 11:42:24.012 2019/03/12 [mhttpd,ERROR] [midas.c:10821:rpc_client_call,ERROR]
> call to "cygnus_daq" on "localhost" RPC "rc_transition": error,
> ss_recv_net_command() status 411

the next layer of the mhttpd network code reports the same error again; now we see that it was trying
to execute a "run start" (or "stop") RPC call to your frontend, but your frontend unexpectedly
shut down (instead of replying to the RPC).

The next messages after that show that mhttpd decided that your frontend is faulty (does not respond to RPC correctly) and tried to 
shut it down, but failed (cannot connect, etc). The last message is mhttpd cleaning up your frontend from ODB (because the 
frontend did not clean up after itself - it did not call cm_disconnect_experiment(), per the very first error message).


So this is what we see from the midas messages - your frontend unexpectedly exited during the run transition. If, as you 
say, the run was stopping at the time, it would be in your end_of_run() function.

To debug this, I would do:

a) put some printf() statements in end_of_run() and see what they say during the crash
b) run the frontend inside the debugger; you may need to set a breakpoint on exit() or something like that.
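
If it helps, here is a minimal sketch combining both suggestions (printf() tracing plus an atexit()
hook that reports an unexpected exit). The end_of_run() signature follows the usual MIDAS frontend
convention; everything else is illustrative only and should be adapted to your own frontend code:

#include <cstdio>
#include <cstdlib>

typedef int INT; // stand-in for the MIDAS typedef; in a real frontend this comes from midas.h

// Called automatically when the process exits - if you see this message
// during a run transition, something in the frontend called exit().
static void report_exit()
{
   printf("frontend is exiting (exit() was called or main() returned)\n");
}

INT end_of_run(INT run_number, char *error)
{
   (void)error; // unused in this sketch
   printf("end_of_run: begin, run %d\n", run_number);

   // ... your real end-of-run code goes here, with printf() between the steps ...

   printf("end_of_run: end, run %d\n", run_number);
   return 1; // CM_SUCCESS
}

int main() // in a real frontend mfe.c provides main(); this one only exercises the sketch
{
   atexit(report_exit);
   char err[256] = "";
   return end_of_run(42, err) == 1 ? 0 : 1;
}

In the real frontend you would register the atexit() hook in frontend_init(); it then tells you
immediately whether exit() is being called from inside end_of_run() or from somewhere else.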


Good luck,
K.O.





> 
> Thank you for your help,
>       Francesco
> 
> 
> 11:42:24.012 2019/03/12 [mhttpd,ERROR] [midas.c:6022:cm_shutdown,ERROR] Killing
> and Deleting client 'cygnus_daq' pid 12472
> 
> 11:42:24.012 2019/03/12 [mhttpd,ERROR] [midas.c:6019:cm_shutdown,ERROR] Cannot
> connect to client 'cygnus_daq' on host 'localhost', port 46341
> 
> 11:42:24.012 2019/03/12 [mhttpd,ERROR] [midas.c:9539:rpc_client_connect,ERROR]
> cannot connect to host "localhost", port 46341: connect() returned -1, errno 111
> (Connection refused)
> 
> 11:42:24.012 2019/03/12 [mhttpd,ERROR] [midas.c:10821:rpc_client_call,ERROR]
> call to "cygnus_daq" on "localhost" RPC "rc_transition": error,
> ss_recv_net_command() status 411
> 
> 11:42:24.012 2019/03/12 [mhttpd,ERROR] [system.c:4715:ss_recv_net_command,ERROR]
> error receiving network command header, see messages
> 
> 11:42:24.011 2019/03/12 [mhttpd,ERROR] [system.c:4661:recv_tcp2,ERROR]
> unexpected connection closure
> 
> 11:42:23.994 2019/03/12 [cygnus_daq,ERROR] [midas.c:1951:,ERROR]
> cm_disconnect_experiment not called at end of program
  1490   13 Mar 2019   Konstantin Olchanski   Forum   Run length
I did not quite understand your desired sequence, is this what you want:

- at 1pm
- start a run
- record 100 events
- end the run
- (this will be, say, 1:15pm)
- start a run
- at 2pm
- end the run
- start a run
- record 100 events
- ad infinitum

There are 2 difficulties with this:

1) If you want your cycle to be exactly 1 hour, you need to use cron or something similar - if you just "start; sleep 3600; stop", 
your cycle will be slightly longer than 1 hour because starting and stopping runs takes some time to complete.

2) if you want your "100 event" run to start exactly precisely on the hour, you need to stop the previous run a few 
minutes/seconds before the hour to avoid the "run stop" delay.

Instead of using the sequencer, I would use a shell script (run it from crontab to avoid problem (1))

#!/bin/sh
mtransition stop # stop the previous long run
odbedit -c 'set "/logger/channels/0/settings/event limit" 100'
odbedit -c 'set "/logger/auto restart" "y"'
mtransition start # start the short run
# end

In your frontend end_of_run() function, add this:
odb set "/logger/channels/0/settings/event limit" 0
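
In C, merged into your existing end_of_run(), that "odb set" would look roughly like this (a sketch
only; the event limit is assumed to be a double here - check your ODB and change the variable and
TID if your "Event limit" key is a DWORD):

#include "midas.h"

// Sketch: reset the logger event limit from inside the frontend's
// end_of_run(). Assumes the key is a double; adjust the type and TID
// to match the actual key in your ODB.
INT end_of_run(INT run_number, char *error)
{
   (void)run_number; // unused in this sketch
   (void)error;

   HNDLE hDB;
   cm_get_experiment_database(&hDB, NULL);

   double event_limit = 0;
   db_set_value(hDB, 0, "/Logger/Channels/0/Settings/Event limit",
                &event_limit, sizeof(event_limit), 1, TID_DOUBLE);

   return CM_SUCCESS;
}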

This will produce the following sequence:

- script will stop previous long run
- set event limit to 100, start the "100 events" run
- logger will stop at 100 events, call your frontend end_of_run(), set event limit to 0
- logger auto restart will start a new run, event limit is now 0, this is your long run
- on the hour, cron runs your script, cycle repeats from the top.

Instead of cron, you can use a looper script. Note that you must run
the main script in the background (note the "&") to avoid problem (1).

#!/bin/sh
# looper script
while true; do
   main_script &
   sleep 3600
done
# end

To stop the sequence, kill the looper script.

K.O.


> Dear all,
>         I need to implement a DAQ sequence where a short run (100 events, which takes a couple of 
> minutes) is taken every hour, with a long run in between two short runs. In the sequencer, I can do:
> 
> LOOP infinite
> 
> .... some ODB settings ....
>      TRANSITION START
>      WAIT events 100
>      TRANSITION STOP
> 
> .... some ODB settings ....
>      TRANSITION START
>      WAIT seconds 3600
>      TRANSITION STOP
> 
> ENDLOOP
> 
> 
> I have two questions: 
> 
> - for the long run, I want to write on disk only a maximum number of events. I think I can suppress 
> the event polling in the frontend, with an ODB query of the number of collected events. I'm 
> wondering if there is a smarter way to do that. It is also ok if the run is stopped after a maximum 
> number of events, but the subsequent short run should still start exactly after 1h from the previous 
> short run. 
> 
> - with the script above, the real time lapse between the start of two short runs would depend on 
> the duration of the short run itself. Is there a way to start the short run exactly 1 h after the starting 
> of the previous short run?
> 
> Thank you in advance for your help,
>               Francesco
  1492   14 Mar 2019   Konstantin Olchanski   Forum   systemd unit file for mhttpd
> As far as NIS is concerned, I am sorry but I don't know how it is used by MIDAS.

NIS is traditionally used together with autofs to form clusters of UNIX/Linux machines. NIS is a database
of user names, passwords, home directories, NFS (autofs) mount points and NFS exports. autofs would
read all of its configuration from the NIS database.

So the correct boot sequence in most cases would be like this:
hardware init -> network init -> nis -> autofs -> users can login -> start midas (mhttpd) from cron @reboot or similar.

(In organizations that have a dedicated staff of IT sysadmins, you would see LDAP instead of NIS).

K.O.
  1493   14 Mar 2019   Konstantin Olchanski   Info   bitbucket issue tracker "feature"
> It turns out the bitbucket issue tracker has a feature - I cannot make it automatically add me 
> to the watcher list of all new issues.
> 
> So when you create a new issue, I think I get one message about it, and no more.
> 

I retested this and I confirm that for new issues I am not a watcher "by default". If I reply to an issue,
the system enters an inconsistent state - it thinks I am a watcher, as best I can tell, but it also
says "0 watchers".

K.O.
  1494   14 Mar 2019   Konstantin Olchanski   Info   switch to json odb dump format
Regarding odb dumps saved into the midas output files, there are several
requests to make it possible to save odb in the json format.

Since 2019-02-12 commits
https://bitbucket.org/tmidas/midas/commits/ccaeeef4d6a1f0d91a67f2dc0a68105b7f6b77ae
and
https://bitbucket.org/tmidas/midas/commits/b3251587a68ef577919c81a221559e9af4dc8ae6

The format of the odb dump is specified by ODB
/Logger/Channels/0/Settings/ODB dump format (default is "json", was hardcoded as "xml").

The filename of the odb dump file saved at the end of run is specified by ODB
/Logger/ODB Last Dump File (default is "last.json", was hardcoded as "last.xml").

In addition, the following defaults have been changed, enabling LZ4 compession and CRC32C checksums by default:

/Logger/ODB dump file (new default is "run%05d.json", was "run%05d.odb")
/Logger/Channels/0/Settings/Output "FILE"
/Logger/Channels/0/Settings/Compress "lz4"
/Logger/Channels/0/Settings/Checksum "CRC32C"

This brings mlogger default settings closer to what one would expect in the 21st century - "free" compression (LZ4)
and output file data integrity protected by checksums (CRC32C). These defaults are "free" in the sense that turning
them off would not noticeably improve the system performance. (Some users may want to enable better compression,
such as BZIP2 or PBZIP2, and better checksums, such as SHA256 or SHA512 - both choices have a significant CPU cost).

The default ODB dump format is now consistently "json" (vs the previous mix of XML and ODB formats).

Why "json" as the default? There has to be some default. The old ODB dump format is right out. Compared to XML,
JSON dump file size is smaller and the encoder is faster (important for reducing the time to start a run - where ODB dump is done twice). Tools for parsing 
JSON data are more widely available and "JSON has won in the marketplace". See also https://www.w3schools.com/js/js_json_xml.asp

Those who want to continue to use XML ODB dumps can trivially continue to do so by changing the settings listed above.

In the rootana package, we have the XmlOdb class for parsing XML ODB dumps, but no matching JsonOdb class for parsing
JSON ODB data. This reminds me of difficulties working with XML data. I originally wrote the XmlOdb class using
the "libxml2" package. After discovering that many Linux distributions do not install this package by default, I switched
to the DOM and XML parsers from ROOT (since rootana is not very useful without ROOT anyway). Only to discover that
some binary ROOT distributions distributed by CERN exclude the XML parser from their default build.

I do intend to write the JsonOdb class for rootana, and I will probably do it using the mjson.{h,cxx} parser that I wrote
for MIDAS. (Lack of the JsonOdb class is the reason I did not switch MIDAS ODB dumps to JSON much earlier
than now; the camel's back was broken by the extra slow run start time in the ALPHA-g DAQ system.)

K.O.
  1495   14 Mar 2019   Konstantin Olchanski   Info   Gyrations of custom pages and ODB /Custom/Path
I now understand Stefan's and Thomas's proposal a little bit better.

In my mind only one issue remains - when we say "we will serve files from directory X", how
do we prevent mhttpd from serving files outside this directory via trick URLs containing ".."
and/or other gimmicks?

The code I currently put in mhttpd disallows multi-level URL path names (by rejecting names that contain the directory separator "/").

This has the effect of keeping the mhttpd URL space flat (without subdirectories).

http://localhost:8080/Status.html  <--- no multi level URLs like we used to have:
http://localhost:8080/CS/Custom.html <--- no more of these

Keeping the URL space restricted to one level is important if we do not want to defeat
the recent change to the mhttpd URL scheme. If mhttpd runs behind a proxy, without
a "Base URL" (which we just removed), we can only use relative URLs to navigate
between midas pages. If we permit multi-level URLs, it becomes hard to get
back to the status page without the ugly counting of ".." URL elements (ugly and
brittle code that we also just recently removed) (n.b. to navigate from CS/Custom.html
to the status page, one must redirect to "../Status.html").

But this whole beautiful cathedral falls apart over one valid use case: we want to serve "jsroot"
from a subdirectory called "jsroot" - this is how that package is packaged and we do not want
to mess with it just to make midas happy.

So at the least we must enable serving of multi-level URL path names to serve 3rd party packages.

The most trivial way out is to replace the URL check "/ is not permitted" with ".. is not permitted".
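
For reference, here is a deliberately naive sketch of that alternative check (reject any path
component equal to ".."); real code would also have to worry about percent-encoded separators,
backslashes and symlinks, which is exactly why this style of check is harder to get right than
the flat "no '/'" rule:

#include <string>
#include <sstream>
#include <cstdio>

// Split the URL path on "/" and reject it if any component is "..".
// Naive on purpose: no handling of percent-encoding, "\" or symlinks.
static bool contains_dotdot(const std::string &url)
{
   std::stringstream ss(url);
   std::string part;
   while (std::getline(ss, part, '/')) {
      if (part == "..")
         return true;
   }
   return false;
}

int main()
{
   printf("%d\n", contains_dotdot("jsroot/somefile.js")); // 0: multi-level URL allowed
   printf("%d\n", contains_dotdot("../../etc/passwd"));   // 1: rejected
   return 0;
}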

(One could also have a list of all permitted subdirectories in ODB, but this would be hard to use and 
difficult to implement. Not my favourite solution.)

This will break the flatness of the mhttpd URLs (no subdirectories). But maybe it is sufficient
to write down "do not do this!" and close with "wontfix" all bug reports of the form "my custom page
is at http://localhost:8080/mycustomdir/new/verynew/custom.html, how come the [status] button
does not take me back to the status page?".


K.O.

> Parsing all URL in mhttpd to prevent /etc/passwd etc. to be returned is tricky, because people can use escape sequences etc. Therefore I think it is much better to restrict file access 
> on the file system level when opening a file. The only escape there one could have is "..", which can be tested easily. 
> 
> Therefore, I propose to restrict file access to two well-defined directories, which is one system directory and one user directory. The system directory should be defined via 
> $MIDASSYS/resources, and the user directory should be the experiment directory (as defined in exptab) followed by "resources". So if MIDASSYS equals to /usr/local/midas and the 
> experiment directory equals to /home/users/exp for example, we would only have these two directories (and of course the subdirectories within these) served by mhttpd:
> 
> $MIDASSYS/resource -> /usr/local/midas/resources
> <exptab>/resources -> /home/users/exp/resources
> 
> These directories should be hard-wired into mhttpd, and not go through an ODB entry, since otherwise one could manipulate the ODB entries (knowingly or unknowingly) and open a 
> back-door. 
> 
> If users need a more complex structure, they can put soft links into these directories.
> 
> The code which opens a resource file should then first evaluate $MIDASSYS, then add "/resources/", then add the requested file name, make sure that there is no ".." in the file name, 
> then open the file. If not existing, do the same for the <exptab>/resources/ directory.
> 
> This change will break most experiments, and forces people to move their custom pages to different directories, but I think it's the only clean solution and we just have to bite the 
> bullet.
> 
> Comments are welcome.
> 
> Stefan
  1496   14 Mar 2019   Konstantin Olchanski   Info   Gyrations of custom pages and ODB /Custom/Path
> In my mind only one issue remains - when we say "we will serve files from directory X", how
> to we prevent mhttpd from serving files outside this directory by using trick URLs containing ".."
> and/or other gimmicks.
> 
> Disallow multi-level URL path names (by rejecting names that contain the directory separator "/").
> Replace the URL check "/ is not permitted" with ".. is not permitted".
> 

https://en.wikipedia.org/wiki/Directory_traversal_attack

K.O.