Entry  06 May 2013, Konstantin Olchanski, Info, Recent-ish SVN changes at PSI 
A little while ago, PSI made some changes to the SVN hosting. The main SVN URL seems to remain the 
same, but the SVN viewer has moved to a new URL (it seems a bit faster than the old viewer): 
https://savannah.psi.ch/viewvc/meg_midas/trunk/

Also the SSH host key has changed to:

savannah.psi.ch,192.33.120.96 ssh-rsa 
AAAAB3NzaC1yc2EAAAABIwAAAQEAwVWEoaOmF9uggkUEV2/HhZo2ncH0zUfd0ExzzgW1m0HZQ5df1OYIb
pyBH6WD7ySU7fWkihbt2+SpyClMkWEJMvb5W82SrXtmzd9PFb3G7ouL++64geVKHdIKAVoqm8yGaIKIS0684
dyNO79ZacbOYC9l9YehuMHPHDUPPdNCFW2Gr5mkf/uReMIoYz81XmgAIHXPSgErv2Nv/BAA1PCWt6THMMX
E2O2jGTzJCXuZsJ2RoyVVR4Q0Cow1ekloXn/rdGkbUPMt/m3kNuVFhSzYGdprv+g3l7l1PWwEcz7V1BW9LNPp
eIJhxy9/DNUsF1+funzBOc/UsPFyNyJEo0p0Xw==

Fingerprint: a3:18:18:c4:14:f9:3e:79:2c:9c:fa:90:9a:d6:d2:fc

The change of host key is annoying because it makes "svn update" fail with an unhelpful message (some 
mumble about ssh -q). To fix this, run "ssh svn@savannah.psi.ch", then fix up the ssh host key as 
usual.
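
For example, with a stock OpenSSH client the stale key can be removed and the new one accepted on the next connection (a sketch):

ssh-keygen -R savannah.psi.ch    # drop the old host key from ~/.ssh/known_hosts
ssh-keygen -R 192.33.120.96      # same for the entry stored by IP address
ssh svn@savannah.psi.ch          # accept the new key, checking it against the fingerprint above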

K.O.
Entry  30 Apr 2013, Konstantin Olchanski, Info, ROOT switched to GIT 
Latest news - the ROOT project switched from SVN to GIT.

Announcement:
http://root.cern.ch/drupal/content/root-has-moved-git

Fons's presentation with details on the conversion process, repository size and performance 
improvements:
https://indico.cern.ch/getFile.py/access?contribId=0&resId=0&materialId=slides&confId=246803

"no switch yard" work flow:
http://root.cern.ch/drupal/content/suggested-work-flow-distributed-projects-nosy

GIT cheat sheet:
http://root.cern.ch/drupal/content/git-tips-and-tricks

K.O.
Entry  11 Apr 2013, Thorsten Lux, Forum, Persistent ipcrm error 
Hello,

I have a problem with our DAQ, which is based on MIDAS. Until now, for about 3 years, it worked quite well, but since I tried to restart data taking after a break of 2 months, I always get the following error message:

[system.c:308:ss_shm_open,ERROR] Shared memory segment with key 0x4d008002 already exists, please remove it manually: ipcrm -M 0x4d008002
[midas.c:1950:cm_connect_experiment1,ERROR] cannot open database
Unexpected error #304


Then I tried the following to fix the problem:

-) I first checked the shared memory segments with ipcs:
0x4d008002 3244040 next 666 1077248 1
0x4d00006e 3276809 next 666 116444 1


Sometimes there is an additional line which I also delete.

-) I deleted the shared memory segments with "ipcrm -M 0x4d008002" and "ipcrm -M 0x4d00006e"

-) I removed the .SYS*.SHM files:
-rw-r--r-- 1 next users 0 Mar 16 2010 MIDAS/online/.ALARM.SHM
-rw-r--r-- 1 next users 0 Mar 16 2010 MIDAS/online/.ELOG.SHM
-rw-r--r-- 1 next users 0 Mar 16 2010 MIDAS/online/.HISTORY.SHM
-rw-r--r-- 1 next users 0 Mar 16 2010 MIDAS/online/.MSG.SHM
-rw-r--r-- 1 next users 1089536 Apr 11 15:46 MIDAS/online/.ODB.SHM
-rw-r--r-- 1 next users 116444 Apr 11 15:43 MIDAS/online/.SYSMSG.SHM
-rw-r--r-- 1 next users 16793660 Apr 11 15:43 MIDAS/online/.SYSTEM.SHM


-) I rebooted the PC

-) I started the MIDAS daemon using a shell script with the following lines:
cd /home/next/CAEN/A2818Drv/
sudo sh a2818_load
mhttpd -p 8080 -D


-) Normally I can then start a run, but when I try to stop it I again get the error message from above.

In addition I get from time to time the following error messages:
[mhttpd,INFO] Client 'unknown' on buffer 'SYSMSG' removed by cm_watchdog because client pid 3287 does not exist
[NEXT DAQ,INFO] Client 'unknown' on buffer 'SYSMSG' removed by bm_wait_for_free_space because client pid 3280 does not exist
[mtransition,INFO] Client 'mhttpd' (PID 3229) on buffer 'ODB' removed by cm_watchdog (idle 47.4s,TO 10s)


Since all this did not help, and although there was no update of the operating system, I decided to recompile the whole MIDAS framework on this machine.
It compiled and installed, but the error persisted. In addition, I now cannot start mlogger from the web interface anymore, only manually. However, I can stop it from the web interface.

Do you have an idea what the problem could be? I am starting to get a bit desperate, also because I am a user of the DAQ system and the person who developed it left some years ago.

As far as I can tell, I am using a MIDAS version from 15.03.2010 (midas20100315.tar.gz). In principle there is only one frontend device, a CAEN V1740 digitizer, connected to MIDAS.

Thanks!
    Reply  11 Apr 2013, Konstantin Olchanski, Forum, Persistent ipcrm error 
> [system.c:308:ss_shm_open,ERROR] Shared memory segment with key 0x4d008002 already exists, 
please remove it manually: ipcrm -M 0x4d008002
> [midas.c:1950:cm_connect_experiment1,ERROR] cannot open database
> Unexpected error #304

For the record, SYSV shared memory, with its keys and segments, has always been brittle and hard to 
debug, with problems such as the one you describe.

Also, SYSV shared memory suffers from key aliasing - shared memory segments created with different 
names can all map to the same key and collide, and then nothing works. You may not see this if all the files are 
located on a local disk, but if the .SHM files are located on an NFS disk, it can happen (and did happen in 
T2K).

For this reason, since around August 2010, MIDAS also implements POSIX shared memory, and for new 
MIDAS installations POSIX shared memory is the default. (On MacOS, POSIX shared memory was always 
the default because MacOS has a very small maximum SYSV shared memory size.)

The type of shared memory is set by the contents of .SHM_TYPE.TXT and it is possible to switch between 
SYSV and POSIX shared memory at will. (Ask me).

MIDAS still uses SYSV semaphores because they have a built-in feature (the SEM_UNDO flag) to automatically 
unlock the semaphore if the program that locked it dies for any reason. POSIX semaphores do not have this 
built-in feature and we would have to implement some kind of detection and recovery for the case when a 
semaphore is locked by a program that died (and will never unlock it).

K.O.

P.S. I will address the rest of Prof. Thorsten's question in a private email.

P.P.S. Please post elog messages in the "plain" format. NOT HTML or ELCODE.
    Reply  11 Apr 2013, Stefan Ritt, Forum, Persistent ipcrm error 

Thorsten Lux wrote:
> In addition now I cannot start anymore the mlogger from the web interface but only manually. However, I can stop it from the web interface.


At least that one can be fixed easily. Each program has a certain command with which one can start it. This has to be put into the ODB under /Programs/<program>. In your case you probably need

/Programs/Logger/Start command = mlogger -D

to start the logger from the Web page. To debug your run stop problems, I would recommend starting all programs in a terminal window and watching which one crashes at the run end.
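
For example, the command can be set from a shell like this (a sketch; I assume odbedit's quoting of the key name works as shown):

odbedit -c 'set "/Programs/Logger/Start command" "mlogger -D"'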

/Stefan
       Reply  12 Apr 2013, Thorsten Lux, Forum, Persistent ipcrm error 
[quote="Stefan Ritt"][quote="Thorsten Lux"]In addition now I cannot start
anymore the mlogger from the web interface but only manually. However, I can
stop it from the web interface.[/quote]

At least that one can be fixed easily. Each program has a certain command with
which one can start it. This has to be put into the ODB under
/Programs/<program>. In your case you probably need

/Programs/Logger/Start command = mlogger -D

to start the logger from the Web page. To debug your run stop problems, I would
recommend to start all programs in a terminal window and look which one crashes
on the run end.

/Stefan[/quote]


Hi Stefan,

under /Programs/Logger/Start command I have
/home/next/MIDAS/midas/linux/bin/mlogger -D. This command does not work when I
press the "Start Logger" button on the mhttpd web page, but when I copy and paste
it into a terminal window, it does the job. 

Well, thanks to you both for the fast response. I wrote Konstantin an email with
the results of the tests he suggested.

Ciao
          Reply  12 Apr 2013, Stefan Ritt, Forum, Persistent ipcrm error 
> Hi Stefan,
> 
> under /Programs/Logger/Start command I have
> /home/next/MIDAS/midas/linux/bin/mlogger -D. This command does not work when I
> press the "Start Logger" button on the mhttpd web page, but when I copy and paste
> it into a terminal window, it does the job. 
> 
> Well, thanks to you both for the fast response. I wrote Konstantin an email with
> the results of the tests he suggested.
> 
> Ciao

Let me guess: mhttpd is started under root (to be able to connect to port 80), and for root the mlogger program 
is not in the path. Try putting the full path into the ODB:

/Programs/Logger Start command = /usr/local/bin/mlogger -D
             Reply  12 Apr 2013, Thorsten Lux, Forum, Persistent ipcrm error 
> 
> > Hi Stefan,
> > 
> > under /Programs/Logger/Start command I have
> > /home/next/MIDAS/midas/linux/bin/mlogger -D. This command does not work when I
> > press the "Start Logger" button on the mhttpd web page, but when I copy and paste
> > it into a terminal window, it does the job. 
> > 
> > Well, thanks to you both for the fast response. I wrote Konstantin an email with
> > the results of the tests he suggested.
> > 
> > Ciao
> 
> Let me guess: mhttpd is started under root (to be able to connect to port 80), and for root the mlogger program 
> is not in the path. Try putting the full path into the ODB:
> 
> /Programs/Logger Start command = /usr/local/bin/mlogger -D

Yes, mhttpd is started with sudo, but I have the full path in the start command, and every user has
permission to execute mlogger. But okay, I will first concentrate on getting the rest working again and then fight this problem.

Thanks!
    Reply  12 Apr 2013, Thorsten Lux, Forum, Persistent ipcrm error 
Hi,

it seems that I solved the problem in a quite brutal way.
I opened the database with odbedit and saved first the whole database as an ASCII
file, and then did the same for each section separately. Then I closed odbedit.
Afterwards I deleted all .*.SHM files, including .ODB.SHM, and rebooted the system.
After the restart I opened odbedit and started mhttpd. With this blank system
the problem had disappeared. Afterwards I loaded section by section from the
previously created ASCII files. After each section I tested whether I could start
and stop runs, and it worked without problems. At the end I also loaded the ASCII
file which contained the whole database. In this case I got the following error
message:
[odb.c:6038:db_paste,ERROR] found string exceeding MAX_STRING_LENGTH

However, after a reboot everything worked fine. I can start and stop runs, with
and without a frontend, without any error message. Only mlogger refused to
work again. 

But we solved this problem too. It seems it was related to a missing library
path. It is strange: while the command does not work from the mhttpd web page
(and gives no error message), copying the same command to a terminal and
starting it manually does the job. We solved it by putting the start of mlogger
into a simple shell script and executing that from the mhttpd web page.
Probably not an elegant solution, but it does the job.
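
For reference, the script is essentially just this (a sketch; the mlogger path is the one from above, and the library directory is an assumption):

#!/bin/sh
# start_mlogger.sh - called from the mhttpd "Start Logger" button
# export the library path that was missing in mhttpd's environment (assumed location)
export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH
exec /home/next/MIDAS/midas/linux/bin/mlogger -D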

Well, with this I can enjoy my weekend and start over with data taking next week!

Thanks a lot!

Thorsten
       Reply  12 Apr 2013, Stefan Ritt, Forum, Persistent ipcrm error 
> [odb.c:6038:db_paste,ERROR] found string exceeding MAX_STRING_LENGTH

Ok, so here is what probably happened. Some user program wrote a long string into the ODB and somehow corrupted it. This corruption persists as long as you work with 
binary data; indeed, "rebuilding" the ODB helps in that case. What we actually do is dump the ODB contents into the data file at the beginning of every run via

/Logger/Channels/0/Settings/ODB dump

In case we get ODB corruption, we clear all *.shm files as well as the shared memory segments, create a fresh ODB, extract the ODB from the last successful run via

odbhist -e runxxx.mid

and load it via odbedit. I put some additional code into most midas functions to prevent this corruption (and thus you saw the above error "found string exceeding 
MAX_STRING_LENGTH"), but since the ODB is physically in the address space of each midas program, they can theoretically bypass the midas functions and accidentally 
write into the ODB with an uninitialized pointer, for example.
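
Put together as a shell session, this recipe might look as follows (a sketch; the run number, shared memory key and file names are assumptions):

cd /path/to/experiment/dir        # directory holding the .SHM files (hypothetical)
rm .[A-Z]*.SHM                    # clear the shared-memory backing files
ipcrm -M 0x4d008002               # plus any MIDAS segments listed by "ipcs -m"
odbedit -c ls                     # connecting once creates a fresh ODB
odbhist -e run00180.mid           # extract the ODB dump from the last successful run
odbedit -c "load run00180.odb"    # load it back (the output file name is an assumption)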

Best regards,
Stefan
Entry  13 Feb 2013, Konstantin Olchanski, Info, Review of github and bitbucket 
I have done a review of github and bitbucket as candidates for hosting GIT repositories for collaborative 
DAQ-type projects. Here are my impressions.

1. GIT as a software management tool seems to be a reasonable choice for DAQ-type projects. "master" 
repositories can be hosted at places like github or self-hosted (in the simplest case, only 
http://host/~user web access is required to host a git repository). For each "daq project" aka "experiment" 
one would "clone" the master repository, perform any local modifications as required, with full local 
version control, and when desired feed the changes back to the master repository as direct commits (git 
push), as patches posted to github ("pull requests"), or as patches emailed to the maintainers (git 
format-patch).
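
A sketch of this workflow with standard git commands (the repository URL is hypothetical):

git clone https://example.org/git/midas.git   # clone the master repository
cd midas
# ...hack, with full local version control...
git commit -a -m "local change for our experiment"
git push origin master                        # direct commit, if permitted
git format-patch origin/master                # or: produce patches to email to the maintainers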

2. Modern requirements for hosting a DAQ-type project include:
a) code repository (GIT, etc) with reasonably easy user access control (i.e. commit privileges should be 
assigned by the project administrators directly, regardless of who is on the payroll at which lab, who is 
a registered user of CERN, or who is in some LDAP database managed by some IT department 
somewhere).
b) a wiki for documentation, with similar user access control requirements.
c) a mailing list, forum or bug tracking system for communication and "community building"
d) an ability to web host large static files (schematics, datasheets, firmware files, etc)
e) reasonable web-based tools for browsing the files, looking at diffs, "cvs annotate/git blame", etc.

3. Both github and bitbucket satisfy most of these requirements in similar ways:

a) GIT repositories:
aa) access using git, ssh and https with password protection. ssh keys can be uploaded to the server, 
permitting automatic commits from scripts and cron jobs.
bb) anonymous checkout possible (cannot be disabled)
cc) user management is simple: participants have to self-register and confirm their email address, then the project 
administrator gives them commit access to specific git repositories (and wikis).
dd) for the case of multiple project administrators, one creates "teams" of participants. In this 
configuration the repositories are owned by the "team" and all designated "team administrators" have 
equal administrative access to the project.

b) Wiki:
aa) both github and bitbucket provide rudimentary wikis, with wiki pages stored in secondary git 
repositories (*NOT* as a branch or subdirectory of the main repo).
bb) github supports "markdown" and "mediawiki" syntax
cc) bitbucket supports "markdown" and "creole" syntax (all documentation and examples use the "creole" 
syntax).
dd) there does not seem to be any way to set the "project standard" syntax - both wikis have the "new 
page" editor default to the "markdown" syntax.
ee) compared to mediawiki (wikipedia, triumf daq wiki) and even plone, both github and bitbucket wikis 
lack important features:
1) cannot edit individual sections of a page, only the whole page at once - bad if you have long pages.
2) cannot upload images (and other documents) directly through the web editor/interface. Both wikis 
require that you clone the wiki git repository, commit image and other files locally and push the wiki git 
repo into the server (hopefully without any collisions); only then can you use the images and documents 
in the wiki.
3) there is no "preview" function for images - in mediawiki I can have small automatically generated 
"preview" images on the wiki page; when I click on them I get the full size image. (Even "elog" can do this!)
ff) to be extra helpful, the wiki git repository is invisible to the normal git repository graphical tools for 
looking at revisions, branches, diffs, etc. While github has a special web page listing all existing wiki 
pages, bitbucket does not have such a page, so you better write down the filenames on a piece of paper.

c) mailing list/forum/bug tracking:
aa) both github and bitbucket implement reasonable bug tracking systems (but in both systems I do not 
see any button to export the bug database - all data is stuck inside the hosting provider. Perhaps there is 
a "hidden button" somewhere).
bb) bitbucket sends quite reasonable email notifications
cc) github is silent - I do not see any email notifications at all about anything. Maybe github thinks I do not 
want to see notices about my own activities; good of it to make such decisions for me.

d) hosting of large files: both git and wiki functions can host arbitrary files (compared to mediawiki only 
accepting some file types, e.g. Quartus pof files are rejected).

e) web based tools: thumbs up to both! web interfaces are slick and responsive, easy to use.

Conclusions:

Both github and bitbucket provide similar full-featured git repository hosting, user management and bug 
tracking.

Both provide very rudimentary wiki systems. Compared to full featured wikis (i.e. mediawiki), this is like 
going back to SCCS for code management (from before RCS, before CVS, before SVN). Disappointing. A 
deal breaker if my vote counts.

K.O.
    Reply  14 Feb 2013, Stefan Ritt, Info, Review of github and bitbucket 
Let me add my five cents:

We have been using bitbucket at PSI for two months now, and are very happy with it.

Pros:

- We like the GIT flow model (http://nvie.com/posts/a-successful-git-branching-model/). You can at the same time do hot fixes, have a "distribution 
version", and keep a development branch, where you can try new things without compromising the distribution.
- Nice and fast Web interface, especially the "blame" is lightning fast compared to SVN/CVS
- GIT is non-centralized, so your local clone of a repository contains everything. If bitbucket is down/asks for money, you can continue with your local 
repository and clone it to some other hosting service, or host it yourself
- SourceTree (http://www.sourcetreeapp.com/) is a nice GUI for Mac lovers. 
- Easy user management
- Free for academic use

Con:

- Wiki is limited as KO wrote, so it should not be used as a "full" wiki to replace Plone for example, just to annotate your project
- SVN revision number is gone. This is on purpose since it does not make sense any more if you keep several parallel branches (merging becomes a 
nightmare), so one has to use either the (random) commit-ID or start tagging again.

So in conclusion, I would say that it's time to switch MIDAS to GIT. We'll probably do that in July when I will be at TRIUMF.

/Stefan
       Reply  01 Apr 2013, Randolf Pohl, Info, Review of github and bitbucket 
And my 2ct:

Go for git!

I've been using git since 2007 or so, after cvs and svn. Git has some killer features which I can't miss any more:

* No central repo. Have all the history with you on the train.
* Branching and merging, with stable branches and feature branches.
  Happy hacking while my students do analysis on a stable version.
  Or multiple development branches for several features.
  And merging really works, including fixing up merge conflicts.
* "git bisect" for finding which commit introduced a (reproducible) bug.
* "gitk --all"

I use git for everything: Software, tex, even (Ooffice) Word documents.

Go for git. :-)

Randolf
          Reply  02 Apr 2013, Konstantin Olchanski, Info, Review of github and bitbucket 
Hi, thanks for your positive feedback. I have been using git for small private projects for a few years now
and I like it. It is similar to the old SCCS days - good version control without having to set up servers,
accounts, doodads, etc.

> * No central repo. Have all the history with you on the train.
> * Branching and merging, with stable branches and feature branches.
>   Happy hacking while my students do analysis on a stable version.
>   Or multiple development branches for several features.

This is the part that worries me the most. Without a "central" "authoritative" repository,
in just a few quick days, everybody will have their own incompatible version of midas.

I guess I am okay with your private midas diverging from mainstream, but when *I* end up
with 10 different incompatible versions just in *my* repository, can that be good?

>   And merging really works, including fixing up merge conflicts.

But somebody still has to do it. With a central repository, the problem takes care of
itself - each developer has to do their own merging - with svn, you cannot commit
to the head without merging the head into your code first. But with git, I can just throw
my changes into some branch out there, hoping that somebody else will do the merging.
But guess what, there ain't anybody home but us chickens. We do not have a mad Finn here
to enforce discipline and keep us in shape...

As an example, look at the HADOOP/HDFS code development: they have at least 3 "mainstream"
branches going, none of which has all the features combined, and each branch has bugs whose
fixes are in a different branch. What a way to run a railroad.

> * "git bisect" for finding which commit introduced a (reproducible) bug.
> * "gitk --all"
>
> Go for git. :-)

Absolutely. For me, as soon as I can wrap my head around this business of "who does all the merging".

K.O.
             Reply  02 Apr 2013, Randolf Pohl, Info, Review of github and bitbucket 
Hi Konstantin,

> > * No central repo. Have all the history with you on the train.
> > * Branching and merging, with stable branches and feature branches.
> >   Happy hacking while my students do analysis on a stable version.
> >   Or multiple development branches for several features.
> 
> This is the part that worries me the most. Without a "central" "authoritative" repository,
> in just a few quick days, everybody will have their own incompatible version of midas.

No! This is probably one of the biggest misunderstandings of the git workflow.

You can of course _define_ one central repo: This is the one that you and Stefan decide to be "the source" (as
Linus does for the kernel). It's like the central svn repo: Only Stefan and you can push to it, and everybody
else will pull from it. Why should I pull MIDAS from some obscure source when your "public" repo is available?

Look at the Linux Kernel: Linus' version is authoritative, even though everybody and his best friend has his
own kernel repo.

So, the main workflow does not change a lot: You collect patches, commit them, and "push" them to the central
repo. All users "pull" from this central repo. This is very much what svn offers.

> 
> I guess I am okay with your private midas diverging from mainstream, but when *I* end up
> with 10 different incompatible versions just in *my* repository, can that be good?

See above: _You_ define what the central repo is.

But: I _bet_ you will very soon have 10 versions in your personal repo, because _you choose_ to do so. It's
just SO much easier. The non-linear history with many branches is a _feature_. I can't live without it any more:


Looking at my MIDAS analyzer:

I have a "public" repo in /pub/git/lamb.git. This is where I publish my analyzer versions. All my collaborators
pull from this.

Then I have my personal repo in ~/src/lamb. 
This is where I develop. When I think something is ready for the public, I merge this branch into the public repo. 

Whenever I start to work on a new feature, I create a branch in my _local_ repo (~/src/lamb).  I can fiddle and
play, not affecting anybody else, because it never sees the public repo.
OK, collaborator A finds a bug. I switch to my local copy of the public version, fix the bug, and push the fix
to the public repo. Then I go back to my (local) feature branch, merge the bug fix, and continue hacking.
Only when the feature is ready do I push it to the public repo.
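
In git commands, that detour looks roughly like this (a sketch; the branch and remote names are made up):

git checkout master            # my local copy of the public version
# ...fix the bug...
git commit -am "fix bug found by collaborator A"
git push public master         # publish the fix
git checkout feature           # back to the feature branch
git merge master               # pick up the bug fix, continue hacking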

Things get more interesting as you work on several features simultaneously. You have e.g. 3 topic branches:
(a) is nearly ready, and you want a bunch of people to test it.
    push branch "feature (a)" to the public repo and tell the people which branch to pull.
(b) is WIP, you hack on it without affecting (a).
(c) is bug fixes which may or may not affect (a) or (b).
And so on.

You will soon discover the beauty of several parallel branches.

Plus, git merges are SO simple that you never think about "how to merge".

> 
> >   And merging really works, including fixing up merge conflicts.
> 
> But somebody still has to do it. With a central repository, the problem takes care of
> itself - each developer has to do their own merging - with svn, you cannot commit
> to the head without merging the head into your code first. But with git, I can just throw
> my changes into some branch out there, hoping that somebody else will do the merging.
> But guess what, there ain't anybody home but us chickens. We do not have a mad Finn here
> to enforce discipline and keep us in shape...

See above: You will have the exact same workflow in git, if you like.




> As an example, look at the HADOOP/HDFS code development: they have at least 3 "mainstream"
> branches going, none of which has all the features combined, and each branch has bugs whose
> fixes are in a different branch. What a way to run a railroad.

I haven't looked at this. All I can say: Branches are one of the best features.

> 
> > * "git bisect" for finding which commit introduced a (reproducible) bug.
> > * "gitk --all"
> >
> > Go for git. :-)
> 
> Absolutely. For me, as soon as I can wrap my head around this business of "who does all the merging".

Easy: YOU do it.

Keep going as in svn: Collect patches, and send them out.

And then, try "git checkout -b my_first_branch", hack, hack, hack,
"git merge master".

Best,

Randolf


> 
> K.O.
          Reply  03 Apr 2013, Stefan Ritt, Info, Review of github and bitbucket 
> * "git bisect" for finding which commit introduced a (reproducible) bug.

I did not know this command, so I read about it. This IS WONDERFUL! I once had the case (actually with MSCB) that a bug was introduced in the last 100 
revisions, but I did not know in which. So I checked out -1, -2, -3 revisions, then thought a bit, then tried -99, -98, then had the bright idea to try -50, then 
slowly converged. Later I realised that I should have done a binary search, like -50, if ok try -25, if bad try -37, and so on to iteratively find the offending 
commit. Finding that there is a command in git which does this automatically is great news.
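
For reference, the automated binary search looks like this (a sketch; the "known good" revision is hypothetical):

git bisect start
git bisect bad                 # the current HEAD shows the bug
git bisect good HEAD~100       # the last revision known to be good
# git checks out a midpoint; build and test it, then report:
git bisect good                # or "git bisect bad"; repeat until the offending commit is named
git bisect reset               # return to the original HEAD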

Stefan
             Reply  03 Apr 2013, Randolf Pohl, Info, Review of github and bitbucket 
> > * "git bisect" for finding which commit introduced a (reproducible) bug.
> 
> I did not know this command, so I read about it. This IS WONDERFUL! I once had the case (actually with MSCB) that a bug was introduced in the last 100 
> revisions, but I did not know in which. So I checked out -1, -2, -3 revisions, then thought a bit, then tried -99, -98, then had the bright idea to try -50, then 
> slowly converged. Later I realised that I should have done a binary search, like -50, if ok try -25, if bad try -37, and so on to iteratively find the offending 
> commit. Finding that there is a command in git which does this automatically is great news.

even more so considering the nonlinear history (due to branching) in a regular git repo.
Entry  08 Mar 2013, Konstantin Olchanski, Info, ODB /Experiment/MAX_EVENT_SIZE 
Somebody pointed out an error in the MIDAS documentation regarding the maximum event size 
supported by MIDAS and the MAX_EVENT_SIZE #define in midas.h.

Since MIDAS svn rev 4801 (August 2010), one can create events with size bigger than 
MAX_EVENT_SIZE in midas.h (without having to recompile MIDAS):

To do so, one must increase:
- the value of ODB /Experiment/MAX_EVENT_SIZE
- the size of the SYSTEM shared memory event buffer (and any buffers used by the event builder, 
etc)
- max_event_size & co in your frontend.
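
For example, the first item can be done with odbedit (a sketch; the 10000000-byte value is arbitrary):

odbedit -c "set /Experiment/MAX_EVENT_SIZE 10000000"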

Actual limits on the bank size and event size are written up here:
https://ladd00.triumf.ca/elog/Midas/757

The bottom line is that the maximum event size is limited by the size of the SYSTEM buffer, which is 
limited by the physical memory of your computer. No recompilation of MIDAS necessary.

K.O.
Entry  11 Feb 2013, Wes Gohn, Forum, send_tcp error 
I am getting a series of errors from MIDAS that I do not understand, so I hope
someone can help me figure this out.

I am attempting to run many frontends on one machine. I can run 8 with no
problem, but if I try to add a 9th I get errors relating to send_tcp. 

I have tried adjusting the max event sizes and buffer sizes, but it has not
resolved the problem. I also tried adjusting the data rates and the total data
volume going through each frontend, but there was no change. And as far as I can
tell I am not up against any hardware limits.

The errors are repeated continuously while a run is going. The three errors I
get are:

16:45:22 [FakeData09,ERROR] [midas.c:9958:rpc_client_call,ERROR] send_tcp() failed
16:45:22 [FakeData09,ERROR] [frontend_rpc.c:191:rpc_call,ERROR] No RPC to master
16:45:22 [FakeData09,ERROR] [system.c:4166:send_tcp,ERROR]
send(socket=9,size=16) returned -1, errno: 32 (Broken pipe)

If you have any suggestions of how I can debug this, please let me know. Thanks!
    Reply  11 Feb 2013, Stefan Ritt, Forum, send_tcp error 
> I am getting a series of errors from MIDAS that I do not understand, so I hope
> someone can help me figure this out.
> 
> I am attempting to run many frontends on one machine. I can run 8 with no
> problem, but if I try to add a 9th I get errors relating to send_tcp. 
> 
> I have tried adjusting the max event sizes and buffer sizes, but it has not
> resolved the problem. I also tried adjusting the data rates and the total data
> volume going through each frontend, but there was no change. And as far as I can
> tell I am not up against any hardware limits.
> 
> The errors are repeated continuously while a run is going. The three errors I
> get are:
> 
> 16:45:22 [FakeData09,ERROR] [midas.c:9958:rpc_client_call,ERROR] send_tcp() failed
> 16:45:22 [FakeData09,ERROR] [frontend_rpc.c:191:rpc_call,ERROR] No RPC to master
> 16:45:22 [FakeData09,ERROR] [system.c:4166:send_tcp,ERROR]
> send(socket=9,size=16) returned -1, errno: 32 (Broken pipe)
> 
> If you have any suggestions of how I can debug this, please let me know. Thanks!

Can you tell me

- why you need 9 frontends
- what kind of data your frontends produce
- what your event builder looks like and how you assemble the fragments
- what messages/errors you see when you run odbedit BEFORE the crash

/Stefan
       Reply  11 Feb 2013, Wes Gohn, Forum, send_tcp error 
> > I am getting a series of errors from MIDAS that I do not understand, so I hope
> > someone can help me figure this out.
> > 
> > I am attempting to run many frontends on one machine. I can run 8 with no
> > problem, but if I try to add a 9th I get errors relating to send_tcp. 
> > 
> > I have tried adjusting the max event sizes and buffer sizes, but it has not
> > resolved the problem. I also tried adjusting the data rates and the total data
> > volume going through each frontend, but there was no change. And as far as I can
> > tell I am not up against any hardware limits.
> > 
> > The errors are repeated continuously while a run is going. The three errors I
> > get are:
> > 
> > 16:45:22 [FakeData09,ERROR] [midas.c:9958:rpc_client_call,ERROR] send_tcp() failed
> > 16:45:22 [FakeData09,ERROR] [frontend_rpc.c:191:rpc_call,ERROR] No RPC to master
> > 16:45:22 [FakeData09,ERROR] [system.c:4166:send_tcp,ERROR]
> > send(socket=9,size=16) returned -1, errno: 32 (Broken pipe)
> > 
> > If you have any suggestions of how I can debug this, please let me know. Thanks!
> 
> Can you tell me
> 
> - why you need 9 frontends
> - what kind of data your frontends produce
> - how your event builder looks like and how you assemble the fragments
> - what messages/errors you see when you run odbedit BEFORE the crash
> 
> /Stefan

Our experiment will need 24 frontends, each running on its own machine. For now we
want to run 24 "fake" frontends on one machine for testing purposes. 9 is the limit
where it stops working properly. 

We have a pulser that is giving us periodic data at a constant rate. We have a master
frontend running on a different PC in interrupt mode that assembles the events, and then
N "FakeData" frontends running in polled mode on a single PC. 

We do have an event builder, but we get these errors whether the event builder is
running or not.

At the start of a run, I see the following messages:

[mtransition,INFO] Run #21 started
Sat Feb 9 16:14:57 2013 [FakeData09,ERROR] [system.c:4166:send_tcp,ERROR]
send(socket=9,size=16) returned -1, errno: 104 (Connection reset by peer)
Sat Feb 9 16:14:57 2013 [FakeData09,ERROR] [midas.c:9958:rpc_client_call,ERROR]
send_tcp() failed
Sat Feb 9 16:14:57 2013 [FakeData09,ERROR] [frontend_rpc.c:191:rpc_call,ERROR] No RPC to
master
Sat Feb 9 16:14:57 2013 [master,ERROR] [midas.c:10844:recv_tcp_server,ERROR] Cannot
allocate 268435512 bytes for network buffer
Sat Feb 9 16:14:57 2013 [master,ERROR] [midas.c:12893:rpc_server_receive,ERROR]
recv_tcp_server() returned -1, abort
Sat Feb 9 16:14:57 2013 [master,TALK] Program 'FakeData09' on host 'fe01' aborted

After this it recycles just the first three errors that I mentioned above.
          Reply  12 Feb 2013, Stefan Ritt, Forum, send_tcp error 
Ok, now the picture is clearer. I have however no idea what the real problem is. The number of concurrent programs in midas is 64, as defined in midas.h (MAX_CLIENTS), so that should not be the problem. In our experiment we run 10 front-ends (but 
on 10 different machines) without problems. Other experiments have used 27 front-ends.

The TCP error you see probably comes from the fact that the mserver side crashes or quits, and then the socket gets broken. What you can try to debug this is to run mserver manually. Just remove mserver from inetd, start it with "mserver -d", and 
watch what happens. Do you see any additional error messages? If mserver segfaults, you should turn on core dumps and have a look there. Note that mserver starts a child process on each incoming connection, so running mserver in gdb 
does not really help, since the child processes (which connect back to the front-ends) are not seen by gdb.
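
A minimal sketch of such a debugging session (assuming a bash-like shell):

ulimit -c unlimited            # enable core dumps in this shell
mserver -d                     # run mserver by hand, as suggested above, and watch its output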

Have you tried to run the 9 front-ends on two different PCs (say 5 and 4) to see if the problem is on the client side?


Best regards,
Stefan
             Reply  19 Feb 2013, Wes Gohn, Forum, send_tcp error 

Thank you for the help. As it turns out, the problem was due to the fact that we were compiling MIDAS on our 64-bit backend machine, but one of the frontend machines is 32-bit. The problem was resolved by compiling a 32-bit version of MIDAS in
addition to the 64-bit version.
Entry  24 Jan 2013, Konstantin Olchanski, Info, Compression benchmarks 
In the DEAP experiment, the normal MIDAS mlogger gzip compression is not fast enough for some data 
taking modes, so I am doing tests of other compression programs. Here are the results.

Executive summary:

fastest compression is no compression (cat at 1800 Mbytes/sec - memcpy speed), next best are:
"lzf" at 300 Mbytes/sec and  "lzop" at 250 Mbytes/sec with 50% compression
"gzip -1" at around 70 Mbytes/sec with around 70% compression
"bzip2" at around 12 Mbytes/sec with around 80% compression
"pbzip2", as advertised, scales bzip2 compression linearly with the number of CPUs to 46 Mbytes/sec (4 
real CPUs), then slower to a maximum 60 Mbytes/sec (8 hyper-threaded CPUs).

This confirms that our original choice of the "gzip -1" method for compression using zlib inside mlogger is 
still a good one. bzip2 can gain an additional 10% compression at the cost of 6 times more CPU 
utilization. lzo/lzf can do 50% compression at GigE network speed and at "normal" disk speed.

I think these numbers make a good case for adding lzo/lzf compression to mlogger.

Comments about the data:

- the time measured is the "elapsed" time of the compression program; it excludes the time spent flushing 
the compressed output file to disk.
- the relevant number is the first rate number (input data rate)
- test machine has 32GB of RAM, so all I/O is cached, disk speed does not affect these results
- "cat" gives a measure of overall machine "speed" (but test file is too small to give precise measurement)
- "gzip -1" is the recommended MIDAS mlogger compression setting
- "pbzip2 -p8" uses 8 "hyper-threaded" CPUs, but machine only has 4 "real" CPU cores

<pre>
cat                 : time   0.2s, size    431379371    431379371, comp   0%, rate 1797M/s 1797M/s
cat                 : time   0.6s, size   1013573981   1013573981, comp   0%, rate 1809M/s 1809M/s
cat                 : time   1.1s, size   2027241617   2027241617, comp   0%, rate 1826M/s 1826M/s

gzip -1             : time   6.4s, size    431379371    141008293, comp  67%, rate  67M/s  22M/s
gzip                : time  30.3s, size    431379371    131017324, comp  70%, rate  14M/s   4M/s
gzip -9             : time  94.2s, size    431379371    133071189, comp  69%, rate   4M/s   1M/s

gzip -1             : time  15.2s, size   1013573981    347820209, comp  66%, rate  66M/s  22M/s
gzip -1             : time  29.4s, size   2027241617    638495283, comp  69%, rate  68M/s  21M/s

bzip2 -1            : time  34.4s, size    431379371     91905771, comp  79%, rate  12M/s   2M/s
bzip2               : time  33.9s, size    431379371     86144682, comp  80%, rate  12M/s   2M/s
bzip2 -9            : time  34.2s, size    431379371     86144682, comp  80%, rate  12M/s   2M/s

pbzip2 -p1          : time  34.9s, size    431379371     86152857, comp  80%, rate  12M/s   2M/s (1 CPU)
pbzip2 -p1 -1       : time  34.6s, size    431379371     91935441, comp  79%, rate  12M/s   2M/s
pbzip2 -p1 -9       : time  34.8s, size    431379371     86152857, comp  80%, rate  12M/s   2M/s

pbzip2 -p2          : time  17.6s, size    431379371     86152857, comp  80%, rate  24M/s   4M/s (2 CPU)
pbzip2 -p3          : time  11.9s, size    431379371     86152857, comp  80%, rate  36M/s   7M/s (3 CPU)
pbzip2 -p4          : time   9.3s, size    431379371     86152857, comp  80%, rate  46M/s   9M/s (4 CPU)
pbzip2 -p4          : time  45.3s, size   2027241617    384406870, comp  81%, rate  44M/s   8M/s
pbzip2 -p8          : time  33.3s, size   2027241617    384406870, comp  81%, rate  60M/s  11M/s

lzop -1             : time   1.6s, size    431379371    213416336, comp  51%, rate 261M/s 129M/s
lzop                : time   1.7s, size    431379371    213328371, comp  51%, rate 249M/s 123M/s
lzop                : time   4.3s, size   1013573981    515317099, comp  49%, rate 234M/s 119M/s
lzop                : time   7.3s, size   2027241617    978374154, comp  52%, rate 277M/s 133M/s
lzop -9             : time 176.6s, size    431379371    157985635, comp  63%, rate   2M/s   0M/s

lzf                 : time   1.4s, size    431379371    210789363, comp  51%, rate 299M/s 146M/s
lzf                 : time   3.6s, size   1013573981    523007102, comp  48%, rate 282M/s 145M/s
lzf                 : time   6.7s, size   2027241617    972953255, comp  52%, rate 303M/s 145M/s

lzma -0             : time  27s, size    431379371    112406964, comp  74%, rate  15M/s   4M/s
lzma -1             : time  35s, size    431379371    111235594, comp  74%, rate  12M/s   3M/s
lzma: > 5 min, killed

xz -0               : time  28s, size    431379371    112424452, comp  74%, rate  15M/s   4M/s
xz -1               : time  35s, size    431379371    111252916, comp  74%, rate  12M/s   3M/s
xz: > 5 min, killed
</pre>

Columns are:
compression program
time: elapsed time of the compression program (excludes the time to flush output file to disk)
size: size of input file, size of output file
comp: compression ratio (0%=no compression, 100%=file compresses into nothing)
rate: input data rate (size of input file divided by elapsed time), output data rate (size of output file 
divided by elapsed time)

Machine used for testing (from /proc/cpuinfo):
Intel(R) Core(TM) i7-3820 CPU @ 3.60GHz
quad core cpu with hyper-threading (8 CPU total)
32 GB quad-channel DDR3-1600.

Script used for testing:

#!/usr/bin/perl -w
# benchmark one compression command: run it on test.mid, time it, report rates

my $x = join(" ", @ARGV);   # the compression command to test, e.g. "gzip -1"

my $in  = "test.mid";
my $out = "test.mid.out";
my $tout = "test.time";

# outer /usr/bin/time writes elapsed seconds to $tout; inner one prints full stats
my $cmd = "/usr/bin/time -o $tout -f \"%e\" /usr/bin/time $x < $in > $out";

print $cmd,"\n";

my $t0 = time();
system $cmd;
my $t1 = time();

my $c = `cat $tout`;
print "Elapsed time: $c";

my $t = $c;

#system "/bin/ls -l $in $out";

my $sin  = -s $in;
my $sout = -s $out;

my $xt = $t1-$t0;
$xt = 1 if $xt<1;

print "Total time: $xt\n";

print sprintf("%-20s: time %5.1fs, size %12d %12d, comp %3.0f%%, rate %3dM/s %3dM/s", $x, $t, $sin, 
$sout, 100*($sin-$sout)/$sin, ($sin/$t)/1e6, ($sout/$t)/1e6), "\n";

exit 0;
# end

Typical output:

[deap@deap00 pet]$ ./r.perl lzf    
/usr/bin/time -o test.time -f "%e" /usr/bin/time lzf < test.mid > test.mid.out
1.27user 0.15system 0:01.44elapsed 99%CPU (0avgtext+0avgdata 2800maxresident)k
0inputs+411704outputs (0major+268minor)pagefaults 0swaps
Elapsed time: 1.44
Total time: 3
lzf                 : time   1.4s, size    431379371    210789363, comp  51%, rate 299M/s 146M/s

K.O.
    Reply  06 Feb 2013, Stefan Ritt, Info, Compression benchmarks 
I redid the tests from Konstantin for our MEG experiment at PSI. The event structure is different, so it
is interesting how the two different experiments compare. We have an event size of 2.4 MB and a trigger
rate of ~10 Hz, so we produce a raw data rate of 24 MB/sec. A typical run contains 2000 events, so has a 
size of 5 GB. Here are the results:


cat                 : time   7.8s, size   4960156030   4960156030, comp   0%, rate 639M/s 639M/s

gzip -1             : time 147.2s, size   4960156030   2468073901, comp  50%, rate  33M/s  16M/s

pbzip2 -p1          : time 679.6s, size   4960156030   1738127829, comp  65%, rate   7M/s   2M/s (1 CPU)
pbzip2 -p8          : time  96.1s, size   4960156030   1738127829, comp  65%, rate  51M/s  18M/s (8 CPU)


As one can see, our compression ratio is poorer (due to the quasi-random noise in our waveforms), but the
difference between gzip -1 and pbzip2 is larger (15% instead of 10% for DEAP). The single-CPU version of
pbzip2 cannot sustain our DAQ rate of 24 MB/sec, but the parallel version can. Actually we have a somewhat old
dual-core dual-CPU 2.5 GHz Xeon box, and make 8 hyper-threading CPUs out of the total 4 cores.
Interestingly, the compression rate scales by a factor of 7.3 for 8 virtual cores, so hyper-threading does its job.
So we take all our data with pbzip2 compression. The additional 15% as compared with gzip does 
not sound like much, but we produce 250 TB/year raw. So gzip gives us 132 TB/year and pbzip2 gives 
us 98 TB/year, and we save quite some disks.

Note that you can already run bzip2 (like all the other methods) with the current logger, if you specify
an external compression program in the ODB using the pipe functionality:

<pre>
[local:MEG:S]/>cd Logger/Channels/0/Settings/
[local:MEG:S]Settings>ls
Active                          y
Type                            Disk
Filename                        |pbzip2>/megdata/run%06d.mid.bz2
Format                          MIDAS
Compression                     0
ODB dump                        y
Log messages                    0
Buffer                          SYSTEM
Event ID                        -1
Trigger mask                    -1
Event limit                     0
Byte limit                      0
Subrun Byte limit               0
Tape capacity                   0
Subdir format                   
Current filename                /megdata/run197090.mid.bz2
</pre>
Entry  28 Jan 2013, Robert Pattie, Forum, analyzer cannot connect to the statistics database 
I've managed to put the analyzer into a state where it cannot connect to the 
statistics database. The error message suggests another analyzer is connected.  
I've recompiled MIDAS and the user code, restarted the computer, etc., and the 
analyzer cannot connect. If I run "odbedit -c clean", I can start the analyzer, 
but get the same error when exiting or starting a run. I've commented out all the
user code in analyzer.c and its associated analyzer modules, and the read-event
code in the frontend, and nothing resolves this issue. Any suggestions?

The output from attempting to run the analyzer is:

Connect to experiment nnbarxwnr...[odb.c:1013:db_open_database,ERROR] Removed ODB
client 'Analyzer', index 0 because process pid 31982 does not exists
Deleted entry '/System/Clients/31982' for client 'Analyzer' because it is not
connected to ODB
OK
Root server listening on port 9090...
Loading previous online histos from ./data/last.root
ss_mutex_wait_for: pthread_mutex_lock() returned errno 22 (Invalid argument),
aborting...


When attempting to clean up the Analyzer tree in the ODB I receive the message
"deletion of key not allowed."

It appears that running the analyzer sets the permissions of the Statistics tree of
my analyzer module to RWDE.

Adding the following lines to my start up script eliminate the above problem:
odbedit -c clean
odbedit -c "chmod 7 Analyzer/"
odbedit -c "rm /Analyzer/fADCs/Statistics"

Now when starting a run, the analyzer crashes with this error:
analyzer: src/midas.c:11443: rpc_execute: Assertion `return_buffer' failed.
Aborted (core dumped)

and the messages in the ODB are:

[system.c:4295:recv_tcp,ERROR] header: recv returned 0, n_received = 0, unexpected
connection closure
[midas.c:10042:rpc_client_call,ERROR] recv_tcp() failed, routine = "rc_transition",
host = "LANL-FADC-DAQ"
[midas.c:4130:cm_transition,ERROR] Could not start a run: cm_transition() status 503,
message 'Unknown error 503 from client 'Analyzer' on host LANL-FADC-DAQ'
Deleted entry '/System/Clients/1001' for client 'Analyzer' because process pid 1001
does not exists
[midas.c:8893:rpc_client_check,ERROR] Connection broken to "Analyzer" on host
LANL-FADC-DAQ
Run #180 start aborted
Error: Unknown error 503 from client 'Analyzer' on host LANL-FADC-DAQ

20:05:02 [Logger,INFO] Deleting previous file "./data/run00180.mid"

20:05:02 [ODBEdit,ERROR] [system.c:4295:recv_tcp,ERROR] header: recv returned 0,
n_received = 0, unexpected connection closure

20:05:02 [ODBEdit,ERROR] [midas.c:10042:rpc_client_call,ERROR] recv_tcp() failed,
routine = "rc_transition", host = "LANL-FADC-DAQ"

20:05:02 [ODBEdit,ERROR] [midas.c:4130:cm_transition,ERROR] Could not start a run:
cm_transition() status 503, message 'Unknown error 503 from client 'Analyzer' on host
LANL-FADC-DAQ'

20:05:02 [ODBEdit,INFO] Deleted entry '/System/Clients/1001' for client 'Analyzer'
because process pid 1001 does not exists

20:05:02 [ODBEdit,ERROR] [midas.c:8893:rpc_client_check,ERROR] Connection broken to
"Analyzer" on host LANL-FADC-DAQ

20:05:02 [ODBEdit,INFO] Run #180 start aborted
20:05:03 [mdump,INFO] Client 'Analyzer' on buffer 'SYSTEM' removed by cm_watchdog
because process pid 1001 does not exist
20:05:11 [mhttpd,INFO] Client 'Analyzer' (PID 1001) on database 'ODB' removed by
cm_watchdog (idle 10.1s,TO 10s)


Thanks,
Robert Pattie
    Reply  01 Feb 2013, Randolf Pohl, Forum, analyzer cannot connect to the statistics database 
The simplest thing is probably to delete all files .[A-Z]*.SHM in the odb directory (the
one you specified in /etc/exptab).
This wipes the ODB, shared memory and all the other obscure stuff, giving you a clean,
fresh start.

Of course it wipes all the valuable stuff, too. That's why it's handy to sometimes open
odbedit and "save odb_<yyyymmdd>.odb". You can reload the thing after such a fatal
"rm .[A-Z]*.SHM".
       Reply  01 Feb 2013, Stefan Ritt, Forum, analyzer cannot connect to the statistics database 
> The simplest thing is probably to delete all files .[A-Z]*.SHM in the odb directory (the
> one you specified in /etc/exptab).
> This wipes the ODB, shared memory and all the other obscure stuff, giving you a clean,
> fresh start.
> 
> Of course it wipes all the valuable stuff, too. That's why it's handy to sometimes open
> odbedit and "save odb_<yyyymmdd>.odb". You can reload the thing after such a fatal 
> "rm .[A-Z]*.SHM" 

Thanks Randolf for helping out, I was not in the office this week.

In addition to deleting the *.SHM files, it's sometimes necessary to delete the shared memory. You do this with the 
command line tools

ipcs -m
ipcrm -m <shmid>


/Stefan
Entry  09 Jan 2013, wenliang li, Bug Report, Outputting ADC and TDC data into ROOT tree with the MIDAS SVN Revision:5347. 
Dear Midas Experts

I am Wenliang Li, a graduate student from the University of Regina. Our group has
encountered some difficulty outputting ADC and TDC data into a ROOT tree with
MIDAS SVN Revision: 5347.

Our Linux Distribution: Scientific Linux release 6.0 (Carbon)
ROOT Version:           ROOT 5.28
gcc version:            g++ (GCC) 4.4.4 20100726 (Red Hat 4.4.4-13)
kernel version:         2.6.32-279.19.1.el6.i686


I am using the given example $MIDASSYS/examples/experiment to generate some
data, and the issue is that the analyzer refuses to turn on the ADC0 and TDC0
bank switches. 

If the ADC and TDC banks are switched off, the analyzer will successfully output
the histograms but not the ROOT tree, and the Trigger and Scaler ROOT trees are
completely empty.

With the same example experiment: $MIDASSYS/examples/experiment, this issue does
not occur on MIDAS SVN Revision: 4309.


The following error messages appear in the analyzer window if the ADC and TDC
bank switches are set to 1:

*************************
Connect to experiment ...OK
Root server listening on port 9090...
Loading previous online histos from /home/billlee/experiment/test_exp/last.root
Running analyzer online. Stop with "!"
Error in <TTree::Branch>: The pointer specified for ADC0 is not of a class known
to ROOT and (null) is not a known class
ROOT TTree rebooked
Error in <TTree::Branch>: The pointer specified for ADC0 is not of a class known
to ROOT and (null) is not a known class
Error in <TTree::Branch>: The pointer specified for TDC0 is not of a class known
to ROOT and (null) is not a known class
ROOT TTree rebooked
***********************
***************************



If I analyze the data with the TDC and ADC bank switches set to 1:
$ analyzer -i runXXXXX.mid -o runXXXXX.root

I get the following error messages:


************************************************************************
************************************************************************


Root server listening on port 9090...
Running analyzer offline. Stop with "!"
Error in <TTree::Branch>: The pointer specified for ADC0 is not of a class known
to ROOT and (null) is not a known class
Error in <TTree::Branch>: The pointer specified for TDC0 is not of a class known
to ROOT and (null) is not a known class
Set run number 1 in ODB
Load ODB from run 1...OK

 *** Break *** segmentation violation



===========================================================
There was a crash.
This is the entire stack trace of all threads:
===========================================================

Thread 2 (Thread 0x7f46c6853700 (LWP 10808)):
#0  0x0000003b63a0e84d in accept () from /lib64/libpthread.so.0
#1  0x0000003b64e370f4 in TUnixSystem::AcceptConnection(int) () from
/usr/lib64/root/libCore.so.5.28
#2  0x0000003b6647849c in TServerSocket::Accept(unsigned char) () from
/usr/lib64/root/libNet.so.5.28
#3  0x000000000040c50e in root_socket_server (arg=<value optimized out>) at
src/mana.c:5275
#4  0x00007f46c8dc513a in TThread::Function(void*) () from
/usr/lib64/root/libThread.so.5.28
#5  0x0000003b63a07851 in start_thread () from /lib64/libpthread.so.0
#6  0x0000003b62ee811d in clone () from /lib64/libc.so.6

Thread 1 (Thread 0x7f46c8b94720 (LWP 10800)):
#0  0x0000003b62eabfdd in waitpid () from /lib64/libc.so.6
#1  0x0000003b62e3e899 in do_system () from /lib64/libc.so.6
#2  0x0000003b62e3ebd0 in system () from /lib64/libc.so.6
#3  0x0000003b64e3da31 in TUnixSystem::StackTrace() () from
/usr/lib64/root/libCore.so.5.28
#4  0x0000003b64e3d3f3 in TUnixSystem::DispatchSignals(ESignals) () from
/usr/lib64/root/libCore.so.5.28
#5  <signal handler called>
#6  0x000000000041245f in TIter (file=<value optimized out>,
pevent=0x7f46c5281010, par=0x665180) at /usr/include/root/TCollection.h:148
#7  write_event_ttree (file=<value optimized out>, pevent=0x7f46c5281010,
par=0x665180) at src/mana.c:2872
#8  0x0000000000412a4c in process_event (par=0x665180, pevent=0x7f46c5281010) at
src/mana.c:3195
#9  0x0000000000412e42 in analyze_run (run_number=1,
input_file_name=0x7fff4d738340 "run00001.mid", output_file_name=<value optimized
out>) at src/mana.c:4178
#10 0x0000000000413372 in loop_runs_offline () at src/mana.c:4366
#11 0x0000000000413ba5 in main (argc=<value optimized out>, argv=<value
optimized out>) at src/mana.c:5579
===========================================================


The lines below might hint at the cause of the crash.
If they do not help you then please submit a bug report at
http://root.cern.ch/bugs. Please post the ENTIRE stack trace
from above as an attachment in addition to anything else
that might help us fixing this issue.
===========================================================
#6  0x000000000041245f in TIter (file=<value optimized out>,
pevent=0x7f46c5281010, par=0x665180) at /usr/include/root/TCollection.h:148
#7  write_event_ttree (file=<value optimized out>, pevent=0x7f46c5281010,
par=0x665180) at src/mana.c:2872
#8  0x0000000000412a4c in process_event (par=0x665180, pevent=0x7f46c5281010) at
src/mana.c:3195
#9  0x0000000000412e42 in analyze_run (run_number=1,
input_file_name=0x7fff4d738340 "run00001.mid", output_file_name=<value optimized
out>) at src/mana.c:4178
#10 0x0000000000413372 in loop_runs_offline () at src/mana.c:4366
#11 0x0000000000413ba5 in main (argc=<value optimized out>, argv=<value
optimized out>) at src/mana.c:5579
===========================================================


[midas.c:1973:,ERROR] cm_disconnect_experiment not called at end of program

**********************************************************************************************
**********************************************************************************************







I wonder if there were any syntax changes between MIDAS revisions 4309 and
5347, and whether there is a simple working example which can output a ROOT tree
with the newest version of MIDAS.
 
In the end, I would like to thank TRIUMF and PSI for their continuous effort on
developing MIDAS; it is a pleasure to work with.

Many thanks
Bill 
    Reply  09 Jan 2013, Stefan Ritt, Bug Report, Outputting ADC and TDC data into ROOT tree with the MIDAS SVN Revision:5347. 
Dear Bill,

the Midas analyzer "mana.c" is currently not maintained. At PSI we use the ROME framework (which might be too complicated for a 
small experiment) and at TRIUMF the ROOTANA framework is used:

http://ladd00.triumf.ca/~olchansk/rootana/

You might be better off switching to that one.

Best regards,
Stefan
Entry  04 Jan 2013, Nabin Poudyal, Suggestion, how to start using midas 
Please tell me how to choose the value of a "key" like DCM, pulser period,
presamples, or upper thresholds to run an experiment. Where can I find the related
information?
Entry  14 Dec 2012, Robert Casperson, Bug Report, MIDAS does not function correctly on F17 
When building MIDAS on Fedora 17 64-bit, the default zlib 1.2.5 shared library
is linked to.  When recording data, the "/Logger/Channels/*/Statistics/Bytes
written" value does not get set correctly beyond the first few seconds of the
run.  Occasionally, it appears to not get set at all, and mlogger aborts the run.

Installing zlib 1.2.3 in static form to /usr/local/lib (the default location),
and changing the NEED_ZLIB section of the MIDAS Makefile to the following seems
to function as a workaround:

ifdef NEED_ZLIB
CFLAGS   += -DHAVE_ZLIB
LIBS     += /usr/local/lib/libz.a
endif

Several Fedora 17 libraries expect zlib 1.2.5 specifically, so it seems safest
to not replace the default zlib shared library.
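
One way to verify that the workaround took effect is to check that mlogger no longer pulls in the shared zlib (a sketch; the binary path may differ):

ldd $MIDASSYS/linux/bin/mlogger | grep libz   # should print nothing once libz.a is linked statically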

Some extra details are that the VME CPU is an XVB602, and the most recent GE-IP
drivers are being used for VME communication.  Fedora 17 was chosen to avoid a
bug with the VGA output in Fedora 13-16.
    Reply  20 Dec 2012, Stefan Ritt, Bug Report, MIDAS does not function correctly on F17 
It is not so easy to get out of zlib how many bytes have actually been written. I used an undocumented function, 
which breaks down on 64-bit systems.

I now rewrote the code in mlogger.cxx to use lseek() to "measure" actually the output file and set the values 
correctly. I tried on a few systems but am not 100% sure if it works everywhere. Can you please double check?
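
The idea is in essence the following (a minimal sketch, not the actual mlogger.cxx code):

#include <sys/types.h>
#include <unistd.h>

/* Measure the current size of the output file via lseek(), independent of
   what zlib reports. Remember and restore the file position around it. */
off_t output_file_size(int fd)
{
   off_t pos  = lseek(fd, 0, SEEK_CUR);   /* remember current position */
   off_t size = lseek(fd, 0, SEEK_END);   /* offset of the end == file size */
   lseek(fd, pos, SEEK_SET);              /* restore position for further writes */
   return size;
}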

The fix is in SVN revision 5347.

/Stefan
Entry  18 Dec 2012, xelap, Forum, midas installation on SL6.3 
I tried to run make in the zlib folder and got this:
cc -O -o example example.o -L. -lz
/usr/bin/ld: errno: TLS definition in /lib/libc.so.6 section .tbss mismatches
non-TLS reference in ./libz.a(gzio.o)
/lib/libc.so.6: could not read symbols: Bad value
collect2: ld returned 1 exit status
make: *** [example] Error 1

Am I missing a package that needs to be installed?
Thanks in advance,
Xelap
Entry  14 Dec 2012, Vinzenz Bildstein, Suggestion, Midas + Elog with SSL 
I've been trying to set up midas to create an automatic elog entry at the end of
each run and I've run into a problem. I've set up an elog on our server which
uses SSL, and it seems that the melog provided by midas to create logbook entries
doesn't support SSL.

My solution was to copy crypt.c from the elog package to the computer running
midas and change melog.c and the makefile to use SSL if a flag -s is given. Does
this seem like a sensible solution, or did I overlook the obvious and/or right
way to do this?
    Reply  14 Dec 2012, Stefan Ritt, Suggestion, Midas + Elog with SSL 
> I've been trying to set up midas to create an automatic elog entry at the end of
> each run and I've run into a problem. I've set up an elog on our server which
> uses SSL, and it seems that the melog provided by midas to create logbook entries
> doesn't support SSL.
> 
> My solution was to copy crypt.c from the elog package to the computer running
> midas and change melog.c and the makefile to use SSL if a flag -s is given. Does
> this seem like a sensible solution, or did I overlook the obvious and/or right
> way to do this?

Indeed melog.c is an old version of the elog.c utility in the elog package, which has not been maintained for a 
long time. Can't you just use the recent elog.c utility from the elog package?
       Reply  17 Dec 2012, Vinzenz Bildstein, Suggestion, Midas + Elog with SSL 
> > I've been trying to set up midas to create an automatic elog entry at the end of
> > each run and I've run into a problem. I've set up an elog on our server which
> > uses SSL, and it seems that the melog provided by midas to create logbook entries
> > doesn't support SSL.
> > 
> > My solution was to copy crypt.c from the elog package to the computer running
> > midas and change melog.c and the makefile to use SSL if a flag -s is given. Does
> > this seem like a sensible solution, or did I overlook the obvious and/or right
> > way to do this?
> 
> Indeed melog.c is an old version of the elog.c utility in the elog package, which has not been maintained for a 
> long time. Can't you just use the recent elog.c utility from the elog package?

Well, that's essentially what I did; I just didn't want to install the whole elog package on the midas server. Whether
the utility is called elog or melog doesn't really matter. I just wanted to make sure that this is the right way to do
it.

Thanks!
Entry  12 Dec 2012, Shaun Mead, Bug Report, ss_thread_kill() kills entire program 
Hi, I'm having some trouble getting ss_thread_kill() to work properly. It seems
to kill the entire program instead of just the thread. Here is a test program to
show the error:

_________________________________
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>   /* for sleep() */
#include "midas.h"
#include "msystem.h"

INT f(void *param)
{
  for (int x = 0; x < 100; x++)
    sleep(1);
  return 0;
}

int main()
{
  printf("creating thread\n");
  midas_thread_t thr = ss_thread_create(f, NULL);
  sleep(2);
  printf("killing thread\n");
  ss_thread_kill(thr);
  printf("success\n");
  return 0;
}
_________________________________

Makefile:
_________________________________
FLAGS=-g -Wall -DLINUX -DOS_LINUX -I/home/deap/packages/midas/include 
LIBS=-L/home/deap/packages/midas/linux-m64/lib -lmidas -lpthread -lrt -lutil

main.exe: main.cpp 
	g++ $(FLAGS) -o $@ $^ $(LIBS)

_________________________________

Output when run:

_________________________________

[deap@deap04 multithread]$ ./main.exe 
creating thread
killing thread
Killed
[deap@deap04 multithread]$ 
_________________________________

The last "Killed" indicated the whole program got killed, when it should 
actually just kill the thread and then 
print "success".

I noticed the function in system.c uses pthread_kill(). Some google searches
suggest that it may be better to use pthread_cancel() (see
http://stackoverflow.com/questions/3438536/when-to-use-pthread-cancel-and-not-pthread-kill ).


Shaun
    Reply  13 Dec 2012, Stefan Ritt, Bug Report, ss_thread_kill() kills entire program 
The Linux thread functionality was introduced by Konstantin, so he might have a better idea about that.

What I usually do is a graceful thread shutdown just by a flag. Like

volatile int stop_thread = 0;   /* volatile, so the loop is guaranteed to re-read the flag */

INT f(void *param)
{
  for (int x = 0; x < 100; x++) {
    sleep(1);
    if (stop_thread) {
      // clean up things here...
      return 0;
    }
  }
  return 0;
}

int main()
{
  printf("creating thread\n");
  midas_thread_t thr = ss_thread_create(f, NULL);
  sleep(2);
  printf("stopping thread\n");
  stop_thread = 1;
  sleep(2);
  printf("success\n");
  return 0;
}


This way I have a chance to clean up things in the thread, which otherwise I would not be able to.
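
The same pattern can also be written with C++11 threads and an atomic flag, which guarantees the flag is 
re-read on every iteration (a minimal sketch, not from the midas sources):

#include <atomic>
#include <cstdio>
#include <thread>
#include <unistd.h>

std::atomic<bool> stop_thread(false);

void f()
{
   while (!stop_thread.load()) {   // checked atomically on every iteration
      sleep(1);
      // ... do periodic work here ...
   }
   // clean up things here; returning ends the thread
}

int main()
{
   std::thread thr(f);
   sleep(2);
   stop_thread = true;   // ask the thread to stop
   thr.join();           // wait until it has finished cleanly
   printf("success\n");
   return 0;
}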
    Reply  13 Dec 2012, Konstantin Olchanski, Bug Report, ss_thread_kill() kills entire program 
> Hi, I'm having some trouble getting ss_thread_kill() to work properly. It seems 
> to kill the entire program instead of just the thread.

You cannot kill a thread. It's not a well defined operation. Most OSes do have the 
technical possibility to kill threads, but if you use them, you will not like the 
results. For a taste of small trouble: if a thread is holding a lock and you kill 
it, whose job is it to release the lock?

The best you can do is to ask the thread to gracefully shut itself down (i.e. by 
using global flag variables).

P.S. I did not implement the ss_thread stuff, I do not know what ss_thread_kill() 
does, but I recommend that you do not use it.

P.P.S. Programming using threads is complicated, I recommend that you read at least 
some literature on the topic before using threads. At the least you must understand 
the common pitfalls and mistakes. At the least, you must know about deadlocks, 
livelocks, race conditions and semaphore priority inversions.

K.O.
       Reply  13 Dec 2012, Shaun Mead, Bug Report, ss_thread_kill() kills entire program 
> > Hi, I'm having some trouble getting ss_thread_kill() to work properly. It seems 
> > to kill the entire program instead of just the thread.
> 
> You cannot kill a thread. It's not a well defined operation. Most OSes do have the 
> technical possibility to kill threads, but if you use them, you will not like the 
> results. For a taste of small trouble: if a thread is holding a lock and you kill 
> it, whose job is it to release the lock?
> 
> The best you can do is to ask the thread to gracefully shut itself down (i.e. by 
> using global flag variables).
> 
> P.S. I did not implement the ss_thread stuff, I do not know what ss_thread_kill() 
> does, but I recommend that you do not use it.
> 
> P.P.S. Programming using threads is complicated, I recommend that you read at least 
> some literature on the topic before using threads. At the least you must understand 
> the common pitfalls and mistakes. At the least, you must know about deadlocks, 
> livelocks, race conditions and semaphore priority inversions.
> 
> K.O.

Yes, but unfortunately I was attempting to use a library function that I can't
alter. It sometimes gets stuck and I wanted a way to kill it. Anyway, I ended up
not doing this in C++ at all; I was able to do what I needed in Python.

Shaun
Entry  30 Aug 2012, Raquel Castillo, Forum, MIDAS in Windows 
Hi,

I need to install MIDAS on a Windows system (Microsoft Windows Server 2003). 
The computer has the Microsoft Visual C++ 2010 Express version.
I have downloaded the MIDAS packages using the tarball mechanism. I have created 
the environment variables without problems and I have created the file 
%SystemRoot%\system32\exptab 
But when I try to build MIDAS with 
nmake -f makefile.nt
I get the following problem:
Microsoft (R) Program Maintenance Utility Version 10.00.30319.01
Copyright (C) Microsoft Corporation.  All rights reserved.

NMAKE : fatal error U1073: don't know how to make 'src/mhttpd.c'
Stop.

I don't understand this problem. Can anybody help me, please?

Thanks in advance!!!
    Reply  31 Aug 2012, Pierre-Andre Amaudruz, Forum, MIDAS in Windows 
Hi Raquel,

The makefile.nt has been corrected.
Obviously Midas on Windows has not been updated for quite a while.
mhttpd.c has been converted to C++ (mhttpd.cxx), as have a couple of other 
applications.

Please give it a try,  PAA

> Hi,
> 
> I need to install MIDAS on a Windows system (Microsoft Windows Server 2003). 
> The computer has the Microsoft Visual C++ 2010 Express version.
> I have downloaded the MIDAS packages using the tarball mechanism. I have created 
> the environment variables without problems and I have created the file 
> %SystemRoot%\system32\exptab 
> But when I try to build MIDAS with 
> nmake -f makefile.nt
> I get the following problem:
> Microsoft (R) Program Maintenance Utility Version 10.00.30319.01
> Copyright (C) Microsoft Corporation.  All rights reserved.
> 
> NMAKE : fatal error U1073: don't know how to make 'src/mhttpd.c'
> Stop.
> 
> I don't understand this problem. Can anybody help me, please?
> 
> Thanks in advance!!!
       Reply  23 Oct 2012, Raquel Castillo, Forum, MIDAS in Windows MIDAS_odbedit.bmp
Hi Pierre-André, 

sorry for the long delay; other things kept me away from this computer.
Thanks a lot for correcting makefile.nt and the other applications!

Now I have tried again, downloading the MIDAS packages via the tarball mechanism as
before, and it seems that the previous problems are solved. Only one small problem
remains, related to odbedit.

I attach here the figure with the error that is reported by the computer. Is it
possible that another file needs to be updated? Can you help me with that?

Thanks a lot in advance!!!!



> Hi Raquel,
> 
> The makefile.nt has been corrected.
> Obviously Midas on Windows has not been updated for quite a while.
> mhttpd.c has been converted to C++ (mhttpd.cxx), as have a couple of other 
> applications.
> 
> Please give it a try,  PAA
> 
> > Hi,
> > 
> > I need to install MIDAS on a Windows system (Microsoft Windows Server 2003). 
> > The computer has the Microsoft Visual C++ 2010 Express version.
> > I have downloaded the MIDAS packages using the tarball mechanism. I have created 
> > the environment variables without problems and I have created the file 
> > %SystemRoot%\system32\exptab 
> > But when I try to build MIDAS with 
> > nmake -f makefile.nt
> > I get the following problem:
> > Microsoft (R) Program Maintenance Utility Version 10.00.30319.01
> > Copyright (C) Microsoft Corporation.  All rights reserved.
> > 
> > NMAKE : fatal error U1073: don't know how to make 'src/mhttpd.c'
> > Stop.
> > 
> > I don't understand this problem. Can anybody help me, please?
> > 
> > Thanks in advance!!!
Entry  27 Sep 2012, Randolf Pohl, Bug Fix, [PATCH] mana.c compile fix, gz files diff.mana
Hi,

I had to apply the attached patch to convince SuSE Linux 12.2 to compile mana.c;
the gcc version is "(SUSE Linux) 4.6.2".

The problem is that gz{write,close,etc.} expect a first argument of type gzFile (see
zlib.h), whereas out_file is a FILE*. In fact, out_file is cast to FILE* even
when we work on a gzFile (HAVE_ZLIB).
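
For reference, correct gzFile usage looks like this (a minimal sketch of the zlib API, not the mana.c code
itself):

#include <zlib.h>

int main()
{
   /* the gz* functions take a gzFile handle, not a FILE* */
   gzFile out = gzopen("test.gz", "wb");
   if (out == NULL)
      return 1;
   const char buf[] = "event data";
   gzwrite(out, buf, sizeof(buf));   /* buffer and uncompressed byte count */
   gzclose(out);
   return 0;
}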

Could you please confirm that the patch is correct, and possibly apply it to trunk?

I haven't checked if mana works as advertised now.

Cheers,


Randolf
    Reply  09 Oct 2012, Stefan Ritt, Bug Fix, [PATCH] mana.c compile fix, gz files 
> Hi,
> 
> I had to apply the attached patch to convince SuSE Linux 12.2 to compile mana.c;
> the gcc version is "(SUSE Linux) 4.6.2".
> 
> The problem is that gz{write,close,etc.} expect a first argument of type gzFile (see
> zlib.h), whereas out_file is a FILE*. In fact, out_file is cast to FILE* even
> when we work on a gzFile (HAVE_ZLIB).
> 
> Could you please confirm that the patch is correct, and possibly apply it to trunk?
> 
> I haven't checked if mana works as advertised now.
> 
> Cheers,
> 
> 
> Randolf

I applied your patch to the trunk.

Best,
Stefan
Entry  16 Aug 2012, Cheng-Ju Lin, Bug Report, launching roody kills the analyzer 
Hi All,

I've installed midas (Rev:5294) on SLC6.3 (64bit), along with recent trunk versions of rootana and roody. 
All the packages compiled OK. The example code in $MIDASSYS/examples/experiment also runs OK 
provided that I don't launch roody. If I try to launch roody, then it immediately crashes the analyzer with 
the following trace:

#6 root_server_thread (arg=0x7f54fc001150) at src/mana.c:5154
#7 0x0000003219a1e13a in TThread::Function(void*) () from /usr/lib64/root/libThread.so.5.28
#8 0x0000003dd1207851 in start_thread () from /lib64/libpthread.so.0
#9 0x0000003dd0ee76dd in clone () from /lib64/libc.so.6

The line src/mana.c:5154 points to the following:

TObject *obj;
            if (strncmp(request + 10, "Any", 3) == 0)
               obj = folder->FindObjectAny(request + 14);
            else
               obj = folder->FindObject(request + 11);    // LINE 5154


Any suggestions on what may be going on here?  Thanks.


Cheng-Ju
    Reply  16 Aug 2012, Cheng-Ju Lin, Bug Fix, launching roody kills the analyzer 
OK, I've found the solution in the roody forum. The solution for 64-bit machines is to replace
   uint32_t p = 0;
with
   uintptr_t p = 0;

in the roody header file roody/include/DataSourceTNetFolder.h
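
Presumably the underlying issue is pointer truncation: on a 64-bit build a pointer no longer fits into 32
bits, while uintptr_t is always wide enough. An illustrative sketch (my example, not the roody code):

#include <stdint.h>
#include <stdio.h>

int main()
{
   int x = 0;
   void *ptr = &x;
   uint32_t  p32 = (uint32_t)(uintptr_t)ptr;  /* may lose the upper 32 bits */
   uintptr_t p   = (uintptr_t)ptr;            /* wide enough for any pointer */
   printf("truncated: 0x%x  full: 0x%lx\n", (unsigned)p32, (unsigned long)p);
   return 0;
}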

Cheng-Ju



> Hi All,
> 
> I've installed midas (Rev:5294) on SLC6.3 (64bit), along with recent trunk versions of rootana and roody. 
> All the packages compiled OK. The example code in $MIDASSYS/examples/experiment also runs OK 
> provided that I don't launch roody. If I try to launch roody, then it immediately crashes the analyzer with 
> the following trace:
> 
> #6 root_server_thread (arg=0x7f54fc001150) at src/mana.c:5154
> #7 0x0000003219a1e13a in TThread::Function(void*) () from /usr/lib64/root/libThread.so.5.28
> #8 0x0000003dd1207851 in start_thread () from /lib64/libpthread.so.0
> #9 0x0000003dd0ee76dd in clone () from /lib64/libc.so.6
> 
> The line src/mana.c:5154 points to the following:
> 
> TObject *obj;
>             if (strncmp(request + 10, "Any", 3) == 0)
>                obj = folder->FindObjectAny(request + 14);
>             else
>                obj = folder->FindObject(request + 11);    // LINE 5154
> 
> 
> Any suggestions on what may be going on here?  Thanks.
> 
> 
> Cheng-Ju
    Reply  17 Aug 2012, Konstantin Olchanski, Bug Report, launching roody kills the analyzer 
> I've installed midas (Rev:5294) on SLC6.3 (64bit), along with recent trunk versions of rootana and roody. 
>
> > #6 root_server_thread (arg=0x7f54fc001150) at src/mana.c:5154

You are connecting to mana, the old midas analyzer. The code for connecting to it is still present in roody,
but I cannot support the matching server code in mana.c - it is 2 revolutions behind the current state of
the ROOT object server (look in ROOTANA - the NetDirectory stuff and the latest is the XmlServer stuff).

I can offer 2 solutions - switch from mana.c to a ROOTANA based analyzer or graft the XmlServer code
into your analyzer (it is very simple - you need to create an XmlServer object and tell it which ROOT
containers you want to make visible to ROODY).

I guess you can also debug the old midas server code inside mana.c...

K.O.
       Reply  17 Aug 2012, Cheng-Ju Lin, Bug Report, launching roody kills the analyzer 
Hi Konstantin,

Many thanks for your feedback.  I was able to keep the analyzer from exiting when launching roody by making some changes in the roody code. 
This at least allows me to keep moving forward. I will look into your suggestion of converting to ROOTANA based analyzer as well.

Regards,

Cheng-Ju


> > I've installed midas (Rev:5294) on SLC6.3 (64bit), along with recent trunk versions of rootana and roody. 
> >
> > #6 root_server_thread (arg=0x7f54fc001150) at src/mana.c:5154
> 
> You are connecting to mana, the old midas analyzer. The code for connecting to it is still present in roody,
> but I cannot support the matching server code in mana.c - it is 2 revolutions behind the current state of
> the ROOT object server (look in ROOTANA - the NetDirectory stuff and the latest is the XmlServer stuff).
> 
> I can offer 2 solutions - switch from mana.c to a ROOTANA based analyzer or graft the XmlServer code
> into your analyzer (it is very simple - you need to create an XmlServer object and tell it which ROOT
> containers you want to make visible to ROODY).
> 
> I guess you can also debug the old midas server code inside mana.c...
> 
> K.O.
          Reply  26 Sep 2012, Konstantin Olchanski, Bug Report, launching roody kills the analyzer 
> > 
> > I guess you can also debug the old midas server code inside mana.c...
> > 

I ended up doing this. (After receiving some discussion by email).

I remembered that this is an old problem with the old midasServer network
protocol in mana.c: if mana.c is compiled 32-bit, it sends 32-bit pointers; if compiled 64-bit,
it sends 64-bit pointers. On the receiving end (in roody), the ROOT TMessage object does not
provide any easy way to tell them apart (i.e. the object length is reported as 12 or 16 bytes for the two cases).

To make things more interesting, the midasServer code in ROOTANA always sends 32-bit "pointers"
(which are not pointers but 32-bit integer cookies).
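
The cookie scheme is roughly this (an illustrative sketch, not the actual ROOTANA code): keep a table of
objects and send the fixed-size table index instead of a raw address.

#include <stdint.h>
#include <vector>

// table of registered objects; the cookie is the index into this table
static std::vector<void*> cookie_table;

uint32_t make_cookie(void *obj)
{
   cookie_table.push_back(obj);
   return (uint32_t)(cookie_table.size() - 1);   // same size on 32-bit and 64-bit builds
}

void *resolve_cookie(uint32_t cookie)
{
   return cookie < cookie_table.size() ? cookie_table[cookie] : 0;
}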

I use the ROOTANA midasServer to test ROODY (I have no working mana.c analyzers available),
and ROODY expects to receive 32-bit "pointers", so the two are consistent.

But if I compile my midasServer to send/receive 64-bit "pointers" (cookies), I reproduce this crash. What I can reproduce I can "fix".

If I change the code in ROODY to receive and return 64-bit "pointers" (cookies), both the 32-bit and the 64-bit midasServer seem to work okay.

This is committed as roody svn rev 248. (https://ladd00.triumf.ca/svn/roody/trunk)

It is the same fix as suggested by Cheng-Ju Stephen Lin [cjslin@lbl.gov].

I hope this helps (or breaks the ROODY midasServer connection for everybody. I hope not).

K.O.
Entry  10 Sep 2012, Shaun Mead, Info, MIDAS button to display image 
Hi,

I've written a python script that reads some data from a file and generates a
.png image. I want to have a button on my MIDAS status page that:

- executes the script and waits for it to finish,
- then displays the image

How can I do that? I tried using the sequencer to just execute the script every
30 seconds, but I can't get it to work, and it would be better to only execute
the script on demand anyway. 

I am also having trouble getting the image display to work. I have the ODB keys set:

[local:oven1:S]/Custom>ls
Temperature Map&                /home/deap/ovendaq/online/index.html
Images

[local:oven1:S]/Custom>ls Images/temps.png/           
Background                      /home/deap/ovendaq/online/temps.png

And the HTML file is just this:
<img src="temps.png">

But the image won't display. It shows a "broken" picture, and when I try to view
it directly it says: Invalid custom page: Page not found in ODB.

Any help would be appreciated...

Thanks
Shaun
    Reply  11 Sep 2012, Stefan Ritt, Info, MIDAS button to display image Screen_Shot_2012-09-11_at_14.36.56_.png
> Hi,
> 
> I've written a python script that reads some data from a file and generates a
> .png image. I want to have a button on my MIDAS status page that:
> 
> - executes the script and waits for it to finish,
> - then displays the image
> 
> How can I do that? I tried using the sequencer to just execute the script every
> 30 seconds, but I can't get it to work, and it would be better to only execute
> the script on demand anyway. 
> 
> I am also having trouble getting the image display to work. I have the ODB keys set:
> 
> [local:oven1:S]/Custom>ls
> Temperature Map&                /home/deap/ovendaq/online/index.html
> Images
> 
> [local:oven1:S]/Custom>ls Images/temps.png/           
> Background                      /home/deap/ovendaq/online/temps.png
> 
> And the HTML file is just this:
> <img src="temps.png">
> 
> But the image won't display. It shows a "broken" picture, and when I try to view
> it directly it says: Invalid custom page: Page not found in ODB.
> 
> Any help would be appreciated...
> 
> Thanks
> Shaun


If you use the "custom" image system, you need to use GIF images. mhttpd can dynamically create GIF 
images, with a background image and overlaid labels, bar graphs etc. But mhttpd only contains a GIF 
library to do that in memory, not a PNG library.

Actually I would recommend you not to use a script to create an image, but to use the custom image 
system to display the temperatures. In the attachment you see a page from our experiment which contains 
a background image (the greyish boxes), labels (white temperature boxes), bar graphs (blue level boxes) 
and history pages (left side). This is all dynamically created inside mhttpd using the custom page system, 
without any external script. All you have to do is to get the temperatures and levels into the ODB via the 
slow control system. If you want, I can send you the full code for that page.

Cheers,
Stefan
Entry  06 Sep 2012, shaun, Bug Report, "cannot find recent history file" 
Hi, when attempting to access a history window the following message is repeated
over and over in the MIDAS message log:

Thu Sep 6 11:37:16 2012 [mhttpd,ERROR] [history.c:886:hs_count_events,ERROR]
cannot find recent history file
Thu Sep 6 11:38:16 2012 [mhttpd,ERROR] [history.c:886:hs_count_events,ERROR]
cannot find recent history file
Thu Sep 6 11:38:16 2012 [mhttpd,ERROR] [history.c:886:hs_count_events,ERROR]
cannot find recent history file
Thu Sep 6 11:39:16 2012 [mhttpd,ERROR] [history.c:886:hs_count_events,ERROR]
cannot find recent history file
Thu Sep 6 11:39:16 2012 [mhttpd,ERROR] [history.c:886:hs_count_events,ERROR]
cannot find recent history file

It appears to be related to attempting to display a history graph that includes
some time periods that have no recorded history data. When I zoom in so that the
whole graph has data the error message goes away.

The graph displays fine either way, so this error message seems useless. Is
there a way to suppress it?

Thanks
Shaun
Entry  05 Sep 2012, Stefan Ritt, Info, New pipe compression implemented in mlogger 
A new pipe compression scheme has been implemented in mlogger thanks to Fedor Ignatov from BINP 
Novosibirsk. The way it works is that the logger writes into a pipe instead of directly into a file. The pipe 
can then be connected to any compression program without the need to compile against any additional C 
library.
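
In essence the logger does something like this (a minimal sketch of the pipe idea, not the actual mlogger
code):

#include <stdio.h>

int main()
{
   /* start the compressor and stream data into its stdin */
   FILE *f = popen("bzip2 > test.mid.bz2", "w");
   if (f == NULL)
      return 1;
   const char data[] = "event data";
   fwrite(data, 1, sizeof(data), f);
   pclose(f);   /* closes the pipe and waits for bzip2 to finish */
   return 0;
}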

To use it, enter as the filename, for example:

|bzip2>run%05d.mid     (note the pipe '|' in front of the bzip2)

This way the data stream is run through the bzip2 program, which is known to have a better compression 
ratio than gzip. Furthermore, the parallel version of bzip2 can be used, which spreads over all available 
CPU cores and speeds up compression almost linearly with the number of cores. This parallel version, 
called pbzip2, can be found here:

http://compression.ca/pbzip2/

It can be easily compiled and installed. Using this method in the MEG experiment at PSI, we can compress 
our waveform data to 37% of its original size (49% with gzip), and on 8 cores we get a compression rate 
of about 40 MBytes/sec (23 MBytes/sec with gzip on a single core).
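
Assuming the standard logger channel layout in the ODB, the filename can be set for example with odbedit 
(adjust the channel number to your setup):

odbedit -c 'set "/Logger/Channels/0/Settings/Filename" "|pbzip2>run%05d.mid"'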

The disadvantage of that method is that one cannot see the compression ratio online, but this is not a big 
deal I guess. The new version has been committed as rev. 5324. 

/Stefan