> I wonder why one needs more than 256 hotlinks at all. Please note that with the odbxx "watch" API, you can hotlink a whole subdirectory and get notified if ANY of the
> underlying values or subdirectories change. In principle, one could have one hotlink to "/" and see all changes in the ODB (although that does not make sense and might slow
> down ODB access a bit).
Thanks! I didn't know that. I ran into the limit on the number of hotlinks via mlogger, which complained about not being able to create a hotlink
to yet another event. Doubling the default value of MAX_OPEN_RECORDS solved the problem.
I don't know the exact arithmetic defining the number of hotlinks in the system, but today's case looks like this:
- 36 (Linux servers) + 18 (RPi) monitoring frontends, each managing one or several different equipment items.
- Each equipment item sends at least one monitoring event to the ODB.
- In addition, each frontend creates an individual hotlink for handling interactive commands.
- With MAX_OPEN_RECORDS=256, four equipment items per frontend easily push the total into the danger zone.
"Equipment items" also include the online processes running on the distributed computing farm processing the data
(we are not using the MIDAS event-building capabilities).
>
> Try the odbxx_test.cpp example in MIDAS. In line 210 it puts a single hotlink on /Experiment. If you change anything under /Experiment, the program gets notified. By checking the
> path of the changed ODB entry, it can figure out which of the subkeys have been changed:
>
> // watch ODB key for any change with lambda function
> midas::odb ow("/Experiment");
> ow.watch([](midas::odb &o) {
> std::cout << "Value of key \"" + o.get_full_path() + "\" changed to " << o << std::endl;
> });
>
>
> Maybe that would solve your problem without having to change the maximum number of hotlinks.
I'll see how much mileage one can get here, but so far it looks like it is the number of different monitoring events
handled by the mlogger that drives the number of hotlinks.
-- thanks, regards, Pavel