ID | Date | Author | Topic | Subject |
505 | 13 Oct 2008 | Stefan Ritt | Info | mhttpd multi-experiment support removed |
Previously, one mhttpd server could serve several experiments at the same time.
However, this sometimes caused problems and was hard to maintain. Starting from
SVN revision 4348, I removed the multi-experiment support, which I believe makes
for a much cleaner implementation. So if several experiments are defined on a
computer, each one needs a separate mhttpd process listening on a different
port. The experiment name can now be supplied on the command line to mhttpd,
like for any other midas program.
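For example, on a computer with two experiments one would now run something like
this (a sketch only; the experiment names and ports are made up, and the exact
mhttpd options should be checked against "mhttpd -h" for your version):
   mhttpd -e Exp1 -p 8080 -D
   mhttpd -e Exp2 -p 8081 -D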
I have tested this so far at two experiments at PSI, but this does not cover all
possibilities. What I did not try were experiments with web passwords and odb
passwords. If there is any problem after upgrading to 4348, please report. |
506 | 13 Oct 2008 | Konstantin Olchanski | Info | MIDAS drivers for Tundra tsi148 pci-vme bridge |
The latest midas mvmestd.h driver for the Tundra tsi148 pci-vme bridge, as used
on GEFANUC VME processors, has been committed, revision 4349.
This midas driver requires the "gefvme" Linux kernel driver supplied by GEFANUC
as part of their Linux BSP. (Note that version "v7865-sdk-linux-R01.00" from
GEFANUC is mostly non-functional).
At TRIUMF, we have the V7865 VME processors and use the kernel driver
v7865-sdk-linux-R01.00-KO6. This driver supports these functions:
1) memory-mapped access to the full VME A16 and A24 address spaces and
window-mapped access to the VME A32 address space (the original gefvme driver
does not do memory-mapped access)
2) DMA directly from VME to user memory, with support for multi-segment chained
transfers (the original gefvme driver lacks chained transfers; a usage sketch
follows this list)
3) DMA from user memory to VME should work but is untested
4) no support for interrupts (the original gefvme driver does not support
interrupts either).
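For orientation, a rough sketch of how a frontend might use this through the
midas mvmestd.h layer follows. This is not code from the driver itself; the VME
addresses are placeholders and the constant names should be checked against your
copy of mvmestd.h:
   #include "mvmestd.h"

   int vme_example(void)
   {
      MVME_INTERFACE *vme;
      unsigned int value;
      unsigned int buffer[1024];

      mvme_open(&vme, 0);                   /* first (and only) VME interface */

      /* single-cycle read through the memory-mapped A24 window */
      mvme_set_am(vme, MVME_AM_A24);
      mvme_set_dmode(vme, MVME_DMODE_D32);
      value = mvme_read_value(vme, 0x00ff0000);            /* placeholder address */

      /* chained DMA block transfer from an A32 slave into user memory */
      mvme_set_am(vme, MVME_AM_A32);
      mvme_set_dmode(vme, MVME_DMODE_D32);
      mvme_set_blt(vme, MVME_BLT_MBLT64);
      mvme_read(vme, buffer, 0x08000000, sizeof(buffer));  /* placeholder address */

      mvme_close(vme);
      return (int) value;
   }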
If you are interested in using the TRIUMF driver, please contact me directly.
If you already purchased the GEFANUC BSP, I think you can use my drivers
immediately, without objection from GEFANUC.
Otherwise, I will have to do some research into the gefvme code license: since
all of the code appears to have GPL headers and identical code exists on the
internet, I expect to find that my gefvme driver can be freely distributed under
the GPL. But until then, and until it is cleared with TRIUMF management, I
cannot make my gefvme driver available for free download.
K.O. |
507 | 17 Oct 2008 | Konstantin Olchanski | Info | mlogger async transitions, etc |
As we were looking into problems with starting and stopping runs in one of our
daq systems, we found that the mlogger does something differently compared to
mhttpd and odbedit. Starting and stopping runs from mhttpd and odbedit works
correctly, but runs restarted by the file size limit in mlogger would often have
problems.
It turns out that mlogger calls cm_transition() with the ASYNC flag, while
mhttpd and odbedit always use SYNC.
As best I can tell, the ASYNC flag tells cm_transition() to fire off the
end-run rpc calls to all clients all at once, without waiting for reply from the
previous client before calling the next one. This effectively defeats the
transition sequence numbers - higher-numbered clients are told to end-run before
the lower-numbered clients have finished their end-run processing.
Most of the time, transition sequence numbers do not matter - all frontends can
stop at the same time, only mlogger has to be the very last, and for transitions
initiated by the mlogger itself, this sequencing is preserved.
It turns out that for our system, correct sequencing of individual frontends is
important; for example, the frontend controlling the trigger system has to stop
first. As we are using correctly adjusted transition sequence numbers, the right
sequence is always followed when runs are started/stopped from mhttpd and from
odbedit, but not for runs started/stopped by the mlogger.
So by changing mlogger to always do SYNC transitions, we fixed our sequencing
problem - now runs always start and stop correctly.
But then we ran into a deadlock between the mlogger and the event builder:
1) mlogger wants to stop the run
2a) mlogger stops reading the SYSTEM buffer
2b) mlogger starts cm_transition(SYNC)
3) rpc call to trigger frontend, trigger is blocked (no new events are
generated, but existing data is still flowing through the system)
4) other frontends are stopped (data still flowing)
5) data still flowing through the system, into the event builder, into the
SYSTEM buffer
6) SYSTEM buffer becomes 100% full (mlogger is not reading it, it is busy inside
cm_transition()), event builder is waiting for free space inside bm_send_event()
7) mlogger issues end-run rpc call to event builder
8) deadlock: mlogger is waiting for a reply from the event builder, the event
builder is waiting for free space in the SYSTEM buffer (not processing rpc
calls), mlogger is supposed to empty the SYSTEM buffer, but it is waiting for an
rpc reply instead.
In our particular case, the deadlock was easy to avoid by making the SYSTEM
buffer big enough to accommodate all in-flight data, but the problem remains in
the general case. I suspect mlogger uses ASYNC transitions exactly to avoid
this type of deadlock (mlogger has used ASYNC transitions since svn revision 2,
the beginning of time).
Personally, I am not happy about the inconsistency of run sequencing between
mlogger and mhttpd/odbedit (hmm... should also check mfe.c, it also stops runs
based on event count limits, etc). I think it would be better if all programs
did exactly the same thing when starting/stopping runs. When mlogger does
something different, we get surprising, unexpected behaviour, which is best avoided.
One possible solution could be to add an odb variable "/logger/async
transitions", set to "false" by default - to be consistent with other programs.
Systems that benefit from the old ASYNC behaviour and do not care about exact
sequencing can set this flag to "true".
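A minimal sketch of what that could look like inside mlogger, assuming the
proposed key name (not committed code; the cm_transition()/db_get_value()
argument lists should be checked against the current midas.h):
   BOOL async = FALSE;              /* default: same behaviour as mhttpd/odbedit */
   INT  size  = sizeof(async);
   char error[256];

   db_get_value(hDB, 0, "/Logger/Async transitions", &async, &size, TID_BOOL, TRUE);

   cm_transition(TR_STOP, 0, error, sizeof(error),
                 async ? ASYNC : SYNC, FALSE);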
K.O. |
508 | 18 Oct 2008 | Stefan Ritt | Info | mlogger async transitions, etc |
> I suspect mlogger uses ASYNC transitions exactly to avoid
> this type of deadlock (mlogger has used ASYNC transitions since svn revision 2, the
> beginning of time).
That's exactly the case. If you had asked me, I would have told you
immediately, but it is also good that you re-confirmed the deadlock behavior with
the SYNC flag. I hadn't checked this for the last ten years or so.
Making the buffers bigger is only a partial solution. Assume that the disk gets
slow for some reason; then any buffer will fill up and you get the deadlock.
The only real solution is to put the logic into a separate thread. So the thread
does all the RPC communication with the clients, while the main logger thread logs
data as usual in parallel. The problem is that the RPC layer is not yet completely
tested to be thread safe. I put in some mutexes, and you correctly realized that
these are system-wide, but you want a local mutex just for the logger process. You
also need some basic communication between the "run stop thread" and the "logger
main thread". Maybe Pierre remembers that once there was the problem that the
logger did not know when all events "came down the pipe" and could close the file.
He added some delay which helped most of the time. But if we had some communication
from the "run stop thread" telling the main thread that all programs except the
logger have stopped the run, then the logger would only have to empty the local
system buffer and would know 100% that everything is done.
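To illustrate the proposed division of labour (purely a sketch; as said above the
RPC layer is not yet known to be thread safe, so this is not something to paste
into mlogger as-is):
   #include <pthread.h>
   #include "midas.h"

   static volatile int transition_done = 0;

   /* "run stop thread": drives the transition while the main thread keeps logging */
   static void *stop_run_thread(void *arg)
   {
      char error[256];
      cm_transition(TR_STOP, 0, error, sizeof(error), SYNC, FALSE);
      transition_done = 1;      /* all other clients have now stopped the run */
      return NULL;
   }

   void stop_run_with_thread(void)
   {
      pthread_t t;
      pthread_create(&t, NULL, stop_run_thread, NULL);
      while (!transition_done) {
         /* keep draining the SYSTEM buffer here so the event builder never blocks */
      }
      pthread_join(t, NULL);
      /* now empty the local buffer and close the output file */
   }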
In the MEG experiment we have the same problem. We need a certain sequence
(basically because we have 9 front-ends and one event builder, which has to be
called after the front-ends). We realized quickly that the logger cannot stop the
run, so we wrote a little tool "RunSubmit", which is a run sequencer with a scripting
facility. So you write an XML file telling RunSubmit to start 10 runs, each with
5000 events. RunSubmit then watches the run statistics and stops the run. Since it's
outside the logger process, there is no deadlock. Unfortunately RunSubmit was
written by one of our students and contains some MEG-specific code. Otherwise it
could be committed to the distribution.
So I feel that a separate thread for run stop (and maybe even start) would be a
good thing, but I'm not sure when I will have time to address this issue.
- Stefan |
509 | 18 Oct 2008 | Konstantin Olchanski | Info | make linux32 & co |
The Makefile targets for crosscompiling MIDAS are now documented in the MIDAS
Doxygen documentation:
make linux32 & make clean32
make linux64 & make clean64
make crosscompile
make dox
This has to do with which flavour of MIDAS is built by default: 32-bit or 64-bit.
This is how this works now.
Default flavour is determined by ROOT. If ROOTSYS points to 32-bit ROOT, then
32-bit MIDAS is built; if to 64-bit ROOT, then 64-bit MIDAS. This works well after
the ROOT team added the correct "-m32" and "-m64" flags to "root-config --cflags".
If for some reason we also need a non-default flavour of MIDAS, for example
when the main daq computer runs 64-bit MIDAS but one frontend has to run on a
"32-bit only" VME processor, you say "make linux32". This creates the
"linux-m32/{lib,bin}" tree that you then reference in the Makefile of your
special frontend (i.e. instead of "-L$MIDASSYS/linux/lib" say
"-L$MIDASSYS/linux-m32/lib"). "make linux64" works the same way.
These non-default flavours of MIDAS are compiled with most special features
disabled: no ROOT, no MYSQL, etc.
When building "make linux32", you may also see errors caused by missing 32-bit
libraries - many 64-bit Linux distributions do not install the full 32-bit
development environment by default - so some header files and libraries may be
reported as missing. These not-installed-by-default 32-bit packages are usually
easy to install using commands like "yum install libxxx-devel.i386".
K.O. |
513 | 22 Oct 2008 | Konstantin Olchanski | Info | mscb timeouts and retries |
A new set of functions was added to mscb.h to adjust mscb timeouts and retries to better match specific
applications:
+ int EXPRT mscb_get_max_retry();
+ int EXPRT mscb_set_max_retry(int max_retry);
+ int EXPRT mscb_get_usb_timeout();
+ int EXPRT mscb_set_usb_timeout(int timeout);
+ int EXPRT mscb_get_eth_max_retry();
+ int EXPRT mscb_set_eth_max_retry(int eth_max_retry);
There are 3 settings:
1) mscb_max_retry: most (all?) mscb operations, like mscb_read(), retry failed mscb transactions up to
10 times. The corresponding set and get functions allow tuning this retry limit.
2) mscb_usb_timeout: the driver for the USB-MSCB adapter uses a timeout of 6 seconds.
mscb_set_usb_timeout() permits changing this value.
3) mscb_eth_max_retry: the driver for the Ethernet-MSCB adapter has to deal with UDP packet loss. If
the adapter does not respond to a UDP command, the UDP command is sent again, with a bigger
timeout (timeout = 100 * (retry+1), in ms); this is repeated up to 10 times. mscb_set_eth_max_retry()
permits adjusting this number of retries.
This is how it works for the usb interface:
   int mscb_read(...)
      for (retry=0; retry<mscb_max_retry; retry++)
         mscb_exch()
            musb_write(..., mscb_usb_timeout)
            musb_read(..., mscb_usb_timeout)
This is how it works for the ethernet interface:
   int mscb_read(...)
      for (retry=0; retry<mscb_max_retry; retry++)
         mscb_exch()
            for (retry=0; retry<mscb_eth_max_retry; retry++)
               send_udp_command()
               wait_for_udp_response(timeout = 100 * (retry+1))
This is how the new functions are intended to be used:
...
int old = mscb_set_max_retry(2);
... do stuff ...
mscb_set_max_retry(old); // restore default value
svn revision 4356.
K.O. |
519 | 28 Oct 2008 | Stefan Ritt | Info | mscb timeouts and retries |
> A new set of functions was added to mscb.h to adjust mscb timeouts and retries to better match specific
> applications:
>
> + int EXPRT mscb_get_max_retry();
> + int EXPRT mscb_set_max_retry(int max_retry);
> + int EXPRT mscb_get_usb_timeout();
> + int EXPRT mscb_set_usb_timeout(int timeout);
> + int EXPRT mscb_get_eth_max_retry();
> + int EXPRT mscb_set_eth_max_retry(int eth_max_retry);
In the spirit of this, a variable retry scheme has been implemented in the mscbdev.c device driver. At the
MEG experiment, we have one mscb device which is pretty slow, while the others are fast. Therefore it is
necessary to have a per-device max retry count which can be different for different submasters. I
therefore moved the max_eth_retry variable into the mscb_fd structure and adjusted a few functions
accordingly. I did not bother with the other timeouts and retries, since I don't need them for the moment,
but it would be nice if they were handled in the same way. Then I added code into mscbdev.c to read the
retry variable from the ODB under /Equipment/<name>/Settings/Device/<Name>/Retries. The default is 10, but it can be
changed and becomes valid after the program has been restarted. |
524 | 06 Nov 2008 | Konstantin Olchanski | Info | midas elog outage |
Around Wednesday noon, there was a power outage at TRIUMF (loss of UPS power in the TRIUMF
computing center) and after rebooting ladd00, https/ssl access stopped working with a complaint
about a mismatch between the server name and the ssl certificate name. This configuration used to work, so one of the
system updates must have broken it. This problem is now fixed and access to the midas elog is restored.
K.O. |
528 | 20 Nov 2008 | Jimmy Ngai | Info | Recommended platform for running MIDAS |
Dear All,
Are there any recommended platforms for running MIDAS? Has anyone encountered
problems when running MIDAS on Scientific Linux?
Thanks.
Jimmy |
529 | 20 Nov 2008 | Stefan Ritt | Info | Recommended platform for running MIDAS |
> Dear All,
>
> Are there any recommended platforms for running MIDAS? Has anyone encountered
> problems when running MIDAS on Scientific Linux?
>
> Thanks.
>
> Jimmy
I run MIDAS on Scientific Linux 5.1 without any problem. |
530 | 26 Nov 2008 | Jimmy Ngai | Info | Send email alert in alarm system |
Dear All,
We have a temperature/humidity sensor in MIDAS now and will add a liquid level
sensor to MIDAS soon. We want the operators to get alerted ASAP when the
laboratory environment or the liquid level reaches some critical level. Can
MIDAS send email alerts or SMS alerts to cell phones when the alarms are
triggered? If yes, how can I configure it?
Many thanks!
Best Regards,
Jimmy |
531 | 26 Nov 2008 | Stefan Ritt | Info | Send email alert in alarm system |
> We have a temperature/humidity sensor in MIDAS now and will add a liquid level
> sensor to MIDAS soon. We want the operators to get alerted ASAP when the
> laboratory environment or the liquid level reaches some critical level. Can
> MIDAS send email alerts or SMS alerts to cell phones when the alarms are
> triggered? If yes, how can I configure it?
Sure, that's possible; that's why MIDAS contains an alarm system. To use it, define
an ODB alarm on your liquid level, like
/Alarms/Alarms/Liquid Level
Active y
Triggered 0 (0x0)
Type 3 (0x3)
Check interval 60 (0x3C)
Checked last 1227690148 (0x492D10A4)
Time triggered first (empty)
Time triggered last (empty)
Condition /Equipment/Environment/Variables/Input[0] < 10
Alarm Class Level Alarm
Alarm Message Liquid Level is only %s
The Condition of course might be different in your case; just select the correct
variable from your equipment. In this case, the alarm triggers an alarm of class
"Level Alarm". Now you define this alarm class:
/Alarms/Classes/Level Alarm
Write system message y
Write Elog message n
System message interval 600 (0x258)
System message last 0 (0x0)
Execute command /home/midas/level_alarm '%s'
Execute interval 1800 (0x708)
Execute last 0 (0x0)
Stop run n
Display BGColor red
Display FGColor black
The key here is to call a script "level_alarm", which can send emails. Use
something like:
#!/bin/csh
echo $1 | mail -s \"Level Alarm\" your.name@domain.edu
odbedit -c 'msg 2 level_alarm \"Alarm was sent to your.name@domain.edu\"'
The second command just generates a midas system message for confirmation. Most
cell phones (depending on the provider) have an email address. If you send an email
there, it gets translated into an SMS message.
The script file above can of course be more complicated. We use a perl script
which parses an address list, so everyone can register by adding his/her email
address to that list. The script also collects some other slow control variables
(like pressure, temperature) and combines these into the SMS message.
For very sensitive systems, having an alarm via SMS is not everything, since the
alarm system itself could be down (computer crash or whatever). In this case we use
'negative alarms', or whatever you want to call them: the system sends an SMS with
the current levels etc. every 30 minutes.
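A minimal sketch of such a heartbeat script, run from cron every 30 minutes (the
ODB path, script location and address are only examples):
   #!/bin/sh
   # crontab entry: */30 * * * * /home/midas/level_status
   LEVELS=`odbedit -c 'ls -v "/Equipment/Environment/Variables"'`
   echo "$LEVELS" | mail -s "Level status" your.name@domain.edu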
If the SMS is missing for some time, it might be an indication that something in the
midas system is wrong, and one can go there and investigate. |
534 | 27 Nov 2008 | Konstantin Olchanski | Info | lazylogger updated |
lazylogger was updated to improve handling of the list of runs still on disk
(odb /Lazy/xxx/List).
Previously, each and every run was listed in the List arrays. With modern
Terabyte-sized data disks, many days' worth of runs tend to remain on disk
and these List arrays were getting too big, inflating the size of ODB dumps
written by mlogger into the output data file and slowing down starting and
stopping of runs considerably.
Now, the runs are listed as ranges of "first run" - "last run" (see the example below).
This significantly reduces the size of the "List" arrays and makes lazylogger
usable for the ALPHA experiment at CERN and for T2K/ND280 prototype DAQ at
TRIUMF (writing to Castor and Dcache respectively, using the newly added
"Script" method).
The new List format is fully compatible with the old format and you can update
and run the new lazylogger without changing anything in ODB. New runs will be
added to the List arrays in the new format and data in the old format will
eventually go away as old runs are removed from disk.
svn revision 4394.
K.O.
Example: the listing below reads like this:
range from 7100 to 7154
range from 7157 to 7161 (7155-7156 are missing)
range from 7163 to 7168 (7162 is missing)
runs 7170, 7173, 7176
range from 7179 to 7182
and so forth.
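In other words, a positive entry starts a range, a following negative entry -N
closes it at run N, and a positive entry with no negative partner stands for a
single run. A small sketch of a decoder (just an illustration of the format, not
code from lazylogger); the raw ODB array it would be applied to is shown below:
   #include <stdio.h>

   /* expand a lazylogger "List" array into individual run numbers */
   void print_runs(const int *list, int n)
   {
      int i, run;
      for (i = 0; i < n; i++) {
         int first = list[i];
         int last  = first;
         if (i + 1 < n && list[i + 1] < 0) {
            last = -list[i + 1];        /* negative entry closes the range */
            i++;
         }
         for (run = first; run <= last; run++)
            printf("%d\n", run);
      }
   }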
ODB /Lazy/Dcache/List
007100
[0] 7100 (0x1BBC)
[1] -7154 (0xFFFFE40E)
[2] 7157 (0x1BF5)
[3] -7161 (0xFFFFE407)
[4] 7163 (0x1BFB)
[5] -7168 (0xFFFFE400)
[6] 7170 (0x1C02)
[7] 7173 (0x1C05)
[8] 7176 (0x1C08)
[9] 7179 (0x1C0B)
[10] -7182 (0xFFFFE3F2)
[11] 7184 (0x1C10)
[12] 7188 (0x1C14)
[13] -7199 (0xFFFFE3E1)
007200
[0] 7200 (0x1C20)
[1] -7225 (0xFFFFE3C7) |
535 | 27 Nov 2008 | Konstantin Olchanski | Info | Fixed mlogger crash, was Per-variable history implementation in the mlogger |
> revision 4142+4143 are minor fixes, refactoring (switch the code to use helper
> functions) and implementation of history for structured banks
The implementation of "history for structured banks" had a bug - tags inside
structured banks were counted incorrectly, leading to memory overwrites and an mlogger
crash in open_history().
This problem is now fixed (plus assert() checks were added to crash out if an overwrite
of the tags[] array is detected).
svn revision 4398.
K.O. |
541 | 12 Dec 2008 | Jimmy Ngai | Info | Custom page which executes custom function |
Dear All,
How can I add a button at the top of the "Status" webpage which will show a
page similar to the "CNAF" one after I click on it? And how can I make a
custom page similar to "CNAF" which allows me to call some custom functions? I
want to make a page specifically for doing calibration.
Thank you for your attention!
Best Regards,
Jimmy Ngai |
542 | 14 Dec 2008 | Stefan Ritt | Info | Custom page which executes custom function |
> How can I add a button at the top of the "Status" webpage which will show a
> page similar to the "CNAF" one after I click on it? And how can I make a
> custom page similar to "CNAF" which allows me to call some custom functions? I
> want to make a page specifically for doing calibration.
The CNAF page calls functions directly through the RPC layer of midas, which is
not possible from custom pages. All you can do is execute a script on the
server side, which then causes some action.
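One way to get such a button onto the status page is the mhttpd "script button"
mechanism: entries under the ODB "/Script" tree are shown as buttons and run the
given command on the server side. A rough sketch (the key name and script path are
only examples, and the exact layout of "/Script" entries should be checked in the
mhttpd documentation for your version):
   odbedit -c 'create STRING "/Script/Calibrate"'
   odbedit -c 'set "/Script/Calibrate" "/home/midas/do_calibration"'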
For details please consult the documentation. |
546 | 01 Jan 2009 | Konstantin Olchanski | Info | odb "hot link" magic explored |
Here are my notes on the MIDAS ODB "hot link" function. Perhaps others can find them useful.
Using db_open_record(key,function), the user can tell MIDAS to call the specified user function when
the specified ODB key is modified by any other MIDAS program. This function works both locally
(shared memory odb access) and remotely (odb access through mserver tcp rpc). For example, the
MIDAS "history" mechanism is implemented in the mlogger by "hot-linking" ODB
"/equipment/xxx/Variables".
First, the relevant data structures defined in midas.h and msystem.h (ODB database headers, etc)
(in midas.h)
#define NAME_LENGTH 32 /**< length of names, mult.of 8! */
#define MAX_CLIENTS 64 /**< client processes per buf/db */
#define MAX_OPEN_RECORDS 256 /**< number of open DB records */
(in msystem.h)
DATABASE buf                                        <--- local, private to each client
   DATABASE_HEADER* database_header                 <--- odb in shared memory
      char name[NAME_LENGTH]
      DATABASE_CLIENT client[MAX_CLIENTS]
         char name[NAME_LENGTH]
         OPEN_RECORD open_record[MAX_OPEN_RECORDS]
            handle
            access_mode
            flags
(the above means that each midas client has access to the list of all open records through
buf->database_header.client[i].open_record[j])
Second, the data path through db_set_data & co: (other odb "write" functions work the same way)
db_set_data(key)
   lock db
   update odb                 <--- memcpy(), really
   db_notify_clients(key)
   unlock db
   return

db_notify_clients(key)
loop:    <--- data for this key changed and so data for all keys containing it
              also changed, and we need to notify anybody who has an open record
              on the parents of this key. need to loop over parents of this key (follow "..")
   if (key->notify_count)
      foreach client
         foreach open_record
            if (open_record.handle == key)
               ss_resume(client->port, "O hDB hKey")
   key = key.parent
   goto loop;

ss_resume(port, message)
   idx = ss_suspend_get_index()                        <--- magic here
   send udp message ("O hDB hKey") to localhost:port   <--- notifications sent only to local host!
note 1: I do not completely understand the ss_suspend_xxx() stuff. The best I can tell
is it creates a number of udp sockets bound to the local host and at least one udp rpc
receive socket ultimately connected to the cm_dispatch_rpc() function.
note 2: More magic here: database_header->client[i].port appears to be the udp rpc server
port of the mserver, while ODB /Clients/xxx/Port is the tcp rpc server port
of the client itself, on the remote host.
note 3: the following is for remote odb clients connected through the mserver. For local
clients, cm_dispatch_rpc() calls the local db_update_record() as shown at the very end.
note 4: this uses udp rpc. If the udp datagram is lost inside the os kernel (it looks like these udp/rpc
datagrams never go out to the network), "hot-link" silently fails: code below is not executed. Some
OSes (namely, Linux) are known to lose udp datagrams with high probability under certain
not very well understood conditions.
local mserver receives the udp datagram
   ...
   cm_dispatch_ipc()
      if (message=="O hDB hKey")
         decode message (hDB, hKey)
         db_update_record(hDB, hKey)
            send tcp rpc with args(MSG_ODB, hDB, hKey)
            (note - unlike udp rpc, tcp rpc are never "lost")

remote client receives tcp rpc:
   rpc_client_dispatch()
      recv_tcp(net_buffer)
      if (net_buffer.routine_id == MSG_ODB)
         db_update_record(hDB, hKey)

db_update_record(hDB, hKey)
   if remote delivery, see cm_dispatch_ipc() above
   <--- local delivery
   foreach (_recordlist)
      if (recordlist.handle == hKey)
         if (!recordlist.access_mode&MODE_WRITE)
            db_get_record(hDB,hKey,recordlist.data,recordlist.size)
            recordlist.dispatcher(hDB,hKey,recordlist.info);   <--- user-supplied handler
Note: the dispatcher() above is the function supplied by the user in db_open_record().
K.O. |
547 | 01 Jan 2009 | Konstantin Olchanski | Info | Custom page which executes custom function |
> How can I add a button at the top of the "Status" webpage which will show a
> page similar to the "CNAF" one after I click on it? And how can I make a
> custom page similar to "CNAF" which allows me to call some custom functions? I
> want to make a page specifically for doing calibration.
I was going to say that you can do this by using the MIDAS "hot-link" function.
In your equipment program, you create a string /eq/xxx/Settings/Command, and hot-link
it to the function you want to be called. (See midas function db_open_record() for details
and examples). (To test it, you put a call to printf("Hello world!\n") into your handler function,
then change the value of "command" using odbedit or the mhttpd odb editor
and observe that your function gets called and that it receives the correct value of "command").
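For example, such a test from the command line could look like this (using the
shorthand ODB path from above):
   odbedit -c 'set "/eq/xxx/Settings/Command" "aaa"'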
Then on your custom web page you create 2 buttons "aaa" and "bbb" attached to javascript
ODBset("/eq/xxx/Settings/Command","aaa") and "bbb" respectively. When you push the button,
the specified string is written into ODB, and your hot-link handler function is called with the contents
of "command", which you can then look at to find out which web button was pushed.
But after looking at the hot-link data paths (see https://ladd00.triumf.ca/elog/Midas/546), I see 2
problems that make the above scheme unreliable and maybe unusable in some applications:
1) the data path contains one UDP communication and it is well known that UDP datagrams can be (and
are) lost with low or high probability, depending on not-well-understood external factors.
The effect is that the hot-link fails to "fire": the odb contents are changed but your function is not called.
2) there is a timing problem with multiple odb writes: the odb lock is dropped before the "hot-link" gets
to see the new contents of odb: db_set_data()->lock odb->change data->send notification->unlock
odb->xxx->notification received by client->read the data->call user function. If something else is
written into odb during "xxx" above, the client may never see the data written by the first odb write. For
local clients, the delay between "send notification" and "notification is received by client" is not bounded in
time (can be arbitrarily long, depending on the system load, etc). For remote clients, there is an additional
delay as the udp datagram is received by the local mserver and is forwarded to the remote client through
a tcp rpc connection (another source of unbounded delay).
The effect is that if buttons "aaa" and "bbb" are pushed quickly one right after the other, while your
function will be called 2 times (if neither udp packet is dropped), you may never see the value of "aaa"
as it will be overwritten by "bbb" by the time you receive the first notification.
Probability of malfunction increases with code written like this: { ODBset("command", "open door");
ODBset("command", "walk through doorway"); }. You may see the "open door" command sometimes
mysteriously disappear...
The net effect is that sometimes you will push the button but nothing will happen. This may be okay,
depending on your application and on how often it happens in practice on your specific system.
If you are lucky, you may never see either of the 2 problems listed above and hot-links will work for you
perfectly. At TRIUMF, in the past, we have seen hot-links misbehave in the TWIST experiment, and now I
think I understand why (because of the 2 problems described above).
K.O. |
552 | 13 Jan 2009 | Stefan Ritt | Info | Custom page which executes custom function |
The UDP connection you mention is only used locally for inter-process communication. When I implemented that, I
made extensive tests and found that there is never a packet being dropped. This happens for UDP only if the packet
goes over a physical network. Maybe this is different in modern Linux versions, so one should double check this
again.
For remote hot-link notification, the notification is sent over the TCP link, so it should not be lost either. But
your second point is correct. The hot-link mechanism was developed to change parameters in front-end programs for
example. So by design it is guaranteed that if you change a value in the ODB, any client hot-linked to that will
see the change (sooner or later). If there are many changes in short intervals (or the callback function on the
remote client takes a long time), only the last change is guaranteed to arrive. Therefore, as you correctly state,
the hot-link mechanism is not a safe replacement for the RPC layer (that's why the RPC layer is there after all). |
553 | 14 Jan 2009 | Stefan Ritt | Info | odb "hot link" magic explored |
KO wrote: | note 1: I do not completely understand the ss_suspend_xxx() stuff. The best I can tell is it creates a number of udp sockets bound to the local host and at least one udp rpc receive socket ultimately connected to the cm_dispatch_rpc() function. |
The ss_suspend_xxx() stuff is indeed the most complicated thing in midas, and I always have to remind myself
how it works. So let me try again:
The basic idea is that for a high performance system, you cannot do the inter-process communication via
polling. That would waste CPU time. Inter-process communication is necessary for for buffer manager
(producer notifies consumer when new events are there), for the RPC mechanism (odbedit tells mlogger to
start a run) or for ODB hot-links. To avoid polling, the inter-process communication works with sockets (UDP
and TCP). This allows to use the select() call, which suspends the calling process until some socket
receives data or a pre-defined time-out expires. This is the only portable method I found which works under
unix and windows (signals are only poorly supported under windows).
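As a minimal illustration of that idea (a generic sketch, not the actual
ss_suspend() code):
   #include <sys/select.h>

   /* wait until one of the given sockets becomes readable or the timeout
      (in milliseconds) expires; returns the number of ready sockets */
   int wait_for_sockets(const int *sock, int n_sockets, int timeout_ms)
   {
      fd_set readfds;
      struct timeval tv;
      int i, max_fd = 0;

      FD_ZERO(&readfds);
      for (i = 0; i < n_sockets; i++) {
         FD_SET(sock[i], &readfds);
         if (sock[i] > max_fd)
            max_fd = sock[i];
      }

      tv.tv_sec  = timeout_ms / 1000;
      tv.tv_usec = (timeout_ms % 1000) * 1000;

      return select(max_fd + 1, &readfds, NULL, NULL, &tv);
   }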
So after creating all sockets, ss_suspend() does a select() on these sockets:
_suspend_struct[idx].listen_socket                 | Server side for any new RPC connection (each client is also an RPC server which gets contacted directly during run transitions, for example)
_suspend_struct[idx].server_acception.recv_sock    | Receive socket (TCP) for any active RPC connection
_suspend_struct[idx].server_acception.event_sock   | Receive socket (TCP) for bare events (bypassing the RPC layer for performance reasons)
_suspend_struct[idx].server_connection->recv_sock  | Outgoing TCP connection to the mserver, used for example for hot-link notifications from the mserver
_suspend_struct[idx].ipc_recv_socket               | UDP socket for inter-process notification
For each socket there is a dispatch function, which gets called if that socket receives some data. Hope this sheds some light on the guts of that. |