25 May 2006, Konstantin Olchanski, Bug Fix, fix crash in xml odb load
|
There is a crash in odbedit when loading some xml odb files: a missing check for a NULL pointer when
loading an array of strings in which one of the array elements is blank. This check is present when loading
other string values. Here is the diff:
-bash-3.00$ diff odb.c odb.c-new
5621c5621,5624
< db_set_data_index(hDB, hKey, mxml_get_value(child), size, i, tid);
---
> if (mxml_get_value(child) == NULL)
> db_set_data_index(hDB, hKey, "", size, i, tid);
> else
> db_set_data_index(hDB, hKey, mxml_get_value(child), size, i, tid);
K.O. |
18 May 2006, Stefan Ritt, Bug Fix, Fixed problems with reload of custom pages
|
We had a problem with custom pages and reloading them. If a custom page contains an editable ODB field, one can change the ODB value through the page. The URL then contains a "?cmd=Set&value=x&index=x" section, which stays in the browser's address bar after the ODB value has been updated. If the value is later changed in the ODB by some other means and one presses "reload" in the browser, the above URL gets executed again and the value gets changed back, which is not wanted.
The problem has been fixed such that, after setting a variable, mhttpd now redirects the browser to a URL that does not contain the "Set" command. |
18 May 2006, Konstantin Olchanski, Bug Fix, removed a few "//" comments to fix compilation on VxWorks
|
Our VxWorks C compiler (gcc-2.8-something) does not like the "//" comments. Luckily, on VxWorks we
only compile a small subset of midas, so there is no point in banning all "//" comments. But I did have to
convert a couple of them to /* comments */ in odb.c to make it compile. Changes to odb.c committed. K.O. |
11 May 2006, Konstantin Olchanski, Bug Report, MIDAS and Fedora 4
|
Fellow Midasites- we are receiving reports that current Midas sources do not compile on Fedora 4 (and 5?)
with errors "invalid lvalue in assignment". It looks like the new compilers reject what looks to my eye like
perfectly valid C code that we have been writing since the beginning of C. Any suggestions on the best fix?
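For reference, the usual trigger for "invalid lvalue in assignment" on gcc 4.x (which ships with FC4) is the old cast-as-lvalue extension, which gcc 4 dropped; whether this is exactly what the midas sources hit is an assumption. A minimal sketch:
/* cast-as-lvalue: accepted by gcc 2.x/3.x, rejected by gcc 4 with
   "invalid lvalue in assignment" */
void example(void)
{
   char buf[16];
   void *p = buf;

   /* ((char *) p) += 4; */     /* the old, now-rejected form */
   p = (char *) p + 4;          /* portable replacement */
}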
K.O. |
07 May 2006, Konstantin Olchanski, Bug Report, cm_register_transition gyrations
|
I am debugging a Rome-based DAQ system set up by Pierre A. (the system does not
work because of bugs in Rome).
One problem I see is with my copy of cm_register_transition() in midas.c. Rome
calls it with a NULL function to register a "queued" transition, but the
cm_register_transition() code has changed around (rev 3051) to make NULL mean
"unregister" a transition (this broke the queued transitions used by Rome), then
it got changed back (rev 3085). Of course, I was stuck with the broken version,
so Rome did not work at all, and it cost me real wall time to get to the bottom
of all this, only to discover that this problem is already fixed. So-
I would greatly appreciate it if, in the future, changes (and bug fixes) to the
MIDAS API were announced on this mailing list here.
K.O. |
08 May 2006, Stefan Ritt, Bug Report, cm_register_transition gyrations
|
> I am debugging a Rome-based DAQ system setup by Pierre A. (the system does not
> work because of bugs in Rome).
>
> One problem I see is with my copy of cm_register_transition() in midas.c. Rome
> calls it with a NULL function to register a "queued" transition, but the
> cm_register_transition() code has changed around (rev 3051) to make NULL mean
> "unregister" a transition (this broke the queued transitions used by Rome), then
> it got changed back (rev 3085). Of course, I was stuck with the broken version,
> so Rome did not work at all, and it cost me real wall time to get to the bottom
> of all this, only to discover that this problem is already fixed. So-
>
> I would greatly appreciate it if, in the future, changes (and bug fixes) to the
> MIDAS API were announced on this mailing list here.
>
> K.O.
Yes, you are right. I apologize. The fact was that I was not aware that anybody else
already uses ROME in online mode. Nevertheless, let me at least explain the reason for
that change:
Some experiments at PSI run a slow control front end which talks to pretty slow
hardware and thus can be nonresponsive for many seconds. Since each frontend by
default registers for the start and stop transitions, this frontend delayed the
start/stop of each run. To solve this problem in the short term, the frontend should not
register for the transitions. Originally I implemented this by using the NULL function
pointer, until we figured out that ROME uses this to register (not de-register)
together with the cm_query_transition() function. Therefore a new function
cm_deregister_transition() was implemented and is now used by the slow frontends.
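A minimal sketch of how a slow frontend would use it (assuming cm_deregister_transition() takes just the transition type and is called from frontend_init(), after the default registration has happened):
#include "midas.h"

INT frontend_init(void)
{
   /* do not take part in (and therefore never delay) run start/stop */
   cm_deregister_transition(TR_START);
   cm_deregister_transition(TR_STOP);
   return SUCCESS;
}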
In the long run this will be solved by implementing multi-threaded frontends which
get one thread for each equipment and therefore do not block any transition anymore. |
07 May 2006, Konstantin Olchanski, Bug Fix, Update & add VME drivers
|
I committed fixes for a few minor compilation errors in the VME drivers
(vmicvme.c, etc).
I also added new drivers for the v513 latch and v560 scaler that I wrote for
CERN-ALPHA.
(Maybe I should mention that we also have drivers for the SIS 3820 multiscaler,
the v895 VME discriminator and a few more modules. Will commit them as they mature).
K.O. |
23 Mar 2006, Sergio Ballestrero, Info, svn@savannah.psi.ch down ?
|
Hi,
I was trying to update the checkout of Midas, but it looks like something is not
working - maybe a component of the Savannah system:
[sergio@daq-pc midas-SVN]$ svn update
svn@savannah.psi.ch's password: svn
unix dgram connect: Connection refused at /bin/cvssh.pl line 32
no connection to syslog available at /bin/cvssh.pl line 32
svn: Connection closed unexpectedly
my .svn/entries says (amongst the rest)
url="svn+ssh://svn@savannah.psi.ch/afs/psi.ch/project/meg/svn/midas/trunk"
and yes, it used to work well...
Cheers,
Sergio |
26 Mar 2006, Stefan Ritt, Info, svn@savannah.psi.ch down ?
|
> Hi,
> I was trying to update the checkout of Midas, but it looks like something is not
> working - maybe a component of the Savannah system:
> [sergio@daq-pc midas-SVN]$ svn update
> svn@savannah.psi.ch's password: svn
> unix dgram connect: Connection refused at /bin/cvssh.pl line 32
> no connection to syslog available at /bin/cvssh.pl line 32
> svn: Connection closed unexpectedly
>
> my .svn/entries says (amongst the rest)
> url="svn+ssh://svn@savannah.psi.ch/afs/psi.ch/project/meg/svn/midas/trunk"
> and yes, it used to work well...
>
> Cheers,
> Sergio
I just tried now and it seemed to work fine. Do you still have the problem?
- Stefan |
27 Mar 2006, Sergio Ballestrero, Info, svn@savannah.psi.ch down ?
|
> I just tried now and it seemed to work fine. Do you still have the problem?
>
> - Stefan
The problem was still there this morning, shortly after seeing your mail, but seems
to be fixed now.
BTW, what is the best way to submit patches? I have a version of khyt1331 for Linux
kernel 2.6 (we are running Scientific Linux 4.1), and a few smaller things, mostly in
the examples.
Thanks, Sergio |
22 Dec 2005, Konstantin Olchanski, Info, How do I do custom event building?
|
It turns out that the standard event builder fragment matching algorithm cannot
be used in my TPC application. I have two TPC-USB interfaces, which lack any
"busy" or synchronization logic. I send the hardware trigger into both
interfaces, and if one of them misses it, the data is out of sync forever. Consider:
Hardware trigger:   trig1     trig2       trig3     trig4
TPC01:              serial1   serial2     serial3   serial4
TPC02:              serial1   (missing)   serial2   serial3
With the event builder matching only the event serial numbers, the first event
will be okay, but the second event will have trig2 data from TPC01 and trig3
data from TPC02, etc.
The problem exists even if the TPC-USB interfaces do not miss any triggers:
during begin and end of run, the interfaces are enabled one at a time, so if a
trigger arrives after the first interface was enabled, but before the second is
enabled, the data starts being out of sync (and if the same happens during the
end-of-run, the event counts from both frontends will match, but all data would
*still* be out of sync).
Obviously additional data is needed to match the fragments.
So in each frontend, I have a high-precision timestamp (gettimeofday(), usec
resolution) and I would like to have the event builder match the timestamps
instead of event serial numbers. What is the best way to do this? The mevb.c
code does not have any user callbacks for checking "do these fragments belong to
the same event?".
P.S. The event rate will be about 1/sec from cosmic ray tests and at most
10-50/sec in the M11 beam line at TRIUMF; at these low rates, the gettimeofday()
timestamps should be adequate.
K.O. |
23 Dec 2005, Stefan Ritt, Info, How do I do custom event building?
|
> It turns out the the standard event builder fragment matching algorithm cannot
> be used in my TPC application. I have two TPC-USB interfaces, which lack any
> "busy" or synchronization logic. I send the hardware trigger into both
> interfaces, and if one of them misses it, the data is out of sync forever. Consider:
>
> Hardware
> trigger trig1 trig2 trig3 trig4
> TPC01 serial1 serial2 serial3 serial4
> TPC02 serial1 (missing) serial2 serial3
>
> With the event builder matching only the event serial numbers, the first event
> will be okey, but the second event will have trig2 data from TPC01 and trig3
> data from TPC02, etc.
Well, I would say: this is a very poor design of an experiment. Before curing the
problems in software, I would first consider a redesign of the data readout scheme, with
a global hardware trigger and a hardware busy.
> So in each frontend, I have a high-precision timestamp (gettimeofday(), usec
> resolution) and I would like to have the event builder match the timestamps
> instead of event serial numbers.
What do you do if the frontend clock drifts away? I have seen drifts of up to 10 sec/day
on some PCs, so your required accuracy of 1/50 s would be violated after 3 minutes. You
would have to synchronize your clocks constantly. If your synchronization algorithm
determines a clock is out of sync and adjusts it, and the delta t is more than 1/50 sec,
you are screwed.
So altogether I conclude that this proposed synchronization scheme is pretty dangerous
and could ruin the whole experiment.
> What is the best way to do this? The mevb.c
> code does not have any user callbacks for checking "do these fragments belong to
> the same event?".
Pierre can answer that.
- Stefan |
03 Jan 2006, John O'Donnell, Info, How do I do custom event building?
|
At DANCE we have a similar issue. We are still doing "software
handshaking" between multiple frontends (15 which read data, and 16th
with direct accessto the trigger logic), and we apply a time stamp
using gettimeofday(). We use the regular mevb, sorting on serial number.
In the analyzer (MIDAS or ROME) we then keep a big circular buffer of
event fragments, which are rebuilt into new events based on the time stamp
obtained from gettimeofday(). We keep the system clocks synchronized
(often to within about 1ms) using ntp (need to average over several
ntp servers to avoid issues with network noise). ntp can take a while
to stabilize, so we never reboot our computers... (well almost never).
We have a slow control frontend which monitors the ntp time offsets and
puts them in the history system for easy visualization.
Occasionally we seem to get in a mess, but somehow this fixes itself on
the next run, so it has been a usable system. Maybe one day we will
get hardware handshaking between the frontend computers and the trigger
logic, but in the meantime we are taking data.
John. |
28 Dec 2005, Konstantin Olchanski, Suggestion, Handling multiple identical USB devices
|
When I wrote the musbstd.h "open" method, I kind of punted on the problem of
handling multiple identical USB devices. Instead of a real solution, I added an
"instance" parameter, which allows one to "open" the "first", "second", etc. USB
device, as listed in a magic random system-dependent order.
Normally, USB devices are identified by two 16-bit integers: manufacturer ID and
product ID (i.e. as reported by "lsusb"). This works well until one has more
than one "identical" device. Two years ago, I had 5 identical USB cameras
(optical alignment system for TRIUMF-TWIST); last year, I had multiple USB
serial adapters; today I have two identical USB-TPC interfaces.
Most of the time, the devices are plugged into the same USB ports, so
theoretically, one should be able to tell exactly which one is which ("upstream
camera is plugged into port 1, downstream camera is plugged into port 2"). But
in the magic system-dependent enumeration order, they keep moving around,
depending on the order of enumeration, history of powering up and down, phase of
the Moon, etc.
So my generic "musbstd" method of "open first", "open second", etc turned out to
be completely disfunctional.
So far, I am unable to come up with a system-independent solution. But I have a
solution for Linux and maybe for MacOSX:
1) on Linux, I can use the information parsed from /proc/bus/usb/devices to say
"please open the USB device on USB bus 1, port 1", the so called USB device
"path", as seen in the system log and in /sys/bus/usb/devices.
2) on MacOSX, I was unable to find a way to discover the USB topology, but they
seem to maintain an uint32_t "location", which they promise to keep at least
across reboots (did not check this yet).
3) Windows I did not look at yet.
So we have a choice:
a) use system-dependent "musb_open_linux(usbpath,vendor,product)",
"musb_open_macosx(???,vendor,product)", etc
b) create order out of chaos by manually keeping a map of "instances" (first,
second, third device) to "persistant addresses". On Linux, it would be a file
containing something like this: "USB-TPC-0 is on bus1-port1, USB-TPC-1 is on
bus1-port2". Then again, I can say "please open USB-TPC interface instance 0" or
"instance 1", etc. There is a small difficulty with dealing with devices
temporarily or permanently going away, or changing physical addresses ("I moved
the USB device from port 1 to port 3"). This could be handled by telling the
user "hmm... USB topology has changed, please delete the map file and try
again", or we could come up with something more user friendly.
Any thoughts?
P.S. For my immediate need (I need this tomorrow), I will write a
musb_open_linux(usbpath,vendor,product) function.
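As a sketch of what the Linux part could look like with libusb-0.1 (the function name and arguments are placeholders, not the final musbstd API; translating a physical port into the current bus/device names would still have to come from parsing /proc/bus/usb/devices, which is not shown here):
#include <string.h>
#include <usb.h>      /* libusb-0.1 */

usb_dev_handle *musb_open_linux(const char *bus_name, const char *dev_name,
                                int vendor, int product)
{
   struct usb_bus *bus;
   struct usb_device *dev;

   usb_init();
   usb_find_busses();
   usb_find_devices();

   /* open the matching device at the given bus/device location,
      instead of relying on the enumeration order */
   for (bus = usb_get_busses(); bus != NULL; bus = bus->next) {
      if (strcmp(bus->dirname, bus_name) != 0)
         continue;
      for (dev = bus->devices; dev != NULL; dev = dev->next) {
         if (strcmp(dev->filename, dev_name) != 0)
            continue;
         if (dev->descriptor.idVendor == vendor &&
             dev->descriptor.idProduct == product)
            return usb_open(dev);
      }
   }
   return NULL;
}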
K.O. |
03 Jan 2006, Stefan Ritt, Suggestion, Handling multiple identical USB devices
|
> Any thoughts?
I got an idea of how to solve this problem in an OS-independent manner. The USB
devices and hubs form a tree, like this
          Root HUB
          0   1   2
          |   |    \__ ...
          |    \___
        DevY       \
                   HUB
                  0  1  2
                  |     |
                  |    DevX
                 HUB
                0  1  2
                |
              DevZ
This tree can be considered as an ordered tree, if you read it from left to right.
In that order, the devices are ordered
DevY - DevZ - DevX
Since the devices are ordered, the "instance" parameter from musb_init can be used
to identify them uniquely, like
instance==0 => DevY
instance==1 => DevZ
instance==2 => DevX
So I would say that we can use the current API using the "instance" parameter to
uniquely access a device. All we have to do is to build that tree, sort it, and then
use the instance parameter as an entry to that tree. The sorting takes care of
different ordering, which can happen during enumeration (depending on power-up
sequence, phase of the Moon etc.). So if you have three devices like above, DevZ
should always be at "instance==1". The only problem is if you unplug DevY for
example, then you get the map
instance==0 => DevZ
instance==1 => DevX
which is different from above. But if you have a different number of devices, you
likely have to change your frontend code anyhow, so you can change the device
mapping there as well.
In order to simplify the code, I would not build a complete tree and sort it, but
scan the whole tree hierarchically, i.e. look at
Bus1/Port1
Bus1/Port2
Bus1/...
Bus2/Port1
Bus2/Port2
...
Since there is a maximum of 127 USB devices in total, this scan should be pretty quick.
If you find a device with matching vendor and product ID, you increment an internal
counter. If that counter matches your instance parameter, you open that device.
The ultimate solution of course is to put an additional address into each device, so
you can distinguish them easily. For an out-of-the-box Web cam you probably have no
chance, but for the home-made MSCB nodes I put such an address into each node, so I
can distinguish them even if they have the same product and vendor ID. |
30 Dec 2005, Konstantin Olchanski, Bug Report, mhttpd "edit on start" broken for arrays
|
If a variable under "/experiment/edit on start/" is an array, it is correctly
offered for editing on the "start run page", but then all elements in the array
end up set to the value of the first element.
This appears to be an error in mhttpd.c:interprete(), in the "start dialog"
section. The non-working version in CVS reads:
   for (j = 0; j < key.num_values; j++) {
      size = key.item_size;
      sprintf(str, "x%d", n++);
      db_sscanf(getparam(str), data, &size, j, key.type);
      db_set_data_index(hDB, hsubkey, data, size + 1, j, key.type);
   }
the fix that works for me reads:
db_sscanf(getparam(str), data, &size, 0, key.type);
(notice: the argument "j" is replaced with "0").
The way I understand this, all array elements are encoded into individual HTTP
thingy strings, named sequentially x0, x1, ... and when we parse the values out
of them, the array index should never show up.
(Stefan, if you can, please commit a fix to svn).
K.O. |
03 Jan 2006, Stefan Ritt, Bug Report, mhttpd "edit on start" broken for arrays
|
> If a variable under "/experiment/edit on start/" is an array, it is correctly
> offered for editing on the "start run page", but then all elements in the array
> end up set to the value of the first element.
You are right. This bug was there from the beginning; you are just the first one
trying "edit on start" with an array. I applied your fix and committed it to SVN
revision 3013.
Stefan |
18 Aug 2005, Konstantin Olchanski, Info, minor changes to run transition code
|
Minor changes to run transitions code:
- improve debug messages
- fail transition if cannot connect to one of the clients
K.O. |
23 Dec 2005, Konstantin Olchanski, Bug Report, minor changes to run transition code
|
> Minor changes to run transitions code:
> - fail transition if cannot connect to one of the clients
This change introduced a problem:
1) a run is happily taking data
2) a frontend crashes
3) the web interface cannot stop the run (it cannot contact the crashed frontend)
until the dead client is removed by the timeout (10-60 seconds?).
I am now considering allowing the run to end even if some clients cannot be
contacted. The begin, pause and resume transitions would continue to fail if
clients cannot be contacted.
K.O. |
24 Dec 2005, Stefan Ritt, Bug Report, minor changes to run transition code
|
> I am now considering allowing the run to end even if some clients cannot be
> contacted. The begin, pause and resume transitions would continue to fail if
> clients cannot be contacted.
Sounds like a good idea.
- Stefan |
22 Dec 2005, Konstantin Olchanski, Info, midas max event size?
|
My TPC events are fairly large: 18 FEC cards * 128 channels per card * 2 Kbytes
per channel = about 4 Mbytes. In my
frontend, when I request this event size, MIDAS complains (in mfe.c) that it is
bigger than MAX_EVENT_SIZE, which
is set to 0.5 Mbytes in midas.h. What is the best way to deal with this? Should
we increase MAX_EVENT_SIZE to
something bigger? Remove the MAX_EVENT_SIZE limitation altogether?
For now, I increased the value MAX_EVENT_SIZE & co to (10*1024*1024) and it
seems to work (I also had to bump the
sanity check in bm_open_buffer() from 10E6 to 100E6). With 1/4 of the FEC cards,
the event size is 1 Mbyte; at ~6 ev/sec the machine is almost idle, with the biggest
CPU user being the event builder at 10% CPU utilization.
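For reference, the local change amounts to something like this in midas.h (the value is the one quoted above; the related buffer-size defines, the "& co", have to be bumped consistently):
#define MAX_EVENT_SIZE (10 * 1024 * 1024)   /* was 0.5 MB */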
K.O. |
23 Dec 2005, Stefan Ritt, Info, midas max event size?
|
> My TPC events are fairly large: 18 FEC cards * 128 channels per card * 2 Kbytes
> per channel = about 4 Mbytes. In my
> frontend, when I request this event size, MIDAS complaints (in mfe.c) that it is
> bigger than MAX_EVENT_SIZE, which
> is set to 0.5 Mbytes in midas.h. What is the best way to deal with this? Should
> we increase MAX_EVENT_SIZE to
> something bigger? Remove the MAX_EVENT_SIZE limitation altogether?
If you teach me how to remove the MAX_EVENT_SIZE, that would be perfect!
Unfortunately the limit comes from the shared memory on the back end (the so-called
"SYSTEM" shared memory). Due to the structure of the buffer manager, the shared
memory has to hold at least two events simultaneously. And once the shared memory
is created, its size cannot be changed without restarting all the clients. That's
the origin of the MAX_EVENT_SIZE. In former days, the total allowed shared memory on
a typical linux machine was 2MB. That's why I set MAX_EVENT_SIZE to 0.5 MB, so midas
takes 2*0.5MB=1MB plus 0.2MB for the ODB, leaving 0.8MB for other applications.
Nowadays, the shared memory might be bigger (actually it's a parameter during kernel
compilation), so one could consider increasing the default MAX_EVENT_SIZE. If you
make a survey of the shared memory sizes in some of the current distributions, we
can choose a safe value.
> For now, I increased the value MAX_EVENT_SIZE & co to (10*1024*1024) and it
> seems to work (I also had to bump the
> sanity check in bm_open_buffer() from 10E6 to 100E6). With 1/4 of the FEC cards,
> the event size is 1 Mbyte at ~6
> ev/sec the machine is almost idle, with the biggest CPU user being the event
> builder at 10% CPU utilization.
I made sure that there is no other limitation as the one given by MAX_EVENT_SIZE, so
it should work fine. Thanks for pointing out the wrong sanity check; that should be
changed in the repository. |
14 Dec 2005, Konstantin Olchanski, Bug Report, misc problems
|
I would like to document a few problems I ran into while setting up a new
experiment (two USB interfaces to Alice TPC electronics, plus maybe a USB
interface to CAMAC). I am using a midas cvs checkout from last October, so I am
not sure if these problems exist in the very latest code. I have fixes for all
of them and I will commit them after some more testing and after I figure out
how to commit into this new svn thingy.
- mxml: writing xml into an in-memory buffer probably produces invalid xml
because one of the mxml functions always writes "/>" into writer->fh, which is 0
for in-memory writers, so the "/>" tag goes to the console instead of the xml
data stream.
- hs_write_event() closes fd 0 (standard input), which confuses ss_getch(),
which makes mlogger not work (at least on my machine). I traced this down to the
history file file descriptors being initialized to zero and hs_write_event()
closing files without checking that it ever opened them.
- mevb: event builder did not work with a single frontend (a two-liner fix, once
Pierre showed me where to look. Why? My second TPC-USB interface did not yet
arrive and I wanted to test my frontend code. Yes, it had enough bugs to prevent
the event builder from working).
- mevb: consumes 100% CPU. Fix: add a delay in the main busy-loop.
- mlogger ROOT tree output does not work for data banks coming through the event
builder: mlogger looks for the bank definition under the event_id of mevb, in
/equipment/evb/variables, which is empty, as the data banks are under
/equipment/frontendNN/variables. This may be hard to fix: bank "TPCA" may be
under "fe01", "TPCB" under "fe02" and mlogger knows nothing about any of this.
Fix: go back to .mid files.
K.O. |
02 Dec 2005, Greg Hackman, Info, MIDAS on Cygwin
|
If you want to run MIDAS on Cygwin, make sure you have cygserver running. First set a Windows system environment variable CYGWIN=server. This is best done through the Control Panel -> System -> Advanced -> Environment Variables. Then run /usr/bin/cygserver-config in a Cygwin console window. Then reboot. After that your MIDAS executables should run properly.
If cygserver is not running, one (obvious) symptom is that odbedit fails immediately with a "Bad system call" error.
I've only tested this so far with odbedit and an offline analyzer that generates histograms in the same structure. Both of those work properly. |
23 Nov 2005, Stefan Ritt, Bug Fix, Endian swapping in mana.c
|
It was reported that the following code in mana.c:
   /* swap event header if in wrong format */
   if (pevent->serial_number > 0x1000000) {
      WORD_SWAP(&pevent->event_id);
      WORD_SWAP(&pevent->trigger_mask);
      DWORD_SWAP(&pevent->serial_number);
      DWORD_SWAP(&pevent->time_stamp);
      DWORD_SWAP(&pevent->data_size);
   }
does not work correctly for events having a true serial number above 16777216 (=0x1000000). After some consideration, I concluded that there is no good way to determine automatically the endian format of midas events without adding another field in the header, which would break the compatibility with all data recorded to date. I therefore changed the above code to
   /* swap event header if in wrong format */
#ifdef SWAP_EVENTS
   WORD_SWAP(&pevent->event_id);
   WORD_SWAP(&pevent->trigger_mask);
   DWORD_SWAP(&pevent->serial_number);
   DWORD_SWAP(&pevent->time_stamp);
   DWORD_SWAP(&pevent->data_size);
#endif
So if one wants to analyze events with the midas analyzer on a PC system for example where the events come from a VxWorks system with the opposite endian encoding, one has to set the flag -DSWAP_EVENTS when compiling the analyzer for that type of analysis. |
02 Nov 2005, Stefan Ritt, Suggestion, Where to put drivers?
|
Hi,
I would like to raise the question where to put the midas drivers.
We have both the example experiment and the MSCB Makefile, which expect to find the midas drivers under $MIDASSYS/drivers/camac or $MIDASSYS/drivers/usb. The documentation does not explicitly mention defining MIDASSYS as /usr/local, but some people do it. That however requires putting all drivers under /usr/local/drivers, which is not the case in the current Makefile for midas. Do you think that we should add this? Or should we better ask (->documentation) people to define MIDASSYS as wherever they install the midas package (usually /usr/home/<name>/midas or so)?
Looking forward to hear your opinion,
Stefan |
06 Nov 2005, Pierre-Andre Amaudruz, Suggestion, Where to put drivers?
|
Stefan Ritt wrote: | Hi,
I would like to raise the question where to put the midas drivers.
We have both the example experiment and the MSCB Makefile, which expect to find the midas drivers under $MIDASSYS/drivers/camac or $MIDASSYS/drivers/usb. The documentation does not explicitly mention defining MIDASSYS as /usr/local, but some people do it. That however requires putting all drivers under /usr/local/drivers, which is not the case in the current Makefile for midas. Do you think that we should add this? Or should we better ask (->documentation) people to define MIDASSYS as wherever they install the midas package (usually /usr/home/<name>/midas or so)?
Looking forward to hear your opinion,
Stefan |
Pierre-André Amaudruz wrote: |
The purpose of the MIDASSYS introduction was to permit the placement of the package in the user area as well as publishing the Midas entry point. Doing so, we lessen the necessity to "install" Midas in the standard OS directory such as /opt or /usr/local. Static linking, use of rpath, new "make minimal_install" go in that direction.
Regarding the drivers, organizing the directories per hardware type (camac, vme, fastbus, usb, etc) seems better to me. Originally, we mostly dealt with CAMAC and therefore the diverse Makefile had a default reference to /drivers/bus/(camacrpc). Now that we removed cnaf/rpc from the automatic mfe build, it indicates that CAMAC is no longer the prime hardware. Then we should leave open to the user the selection of the hardware and document the necessity for him/her to adjust the build appropriately ( $MIDASSYS/drivers/<HW_type> ). The different Makefile examples should be adjusted to the proper driver location they're dealing with.
Pierre-André |
|
06 Nov 2005, Stefan Ritt, Suggestion, Where to put drivers?
|
Stefan Ritt wrote: | We have both the example experiment and the MSCB Makefile, which expect to find the midas drivers under $MIDASSYS/drivers/camac or $MIDASSYS/drivers/usb. The documentation does not explicitly mention defining MIDASSYS as /usr/local, but some people do it. That however requires putting all drivers under /usr/local/drivers, which is not the case in the current Makefile for midas. Do you think that we should add this? Or should we better ask (->documentation) people to define MIDASSYS as wherever they install the midas package (usually /usr/home/<name>/midas or so)?
|
Pierre-André Amaudruz wrote: |
The purpose of the MIDASSYS introduction was to permit the placement of the package in the user area as well as publishing the Midas entry point. Doing so, we lessen the necessity to "install" Midas in the standard OS directory such as /opt or /usr/local. Static linking, use of rpath, new "make minimal_install" go in that direction.
Regarding the drivers, organizing the directories per hardware type (camac, vme, fastbus, usb, etc) seems better to me. Originally, we mostly dealt with CAMAC and therefore the diverse Makefile had a default reference to /drivers/bus/(camacrpc). Now that we removed cnaf/rpc from the automatic mfe build, it indicates that CAMAC is no longer the prime hardware. Then we should leave open to the user the selection of the hardware and document the necessity for him/her to adjust the build appropriately ( $MIDASSYS/drivers/<HW_type> ). The different Makefile examples should be adjusted to the proper driver location they're dealing with.
Pierre-André |
I agree with what you say. So I will include the drivers in the ("full") install to be copied under /usr/local/drivers, just for the people using midas in an "installed" way, but we keep the possibility to use a minimal_install to skip the driver installation. |
23 Aug 2005, Konstantin Olchanski, Info, new mvmestd api
|
For some time now, we have been thinking of updating the programming interface
for the VME bus interface drivers- mvmestd.h.
Until recently, we only had one type of vme interface- the PowerPC and
Universe-II based Motorolla MVME230x single board computers running VxWorks, and
that is the only VME interface supported by the present mvmestd.h & co in the
midas cvs.
Now we also have the Intel-PC and Universe-II based VMIC-VME single board
computers running Linux (RHL9 and RHEL4). They come with their own VME drivers
and interface library (from VMIC), and we (Pierre and myself) wrote a simplified
MIDAS-style library for using it with our ADC and TDC drivers.
After working with the VMIC-VME based systems this Summer, I am about to commit
our VME ADC and TDC drivers to MIDAS CVS. Since they use our VMIC-VME library, I
was inspired to integrate our library with the existing MIDAS VME API.
Both VME interfaces we use, MVME230x and VMIC-VME, use the same Universe-II
PCI-to-VME bridge. This bridge (+ OS drivers) provides memory mapped access to
VME directly from user memory space. Other VME interfaces require more
complicated interfacing and I tried to accommodate them in my design.
Note that this design is incomplete, it only has the VME features that we
currently use. I expect that the missing features (interrupts, DMA) will be
added to the "MIDAS VME API" as we start using them. Alternatively, they may be
implemented as interface-dependent "extensions".
So here goes:
void* mvme_getHandleA16(int crate,mvme_addr_t vmeA16addr,int numbytes,int vmeamod);
void* mvme_getHandleA24(int crate,mvme_addr_t vmeA24addr,int numbytes,int vmeamod);
void* mvme_getHandleA32(int crate,mvme_addr_t vmeA32addr,int numbytes,int vmeamod);
void mvme_writeD8(void* handle,int offset,int data);
void mvme_writeD16(void* handle,int offset,int data);
void mvme_writeD32(void* handle,int offset,int data);
int mvme_readD8(void* handle,int offset);
int mvme_readD16(void* handle,int offset);
int mvme_readD32(void* handle,int offset);
The "getHandle" methods return a handle for accessing the required VME address
space. For Universe-II based drivers with direct memory mapping, the handle is a
pointer to the vme-mapped memory and can be directly dereferenced (after casting
from void*). For other drivers, it may be a pointer to an internal data
structure or whatever.
The "readDnn" and "writeDnn" methods implement the single-word vme transfers. It
is intended that directly mapped interfaces (Universe-II) can implement them as
"extern inline" (RTFM C docs) for maximum efficiency.
I am still struggling with a specification for vme block transfers. How does one
specify chained transfers? (mimic "man readv" using "struct iovec"?) How to
specify when the transfers stop (on word count, on BERR, etc). How to specify
FIFO modes (where the vme address is not incremented, all data is read from the
same address. The Universe-II bridge does not have this mode, others do). How to
decode whether to use DMA or not? (The VMIC-VME DMA driver has high startup
overhead, short transfers are faster using PIO mode).
Anyhow, I do not need those advanced features immediately, so I omit them.
An implementation of this new interface will be committed to
midas/drivers/bus/vmicvme.{c,h} (and eventually I will modify vxVME.c to
conform). Drivers for sundry CAEN VME modules that use the new interface will be
committed to midas/drivers/divers (where I see drivers for other VME stuff).
Feedback is most welcome. I will try to get the stuff commited within the next
few days, plus a few days to shake down any bugs introduced during midasification.
K.O. |
01 Sep 2005, Stefan Ritt, Info, new mvmestd api
|
Good that you brought up the MIDAS VME API again, since this is still not complete, but
has to be completed soon.
Let me summarize the goals:
- have a single set of functions which can be used with all VME CPUs/Interfaces at our
institutes. Using this technique, one can change the interface or CPU and still keep
the same frontend source code. This was already successfully done with the MIDAS CAMAC
standard (as defined in mcstd.h)
- base any ADC/TDC driver we write on that API, so these modules can be used with any
CPU/Interface without changing the driver
- have a simple and easy to understand set of functions
- "cover" any specialities from the drivers, like memory mapping.
Especially this point is very delicate. If one explicitly uses memory mapping in the
API, one cannot use interfaces which do not support this (like the Struck SIS3100). So
one should only use explicit vme_read/vme_write functions. Now people might argue that
going for each single access through a function is an overhead as compared to a memory
mapped operation. This might be true (even with inline functions of modern C
compilers), but it should be small on fast computers. Typically a single VME operation
takes ~1us, while a function call takes much less.
Regarding the API implementation, I see now three "philosophies":
1) Handle oriented. One obtains a handle for each VME crate for each addressing mode,
then uses this handle for subsequent operation. This is the way the proposal from K.O.
is written.
2) Parameter oriented. There is no handle visible to the user code. All parameters are
passed in each call, like
mvme_read(crate, address_mode, vme_amod, source_addr, destination_addr, num_bytes);
3) ioctl() based. Same as 2), but the parameters like the address mode only get changed
via ioctl() when needed, like
vme_ioctl(request, parameter) such as
vme_ioctl(SET_CRATE, 1);
vme_ioctl(MVME_AMOD, A24);
mvme_read(source_addr, destination_addr, num_bytes);
This is how the current mvmestd.h is defined and how the
midas\drivers\bus\sis3100\sis3100.c is implemented.
Now the question is: should we implement 1), 2) or 3) ?
I had already lots of discussions with Pierre, and he convinced me that the ioctl() way
is not very nice. The advantage is that there is only one function to change
everything, so the complete API would be only 5 functions (init, exit, read, write,
ioctl), but of course there are many parameters to the ioctl() function.
On the other hand I do not like the option 1). If you have five crates on a single PC
(and that's what we will have in our MEG experiment), you need 5x3 handles. If you use
many nested subroutines in your event readout, you have to pass lots of handles around.
I do not like option 2) either, because each VME call contains many parameters, which
make it hard to read.
So I would propose the following: We implement something like 3), but with explicit
routines:
mvme_set_crate() each function has a _get_ partner, like mvme_get_crate()
mvme_set_address_mode()
mvme_set_amod()
mvme_set_blocktransfer()
mvme_set_fifomode() // speciality of the SIS3100 interface, write a
// block of data to the same address
...
mvme_read(vme_address, dest_addr, num_bytes);
mvme_write(src_addr, vme_address, num_bytes);
It might look unfamiliar to have to set the address mode explicitly, but in practice
one typically has a few configuration calls in A16 mode, then the data readout in A32
mode. So omitting the address mode in the vme_read/write calls saves typing effort.
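A short sketch of how a readout could then look (0x09 being the standard A32 non-privileged data address modifier; addresses and sizes are made up):
   DWORD buffer[64];

   mvme_set_crate(1);
   mvme_set_amod(0x09);                            /* A32 data access */
   mvme_read(0x08000000, buffer, sizeof(buffer));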
Since one does not use explicit handles, they have to be stored internally in the driver.
I did this in the sis3100.c, and found that this overhead is negligible. The
implementation is of course not thread safe, but does anybody use threads in the
experiment? I guess not.
Now I would like to hear anybody's comments. If we agree on this method, we have to
define a complete set of functions mvme_set_xxx. If we get a new interface in the
future which has new functionality (like 2eVME block transfers), we have to change the
API each time (while with the ioctl() we only would have to add one parameter). Or
maybe we can make a more generic mvme_set_vme_mode(mode), where mode could be fifomode,
2eVME mode, chained block transfer mode and so on.
Now there might be experiments which require the last bit of performance at the
frontend. They can decide to use the MIDAS API with some performance overhead, or they
can call directly the native driver API, but then be locked to the API. So everybody
has to decide himself.
I will meet with Pierre at the end of September, and would like to finalize the API at that time.
So please give it a thought and let me know.
Best regards,
Stefan |
01 Sep 2005, Stefan Ritt, Suggestion, new mvmestd api
|
Anothe idea which comes to my mind, we could make it kind of object oriented, like
typedef struct {
   int handle;
   int crate;
   int amod;
   int fifo_mode;
   ...
} MVME_INTERFACE;

main()
{
   MVME_INTERFACE *vme;

   vme = mvme_init();   // allocates and fills MVME_INTERFACE structure
   mvme_set_crate(vme, crate_no);
   mvme_set_address_mode(vme, A24);
   ...
   mvme_read(vme, vme_address, dest_addr, num_bytes);
   mvme_exit(vme);      // frees memory allocated in mvme_init()
}
------------------------------------------
This way we would only have one structure containing all required parameters, and get/set
functions for it, like the OO textbooks propose. This would actually make it thread
safe. The "vme" pointer from above still has to be passed around to subroutines, but a
single pointer is better than lots of handles. |
10 Sep 2005, Konstantin Olchanski, Info, new mvmestd api
|
> Good that you brought up the MIDAS VME API again, since this is still not complete, but
> has to be completed soon.
Right, but I can only complete the parts that I thought of and for which I already have
code. This leaves out support for DMA (read: any block transfers) and interrupts.
> Let me summarize the goals:
> - have a single set of functions which can be used with all VME CPUs/Interfaces at our
> institutes. Using this technique, one can change the interface or CPU and still keep
> the same frontend source code. This was already successfully done with the MIDAS CAMAC
> standard (as defined in mcstd.h)
Well, all interfaces are different and no amount of software will make them look all the
same. I am now facing this problem with the Wiener CCUSB CAMAC-USB2 interface. I can
implement all of mcstd.h, but the interface is intended to be used by downloading it with a
CAMAC readout program and mcstd.h knows nothing about that.
> - base any ADC/TDC driver we write on that API, so these modules can be used with any
> CPU/Interface without changing the driver
Right. Most useful.
> - have a simple and easy to understand set of functions
Right.
> - "cover" any specialities from the drivers, like memory mapping.
Exactly. We are facing a tricky task of inventing one API for two completely different
modes of operation- purely memory mapped access on UniverseII based hardware and message
passing access for the SIS3100 and VMUSB (Wiener VME-USB2).
> So one should only use explicity vme_read/vme_write functions.
Rightey-ho. The fly in the ointment is that all VME ADC and TDC drivers in TRIUMF are
written assuming memory mapped access, and I will not convert them to vme_read/vme_write
overnight (think of testing).
> So I would propose the following:
>
> mvme_set_crate() each funciton has a _get_ partner, like mvme_get_crate()
> mvme_set_address_mode()
> mvme_set_amod()
> mvme_set_blocktransfer()
> mvme_set_fifomode() // speciality of the SIS3100 interface, write a
> // block of data to the same address
> ...
>
> mvme_read(vme_address, dest_addr, num_bytes);
> mvme_write(src_addr, vme_address, num_bytes);
This is compatible with what we do now and I will look into implementing this for
VMIC/Linux and MVME/VxWorks interfaces.
> Now I would like to hear anybody's comments. If we agree on this method, we have to
> define a complete set of functions mvme_set_xxx.
We currently require only single-word transfers so we can concentrate on mvme_set_xxx for
block-transfers later.
> If we get a new interface in the
> future which has new functionality (like 2eVME block transfers), we have to change the
> API each time (while with the ioctl() we only would have to add one parameter).
This amounts to the same thing: add a new function or add a new ioctl() call.
> maybe we can make a more generic mvme_set_vme_mode(mode), where mode could be fifomode,
> 2eVME mode, chained block transfer mode and so on.
This is a can of worms and I would rather postpone discussion of block transfers. To give
you a taste: UniverseII does not have a "fifo mode"- it *always* increments the vme address
(silly). A fifo mode can be emulated using chained transfers (read 256 bytes from
addresses A through A+256, then read 256 more from address A, etc.), but the present VMIC
VME library does not support chained transfers. On VxWorks, we do not even have a driver
for the DMA engine, so no block transfers there at all.
I will now think about and post an updated proposal for mvmestd.h
K.O.
P.S. There is a proposal for musbstd.h heading your way, too. |
11 Sep 2005, Stefan Ritt, Info, new mvmestd api
|
> Right, but I can only complete the parts that I thought of and for which I already have
> code. This leaves out support for DMA (read: any block transfers) and interrupts.
DMA should be simple. We have a dma_flag in the MVME_INTERFACE structure, which only needs to
be set with mvme_set_dma_mode(...). The mvme_read/write subroutine then checks this flag and
calls the appropriate routine from the native API. About interrupts I haven't thought so much.
Does TRIUMF use interrupts anywhere? Or are all midas frontends in polled mode?
> Well, all interfaces are different and no amount of software will make them look all the
> same.
>
> > - "cover" any specialities from the drivers, like memory mapping.
>
> Exactly. We are facing a tricky task of inventing one API for two completely different
> modes of operation- purely memory mapped access on UniverseII based hardware and message
> passing access for the SIS3100 and VMUSB (Wiener VME-USB2).
Not all the same, but some common denominator. The memory mapped architecture can probably be
hidden in an API. So if one calls mvme_read/write, the routine checks if that region is already
mapped, and maps it if necessary. Then all you need is a proper offset and a memcpy(). Checking
about mapping causes some overhead. You have to check a hash table or a linked list which takes
time. But I think (see previous message) that this overhead should be small compared with the
IO operation.
> I am now facing this problem with the Wiener CCUSB CAMAC-USB2 interface. I can
> implement all of mcstd.h, but the interface is intended to be used by downloading it with a
> CAMAC readout program and mcstd.h knows nothing about that.
Downloading a program you probably cannot cover with a common API, you are right. The problem
with USB is that you can only make ~1000 transfers per second, even with 2.0. So if you want
more, you need the old list concept.
> > So one should only use explicity vme_read/vme_write functions.
>
> Rightey-ho. The fly in the ointement is that all VME ADC and TDC drivers in TRIUMF are
> written assuming memory mapped access, and I will not convert them to vme_read/vme_write
> overnight (think of testing).
You don't have to. This question only comes up if you (have to) use a non-memory mapped
interface. You can then either write two separate drivers, or one driver and two MVME APIs.
> > Now I would like to hear anybody's comments. If we agree on this method, we have to
> > define a complete set of functions mvme_set_xxx.
>
> We currently require only single-word transfers so we can concentrate on mvme_set_xxx for
> block-transfers later.
I need block transfers by the end of this month, so we should include it in our current discussion.
The problem is that I use our (own) DRS2 waveform digitizing board, where each board produces
70kB of data per event. In non-DMA mode, the transfer would take forever.
> > maybe we can make a more generic mvme_set_vme_mode(mode), where mode could be fifomode,
> > 2eVME mode, chained block transfer mode and so on.
>
> This is a can of worms and I would rather postpone discussion of block transfers. To give
> you a taste: UniverseII does not have a "fifo mode"- it *always* increments the vme address
> (silly). A fifo mode can be emulated using chained transfers (read 256 bytes from
> addresses A through A+256, then read 256 more from address A, etc.), but the present VMIC
> VME library does not support chained transfers. On VxWorks, we do not even have a driver
> for the DMA engine, so not block transfers there at all.
If a native API does not support block transfer, the MVME driver should just ignore the DMA
setting. An ADC driver might then run slower, but still run.
> I will now think about and post an updated proposal for mvmestd.h
Please also consider elog:221, I guess this is a cleaner and more flexible way of implementing
any MXXX standard.
- Stefan |
02 Nov 2005, I. K. arapkorir, Info, new mvmestd api
|
I managed to access some vme modules with the older vmicvme interface, but I am
somewhat confused by the new interface, as the sample code provided does not have a
specific test example. The test code provided in the earlier version for accessing the
V792 32ch. QDC was quite handy; how can I apply it to the new interface? |
02 Nov 2005, Pierre-Andre Amaudruz, Info, new mvmestd api
|
> I manage to access some vme modules with the older vmicvme interface and seemed
> confused with the
> new interface as the sample code provided does not have a specific test sample.
> The test code
> provided in the earlier version for accessing V792 32ch. QDC was quite handy,
> how can I apply it
> for the new interface?
Hello Ian,
I'm in the process of updating the V1190B, V792 and others to the new mvmestd.
These drivers will soon be committed to the repository.
Cheers, Pierre-André |
17 Oct 2005, Exaos Lee, Bug Fix, "make install" error under MacOS X
|
Under MacOS X, "make install" will cours an error like this:
...
install: darwin/bin/dio: No such file or directory
make: *** [install] Error 71
This can be fixed with the following diff:
404,405c404,405
< $(BIN_DIR)/mcnaf: $(UTL_DIR)/mcnaf.c $(DRV_DIR)/camac/camacrpc.c
< $(CC) $(CFLAGS) $(OSFLAGS) -o $@ $(UTL_DIR)/mcnaf.c $(DRV_DIR)/camac/camacrpc.c $(LIB) $(LIBS)
---
> $(BIN_DIR)/mcnaf: $(UTL_DIR)/mcnaf.c $(DRV_DIR)/bus/camacrpc.c
> $(CC) $(CFLAGS) $(OSFLAGS) -o $@ $(UTL_DIR)/mcnaf.c $(DRV_DIR)/bus/camacrpc.c $(LIB) $(LIBS)
438c438,439
< @for i in mserver mhttpd odbedit mlogger ; \
---
>
> @for i in mserver mhttpd odbedit mlogger dio ; \
444,447d444
< chmod +s $(SYSBIN_DIR)/mhttpd
<
< ifeq ($(OSTYPE),linux)
< install -v -m 755 $(BIN_DIR)/dio $(SYSBIN_DIR)
449c446
< endif
---
> chmod +s $(SYSBIN_DIR)/mhttpd
|
10 Oct 2005, Stefan Ritt, Info, Bus drivers moved in repository
|
The previous midas/drivers/bus directory contained both midas slow control bus drivers plus vme & fastbus & camac drivers. I separated them now into different directories:
midas/drivers/bus
midas/drivers/camac
midas/drivers/vme
midas/drivers/fastbus
which is a more appropriate structure. Doing this in subversion was really simple and showed me that the move to subversion was worth it. |
15 Oct 2005, Exaos Lee, Info, Bus drivers moved in repository
|
The Makefile should be modified too. Please see the diff below:
diff Makefile Makefile.modify
-------------------------------------
404,405c404,405
< $(BIN_DIR)/mcnaf: $(UTL_DIR)/mcnaf.c $(DRV_DIR)/bus/camacrpc.c
< $(CC) $(CFLAGS) $(OSFLAGS) -o $@ $(UTL_DIR)/mcnaf.c $(DRV_DIR)/bus/camacrpc.c $(LIB) $(LIBS)
---
> $(BIN_DIR)/mcnaf: $(UTL_DIR)/mcnaf.c $(DRV_DIR)/camac/camacrpc.c
> $(CC) $(CFLAGS) $(OSFLAGS) -o $@ $(UTL_DIR)/mcnaf.c $(DRV_DIR)/camac/camacrpc.c $(LIB) $(LIBS)
|
07 Oct 2005, Stefan Ritt, Info, MIDAS moved from CVS to Subversion
|
Dear Midas users,
I have moved midas from CVS to Subversion today. There were many reasons for doing so, which I don't want to explain in detail here. To use the new repository, there are several things to note:
- Anonymous checkout can be done now with
svn co svn+ssh://svn@savannah.psi.ch/afs/psi.ch/project/meg/svn/midas/trunk midas
svn co svn+ssh://svn@savannah.psi.ch/afs/psi.ch/project/meg/svn/mxml/trunk mxml
Use password svn (you might have to enter it several times). The mxml package is now outside of midas, so you have to check it out separately.
- Non-anonymous access (for commits!) is only possible if you have an account at PSI. While it is possible via
svn co svn+ssh://<your_name>@savannah.psi.ch/afs/psi.ch/project/meg/svn/midas/trunk midas
it is more convenient if you access the repository via AFS, since then you only have to obtain a valid AFS token once a day and do not have to supply passwords on each SVN access
- Before you do a checkout, delete (or rename) your old CVS working directory
- Subversion does not use file revisions, but a global revision number for the whole repository, which is now at 2752. To get some idea about subversion, read this very good book
- The Web access to the repository is at http://savannah.psi.ch/viewcvs/trunk/?root=midas
- The ViewCVS web interface allows on-the-fly generation of TAR balls from the current repository. Just click on the link Download tarball
- The old CVS repository has been switched to read-only and will be completely closed in a few weeks
- The machine midas.psi.ch will in the near future not be available any more for any repository
- All the $Log: tags in the midas files have been replaced by $Id: tags, since the former ones are not supported by SVN (for good reasons actually). To view the change log, do a svn log <filename>.
For the windows users, I have some additional notes:
- Do not use the Cygwin subversion package, but the binaries from here if you plan to access the SVN repository through AFS at PSI (or other places where AFS is available). If you map the AFS repository for example to "Y:", then the binaries access this under file:///Y:/svn/meg/... while the cygwin ones access this under file:///cygwin/y/svn/meg/... While this is ok in principle, it gives a conflict with TortoiseSVN which expects the first path. So if you want to use command line utilities together with TortoiseSVN, the Cygwin package won't work.
- Use the TortoiseSVN package. It's really great! It has a very nice "diff" viewer/merger, it's integrated into the Windows explorer, has a spell checker for composing comments for commits, etc.
- For the SVN binaries under Windows, you have to set the environment variable LANG=en_US, otherwise svn will talk in German to you on a standard PSI Windows PC.
If there are any problems in accessing the new repository, please let me know.
Note: This elog entry has been updated since the original one had a wrong username in the SVN URL.
|