ID   Date   Author   Topic   Subject
1484   11 Mar 2019   Francesco Renga   Forum   Run length
Dear all,
        I need to implement a DAQ sequence where a short run (100 events, which takes a couple of 
minutes) is taken every hour, with a long run in between two short runs. In the sequencer, I can do:

LOOP infinite

.... some ODB settings ....
     TRANSITION START
     WAIT events 100
     TRANSITION STOP

.... some ODB settings ....
     TRANSITION START
     WAIT seconds 3600
     TRANSITION STOP

ENDLOOP


I have two questions: 

- for the long run, I want to write to disk only a maximum number of events. I think I can suppress 
the event polling in the frontend, with an ODB query of the number of collected events. I'm 
wondering if there is a smarter way to do that. It is also OK if the run is stopped after a maximum 
number of events, but the subsequent short run should still start exactly 1 h after the previous 
short run. 

- with the script above, the actual time between the starts of two short runs would depend on 
the duration of the short run itself. Is there a way to start the short run exactly 1 h after the start 
of the previous short run?

Thank you in advance for your help,
              Francesco
1486   12 Mar 2019   Stefan Ritt   Forum   Run length
> Is there a way to start the short run exactly 1 h after the start 
> of the previous short run?

This is not possible with the current sequencer.
1487   12 Mar 2019   Pierre Gorel   Forum   Run length
> 
> .... some ODB settings ....
>      TRANSITION START
>      WAIT events 100
>      TRANSITION STOP
> I have two questions: 
> 
> - for the long run, I want to write to disk only a maximum number of events. I think I can suppress 
> the event polling in the frontend, with an ODB query of the number of collected events. I'm 
> wondering if there is a smarter way to do that. It is also OK if the run is stopped after a maximum 
> number of events, but the subsequent short run should still start exactly 1 h after the previous 
> short run. 

I don't know of a way to give you an exact number of events (maybe /Logger/Run duration). 

I personally use 
    WAIT ODBValue,"/Equipment/DTM/Statistics/Events sent",>,100

Where DTM is the frontend of my trigger. Because of the lag in stopping the run, the run will always exceed
the limit by a few seconds times the event rate.

Hope it helps
1490   13 Mar 2019   Konstantin Olchanski   Forum   Run length
I did not quite understand your desired sequence. Is this what you want:

- at 1pm
- start a run
- record 100 events
- end the run
- (this will be, say, 1:15pm)
- start a run
- at 2pm
- end the run
- start a run
- record 100 events
- ad infinitum

There are 2 difficulties with this:

1) If you want your cycle to be exactly 1 hour, you need to use cron or something similar - if you just "start; sleep 3600; stop", 
your cycle will be slightly longer than 1 hour because starting and stopping runs takes some time to complete.

2) if you want your "100 event" run to start exactly precisely on the hour, you need to stop the previous run a few 
minutes/seconds before the hour to avoid the "run stop" delay.

Instead of using the sequencer, I would use a shell script (run it from crontab to avoid problem (1))

#!/bin/sh
mtransition stop # stop the previous long run
odbedit -c 'set "/logger/channels/0/settings/event limit" 100'
odbedit -c 'set "/logger/auto restart" y'
mtransition start # start the short run
# end
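The matching crontab entry could look like this (a sketch; the script name and
path are hypothetical, adjust them to your installation):

# run the cycle script at the top of every hour
0 * * * * /home/daq/bin/short_run_cycle.sh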

In your frontend end_of_run() function, set "/logger/channels/0/settings/event limit" back to 0.
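In C, that ODB write could look roughly like the sketch below (untested; the
"Event limit" key is assumed here to be of type DOUBLE, as in current MIDAS
logger settings - check the actual type in your ODB first):

#include "midas.h"

/* sketch: reset the logger event limit at the end of each run */
INT end_of_run(INT run_number, char *error)
{
   HNDLE hDB;
   double limit = 0;

   cm_get_experiment_database(&hDB, NULL);
   db_set_value(hDB, 0, "/Logger/Channels/0/Settings/Event limit",
                &limit, sizeof(limit), 1, TID_DOUBLE);

   return SUCCESS;
}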

This will produce the following sequence:

- script will stop previous long run
- set event limit to 100, start the "100 events" run
- logger will stop at 100 events, call your frontend end_of_run(), set event limit to 0
- logger auto restart will start a new run, event limit is now 0, this is your long run
- on the hour, cron runs your script, cycle repeats from the top.

Instead of cron, you can use a looper script. Note that you must run
the main script in the background (note the "&") to avoid problem (1).

#!/bin/sh
# looper script
while true; do
   main_script &
   sleep 3600
done
# end

To stop the sequence, kill the looper script.

K.O.


> Dear all,
>         I need to implement a DAQ sequence where a short run (100 events, which takes a couple of 
> minutes) is taken every hour, with a long run in between two short runs. In the sequencer, I can do:
> 
> LOOP infinite
> 
> .... some ODB settings ....
>      TRANSITION START
>      WAIT events 100
>      TRANSITION STOP
> 
> .... some ODB settings ....
>      TRANSITION START
>      WAIT seconds 3600
>      TRANSITION STOP
> 
> ENDLOOP
> 
> 
> I have two questions: 
> 
> - for the long run, I want to write to disk only a maximum number of events. I think I can suppress 
> the event polling in the frontend, with an ODB query of the number of collected events. I'm 
> wondering if there is a smarter way to do that. It is also OK if the run is stopped after a maximum 
> number of events, but the subsequent short run should still start exactly 1 h after the previous 
> short run. 
> 
> - with the script above, the actual time between the starts of two short runs would depend on 
> the duration of the short run itself. Is there a way to start the short run exactly 1 h after the start 
> of the previous short run?
> 
> Thank you in advance for your help,
>               Francesco
419   07 Jan 2008   Stefan Ritt   Info   Roll-back for history system added
The midas history system always had the problem that the database can get
corrupted if the disk where the history records (*.hst & *.idx) are stored
gets full. This can happen if a history event can only be written partially to
the almost full disk. If some space is freed up later (by deleting other files),
the writing continues at the old position, leaving the partial event in the
database. In that case the whole history data of the current day cannot be read
because it is corrupted.

To solve the problem, a roll-back system has been implemented in the
hs_write_event() function. If an event cannot be written fully, the history file
is restored to the old state, so the partial event is removed from the end of
the file via truncation. This way only the data which could not be written to
the disk is missing in the history file, but the other data from that day is
still valid and readable. The change has been committed in revision 4107.
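The idea can be sketched as follows (an illustration only, not the actual
hs_write_event() code; plain POSIX calls):

#include <sys/types.h>
#include <unistd.h>

/* remember the file size before the write; if the event can only be
   written partially (disk full), truncate the file back so that no
   partial event remains at the end */
int write_event_with_rollback(int fh, const void *rec, size_t size)
{
   off_t old_size = lseek(fh, 0, SEEK_END);
   if (write(fh, rec, size) != (ssize_t) size) {
      ftruncate(fh, old_size);       /* remove the partial event */
      lseek(fh, old_size, SEEK_SET);
      return -1;                     /* this event is lost, file stays readable */
   }
   return 0;
}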
429   13 Feb 2008   Konstantin Olchanski   Info   Roll-back for history system added
> The midas history system always had the problem that the database can get
> corrupted if the disk gets full where the history records (*.hst & *.idx) are
> stored.

Stefan - big thanks for fixing this problem - it is one of those cases of "how come I
did not think of doing it!".

This change should fix the last remaining problem with history at CERN - we seem to
be unable to avoid running out of disk space once in a while (runaway scripts, fat
fingers, etc) and history got corrupted every time.

But to make things more interesting we had another history outage this week - we
happen to write history files to an NFS server (not recommended! do not do this!) and
when the NFS server had a glitch, history files got corrupted - because during the
glitch NFS was not available, I think this roll-back feature would not have helped.

Anyhow, I now have a patch to allow hs_read() to "skip the bad spots" in the history
files. (hs_gen_index() also needs a patch).

In a nutshell, if invalid history data is detected, the code continues to read the
data one byte at a time, looking for valid event_id markers (etc).

The code looks sane by inspection, and if nobody objects, I would like to commit it
in the next few days.

Here is the diff against src/history.c rev 4114

Index: history.c
===================================================================
--- history.c	(revision 4118)
+++ history.c	(working copy)
@@ -129,6 +129,7 @@
    HIST_RECORD rec;
    INDEX_RECORD irec;
    DEF_RECORD def_rec;
+   int recovering = 0;
 
    printf("Recovering index files...\n");
 
@@ -171,7 +172,7 @@
 
          /* skip tags */
          lseek(fh, rec.data_size, SEEK_CUR);
-      } else {
+      } else if (rec.record_type == RT_DATA) {
          /* write index record */
          irec.event_id = rec.event_id;
          irec.time = rec.time;
@@ -180,6 +181,15 @@
 
          /* skip data */
          lseek(fh, rec.data_size, SEEK_CUR);
+      } else {
+
+         if (!recovering)
+            cm_msg(MERROR, "hs_gen_index", "broken history file %d, trying to recover", (int)ltime);
+
+	 recovering = 1;
+         lseek(fh, -sizeof(rec)+1, SEEK_CUR);
+
+         continue;
       }
 
    } while (TRUE);
@@ -220,6 +230,7 @@
    time_t lt;
    int fh, fhd, fhi;
    struct tm *tms;
+   int idxsize = 0;
 
    if (*ltime == 0)
       *ltime = ss_time();
@@ -250,12 +261,15 @@
    hs_open_file(*ltime, "idf", O_RDONLY, &fhd);
    hs_open_file(*ltime, "idx", O_RDONLY, &fhi);
 
+   if (fhi >= 0)
+     idxsize = lseek(fhi, 0, SEEK_END);
+
    close(fh);
    close(fhd);
    close(fhi);
 
    /* generate them if not */
-   if (fhd < 0 || fhi < 0)
+   if (fhd < 0 || fhi < 0 || idxsize == 0)
       hs_gen_index(*ltime);
 
    return HS_SUCCESS;
@@ -1480,12 +1494,33 @@
             i = -1;
             M_FREE(cache);
             cache = NULL;
-         } else
+         } else {
+
+	 try_again:
+
             i = sizeof(irec);
-
-         if (cp < cache_size) {
             memcpy(&irec, cache + cp, sizeof(irec));
             cp += sizeof(irec);
+
+	    /* if history file is broken ... */
+	    if (irec.time < last_irec_time) {
+	      //printf("time %d -> %d, cache_size %d, cp %d\n", last_irec_time, irec.time, cache_size, cp);
+
+	      //printf("Seeking next record...\n");
+
+	      while (cp < cache_size)
+		{
+		  DWORD* evidp = (DWORD*)(cache + cp);
+		  if (*evidp == event_id) {
+		    //printf("Found at cp %d\n", cp);
+		    goto try_again;
+		  }
+
+		  cp++;
+		}
+
+	      i = -1;
+	    }
          }
       } else
          i = read(fhi, (char *) &irec, sizeof(irec));

K.O.
431   13 Feb 2008   Stefan Ritt   Info   Roll-back for history system added
> But to make things more interesting we had another history outage this week - we
> happen to write history files to an NFS server (not recommended! do not do this!) and
> when the NFS server had a glitch, history files got corrupted - because during the
> glitch NFS was not available, I think this roll-back feature would not have helped.

Actually I put our history data on a separate file system, on a separate disk controlled
by a separate RAID controller! If you write bulk data with the logger, and want to read
history files at the same time with mhttpd, you get a bottleneck if both sets of data are
on the same physical disk. Separating them (and even the controllers) sped things up
dramatically.

The rollback will not work for NFS, since it requires truncating the file if an event
gets only partially written. While on a full file system you can always *delete* data,
this does not work if NFS is down. This explains the behavior.

> Anyhow, I now have a patch to allow hs_read() to "skip the bad spots" in the history
> files. (hs_gen_index() also needs a patch).
> 
> In a nutshell, if invalid history data is detected, the code continues to read the
> data one byte at a time, looking for valid event_id markers (etc).
> 
> The code looks sane by inspection, and if nobody objects, I would like to commit it
> in the next few days.

Great. I was thinking of something like this myself. Having had a quick look at your
code, it looks good. The best of course would be if we had some "magic number" for
re-synchronizing the data stream, but that would blow up the file length. So searching
for the right event id is good, but will not work 100% of the time. Also the check

  if (irec.time < last_irec_time)

to see if the history is broken is very weak. If you take random data, it will be true
50% of the time and false 50% of the time. If one however makes the check

  if ((irec.time - last_irec_time) > 3600*24)

this would work correctly with random data in >99% of all cases (3600*24/2^32). Maybe
you should change that.
482   28 May 2008   Konstantin Olchanski   Info   Roll-back for history system added
> > But to make things more interesting we had another history outage this week...
> > Anyhow, I now have a patch to allow hs_read() to "skip the bad spots" in history files.
> 
> [Stefan suggested]
>
>   if ((irec.time - last_irec_time) > 3600*24)


Yes, your stronger check works quite nicely. The whole patch is now committed into SVN,
revision 4202.

This is how it all works:

0) teach hs_gen_index() to skip over bad data. This is important because hs_read() only
looks at data records listed in the index file: if bad data is omitted from the index,
hs_read() will never see it and we do not need to worry about it in hs_read().
0a) because hs_gen_index() does not check validity of time stamps, we still need to check
them in hs_read().
1) in hs_read(), if we detect bad data (invalid headers, bad time stamps, etc), we
regenerate the index files - this removes a whole class of bad data. We also look at time
stamps carefully and ignore records where time goes backwards (usually bad data) and ignore
records with time in the future beyond the end of the current history file (each history
file only contains 24*60*60 seconds = 1 day's worth of data); see the sketch below.
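As an illustration, the time stamp check can be sketched like this (made-up
variable names; the real code is in src/history.c, DWORD is the usual midas.h type):

/* accept a record only if its time stamp is plausible: not before the
   previous record, and not past the end of the one-day history file
   that starts at file_start_time */
int time_stamp_ok(DWORD t, DWORD last_time, DWORD file_start_time)
{
   if (t < last_time)
      return 0;                         /* time goes backwards: bad data */
   if (t > file_start_time + 24*60*60)
      return 0;                         /* beyond the end of this day's file */
   return 1;
}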

While certainly not bullet-proof, these changes should make it easier to deal with
corruption of history files.

K.O.
225   03 Oct 2005   Stefan Ritt   Info   Revised MVMESTD API
Dear MIDAS users and developers,

The "Midas VME Standard API" has been revised. We tried to incorporate all
comments and ideas we got so far. The mvme_ioctl() function was abandoned in
favor of several mvme_get/set_xxx functions. Furthermore, two additional
functions for read and write have been implemented to simplify writing/reading
single values over VME. The current API looks like this:

int mvme_open(MVME_INTERFACE **vme, int index);
int mvme_close(MVME_INTERFACE *vme);
int mvme_sysreset(MVME_INTERFACE *vme);
int mvme_read(MVME_INTERFACE *vme, void *dst, mvme_addr_t vme_addr,
              mvme_size_t n_bytes);
DWORD mvme_read_value(MVME_INTERFACE *vme, mvme_addr_t vme_addr);
int mvme_write(MVME_INTERFACE *vme, mvme_addr_t vme_addr, void *src,
               mvme_size_t n_bytes);
int mvme_write_value(MVME_INTERFACE *vme, mvme_addr_t vme_addr, DWORD value);
int mvme_set_am(MVME_INTERFACE *vme, int am);
int mvme_get_am(MVME_INTERFACE *vme, int *am);
int mvme_set_dmode(MVME_INTERFACE *vme, int dmode);
int mvme_get_dmode(MVME_INTERFACE *vme, int *dmode);
int mvme_set_blt(MVME_INTERFACE *vme, int mode);
int mvme_get_blt(MVME_INTERFACE *vme, int *mode);

The MVME_INTERFACE structure holds all internal data, similar to the FILE
structure in stdio.h. If several VME interfaces (of the same type) are present
in a PC, the function mvme_open can be called once for each crate, specifying
the index. The block transfer modes passed to mvme_set_blt control the usage of
DMA, MBLT64 and so on. Not all interfaces might support all modes, in which case
mvme_set_blt should return MVME_UNSUPPORTED. Then it's up to the user code to
ignore this error or choose a different mode.
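To illustrate, typical use of this API could look like the following sketch (the
address modifier / data mode constants and the VME address are examples only -
check mvmestd.h for the exact constant names supported by your driver):

#include "mvmestd.h"

MVME_INTERFACE *vme;
DWORD value;

mvme_open(&vme, 0);                      /* first (or only) interface */
mvme_set_am(vme, MVME_AM_A24);           /* A24 addressing */
mvme_set_dmode(vme, MVME_DMODE_D16);     /* 16-bit data */

value = mvme_read_value(vme, 0x110000);  /* single-value read */
mvme_write_value(vme, 0x110002, 0x1234); /* single-value write */

mvme_close(vme);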

So far we have implemented drivers for the SIS3100, SBS617/SBS618 and VMIC
interfaces using this standard. It should be noted that the VMIC uses solely
memory mapped VME I/O, which is completely hidden in the VMIC MVMESTD driver.

We would like to encourage people to switch to the revised MVMESTD API wherever
possible. If new drivers for ADCs and TDCs for example are written using this
standard, groups with different VME interfaces can use them without modification.

Although the standard works now for three different interfaces, it might be that
new interfaces need slight additions. They should be identified as soon as
possible, in order to adapt the MVMESTD quickly and freeze the API soon.

Interrupts are not (yet) implemented in the MVMESTD, because most experiments
use polling anyhow. If somebody needs interrupts, they should come forward
quickly and make a proposal for implementation.
98   17 Nov 2003   Stefan Ritt   Revised MVMESTD
Let me propose a revised scheme for midas standard VME calls (mvmestd.h). 

Pierre mentioned some limitations before, and I now also find some things 
to improve. Right now, the vme_open() call retrieves a handle. For some 
interfaces (like SBS/Bit3), one has to obtain separate handles for 
different addressing modes A24D32/A32D32 and so on, which I find a bit 
troublesome. I would rather keep the handle internally, invisible to the 
user, and use ioctl() statements to change the address/data mode. 

So the API could look like:

vme_open()       Deprecated, will be removed
vme_init(void)   Standard initialization, open device(s), stores handles
                 internally in a table
vme_exit(void)   Deallocates any memory, close handles

vme_read(void *dst, DWORD vme_addr, DWORD size)
vme_write(void *src, DWORD vme_addr, DWORD size)

vme_ioctl(int request, int *param)

                 Request is one of 
                   VME_IOCTL_CRATE_SET/GET
                     Sets VME crate (in case several interfaces are
                     plugged into a single PC, meaningless for embedded CPUs)
                   VME_IOCTL_DEST_SET/GET
                     VME_BUS/VME_RAM/VME_LM for VME bus, RAM in VME 
                     interface, or LM for local memory (used in Bit3 
                     interface)
                   VME_IOCTL_AMOD_SET/GET
                     Sets/Retrieves VME AMOD (= VME_AMOD_xxx as currently
                     defined in mvmestd.h)
                   VME_IOCTL_DSIZE_SET/GET
                     Sets/Retrieves VME data size (D8/D16/D32/D64)
                   VME_IOCTL_DMA_SET/GET
                     Enable/Disable DMA, should be independent of AMOD
                   VME_IOCTL_INTR_ATTACH/DETACH/ENABLE/DISABLE
                     Set VME interrupts
                   VME_IOCTL_AUTO_INCR_SET/GET
                     Set autoincrement of source pointer, can be disabled
                     for FIFO readout

vme_mmap(void **ptr, DWORD vme_addr, DWORD size)
vme_unmap(void *ptr, DWORD size)
                  Map/Unmap VME to local memory

vme_read2(void *dst, DWORD vme_addr, DWORD size, DWORD flags)
vme_write2(void *src, DWORD vme_addr, DWORD size, DWORD flags)
                 With these functions one can directly specify the flags
                 usually managed by vme_ioctl(). Useful for applications
                 where the address modifier for example has to be
                 different in each read/write operation.  

Note that the vme_read/write functions do not have a VME handle any more, 
nor an address modifier. This is all accomplished with vme_ioctl() calls.

Please have a look at this proposal, compare it with what you do currently 
in VME, and let me know if we should add/modify something. I volunteer to 
implement the API for the SBS/Bit3 617 and the Struck SIS1100/3100 
interfaces; for VxWorks, somebody at TRIUMF should take care of it.
99   20 Nov 2003   Pierre-André Amaudruz, Konstantin Olchanski   Revised MVMESTD
Before we try to merge the different access schemes for the different VME hardware,
we present the "optimal" configuration for the VMIC setup. This is a first shot, so take it
with caution.
From these definitions, we should be able to work out a compromise and come up with
a satisfactory standard.

A) The VMIC vme_slave_xxx() options are not considered.
B) The interrupt handling can certainly match the 4 entries required in the user frontend
    code i.e. Attach, Detach, Enable, Disable.

I don't understand your argument that the handle should be hidden. In case of multiple
interfaces, how do you refer to a particular one if not specified? 
The following scheme does require a handle for referring to the proper (device AND window).

1) deviceHandle = vme_init(int devNumber);
    Even though the VMIC doesn't deal with multiple devices,
    the SIS/PCI does and needs to init on a specific PCI card.
    Internally:
      opening of the device (/dev/sisxxxx_1) (ignored in case of VMIC).
      Possibly including a mapping to a default VME region of default size with default AM
      (VMIC: 16MB, A24). This way in a single call you get a valid handle for full VME access
      in A24 mode. This option needs to be elaborated. But in principle you need to declare the 
      VME region that you want to work on (vme_map).

2) mapHandle = vme_map(int deviceHandle, int vmeAddress, int am, int size);
    Returns a mapHandle specific to a device and window. The am has to be specified.
    Whatever the operations needed to get there, the mapHandle is a reference to that setting.
    It could just fill a map structure.
    Internally:
      WindowHandle[deviceHandle] = vme_master_create(BusHandle[deviceHandle], ...
      WindowPtr[WindowHandle] = vme_master_window(BusHandle[deviceHandle]
                                                                           , WindowHandle[deviceHandle]...

3) vme_setmode(mapHandle, const int DATA_SIZE, const int AM
                           , const BOOL ENA_DMA, const BOOL ENA_FIFO);
    Mainly used for vme_block_read/write. Defines, for the following reads, the data size and 
    am in case of DMA (a different DMA mode than the window definition could be used for
    optimal transfer).

    Predefine the mode of access:
    DATA_SIZE : D8, D16, D32
    AM             : A16, A24, A32, etc...
    enaDMA     : optional if available.
    enaFIFO     : optional for block read for autoincrement source pointer.

Remark:
PAA- I can imagine this function to be a vme_ioctl (int mapHandle, int *param)
        such that extension of functionality is possible. But by passing const int
        arguments, the optimizer is able to substitute and reduce the internal code.

4)   
   uint_8Value   = vme_readD8  (int mapHandle, uint_64 vmeSrceOffset)
   uint_16Value = vme_readD16 (int mapHandle, uint_64 vmeSrceOffset)
   uint_32Value = vme_readD32 (int mapHandle, uint_64 vmeSrceOffset)
   Single VME read access. In the VMIC case, this access is always through mapping.
   Value = *(WindowPtr[WindowHandle] + vmeSrceOffset) 
   or 
   Value = *(WindowStruct->WindowPtr[WindowHandle] + vmeSrceOffset) 
 
5)   
   status  = vme_writeD8   (int mapHandle, uint_64 vmeSrceOffset, uint_8 Value)
   status  = vme_writeD16 (int mapHandle, uint_64 vmeSrceOffset, uint_16 Value)
   status  = vme_writeD32 (int mapHandle, uint_64 vmeSrceOffset, uint_32 Value)
   Single VME write access.

6)
   nBytes = vme_block_read(mapHandle, char * pDest, uint_64 vmeSrceOffset, int size);
   Multiple read access. Can be done through standard do loop or DMA if available.
   nBytes < 0 :  error
   Incremented pDest  = (pDest + nBytes); Don't need to pass **pDest for autoincrement.

7)
   nBytes = vme_block_write(mapHandle, uint_64 vmeSrceOffset, char *pSrce, int size);
   Multiple write access.
   nBytes < 0 :  error
   Incremented pSrce  = (pSrce + nBytes); Don't need to pass **pSrce for autoincrement.

8) status = vme_unmap(int mapHandle)
   Cleanup internal pointers or structure of given mapHandle only.

9) status = vme_exit()
   Cleanup deviceHandle and release device.
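To see how these calls chain together, here is a hypothetical readout sketch using
the proposed API (the AM constant, addresses and sizes are illustrative only):

int deviceHandle, mapHandle;
uint_32 data;

deviceHandle = vme_init(0);                          /* first PCI card */
mapHandle = vme_map(deviceHandle, 0x100000, VME_AMOD_A24, 0x10000);
vme_setmode(mapHandle, D32, A24, FALSE, FALSE);      /* single-cycle D32 */

data = vme_readD32(mapHandle, 0x0);                  /* read at offset 0 */

vme_unmap(mapHandle);
vme_exit();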
100   21 Nov 2003   Stefan Ritt   Revised MVMESTD
Thanks for your contribution. Let me try to map your functionality to mvmestd calls:

> A) The VMIC vme_slave_xxx() options are not considered.

We could maybe do that through mvme_mmap(SLAVE, ...) instead of mvme_mmap(MASTER, ...)

> B) The interrupt handling can certainly match the 4 entries required in the user frontend
>     code i.e. Attach, Detach, Enable, Disable.

mvme_ioctl(VME_IOCTL_INTR_ATTACH/DETACH/ENABLE/DISABLE, func())

> I don't understand your argument that the handle should be hidden. In case of multiple
> interfaces, how do you refer to a particular one if not specified? 
> The following scheme does require a handle for referring to the proper (device AND window).

Four reasons for that:

1) For the SBS/Bit3, you need a handle for each address mode. So if I have two crates (and I do in our 
current experiment), and have to access modules in A16, A24 and A32 mode, I need in total 6 handles. 
Sometimes I mix them up by mistake, and wonder why I get bus errors. 

2) Most installations will only have single crates (as your VMIC). So if there is only one crate, why 
bother with a handle? If you have hundreds of accesses in your code, you save some redundant typing work.

3) A handle is usually kept global, which is considered not good coding style.

4) Our MCSTD and MFBSTD functions also do not use a handle, so people used to those libraries will find it 
more natural not to use one.

> 1) deviceHandle = vme_init(int devNumber);
>     Even though the VMIC doesn't deal with multiple devices,
>     the SIS/PCI does and needs to init on a specific PCI card.
>     Internally:
>       opening of the device (/dev/sisxxxx_1) (ignored in case of VMIC).
>       Possibly including a mapping to a default VME region of default size with default AM
>       (VMIC: 16MB, A24). This way in a single call you get a valid handle for full VME access
>       in A24 mode. This option needs to be elaborated. But in principle you need to declare the 
>       VME region that you want to work on (vme_map).

Just vme_init(); (like fb_init()).

This function takes the first device, opens it, and stores the handle internally. Sets the AM to a default 
value, and creates a mapping table which is initially empty or mapped to a default VME region. If one wants 
to access a secondary crate, one does a vme_ioctl(VME_IOCTL_CRATE_SET, 2), which opens the secondary crate, 
and stores the new handle in the internal table if applicable.

> 2) mapHandle = vme_map(int deviceHandle, int vmeAddress, int am, int size);
>     Returns a mapHandle specific to a device and window. The am has to be specified.
>     Whatever the operations needed to get there, the mapHandle is a reference to that setting.
>     It could just fill a map structure.
>     Internally:
>       WindowHandle[deviceHandle] = vme_master_create(BusHandle[deviceHandle], ...
>       WindowPtr[WindowHandle] = vme_master_window(BusHandle[deviceHandle]
>                               , WindowHandle[deviceHandle]...

The best would be if a mvme_read(...) to an unmapped region would automatically (internally) trigger a 
vme_map() call, and store the WindowHandle and WindowPtr internally. The advantage of this is that code 
written for the SIS for example (which does not require this kind of mapping) would work without change 
under the VMIC. The disadvantage is that for each mvme_read(), the code has to scan the internal mapping 
table to find the proper window handle. Now I don't know how much overhead this would be, but I guess a 
single for() loop over a couple of entries in the mapping table is still faster than a microsecond or so, 
thus making it negligible in a block transfer. 

> 3) vme_setmode(mapHandle, const int DATA_SIZE, const int AM
>                            , const BOOL ENA_DMA, const BOOL ENA_FIFO);
>     Mainly used for vme_block_read/write. Defines, for the following reads, the data size and 
>     am in case of DMA (a different DMA mode than the window definition could be used for
>     optimal transfer).
> 
>     Predefine the mode of access:
>     DATA_SIZE : D8, D16, D32
>     AM             : A16, A24, A32, etc...
>     enaDMA     : optional if available.
>     enaFIFO     : optional for block read for autoincrement source pointer.
> 
> Remark:
> PAA- I can imagine this function to be a vme_ioctl (int mapHandle, int *param)
>         such that extension of functionality is possible. But by passing const int
>         arguments, the optimizer is able to substitute and reduce the internal code.

Right. mvme_ioctl(VME_IOCTL_AMOD_SET/DSIZE_SET/DMA_SET/AUTO_INCR_SET, ...)

>    uint_8Value   = vme_readD8  (int mapHandle, uint_64 vmeSrceOffset)
>    uint_16Value = vme_readD16 (int mapHandle, uint_64 vmeSrceOffset)
>    uint_32Value = vme_readD32 (int mapHandle, uint_64 vmeSrceOffset)
>    Single VME read access. In the VMIC case, this access is always through mapping.
>    Value = *(WindowPtr[WindowHandle] + vmeSrceOffset) 
>    or 
>    Value = *(WindowStruct->WindowPtr[WindowHandle] + vmeSrceOffset) 

mvme_read(*dst, DWORD vme_addr, DWORD size); would cover this in a single call. Note that the SIS for 
example does not have memory mapping, so if one consistently uses mvme_read(), it will work on both 
architectures. Again, this takes some overhead. Consider for example a possible VMIC implementation

mvme_read(char *dst, DWORD vme_addr, DWORD size)
{
  for (i=0 ; table[i].valid ; i++)
    {
    if (table[i].start <= vme_addr && table[i].end >= vme_addr+size) /* window covers the requested range */
      break;
    }

  if (!table[i].valid)
    {
    vme_master_crate(...)
    table[i].window_handle = vme_master_window(...)
    }

  if (size == 2)
    mvme_ioctl(VME_IOCTL_DSIZE_SET, D16);
  else if (size == 1)
    mvme_ioctl(VME_IOCTL_DSIZE_SET, D8);

  memcpy(dst, table[i].window_handle + vme_addr - table[i].start, size);
}

Note this is only some rough code, would need more checking etc. But you see that for each access the for() 
loop has to be evaluated. Now I know that for the SBS/Bit3 and for the SIS a single VME access takes 
~0.5us. So the for() loop could be much faster than that. But one has to try. If one experiment needs the 
ultimate speed, it can use the native VMIC API, but then loses the portability. I'm not sure if one needs 
the automatic DSIZE_SET, maybe it works without.

>    status  = vme_writeD8   (int mapHandle, uint_64 vmeSrceOffset, uint_8 Value)
>    status  = vme_writeD16 (int mapHandle, uint_64 vmeSrceOffset, uint_16 Value)
>    status  = vme_writeD32 (int mapHandle, uint_64 vmeSrceOffset, uint_32 Value)
>    Single VME write access.

Ditto. mvme_write(void *src, DWORD vme_addr, DWORD size);

>    nBytes = vme_block_read(mapHandle, char * pDest, uint_64 vmeSrceOffset, int size);
>    Multiple read access. Can be done through standard do loop or DMA if available.
>    nBytes < 0 :  error
>    Incremented pDest  = (pDest + nBytes); Don't need to pass **pDest for autoincrement.

mvme_ioctl(VME_IOCTL_DMA_SET, TRUE);
n = mvme_read(char *pDest, DWORD vmd_addr, DWORD size);

>    nBytes = vme_block_write(mapHandle, uint_64 vmeSrceOffset, char *pSrce, int size);
>    Multiple write access.
>    nBytes < 0 :  error
>    Incremented pSrce  = (pSrce + nBytes); Don't need to pass **pSrce for autoincrement.

Ditto.

> 8) status = vme_unmap(int mapHandle)
>    Cleanup internal pointers or structure of given mapHandle only.

mvme_unmap(DWORD vme_addr, DWORD size)

Scans through the internal table to find the handle, then calls vme_unmap(mapHandle);

> 9) status = vme_exit()
>    Cleanup deviceHandle and release device.

mvme_exit();

Let me know if this all makes sense to you...

- Stefan
863   13 Feb 2013   Konstantin Olchanski   Info   Review of github and bitbucket
I have done a review of github and bitbucket as candidates for hosting GIT repositories for collaborative 
DAQ-type projects. Here are my impressions.

1. GIT as a software management tool seems to be a reasonable choice for DAQ-type projects. "master" 
repositories can be hosted at places like github or self-hosted (in the simplest case, only 
http://host/~user web access is required to host a git repository), for each "daq project" aka "experiment" 
one would "clone" the master repository, perform any local modifications as required, with full local 
version control, and when desired feed the changes back to the master repository as direct commits (git 
push), as patches posted to github ("pull requests") or patches emailed to the maintainers (git format-
patch).

2. Modern requirements for hosting a DAQ-type project include:
a) code repository (GIT, etc) with reasonably easy user access control (i.e. commit privileges should be 
assigned by the project administrators directly, regardless of who is on the payroll at which lab or who is 
a registered user of CERN or who is in some LDAP database managed by some IT department 
somewhere).
b) a wiki for documentation, with similar user access control requirements.
c) a mailing list, forum or bug tracking system for communication and "community building"
d) an ability to web host large static files (schematics, datasheets, firmware files, etc)
e) reasonable web-based tools for browsing the files, looking at diffs, "cvs annotate/git blame", etc.

3. Both github and bitbucket satisfy most of these requirements in similar ways:

a) GIT repositories:
aa) access using git, ssh and https with password protection. ssh keys can be uploaded to the server, 
permitting automatic commits from scripts and cron jobs.
bb) anonymous checkout possible (cannot be disabled)
cc) user management is simple: participants have to self-register, confirm their email address, and the 
project administrator then gives them commit access to specific git repositories (and wikis).
dd) for the case of multiple project administrators, one creates "teams" of participants. In this 
configuration the repositories are owned by the "team" and all designated "team administrators" have 
equal administrative access to the project.

b) Wiki:
aa) both github and bitbucket provide rudimentary wikis, with wiki pages stored in secondary git 
repositories (*NOT* as a branch or subdirectory of the main repo).
bb) github supports "markdown" and "mediawiki" syntax
cc) bitbucket supports "markdown" and "creole" syntax (all documentation and examples use the "creole" 
syntax).
dd) there does not seem to be any way to set the "project standard" syntax - both wikis have the "new 
page" editor default to the "markdown" syntax.
ee) compared to mediawiki (wikipedia, triumf daq wiki) and even plone, both github and bitbucket wikis 
lack important features:
1) cannot edit individual sections of a page, only the whole page at once, bad if you have long pages.
2) cannot upload images (and other documents) directly through the web editor/interface. Both wikis 
require that you clone the wiki git repository, commit image and other files locally and push the wiki git 
repo into the server (hopefully without any collisions), only then you can use the images and documents 
in the wiki.
3) there is no "preview" function for images - in mediawiki I can have small size automatically generated 
"preview" images on the wiki page, when I click on them I get the full size image. (Even "elog" can do this!)
ff) to be extra helpful, the wiki git repository is invisible to the normal git repository graphical tools for 
looking at revisions, branches, diffs, etc. While github has a special web page listing all existing wiki 
pages, bitbucket does not have such a page, so you better write down the filenames on a piece of paper.

c) mailing list/forum/bug tracking:
aa) both github and bitbucket implement reasonable bug tracking systems (but in both systems I do not 
see any button to export the bug database - all data is stuck inside the hosting provider. Perhaps there is 
a "hidden button" somewhere).
bb) bitbucket sends quite reasonable email notifications
cc) github is silent, I do not see any email notifications at all about anything. Maybe github thinks I do not 
want to see notices about my own activities, good of it to make such decisions for me.

d) hosting of large files: both git and wiki functions can host arbitrary files (compared to mediawiki, which 
only accepts some file types, e.g. Quartus pof files are rejected).

e) web based tools: thumbs up to both! web interfaces are slick and responsive, easy to use.

Conclusions:

Both github and bitbucket provide similar full-featured git repository hosting, user management and bug 
tracking.

Both provide very rudimentary wiki systems. Compared to full-featured wikis (e.g. mediawiki), this is like 
going back to SCCS for code management (from before RCS, before CVS, before SVN). Disappointing. A 
deal breaker if my vote counts.

K.O.
864   14 Feb 2013   Stefan Ritt   Info   Review of github and bitbucket
Let me add my five cents:

We have been using bitbucket at PSI for two months now, and are very happy with it.

Pros:

- We like the GIT flow model (http://nvie.com/posts/a-successful-git-branching-model/). You can at the same time do hot fixes, have a "distribution 
version", and keep a development branch, where you can try new things without compromising the distribution.
- Nice and fast Web interface, especially the "blame" is lightning fast compared to SVN/CVS
- GIT is non-centralized, so your local clone of a repository contains everything. If bitbucket is down/asks for money, you can continue with your local 
repository and clone it to some other hosting service, or host it yourself
- SourceTree (http://www.sourcetreeapp.com/) is a nice GUI for Mac lovers. 
- Easy user management
- Free for academic use

Con:

- Wiki is limited as KO wrote, so it should not be used as a "full" wiki to replace Plone for example, just to annotate your project
- SVN revision number is gone. This is on purpose since it does not make sense any more if you keep several parallel branches (merging becomes a 
nightmare), so one has to use either the (random) commit-ID or start tagging again.

So in conclusion, I would say that it's time to switch MIDAS to GIT. We'll probably do that in July when I will be at TRIUMF.

/Stefan
867   01 Apr 2013   Randolf Pohl   Info   Review of github and bitbucket
And my 2ct:

Go for git!

I've been using git since 2007 or so, after cvs and svn. Git has some killer features which I can't miss any more:

* No central repo. Have all the history with you on the train.
* Branching and merging, with stable branches and feature branches.
  Happy hacking while my students do analysis on a stable version.
  Or multiple development branches for several features.
  And merging really works, including fixing up merge conflicts.
* "git bisect" for finding which commit introduced a (reproducible) bug.
* "gitk --all"

I use git for everything: Software, tex, even (Ooffice) Word documents.

Go for git. :-)

Randolf
868   02 Apr 2013   Konstantin Olchanski   Info   Review of github and bitbucket
Hi, thanks for your positive feedback. I have been using git for small private projects for a few years now
and I like it. It is similar to the old SCCS days - good version control without having to set up servers,
accounts, doodads, etc.

> * No central repo. Have all the history with you on the train.
> * Branching and merging, with stable branches and feature branches.
>   Happy hacking while my students do analysis on a stable version.
>   Or multiple development branches for several features.

This is the part that worries me the most. Without a "central" "authoritative" repository,
in just a few quick days, everybody will have their own incompatible version of midas.

I guess I am okay with your private midas diverging from mainstream, but when *I* end up
with 10 different incompatible versions just in *my* repository, can that be good?

>   And merging really works, including fixing up merge conflicts.

But somebody still has to do it. With a central repository, the problem takes care of
itself - each developer has to do their own merging - with svn, you cannot commit
to the head without merging the head into your code first. But with git, I can just throw
my changes into some branch out there hoping that somebody else would do the merging.
But guess what, there ain't anybody home but us chickens. We do not have a mad Finn here
to enforce discipline and keep us in shape...

As an example, look at the HADOOP/HDFS code development: they have at least 3 "mainstream"
branches going, none has all the features combined together, and each branch has bugs with
the fixes in a different branch. What a way to run a railroad.

> * "git bisect" for finding which commit introduced a (reproducible) bug.
> * "gitk --all"
>
> Go for git. :-)

Absolutely. For me, as soon as I can wrap my head around this business of "who does all the merging".

K.O.
869   02 Apr 2013   Randolf Pohl   Info   Review of github and bitbucket
Hi Konstantin,

> > * No central repo. Have all the history with you on the train.
> > * Branching and merging, with stable branches and feature branches.
> >   Happy hacking while my students do analysis on a stable version.
> >   Or multiple development branches for several features.
> 
> This is the part that worries me the most. Without a "central" "authoritative" repository,
> in just a few quick days, everybody will have their own incompatible version of midas.

No! This is probably one of the biggest misunderstandings of the git workflow.

You can of course _define_ one central repo: This is the one that you and Stefan decide to be "the source" (as
Linus does for the kernel). It's like the central svn repo: Only Stefan and you can push to it, and everybody
else will pull from it. Why should I pull MIDAS from some obscure source, when your "public" repo is available?

Look at the Linux Kernel: Linus' version is authoritative, even though everybody and his best friend has his
own kernel repo.

So, the main workflow does not change a lot: You collect patches, commit them, and "push" them to the central
repo. All users "pull" from this central repo. This is very much what svn offers.

> 
> I guess I am okay with your private midas diverging from mainstream, but when *I* end up
> with 10 different incompatible versions just in *my* repository, can that be good?

See above: _You_ define what the central repo is.

But: I _bet_ you will very soon have 10 versions in your personal repo, because _you choose_ to do so. It's
just SO much easier. The non-linear history with many branches is a _feature_. I can't live without it any more:


Looking at my MIDAS analyzer:

I have a "public" repo in /pub/git/lamb.git. This is where I publish my analyzer versions. All my collaborators
pull from this.

Then I have my personal repo in ~/src/lamb. 
This is where I develop. When I think something is ready for the public, I merge this branch into the public repo. 

Whenever I start to work on a new feature, I create a branch in my _local_ repo (~/src/lamb).  I can fiddle and
play, not affecting anybody else, because it never sees the public repo.
OK, collaborator A finds a bug. I switch to my local copy of the public version, fix the bug, and push the fix
to the public repo. Then I go back to my (local) feature branch, merge the bug fix, and continue hacking.
Only when the feature is ready, I push it to the public repo.

Things get more interesting as you work on several features simultaneously. You have e.g. 3 topic branches:
(a) is nearly ready, and you want a bunch of people to test it.
    push branch "feature (a)" to the public repo and tell the people which branch to pull.
(b) is WIP, you hack on it without affecting (a).
(c) is bug fixes which may or may not affect (a) or (b).
And so on.

You will soon discover the beauty of several parallel branches.

Plus, git merges are SO simple that you never think about "how to merge"

> 
> >   And merging really works, including fixing up merge conflicts.
> 
> But somebody still has to do it. With a central repository, the problem takes care of
> itself - each developer has to do their own merging - with svn, you cannot commit
> to the head without merging the head into your code first. But with git, I can just throw
> my changes into some branch out there hoping that somebody else would do the merging.
> But guess what, there aint anybody home but us chickens. We do not have a mad finn here
> to enforce discipline and keep us in shape...

See above: You will have the exact same workflow in git, if you like.




> As an example, look at the HADOOP/HDFS code development: they have at least 3 "mainstream"
> branches going, none has all the features combined together, and each branch has bugs with
> the fixes in a different branch. What a way to run a railroad.

I haven't looked at this. All I can say: Branches are one of the best features.

> 
> > * "git bisect" for finding which commit introduced a (reproducible) bug.
> > * "gitk --all"
> >
> > Go for git. :-)
> 
> Absolutely. For me, as soon as I can wrap my head around this business of "who does all the merging".

Easy: YOU do it.

Keep going as in svn: Collect patches, and send them out.

And then, try "git checkout -b my_first_branch", hack, hack, hack,
"git merge master".

Best,

Randolf


> 
> K.O.
870   03 Apr 2013   Stefan Ritt   Info   Review of github and bitbucket
> * "git bisect" for finding which commit introduced a (reproducible) bug.

I did not know this command, so I read about it. This IS WONDERFUL! I had once (actually with MSCB) the case that a bug was introduced in the last 100 
revisions, but I did not know in which. So I checked out -1, -2, -3 revisions, then thought a bit, then tried -99, -98, then had the bright idea to try -50, then 
slowly converged. Later I realised that I should have done a binary search, like -50, if ok try -25, if bad try -37, and so on to iteratively find the offending 
commit. Finding that there is a command in git which does this automatically is great news.

Stefan
871   03 Apr 2013   Randolf Pohl   Info   Review of github and bitbucket
> > * "git bisect" for finding which commit introduced a (reproducible) bug.
> 
> I did not know this command, so I read about it. This IS WONDERFUL! I had once (actually with MSCB) the case that a bug was introduced in the last 100 
> revisions, but I did not know in which. So I checked out -1, -2, -3 revisions, then thought a bit, then tried -99, -98, then had the bright idea to try -50, then 
> slowly converged. Later I realised that I should have done a binary search, like -50, if ok try -25, if bad try -37, and so on to iteratively find the offending 
> commit. Finding that there is a command in git which does this automatically is great news.

even more so considering the nonlinear history (due to branching) in a regular git repo.
642   09 Sep 2009   Jimmy Ngai   Forum   Retrieve start/stop time in offline
Hi All,

I set "/Analyzer/ODB Load" to true and analyzed a run in offline mode. After
that, I found the start time and stop time in /RunInfo did not reflect the
correct times as seen online. How do I retrieve the correct start/stop time from
the ODB in offline mode?

Thanks!

Jimmy