ID | Date | Author | Topic | Subject
880 | 12 Apr 2013 | Stefan Ritt | Forum | Persistent ipcrm error |
> [odb.c:6038:db_paste,ERROR] found string exceeding MAX_STRING_LENGTH
Ok, so here is what probably happened. Some user program wrote a long string into the ODB and somehow corrupted it. This corruption persists as long as you work with
the binary data. Indeed, "rebuilding" the ODB helps in that case. What we actually do is the following: at the beginning of every run, the ODB contents are dumped into the data file via
/Logger/Channels/0/Setting/ODB dump
In case we get ODB corruption, we clear all *.shm files as well as the shared memory segments, create a fresh ODB, extract the ODB from the last successful run via
odbhist -e runxxx.mid
and load it via odbedit. I put some additional code into most midas functions to prevent this corruption (and that is why you saw the above error "found string exceeding
MAX_STRING_LENGTH"), but since the ODB is physically in the address space of each midas program, a program can theoretically bypass the midas functions and accidentally
write into the ODB through an uninitialized pointer or so.
Best regards,
Stefan |
192 | 20 Jan 2005 | Konstantin Olchanski | Bug Report | Persistency problem with h1_book() & co |
The current h1_book() macros (and the previous example analyzer code) have an
odd persistency problem: for example, the user wants to change some histogram
limits, edits the h1_book() calls, rebuilds and restarts the analyzer, starts a
new run, and observes that all histograms are filled using the old limits, his
changes "did not take". The user panics, I get paged during the Holy Lunch Hour,
everybody is unhappy.
This is what I think happens:
1) analyzer starts
2) LoadRootHistograms() loads old histograms from file
3) user code calls h1_book()
4) h1_book template in midas.h does this (roughly):
hist = (TH1X *) gManaHistosFolder->FindObjectAny(name);
if (hist == NULL) {
hist = new TH1X(name, title, bins, min, max);
5) since the histogram already exists (loaded from the file, with the old
limits), the TH1X constructor is not called at all, new histogram limits are
utterly ignored.
A possible solution is to unconditionally create the ROOT objects, like I do in
the example code posted at http://dasdevpc.triumf.ca:9080/Midas/191. That code
produces an annoying warning from ROOT about possible memory leaks. This could
be fixed by adding a two-liner to "find and delete" the object before it is
created, tripling the number of user code lines per histogram (find & delete,
then create). Highly ugly.
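For illustration, here is a minimal sketch of that "find and delete before create" pattern, assuming ROOT and the gManaHistosFolder folder used by the analyzer; the helper name book_th1f is made up for this sketch and is not a MIDAS function:

#include <TFolder.h>
#include <TH1F.h>

extern TFolder *gManaHistosFolder; // provided by the MIDAS analyzer framework

TH1F *book_th1f(const char *name, const char *title, int bins, double xmin, double xmax)
{
   // delete any copy that was loaded from the previous output file,
   // so the new booking limits always take effect
   TH1F *old = (TH1F *) gManaHistosFolder->FindObjectAny(name);
   if (old) {
      gManaHistosFolder->Remove(old);
      delete old;
   }

   TH1F *hist = new TH1F(name, title, bins, xmin, xmax);
   gManaHistosFolder->Add(hist);
   return hist;
}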
midas.h macros (h1_book & co) can be fixed by adding checks for histogram limits
and such, but I would much prefer a generic solution/convention that would work
for arbitrary ROOT objects without MIDAS-specific wrappers (think TProfile,
TGraph, etc...).
Any suggestions?
K.O. |
193 | 21 Jan 2005 | John M O'Donnell | Bug Report | Persistency problem with h1_book() & co |
> The current h1_book() macros (and the previous example analyzer code) have an
> odd persistency problem: for example, the user wants to change some histogram
> limits, edits the h1_book() calls, rebuilds and restarts the analyzer, starts a
> new run, and observes that all histograms are filled using the old limits, his
> changes "did not take". The user panics, I get paged during the Holy Lunch Hour,
> everybody is unhappy.
>
> This is what I think happens:
>
> 1) analyzer starts
> 2) LoadRootHistgrams() loads old histograms from file
I can't get onto cvs@midas.psi.ch right now
(cvs update
cvs@midas.psi.ch's password:
Permission denied, please try again.)
but when I changed LoadRootHistograms a few days ago I left it as:
} else if (obj->InheritsFrom( "TH1")) {
// still don't know how to do TH1s
so h1_book() is creating the first and only copy of the histograms.
I am able to create new histogram limits.
I don't get the memory leak problems.
However I have seen the memory leak problems before, and they are real.
They must be dealt with either by (1) first deleting the old histogram
or (2) ensuring that histogram names are unique in the whole application
(different modules/folders can not use the same histogram names).
I will return to this once I can do a cvs update for midas.
John.
> 3) user code calls h1_book()
> 4) h1_book template in midas.h does this (roughly):
> hist = (TH1X *) gManaHistosFolder->FindObjectAny(name);
> if (hist == NULL) {
> hist = new TH1X(name, title, bins, min, max);
> 5) since the histogram already exists (loaded from the file, with the old
> limits), the TH1X constructor is not called at all, new histogram limits are
> utterly ignored.
>
> A possible solution is to unconditionally create the ROOT objects, like I do in
> the example code posted at http://dasdevpc.triumf.ca:9080/Midas/191. That code
> produces an annoying warning from ROOT about possible memory leaks. This could
> be fixed by adding a two liner to "find and delete" the object before it is
> created, trippling the number of user code lines per histogram (find & delete,
> then create). Highly ugly.
>
> midas.h macros (h1_book & co) can be fixed by adding checks for histogram limits
> and such, but I would much prefer a generic solution/convention that would work
> for arbitrary ROOT objects without MIDAS-specific wrappers (think TProfile,
> TGraph, etc...).
>
> Any suggestions?
>
> K.O. |
194 | 21 Jan 2005 | Stefan Ritt | Bug Report | Persistency problem with h1_book() & co |
> I can't get onto cvs@midas.psi.ch right now
> (cvs update
> cvs@midas.psi.ch's password:
> Permission denied, please try again.)
I had to upgrade midas.psi.ch today to Scientific Linux 3.03. Most things are back to normal, but
I did not manage to restore the anonymous CVS account. I have to wait for next week when the experts are
there. I will let you know when it's working again.
- Stefan |
195 | 25 Jan 2005 | Stefan Ritt | Bug Report | Persistency problem with h1_book() & co |
> > I can't get onto cvs@midas.psi.ch right now
> > (cvs update
> > cvs@midas.psi.ch's password:
> > Permission denied, please try again.)
cvs@midas.psi.ch should be up and running again. |
196 | 25 Jan 2005 | John M O'Donnell | Bug Report | Persistency problem with h1_book() & co |
So now that cvs is reachable again I have confirmed that
the code segment
} else if (obj->InheritsFrom( "TH1")) {
// still don't know how to do TH1s
is indeed still present.
If you want me to look at this some more, you need to provide some code to exhibit the problem.
John.
> > The current h1_book() macros (and the previous example analyzer code) have an
> > odd persistency problem: for example, the user wants to change some histogram
> > limits, edits the h1_book() calls, rebuilds and restarts the analyzer, starts a
> > new run, and observes that all histograms are filled using the old limits, his
> > changes "did not take". The user panics, I get paged during the Holy Lunch Hour,
> > everybody is unhappy.
> >
> > This is what I think happens:
> >
> > 1) analyzer starts
> > 2) LoadRootHistgrams() loads old histograms from file
>
> I can't get onto cvs@midas.psi.ch right now
> (cvs update
> cvs@midas.psi.ch's password:
> Permission denied, please try again.)
>
> but when I changed LoadRootHistograms a few days ago I left it as:
>
> } else if (obj->InheritsFrom( "TH1")) {
>
> // still don't know how to do TH1s
>
> so h1_book() is creating the first and only copy of the histograms.
> I am able to create new histogram limits.
> I don't get the memory leak problems.
>
> However I have seen the memory leak problems before, and they are real.
> They must be dealt with either by (1) first deleteing the old histogram
> or (2) ensuring that histogram names are unique in the whole application
> (different modules/folders can not use the same histogram names).
>
> I will return to this once I can do a cvs update for midas.
>
> John.
>
> > 3) user code calls h1_book()
> > 4) h1_book template in midas.h does this (roughly):
> > hist = (TH1X *) gManaHistosFolder->FindObjectAny(name);
> > if (hist == NULL) {
> > hist = new TH1X(name, title, bins, min, max);
> > 5) since the histogram already exists (loaded from the file, with the old
> > limits), the TH1X constructor is not called at all, new histogram limits are
> > utterly ignored.
> >
> > A possible solution is to unconditionally create the ROOT objects, like I do in
> > the example code posted at http://dasdevpc.triumf.ca:9080/Midas/191. That code
> > produces an annoying warning from ROOT about possible memory leaks. This could
> > be fixed by adding a two liner to "find and delete" the object before it is
> > created, trippling the number of user code lines per histogram (find & delete,
> > then create). Highly ugly.
> >
> > midas.h macros (h1_book & co) can be fixed by adding checks for histogram limits
> > and such, but I would much prefer a generic solution/convention that would work
> > for arbitrary ROOT objects without MIDAS-specific wrappers (think TProfile,
> > TGraph, etc...).
> >
> > Any suggestions?
> >
> > K.O. |
471 | 23 Mar 2008 | Konstantin Olchanski | Info | Per-variable history implementation in the mlogger |
The changes to mlogger implementing per-variable history have been committed to
svn. Revision 4145.
The rationale for these changes is roughly described in
https://ladd00.triumf.ca/elog/Midas/347
The main user-visible effect is reduction of data volume written to history
files and better integration with the history plot system in mhttpd.
The new functionality is disabled by default, pending review by Stefan (except
for the /history/tags stuff, which will be created by mlogger and used by mhttpd).
To enable it, set "/equipment/xxx/Common/PerVariableHistory" to 1 (type TID_INT).
In the "per-variable" mode, each entry in /equipment/xxx/variables is assigned
it's own event id and creates it's own events in the history file. In the
"classical" (or per-equipment) mode, all variables are assigned the same event
id (equal to the equipment id) and are written to disk at the same time.
In other words, in per-equipment mode, if there are 100 variables and 1 of them
is updated, all 100 numbers are written to disk. In per-variable mode, only the
one updated variable is written out.
The one point for review in this implementation is the assignment of event ids.
The committed code uses the formula "1000*eq_id + n" (i.e. variables in equipment id
2 get event ids 2001, 2002, etc., variables in equipment id 3 get 3001, 3002, ...). This formula
works for most experiments, but as I understand it, it is no good for some experiments
at PSI. Other than inventing a better formula that would work for everybody in
every case, one can also assign event ids manually by creating appropriate
entries in "/history/events".
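As a purely illustrative sketch of this formula (and of the >1000-variables limitation discussed in the follow-up below); the function name is made up and this is not code from mlogger:

#include <cstdio>

int per_variable_event_id(int eq_id, int var_index) // var_index starts at 1
{
   return 1000 * eq_id + var_index;
}

int main()
{
   printf("eq 2, var 1    -> %d\n", per_variable_event_id(2, 1));    // 2001
   printf("eq 2, var 2    -> %d\n", per_variable_event_id(2, 2));    // 2002
   printf("eq 3, var 1    -> %d\n", per_variable_event_id(3, 1));    // 3001
   printf("eq 2, var 1001 -> %d\n", per_variable_event_id(2, 1001)); // 3001 - collides with equipment 3
   return 0;
}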
This code has been used at CERN for running ALPHA since last Summer and it will
be used extensively at TRIUMF for T2K/ND280 slow controls. Per-variable history
is also required for the pending implementation of "history logged directly to
an SQL database", to be used at T2K/ND280.
If history (ahem) is any guide, we will now have a brief period of fixing merge
errors and "works for me" mistakes.
K.O. |
472 | 23 Mar 2008 | Konstantin Olchanski | Info | Per-variable history implementation in the mlogger |
> The changes to mlogger implementing per-variable history have been committed to
> svn. Revision 4145.
To make code changes more clear, the commit was done in 3 stages:
revisions 4142 and 4143 are minor fixes, refactoring (switching the code to use helper
functions) and the implementation of history for structured banks;
revision 4144 implements the per-variable history;
revision 4145 is minor cleanup.
K.O. |
474 | 25 Mar 2008 | Stefan Ritt | Info | Per-variable history implementation in the mlogger |
Before approving the code, two conditions have to be fulfilled:
1) The code has to work on PSI experiments
2) The code must work without any SQL database
Concerning point 1), you correctly mentioned that the event numbering does not work
if there are more than 1000 variables per event. What I do not want is that there
will be a special T2K midas version and a special PSI version. This would make
maintenance horrible in the future. One could make the formula variable with id =
ev_id*n+var_n, where n is not fixed to 1000 but is variable (stored in the ODB). The
downside would be that if you analyze your history files offline (outside the
experiment) you have to know n a priori in order to read back the data. If you have
990 variables, then you add 20, then you modify n from 1000 to 1500 - and then you
screw yourself up, since you cannot read the old data any more.
Taking all this into account, I see no clean way to fix this except to modify the
database format (which you change anyhow "somehow" going to per-variable mode). Use a
32-bit ID for the event (16 bits) and the variable (16 bits). This will increase the
overhead, but only marginally, since there is already a 32-bit time stamp. But this
method would then work for all experiments at all times. I suspect that even in T2K
you will at some point come to a configuration where you have more than n variables
per event, whatever n is. So even you would benefit.
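A minimal sketch of the 32-bit id proposed here (16-bit event id plus 16-bit variable index); this only illustrates the packing and is not code from mlogger:

#include <cstdint>

inline uint32_t make_history_id(uint16_t event_id, uint16_t var_index)
{
   return (uint32_t(event_id) << 16) | var_index;  // event id in the high 16 bits
}

inline uint16_t history_event_id(uint32_t id)  { return uint16_t(id >> 16); }
inline uint16_t history_var_index(uint32_t id) { return uint16_t(id & 0xFFFF); }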
Concerning point 2), I like your ODBC approach. I never used it, but if you tell me
it works on all supported OSes it's fine with me - just make sure it compiles under
Windows (with the help of Pierre). One thing I would make sure of, however, is that it
runs by default without setting up a database. There are many experiments out there
which do not need an SQL database, and it would be a hassle for them all to set up a
database just to continue running. So by default I would use the current flat file
system, and then enable ODBC per configuration, with bindings to MySQL, pgSQL and
maybe SQLite3.
Cheers,
Stefan |
1329 | 21 Nov 2017 | Konstantin Olchanski | Release | Pending release of midas |
We are readying a new release of midas and it is almost here except for a few buglets on the new html status page.
The current release candidate branch is "feature/midas-2017-10" and if you have problems with the older versions
of midas, I recommend that you try this release candidate to check if your problem is already fixed. If the problem
still exists, please file a bug report on this forum or on the bitbucket issue tracker
https://bitbucket.org/tmidas/midas/issues?status=new&status=open
Highlights of the new release include
- new and improved web pages done in html and javascript
- many bug fixes and improvements for json and json-rpc support, including improvements in handling of long strings in odb
- locked (protected) operation of odb, where odb shared memory is not writable outside of odb operations
- improved multithread support for odb
- fixes for odb corruption when odb becomes 100% full
For the next release we hope to switch midas fully from C to C++ (building everything with C++ already works). To support el6 we avoid the use of
C++11 language constructs.
K.O. |
1413 | 05 Dec 2018 | Konstantin Olchanski | Info | Partial refactoring of ODB code |
The current ODB code has several structural problems and I think I now figured out how to straighten them out.
Here are the problems:
a) nested (recursive) odb locks
b) no clear separation between read-only access and read-write access
c) no clear separation between odb validation and repair functions
d) cm_msg() is called while holding a database lock
Discussion:
a) odb locks are nested because most functions lock the database, then call other functions that lock the database again. Most locking primitives - SystemV
semaphores, POSIX semaphores and mutexes - usually do not permit nested (recursive) locking.
For locking the odb shared memory we use a SystemV semaphore with recursion implemented "by hand" in ss_semaphore_wait_for(). This works ok.
For making odb thread-safe, we use POSIX mutexes, and we rely on an optional feature (PTHREAD_MUTEX_RECURSIVE) which seems to work on most OSes, but
is not required to exist and work by any standard. For example, recursive mutexes do not work in uclinux (linux for machines without an MMU).
I looked at implementing recursive mutexes "by hand", same as we have for the recursive semaphores, and realized that it is quite complicated and computationally
expensive (read: inefficient). (Also, I think nested and recursive locks are "not mainstream" and should rather be avoided.) As an example of the full complexity
of a nested lock, see the recent implementation in ROOT (good luck finding it).
A solution for this problem is well known. All functions are separated into "unlocked" user-callable functions and "locked" internal functions. Nested locking is
naturally eliminated.
Call sequences:
db_get_key() -> db_find_key() // odb is locked twice
become
db_get_key() -> db_get_key_locked() -> db_find_key_locked() // odb is locked once
Actual implementation of this scheme turns out to be a very clean and mechanical refactoring (moving the code around without changing what it does).
As a try, I refactored db_find_key() and db_get_key() and I like the result. Locking is now obvious; obscure error paths with hidden "unlock before return" are all
gone. Extra conversions between hDB and pheader are gone.
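A self-contained illustration of this locked/unlocked split (types and bodies are simplified placeholders, not the actual MIDAS code):

#include <mutex>

struct KEY        { int type; };
struct ODB_HEADER { /* stands in for the ODB shared-memory header */ };

static std::mutex gOdbMutex;
static ODB_HEADER gHeader;

// "locked" worker: assumes the caller already holds the lock, never locks itself
static int db_find_key_locked(const ODB_HEADER *pheader, const char *name, KEY *key)
{
   (void) pheader; (void) name;
   key->type = 0;
   return 1; // DB_SUCCESS
}

static int db_get_key_locked(const ODB_HEADER *pheader, const char *name, KEY *key)
{
   return db_find_key_locked(pheader, name, key); // no second lock here
}

// user-callable wrapper: the only place that takes the lock
int db_get_key(const char *name, KEY *key)
{
   std::lock_guard<std::mutex> lock(gOdbMutex);
   return db_get_key_locked(&gHeader, name, key);
}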
b) in this refactoring, functions that do not (should not) modify odb become easy to identify - the pheader argument is tagged "const".
This simplifies the implementation of "write-protected" odb - instead of ad-hoc db_allow_write_locked() sprinkled everywhere, one can have obvious calls to
"db_lock_read_only()" and "db_lock_read_write()".
Separation of locks into "read" and "write" locks, in turn, improves locking behaviour and helps against problems like lock starvation (which we did see with MIDAS),
because "read" locks are much more efficient: all readers can read the data at the same time, and locking is only exclusive when somebody needs to "write".
c) some db_validate() functions also try to do repair. This cannot work if validation is called from "read-only" functions like db_find_key(). I now think the "repair"
functions should be separate from the "validate" functions: validate functions should detect problems, repair functions would repair them. The question remains
when a good time is to run a full repair (probably at the time when we connect to the database - this way, simply starting "odbedit" will force a database check and
repair).
d) calls to cm_msg() while odb is locked have been a problem for a long time. Because cm_msg() itself calls odb functions, and because it also calls the event buffer code
(SYSMSG buffer) which in turn calls odb functions, there was trouble with deadlocks between the ODB and event buffer semaphores, trouble with recursive use of the
ODB, etc.
Right now we have all this partially papered over by having cm_msg() put messages into a memory buffer that we periodically flush, but I was never super happy
with that solution. For example, if we crash before the message buffer is flushed, all error messages are lost: they do not go into midas.log, they are not printed on
the screen, and they are not accessible in the core dump.
To resolve this problem, I have all "locked" functions call db_msg() instead of cm_msg(). db_msg() saves the messages in a linked list which is flushed into
cm_msg() immediately after we unlock odb.
If we crash after generating an error message but before it is flushed to cm_msg(), we can still access it through the linked list inside the core dump. This is an
improvement over what we have now. Ideally, all messages should be printed to the terminal and saved to midas.log and pushed into SYSMSG, but most of this is
impractical at a moment when odb is locked - as we already know it leads to deadlocks and other trouble...
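A rough sketch of this deferred-message idea (names and types are simplified stand-ins, not the real cm_msg()/db_msg() machinery):

#include <cstdio>
#include <string>
#include <vector>

struct DeferredMsg { int type; std::string text; };
static std::vector<DeferredMsg> gDeferredMsgs; // still visible in a core dump

// called while the ODB is locked: only appends, never touches ODB or SYSMSG
void db_msg(int type, const std::string &text)
{
   gDeferredMsgs.push_back({type, text});
}

// called immediately after the ODB is unlocked
void db_flush_msg()
{
   for (const auto &m : gDeferredMsgs)
      printf("[%d] %s\n", m.type, m.text.c_str()); // stands in for cm_msg()
   gDeferredMsgs.clear();
}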
Bottom line, I now have a path to improve the odb code and to resolve some of the long standing structural problems.
K.O. |
1414 | 11 Dec 2018 | Stefan Ritt | Info | Partial refactoring of ODB code |
All makes sense to me. I agree to proceed with the refactoring.
One additional comment: In the 90's, when I developed this code, locking was expensive. On a decent computer you could do a couple of thousand lock operations per second before you hit the 100%
CPU limit. Therefore I tried to reduce the number of lock operations as much as possible. For example, db_find_key() locks the ODB once and then goes through all keys before it unlocks again. If I had locked for
every key, with an ODB of tens of thousands of keys that would have taken very long in the old days.
Now the world has changed; we can do almost a million locks a second. So db_get_record() does not have to obtain a whole directory in one go, but can get each value separately and, if necessary, lock
the ODB on each key access. This would be slower, but only by a negligible amount these days. So in the spirit of making midas more robust, we can even go a step beyond simple refactoring and change the
locking scheme if it becomes more transparent and stable.
Best,
Stefan |
1419 | 26 Dec 2018 | Konstantin Olchanski | Info | Partial refactoring of ODB code |
> One additional comment: In the 90's when I developed this code, locking was expensive.
> Now the world has changed, we can do almost a million locks a second.
I am not sure this is quite true. The CPU can execute 3000 million operations per second (3GHz CPU, assuming 1 op/Hz),
so 1 lock operation is worth 3000 normal operations. Of course cache misses and branch mispredictions mess up
this simple arithmetic...
But I think the cost of a mutex lock/unlock can be easily measured (hmm... now I am curious).
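For example, a rough way to measure it is to time uncontended lock/unlock pairs of a std::mutex; this is a standalone benchmark sketch, not part of MIDAS:

#include <chrono>
#include <cstdio>
#include <mutex>

int main()
{
   std::mutex m;
   const int n = 10000000;
   auto t0 = std::chrono::steady_clock::now();
   for (int i = 0; i < n; i++) {
      m.lock();
      m.unlock();
   }
   auto t1 = std::chrono::steady_clock::now();
   double sec = std::chrono::duration<double>(t1 - t0).count();
   printf("%.2e uncontended lock/unlock pairs per second\n", n / sec);
   return 0;
}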
The bigger question is architectural: nested/recursive locks are definitely a bad thing to do (not just my opinion).
But closer to home, as I implemented the "write protected" ODB, lock/unlock suddenly has to do MMU operations
(map/unmap memory), and this is *very* expensive.
Also, as we start doing more multithreading, lock contention is becoming a problem, and the standard solution
is to implement read-locks and write-locks (everybody holding a read-lock can read the ODB at the same time
without waiting).
So separate read and write locks, plus write-protected (and/or read-protected) ODB shared memory,
all point in the direction of reworking the ODB locks so that nested/recursive locks are no longer needed.
I think Stefan and I are in agreement here.
K.O. |
1423 | 27 Dec 2018 | Stefan Ritt | Info | Partial refactoring of ODB code |
> I am not sure this is quite true. The CPU can execute 3000 million operations per second (3GHz CPU, assuming 1 op/Hz),
> so 1 lock operation is worth 3000 normal operations. Of course cache misses and branch mispredictions mess up
> this simple arithmetic...
You can try that with "t1" in odbedit. This times the number of db_get_data() calls midas can do per second. On my MacBook Pro I get 470'000
accesses per second. |
2742 | 30 Apr 2024 | Luigi Vigani | Bug Report | Params not initialized when starting sequencer |
Good afternoon,
After updating Midas to the latest develop commit
(0f5436d901a1dfaf6da2b94e2d87f870e3611cf1) we found a bug when starting the
sequencer. If we have a simple loop from a start value to a stop value with a given
step size, just printing the value at each iteration, everything looks good (see
first attachment). But then we included another script, which contains
several subroutines we defined for our detector, and tried to run the same
script. Unfortunately, after this the parameters seem uninitialized, and the
value printed at each loop iteration does not make sense (see second attachment). Also,
sometimes when pressing Run the "set parameters" window pops up, but sometimes not.
The script is this one:
>>>
COMMENT Test script to check for a specific bug
INCLUDE global_basic_functions
#CALL setup_paths
#CALL generate_DUT_params
PARAM lv_start, "Start of LV", 1.8
PARAM lv_stop, "Stop of LV", 2.1
PARAM lv_step, "Step of LV", 0.02
n_iterations = (($lv_stop - $lv_start)/$lv_step)
MSG "Parameters:"
MSG $lv_start
MSG $lv_stop
MSG $lv_step
MSG $n_iterations
MSG "Start of looping"
LOOP n, $n_iterations
lv_now = $lv_start + $n * $lv_step
MSG $lv_now
WAIT SECONDS, 1
ENDLOOP
<<<
and the only difference comes from commenting out the line:
>>>
INCLUDE global_basic_functions
<<<
as global_basic_functions is defined as a LIBRARY and it includes 75 (!)
subroutines...
Is it possible that when loading a large script it messes up the loading of
parameters?
Thank you very much,
Regards,
Luigi. |
Attachment 1: midas_sequencer_ok.png
Attachment 2: midas_sequencer_buggy2.png
2750 | 03 May 2024 | Zaher Salman | Bug Report | Params not initialized when starting sequencer |
Could you please export and send me the /Sequencer ODB tree (or just /Sequencer/Param and /Sequencer/Variables) in both cases while the sequence is running.
thanks,
Zaher
> Good afternoon,
>
> After updating Midas to the latest develop commit
> (0f5436d901a1dfaf6da2b94e2d87f870e3611cf1) we found out a bug when starting
> sequencer. If we have a simple loop from start value to stop value and step
> size, just printing the value at each iteration, we see everything good (see
> first attachment). Then we included another script though, which contains
> several subroutines we defined for our detector, and we try to run the same
> script. Unfortunately after this the parameters seem uninitialized, and the
> value at each loop does not make sense (see second attachment). Also, sometimes
> when pressing run the set parameter window would pop-up, but sometimes not.
>
> The script is this one:
>
> >>>
> COMMENT Test script to check for a specific bug
>
> INCLUDE global_basic_functions
>
> #CALL setup_paths
> #CALL generate_DUT_params
>
> PARAM lv_start, "Start of LV", 1.8
> PARAM lv_stop, "Stop of LV", 2.1
> PARAM lv_step, "Step of LV", 0.02
>
> n_iterations = (($lv_stop - $lv_start)/$lv_step)
>
> MSG "Parameters:"
> MSG $lv_start
> MSG $lv_stop
> MSG $lv_step
> MSG $n_iterations
>
> MSG "Start of looping"
>
> LOOP n, $n_iterations
> lv_now = $lv_start + $n * $lv_step
> MSG $lv_now
> WAIT SECONDS, 1
> ENDLOOP
> <<<
>
> and the only difference comes from commenting the line:
>
> >>>
> INCLUDE global_basic_functions
> <<<
>
> as global_basic_functions is defined as a LIBRARY and it includes 75 (!)
> subroutines...
>
> Is it possible that when loading a large script it messes up the loading of
> parameters?
>
> Thank you very much,
> Regards,
> Luigi. |
2751 | 03 May 2024 | Stefan Ritt | Bug Report | Params not initialized when starting sequencer |
Ok, here is the complete code to reproduce the problem. Load param_test.msl, which includes functions.msl. From the screenshot you see the variables containing
garbage, and you also see that from the ODB screenshot. For completeness, I added Sequencer.json which contains the whole sequencer tree.
The interesting thing is that this works sometimes, and sometimes not. I'm not sure if this is in the GUI or in the sequencer program, so we have to sort out who can
fix it ;-)
Best,
Stefan |
Attachment 1: param_test.msl
|
INCLUDE functions
PARAM lv_start, "Start of LV", 1.8
PARAM lv_stop, "Stop of LV", 2.1
PARAM lv_step, "Step of LV", 0.02
n_iterations = (($lv_stop - $lv_start)/$lv_step)
MSG "Parameters:"
MSG $lv_start
MSG $lv_stop
MSG $lv_step
MSG $n_iterations
MSG "Start of looping"
LOOP n, $n_iterations
lv_now = $lv_start + $n * $lv_step
MSG $lv_now
WAIT SECONDS, 1
ENDLOOP
|
Attachment 2: functions.msl
|
SUBROUTINE sub1
WAIT seconds, 1
ENDSUBROUTINE
SUBROUTINE sub2
WAIT seconds, 1
ENDSUBROUTINE
SUBROUTINE sub3
WAIT seconds, 1
ENDSUBROUTINE
SUBROUTINE sub4
WAIT seconds, 1
ENDSUBROUTINE
SUBROUTINE sub5
WAIT seconds, 1
ENDSUBROUTINE
SUBROUTINE sub6
WAIT seconds, 1
ENDSUBROUTINE
|
Attachment 3: Sequencer.json
|
{
"/MIDAS version": "2.1",
"/filename": "Sequencer.json",
"/ODB path": "/Sequencer",
"State": {
"New File/key": {
"type": 8,
"access_mode": 7,
"last_written": 1714720819
},
"New File": false,
"Path/key": {
"type": 12,
"item_size": 256,
"access_mode": 7,
"last_written": 1714720819
},
"Path": "",
"Filename/key": {
"type": 12,
"item_size": 256,
"access_mode": 7,
"last_written": 1714720819
},
"Filename": "param_test.msl",
"SFilename/key": {
"type": 12,
"item_size": 256,
"access_mode": 7,
"last_written": 1714720819
},
"SFilename": "/Users/ritt/online/userfiles/sequencer/param_test.msl",
"Next Filename/key": {
"type": 12,
"num_values": 10,
"item_size": 256,
"access_mode": 7,
"last_written": 1714720819
},
"Next Filename": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"Error/key": {
"type": 12,
"item_size": 256,
"access_mode": 7,
"last_written": 1714720819
},
"Error": "",
"Error line/key": {
"type": 7,
"access_mode": 7,
"last_written": 1714720819
},
"Error line": 0,
"SError line/key": {
"type": 7,
"access_mode": 7,
"last_written": 1714720819
},
"SError line": 0,
"Message/key": {
"type": 12,
"item_size": 256,
"access_mode": 7,
"last_written": 1714720819
},
"Message": "",
"Message Wait/key": {
"type": 8,
"access_mode": 7,
"last_written": 1714720819
},
"Message Wait": false,
"Running/key": {
"type": 8,
"access_mode": 7,
"last_written": 1714720819
},
"Running": true,
"Finished/key": {
"type": 8,
"access_mode": 7,
"last_written": 1714720819
},
"Finished": false,
"Paused/key": {
"type": 8,
"access_mode": 7,
"last_written": 1714720819
},
"Paused": false,
"Debug/key": {
"type": 8,
"access_mode": 7,
"last_written": 1714720819
},
"Debug": false,
"Current line number/key": {
"type": 7,
"access_mode": 7,
"last_written": 1714720819
},
"Current line number": 46,
"SCurrent line number/key": {
"type": 7,
"access_mode": 7,
"last_written": 1714720819
},
"SCurrent line number": 20,
"Follow Libraries/key": {
"type": 8,
"access_mode": 7,
"last_written": 1714720819
},
"Follow Libraries": true,
"Stop after run/key": {
"type": 8,
"access_mode": 7,
"last_written": 1714720819
},
"Stop after run": false,
"Transition request/key": {
"type": 8,
"access_mode": 7,
"last_written": 1714720819
},
"Transition request": false,
"Loop start line/key": {
"type": 7,
"num_values": 10,
"access_mode": 7,
"last_written": 1714720819
},
"Loop start line": [
43,
0,
0,
0,
0,
0,
0,
0,
0,
0
],
"SLoop start line/key": {
"type": 7,
"num_values": 10,
"access_mode": 7,
"last_written": 1714720819
},
"SLoop start line": [
17,
0,
0,
0,
0,
0,
0,
0,
0,
0
],
"Loop end line/key": {
"type": 7,
"num_values": 10,
"access_mode": 7,
"last_written": 1714720819
},
"Loop end line": [
47,
0,
0,
0,
0,
0,
0,
0,
0,
0
],
"SLoop end line/key": {
"type": 7,
"num_values": 10,
"access_mode": 7,
"last_written": 1714720819
},
"SLoop end line": [
21,
0,
0,
0,
0,
0,
0,
0,
0,
0
],
"Loop counter/key": {
"type": 7,
"num_values": 10,
"access_mode": 7,
"last_written": 1714720819
},
"Loop counter": [
6,
0,
0,
0,
0,
0,
0,
0,
0,
0
],
"Loop n/key": {
"type": 7,
"num_values": 10,
"access_mode": 7,
"last_written": 1714720819
},
"Loop n": [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
],
"Subdir/key": {
"type": 12,
"item_size": 256,
"access_mode": 7,
"last_written": 1714720819
},
"Subdir": "",
"Subdir end line/key": {
"type": 7,
"access_mode": 7,
"last_written": 1714720819
},
"Subdir end line": 0,
"Subdir not notify/key": {
"type": 7,
"access_mode": 7,
"last_written": 1714720819
},
"Subdir not notify": 0,
"If index/key": {
"type": 7,
"access_mode": 7,
"last_written": 1714720819
},
"If index": 0,
"If line/key": {
"type": 7,
"num_values": 10,
"access_mode": 7,
"last_written": 1714720819
},
"If line": [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
],
"If else line/key": {
"type": 7,
"num_values": 10,
"access_mode": 7,
"last_written": 1714720819
},
"If else line": [
0,
0,
0,
0,
0,
... 379 more lines ...
|
Attachment 4: Screenshot_2024-05-03_at_09.19.29.png
Attachment 5: Screenshot_2024-05-03_at_09.20.47.png
2752 | 03 May 2024 | Luigi Vigani | Bug Report | Params not initialized when starting sequencer |
It is pretty much the same as for Stefan; I attach the screenshots here. Also in my case it works sometimes, and sometimes partially (one or two params, like in
attachment 3).
> Could you please export and send me the /Sequencer ODB tree (or just /Sequencer/Param and /Sequencer/Variables) in both cases while the sequence is running.
>
> thanks,
> Zaher
>
>
> > Good afternoon,
> >
> > After updating Midas to the latest develop commit
> > (0f5436d901a1dfaf6da2b94e2d87f870e3611cf1) we found out a bug when starting
> > sequencer. If we have a simple loop from start value to stop value and step
> > size, just printing the value at each iteration, we see everything good (see
> > first attachment). Then we included another script though, which contains
> > several subroutines we defined for our detector, and we try to run the same
> > script. Unfortunately after this the parameters seem uninitialized, and the
> > value at each loop does not make sense (see second attachment). Also, sometimes
> > when pressing run the set parameter window would pop-up, but sometimes not.
> >
> > The script is this one:
> >
> > >>>
> > COMMENT Test script to check for a specific bug
> >
> > INCLUDE global_basic_functions
> >
> > #CALL setup_paths
> > #CALL generate_DUT_params
> >
> > PARAM lv_start, "Start of LV", 1.8
> > PARAM lv_stop, "Stop of LV", 2.1
> > PARAM lv_step, "Step of LV", 0.02
> >
> > n_iterations = (($lv_stop - $lv_start)/$lv_step)
> >
> > MSG "Parameters:"
> > MSG $lv_start
> > MSG $lv_stop
> > MSG $lv_step
> > MSG $n_iterations
> >
> > MSG "Start of looping"
> >
> > LOOP n, $n_iterations
> > lv_now = $lv_start + $n * $lv_step
> > MSG $lv_now
> > WAIT SECONDS, 1
> > ENDLOOP
> > <<<
> >
> > and the only difference comes from commenting the line:
> >
> > >>>
> > INCLUDE global_basic_functions
> > <<<
> >
> > as global_basic_functions is defined as a LIBRARY and it includes 75 (!)
> > subroutines...
> >
> > Is it possible that when loading a large script it messes up the loading of
> > parameters?
> >
> > Thank you very much,
> > Regards,
> > Luigi. |
Attachment 1: seq1.PNG
Attachment 2: seq2.PNG
Attachment 3: seq3.PNG
2755 | 03 May 2024 | Zaher Salman | Bug Report | Params not initialized when starting sequencer |
I have been able to reproduce the problem only once. From what I see, it seems that the Variables ODB tree is not initialized properly from the Param tree. Below are the messages from the failed run compared to a successful one. As far as I could see, the javascript code does not change anything in the Variables ODB tree (only monitors it). The actual changes are done by the sequencer program, or am I wrong?
Failed run:
16:14:25.849 2024/05/03 [Sequencer,INFO] + 3 *
16:14:24.722 2024/05/03 [Sequencer,INFO] + 2 *
16:14:23.594 2024/05/03 [Sequencer,INFO] + 1 *
16:14:23.592 2024/05/03 [Sequencer,INFO] Start of looping
16:14:23.591 2024/05/03 [Sequencer,INFO] (( - )/)
16:14:23.591 2024/05/03 [Sequencer,INFO]
16:14:23.590 2024/05/03 [Sequencer,INFO]
16:14:23.590 2024/05/03 [Sequencer,INFO]
16:14:23.589 2024/05/03 [Sequencer,INFO] Parameters:
16:14:23.562 2024/05/03 [Sequencer,TALK] Sequencer started with script "testpars.msl".
Successful run:
16:15:37.472 2024/05/03 [Sequencer,INFO] 1.820000
16:15:37.471 2024/05/03 [Sequencer,INFO] Start of looping
16:15:37.471 2024/05/03 [Sequencer,INFO] 15
16:15:37.470 2024/05/03 [Sequencer,INFO] 0.020000
16:15:37.470 2024/05/03 [Sequencer,INFO] 2.100000
16:15:37.469 2024/05/03 [Sequencer,INFO] 1.800000
16:15:37.469 2024/05/03 [Sequencer,INFO] Parameters:
16:15:37.450 2024/05/03 [Sequencer,TALK] Sequencer started with script "testpars.msl". |
2756 | 03 May 2024 | Stefan Ritt | Bug Report | Params not initialized when starting sequencer |
Ahh, that rings a bell:
1) JS opens start dialog box
2) User enters parameters and presses start
3) JS writes parameters
4) JS starts sequencer
5) Sequencer copies parameters to variables
Now how do you handle 3) and 4)? If you just issue the two mjsonrpc commands together, what could happen is that 4) is executed before 3), and we get the garbage.
You have to do 3) and WAIT for the return ("then" in the JS promise), and only then issue 4) from there.
Stefan |