    Reply  14 Dec 2004, Konstantin Olchanski, Forum, use of assert in mhttpd 
>    We've had mhttpd aborting regularly since upgrading from midas-1.9.3.  This
> happens during elog queries, and is due to an elog file that was incorrectly
> modified by hand.

(sorry for the delayed reply; for reasons unknown, I did not get an email notice when this was posted)

Yes, I agree, error handling in midas elog code is insufficient (note missing error checks for
read() and lseek() system calls). Anything but "perfect" elog files would cause funny errors and
malfunctions.

>  The modification to the file occurred 6 months ago.
>    el_retrieve(midas.c:15683) now has several assert statements, one of which
> aborts the program on reading the bad entry.

I added those to fix problems with "broken last NN days" and with infinite looping in the elog code
that we observed in TWIST.

You are welcome to replace the assert() statements with proper error handling. I used to have some code
that could report the filename of the bad elog file. Can we also report the exact file location for broken
files?

Please send me the diff, I will commit it to midas cvs.

>    Why is assert used, instead of an error return from the function (if
> necessary), and maybe an error message in the log file?  Assert statements are
> often removed, using NDEBUG, for normal use.

I use assert() in several ways:

0) I want a core dump each time X happens. (This is the only reasonable action when facing memory/stack
corruption. The problems in the elog code were stack corruption.)
1) "I am too lazy to write proper error handling code", so I just crash and burn. This includes the
case where "proper error handling" would be too invasive.
2) the error is too bad (or too deep) and there is no reasonable way to recover. Print an error message
and dump core (for later analysis). I sometimes use "cm_msg(); abort()" (assert() is essentially "printf("error"); abort()"); see the sketch below.
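
A minimal sketch of pattern 2), assuming the usual cm_msg(MERROR, routine, format, ...) call; the function name and message text are illustrative only, not actual midas elog code:

#include <stdlib.h>
#include "midas.h"

/* pattern 2): report the error, then dump core for later analysis */
void el_check_offset(long offset, long file_size)
{
   if (offset < 0 || offset > file_size) {
      cm_msg(MERROR, "el_check_offset",
             "elog file is corrupted: offset %ld is outside the file (size %ld)",
             offset, file_size);
      abort();   /* same effect as a failed assert(): leave a core dump */
   }
}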

Please refer to the literature for philosophical discussions on the uses of assert() (Argh! Stefan will have my
head again!), but I will mention that I find "abort() early, abort() often" very effective. BTW, this technique
is heavily used in the Linux kernel (oops, BUG(), panic()) with some good effect, too.

>    The problem elog entry had one character removed, so end-of-file came before
> the end of the message.  This could probably occur without the file being
> altered, if the disk containing the elog fills.

Yes, I think you are right. In TWIST, we have seen disk-full conditions break both elog and history.

K.O.
Entry  26 Jul 2003, Konstantin Olchanski, , use "odbedit -C" to connect to corrupted ODB 
Add switch "-C" to odbedit to allow it to connect to corrupted ODB. Then,
depending on corruption, the user can manually remove or correct the
corrupted entries. Also, some corruption is automatically fixed by "odbedit"
itself. I use this functionality to debug and fix broken ODBs.

K.O.

For your enjoyment, here is the diff:

diff -r1.64 odbedit.c
3058a3059
> BOOL          corrupted;
3063c3064
<   debug = cmd_mode = FALSE;
---
>   debug = corrupted = cmd_mode = FALSE;
3077a3079,3080
>     else if (argv[i][0] == '-' && argv[i][1] == 'C')
>       corrupted = TRUE;
3104c3107,3108
<         printf("               [-c Command] [-c @CommandFile] [-s size] [-g (debug)]\n\n");
---
>         printf("               [-c Command] [-c @CommandFile] [-s size]\n");
>         printf("               [-g (debug)] [-C (connect to corrupted ODB)]\n\n");
3123c3127,3133
<   if (status != CM_SUCCESS)
---
>   else if ((status == DB_INVALID_HANDLE)&&corrupted)
>     {
>     cm_get_error(status, str);
>     puts(str);
>     printf("ODB is corrupted, connecting anyway...\n");
>     }
>   else if (status != CM_SUCCESS)
Entry  27 Nov 2015, Konstantin Olchanski, Info, updated: note on midas history 
(update: resolve all FIXMEs, document the breakup of "structured banks")

This note documents the workings of the midas history.

There are 2 separate history sections: equipment history and links history.

* is equipment history enabled?

For each equipment, history is controlled by the value of /eq/xxx/common/period:

0 = history disabled
1 = history is enabled
>1 = history is enabled, throttled down

The throttling is implemented in log_history()/watch_history() by this algorithm:
the very first history event is recorded, then all changes to the data are ignored until
"period" seconds have elapsed. Then the next history event is recorded, and the following
changes are ignored until another "period" seconds elapse, and so forth. The period value "1" has
a special meaning - there is no throttling, all history events are logged.
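
A sketch of this throttling rule in C++ (an illustration of the rule above, not the actual log_history() code):

#include <ctime>

struct HistoryThrottle {
   int    period;          // value of /eq/xxx/common/period, in seconds
   time_t last_recorded;   // unix time of the last recorded history event

   HistoryThrottle(int p) : period(p), last_recorded(0) {}

   // decide whether a change to /eq/xxx/variables should be recorded
   bool should_record(time_t now) {
      if (period == 0)
         return false;                // history disabled
      if (period == 1)
         return true;                 // special case: no throttling
      if (last_recorded == 0 || now - last_recorded >= period) {
         last_recorded = now;         // record this event, then ignore
         return true;                 // changes for "period" seconds
      }
      return false;
   }
};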

If equipment history is enabled, history events are created by parsing the content of /eq/xxx/variables.

* what are history events?

A "history event" is a history atomic unit of data. Associated with each history event is a timestamp (unix time),
a name (limited to NAME_LENGTH in the old history) and a list of history tags that describe the individual data
values inside the history event.

When making history plots in mhttpd, for each curve on the plot, one selects a history event (from the list
of currently active events, recently active events or the list of all events that ever existed), then from the list of tags
inside the history event one selects the particular variable that will be plotted.

In the old MIDAS history, all history events are written into one history file (the .hst file plus optional .def and .idx event definition and time index files,
which can be/are regenerated automatically from the .hst file). History events are identified by 16-bit history event IDs; the persistent mapping
between history event names and the 16-bit history event IDs is stored in ODB /History/Events. In addition, the list of all known history event tags is
stored in ODB /History/Tags. For per-equipment history, the 16-bit history event ID is the value of ODB "/eq/xxx/common/event id".

In the SQL history (MySQL, SQLITE, etc), each history event is an SQL table. The history event tags are the SQL table columns.

In the new FILE history, each history event is written into a separate file; tag definitions are recorded in text format in the file header, and history event
data is appended to the file in binary format (fixed record size). If the history event definition changes, a new file is started.

* how are history events constructed?

The mlogger creates history events in open_history() by parsing ODB /eq/xxx/variables. Each ODB entry under "variables" is referred to as a "variable".

Each variable can be a single ODB value, an array of ODB values, or a subdirectory (corresponding to TID_STRUCT structured data banks). As each variable
is processed, one or more tags are created to describe it. Single ODB values will generally produce a single tag, while arrays can produce
one single tag - describing the whole array - or multiple tags - one per array element - depending on whether the array is "named" or not.

The code can generate two types of history:
- "per-equipment" history will have the tags for all variables concatenated together into one single history event
- "per-variable" history will have one history event defined for each variable. Inside could be one tag - for single odb values and unnamed arrays - or multiple tags - for named arrays and structured data 
banks.

Per-equipment history is the original MIDAS history implementation.

Per-variable history was added to permit efficient data storage in SQL tables. Its initial implementation used 1 ODB hotlink for each variable, and it was easy to exceed the maximum permitted number of 
ODB hotlinks (db_open_record()).

To reduce consumption of hotlinks, db_watch() has been implemented and now per-variable history only uses 1 ODB hotlink per equipment.

With db_watch, per-equipment history is no longer available. Per-variable history is the new default (and the only option).

* how are the history event tags constructed?

(quirk - single odb values are treated as arrays of length "1")

FIXME: single odb values should be treated as such, /eq/xxx/settings/names should not be applied

(quirk - "string" ODB entries are not permitted)

FIXME - single odb values of type TID_STRING should be possible with the SQL, FILE and MIDAS history. Arrays of strings are impossible: "struct TAG" does not have a data field for string length - only n_data and 
the item length implied through its TID.
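
For reference, the tag definition in midas.h looks roughly like this (quoted from memory; treat the exact field names as an assumption):

typedef struct {
   char  name[NAME_LENGTH];   // tag (variable) name
   WORD  type;                // midas data type (TID_xxx)
   DWORD n_data;              // number of array elements
} TAG;

// note there is no field for a per-element string length - for TID_STRING
// the item length can only be implied through the TID, hence the FIXME above.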

History event tags are constructed in the mlogger add_equipment().

For variables of type TID_KEY (subdirectories, corresponding to TID_STRUCT structured banks), one tag is generated for each subdirectory entry. Tag names for /eq/xxx/var/aaa/bbb will be "aaa_bbb"
(with an underscore).

FIXME: subdirectory entries of type TID_KEY and TID_LINK should be explicitly forbidden.
FIXME: TID_KEY could be supported by replacing db_get_data() with db_get_record() in watch_history().
FIXME: TID_LINK could be supported by adding db_watch() on the link target.

For named arrays, individual tags are generated for each array element. Tag names are taken from the names array. For empty tag names (blank entries in the names array), tags are "aaa_0", "aaa_1", etc (for 
/eq/xxx/var/aaa). For "single names" arrays, tag names have the variable name appended (with a space): for /eq/xxx/var/aaa and an empty names array, tags will be "aaa_0 aaa", "aaa_1 aaa", etc. For a 
populated names array, the tags will be "name0 aaa", "name1 aaa", etc.

For unnamed arrays and single odb variables (in ODB, single odb variables are arrays of length 1), a single tag is generated.

For TID_LINK variables what happens? FIXME!

FIXME: support TID_LINK variables by correctly parsing the link target and setting a db_watch() on the link target.

Named arrays have a "Names" entry in /eq/xxx/settings. For example, to add names to /eq/xxx/var/aaa, create a string array "/eq/xxx/settings/names aaa". The names array should be at least as long as 
the corresponding data array. Individual entries in the names array can be left blank (tag names will be "aaa_0", "aaa_1", etc). Duplicate tag names are not permitted.

A single "Names" entry can be created to name all arrays in variables with the same names ("single names"). Create /eq/xxx/settings/names" and arrays /eq/xxx/var/aaa and /eq/xxx/var/bbb will have 
history tags "name0 aaa", "name1 aaa", "name0 bbb", "name1 bbb", etc. If "names" are left blank, tag names will be "aaa_0 aaa", "aaa_1 aaa", "bbb_0 bbb", "bbb_1 bbb", etc.

In the mhttpd variables viewer, "single name" arrays are displayed in a 2D table.

* /history/links history

History events are created for each entry under /history/links.

Two types of links are permitted:

/history/links/aaa is a link to a subdirectory: db_watch() is setup to watch this subdirectory, tags are created for each subdirectory entry (1 tag per entry). There is no possibility for naming array elements, so 1 tag per array, regardless of the number of elements.

/history/links/bbb is a subdirectory with links to odb values: db_watch is setup to watch each link target, tags are created for each link (1 tag per link). tag name is the link name (NOT the target name). There is no possibility for naming array elements.

FIXME: Mixing links and subdirectories is not permitted, but could be done - additional db_watch() will need to be done on any links.

The update period of history events created for /history/links is controlled by entries in "/history/links periods". Numeric values of the periods are the same as for equipment histories. A numeric value of 0 disables the history for a particular event.

K.O.
Entry  08 Jun 2006, Konstantin Olchanski, Bug Fix, updated vmicvme driver 
I committed the latest VMIC VME driver we use at TRIUMF. It has working support
for D32 and D64 DMA and can move data from the SIS3820 multiscaler through the
MIDAS frontend at > 30 Mbytes/sec on our VMICVME-7805 machines. The actual DMA
speed on the VME bus is around 50 Mbytes/sec, effective data rate is lower
because of a memcpy() from the kernel DMA buffer into user memory (required by
the MIDAS mvmestd.h interface, quite inefficient for DMA operations). K.O.
Entry  27 Jun 2011, Konstantin Olchanski, Info, updated mhttpd history "export" function 
The mhttpd history "export" function has been converted to the new midas history
interface and should now work for SQL-based history systems. In the process,
improvements by Eoin Butler (CERN AD-5/ALPHA) were merged - adding a UNIX
timestamp and a better text timestamp. Also now "export" outputs the actual
values from the history file - the scaling values from the definition of the
history plot panel are no longer applied.

Here is an example of the new file format:

Time, Timestamp, Run, Run State, SLOW
2011.06.21 15:45:21, 1308696321, 13292, 3, -89.1007

svn rev 5104
K.O.
Entry  19 May 2021, Konstantin Olchanski, Info, update of event buffer code 
A big update to the event buffer code was merged today.

two important bug fixes:

- a logic error in bm_receive_event() (actually bm_fill_read_cache_locked()):
use of an uninitialized variable to increment the read pointer caused a crash
with the error "read pointer points to an invalid event"
- a missing bm_unlock() in bm_flush_cache() caused double-locking of the event
buffer, which caused a hang and a subsequent crash via the watchdog timeout.

several improvements:

- bm_receive_event_vec(std::vector<char>) with automatic memory allocation: one 
does not need to worry about providing a large enough event buffer to receive event 
data. For local connections MAX_EVENT_SIZE is no longer used; for remote 
connections, a buffer of MAX_EVENT_SIZE is allocated automatically - this is a 
limitation of the MIDAS RPC layer (it does not know how to allocate memory to 
receive arbitrarily large data).

(MAX_EVENT_SIZE is now only used in bm_receive_event_rpc()).
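
A sketch of the new bm_receive_event_vec() call (the exact signature is my assumption here - check midas.h; process_event() is a hypothetical user function):

#include <vector>
#include "midas.h"

void poll_buffer(INT hBuf)
{
   std::vector<char> event;   // grows automatically to fit the event
   int status = bm_receive_event_vec(hBuf, &event, BM_NO_WAIT);
   if (status == BM_SUCCESS) {
      // "event" now holds a complete midas event (header + banks)
      process_event(event.data(), event.size());
   }
}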

- rpc_send_event_sg() - a thread-safe method to send events to the mserver. It 
takes an array of scatter-gather buffers, so a midas event does not have to be 
in one contiguous buffer.

- bm_send_event_sg() - same for local connections.

- on top of bm_send_event_sg() we now have bm_send_event_vec(std::vector<char>) 
and bm_send_event_vec(std::vector<std::vector<char>>). Now we can move forward with 
implementing a new "event object" (the TMEvent event object from midasio.h 
already works with these new methods).

- remotely connected bm_send_event() & co now always send events to the mserver 
using the event socket. (Before, bm_send_event() used RPC_BM_SEND_EVENT and 
suffered from the RPC layer encoding/decoding overhead; mfe.c used 
rpc_send_event() for remote connections.)

- bm_send_event(), bm_receive_event() & co now take a timeout value (in 
milliseconds) instead of an async_flag. The old async_flag values BM_WAIT and 
BM_NO_WAIT continue working as expected (wait forever and do not wait at all, 
respectively).

- following improvements are only for remote connections:

- in the case of event buffer congestion (event readers are slow, event buffers 
are close to 100% full), the bm_flush_cache() RPC will no longer time out due to 
the mserver being stuck waiting for free buffer space. (The RPC is called with a 1000 
msec timeout, the infinite loop waiting for the flush is done on the frontend side, 
so the RPC timeout will never fire.)

- in the case of event buffer congestion, ODB RPC will no longer time out. 
(Previously the mserver was stuck waiting for free buffer space and did not process 
any RPCs.)

- at the end of run, the last few events could be stuck in the event socket. Now 
frontends can flush it using bm_flush_cache(0,BM_WAIT) (use zero for the buffer 
handle). A correct run transition should stop the trigger, stop generating new 
events, call bm_flush_cache(0,BM_WAIT), call bm_flush_cache("SYSTEM",BM_WAIT) 
and return success (the TMFE frontend already does this); a minimal sketch follows. 
Note that bm_flush_cache(...,BM_WAIT) can be stuck for a very long time waiting for 
the event buffers to empty out, so a run transition RPC timeout is still possible.
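
A minimal sketch of this end-of-run flush sequence (hBufSystem is assumed to be the handle returned by bm_open_buffer() for the "SYSTEM" buffer):

INT end_of_run_flush(INT hBufSystem)
{
   // the trigger has been stopped and no new events are being generated

   // flush events still queued in the event socket (buffer handle 0)
   INT status = bm_flush_cache(0, BM_WAIT);
   if (status != BM_SUCCESS)
      return status;

   // flush the write cache of the SYSTEM buffer itself
   return bm_flush_cache(hBufSystem, BM_WAIT);
}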

K.O.
Entry  07 Aug 2020, Konstantin Olchanski, Info, update of MYSQL history documentation 
I updated the documentation for setting up a MYSQL (MariaDB) database for 
recording MIDAS history: https://midas.triumf.ca/MidasWiki/index.php/History_System#Write_MYSQL-history_events

One thing to note: the "writer" user must have the "INDEX" permission, otherwise 
many things will not work correctly.

Included are the instructions for importing existing *.hst history files into the 
SQL database: mh2sql --mysql mysql_writer.txt *.hst

Let me know if there is interest in adding support for writing into Postgres SQL 
database. We used to support both MySQL and Postgres through the ODBC library, 
but in the new code, each database has to be supported through its native API. 
There is code for SQLITE, MYSQL, but no code for Postgres, although it is not too 
hard to add.

K.O.
Entry  28 Aug 2008, Konstantin Olchanski, Info, triumf/t2k midas updates 
The following changes to midas, produced from the TRIUMF T2K project, have been
committed to svn:
1) cm_shutdown() will now SIGKILL clients that cannot be stopped via normal
means. Previously cm_shutdown() would print a message to the effect "please kill
this client yourself manually". The user action in this case (assuming they did
not issue cm_shutdown() by mistake) has been to find out the client pid using
"ps", kill -KILL it, then "odbedit clean". cm_shutdown() now performs all this
automatically.
2) rpc_send_event() did not correctly detect loss of connection to the remote
mserver (i.e. in case it was killed by cm_shutdown() above). Now, correct error
handling is in place and the remote frontend should gracefully shutdown if
mserver connection is lost. (However, I observe that some of my remote frontends
fail to exit unless I do "exit(1);" from my frontend_exit() function.)
3) mhttpd bug fixed: when editing odb entries, the "cancel" button did not work
correctly.
4) lazylogger "script" backup type is now fully tested and documented. Example
scripts for writing to dcache are available by request.
5) mlogger and mhttpd changes for writing history data to an sql database are
mostly completed and will be committed after some more debugging. (If you are
interested in details, please contact me directly).
6) (committed some time ago) Makefile changes for cross-compiling midas are now
in: "make linux32", "make linux64", "make crosscompile".
K.O.
Entry  30 Apr 2008, Konstantin Olchanski, Info, triumf elog updated to elog-2.7.3-1.i386.rpm 
FYI - in conjunction with replacement of ladd00.triumf.ca, this MIDAS ELOG has been updated to the latest 
version 2.7.3-2058. Please report any problems or anomalies. K.O.
Entry  25 Feb 2021, Lars Martin, Bug Report, tmfe_main.cxx missing include <signal.h> 
The most recent commit (b43aef648c2f8a7e710a327d0b322751ae44afea) throws this 
compiler error:
src/tmfe_main.cxx:39:11: error: 'SIGPIPE' was not declared in this scope
    signal(SIGPIPE, SIG_IGN);

It's fixed by adding #include <signal.h> to that file.
    Reply  25 Feb 2021, Konstantin Olchanski, Bug Report, tmfe_main.cxx missing include <signal.h> 
> The most recent commit (b43aef648c2f8a7e710a327d0b322751ae44afea) throws this 
> compiler error:
> src/tmfe_main.cxx:39:11: error: 'SIGPIPE' was not declared in this scope
>     signal(SIGPIPE, SIG_IGN);
> 
> It's fixed by adding #include <signal.h> to that file.

"but it works just fine on my mac!"

anyhow, thank you for reporting this problem, it is already fixed. The bitbucket
auto-build also caught it. I also boogered up "make remoteonly", also fixed now.

BTW, for production use I recommend midas from the "release" branches, unless one 
needs a bug fix or new feature from the development branch.

K.O.
    Reply  26 Feb 2021, Lars Martin, Bug Report, tmfe_main.cxx missing include <signal.h> 
> BTW, for production use I recommend midas from the "release" branches, unless one 
> needs a bug fix or new feature from the development branch.

Fair point. I would suggest adding that recommendation to the wiki instructions. I 
forget to add that step otherwise.
Entry  14 Oct 2019, Joseph McKenna, Forum, tmfe.cxx - Future frontend design 
Hi,

I have been looking at the 2019 workshop slides, I am interested in the C++ future of MIDAS. 

I am quite interested in using the object oriented 


ALPHA will start data taking in 2021
Entry  29 Dec 2024, Pavel Murat, Forum, time ordering of run transition calls to TMFeEquipment things 
Dear MIDAS experts, 

I have a question about "tmfe approach" to implementing MIDAS frontends. If I read the code correctly, 
within this approach it is the TMFeEquipment things, not the TMFrontend's themselves, 
which handle the run transitions - the TMFrontend class 

https://bitbucket.org/tmidas/midas/src/423082fb67c7711813fcda61f7cd03784c398f49/include/tmfe.h#lines-306:378

simply doesn't have methods to handle those directly. 

So how does a user control the sequence in which TMFeEquipment::HandleBeginRun functions of different 
TMFeEquipment pieces are called at begin run? - there are two cases to consider: TMFeEquipment things 
defined by the same TMFrontend and by different TMFrontend's.

Many thanks and happy holidays to everyone! 

-- regards, Pasha
 
    Reply  01 Jan 2025, Konstantin Olchanski, Forum, time ordering of run transition calls to TMFeEquipment things 
> I have a question about "tmfe approach" to implementing MIDAS frontends. If I read the code correctly, 
> within this approach it is the TMFeEquipment things, not the TMFrontend's themselves, 
> which handle the run transitions - the TMFrontend class

that's correct and it is documented so in https://bitbucket.org/tmidas/midas/src/develop/tmfe.md

> So how does a user control the sequence in which TMFeEquipment::HandleBeginRun functions of different 
> TMFeEquipment pieces are called at begin run? - there are two cases to consider: TMFeEquipment things 
> defined by the same TMFrontend and by different TMFrontend's.

I am not sure what you are trying to do. It is always easier to suggest a solution to a specific problem.

But I will try to answer anyway:

1) "time ordering of run transitions" - of course midas transitions are ordered by transition sequence numbers 
and the tmfe class provides methods to control this. ditto for the mfe.cxx frontends.

2) for one TMFrontend, the order of calling HandleBeginRun() is the order in which equipments were added to the 
frontend using FeAddEquipment(). HandleEndRun() is called in reverse order. (I better check this.)

3) having multiple TMFrontends in one program would be unusual (mfe.cxx frontends completely do not support 
this), but should work. Everything was coded to support this, but it was never tested in practice because we 
cannot invent any useful use-case for it. HandleBeginRun() handlers are likely to be called in the order the 
frontends are created. (I could check this and confirm it works, as long as you have a valid use-case for this configuration.)

4) Frontend X has EquipmentA and EquipmentB, you want EqA::HandleBeginRun() to be called at run transition 200 
and EqB::HandleBeginRun() to be called at run transition 400.

This is not directly supported by mfe.cxx frontends (the begin_run() handler is a global function) and I did not 
directly implement it in the TMFE frontend.

But I think this would be a useful improvement. I will look into this.

Likely I will add per-equipment data members fEqConfBeginRunSeqNo, fEqConfEndRunSeqno, etc. Value 0 would 
unregister the corresponding run transition handler. This would clean up the code quite a bit: a bunch
of RegisterTransitionXXX functions could go away.

K.O.
    Reply  02 Jan 2025, Pavel Murat, Forum, time ordering of run transition calls to TMFeEquipment things 
Hi K.O., your clarification is much appreciated! 
"
> I am not sure what you are trying to do. It is always easier to suggest a solution to a specific problem.

I think, I owe you an explanation :) :

Consider ~ 40 nodes with two FPGAs (PCIE cards) per node, talking to the detector hardware. 
One of those FPGAs, in addition to reading the data, performs the global timing synchronization.
The high-bandwidth data readout is not controlled by MIDAS, so all frontends perform only 'slow control'-type functions.
In MIDAS language, an FPGA implements two different units of slow control equipment: 
one - configuring and controlling a single FPGA (equipment type A), and another one - synchronizing 
multiple FPGAs (equipment type B). On one of the nodes, unit A and unit B share the FPGA card, 
so they better be controlled by the same frontend. 

For one, I need to make sure that all type A equipment units, managed by multiple frontends, 
are initialized before the [single] type B unit which shares the frontend with the type A unit. 
And, of course, the end of a run transition has to be handled in the opposite order - type B unit 
shuts down first. 

As 'periodic' actions for all registered pieces of equipment are performed in the same loop [thread], 
registering the equipment in the needed order - first A, then B - should give a solution - thanks for making that clear.     

> 
> 1) "time ordering of run transitions" - of course midas transitions are ordered by transition sequence numbers 
> and the tmfe class provides methods to control this. ditto for the mfe.cxx frontends.
> 
> 2) for one TMFrontend, the order of calling HandleBeginRun() is the order in which equipments were added to the 
> equipment using FeAddEquipment(). HandleEndRun() is called in reverse order. (I better check this).

the ordering of the rpc handler calls in tmfe's tr_stop/tr_pause/tr_resume functions is ok.

> 
> 3) to have multiple TMFrontends in one program would be unusual (mfe.cxx frontends completely do not support 
> this), but should work. Everything was coded to support this, but it was never tested in practice because we 
> cannot invent any useful use-case for it. HandleBeginRun() handlers are likely to be called in the frontends are 
> created. (I could check this and confirm it works, as long as you have a valid use-case for this configuration).

agreed, I don't think there is a good use case for that, so no need to spend time checking.

> 
> 4) Frontend X has EquipmentA and EquipmentB, you want EqA::HandleBeginRun() to be called at run transition 200 
> and EqB::HandleBeginRun() to be called at run transition 400.
> 
> This is not directly supported by mfe.cxx frontends (the begin_run() handler is a global function) and I did not 
> directly implement it in the TMFE frontend.
> 
> But I think this would be a useful improvement. I will look into this. 

In the simplest case, registering the equipment units in the right order is definitely the answer. 
However a single FPGA can perform multiple logically independent tasks and thus represent 
multiple logical units of equipment. Those units, however, are not independent: they share the hardware (FPGA) 
and thus do depend on each other. Giving users full control over the sequence in which those logical units 
execute their run transitions is quite likely to be needed, for example, to work around peculiarities of the 
custom-made kernel drivers.
 
> 
> Likely I will add per-equipment data members fEqConfBeginRunSeqNo, fEqConfEndRunSeqno, etc. Value 0 would 
> unregister the corresponding run transition handler. This would cleanup the code quite a bit, a bunch
> of RegisterTranstionXXX functions could go away.

this also makes sense. -- thanks again, regards, Pasha

> 
> K.O.
    Reply  05 Jan 2025, Stefan Ritt, Forum, time ordering of run transition calls to TMFeEquipment things 
Hi Pavel,

have you looked into 

  cm_set_transition_sequence()

which lets you define the sequence number for every midas client. You can give any number between 1 and 1000 (the default is 500 for frontends, I believe).

The default value of 500 is defined in mfe.cxx:2641, where you have 500 for all four transitions, but it can be overwritten in the frontend_init() function via 
cm_set_transition_sequence(). Since you have separate values for the start and stop transitions, you can get different sequencing for the two transitions as you need: set 
all type A to 400 and type B to 600 for TR_START, and set type A to 600 and type B to 400 for TR_STOP (see the sketch below).
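
A minimal sketch for a "type A" frontend, assuming the mfe.cxx framework (the numbers follow the example above):

INT frontend_init()
{
   // start early (before type B at 600), stop late (after type B at 400)
   cm_set_transition_sequence(TR_START, 400);
   cm_set_transition_sequence(TR_STOP,  600);
   return SUCCESS;
}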

Since this works on the midas client level, it should also work for the tmfe.cxx framework. I'm however not sure if it has a similar default of 500 as in mfe.cxx:2641. But K.O. 
should know.

Best,
Stefan 
Entry  21 Aug 2020, Ruslan Podviianiuk, Forum, time information Running_time.png
Hello,

I have a few questions about time information:
1. Is it possible to get "Running time" using, for example, jsonrpc? (please see 
the attached file)
2. Is it possible to configure "Start time" and "Stop time" with time zone? For 
example when I start a new run, value of "Start time" key is automatically changed 
to "Fri Aug 21 12:38:36 2020" without time zone. 

Thank you.
    Reply  24 Aug 2020, Stefan Ritt, Forum, time information 
> 1. Is it possible to get "Running time" using, for example, jsonrpc? (please see 
> the attached file)

You have in the ODB "/Runinfo/Start time binary" which is measured in seconds since 
1970. By subtracting this from the current time, you get the running time.

> 2. Is it possible to configure "Start time" and "Stop time" with time zone? For 
> example when I start a new run, value of "Start time" key is automatically changed 
> to "Fri Aug 21 12:38:36 2020" without time zone. 

"Start time binary" and "Stop time binary" are in seconds since the 1970 in UTC, so no 
time zone involved there. The ASCII versions of the start/stop time are derived from 
the binary time using the server's local time zone. If you want to display them in a 
different time zone, you have to create a custom page and convert it to another time 
zone using JavaScript like

var d = new Date(start_time_binary * 1000); // JavaScript Date expects milliseconds

Stefan
    Reply  25 Aug 2020, Ruslan Podviianiuk, Forum, time information 
Thank you, Stefan

Ruslan 



> > 1. Is it possible to get "Running time" using, for example, jsonrpc? (please see 
> > the attached file)
> 
> You have in the ODB "/Runinfo/Start time binary" which is measured in seconds since 
> 1970. By subtracting this from the current time, you get the running time.
> 
> > 2. Is it possible to configure "Start time" and "Stop time" with time zone? For 
> > example when I start a new run, value of "Start time" key is automatically changed 
> > to "Fri Aug 21 12:38:36 2020" without time zone. 
> 
> "Start time binary" and "Stop time binary" are in seconds since the 1970 in UTC, so no 
> time zone involved there. The ASCII versions of the start/stop time are derived from 
> the binary time using the server's local time zone. If you want to display them in a 
> different time zone, you have to create a custom page and convert it to another time 
> zone using JavaScript like
> 
> var d = new Date(start_time_binary);
> 
> Stefan