ID | Date | Author | Topic | Subject
1911 | 20 May 2020 | Konstantin Olchanski | Bug Report | Conflict between Rootana and midas about the redefinition of TID_xxx data types

> Dear Midas and Rootana people,
>
> We have tried to update our midas DAQ with the new TID definitions described in https://midas.triumf.ca/elog/Midas/1871
>
> And we have noticed an incompatibility of these new definitions with Rootana when reading an XmlOdb in our offline analyzer.
>
> The problem comes from the function FindArrayPath in XmlOdb.cxx and the comparison of bank types as strings.
> Ex: comparing the strings "DWORD" and "UINT32"
>
> A naive solution would be to write the number associated with the type (ex: '6' for DWORD/UINT32), but that would mean changing the Rootana and Midas source code. Moreover, it would decrease the readability of the XmlOdb file.
>
Hi, it is unfortunate that a change was made in MIDAS that is incompatible with existing analysis software. I shall update the ROOTANA package to deal with this ASAP.
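For illustration, one tolerant approach in the XML reader is to map both the old and the new type names onto the numeric TID before comparing. This is a sketch only, not the actual ROOTANA fix; the TID values follow midas.h (e.g. 6 for TID_DWORD/TID_UINT32) and only a few entries are shown:

  // Sketch: accept both old and new MIDAS type names (cf. elog 1871).
  // Only a few of the TID_xxx entries are shown here.
  #include <map>
  #include <string>

  static int TypeNameToTid(const std::string& name) {
     static const std::map<std::string, int> tids = {
        {"BYTE",  1}, {"UINT8",  1},   // TID_BYTE  / TID_UINT8
        {"WORD",  4}, {"UINT16", 4},   // TID_WORD  / TID_UINT16
        {"DWORD", 6}, {"UINT32", 6},   // TID_DWORD / TID_UINT32
     };
     auto it = tids.find(name);
     return it == tids.end() ? 0 : it->second;
  }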
K.O.
1910 | 19 May 2020 | Ruslan Podviianiuk | Forum | List of sequencer files

> If you load a file into the sequencer from the web interface, you get a list of all files in that directory.
> This basically gives you a list of possible sequencer files. It's even more powerful, since you can
> create subdirectories and thus group the sequencer files. Attached an example from our
> experiment.
>
> Stefan
Dear Stefan,
Could you please answer one more question:
We have a custom webpage and would like to display the list of sequencer files
on it. Is there a jrpc command to get this file list?
Thanks,
1909 | 18 May 2020 | Ruslan Podviianiuk | Forum | List of sequencer files

> If you load a file into the sequencer from the web interface, you get a list of all files in that directory.
> This basically gives you a list of possible sequencer files. It's even more powerful, since you can
> create subdirectories and thus group the sequencer files. Attached an example from our
> experiment.
>
> Stefan
Dear Stefan,
Thank you for the explanation.
Ruslan
1908 | 13 May 2020 | Stefan Ritt | Forum | List of sequencer files

If you load a file into the sequencer from the web interface, you get a list of all files in that directory.
This basically gives you a list of possible sequencer files. It's even more powerful, since you can
create subdirectories and thus group the sequencer files. Attached an example from our
experiment.
Stefan
Attachment 1: Screenshot_2020-05-13_at_9.11.55_.png
1907 | 12 May 2020 | Ruslan Podviianiuk | Forum | List of sequencer files

Hello,
We are going to implement a list of sequencer files to allow users to select one
of them. The name of this file will be written to the
/ODB/Sequencer/State/Filename field of the ODB.
Is it possible to get a list of Sequencer files from MIDAS? Is there a jrpc
command for this?
Thanks.
Best,
Ruslan
1906 | 12 May 2020 | Stefan Ritt | Info | New ODB++ API

Since the beginning of the lockdown I have been working hard on a new object-oriented interface to the online database ODB. I have the code now in an initial state where it is ready for
testing and commenting. The basic idea is that there is an object midas::odb, which represents a value or a sub-tree in the ODB. Reading, writing and watching is done through this
object. To get started, the new API has to be included with
#include <odbxx.hxx>
To create ODB values under a certain sub-directory, you can either create one key at a time like:
  midas::odb o;
  o.connect("/Test/Settings", true);   // this creates /Test/Settings
  o.set_auto_create(true);             // this turns on auto-creation
  o["Int32 Key"] = 1;                  // create all these keys with different types
  o["Double Key"] = 1.23;
  o["String Key"] = "Hello";
or you can create a whole sub-tree at once like:
  midas::odb o = {
     {"Int32 Key", 1},
     {"Double Key", 1.23},
     {"String Key", "Hello"},
     {"Subdir", {
        {"Another value", 1.2f}
     }}
  };
o.connect("/Test/Settings");
To read and write to the ODB, just read and write to the odb object:
int i = o["Int32 Key];
o["Int32 Key"] = 42;
std::cout << o << std::endl;
This works with basic types, strings, std::array and std::vector. Each read access to this object triggers an underlying read from the ODB, and each write access triggers a write to the
ODB. To watch a value for changes in the ODB (replacing the old db_watch() function), you can now use C++ lambdas like:
  o.watch([](midas::odb &o) {
     std::cout << "Value of key \"" + o.get_full_path() + "\" changed to " << o << std::endl;
  });
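Note that watch callbacks are dispatched from within the midas event loop, so the program has to yield periodically; the attached test program does this with a standard cm_yield() loop:

  do {
     int status = cm_yield(100);                       // dispatches watch callbacks
     if (status == SS_ABORT || status == RPC_SHUTDOWN)
        break;
  } while (!ss_kbhit());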
Attached is a full running example, which is now also part of the midas repository. I have tested most things, but would not yet use it in a production environment. Not 100% sure if there
are any memory leaks. If someone could valgrind the test program, I would appreciate it (valgrind currently does not work on my Mac).
Have fun!
Stefan
Attachment 1: odbxx_test.cxx
/********************************************************************\

  Name:         odbxx_test.cxx
  Created by:   Stefan Ritt

  Contents:     Test and Demo of Object oriented interface to ODB

\********************************************************************/

#include <string>
#include <iostream>
#include <array>
#include <functional>

#include "odbxx.hxx"
#include "midas.h"

/*------------------------------------------------------------------*/

int main() {

   cm_connect_experiment(NULL, NULL, "test", NULL);
   midas::odb::set_debug(true);

   // create ODB structure...
   midas::odb o = {
      {"Int32 Key", 42},
      {"Bool Key", true},
      {"Subdir", {
         {"Int32 key", 123 },
         {"Double Key", 1.2},
         {"Subsub", {
            {"Float key", 1.2f},     // floats must be explicitly specified
            {"String Key", "Hello"},
         }}
      }},
      {"Int Array", {1, 2, 3}},
      {"Double Array", {1.2, 2.3, 3.4}},
      {"String Array", {"Hello1", "Hello2", "Hello3"}},
      {"Large Array", std::array<int, 10>{} },   // array with explicit size
      {"Large String", std::string(63, '\0') },  // string with explicit size
   };

   // ...and push it to ODB. If keys are present in the
   // ODB, their value is kept. If not, the default values
   // from above are copied to the ODB
   o.connect("/Test/Settings", true);

   // alternatively, a structure can be created from an existing ODB subtree
   midas::odb o2("/Test/Settings/Subdir");
   std::cout << o2 << std::endl;

   // retrieve, set, and change ODB value
   int i = o["Int32 Key"];
   o["Int32 Key"] = i+1;
   o["Int32 Key"]++;
   o["Int32 Key"] *= 1.3;
   std::cout << "Should be 57: " << o["Int32 Key"] << std::endl;

   // test with bool
   o["Bool Key"] = !o["Bool Key"];

   // test with std::string
   std::string s = o["Subdir"]["Subsub"]["String Key"];
   s += " world!";
   o["Subdir"]["Subsub"]["String Key"] = s;

   // test with a vector
   std::vector<int> v = o["Int Array"];
   v[1] = 10;
   o["Int Array"] = v;        // assign vector to ODB object
   o["Int Array"][1] = 2;     // modify ODB object directly
   i = o["Int Array"][1];     // read from ODB object
   o["Int Array"].resize(5);  // resize array
   o["Int Array"]++;          // increment all values of array

   // test with a string vector
   std::vector<std::string> sv;
   sv = o["String Array"];
   sv[1] = "New String";
   o["String Array"] = sv;
   o["String Array"][2] = "Another String";

   // iterate over array
   int sum = 0;
   for (int e : o["Int Array"])
      sum += e;
   std::cout << "Sum should be 11: " << sum << std::endl;

   // create key from other key
   midas::odb oi(o["Int32 Key"]);
   oi = 123;

   // test auto refresh
   std::cout << oi << std::endl;     // each read access reads value from ODB
   oi.set_auto_refresh_read(false);  // turn off auto refresh
   std::cout << oi << std::endl;     // this does not read value from ODB
   oi.read();                        // this does manual read
   std::cout << oi << std::endl;

   midas::odb ox("/Test/Settings/OTF");
   ox.delete_key();

   // create ODB entries on-the-fly
   midas::odb ot;
   ot.connect("/Test/Settings/OTF", true);  // this forces /Test/Settings/OTF to be created if not already there
   ot.set_auto_create(true);                // this turns on auto-creation
   ot["Int32 Key"] = 1;                     // create all these keys with different types
   ot["Double Key"] = 1.23;
   ot["String Key"] = "Hello";
   ot["Int Array"] = std::array<int, 10>{};
   ot["Subdir"]["Int32 Key"] = 42;
   ot["String Array"] = std::vector<std::string>{"S1", "S2", "S3"};
   std::cout << ot << std::endl;

   o.read();  // re-read the underlying ODB tree which got changed by above OTF code
   std::cout << o.print() << std::endl;

   // iterate over sub-keys
   for (auto& oit : o)
      std::cout << oit.get_odb()->get_name() << std::endl;

   // print whole sub-tree
   std::cout << o.print() << std::endl;

   // dump whole subtree
   std::cout << o.dump() << std::endl;

   // delete test key from ODB
   o.delete_key();

   // watch ODB key for any change with lambda function
   midas::odb ow("/Experiment");
   ow.watch([](midas::odb &o) {
      std::cout << "Value of key \"" + o.get_full_path() + "\" changed to " << o << std::endl;
   });

   do {
      int status = cm_yield(100);
      if (status == SS_ABORT || status == RPC_SHUTDOWN)
         break;
   } while (!ss_kbhit());

   cm_disconnect_experiment();
   return 1;
}
1905 | 07 May 2020 | Estelle | Bug Report | Conflict between Rootana and midas about the redefinition of TID_xxx data types

Dear Midas and Rootana people,
We have tried to update our midas DAQ with the new TID definitions described in https://midas.triumf.ca/elog/Midas/1871
And we have noticed an incompatibility of these new definitions with Rootana when reading an XmlOdb in our offline analyzer.
The problem comes from the function FindArrayPath in XmlOdb.cxx and the comparison of bank types as strings.
Ex: comparing the strings "DWORD" and "UINT32"
A naive solution would be to write the number associated with the type (ex: '6' for DWORD/UINT32), but that would mean changing the Rootana and Midas source code. Moreover, it would decrease the readability of the XmlOdb file.
Thanks for your time.
Estelle
1904 | 04 May 2020 | Pintaudi Giorgio | Forum | API to read MIDAS format file

> (But note that back when I implemented the SQLITE history writer, sqlite database corruption
> recovery instructions were "delete the file, restore from backup". And indeed in every test
> experiment I tried, the sqlite history databases eventually corrupted themselves. You see the
> same thing with google-chrome, lots of sqlite errors (bad locking, corrupted table, etc)
> in its terminal output).
Thank you for the info. But I do not quite understand the comment above.
Do you mean that there is something wrong with the SQLite library itself or with the way that MIDAS creates the SQLite
database?
1903 | 03 May 2020 | Stefan Ritt | Forum | API to read MIDAS format file

> PS some time ago, I don't remember if you or Stefan recommended CLion as C++ IDE. I have tried it
> (together with PyCharm) and I must admit that it is really good. It took me years to configure Emacs
> as an IDE, while it took me minutes to get much better results in CLion. Thank you very much for
> your recommendation.
Was probably me. I use it as my standard IDE and am quite happy with it. All the things KO likes with emacs, plus much
more. Especially the CMake integration is nice, since you don't have to leave the IDE for editing, compiling and debugging.
The tooltips the IDE gave me in the past months have made me write much better code. So quite the opposite opinion compared
with KO, but luckily this planet has space for all kinds of opinions. I made myself the attached cheat sheet, which lets me
do things much faster. Maybe you can use it.
Stefan
Attachment 1: ReferenceCardForMac.pdf
1902 | 03 May 2020 | Konstantin Olchanski | Forum | API to read MIDAS format file

>
> - One is to convert to SQL format and then use a SQLite library to import the data in my
> application.
>
You can also configure midas to write history directly to an SQLITE database. I have not used
it recently, but it should still work. In terms of efficiency, the sqlite file size is about the same
as for .hst files. The sqlite file and table naming is similar to the SQL and FILE implementations.
(But note that back when I implemented the SQLITE history writer, sqlite database corruption
recovery instructions were "delete the file, restore from backup". And indeed in every test
experiment I tried, the sqlite history databases eventually corrupted themselves. You see the
same thing with google-chrome, lots of sqlite errors (bad locking, corrupted table, etc)
in its terminal output).
>
> - The other is to encapsulate the mhdump.cxx code into a C++ class, as you say.
>
If I were to write this today, there would be a c++ class that takes a history file,
iterates over all records and calls "callback" classlets. You can see this in the history.h
(HistoryBufferInterface) and in the tmfe.h (RpcHandlerInterface, etc).
I think this style of OO programming originally comes from java. If you so desire,
an "mhdump" class could be a nice way to learn it.
>
> PS some time ago, I don't remember if you or Stefan recommended CLion as C++ IDE. I have tried it
> (together with PyCharm) and I must admit that it is really good. It took me years to configure Emacs
> as an IDE, while it took me minutes to get much better results in CLion. Thank you very much for
> your recommendation.
>
I remember, years ago, the Borland TurboC IDE was like a gift from Gods. But today, I think IDEs have
declined in quality and usefulness. They clog the screen with too much eye candy and fluff, use hard
to read fonts and silly colours, insist on using tabs where I want spaces, reformat the text even as I type it,
and detract from productive work with distracting popups ("try this new function!", "let's upgrade now!").
For serious programming, I use emacs with minimal decorations. I can easily open 3 or 4 windows at the same
time and still have enough screen space left for a terminal to run "make". And it is the only editor that can
edit the same file in two or more windows at the same time. You do not know you need this until
you work on odb.cxx.
K.O.
1901 | 03 May 2020 | Pintaudi Giorgio | Forum | API to read MIDAS format file

> The format of .hst midas history files is pretty simple and mhdump.cxx is an easy to read
> illustration on how to read it from basic principles (without going through the midas library,
> which can be somewhat complicated). The newer "FILE" format for history is even simpler
> to read because it is just fixed-record-size binary data prepended by a text header.
>
> You can also use the mh2sql program to import history data into an sql database (mysql
> and sqlite should work) or to convert .hst files to "FILE" format files. This works well
> for "archiving" history data, because the "FILE" format works better for looking at old data,
> and for looking at data in "months" or "years" timescale.
>
> Back to your question, you can certainly use "mhdump" as is, using a pipe (popen()), or
> you can package mhdump.cxx as a c++ class and use it in your application. If you go this
> route, your contribution of such a c++ class back to midas would be very welcome.
>
> You can also use mhist, but the mhist code cannot be trivially packaged as a c++ class
> to use in your application.
>
> You can also suggest that we write an easier to use history utility, we are always open to
> suggested improvements.
>
> Let us know how it works out for you. Good luck!
>
> K.O.
Dear Konstantin,
thank you very much for the wealth of information you provided.
I have thought about it and I see two options:
- One is to convert to SQL format and then use a SQLite library to import the data in my
application.
- The other is to encapsulate the mhdump.cxx code into a C++ class, as you say.
I am leaning towards the first option for three reasons.
1. I have never used a SQLite database so it is a good learning opportunity for me.
2. The SQLite database format is very well known and widespread, so there are tons of tools to
handle it
3. I have taken a look at the mhdump.cxx source code and I think it is a beautiful piece of code,
but it has a very "functional" taste with little encapsulation. Basically, all the fun happens
inside the readHstFile function and there is no trivial way to get the data out of it. I don't mean
that it would be difficult to wrap it in a C++ class, but I feel that I can learn more by going
the SQL way.
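For the first option, reading the data back with the sqlite3 C API is straightforward. A minimal sketch; the table and column names below ("hs_history", "_i_time", "value") are hypothetical, the real schema is whatever mh2sql generates:

  // Minimal sqlite3 C API sketch; table/column names are hypothetical.
  #include <sqlite3.h>
  #include <cstdio>

  int main() {
     sqlite3* db = nullptr;
     if (sqlite3_open("history.sqlite3", &db) != SQLITE_OK) {
        fprintf(stderr, "cannot open: %s\n", sqlite3_errmsg(db));
        return 1;
     }
     sqlite3_stmt* stmt = nullptr;
     sqlite3_prepare_v2(db, "SELECT _i_time, value FROM hs_history LIMIT 10",
                        -1, &stmt, nullptr);
     while (sqlite3_step(stmt) == SQLITE_ROW)   // one row per history record
        printf("%d %f\n", sqlite3_column_int(stmt, 0),
               sqlite3_column_double(stmt, 1));
     sqlite3_finalize(stmt);
     sqlite3_close(db);
     return 0;
  }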
PS some time ago, I don't remember if you or Stefan recommended CLion as C++ IDE. I have tried it
(together with PyCharm) and I must admit that it is really good. It took me years to configure Emacs
as an IDE, while it took me minutes to get much better results in CLion. Thank you very much for
your recommendation.
1899 | 02 May 2020 | Konstantin Olchanski | Forum | Taking MIDAS beyond 64 clients

> > >
> > > Does the community here have strong opinions about increasing the
> > > MAX_CLIENTS and MAX_RPC_CONNECTION limits?
> > > Am I looking at this problem in a naive way?
> > >
The issue is: how to organize an experiment? How many frontends should I have?
There are two extremes:
- collect all data in 1 frontend (and today, with c++ threads and c++ ring buffers, this is trivial; see the sketch below)
- instantiate 1 frontend for each data source. (for example, the ALPHA-g detector has 8 ADCs and 64 PWBs plus some
small fish. No, that's wrong. Each ADC looks like 48 individual data sources and each PWB looks like 4 data sources,
so this would be 8*48+4*64=640 data sources, could be 640 frontends easily, plus small fish).
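As a sketch of the single-frontend extreme, here is the general shape with plain C++ threads and a mutex-protected queue standing in for the ring buffer (generic C++ only, not the midas rb_xxx ring buffer API):

  #include <condition_variable>
  #include <mutex>
  #include <queue>
  #include <thread>
  #include <vector>

  std::mutex mtx;
  std::condition_variable cv;
  std::queue<std::vector<char>> events;   // stand-in for a midas ring buffer

  void producer(int source_id) {          // one thread per data source
     for (int i = 0; i < 100; i++) {
        std::vector<char> event(64, (char)source_id);  // fake event
        {
           std::lock_guard<std::mutex> lock(mtx);
           events.push(std::move(event));
        }
        cv.notify_one();
     }
  }

  int main() {
     std::vector<std::thread> sources;
     for (int id = 0; id < 8; id++)       // e.g. one thread per ADC
        sources.emplace_back(producer, id);
     int received = 0;
     while (received < 8 * 100) {         // consumer: would assemble midas events here
        std::unique_lock<std::mutex> lock(mtx);
        cv.wait(lock, [] { return !events.empty(); });
        events.pop();
        received++;
     }
     for (auto& t : sources)
        t.join();
  }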
Which way is best? Every experiment is different, but consider simple things:
640 frontends writing into 1 event buffer will probably cause large contention for the event buffer lock. bad.
640 frontends running on a 4 core CPU will probably cause unhappiness in the OS. bad.
starting and stopping 640 frontends requires some scripting, monitoring that they all still run, etc. extra work. bad.
640 frontends on the midas status page? your cell phone web browser will explode. bad.
What I am saying is: arbitrary limits are good for you. They make you think about what is going on before throwing
resources at the problem.
K.O.
1898 | 02 May 2020 | Konstantin Olchanski | Forum | Taking MIDAS beyond 64 clients

> >
> > Does the community here have strong opinions about increasing the
> > MAX_CLIENTS and MAX_RPC_CONNECTION limits?
> > Am I looking at this problem in a naive way?
> >
The issue is binary compatibility.
MIDAS has been binary compatible with itself for a long time, 20 years now, easily.
If we are to give this up, we must gain more than we lose.
On the technical level, bumping MAX_CLIENTS from 64 to 100 gives us nothing. Tomorrow an experiment
will come along asking for 101 clients. Whatever number you pick, it is too small for somebody. And MIDAS
already has a solution for this: edit midas.h, hit make, done.
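For illustration, the local edit amounts to something like this (128 is an arbitrary example value; MAX_RPC_CONNECTION sits next to it in midas.h):

  /* midas.h, local copy: raise the limits, then rebuild midas and all clients */
  #define MAX_CLIENTS        128   /* default: 64 */
  #define MAX_RPC_CONNECTION 128   /* default: 64 */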
If we are to break binary compatibility, we should go big. Remove these limits completely!
Move the MAX_CLIENTS & co fixed-size arrays out of the headers in the ODB and in the event buffers, and put
them where they can be resized as needed.
That's a binary-compatibility breaking solution I would vote for.
K.O.
1897 | 02 May 2020 | Konstantin Olchanski | Forum | Taking MIDAS beyond 64 clients

>
> Does the community here have strong opinions about increasing the
> MAX_CLIENTS and MAX_RPC_CONNECTION limits?
> Am I looking at this problem in a naive way?
>
I think MAX_CLIENTS set at 64 is on the low side for today.
And in the past, we did have experiments that did not work without increasing MAX_CLIENTS. I
think T2K/ND280 needed MAX_CLIENTS bumped to about 100 (200?).
If ALPHA needs MAX_CLIENTS bigger than the default 64, nothing stops the experiment
from changing this number in the local copy of MIDAS.
It is not necessary to change it in the central repository for everybody.
K.O.
1896 | 02 May 2020 | Stefan Ritt | Forum | Taking MIDAS beyond 64 clients

> Perhaps an item for future discussion would be for the odbinit program to be able to 'upgrade' the ODB and enable some backwards
> compatibility.
We had this discussion already a few times. There is an ODB version number (DATABASE_VERSION 3 in midas.h) which is intended for that. If we break the
binary compatibility, programs should complain "ODB version has changed, please run ...", then odbinit (written by KO) should have a well-defined
procedure to upgrade existing ODBs by re-creating them, but keeping all old contents. This should be tested on a few systems.
Stefan
1895 | 02 May 2020 | Joseph McKenna | Forum | Taking MIDAS beyond 64 clients

Thank you very much for the feedback.
I am satisfied with not changing the 64 client limit. I will look at re-writing my frontend to spawn threads rather than
processes. The load of my frontend is low, so I do not anticipate issues with a threaded implementation.
In this threaded scenario, it will be a reasonable amount of time until ALPHA bumps into the 64 client limit.
If it avoids confusion, I am happy for my experimental branch 'experimental-beyond_64_clients' to be deleted.
Perhaps an item for future discussion would be for the odbinit program to be able to 'upgrade' the ODB and enable some backwards
compatibility.
Thanks again
Joseph
1894 | 02 May 2020 | Stefan Ritt | Forum | Taking MIDAS beyond 64 clients

TRIUMF stayed quiet, probably they have other things to do.
I allowed myself to move the maximum number of clients back to its original value, in order not to break running experiments.
This does not mean that the increase is a bad idea, we just have to be careful not to break running experiments. Let's discuss it
more thoroughly here before we make a decision in that direction.
Best regards,
Stefan
1893 | 01 May 2020 | Pierre Gorel | Forum | Taking MIDAS beyond 64 clients

> - On the other hand, if we have to break compatibility, now is maybe a good time since most accelerators worldwide are off. But before doing so, I would like to get feedback from the main experiments
> around the world (MEG, T2K, g-2, DEAP besides ALPHA).
Hello Stefan,
For what it's worth, DEAP will not be impacted: as we have been taking data around the clock for the last few years, we froze the code running on the computers. We may have some window of opportunity for an upgrade in a few months, but such a move has not been discussed yet.
Best regards,
Pierre
1892 | 01 May 2020 | Stefan Ritt | Forum | Taking MIDAS beyond 64 clients

Hi Joseph,
here some thoughts from my side:
- Breaking ODB compatibility in the master/develop midas branch is very bad, since almost all experiments worldwide are affected if they just blindly do a pull and want to recompile and rerun. Currently,
even during our Corona crisis, some experiments are still running and being monitored remotely.
- On the other hand, if we have to break compatibility, now is maybe a good time since most accelerators worldwide are off. But before doing so, I would like to get feedback from the main experiments
around the world (MEG, T2K, g-2, DEAP besides ALPHA).
- Having a maximum of 64 clients was originally decided when memory was scarce. In the early days one had just a couple of megabytes of shared memory. Now this is not an issue any more, but I see
another problem. The main status page gives a nice overview of the experiment. This only works because there is a limited number of midas clients and equipments. If we blow up to 1000+, the status
page would be rather long and we would have to scroll up and down forever. In such a scenario one would at least have to redesign the status and program pages. To start your experiment, you would have to
click 1000 times to start each front-end, which is also not very practical.
- Having 100's or 1000's of front-ends calls rather for a hierarchical design, like the LHC experiments have. That would be a major change of midas and cannot be done quickly. It would also result in
much slower run start/stops.
- If you see limitations with your LabVIEW PCs, have you considered multi-threading in your front-ends? Note that the standard midas slow control system supports multithreaded devices
(DF_MULTITHREAD). In MEG, we use about 800 microcontrollers via the MSCB protocol. They are grouped together and each group is a multithreaded device in the midas slow control lingo, meaning the
group gets its own thread for control and readout in the midas frontend. This way, one group cannot slow down all other groups. There is one front-end for all groups, which can be started/stopped with
a single click; it shows up as just one line on the status page, and still it's pretty fast. Have you considered such a scheme? Your LabVIEW PCs would then not be individual front-ends, but would just make a
network connection to the midas front-end, which then manages all LabVIEW PCs. The midas slow control system allows you to define custom commands (besides the usual read/write commands for slow
control data), so you could maybe integrate all you need into that scheme (see the sketch below).
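As an illustration of this grouping, a slow control frontend declares one multithreaded device per group in its driver list, roughly like this (a sketch; "nulldev" is the dummy device driver shipped with midas, and the group names stand in for a real LabVIEW driver):

  // In a standard midas slow control frontend (midas.h and the device
  // driver headers included); one DF_MULTITHREAD device per group of PCs.
  DEVICE_DRIVER group_driver[] = {
     {"Group1", nulldev, 10, NULL, DF_MULTITHREAD},  // each group gets its own
     {"Group2", nulldev, 10, NULL, DF_MULTITHREAD},  // thread for control/readout
     {""}
  };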
Best,
Stefan
>
> Hi all,
> I have been experimenting with a frontend solution for my experiment
> (ALPHA). The intention to replace how we log data from PCs running LabVIEW.
> I am at the proof of concept stage. So far I have some promising
> performance, able to handle 10-100x more data in my test setup (current
> limitations now are just network bandwith, MIDAS is impressively efficient).
> ==========================================================================
> Our experiment has many PCs using LabVIEW which all log to MIDAS; the
> experiment has grown such that we need some sort of load balancing in our
> frontend.
> The concept was to have a 'supervisor frontend' and an array of 'worker
> frontend' processes.
> -A LabVIEW client would connect to the supervisor, then be referred to a
> worker frontend for data logging.
> -The supervisor could start a 'worker frontend' process as the demand
> required.
> To increase accountability within the experiment, I intend to have a 'worker
> frontend' per PC connecting. Then any rogue behavior would be clear from the
> MIDAS frontpage.
> Presently there are around 20-30 of these LabVIEW PCs, but given how the group
> is growing, I want to be sure that my data logging solution will be viable
> for the next 5-10 years. With the increased use of single board computers, I
> chose the target of benchmarking up to 1000 worker frontends... but I quickly
> hit the '64 MAX CLIENTS' and '64 RPC CONNECTION' limit. Ok...
> branching and updating these limits:
> https://bitbucket.org/tmidas/midas/branch/experimental-beyond_64_clients
> I have two commits.
> 1. update the memory layout assertions and use MAX_CLIENTS as a variable
> https://bitbucket.org/tmidas/midas/commits/302ce33c77860825730ce48849cb810cf
> 366df96?at=experimental-beyond_64_clients
> 2. Change the MAX_CLIENTS and MAX_RPC_CONNECTION
> https://bitbucket.org/tmidas/midas/commits/f15642eea16102636b4a15c8411330969
> 6ce3df1?at=experimental-beyond_64_clients
> Unintended side effects:
> I break compatibility of existing ODB files... the database layout has
> changed and I read my old ODB as corrupt. In my test setup I can start from
> scratch but this would be horrible for any existing experiment.
> Edit: I noticed 'make testdiff' is failing... also fails lok
> Early performance results:
> In early tests, ~700 PCs logging 10 unique arrays of 10 doubles into
> Equipment variables in the ODB seem to perform well... All transactions
> from client PCs are finished within a couple of ms or less
> ==========================================================================
> Questions:
> Does the community here have strong opinions about increasing the
> MAX_CLIENTS and MAX_RPC_CONNECTION limits?
> Am I looking at this problem in a naive way?
>
> Potential solutions other than increasing the MAX_CLIENTS limit:
> -Make worker threads inside the supervisor (not a separate process), I am
> using TMFE, so I can dynamically create equipment. I have not yet taken a
> deep dive into how any multithreading is implemented
> -One could have a round robin system to load balance between a limited pool
> of 'worker frontend' processes. I don't like this solution as I want to be
> able to clearly see which client PCs have been set up to log too much data
> ==========================================================================