ID | Date | Author | Topic | Subject |
2834
|
11 Sep 2024 |
Konstantin Olchanski | Bug Report | Multiple issues with mhist | I think I can offer some insight into your problems:
1) your mhist crash is due to the ODB timeout. It is probably set to 30 seconds in ODB /programs/mhist; you will
have to make it bigger.
2) 1.5 years of files: yes. I have 10 years of files for ALPHA at CERN, and the number of files is a problem.
But it should still be better than the old system with 3 files per day (about 1000 files per year).
One solution you can try is symlinks. Assuming you have 10 years of history files in 10 per-year directories, you
symlink as many of them as you need into the "current" directory, then remove the symlinks when you are done.
Why remove the symlinks? I use "ls" to read the list of history files, and Unix/Linux does not have a syscall for
"give me the 100 files with the newest mtime", so I have to read the whole directory. That takes forever with ZFS
on HDD; it is quick with ZFS on SSD if the ZFS cache is hot (you can have a cron job run "ls" every 5 minutes to
keep the ZFS cache hot).
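For illustration, a minimal sketch of that symlink shuffle using C++17 std::filesystem; the directory paths and the "mhf_" file prefix are placeholders taken from the report quoted below, so adapt them to your own layout:
[CODE]
// Sketch only: link one year's history files into the "current" history
// directory, then remove the links again after the mhist dump is done.
#include <filesystem>
#include <iostream>
#include <vector>

namespace fs = std::filesystem;

int main() {
   const fs::path archive = "/data2/history/2022";   // per-year archive (placeholder)
   const fs::path current = "/data2/history";        // directory mhist reads (placeholder)

   std::vector<fs::path> links;
   for (const auto &entry : fs::directory_iterator(archive)) {
      if (entry.path().filename().string().rfind("mhf_", 0) == 0) {
         fs::path link = current / entry.path().filename();
         fs::create_symlink(entry.path(), link);
         links.push_back(link);
      }
   }

   std::cout << "created " << links.size() << " symlinks; run mhist now, then press Enter\n";
   std::cin.get();

   for (const auto &link : links)
      fs::remove(link);   // clean up so "ls" of the current directory stays fast
   return 0;
}
[/CODE]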
Now that I wrote the above, I think I see a way to make it "automatic", let me ponder this. (plus I always
wanted to implement compressed history files (using "free" lz4)).
K.O.
[quote]
I am having some trouble with mhist. I suppose that the problems are at least partially due to our specific
needs which might exceed what has been tested. For context, in MEG II we have some 10^4 history variables in ~30
different events.
1. mhist -l crashes. After displaying around 7000 lines, I get the following error message:
[CODE]
[mhist,ERROR] [midas.cxx:5949:bm_validate_client_index,ERROR] My client index 10 in buffer 'SYSMSG'
is invalid: client name '', pid 0 should be my pid 3773321
[mhist,ERROR] [midas.cxx:5952:bm_validate_client_index,ERROR] Maybe this client was removed by a
timeout. See midas.log. Cannot continue, aborting...
Aborted (core dumped)
[/CODE]
Timing the execution shows around 33 seconds before the process is aborted.
I'm not sure if this would actually fix the problem, but while trying to circumvent the issue, I tried the
following: [CODE]mhist -e "Xenon" -l[/CODE] This doesn't seem to be implemented. Listing only the variables of a
single event would be nice
regardless of our specific issue.
2. mhist and history files.
We have a directory with about 2500 history files (mhf_...dat) for the past 1.5 years. Older
history files are archived in other directories with similar numbers of files. When trying to access them, I
encountered two issues:
It seems like it is not possible to pass a "history directory" as an argument. To dump the history for a full
year in the archive directory, I would need to run mhist many times with -f and then combine all the dumps.
If it really does not work, please consider this a feature request.
Also, even using single files does not work at the moment:
[CODE]
$ mhist -e "Xenon" -v "Det XeTmp 0-0" -t 100000 -s 200101 -p 250101 -f
/data2/history/2022/mhf_1644698398_20220212_xenon.dat
ID 980316009, Aug 13 19:10:56, size 1851749486
[/CODE]
This command was supposed to show me the rough time frame covered in this particular history file. I was
informed that the history files are in the new "FILE" format and mhist might not work with them properly.
tl;dr
[LIST]
[*] Bug: mhist -l crashes
[*] Bug: mhist -f does not work with "FILE" history format
[*] Feature request: mhist -e "Name" -l to only show variables of event "Name"
[*] Feature request: Set temporary history dir with a flag
[/LIST]
Lukas[/quote] |
655
|
08 Oct 2009 |
Exaos Lee | Bug Report | Multiple definition of `SqlODBC::SqlODBC() | I found that the class SqlODBC is defined in two different source files:
$ grep -n "class SqlODBC" src/*
src/history_odbc.cxx:282:class SqlODBC: public SqlBase
src/history_sql.cxx:293:class SqlODBC: public SqlBase
Both of them will be archived into the single library libmidas.a if "HAVE_ODBC" is defined. Then, if you build a shared library, you
get a link error like the following:
Linking CXX shared library lib/libmidas.so
/usr/bin/cmake -E cmake_link_script CMakeFiles/midas-shared.dir/link.txt --verbose=1
/usr/bin/c++ -fPIC -shared -Wl,-soname,libmidas.so -o lib/libmidas.so CMakeFiles/midas-shared.dir/src/ftplib.c.o CMakeFiles/midas-shared.dir/src/midas.c.o CMakeFiles/midas-shared.dir/src/system.c.o CMakeFiles/midas-shared.dir/src/mrpc.c.o CMakeFiles/midas-shared.dir/src/odb.c.o CMakeFiles/midas-shared.dir/src/ybos.c.o CMakeFiles/midas-shared.dir/src/history.c.o CMakeFiles/midas-shared.dir/src/alarm.c.o CMakeFiles/midas-shared.dir/src/elog.c.o CMakeFiles/midas-shared.dir/opt/DAQ/repos/bot/mxml/mxml.c.o CMakeFiles/midas-shared.dir/opt/DAQ/repos/bot/mxml/strlcpy.c.o CMakeFiles/midas-shared.dir/src/history_odbc.cxx.o CMakeFiles/midas-shared.dir/src/history_sql.cxx.o
CMakeFiles/midas-shared.dir/src/history_sql.cxx.o: In function `SqlODBC':
/opt/DAQ/repos/bot/midas/src/history_sql.cxx:200: multiple definition of `SqlODBC::SqlODBC()'
CMakeFiles/midas-shared.dir/src/history_odbc.cxx.o:/opt/DAQ/repos/bot/midas/src/history_odbc.cxx:315: first defined here
...
history_odbc.cxx:727: first defined here
collect2: ld returned 1 exit status
make[2]: *** [lib/libmidas.so] Error 1
Why is the class "SqlODBC" duplicated? |
657
|
09 Oct 2009 |
Konstantin Olchanski | Bug Report | Multiple definition of `SqlODBC::SqlODBC() | > Linking CXX shared library lib/libmidas.so
/usr/bin/c++ ... -o lib/libmidas.so ... CMakeFiles/midas-shared.dir/src/history_odbc.cxx.o
CMakeFiles/midas-shared.dir/src/history_sql.cxx.o
CMakeFiles/midas-shared.dir/src/history_sql.cxx.o: In function `SqlODBC':
/opt/DAQ/repos/bot/midas/src/history_sql.cxx:200: multiple definition of `SqlODBC::SqlODBC()'
CMakeFiles/midas-
shared.dir/src/history_odbc.cxx.o:/opt/DAQ/repos/bot/midas/src/history_odbc.cxx:315: first defined
here
> Why is the class "SqlODBC" duplicated?
This is interesting. I do not think my C++ book spells it out that I cannot have class A in foo.cxx
and a class A in bar.cxx. In fact I already discovered that the two classes A will collide (duplicate default
constructor A::A, even if all other member functions are different) if I link an executable that uses both
foo.o and bar.o.
Is there a way around this problem? In C, I can declare functions and variables "static" to "hide" them
from the linker.
What is the C++ equivalent for classes? (Any C++ experts here?)
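For reference, the usual C++ answer to the "static" question is an unnamed namespace: a class defined inside one has internal linkage, so identically named classes in different translation units no longer collide. A minimal sketch, with hypothetical file and class names:
[CODE]
// foo.cxx -- hypothetical translation unit
namespace {              // unnamed namespace: everything inside has internal linkage
class SqlHelper {        // a second SqlHelper in bar.cxx would not collide with this one
public:
   SqlHelper() : fCount(0) {}
   void Use() { ++fCount; }
private:
   int fCount;
};
} // namespace

void foo_work() {
   SqlHelper h;   // visible only inside foo.cxx
   h.Use();
}
[/CODE]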
In this specific case, the file history_odbc.cxx will disappear with the next commit of mhttpd, hopefully
today. mlogger and mh2sql already do not use it.
And I will commit the Makefile change renaming libmidas.so to libmidas-shared.so, so we can build it
by default but still link midas core executables to the normal midas library.
This will catch such a problem if it happens again.
K.O. |
661
|
11 Oct 2009 |
Konstantin Olchanski | Bug Report | Multiple definition of `SqlODBC::SqlODBC() | > > Why is the class "SqlODBC" duplicated?
>
> This is interesting. I do not think my C++ book spells it out that I cannot have class A in foo.cxx
> and a class A in bar.cxx.
I guess nobody knows the answer to this C++ puzzle. In any case, history_odbc.cxx is no longer used, which removes the duplication of class SqlODBC.
svn rev 4594
K.O. |
413
|
17 Oct 2007 |
Randolf Pohl | Forum | Multi-core CPUs | Dear Forum,
I have this beautiful Intel Quadcore with fast disks, but MIDAS obviously
only makes use of one CPU at a time. Has anybody of you already done some work
on making MIDAS parallel? Event-based data analysis should be the best
candidate for this.
Has anybody done this with PVM? There is some PVM-related stuff in the MIDAS
sources, but I got the impression this works only with HBOOK, not with ROOT.
Or am I wrong?
But then PVM is probably also not the most efficient thing on ONE machine
with multiple CPUs, right? And finally, with PVM we're back to
adding .root files efficiently (see my previous post).
Any thoughts?
Cheers,
Randolf |
414
|
17 Oct 2007 |
Stefan Ritt | Forum | Multi-core CPUs | > I have this beautiful Intel Quadcore with fast disks, but MIDAS obviously
> only makes use of one CPU at a time. Has anybody of you already done some work
> on making MIDAS parallel? Event-based data analysis should be the best
> candidate for this.
There are ring buffer routines rb_xxx for distributed event analysis, but this is
currently only implemented in the front-end framework. These routines are pretty
simple, and their integration into the analyzer should not be very difficult.
Unfortunately I don't have time for that right now. We do our analysis such that we
analyze four different runs in parallel on a quadcore machine.
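For reference, a rough sketch of how those rb_xxx routines pair up between a producer and a consumer thread, assuming the rb_* functions declared in midas.h; the buffer sizes are placeholders and the DB_SUCCESS/DB_TIMEOUT return codes should be double-checked against that header:
[CODE]
#include <cstring>
#include "midas.h"   // rb_create(), rb_get_wp(), rb_increment_wp(), rb_get_rp(), rb_increment_rp(), EVENT_HEADER

static int rbh = 0;   // ring buffer handle shared by the producer and consumer threads

void setup() {
   // 10 MB ring buffer, 1 MB maximum event size (placeholder numbers)
   rb_create(10 * 1024 * 1024, 1024 * 1024, &rbh);
}

// producer thread: copy a finished event into the ring buffer
void produce(const char *event, int size) {
   void *wp = nullptr;
   if (rb_get_wp(rbh, &wp, 100) == DB_SUCCESS) {   // wait up to 100 ms for free space
      std::memcpy(wp, event, size);
      rb_increment_wp(rbh, size);                  // hand the event to the consumer
   }
}

// consumer thread: analyze the next event, if there is one
void consume() {
   void *rp = nullptr;
   if (rb_get_rp(rbh, &rp, 100) == DB_SUCCESS) {   // wait up to 100 ms for an event
      EVENT_HEADER *pevent = (EVENT_HEADER *) rp;
      // ... analysis of the event body would go here ...
      rb_increment_rp(rbh, sizeof(EVENT_HEADER) + pevent->data_size);
   }
}
[/CODE]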
- Stefan |
1918
|
22 May 2020 |
Thomas Lindner | Bug Report | More trouble with openssl on macos | For the record, here's my report of difficulties getting mongoose to compile with macos. This is similar to a
problem reported before, but with slightly different error messages, so I put them here for posterity.
Setup:
- macos: 10.13.6
- xcode: 9.2
- gcc: Thomass-MacBook-Pro-3:build lindner$ gcc --version
Configured with: --prefix=/Applications/Xcode.app/Contents/Developer/usr --with-gxx-include-
dir=/usr/include/c++/4.2.1
Apple LLVM version 9.0.0 (clang-900.0.39.2)
Target: x86_64-apple-darwin17.7.0
- midas: today's version
Start with the openssl version I had already installed. cmake says
-- Found OpenSSL: /usr/lib/libcrypto.dylib (found version "1.0.2s")
-- MIDAS: Found OpenSSL version 1.0.2s
make install fails with this error:
[ 35%] Linking CXX executable mhttpd
cd /Users/lindner/packages/midas/build/progs && /usr/local/Cellar/cmake/3.15.0/bin/cmake -E
cmake_link_script CMakeFiles/mhttpd.dir/link.txt --verbose=1
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/c++ -O2 -g -
DNDEBUG -Wl,-search_paths_first -Wl,-headerpad_max_install_names CMakeFiles/mhttpd.dir/mhttpd.cxx.o
CMakeFiles/mhttpd.dir/mongoose616.cxx.o CMakeFiles/mhttpd.dir/mgd.cxx.o
CMakeFiles/mhttpd.dir/__/mscb/src/mscb.cxx.o -o mhttpd ../libmidas.a /usr/lib/libssl.dylib
/usr/lib/libcrypto.dylib -lz -lcurl -lsqlite3
Undefined symbols for architecture x86_64:
"_SSL_CTX_set_psk_client_callback", referenced from:
_mg_ssl_if_conn_init in mongoose616.cxx.o
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
Use macports to upgrade to newer openssl
sudo port selfupdate
sudo port upgrade outdated
Now cmake says
-- Found OpenSSL: /usr/lib/libcrypto.dylib (found version "1.1.1g")
-- MIDAS: Found OpenSSL version 1.1.1g
Error message is different now:
cd /Users/lindner/packages/midas/build/progs && /usr/local/Cellar/cmake/3.15.0/bin/cmake -E
cmake_link_script CMakeFiles/mhttpd.dir/link.txt --verbose=1
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/c++ -O2 -g -
DNDEBUG -Wl,-search_paths_first -Wl,-headerpad_max_install_names CMakeFiles/mhttpd.dir/mhttpd.cxx.o
CMakeFiles/mhttpd.dir/mongoose616.cxx.o CMakeFiles/mhttpd.dir/mgd.cxx.o
CMakeFiles/mhttpd.dir/__/mscb/src/mscb.cxx.o -o mhttpd ../libmidas.a /usr/lib/libssl.dylib
/usr/lib/libcrypto.dylib -lz -lcurl -lsqlite3
Undefined symbols for architecture x86_64:
"_OPENSSL_init_ssl", referenced from:
_mg_mgr_init_opt in mongoose616.cxx.o
_mg_ssl_if_init in mongoose616.cxx.o
"_SSL_CTX_set_options", referenced from:
_mg_ssl_if_conn_init in mongoose616.cxx.o
"_SSL_CTX_set_psk_client_callback", referenced from:
_mg_ssl_if_conn_init in mongoose616.cxx.o
ld: symbol(s) not found for architecture x86_64
Fine. Doing 'cmake -D NO_SSL=1 ..' to build still works fine; I will stick with that since I don't need SSL on my
laptop.
Perhaps we should disable SSL by default on macOS? People may only have a ~50% chance of getting it to
work.
|
1919
|
22 May 2020 |
Konstantin Olchanski | Bug Report | More trouble with openssl on macos | > For the record, here's my report of difficulties getting mongoose to compile with macos.
> -- MIDAS: Found OpenSSL version 1.0.2s
> -- MIDAS: Found OpenSSL version 1.1.1g
> ... [ all of them did not work ]
For the record, I get this on mac os 10.15.4 and it works.
-- MIDAS: Found OpenSSL version 1.1.1g
I think I am quite fed up with this openssl business, too.
What I will do in MIDAS is fix the mbedtls detection, add mbedtls instructions
in the documentation and remove openssl from mhttpd build.
Result will be:
- the default build will have mhttpd without https support, and this works in 100% of our use cases at TRIUMF.
- if users do not want to use an apache https proxy, they have to "git clone" mbedtls, build it and rebuild mhttpd; then
they get https support, but for https certificate management - getting certificates, renewing them, etc. - they are
on their own.
Since mhttpd has no integration with certbot, automatic management of https certificates does not work,
so good luck again.
In theory, I can try to add certbot integration, but even the most basic tools are missing, for example, openssl
does not report certificate expiration dates (I guess I must write my own code to examine the certificate
and hope my idea of expiration matches their idea). Since I do not see certificate expiration dates, every day I could
blindly run "certbot renew" and restart openssl with the updated certificate (but I think openssl does
not have a "restart" function, so again, forget about it). Adding insult to injury, by default, certbot stores certificates
in a secret location in /etc where mhttpd cannot access them.
The bottom line is that the powers-that-be messed up https certificate management, and until that is sorted out and is easy
to integrate with custom web servers, I can only recommend that mhttpd run behind an OS-supported https proxy.
K.O. |
2283
|
11 Oct 2021 |
Stefan Ritt | Info | Modification in the history logging system | A requested change in the history logging system has been made today. Previously, history values were
logged with a maximum frequency (usually once per second) but also with a minimum frequency, meaning
that values were logged, for example, every 60 seconds even if they did not change. This causes a problem:
if a frontend which produces variables to be logged is inactive or has crashed, one cannot distinguish between
the crashed or inactive frontend program and a history value which simply did not change much over time.
The history system was designed from the beginning in a way that values are only logged when they actually
change. This design pattern had been broken since about spring 2021; see for example this issue:
https://bitbucket.org/tmidas/midas/issues/305/log_history_periodic-doesnt-account-for
Today I modified the history code to fix this issue. History logging is now controlled by the value of
Common/Log history in the following way:
* Common/Log history = 0 means no history logging
* Common/Log history = 1 means log whenever the value changes in the ODB
* Common/Log history = N means log whenever the value changes in the ODB and
the previous write was more than N seconds ago
So most experiments should be happy with 0 or 1. Only experiments which have fluctuating values due to noisy
sensors might benefit from a value larger than 1 to limit the history logging. Anyhow, this is not the preferred
way to limit history logging. This should be done by the front-end limiting the updates to the ODB. Most of the
midas slow control drivers have a "threshold" value: only if the input changes by more than the threshold is the
new value written to the ODB. This gives a per-channel "dead band" rather than the per-event limit on history
logging that 'Log history' imposes. In addition, the threshold reduces the write accesses to the ODB, although that is
only important for very large experiments.
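For illustration, a minimal sketch of such a per-channel dead band in front-end code, assuming the standard db_set_value() call from midas.h; the ODB path, variable name and threshold value are placeholders:
[CODE]
#include <cmath>
#include "midas.h"   // HNDLE, db_set_value(), TID_FLOAT

// Write the new reading to the ODB (and thus to the history) only when it has
// moved by more than "threshold" since the last value that was written.
// hDB would come from cm_get_experiment_database() in a real frontend.
void update_channel(HNDLE hDB, float new_value) {
   static float last_written = 0.0f;
   const float threshold = 0.1f;   // per-channel dead band (placeholder)

   if (std::fabs(new_value - last_written) > threshold) {
      db_set_value(hDB, 0, "/Equipment/Environment/Variables/Temperature",
                   &new_value, sizeof(new_value), 1, TID_FLOAT);
      last_written = new_value;
   }
}
[/CODE]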
Stefan |
511
|
21 Oct 2008 |
Randolf Pohl | Forum | Mixed CAMAC/VME frontend, SIS3100 | Dear MIDAS-addicts,
I would like to hear your opinion on this:
We've until now used CAMAC with Hytec 1331 controllers. We're using Yale FADCs
whose readout takes ages in CAMAC (2048 samples take 2 milliseconds to be
read). We've got 20+ FADC channels (we usually read only 2-3)
Now we've had the brilliant idea to replace the Yale FADCs with some VME
digitizer and we now plan to buy a Struck SIS 1100/3100 PCI-VME controller,
plus 4 pc. CAEN 1720 8ch 12bit, 250MHz WFD.
(1) Can anybody comment on this choice? Good experiences/problems?
We are still using the CAMAC stuff for all other modules (TDCs, ADCs,
scalers). So my plan is to have ONE frontend who reads both the CAMAC modules
and the VME modules.
(2) Is it possible to build and run a dual-controller frontend for both CAMAC
and VME? Does anybody have experience with that? Or is it a stupid idea?
I'd appreciate any hints.
[Edit: We're using Linux]
Thanks a lot,
Randolf |
512
|
22 Oct 2008 |
Stefan Ritt | Forum | Mixed CAMAC/VME frontend, SIS3100 | > Dear MIDAS-addicts,
>
> I would like to hear your opinion on this:
> We've until now used CAMAC with Hytec 1331 controllers. We're using Yale FADCs
> whose readout takes ages in CAMAC (2048 samples take 2 milliseconds to be
> read). We've got 20+ FADC channels (we usually read only 2-3)
>
> Now we've had the brilliant idea to replace the Yale FADCs with some VME
> digitizer and we now plan to buy a Struck SIS 1100/3100 PCI-VME controller,
> plus 4 pc. CAEN 1720 8ch 12bit, 250MHz WFD.
>
> (1) Can anybody comment on this choice? Good experiences/problems?
>
> We are still using the CAMAC stuff for all other modules (TDCs, ADCs,
> scalers). So my plan is to have ONE frontend who reads both the CAMAC modules
> and the VME modules.
>
> (2) Is it possible to build and run a dual-controller frontend for both CAMAC
> and VME? Does anybody have experience with that? Or is it a stupid idea?
>
> I'd appreciate any hints.
>
> [Edit: We're using Linux]
>
> Thanks a lot,
>
> Randolf
Dear Randolf,
Some time ago I used several HYTEC 1331 controllers together with the Struck
SIS3100. Since the HYTEC is IO-mapped and the SIS3100 is memory-mapped, there was
no problem in running them in parallel. Note however that there will soon be an
improved version of the SIS3100 with higher speed, and CAEN also plans a WFD
with 32 channels, 6 GSPS, 12 bit, using the DRS chip, for next year. I don't
know if you need that, but just so you know.
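As a rough illustration of what such a dual-controller frontend can look like, here is a sketch assuming the mcstd.h (CAMAC) and mvmestd.h (VME) interfaces that ship with MIDAS; the crate/slot numbers, the VME address and the bank names are placeholders, and the exact driver calls should be checked against your installation:
[CODE]
#include "midas.h"     // bk_init(), bk_create(), bk_close(), bk_size()
#include "mcstd.h"     // CAMAC access: cam_init(), cam16i(), ...
#include "mvmestd.h"   // VME access: mvme_open(), mvme_read_value(), ...

static MVME_INTERFACE *gVme = NULL;

// called once at frontend startup (simplified; in mfe.cxx this would be frontend_init())
INT init_hardware()
{
   cam_init();                            // initialize the CAMAC (HYTEC) interface
   mvme_open(&gVme, 0);                   // open the first VME (SIS1100/3100) interface
   mvme_set_am(gVme, MVME_AM_A32);        // address modifier (placeholder)
   mvme_set_dmode(gVme, MVME_DMODE_D32);  // 32-bit data cycles
   return SUCCESS;
}

// one readout routine touching both buses (simplified mfe-style readout callback)
INT read_mixed_event(char *pevent, INT off)
{
   bk_init(pevent);

   WORD *pcamac;
   bk_create(pevent, "CAMA", TID_WORD, (void **) &pcamac);
   cam16i(1, 5, 0, 0, pcamac);            // crate 1, slot 5, A0, F0 (placeholders)
   pcamac++;
   bk_close(pevent, pcamac);

   DWORD *pvme;
   bk_create(pevent, "VMEA", TID_DWORD, (void **) &pvme);
   *pvme++ = mvme_read_value(gVme, 0x32100000);   // placeholder VME address
   bk_close(pevent, pvme);

   return bk_size(pevent);
}
[/CODE]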
Best regards,
Stefan |
2300
|
09 Nov 2021 |
Hunter Lowe | Forum | MityCAMAC Login | Hello all,
I've recently acquired a MityCAMAC system that was built at TRIUMF and I'm
having issues accessing it over ethernet.
The system: Ubuntu VM inside Windows 10 machine.
I've tried reconfiguring the network settings for the VM but nmap and arp/ip
commands have yielded me no results in finding the crate controller.
I was getting help from Pierre Amaudruz but I think he is now busy for some
time. I have the mac address of the crate controller and its name. The
controller seems to initialize fine inside of the CAMAC crate. The windows side
of the workstation also tells me that an unknown network is in fact connected.
I suspect I either need to do something with an ssh key (which I thought we
accomplished but maybe not), or perhaps the domain name in the controller needs
to be changed.
If anybody has experience working with MityARM I would appreciate any advice I
could get.
Best,
Hunter Lowe
UNBC Graduate Physics |
2302
|
11 Nov 2021 |
Thomas Lindner | Forum | MityCAMAC Login | Hi Hunter,
This sounds like a TRIUMF-specific problem, not a MIDAS problem. Please email me directly and we can try to solve it.
Thomas Lindner
TRIUMF DAQ
> Hello all,
>
> I've recently acquired a MityCAMAC system that was built at TRIUMF and I'm
> having issues accessing it over ethernet.
>
> The system: Ubuntu VM inside Windows 10 machine.
>
> I've tried reconfiguring the network settings for the VM but nmap and arp/ip
> commands have yielded me no results in finding the crate controller.
>
> I was getting help from Pierre Amaudruz but I think he is now busy for some
> time. I have the mac address of the crate controller and its name. The
> controller seems to initialize fine inside of the CAMAC crate. The windows side
> of the workstation also tells me that an unknown network is in fact connected.
>
> I suspect I either need to do something with an ssh key (which I thought we
> accomplished but maybe not), or perhaps the domain name in the controller needs
> to be changed.
>
> If anybody has experience working with MityARM I would appreciate any advice I
> could get.
>
> Best,
> Hunter Lowe
> UNBC Graduate Physics |
2316
|
26 Jan 2022 |
Konstantin Olchanski | Info | MityCAMAC Login | For those curious about CAMAC controllers, this one was built around 2014 to
replace the aging CAMAC A1/A2 controllers (parallel and serial) in the TRIUMF
cyclotron controls system (around 50 CAMAC crates). It implements the main
and the auxiliary controller mode (single width and double width modules).
The design predates Altera Cyclone-5 SoC and has separate
ARM processor (TI 335x) and Cyclone-4 FPGA connected by GPMC bus.
The ARM processor boots a Linux kernel and a CentOS-7 userland from an SD card;
the FPGA boots from its own EPCS flash.
User program running on the ARM processor (i.e. a MIDAS frontend)
initiates CAMAC operations, FPGA executes them. Quite simple.
K.O. |
2137
|
25 Mar 2021 |
Lars Martin | Bug Report | Minor bug: Change all time axes together doesn't work with +- buttons | Version: release/midas-2020-12
In the new history display, the checkbox "Change all time axes together" works
well with the mouse-based zoom, but does not apply to the +- buttons. |
2155
|
14 Apr 2021 |
Stefan Ritt | Bug Report | Minor bug: Change all time axes together doesn't work with +- buttons | > Version: release/midas-2020-12
>
> In the new history display, the checkbox "Change all time axes together" works
> well with the mouse-based zoom, but does not apply to the +- buttons.
Fixed in current commit.
Stefan |
1964
|
15 Jul 2020 |
Stefan Ritt | Info | Minimal CMakeLists.txt for your midas front-end | Since a few people asked me, here is a "minimal" CMakeLists.txt file for a user-written front-end
program "myfe":
---------------------------
cmake_minimum_required(VERSION 3.0)
project(myfe)
# Check for MIDASSYS environment variable
if (NOT DEFINED ENV{MIDASSYS})
   message(SEND_ERROR "MIDASSYS environment variable not defined.")
endif()
set(CMAKE_CXX_STANDARD 11)
set(MIDASSYS $ENV{MIDASSYS})
if (${CMAKE_SYSTEM_NAME} MATCHES Linux)
   set(LIBS -lpthread -lutil -lrt)
endif()
add_executable(myfe myfe.cxx)
target_include_directories(myfe PRIVATE ${MIDASSYS}/include)
target_link_libraries(myfe ${MIDASSYS}/lib/libmfe.a ${MIDASSYS}/lib/libmidas.a ${LIBS}) |
2661
|
27 Dec 2023 |
Konstantin Olchanski | Forum | MidasWiki updated to 1.39.6 | MidasWiki was updated to current mediawiki LTS 1.39.6 supported until Nov 2025,
see https://www.mediawiki.org/wiki/Version_lifecycle
As a downside, after this update I see large amounts of "account request" spam,
something that did not exist before. I suspect the new mediawiki phones home to
subscribe itself to some "please spam me" list.
If you want a user account on MidasWiki, please email me or Stefan directly and we
will make it happen.
K.O. |
2958
|
19 Mar 2025 |
Konstantin Olchanski | Forum | MidasWiki special page access now restricted to logged in users | We have a problem with overly aggressive AI bots. They are scanning everything and
are putting a crazy load on the mediawiki mysql database. This makes all our
wikis very slow and unresponsive, which is a big problem.
These AI scanners do not seem to know what they are doing and are trying to load
all the special pages, like "what links here" and "recent changes" and complete
revision histories for each page. I am not impressed. Old school google and bing
scanner bots are much more respectful and do not cause problems.
AI bots are also scanning this elog, and to Stefan's credit elogd is holding the
load just fine.
To reduce load on the MidasWiki mysql database, I now restricted access to most
special pages to logged in users. It seems to have helped.
If this causes big difficulties, or if you have suggestions for better ways to deal
with aggressive AI scanner bots, please speak up.
One alternative is to put MidasWiki behind a generic login page with a well
known password. Downside is it will block google and bing search engines, too.
K.O. |
2329
|
07 Feb 2022 |
Konstantin Olchanski | Forum | MidasWiki moved from ladd00 to daq00.triumf.ca and updated to MediaWiki 1.35 | MidasWiki moved from ladd00 (obsolete SL6) to daq00.triumf.ca (Ubuntu LTS 20.04)
and updated from obsolete MediaWiki LTS 1.27.7 to MediaWiki LTS 1.35, supported
until mid-2023, see https://www.mediawiki.org/wiki/Version_lifecycle
Old URL https://midas.triumf.ca and https://midas.triumf.ca/MidasWiki/...
redirect to new URL https://daq00.triumf.ca/MidasWiki/index.php/Main_Page
All old links and bookmarks should continue to work (via redirect).
To report problems with this MediaWiki instance and to request
any changes in configuration or installed extensions, please reply to this
message here.
K.O. |