ID | Date | Author | Topic | Subject
482 | 28 May 2008 | Konstantin Olchanski | Info | Roll-back for history system added
> > But to make things more interesting we had another history outage this week...
> > Anyhow, I now have a patch to allow hs_read() to "skip the bad spots" in history files.
>
> [Stefan suggested]
>
> if ((irec.time - last_irec_time) > 3600*24)
Yes, your stronger check works quite nicely. The whole patch is now committed into SVN,
revision 4202.
This is how it all works:
0) teach hs_gen_index() to skip over bad data. This is important because hs_read() only
looks at data records listed in the index file: if bad data is omitted from the index,
hs_read() will never see it and we do not need to worry about it in hs_read().
0a) because hs_gen_index() does not check validity of time stamps, we still need to check
them in hs_read().
1) in hs_read(), if we detect bad data (invalid headers, bad time stamps, etc), we
regenerate the index files - this removes a whole class of bad data. We also look at time
stamps carefully and ignore records where time goes backwards (usually bad data) and ignore
records with time in the future beyond the end of the current history file (each history
file only contains 24*60*60 seconds = 1 day's worth of data), as sketched below.
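A minimal sketch of this kind of time-stamp check (not the exact code committed in revision 4202, just the idea, reusing the variable names from the snippet quoted above):
   /* sketch only: inside the loop over records, skip records whose time stamp is implausible */
   if (irec.time < last_irec_time)
      continue;                       /* time goes backwards: bad data, skip record          */
   if (irec.time - last_irec_time > 24*60*60)
      continue;                       /* jumps past the end of this 1-day file: skip record  */
   last_irec_time = irec.time;        /* record looks sane, remember its time stamp          */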
While certainly not bullet-proof, these changes should make it easier to deal with
corruption of history files.
K.O. |
225 | 03 Oct 2005 | Stefan Ritt | Info | Revised MVMESTD API
Dear MIDAS users and developers,
The "Midas VME Standard API" has been revised. We tried to incorporate all
comments and ideas we got so far. The mvme_ioctl() function was abandoned in
favor of several mvme_get/set_xxx functions. Furthermore, two additional
functions for read and write have been implemented to simplify reading and writing
single values over VME. The current API looks like this:
int mvme_open(MVME_INTERFACE **vme, int index);
int mvme_close(MVME_INTERFACE *vme);
int mvme_sysreset(MVME_INTERFACE *vme);
int mvme_read(MVME_INTERFACE *vme, void *dst, mvme_addr_t vme_addr,
mvme_size_t n_bytes);
DWORD mvme_read_value(MVME_INTERFACE *vme, mvme_addr_t vme_addr);
int mvme_write(MVME_INTERFACE *vme, mvme_addr_t vme_addr, void *src,
mvme_size_t n_bytes);
int mvme_write_value(MVME_INTERFACE *vme, mvme_addr_t vme_addr, DWORD value);
int mvme_set_am(MVME_INTERFACE *vme, int am);
int mvme_get_am(MVME_INTERFACE *vme, int *am);
int mvme_set_dmode(MVME_INTERFACE *vme, int dmode);
int mvme_get_dmode(MVME_INTERFACE *vme, int *dmode);
int mvme_set_blt(MVME_INTERFACE *vme, int mode);
int mvme_get_blt(MVME_INTERFACE *vme, int *mode);
The MVME_INTERFACE structure holds all internal data, similar to the FILE
structure in stdio.h. If several VME interfaces (of the same type) are present
in a PC, the function mvme_open can be called once for each crate, specifying
the index. The block transfer modes passed to mvme_set_blt control the usage of
DMA, MBLT64 and so on. Not all interfaces might support all modes, in which case
mvme_set_blt should return MVME_UNSUPPORTED. Then it's up to the user code to
ignore this error or choose a different mode.
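To make the calling sequence concrete, here is a minimal usage sketch. The VME address and the specific MVME_AM_xxx / MVME_DMODE_xxx constants below are only examples; check mvmestd.h for the exact names.
   MVME_INTERFACE *vme;
   DWORD value;
   int status;

   status = mvme_open(&vme, 0);              /* first interface in this PC  */
   if (status != MVME_SUCCESS)
      return status;
   mvme_set_am(vme, MVME_AM_A24);            /* A24 addressing              */
   mvme_set_dmode(vme, MVME_DMODE_D16);      /* 16-bit data accesses        */
   value = mvme_read_value(vme, 0x100000);   /* single-value read           */
   mvme_write_value(vme, 0x100002, 0x1);     /* single-value write          */
   mvme_close(vme);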
So far we have implemented drivers for the SIS3100, SBS617/SBS618 and VMIC
interfaces using this standard. It should be noted that the VMIC uses solely
memory mapped VME I/O, which is completely hidden in the VMIC MVMESTD driver.
We would like to encourage people to switch to the revised MVMESTD API wherever
possible. If new drivers for ADCs and TDCs for example are written using this
standard, groups with different VME interfaces can use them without modification.
Although the standard works now for three different interfaces, it might be that
new interfaces need slight additions. They should be identified as soon as
possible, in order to adapt the MVMESTD quickly and freeze the API soon.
Interrupts are not (yet) implemented in the MVMESTD, because most experiments
use polling anyhow. If someone needs interrupt support, please say so quickly
and make a proposal for its implementation. |
98 | 17 Nov 2003 | Stefan Ritt | | Revised MVMESTD
Let me propose a revised scheme for midas standard VME calls (mvmestd.h).
Pierre mentioned some limitations before, and I now also see some areas
to improve. Right now, the vme_open() call retrieves a handle. For some
interfaces (like SBS/Bit3), one has to obtain separate handles for
different addressing modes A24D32/A32D32 and so on, which I find a bit
troublesome. I would rather keep the handle internally, invisible to the
user, and use ioctl() statements to change the address/data mode.
So the API could look like:
vme_open() Deprecated, will be removed
vme_init(void) Standard initialization, open device(s), stores handles
internally in a table
vme_exit(void) Deallocates any memory, close handles
vme_read(void *dst, DWORD vme_addr, DWORD size)
vme_write(void *src, DWORD vme_addr, DWORD size)
vme_ioctl(int request, int *param)
Request is one of
VME_IOCTL_CRATE_SET/GET
Sets VME crate (in case several interfaces are
plugged into a single PC, meaningless for embedded CPUs)
VME_IOCTL_DEST_SET/GET
VME_BUS/VME_RAM/VME_LM for VME bus, RAM in VME
interface, or LM for local memory (used in Bit3
interface)
VME_IOCTL_AMOD_SET/GET
Sets/Retrieves VME AMOD (= VME_AMOD_xxx as currently
defined in mvmestd.h)
VME_IOCTL_DSIZE_SET/GET
Sets/Retrieves VME data size (D8/D16/D32/D64)
VME_IOCTL_DMA_SET/GET
Enable/Disable DMA, should be independent of AMOD
VME_IOCTL_INTR_ATTACH/DETACH/ENABLE/DISABLE
Set VME interrupts
VME_IOCTL_AUTO_INCR_SET/GET
Set autoincrement of the source pointer, can be disabled
for FIFO readout
vme_mmap(void **ptr, DWORD vme_addr, DWORD size)
vme_unmap(void *ptr, DWORD size)
Map/Unmap VME to local memory
vme_read2(void *dst, DWORD vme_addr, DWORD size, DWORD flags)
vme_write2(void *src, DWORD vme_addr, DWORD size, DWORD flags)
With these functions one can directly specify the flags
usually managed by vme_ioctl(). Useful for applications
where the address modifier for example has to be
different in each read/write operation.
Note that the vme_read/write functions do not have a VME handle any more,
nor an address modifier. This is all accomplished with vme_ioctl() calls.
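To see how this would look in user code, here is a rough sketch of the proposed calls in use (nothing below exists yet; the names and constants simply follow the proposal above):
   char buffer[256];
   int amod = VME_AMOD_A24;     /* whatever A24 constant mvmestd.h defines   */
   int dsize = 2;               /* D16; the exact encoding is still open     */
   int dma = 1;

   vme_init();                              /* open device(s), default crate */
   vme_ioctl(VME_IOCTL_AMOD_SET, &amod);    /* address modifier for accesses */
   vme_ioctl(VME_IOCTL_DSIZE_SET, &dsize);  /* data size for accesses        */
   vme_read(buffer, 0x100000, 2);           /* single D16 read               */
   vme_ioctl(VME_IOCTL_DMA_SET, &dma);      /* enable DMA for block transfer */
   vme_read(buffer, 0x200000, sizeof(buffer));
   vme_exit();                              /* close handles, free memory    */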
Please have a look at this proposal, compare it with what you do currently
in VME, and let me know if we should add/modify something. I volunteer to
implement the API for the SBS/Bit3 617 and the Struck SIS1100/3100
interfaces; for VxWorks, somebody at TRIUMF should take care of it. |
99 | 20 Nov 2003 | Pierre-André Amaudruz, Konstantin Olchanski | | Revised MVMESTD
Before we try to merge the different access schemes for the different VME hardware,
we present the "optimal" configuration for the VMIC setup. This is a first shot so take it
with caution.
From these definitions, we should be able to work out a compromise and come up with
a satisfactory standard.
A) The VMIC vme_slave_xxx() options are not considered.
B) The interrupt handling can certainly match the 4 entries required in the user frontend
code i.e. Attach, Detach, Enable, Disable.
I don't understand your argument that the handle should be hidden. In case of multiple
interfaces, how do you refer to a particular one if not specified?
The following scheme does require a handle for referring to the proper (device AND window).
1) deviceHandle = vme_init(int devNumber);
Even though the VMIC doesn't deal with multiple devices,
the SIS/PCI does and needs to init on a specific PCI card.
Internally:
opening of the device (/dev/sisxxxx_1) (ignored in case of VMIC).
Possibly including a mapping to a default VME region of default size with default AM
(VMIC: 16MB, A24). This way, in a single call you get a valid handle for full VME access
in A24 mode. This option needs to be elaborated. But in principle you need to declare the
VME region that you want to work on (vme_map).
2) mapHandle = vme_map(int deviceHandle, int vmeAddress, int am, int size);
Return a mapHandle specific to a device and window. The am has to be specified.
Whatever the operations needed to get there, the mapHandle is a reference to that setting.
It could just fill a map structure.
Internally:
WindowHandle[deviceHandle] = vme_master_create(BusHandle[deviceHandle], ...
WindowPtr[WindowHandle] = vme_master_window(BusHandle[deviceHandle]
, WindowHandle[deviceHandle]...
3) vme_setmode(mapHandle, const int DATA_SIZE, const int AM
, const BOOL ENA_DMA, const BOOL ENA_FIFO);
Mainly used for vme_block_read/write. Defines, for the following reads, the data size and
AM in case of DMA (a different DMA mode than in the window definition could be used for
optimal transfer).
Predefine the mode of access:
DATA_SIZE : D8, D16, D32
AM : A16, A24, A32, etc...
enaDMA : optional if available.
enaFIFO : optional for block read for autoincrement source pointer.
Remark:
PAA- I can imagine this function to be a vme_ioctl(int mapHandle, int *param)
such that extension of functionality is possible. But by passing const int
arguments, the optimizer is able to substitute and reduce the internal code.
4)
uint_8Value = vme_readD8 (int mapHandle, uint_64 vmeSrceOffset)
uint_16Value = vme_readD16 (int mapHandle, uint_64 vmeSrceOffset)
uint_32Value = vme_readD32 (int mapHandle, uint_64 vmeSrceOffset)
Single VME read access. In the VMIC case, this access is always through mapping.
Value = *(WindowPtr[WindowHandle] + vmeSrceOffset)
or
Value = *(WindowStruct->WindowPtr[WindowHandle] + vmeSrceOffset)
5)
status = vme_writeD8 (int mapHandle, uint_64 vmeSrceOffset, uint_8 Value)
status = vme_writeD16 (int mapHandle, uint_64 vmeSrceOffset, uint_16 Value)
status = vme_writeD32 (int mapHandle, uint_64 vmeSrceOffset, uint_32 Value)
Single VME write access.
6)
nBytes = vme_block_read(mapHandle, char * pDest, uint_64 vmeSrceOffset, int size);
Multiple read access. Can be done through standard do loop or DMA if available.
nBytes < 0 : error
Incremented pDest = (pDest + nBytes); Don't need to pass **pDest for autoincrement.
7)
nBytes = vme_block_write(mapHandle, uint_64 vmeSrceOffset, char *pSrce, int size);
Multiple write access.
nBytes < 0 : error
Incremented pSrce = (pSrce + nBytes); Don't need to pass **pSrce for autoincrement.
8) status = vme_unmap(int mapHandle)
Cleanup internal pointers or structure of given mapHandle only.
9) status = vme_exit()
Cleanup deviceHandle and release device.
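To summarize the proposal, here is a purely illustrative calling sequence (none of these functions exist yet; the names and constants simply follow the items above):
   int dev, map;
   uint_32 value;
   char buffer[4096];

   dev = vme_init(0);                                   /* first (or only) VME interface     */
   map = vme_map(dev, 0x500000, VME_AMOD_A24, 0x10000); /* 64 kB window at 0x500000, A24     */
   vme_setmode(map, D32, A24, FALSE, FALSE);            /* single-cycle D32 accesses, no DMA */
   value = vme_readD32(map, 0x0);                       /* register at the window base       */
   vme_writeD32(map, 0x4, 0x1);                         /* single write                      */
   vme_block_read(map, buffer, 0x100, sizeof(buffer));  /* block transfer, DMA if enabled    */
   vme_unmap(map);                                      /* release this window               */
   vme_exit();                                          /* release the device                */
Again, this is only meant to illustrate how the proposed pieces fit together. |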
100 | 21 Nov 2003 | Stefan Ritt | | Revised MVMESTD
Thanks for your contribution. Let me try to map your functionality to mvmestd calls:
> A) The VMIC vme_slave_xxx() options are not considered.
We could maybe do that through mvme_mmap(SLAVE, ...) instead of mvme_mmap(MASTER, ...)
> B) The interrupt handling can certainly match the 4 entries required in the user frontend
> code i.e. Attach, Detach, Enable, Disable.
mvme_ioctl(VME_IOCTL_INTR_ATTACH/DETACH/ENABLE/DISABLE, func())
> I don't understand your argument that the handle should be hidden. In case of multiple
> interfaces, how do you refer to a particular one if not specified?
> The following scheme does require a handle for refering to the proper (device AND window).
Four reasons for that:
1) For the SBS/Bit3, you need a handle for each address mode. So if I have two crates (and I do in our
current experiment), and have to access modules in A16, A24 and A32 mode, I need in total 6 handles.
Sometimes I mix them up by mistake, and wonder why I get bus errors.
2) Most installations will only have single crates (as your VMIC). So if there is only one crate, why
bother with a handle? If you have hundreds of accesses in your code, you save some redundant typing work.
3) A handle is usually kept global, which is considered not good coding style.
4) Our MCSTD and MFBSTD functions also do not use a handle, so people used to those libraries will find it
more natural not to use one.
> 1 ) deviceHandle = vme_init(int devNumber);
> Even though the VMIC doesn't deal with multiple devices,
> the SIS/PCI does and needs to init on a specific PCI card.
> Internally:
> opening of the device (/dev/sisxxxx_1) (ignored in case of VMIC).
> Possible including a mapping to a default VME region of default size with default AM
> (VMIC :16MB, A24). This way in a single call you get a valid handle for full VME access
> in A24 mode. Needs to be elaborate this option. But in principle you need to declare the
> VME region that you want to work on (vme_map).
Just vme_init(); (like fb_init()).
This function takes the first device, opens it, and stores the handle internally. Sets the AM to a default
value, and creates a mapping table which is initially empty or mapped to a default VME region. If one wants
to access a secondary crate, one does a vme_ioctl(VME_IOCTL_CRATE_SET, 2), which opens the secondary crate,
and stores the new handle in the internal table if applicable.
> 2) mapHandle = vme_map(int deviceHandle, int vmeAddress, int am, int size);
> Return a mapHandle specific to a device and window. The am has to be specified.
> What ever are the operation to get there, the mapHandle is a reference to thas setting.
> It could just fill a map structure.
> Internally:
> WindowHandle[deviceHandle] = vme_master_create(BusHandle[deviceHandle], ...
> WindowPtr[WindowHandle] = vme_master_window(BusHandle[deviceHandle]
> , WindowHandle[deviceHandle]...
The best would be if an mvme_read(...) to an unmapped region automatically (internally) triggered a
vme_map() call and stored the WindowHandle and WindowPtr internally. The advantage of this is that code
written for the SIS for example (which does not require this kind of mapping) would work without change
under the VMIC. The disadvantage is that for each mvme_read(), the code has to scan the internal mapping
table to find the proper window handle. Now I don't know how much overhead this would be, but I guess a
single for() loop over a couple of entries in the mapping table is still faster than a microsecond or so,
thus making it negligible in a block transfer.
> 3) vme_setmode(mapHandle, const int DATA_SIZE, const int AM
> , const BOOL ENA_DMA, const BOOL ENA_FIFO);
> Mainly used for the vme_block_read/write. Define for following read the data size and
> am in case of DMA (could use orther DMA mode than window definition for optimal
> transfer).
>
> Predefine the mode of access:
> DATA_SIZE : D8, D16, D32
> AM : A16, A24, A32, etc...
> enaDMA : optional if available.
> enaFIFO : optional for block read for autoincrement source pointer.
>
> Remark:
> PAA- I can imagine this function to be a vme_ioctl (int mapHandle, int *param)
> such that extension of functionality is possible. But by passing cons int
> arguments, the optimizer is able to substitute and reduce the internal code.
Right. mvme_ioctl(VME_IOCTL_AMOD_SET/DSIZE_SET/DMA_SET/AUTO_INCR_SET, ...)
> uint_8Value = vme_readD8 (int mapHandle, uint_64 vmeSrceOffset)
> uint_16Value = vme_readD16 (int mapHandle, uint_64 vmeSrceOffset)
> uint_32Value = vme_readD32 (int mapHandle, uint_64 vmeSrceOffset)
> Single VME read access. In the VMIC case, this access is always through mapping.
> Value = *(WindowPtr[WindowHandle] + vmeSrceOffset)
> or
> Value = *(WindowStruct->WindowPtr[WindowHandle] + vmeSrceOffset)
mvme_read(*dst, DWORD vme_addr, DWORD size); would cover this in a single call. Note that the SIS for
example does not have memory mapping, so if one consistently uses mvme_read(), it will work on both
architectures. Again, this takes some overhead. Consider for example a possible VMIC implementation
mvme_read(char *dst, DWORD vme_addr, DWORD size)
{
   int i;

   /* look for an already mapped window that covers [vme_addr, vme_addr+size) */
   for (i=0 ; table[i].valid ; i++)
   {
      if (table[i].start <= vme_addr && table[i].end >= vme_addr+size)
         break;
   }
   if (!table[i].valid)
   {
      /* no suitable window yet: create and map a new one */
      vme_master_create(...)
      table[i].window_handle = vme_master_window(...)
   }
   if (size == 2)
      mvme_ioctl(VME_IOCTL_DSIZE_SET, D16);
   else if (size == 1)
      mvme_ioctl(VME_IOCTL_DSIZE_SET, D8);
   memcpy(dst, table[i].window_handle + vme_addr - table[i].start, size);
}
Note this is only some rough code, would need more checking etc. But you see that for each access the for()
loop has to be evaluated. Now I know that for the SBS/Bit3 and for the SIS a single VME access takes
~0.5us. So the for() loop could be much faster than that. But one has to try. If one experiment needs the
ultimate speed, it can use the native VMIC API, but then loses the portability. I'm not sure if one needs
the automatic DSIZE_SET, maybe it works without.
> status = vme_writeD8 (int mapHandle, uint_64 vmeSrceOffset, uint_8 Value)
> status = vme_writeD16 (int mapHandle, uint_64 vmeSrceOffset, uint_16 Value)
> status = vme_writeD32 (int mapHandle, uint_64 vmeSrceOffset, uint_32 Value)
> Single VME write access.
Ditto: mvme_write(void *src, DWORD vme_addr, DWORD size);
> nBytes = vme_block_read(mapHandle, char * pDest, uint_64 vmeSrceOffset, int size);
> Multiple read access. Can be done through standard do loop or DMA if available.
> nBytes < 0 : error
> Incremented pDest = (pDest + nBytes); Don't need to pass **pDest for autoincrement.
mvme_ioctl(VME_IOCTL_DMA_SET, TRUE);
n = mvme_read(char *pDest, DWORD vme_addr, DWORD size);
> nBytes = vme_block_write(mapHandle, uint_64 vmeSrceOffset, char *pSrce, int size);
> Multiple write access.
> nBytes < 0 : error
> Incremented pSrce = (pSrce + nBytes); Don't need to pass **pSrce for autoincrement.
Ditto.
> 8) status = vme_unmap(int mapHandle)
> Cleanup internal pointers or structure of given mapHandle only.
mvme_unmap(DWORD vme_addr, DWORD size)
Scans through the internal table to find the handle, then calls vme_unmap(mapHandle);
> 9) status = vme_exit()
> Cleanup deviceHandle and release device.
mvme_exit();
Let me know if this all makes sense to you...
- Stefan |
863 | 13 Feb 2013 | Konstantin Olchanski | Info | Review of github and bitbucket
I have done a review of github and bitbucket as candidates for hosting GIT repositories for collaborative
DAQ-type projects. Here are my impressions.
1. GIT as a software management tool seems to be a reasonable choice for DAQ-type projects. "master"
repositories can be hosted at places like github or self-hosted (in the simplest case, only
http://host/~user web access is required to host a git repository), for each "daq project" aka "experiment"
one would "clone" the master repository, perform any local modifications as required, with full local
version control, and when desired feed the changes back to the master repository as direct commits (git
push), as patches posted to github ("pull requests") or patches emailed to the maintainers (git format-patch).
2. Modern requirements for hosting a DAQ-type project include:
a) code repository (GIT, etc) with reasonably easy user access control (i.e. commit privileges should be
assigned by the project administrators directly, regardless of who is on the payroll at which lab or who is
a registered user of CERN or who is in some LDAP database managed by some IT department
somewhere).
b) a wiki for documentation, with similar user access control requirements.
c) a mailing list, forum or bug tracking system for communication and "community building"
d) an ability to web host large static files (schematics, datasheets, firmware files, etc)
e) reasonable web-based tools for browsing the files, looking at diffs, "cvs annotate/git blame", etc.
3. Both github and bitbucket satisfy most of these requirements in similar ways:
a) GIT repositories:
aa) access using git, ssh and https with password protection. ssh keys can be uploaded to the server,
permitting automatic commits from scripts and cron jobs.
bb) anonymous checkout possible (cannot be disabled)
cc) user management is simple: participants have to self-register, confirm their email address, and the project
administrator then gives them commit access to specific git repositories (and wikis).
dd) for the case of multiple project administrators, one creates "teams" of participants. In this
configuration the repositories are owned by the "team" and all designated "team administrators" have
equal administrative access to the project.
b) Wiki:
aa) both github and bitbucket provide rudimentary wikis, with wiki pages stored in secondary git
repositories (*NOT* as a branch or subdirectory of the main repo).
bb) github supports "markdown" and "mediawiki" syntax
cc) bitbucket supports "markdown" and "creole" syntax (all documentation and examples use the "creole"
syntax).
dd) there does not seem to be any way to set the "project standard" syntax - both wikis have the "new
page" editor default to the "markdown" syntax.
ee) compared to mediawiki (wikipedia, triumf daq wiki) and even plone, both github and bitbucket wikis
lack important features:
1) cannot edit individual sections of a page, only the whole page at once, bad if you have long pages.
2) cannot upload images (and other documents) directly through the web editor/interface. Both wikis
require that you clone the wiki git repository, commit image and other files locally and push the wiki git
repo to the server (hopefully without any collisions); only then can you use the images and documents
in the wiki.
3) there is no "preview" function for images - in mediawiki I can have small size automatically generated
"preview" images on the wiki page, when I click on them I get the full size image. (Even "elog" can do this!)
ff) to be extra helpful, the wiki git repository is invisible to the normal git repository graphical tools for
looking at revisions, branches, diffs, etc. While github has a special web page listing all existing wiki
pages, bitbucket does not have such a page, so you better write down the filenames on a piece of paper.
c) mailing list/forum/bug tracking:
aa) both github and bitbucket implement reasonable bug tracking systems (but in both systems I do not
see any button to export the bug database - all data is stuck inside the hosting provider. Perhaps there is
a "hidden button" somewhere).
bb) bitbucket sends quite reasonable email notifications
cc) github is silent, I do not see any email notifications at all about anything. Maybe github thinks I do not
want to see notices about my own activities, good of it to make such decisions for me.
d) hosting of large files: both git and wiki functions can host arbitrary files (compared to mediawiki only
accepting some file types, i.e. Quartus pof files are rejected).
e) web based tools: thumbs up to both! web interfaces are slick and responsive, easy to use.
Conclusions:
Both github and bitbucket provide similar full-featured git repository hosting, user management and bug
tracking.
Both provide very rudimentary wiki systems. Compared to full featured wikis (i.e. mediawiki), this is like
going back to SCCS for code management (from before RCS, before CVS, before SVN). Disappointing. A
deal breaker if my vote counts.
K.O. |
864 | 14 Feb 2013 | Stefan Ritt | Info | Review of github and bitbucket
Let me add my five cents:
We have been using bitbucket for two months now at PSI, and are very happy with it.
Pros:
- We like the GIT flow model (http://nvie.com/posts/a-successful-git-branching-model/). You can at the same time do hot fixes, have a "distribution
version", and keep a development branch, where you can try new things without compromising the distribution.
- Nice and fast Web interface, especially the "blame" is lightning fast compared to SVN/CVS
- GIT is non-centralized, so your local clone of a repository contains everything. If bitbucket is down/asks for money, you can continue with your local
repository and clone it to some other hosting service, or host it yourself
- SourceTree (http://www.sourcetreeapp.com/) is a nice GUI for Mac lovers.
- Easy user management
- Free for academic use
Con:
- Wiki is limited as KO wrote, so it should not be used as a "full" wiki to replace Plone for example, just to annotate your project
- SVN revision number is gone. This is on purpose since it does not make sense any more if you keep several parallel branches (merging becomes a
nightmare), so one has to use either the (random) commit-ID or start tagging again.
So in conclusion, I would say that it's time to switch MIDAS to GIT. We'll probably do that in July when I am at TRIUMF.
/Stefan |
867 | 01 Apr 2013 | Randolf Pohl | Info | Review of github and bitbucket
And my 2ct:
Go for git!
I've been using git since 2007 or so, after cvs and svn. Git has some killer features which I can't miss any more:
* No central repo. Have all the history with you on the train.
* Branching and merging, with stable branches and feature branches.
Happy hacking while my students do analysis on a stable version.
Or multiple development branches for several features.
And merging really works, including fixing up merge conflicts.
* "git bisect" for finding which commit introduced a (reproducible) bug.
* "gitk --all"
I use git for everything: Software, tex, even (Ooffice) Word documents.
Go for git. :-)
Randolf |
868 | 02 Apr 2013 | Konstantin Olchanski | Info | Review of github and bitbucket
Hi, thanks for your positive feedback. I have been using git for small private projects for a few years now
and I like it. It is similar to the old SCCS days - good version control without having to setup servers,
accounts, doodads, etc.
> * No central repo. Have all the history with you on the train.
> * Branching and merging, with stable branches and feature branches.
> Happy hacking while my students do analysis on a stable version.
> Or multiple development branches for several features.
This is the part that worries me the most. Without a "central" "authoritative" repository,
in just a few quick days, everybody will have their own incompatible version of midas.
I guess I am okay with your private midas diverging from mainstream, but when *I* end up
with 10 different incompatible versions just in *my* repository, can that be good?
> And merging really works, including fixing up merge conflicts.
But somebody still has to do it. With a central repository, the problem takes care of
itself - each developer has to do their own merging - with svn, you cannot commit
to the head without merging the head into your code first. But with git, I can just throw
my changes into some branch out there hoping that somebody else would do the merging.
But guess what, there ain't anybody home but us chickens. We do not have a mad Finn here
to enforce discipline and keep us in shape...
As an example, look at the HADOOP/HDFS code development, they have at least 3 "mainstream"
branches going, none of which has all the features combined together, and each branch has bugs with
the fixes in a different branch. What a way to run a railroad.
> * "git bisect" for finding which commit introduced a (reproducible) bug.
> * "gitk --all"
>
> Go for git. :-)
Absolutely. For me, as soon as I can wrap my head around this business of "who does all the merging".
K.O. |
869 | 02 Apr 2013 | Randolf Pohl | Info | Review of github and bitbucket
Hi Konstantin,
> > * No central repo. Have all the history with you on the train.
> > * Branching and merging, with stable branches and feature branches.
> > Happy hacking while my students do analysis on a stable version.
> > Or multiple development branches for several features.
>
> This is the part that worries me the most. Without a "central" "authoritative" repository,
> in just a few quick days, everybody will have their own incompatible version of midas.
No! This is probably one of the biggest misunderstandings of the git workflow.
You can of course _define_ one central repo: This is the one that you and Stefan decide to be "the source" (as
Linus does for the kernel). It's like the central svn repo: Only Stefan and you can push to it, and everybody
else will pull from it. Why should I pull MIDAS from some obscure source, when your "public" repo is available.
Look at the Linux Kernel: Linus' version is authoritative, even though everybody and his best friend has his
own kernel repo.
So, the main workflow does not change a lot: You collect patches, commit them, and "push" them to the central
repo. All users "pull" from this central repo. This is very much what svn offers.
>
> I guess I am okey with your private midas diverging from mainstream, but when *I* end up
> with 10 different incompatible versions just in *my* repository, can that be good?
See above: _You_ define what the central repo is.
But: I _bet_ you will very soon have 10 versions in your personal repo, because _you choose_ to do so. It's
just SO much easier. The non-linear history with many branches is a _feature_. I can't live without it any more:
Looking at my MIDAS analyzer:
I have a "public" repo in /pub/git/lamb.git. This is where I publish my analyzer versions. All my collaborators
pull from this.
Then I have my personal repo in ~/src/lamb.
This is where I develop. When I think something is ready for the public, I merge this branch into the public repo.
Whenever I start to work on a new feature, I create a branch in my _local_ repo (~/src/lamb). I can fiddle and
play, not affecting anybody else, because it never sees the public repo.
OK, collaborator A finds a bug. I switch to my local copy of the public version, fix the bug, and push the fix
to the public repo. Then I go back to my (local) feature branch, merge the bug fix, and continue hacking.
Only when the feature is ready, I push it to the public repo.
Things get more interesting as you work on several features simultaneously. You have e.g. 3 topic branches:
(a) is nearly ready, and you want a bunch of people to test it.
push branch "feature (a)" to the public repo and tell the people which branch to pull.
(b) is WIP, you hack on it without affecting (a).
(c) is bug fixes which may or may not affect (a) or (b).
And so on.
You will soon discover the beauty of several parallel branches.
Plus, git merges are SO simple that you never think about "how to merge".
>
> > And merging really works, including fixing up merge conflicts.
>
> But somebody still has to do it. With a central repository, the problem takes care of
> itself - each developer has to do their own merging - with svn, you cannot commit
> to the head without merging the head into your code first. But with git, I can just throw
> my changes int some branch out there hoping that somebody else would do the merging.
> But guess what, there aint anybody home but us chickens. We do not have a mad finn here
> to enforce discipline and keep us in shape...
See above: You will have the exact same workflow in git, if you like.
> As an example, look at the HADOOP/HDFS code development, they have at least 3 "mainstream"
> branches going, neither has all the features combined together and each branch has bugs with
> the fixes in a different branch. What a way to run a railroad.
I haven't look at this. All I can say: Branches are one of the best features.
>
> > * "git bisect" for finding which commit introduced a (reproducible) bug.
> > * "gitk --all"
> >
> > Go for git. :-)
>
> Absolutely. For me, as soon as I can wrap my head around this business of "who does all the merging".
Easy: YOU do it.
Keep going as in svn: Collect patches, and send them out.
And then, try "git checkout -b my_first_branch", hack, hack, hack,
"git merge master".
Best,
Randolf
>
> K.O. |
870 | 03 Apr 2013 | Stefan Ritt | Info | Review of github and bitbucket
> * "git bisect" for finding which commit introduced a (reproducible) bug.
I did not know this command, so I read about it. This IS WONDERFUL! I had once (actually with MSCB) the case that a bug was introduced in the last 100
revisions, but I did not know in which. So I checked out -1, -2, -3 revisions, then thought a bit, then tried -99, -98, then had the bright idea to try -50, then
slowly converged. Later I realised that I should have done a binary search, like -50, if ok try -25, if bad try -75, and so on to iteratively find the offending
commit. Finding that there is a command in git which does this automatically is great news.
Stefan |
871 | 03 Apr 2013 | Randolf Pohl | Info | Review of github and bitbucket
> > * "git bisect" for finding which commit introduced a (reproducible) bug.
>
> I did not know this command, so I read about it. This IS WONDERFUL! I had once (actually with MSCB) the case that a bug was introduced i the last 100
> revisions, but I did not know in which. So I checked out -1, -2, -3 revisions, then thought a bit, then tried -99, -98, then had the bright idea to try -50, then
> slowly converged. Later I realised that I should have done a binary search, like -50, if ok try -25, if bad try -37, and so on to iteratively find the offending
> commit. Finding that there is a command it git which does this automatically is great news.
even more so considering the nonlinear history (due to branching) in a regular git repo. |
642 | 09 Sep 2009 | Jimmy Ngai | Forum | Retrieve start/stop time in offline
Hi All,
I set "/Analyzer/ODB Load" to true and analyzed a run in offline mode. After
that, I found the start time and stop time in /RunInfo did not reflect the
correct times as seen online. How do I retrieve the correct start/stop time from
the ODB in offline mode?
Thanks!
Jimmy |
643 | 10 Sep 2009 | Stefan Ritt | Forum | Retrieve start/stop time in offline
> I set "/Analyzer/ODB Load" to true and analyzed a run in offline mode. After
> that, I found the start time and stop time in /RunInfo did not reflect the
> correct time as in online. How do I retrieve the correct start/stop time from
> the ODB in offline mode?
Most trees in the ODB are not loaded with "/Analyzer/ODB Load", since you might
want to have the start/stop time of the offline analysis there for example
(although I agree that the online start/stop time is more interesting). So you
have several options:
- modify mana.c. There is a function odb_load(), which first locks the whole ODB
and then unprotects "/Experiment/Run Parameters" for example. Just add three more
lines for "/Runinfo".
- write a run summary when running online. After each run, write a summary with
start/stop time, number of events, settings etc. into some file. I usually do this
in the EOR routine of the online analyzer and write directly into a CSV file which
I can import directly into Excel. There I can filter on certain parameters, for example
show me all runs with more than x events where setting y was 10 (a rough sketch of this
approach follows below).
- extract the ODB from the .mid file with "odbhist -e filename.mid" and look into
that.
- The time stamp of each event is in UNIX time form (seconds since 1.1.1970), so
you know exactly when each event was recorded.
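As an illustration of the run-summary approach, here is a rough sketch of what such an end-of-run hook could do (assuming midas.h and stdio.h are included; the /Runinfo key names are the standard ones, while the function name, hook signature and file name are just placeholders):
   void write_run_summary(INT run_number)   /* call this from your EOR routine */
   {
      HNDLE hDB;
      DWORD start_time = 0, stop_time = 0;
      INT size;
      FILE *f;

      cm_get_experiment_database(&hDB, NULL);
      size = sizeof(start_time);
      db_get_value(hDB, 0, "/Runinfo/Start time binary", &start_time, &size, TID_DWORD, FALSE);
      size = sizeof(stop_time);
      db_get_value(hDB, 0, "/Runinfo/Stop time binary", &stop_time, &size, TID_DWORD, FALSE);

      f = fopen("run_summary.csv", "a");     /* one line per run, easy to import into Excel */
      if (f) {
         fprintf(f, "%d,%u,%u\n", run_number, start_time, stop_time);
         fclose(f);
      }
   }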
Hope one of these helps...
- Stefan |
1536 | 29 May 2019 | Suzannah Daviel | Suggestion | Replacing MIDAS status page with custom status page
Replacing the MIDAS status page with a custom status page documented at
https://midas.triumf.ca/MidasWiki/index.php/Custom_Page_Features#Replace_Status_Page_by_a_Custom_page
does not appear to be supported in the current MIDAS version.
As two of my experiments use this feature, may I suggest its reinstatement?
Suzannah |
1537 | 31 May 2019 | Stefan Ritt | Suggestion | Replacing MIDAS status page with custom status page
> Replacing the MIDAS status page with a custom status page documented at
>
> https://midas.triumf.ca/MidasWiki/index.php/Custom_Page_Features#Replace_Status_Page_by_a_Custom_page
>
> does not appear to be supported in the current MIDAS version.
>
> As two of my experiments use this feature may I suggest its reinstatement?
It still works, but is actually simpler. The status page is now a "dynamic" page, meaning mhttpd just serves an html file to
the browser and everything is done in JavaScript there. The file for the status page is under midas/resources/status.html.
You can easily change that file or replace it with a completely different (custom) file without having to change the ODB.
There is only one potential problem. All midas html pages now have a certain structure, as written in
https://midas.triumf.ca/MidasWiki/index.php/Custom_Page#How_to_use_the_standard_MIDAS_navigation_bars_on_your_custom_page
So if you have an existing custom status page, you might have to change it slightly to include the standard elements
"mheader" and "msidenav". But this allows you to have the standard menu on your custom page and alerts displayed at the
top row of your custom page (which was not possible before).
Once this works for you, it would be nice to adjust the documentation to reflect this new way.
Stefan |
1250 | 16 Mar 2017 | Konstantin Olchanski | Bug Report | Replaced with /experiment/menu, mhttpd - /Experiment/Menu Buttons - git-sha a350e8db11
> > I think there sneaked in a little bug in the mhttpd: when starting an experiment
> > from scratch and starting the mhttpd, the Menu Buttons are missing
Ok, the original problem was a small bug in the javascript code for the menu buttons (fixed now),
but I was moved to implement something I wanted to do for a long time.
The menu configuration is now done through a subdirectory /experiment/menu. Each entry corresponds to
one menu button. Set to "y" to show it, set to "n" to hide it.
Buttons are displayed in the same order as they are in ODB, to change the order of buttons,
change their order in ODB (odbedit command "move").
This fixes the long standing problem with adding new midas pages - they were not automatically added to
the existing "menu buttons" lists. So for example when the "chat" page was added, I did not know about it
for a long time (and some people still do not know about its existence) because it was not included in
my "/experiment/menu buttons" list in all my already existing experiments. When the "start" and
"transition" pages were added, probably nobody knows that they exist.
Now new buttons for new pages are automatically added to the list (via mhttpd.cxx::init_menu_buttons()),
and users have the option to hide them by setting their values to "n".
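For illustration, the resulting ODB tree then looks something like this (the button names below are only examples; the actual list is whatever init_menu_buttons() creates in your MIDAS version):
   /Experiment/Menu
      Status      y
      ODB         y
      Messages    y
      Chat        n     <-- set to "n" to hide this button
      Help        y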
K.O. |
1251 | 16 Mar 2017 | Thomas Lindner | Bug Report | Replaced with /experiment/menu, mhttpd - /Experiment/Menu Buttons - git-sha a350e8db11
> > > I think there sneaked in a little bug in the mhttpd: when starting an experiment
> > > from scratch and starting the mhttpd, the Menu Buttons are missing
>
> Ok, the original problem with a small bug in the javascript code for the menu buttons (fixed now),
> but I was moved to implement something I wanted to do for a long time.
>
Is this change backwards compatible with an old ODB? I.e., if I upgrade MIDAS, will it notice that I have the old-style key "/Experiment/Menu Buttons"
and replace it with equivalently set keys in /Experiment/Menu? Or will it just continue to use the old-style ODB key? |
1253 | 28 Mar 2017 | Konstantin Olchanski | Bug Report | Replaced with /experiment/menu, mhttpd - /Experiment/Menu Buttons - git-sha a350e8db11
> > > > I think there sneaked in a little bug in the mhttpd: when starting an experiment
> > > > from scratch and starting the mhttpd, the Menu Buttons are missing
> >
> > Ok, the original problem with a small bug in the javascript code for the menu buttons (fixed now),
> > but I was moved to implement something I wanted to do for a long time.
> >
>
> Is this change back-wards compatible with an old ODB? Ie, if I upgrade MIDAS, will it notice that I have the old-style key "/Experiment/Menu Buttons"
> and replace it equivalently set keys in /Experiment/Menu? Or will it just continue to use the old-style ODB key?
I am trying to keep some compatibility between the web pages and mhttpd. I think in most cases, old mhttpd should continue to work
against new web pages (assuming matching mhttpd.js & co). But old web pages would probably break against new mhttpd, mostly due
to the rapid pace of their development.
Anyhow, the midas web page forms menu buttons in this order:
/Experiment/Menu, if it does not exist, then:
/Experiment/menu buttons, if it does not exist, then
built-in list of menu buttons, which includes all possible buttons, hardcoded in mhttpd.js.
In cooperation with mhttpd: new mhttpd
- will automatically create the tree /experiment/menu with all buttons disabled
- will complain about the existence of /experiment/menu buttons and instruct the user to delete it.
So to answer the question:
after git pull, make, restart mhttpd, you will see all possible menu buttons and you will have to go
into the odb editor to disable the buttons you do not want to see (e.g. the mscb button).
I did it this way on purpose, to give old-time midas users an opportunity to discover
some of the newly added buttons and pages, like the "chat" page, or the "example" page. If I migrated
the existing "menu buttons" verbatim, to the new tree, I would not even today know
that the "chat" page exists (I do not think it was ever announced or described on this forum
or anywhere in the documentation).
K.O. |
1341 | 19 Feb 2018 | Thomas Lindner | Suggestion | Rename sequencer program to msequencer
Hi Folks,
In last year's updates to MIDAS, the MIDAS sequencer has been broken out as a
separate program (rather than running as part of mhttpd). We hope that this
change will make the sequencer operation more stable.
Before anyone gets too used to using the new sequencer program, I would like to
rename it. Currently the program is called 'sequencer'; I would like to rename
it 'msequencer', to make it consistent with most other MIDAS programs. If you
object to making this change, please say so in the next two weeks.
Documentation on the MIDAS sequencer can be found on the wiki:
https://midas.triumf.ca/MidasWiki/index.php/Sequencer
Note that there are still some tweaks that need to be made to the sequencer
webpage and mhttpd in order to handle this new sequencer program.
Cheers,
Thomas |