As an Infrastructure admin, I want resource management to be able to clean up resources on systems, not just in the DSS, so that the infrastructure is cleaned up correctly.
Background
Currently, resource management is unable to access systems as part of its execution. For example, we have a custom manager that creates datasets on the z/OS machine per test. These are recorded in the DSS and are cleaned up if the test runs cleanly. If the pod fails within the ecosystem for any reason, no clean-up is done and these datasets are left behind. The resource manager runs in the background and can clean up the DSS, but there is no way to access the z/OS machine itself to perform the clean-up.
A couple of our other managers have the same issue, leaving jobs running that use pooled resource IDs. If the same ID is later selected by another test, that test will fail because the resource ID is already in use (although not in use according to the DSS).
I've been experimenting to see whether it would be possible to get this working from within our resource manager, and with a lot of code it is just about possible to start all the bundles up. However, there are a number of issues with this:
Currently the RAS isn't set up correctly. Even when a RAS setting is passed in, it isn't set up properly, because everything assumes there is a Galasa test involved (including run names). (See below.)
All managers expect to be able to find @ annotations for the things they want to create, but in the resource management environment there are no annotations because there is no test class.
Loading the bundles for each resource manager requires a lot of code; it would be better to have all of this within the resource management code instead, so that when required we attempt to start bundles just as happens with the main test bundles.
RAS
The RAS one is interesting: after a bit of experimenting I've managed to get resource management to have a RAS by setting the runName (note this was running locally with a Directory RAS; see DirectoryArchiveStoreService). Looking at the CouchDB implementation, I don't think it would have the same issues. However, I am worried about what setting this will mean in the long term, so more experimenting will be needed.
Tasks
This is an attempt at breaking things down into manageable parts. There are some fundamental changes within the framework that I think are required (note that this might not be everything that is required!):
Update FrameworkInitialisation.java to take an enum describing the type of framework to initialise.
Update FrameworkInitialisation.java to use the enum to decide whether to set the testRunName and what to set it to. The default would be that no run name is set; if Test, keep the current behaviour; if ResourceManagement, set it to "resourceManagement". We may need to use this enum in other places to handle this better.
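To make the intent of these two tasks concrete, here is a minimal, self-contained sketch of how the enum and the run-name decision might look. The names `InitialisationType`, `selectRunName` and the surrounding class are assumptions for illustration only, not the real framework code:

```java
// Hypothetical sketch only -- the real change would live in
// FrameworkInitialisation.java inside the Galasa framework.
public class FrameworkInitialisationSketch {

    // Enum describing which kind of framework is being initialised
    public enum InitialisationType {
        TEST,                // current behaviour: the generated run name is used
        RESOURCE_MANAGEMENT, // fixed run name "resourceManagement"
        OTHER                // default: no run name is set
    }

    // Decide the test run name based on the initialisation type
    public static String selectRunName(InitialisationType type, String generatedRunName) {
        switch (type) {
            case TEST:
                return generatedRunName;     // existing behaviour for tests
            case RESOURCE_MANAGEMENT:
                return "resourceManagement"; // fixed name so the RAS can be set up
            default:
                return null;                 // no run name is set
        }
    }

    public static void main(String[] args) {
        System.out.println(selectRunName(InitialisationType.TEST, "U1234"));
        System.out.println(selectRunName(InitialisationType.RESOURCE_MANAGEMENT, "U1234"));
    }
}
```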
Implement an AbstractResourceManager that resource managers could extend (this would contain the code for starting up bundles, similar to how AbstractManager works). This would allow calls like youAreDependent to work as they do with AbstractManager.
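A rough sketch of how such a base class and its youAreDependent call might fit together, under the assumption that it mirrors AbstractManager's dependency mechanism (all class names here are illustrative stand-ins, not the real implementation):

```java
import java.util.ArrayList;
import java.util.List;

public class AbstractResourceManagerSketch {

    // Base class resource managers could extend, mirroring AbstractManager.
    // In the real framework this is also where bundle start-up code would live.
    public static abstract class AbstractResourceManager {
        private final List<AbstractResourceManager> dependents = new ArrayList<>();

        // Another resource manager declares that it depends on this one,
        // so this manager is initialised before its dependents
        public void youAreDependent(AbstractResourceManager dependent) {
            dependents.add(dependent);
        }

        public List<AbstractResourceManager> getDependents() {
            return dependents;
        }

        public abstract String getName();
    }

    // Illustrative concrete resource managers
    static class ZosDatasetResourceManager extends AbstractResourceManager {
        @Override public String getName() { return "zosDataset"; }
    }

    static class CustomResourceManager extends AbstractResourceManager {
        @Override public String getName() { return "custom"; }
    }

    public static void main(String[] args) {
        AbstractResourceManager zos = new ZosDatasetResourceManager();
        AbstractResourceManager custom = new CustomResourceManager();
        custom.youAreDependent(zos); // zos must be available for custom's clean-up
        System.out.println(custom.getDependents().get(0).getName());
    }
}
```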
Add a new SPI interface to the z/OS Manager called IZosManagerResourceSpi, which ZosManagerImpl implements, allowing a calling manager to get to the IZosImage to perform actions. For example, something like:
```java
public interface IZosManagerResourceSpi {
    public IZosImage getImageResource(String imageId) throws ZosManagerException;
}
```
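To show how a resource monitor might consume this SPI, here is a self-contained sketch. IZosImage and ZosManagerException are minimal stand-ins for the real z/OS Manager types, and the stub implementation is purely illustrative:

```java
public class ZosResourceSpiSketch {

    // Minimal stand-in for the real z/OS Manager exception type
    static class ZosManagerException extends Exception {
        ZosManagerException(String msg) { super(msg); }
    }

    // Minimal stand-in for the real IZosImage; in practice a resource
    // monitor would use the image to delete leftover datasets, cancel jobs, etc.
    interface IZosImage {
        String getImageID();
    }

    // The proposed SPI from the task above
    interface IZosManagerResourceSpi {
        IZosImage getImageResource(String imageId) throws ZosManagerException;
    }

    // Illustrative stub of what ZosManagerImpl might provide
    static class ZosManagerImplStub implements IZosManagerResourceSpi {
        @Override
        public IZosImage getImageResource(String imageId) throws ZosManagerException {
            if (imageId == null) {
                throw new ZosManagerException("no image id supplied");
            }
            return () -> imageId; // IZosImage is a functional interface here
        }
    }

    public static void main(String[] args) throws ZosManagerException {
        IZosManagerResourceSpi spi = new ZosManagerImplStub();
        IZosImage image = spi.getImageResource("MV2C");
        System.out.println(image.getImageID());
    }
}
```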
I've been thinking about this more overnight, and I wonder about the AbstractResourceManager idea. At the moment we look for classes implementing a specific interface; I wonder whether it would be better to change this to an annotation and update all the resource managers to use that annotation. This is a much more disruptive change than I had hoped to make, because it will mean changes far and wide. However, I think it would make more sense, and it brings things more in line with how a standard manager works.
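For illustration, annotation-based discovery could look something like the sketch below. The annotation name `@GalasaResourceManager` is an assumption, not the real framework annotation; the real framework would scan bundle classes rather than hard-coded ones:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

public class AnnotationDiscoverySketch {

    // Hypothetical marker annotation, replacing the interface-based lookup
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.TYPE)
    public @interface GalasaResourceManager { }

    @GalasaResourceManager
    static class DatasetCleanupMonitor { }

    static class NotAMonitor { }

    // The framework would keep only classes carrying the annotation
    public static boolean isResourceManager(Class<?> clazz) {
        return clazz.isAnnotationPresent(GalasaResourceManager.class);
    }

    public static void main(String[] args) {
        System.out.println(isResourceManager(DatasetCleanupMonitor.class));
        System.out.println(isResourceManager(NotAMonitor.class));
    }
}
```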