Enable continuing UQ and SA after loading model results #12

Open · 5 tasks
simetenn opened this issue Mar 5, 2018 · 3 comments
Comments

@simetenn (Owner) commented Mar 5, 2018

Enable continuing UQ and SA after loading model results from file.
This requires the following additional information to be saved:

  • Parameters for each model evaluation
  • Method, with all parameters used (make machine readable unlike the current method string)
  • info dictionary
  • Model information (such as model.ignore)
  • Feature information
@simetenn (Owner, Author) commented Mar 21, 2018

Additionally, couple this to the option of giving the user the parameter sets at which to evaluate the model, and then letting the user give a list of model evaluations back. This would enable easy use of HPC resources by letting the user run the model manually, as sketched below.
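
A rough, purely hypothetical sketch of what this could look like from the user's side (none of the function names below are existing uncertainpy API; the parameter ranges and the stand-in model are made up for illustration):

import numpy as np

# Purely hypothetical sketch of the proposed workflow, not existing uncertainpy API:
# 1) uncertainpy hands out the parameter sets to evaluate,
# 2) the user evaluates the model manually (e.g. as separate HPC jobs),
# 3) the list of evaluations is handed back so UQ and SA can continue.
def draw_parameter_sets(n_samples, seed=42):
    rng = np.random.default_rng(seed)
    return rng.uniform(0.5, 1.5, size=(n_samples, 2))  # two uncertain parameters

def run_model_manually(parameter_set):
    parameter_1, parameter_2 = parameter_set
    return parameter_1 * parameter_2  # stand-in for a real (external) model run

parameter_sets = draw_parameter_sets(n_samples=10)
evaluations = [run_model_manually(ps) for ps in parameter_sets]
# "evaluations" would then be handed back to uncertainpy to continue the UQ and SA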

@fangqx commented Dec 18, 2020

Hello simetenn,
I am trying to use your tool to quantify model uncertainty associated with model inputs. My model is not implemented in Python and takes some time to finish a run and produce results, and my problem is that the model run is always stopped by the main process (e.g., model = un.Model()). Could you give me some suggestions on how to run my external model first and then retrieve the results? Thank you very much!

@simetenn (Owner, Author)

Hi fangqx,

If I understand your problem correctly, you do not necessarily need to be able to resume UQ and SA after running the model. The model itself does not need to be implemented in Python. Any external model can be used, as long as you can control the model parameters and retrieve the simulation output via Python. One way of doing this is to write your parameters to file, call the external model with os.system(), subprocess.run(), or something similar (using the parameter file in those runs), have the model write its results to file, and then read the results back into Python:

import subprocess
import numpy as np

def external_model(parameter_1, parameter_2):
    # Write the parameters to a file, or pass them along with the external model command
    # (the file and executable names below are placeholders for your own)
    with open("parameters.txt", "w") as f:
        f.write(f"{parameter_1} {parameter_2}\n")

    # Call the external model using subprocess.run() (os.system() also works)
    subprocess.run(["./my_external_model", "parameters.txt"], check=True)
    # Read the results written by the external model back into Python
    time, values = np.loadtxt("results.txt", unpack=True)
    return time, values  # an optional info object can also be returned
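
A minimal sketch of how such a function could then be plugged into uncertainpy; the labels, parameter names, and chaospy distributions below are placeholder assumptions:

import uncertainpy as un
import chaospy as cp

# Wrap the external model run function; the labels are just placeholders
model = un.Model(run=external_model, labels=["Time", "Value"])

# Uncertain parameters given as chaospy distributions (placeholder ranges)
parameters = {"parameter_1": cp.Uniform(0.5, 1.5),
              "parameter_2": cp.Uniform(0.5, 1.5)}

# Set up and run the uncertainty quantification and sensitivity analysis
UQ = un.UncertaintyQuantification(model=model, parameters=parameters)
data = UQ.quantify()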
