Option to run as sequence-to-one and sequence-to-sequence #13
Comments
I think the two options are: 1) change the way the forward pass is made, yes, but not through NH; do that through the BMI, and 2) change the way the model is trained with NH to match the way we have implemented it. I think option 1 is much more reasonable for the short term. In the long term, the thing to do is develop a BMI directly in the NH code, so there is no potential for conflict between training and forward ngen predictions.
@jmframe I'm digging up old issues today. What are the advantages of resetting the state space at each time step? It seems unlikely we'd want to multiply the current execution time by the sequence length unless it produced a dramatic performance increase (e.g., better streamflow prediction). At some point, given time and resources, we could retrain the LSTM to eliminate the state resetting, per your second recommendation.
Well, the advantage of resetting the state space and passing in the full sequence is that this is how the model is trained, and it is what the model's weights are trained to respond to. But I did quite a bit of experimenting and didn't find that the results were much different. The issue of computation and time constraints is an important one. It would be good to evaluate the performance of the large runs to see what this means in terms of extra costs. There is a funded CIROH project to do all this, which was supposed to start in 2022, but you know the story...
That's good to know @jmframe. I'll leave this issue open.
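For concreteness, here is a minimal sketch of what option 1 above could look like: a BMI-style wrapper that keeps a rolling window of forcings and re-runs the LSTM over the whole window, with zeroed state, on every `update()` call. This assumes a batch-first PyTorch LSTM like the ones NeuralHydrology trains; the class name, method names, and sizes are hypothetical and not part of any existing BMI.

```python
from collections import deque

import torch
import torch.nn as nn


class LstmBmiWrapper:
    """Hypothetical BMI-style wrapper: re-runs the full lookback window
    with a fresh (zero) state on every update() call (option 1 above)."""

    def __init__(self, lstm: nn.LSTM, head: nn.Linear, seq_len: int):
        self.lstm = lstm          # assumed built with batch_first=True
        self.head = head          # maps the last hidden state to streamflow
        self.buffer = deque(maxlen=seq_len)  # rolling window of forcings

    def update(self, forcing: torch.Tensor) -> torch.Tensor:
        """Advance one time step; `forcing` has shape (n_forcings,)."""
        self.buffer.append(forcing)
        x = torch.stack(list(self.buffer)).unsqueeze(0)  # (1, t, n_forcings)
        # No (h0, c0) is passed, so PyTorch zeros them: the state space is
        # reset every time step, matching how the model was trained.
        out, _ = self.lstm(x)
        # Sequence-to-one: predict from the final hidden state only.
        return self.head(out[:, -1, :])


# Usage sketch with made-up sizes: 3 forcings, 64 hidden units, 365-step window.
lstm = nn.LSTM(input_size=3, hidden_size=64, batch_first=True)
head = nn.Linear(64, 1)
model = LstmBmiWrapper(lstm, head, seq_len=365)
flow = model.update(torch.randn(3))  # one streamflow prediction per call
```

Because the window is replayed in full at every step, each `update()` costs roughly `seq_len` LSTM steps instead of one, which is the execution-time multiplier raised in the discussion above.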
Add an option to choose between the current sequence-to-sequence forward pass (persistent state, one step at a time) and a sequence-to-one forward pass (state reset plus a full-sequence run at each time step), matching how NeuralHydrology trains the LSTM.
Current behavior
The model state space persists across time steps for the entire simulation.
Expected behavior
NeuralHydrology trains the LSTM to reset the state space and run over a full sequence length of inputs at each time step.
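A minimal sketch contrasting the two behaviors, again assuming a batch-first PyTorch LSTM; all names, sizes, and the random forcing record are illustrative only, not ngen or NeuralHydrology code.

```python
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=3, hidden_size=64, batch_first=True)
forcings = torch.randn(1000, 3)  # placeholder forcing record

# Current behavior: state persists; one cheap LSTM step per time step.
state = None
for t in range(len(forcings)):
    x_t = forcings[t].view(1, 1, -1)   # (batch=1, seq=1, n_forcings)
    out, state = lstm(x_t, state)      # carry (h, c) into the next step

# Expected behavior: reset state and replay the full lookback window at
# each step, as in training; roughly seq_len times more LSTM work per step.
seq_len = 365
for t in range(seq_len, len(forcings) + 1):
    window = forcings[t - seq_len:t].unsqueeze(0)  # (1, seq_len, n_forcings)
    out, _ = lstm(window)                          # zero initial state
```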
Steps to replicate behavior (include URLs)
Either
Screenshots