How should I use global_step_from_engine #3257
Hi @H4dr1en, you can take a look at this example in the docs: https://pytorch.org/ignite/v0.5.0.post2/generated/ignite.handlers.tensorboard_logger.html#ignite.handlers.tensorboard_logger.OutputHandler

I agree it may be helpful to update the docs of the method: https://pytorch.org/ignite/v0.5.0.post2/generated/ignite.handlers.global_step_from_engine.html
Hi @vfdev-5, it is still not clear to me how I can use it. Currently I am doing:

With `global_step_from_engine`, if I understand correctly, it would become:

Thanks in advance 👍
In this case, you can directly pass `trainer` to the handler:

```python
def draw_confidences(evaluator, trainer):
    # .... draw figure
    # Epoch comes from the trainer passed in as an extra argument,
    # so there is no need for global_step_transform here
    epoch = trainer.state.epoch
    plt.savefig(f"confidences_iter_{epoch}")

evaluator.add_event_handler(Events.EPOCHED_COMPLETED if False else Events.EPOCH_COMPLETED, draw_confidences, trainer)
```

Let me know if this does (not) work for your use-case.
I see - yes, that would work 👍 What's the use case of `global_step_from_engine` then? If I understood correctly, one can always pass the engine (in this case, the trainer) to the handler and later read its epoch property, right?
Typical usage is with loggers like the TensorBoard logger (https://pytorch.org/ignite/v0.5.0.post2/generated/ignite.handlers.tensorboard_logger.html#ignite.handlers.tensorboard_logger.OutputHandler).

Yes, that's correct. In your case you manually crafted the handler, so you can either use a reference to the trainer or have a function.
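To make the pattern concrete: the transform returned by `global_step_from_engine` ignores the engine it is later called with and instead reads the step counter from the engine captured at creation time (typically the trainer). Below is a minimal self-contained sketch of that idea using mock `Engine`/`State` classes, not Ignite's actual implementation:

```python
class State:
    """Tiny stand-in for ignite's engine state."""
    def __init__(self, epoch=0):
        self.epoch = epoch

class Engine:
    """Tiny stand-in for ignite.engine.Engine."""
    def __init__(self):
        self.state = State()

def global_step_from_engine(engine):
    # Returns a transform that a logger calls as transform(local_engine, event_name),
    # but which looks up the step on the captured `engine` instead of `local_engine`.
    def transform(local_engine, event_name):
        return engine.state.epoch
    return transform

trainer = Engine()
evaluator = Engine()

trainer.state.epoch = 7    # training has run 7 epochs
evaluator.state.epoch = 1  # each validation run restarts at 1

step_fn = global_step_from_engine(trainer)
print(step_fn(evaluator, "EPOCH_COMPLETED"))  # -> 7, not 1
```

This is why loggers expose a `global_step_transform` argument: the handler is attached to the evaluator, but the x-axis of the logged curves should advance with the trainer.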
❓ Questions/Help/Support

Hi, I am currently logging plots during the validation step and I use `evaluator.state.epoch` to identify the epoch. This `evaluator.state.epoch` is always 1, but I obviously want it to be the same as `trainer.state.epoch`. I am looking for a simple fix. I saw the function `global_step_from_engine`, but it's unclear to me from the function docstring and its implementation how I am supposed to use it. So my question is: can I use `global_step_from_engine` to fix my issue, and if yes, how?

Thank you!
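The symptom described above can be reproduced with a hypothetical mock (not Ignite code): every call to `run()` starts a fresh state, so an evaluator that validates for a single epoch always reports epoch 1, no matter how far training has progressed:

```python
class State:
    def __init__(self):
        self.epoch = 0

class Engine:
    """Mock engine: state is reset on every run(), like a fresh validation pass."""
    def __init__(self):
        self.state = State()

    def run(self, max_epochs=1):
        self.state = State()  # fresh state each run
        for _ in range(max_epochs):
            self.state.epoch += 1

trainer = Engine()
evaluator = Engine()

trainer.run(max_epochs=5)
evaluator.run(max_epochs=1)   # called once per validation

print(trainer.state.epoch)    # -> 5
print(evaluator.state.epoch)  # -> 1, always, regardless of training progress
```

Hence the two fixes discussed in the thread: pass the trainer to the handler and read `trainer.state.epoch` directly, or give a logger `global_step_transform=global_step_from_engine(trainer)`.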