tensor_db get_aggregated_tensor duplicates code, returns tuple in corner case #1087
Comments
@msheller, this is intended behaviour when the aggregator calls it. In my opinion, instead of throwing an error that causes the experiment to stop abruptly, we should handle this scenario gracefully by stating the reason and marking the experiment as unsuccessful; or we can still allow the experiment to continue, for which the changes are done in #1121.
Instead of maintaining internal call state and/or raising exceptions, the function should be idempotent, returning the same result on every call, unless there are situations where calling a function more than once violates a state-machine invariant, which I do not believe to be the case here.
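The idempotence described above can be illustrated with a small memoized function (a generic sketch using the standard library, not OpenFL code; `aggregate` is a hypothetical name):

```python
from functools import lru_cache


@lru_cache(maxsize=None)
def aggregate(values):
    # Computed once for a given argument; every later call with the
    # same argument returns the identical cached result, so repeated
    # calls are harmless.
    return sum(values) / len(values)


first = aggregate((1.0, 3.0))
second = aggregate((1.0, 3.0))
print(first == second)  # True: same result on every call
```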
Yes, the changes in #1121 will make it idempotent, as suggested in my previous comment.
Describe the bug
Calling get_aggregated_tensor on a tensor that has already been aggregated returns a tuple of (tensor, {}).
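A minimal sketch reproducing the inconsistency (a hypothetical, simplified `TensorDB` with plain lists as tensors, not OpenFL's actual implementation):

```python
class TensorDB:
    """Simplified sketch of a tensor store (hypothetical, not OpenFL's class)."""

    def __init__(self):
        self._cache = {}

    def cache_tensor(self, key, tensor):
        self._cache[key] = tensor

    def get_tensor_from_cache(self, key):
        # Returns the bare tensor, or None if missing.
        return self._cache.get(key)

    def get_aggregated_tensor(self, key, collaborator_tensors):
        # Corner case: if the tensor was already aggregated, this returns
        # a (tensor, dict) tuple instead of the bare tensor.
        if key in self._cache:
            return self._cache[key], {}
        # Element-wise mean of the collaborators' tensors.
        agg = [sum(vals) / len(vals) for vals in zip(*collaborator_tensors)]
        self._cache[key] = agg
        return agg


db = TensorDB()
first = db.get_aggregated_tensor("w", [[1.0, 1.0], [0.0, 0.0]])
second = db.get_aggregated_tensor("w", [[1.0, 1.0], [0.0, 0.0]])
print(type(first).__name__)   # list
print(type(second).__name__)  # tuple
```

The second call returns a different type than the first, so callers have to special-case the already-aggregated path.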
Expected behavior
I would expect get_aggregated_tensor to call get_tensor_from_cache first and aggregate only if no tensor is found, returning the result in the same form as get_tensor_from_cache (which doesn't return a tuple).
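The expected behavior could look like this (a hypothetical sketch with a dict-backed cache and list tensors, not the actual OpenFL code):

```python
class TensorDB:
    """Hypothetical sketch of the suggested idempotent behavior."""

    def __init__(self):
        self._cache = {}

    def get_tensor_from_cache(self, key):
        # Returns the bare tensor, or None if missing.
        return self._cache.get(key)

    def get_aggregated_tensor(self, key, collaborator_tensors):
        # Check the cache first; if the tensor was already aggregated,
        # return it in the same bare form as get_tensor_from_cache.
        cached = self.get_tensor_from_cache(key)
        if cached is not None:
            return cached
        agg = [sum(vals) / len(vals) for vals in zip(*collaborator_tensors)]
        self._cache[key] = agg
        return agg


db = TensorDB()
a = db.get_aggregated_tensor("w", [[2.0, 4.0], [0.0, 0.0]])
b = db.get_aggregated_tensor("w", [[2.0, 4.0], [0.0, 0.0]])
print(a == b, type(a) is type(b))  # True True: same result, same type
```

Every call returns the same value in the same form, so the function is idempotent and no tuple special case leaks to callers.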
Additional context
Unclear why we use the method names "get_tensor_from_cache" and "cache_tensor" instead of "get_tensor" and "add_tensor". Legacy names, maybe?