Update __init__.py #146
base: main

Conversation
add in second objective for sphere 2d problem
Having loaded the environment into Badger, I don't believe that the multi-objective version of sphere_2d as it exists now is valid. Here is the error trace I receive:
Good catch @MitchellAV, I updated the PR. The vocs error is interesting; could you provide the full stack trace?
@roussel-ryan Here is the full trace. I am able to run optimizations within badger for other environments, but seem to be having issues with the mobo provided.
One consideration is that turbo can't be used with mobo; maybe you can delete the turbo line in the generator text box?
But clearly this should be validated in xopt properly.
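As a sketch of what such validation could look like (the class and field names below are hypothetical illustrations, not xopt's actual API), a pydantic validator can reject the incompatible turbo + mobo combination at construction time:

```python
from typing import Optional

from pydantic import BaseModel, model_validator


class GeneratorOptions(BaseModel):
    # hypothetical fields, for illustration only
    name: str
    turbo_controller: Optional[str] = None

    @model_validator(mode="after")
    def check_turbo_compatibility(self):
        # turbo-style trust regions assume a single objective,
        # so refuse the combination up front
        if self.name == "mobo" and self.turbo_controller is not None:
            raise ValueError("turbo controllers cannot be used with mobo")
        return self
```

With a check like this, the misconfiguration fails loudly when the generator options are built, instead of surfacing as an opaque error mid-run.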
_variables = {
    "x0": 0.5,
    "x1": 0.5,
}
_observations = {
    "f": 0.0,
    "f1": 0.0,
The env itself looks good, but did all the tests pass? I assume some would fail since the observable names have changed. If we want to keep the tests untouched, maybe we can keep using the old names? Say, instead of using f1 and f2, we can have them be f and g. What do you think?
Oh wait, the built-in env is not being used in the tests... The only thing it could affect would be the docs.
I think it's a good idea though, using f and g. Zhe, can you make the change?
Given our latest discussion I have made the changes and tried again, but am still having issues getting a mobo example working.
Changes made:
- uncommented "mobo" from excluded algorithms to use within badger for optimization
- modified sphere_2d example based on prior comments
- removed turbo_controller parameter from algorithm metadata
- added 2 reference points at "0.0" for both objective functions to algorithm metadata
The optimization runs through the initial points set out within the environment + vocs, then throws the following error:
Traceback (most recent call last):
File "/Users/mvicto/Desktop/projects/badger/data-viz-extension/src/badger/core_subprocess.py", line 206, in run_routine_subprocess
evaluate_queue[0].send((routine.data, routine.generator))
File "/Users/mvicto/.pyenv/versions/3.10.16/lib/python3.10/multiprocessing/connection.py", line 206, in send
self._send_bytes(_ForkingPickler.dumps(obj))
File "/Users/mvicto/.pyenv/versions/3.10.16/lib/python3.10/multiprocessing/reduction.py", line 51, in dumps
cls(buf, protocol).dump(obj)
File "/Users/mvicto/Desktop/projects/badger/.venv/lib/python3.10/site-packages/torch/multiprocessing/reductions.py", line 225, in reduce_tensor
raise RuntimeError(
RuntimeError: Cowardly refusing to serialize non-leaf tensor which requires_grad, since autograd does not support crossing process boundaries. If you just want to transfer the data, call detach() on the tensor before serializing (e.g., putting it on the queue).
My environment is up-to-date with the latest changes to xopt and badger. If you have some time, let me know if the error is reproducible on either of your machines.
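For context on what torch is refusing here (a generic illustration, not the badger code path): a tensor produced by an operation on a leaf that requires grad is a non-leaf tensor that still requires grad, and torch's multiprocessing reducer will not serialize it; only a detached copy, which carries the data but not the autograd graph, can safely cross the process boundary.

```python
import torch

x = torch.randn(3, requires_grad=True)  # leaf tensor
y = x * 2                               # non-leaf: created by an op, still requires grad
print(y.is_leaf, y.requires_grad)       # False True

# torch.multiprocessing's reducer raises the "Cowardly refusing" error
# on y as-is; a detached copy is safe to send through a queue
y_safe = y.detach()
print(y_safe.requires_grad)             # False
```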
Maybe try implementing the following method around routine.generator.model:

import torch
import torch.nn as nn

def detach_module(module):
    for param in module.parameters():
        param.data = param.data.detach()
        param.requires_grad = False  # clear the autograd flag as well

# Example usage:
model = nn.Linear(10, 2)
input_tensor = torch.randn(1, 10, requires_grad=True)
output_tensor = model(input_tensor)

# Detach all parameters in the model
detach_module(model)

# Verify that gradients are no longer tracked
for param in model.parameters():
    assert not param.requires_grad
Applying this detach_module function might remove the gradient attributes that cause the error.
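One subtlety worth noting (a generic torch behavior, independent of badger): reassigning param.data swaps out the storage but does not clear the requires_grad flag on the parameter itself; the flag has to be cleared explicitly, e.g. with requires_grad_(False). A quick check:

```python
import torch
import torch.nn as nn

layer = nn.Linear(2, 1)

# Reassigning .data leaves the autograd flag untouched
for p in layer.parameters():
    p.data = p.data.detach()
assert all(p.requires_grad for p in layer.parameters())

# requires_grad_(False) actually clears it
for p in layer.parameters():
    p.requires_grad_(False)
assert not any(p.requires_grad for p in layer.parameters())
```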
I've tried your suggestion, however the same requires_grad error is still present, even after confirming that the tensors within the generator model have been detached. Here is my implementation:
from typing import Optional

from botorch.models.model import Model  # the generator's surrogate model type


def detach_module(module: Optional[Model]):
    if module is not None:
        for param in module.parameters():
            param.data = param.data.detach()
            param.requires_grad = False
    return module


if evaluate:
    if routine.generator.model is not None:
        routine.generator.model = detach_module(routine.generator.model)
    evaluate_queue[0].send((routine.data, routine.generator))
There must be another tensor within either routine.data or routine.generator that needs the same treatment, since those are the only two data structures being passed.
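To hunt down such stray tensors, one possible approach (a best-effort sketch of my own, not badger code) is to walk an object's attributes and containers recursively and replace any tensor found with a detached copy. GP models often cache non-leaf tensors (e.g. training data) outside .parameters(), which a parameters-only detach misses:

```python
import torch


def detach_nested(obj, _seen=None):
    """Best-effort recursive detach: replaces tensors found in dicts,
    lists, and object __dict__s with detached copies. Tracks visited
    ids to survive reference cycles."""
    if _seen is None:
        _seen = set()
    if id(obj) in _seen:
        return
    _seen.add(id(obj))
    if isinstance(obj, dict):
        for k, v in list(obj.items()):
            if isinstance(v, torch.Tensor):
                obj[k] = v.detach()
            else:
                detach_nested(v, _seen)
    elif isinstance(obj, list):
        for i, v in enumerate(obj):
            if isinstance(v, torch.Tensor):
                obj[i] = v.detach()
            else:
                detach_nested(v, _seen)
    elif hasattr(obj, "__dict__"):
        detach_nested(vars(obj), _seen)
```

Calling detach_nested(routine.generator) before the send would then cover tensors that live outside the model's parameter list; whether that catches the one tripping the pickler here is untested.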
Hmm, this error is only showing up when using the mobo generator?
…d combobox to call template loading
…or handling in load_template_yaml()
…current options as yaml template
add in second objective for sphere 2d problem