
Update __init__.py #146

Open · wants to merge 21 commits into main

Conversation

roussel-ryan
Collaborator

add in second objective for sphere 2d problem
@MitchellAV
Collaborator

Having loaded the environment into badger, I don't believe the multi-objective version of sphere_2d is valid as it currently exists.

The _observations private variable has f as the observable, when I believe it should be f1 and f2. Even after making that change, I'm still having issues running an optimization.
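
Roughly, this is the rename I have in mind (just a sketch, not an actual diff from this PR):

# sketch of the rename, not the actual code in this PR
_observations = {
    "f1": 0.0,
    "f2": 0.0,
}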

Here is the error trace I receive with the expected_improvement algorithm.

File "/Users/mvicto/Desktop/projects/badger/.venv/lib/python3.10/site-packages/xopt/generators/bayesian/utils.py", line 202, in validate_turbo_controller_base
    value = available_controller_types[value](info.data["vocs"])
KeyError: 'vocs'

@roussel-ryan
Collaborator Author

Good catch @MitchellAV, I updated the PR. The vocs error is interesting - could you provide the full stack trace?

@MitchellAV
Collaborator

MitchellAV commented Feb 26, 2025

@roussel-ryan Here is the full trace. I am able to run optimizations within badger for other environments, but seem to be having issues with the mobo provided.

Traceback (most recent call last):
  File "/Users/mvicto/Desktop/projects/badger/data-viz-extension/src/badger/gui/acr/pages/home_page.py", line 357, in start_run
    self.prepare_run()
  File "/Users/mvicto/Desktop/projects/badger/data-viz-extension/src/badger/gui/acr/pages/home_page.py", line 344, in prepare_run
    raise e
  File "/Users/mvicto/Desktop/projects/badger/data-viz-extension/src/badger/gui/acr/pages/home_page.py", line 341, in prepare_run
    routine = self.routine_editor.routine_page._compose_routine()
  File "/Users/mvicto/Desktop/projects/badger/data-viz-extension/src/badger/gui/acr/components/routine_page.py", line 1016, in _compose_routine
    routine = Routine(
  File "/Users/mvicto/Desktop/projects/badger/.venv/lib/python3.10/site-packages/xopt/base.py", line 220, in __init__
    super().__init__(**kwargs)
  File "/Users/mvicto/Desktop/projects/badger/.venv/lib/python3.10/site-packages/pydantic/main.py", line 214, in __init__
    validated_self = self.__pydantic_validator__.validate_python(data, self_instance=self)
  File "/Users/mvicto/Desktop/projects/badger/data-viz-extension/src/badger/routine.py", line 57, in validate_model
    data["generator"] = generator_class.model_validate(
  File "/Users/mvicto/Desktop/projects/badger/.venv/lib/python3.10/site-packages/pydantic/main.py", line 627, in model_validate
    return cls.__pydantic_validator__.validate_python(
  File "/Users/mvicto/Desktop/projects/badger/.venv/lib/python3.10/site-packages/xopt/generator.py", line 86, in __init__
    super().__init__(**kwargs)
  File "/Users/mvicto/Desktop/projects/badger/.venv/lib/python3.10/site-packages/pydantic/main.py", line 214, in __init__
    validated_self = self.__pydantic_validator__.validate_python(data, self_instance=self)
  File "/Users/mvicto/Desktop/projects/badger/.venv/lib/python3.10/site-packages/xopt/generators/bayesian/bayesian_generator.py", line 227, in validate_turbo_controller
    value = validate_turbo_controller_base(value, controller_dict, info)
  File "/Users/mvicto/Desktop/projects/badger/.venv/lib/python3.10/site-packages/xopt/generators/bayesian/utils.py", line 202, in validate_turbo_controller_base
    value = available_controller_types[value](info.data["vocs"])
KeyError: 'vocs'

@roussel-ryan
Collaborator Author

one consideration is that turbo can't be used with mobo - maybe you can delete the turbo line in the generator text box?
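
Something like this is what I have in mind (a minimal sketch, assuming xopt's MOBOGenerator; the bounds and objective names are placeholders, not the exact config in the PR):

# minimal sketch: build the mobo generator without any turbo controller
# (bounds and objective names here are placeholders)
from xopt.vocs import VOCS
from xopt.generators.bayesian.mobo import MOBOGenerator

vocs = VOCS(
    variables={"x0": [0.0, 1.0], "x1": [0.0, 1.0]},
    objectives={"f1": "MINIMIZE", "f2": "MINIMIZE"},
)

# no turbo_controller is passed, so the turbo validation path is never hit
generator = MOBOGenerator(vocs=vocs, reference_point={"f1": 0.0, "f2": 0.0})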

@roussel-ryan
Collaborator Author

but clearly this should be validated in xopt properly
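
Something along these lines would at least fail with a clear message instead of a bare KeyError (just a sketch, not xopt's actual code; the argument names here are hypothetical):

# hypothetical guard, not xopt's current implementation
def validate_turbo_controller_base(value, available_controller_types, info):
    if "vocs" not in info.data:
        raise ValueError(
            "cannot build a turbo controller: 'vocs' is missing or failed "
            "validation earlier, so there is nothing to construct it from"
        )
    if isinstance(value, str):
        if value not in available_controller_types:
            raise ValueError(f"unknown turbo controller type: {value!r}")
        value = available_controller_types[value](info.data["vocs"])
    return value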


_variables = {
    "x0": 0.5,
    "x1": 0.5,
}

_observations = {
    "f": 0.0,
    "f1": 0.0,

Collaborator

The env itself looks good, but did all the tests pass? I assume some would fail since the observable names have changed. If we want to keep the tests untouched, maybe we can keep using the old name? Say, instead of using f1 and f2, we could have them be f and g. What do you think?

Collaborator

Oh wait, the built-in env is not being used in the tests... The thing it could affect would be the docs.

Collaborator Author

I think it's a good idea though, using f and g. Zhe, can you make the change?

Collaborator

Given our latest discussion, I have made the changes and tried again, but am still having issues getting a mobo example working.

Changes made:

  • uncommented "mobo" in the excluded algorithms list so it can be used within badger for optimization
  • modified sphere_2d example based on prior comments
  • removed turbo_controller parameter from algorithm metadata
  • added 2 reference points at "0.0" for both objective functions to algorithm metadata

The optimization runs through the initial points set out within the environment + vocs, then throws the following error.

Traceback (most recent call last):
  File "/Users/mvicto/Desktop/projects/badger/data-viz-extension/src/badger/core_subprocess.py", line 206, in run_routine_subprocess
    evaluate_queue[0].send((routine.data, routine.generator))
  File "/Users/mvicto/.pyenv/versions/3.10.16/lib/python3.10/multiprocessing/connection.py", line 206, in send
    self._send_bytes(_ForkingPickler.dumps(obj))
  File "/Users/mvicto/.pyenv/versions/3.10.16/lib/python3.10/multiprocessing/reduction.py", line 51, in dumps
    cls(buf, protocol).dump(obj)
  File "/Users/mvicto/Desktop/projects/badger/.venv/lib/python3.10/site-packages/torch/multiprocessing/reductions.py", line 225, in reduce_tensor
    raise RuntimeError(
RuntimeError: Cowardly refusing to serialize non-leaf tensor which requires_grad, since autograd does not support crossing process boundaries.  If you just want to transfer the data, call detach() on the tensor before serializing (e.g., putting it on the queue).

My environment is up-to-date with the latest changes to xopt and badger. If you have some time, let me know if the error is reproducible on either of your machines.

Collaborator Author

Maybe try applying the following method to routine.generator.model:

import torch
import torch.nn as nn

def detach_module(module):
    # detach each parameter's data and stop tracking gradients on it
    for param in module.parameters():
        param.data = param.data.detach()
        param.requires_grad = False

# Example usage:
model = nn.Linear(10, 2)
input_tensor = torch.randn(1, 10, requires_grad=True)
output_tensor = model(input_tensor)

# Detach all parameters in the model
detach_module(model)

# Verify that gradients are no longer tracked
for param in model.parameters():
    assert not param.requires_grad

Collaborator Author

Applying this detach_module function might remove the gradient attributes that cause the error.

Collaborator

I've tried your suggestion; however, the same requires_grad error is still present, even after confirming that the tensors within the generator model have been detached. Here is my implementation.

from typing import Optional

from botorch.models.model import Model


def detach_module(module: Optional[Model]):
    # detach parameter data and stop gradient tracking on the generator model
    if module is not None:
        for param in module.parameters():
            param.data = param.data.detach()
            param.requires_grad = False
    return module


# inside run_routine_subprocess in core_subprocess.py
if evaluate:
    if routine.generator.model is not None:
        routine.generator.model = detach_module(routine.generator.model)
    evaluate_queue[0].send((routine.data, routine.generator))

There must be another tensor within either routine.data or routine.generator that needs the same treatment, since those are the only two data structures being passed.
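
One way to narrow it down (just a debugging sketch; find_grad_tensors is not an existing helper) would be to walk both objects and report any tensor that still tracks gradients before the send:

import torch

def find_grad_tensors(obj, path="obj", seen=None):
    # recursively report tensors that still require grad (hypothetical helper)
    if seen is None:
        seen = set()
    if id(obj) in seen:
        return
    seen.add(id(obj))
    if isinstance(obj, torch.Tensor):
        if obj.requires_grad:
            print(f"{path}: requires_grad=True, is_leaf={obj.is_leaf}")
    elif isinstance(obj, dict):
        for k, v in obj.items():
            find_grad_tensors(v, f"{path}[{k!r}]", seen)
    elif isinstance(obj, (list, tuple)):
        for i, v in enumerate(obj):
            find_grad_tensors(v, f"{path}[{i}]", seen)
    elif hasattr(obj, "__dict__"):
        for k, v in vars(obj).items():
            find_grad_tensors(v, f"{path}.{k}", seen)

# usage: call just before evaluate_queue[0].send(...)
# find_grad_tensors(routine.generator, "routine.generator")
# find_grad_tensors(routine.data, "routine.data")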

Collaborator Author

Hmm, this error is only showing up when using the mobo generator?
