Mobo enhancement #248
Conversation
…f points - adds flag use_pf_as_initial_points to enable behavior
A general comment: I'd suggest we do a more complete implementation with more randomization as a follow-up PR. There is also a question of how to compute feasibility: just drop infeasible points, as is done now, or sample each candidate to determine feasibility probabilistically, potentially getting more accurate borderline candidates (a rough sketch of the probabilistic option is below). More specific comments follow inline:
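A minimal sketch of the probabilistic alternative, assuming a BoTorch-style constraint model with a `posterior()` method and the convention that a candidate is feasible when every constraint value is <= 0; none of these names come from the PR:

```python
import torch


def probabilistic_feasibility(constraint_model, X, n_samples=128, threshold=0.5):
    """Keep candidates whose Monte Carlo estimate of P(feasible) >= threshold."""
    posterior = constraint_model.posterior(X)  # BoTorch-style posterior over constraints
    samples = posterior.rsample(torch.Size([n_samples]))  # (n_samples, n, n_constraints)
    feasible_per_sample = (samples <= 0.0).all(dim=-1)  # feasibility of each draw
    p_feasible = feasible_per_sample.float().mean(dim=0)  # estimate per candidate
    return p_feasible >= threshold  # boolean mask over the n candidates
```

This would retain borderline candidates that a hard feasibility cut drops, at the cost of extra posterior sampling.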
xopt/vocs.py
Outdated
observable_data = self.observable_data(data, "")

if return_valid:
    feasable_status = self.feasibility_data(data)["feasible"]
typo: "feasable_status" should be "feasible_status"
Minor concerns with going to GPU and a lot of recomputation - probably a small thing compared to the main MOBO loop time. Otherwise LGTM.
xopt/generators/bayesian/mobo.py
Outdated
supports_batch_generation: bool = True

use_pf_as_initial_points: bool = Field(
    False,
    description="flag to specify if pf front points are to be used during "
typo: "pf front points" in the description ("pf" already stands for Pareto front)
    use_pf_as_initial_points=True,
)
gen.add_data(test_data)
gen._get_initial_conditions()
verify that the infeasible candidate did not make it into the returned initial conditions (see the sketch below)
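A hypothetical assertion for that test; `infeasible_x` is a placeholder for the variable values of the known-infeasible row in `test_data`, and the reshape assumes the `(n, 1, d)` layout BoTorch uses for initial conditions:

```python
import torch

initial_conditions = gen._get_initial_conditions()
infeasible_x = torch.tensor([0.5, 0.5])  # placeholder: variables of the infeasible row
flat = initial_conditions.reshape(-1, infeasible_x.shape[-1])
leaked = torch.isclose(flat, infeasible_x).all(dim=-1).any()
assert not leaked, "infeasible point was used as an initial condition"
```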
xopt/numerical_optimizer.py
Outdated
@@ -67,7 +70,7 @@ class GridOptimizer(NumericalOptimizer):
        10, description="number of grid points per axis used for optimization"
    )

-    def optimize(self, function, bounds, n_candidates=1):
+    def optimize(self, function, bounds, n_candidates=1, **kwargs):
assert that kwargs is empty if none are expected, so unexpected arguments fail loudly (sketch below)
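A minimal sketch of that guard inside `GridOptimizer.optimize`; the message text is mine:

```python
def optimize(self, function, bounds, n_candidates=1, **kwargs):
    # GridOptimizer accepts no extra keyword arguments; fail loudly
    # instead of silently ignoring them
    assert not kwargs, f"unexpected keyword arguments: {list(kwargs)}"
    ...
```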
)

non_dominated = is_non_dominated(obj_data)

weights = set_botorch_weights(self.vocs).to(**self._tkwargs)[
can you reuse the weights from `_get_scaled_data()` and avoid recomputing them? (a possible refactor is sketched below)
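One possible shape for that refactor, assuming `_get_scaled_data()` already builds the tensors shown in this diff; the exact signature and return values are assumptions, not Xopt's actual code:

```python
def _get_scaled_data(self):
    var_df = self.data[self.vocs.variable_names]
    obj_df = self.data[self.vocs.objective_names]
    variable_data = torch.tensor(var_df.to_numpy(), **self._tkwargs)
    objective_data = torch.tensor(obj_df.to_numpy(), **self._tkwargs)
    # compute the weights once here ...
    weights = set_botorch_weights(self.vocs).to(**self._tkwargs)
    # ... and hand them back so callers don't call set_botorch_weights again
    return variable_data, objective_data, weights
```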
]

variable_data = torch.tensor(var_df[self.vocs.variable_names].to_numpy())
objective_data = torch.tensor(obj_df[self.vocs.objective_names].to_numpy())
weights = set_botorch_weights(self.vocs).to(**self._tkwargs)[
I haven't benchmarked this, but moving these tensors to the GPU might be quite slow for our small dataset sizes (a quick way to check is sketched below)
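A quick, non-authoritative way to check that concern: time the host-to-device transfer for a dataset of the size Xopt typically sees (the 200x8 shape is a made-up example):

```python
import time

import torch

x = torch.rand(200, 8)  # made-up "small dataset" size
if torch.cuda.is_available():
    t0 = time.perf_counter()
    for _ in range(100):
        _ = x.to("cuda")
    torch.cuda.synchronize()
    print(f"avg host-to-device transfer: {(time.perf_counter() - t0) / 100 * 1e3:.3f} ms")
```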
@nikitakuklev for the record here, I'll reiterate that we are happy to incorporate your suggested improvements to this process in a future PR
`use_pf_as_initial_points`: a flag which uses points on the Pareto frontier to initialize optimization of the EHVI acquisition function, resulting in a substantial speed-up of convergence to the Pareto front in high-dimensional input spaces. A usage sketch follows.
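A sketch of how the new flag would be enabled; the `vocs` definition and reference point values are placeholders, not taken from the PR:

```python
from xopt import VOCS
from xopt.generators.bayesian.mobo import MOBOGenerator

vocs = VOCS(
    variables={"x1": [0.0, 1.0], "x2": [0.0, 1.0]},
    objectives={"f1": "MINIMIZE", "f2": "MINIMIZE"},
)
gen = MOBOGenerator(
    vocs=vocs,
    reference_point={"f1": 10.0, "f2": 10.0},
    use_pf_as_initial_points=True,  # seed acquisition optimization with Pareto-front points
)
```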