opt-mode #78

Open
MaxOstrowski opened this issue Jun 3, 2021 · 8 comments

MaxOstrowski commented Jun 3, 2021

Provide --opt-mode=<arg> also for clingcon:
<arg>: <mode {opt|enum|optN|ignore}>[,<bound>...]

  • opt: Find an optimal model (default behaviour, as before)
  • enum: Find models with costs <= bound (set the minimize statement once during init and never touch it again)
  • optN: Find the optimum, then enumerate optimal models (this requires multi-shot solving; unsure if I can provide this behaviour consistently with the current API)
  • ignore: Ignore optimize statements (treat minimize statements as a no-op)
  • <bound>: Set the initial bound for the objective function(s)
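
For illustration only, the proposed argument format could be decomposed roughly like this (the parse_opt_mode function is hypothetical, not existing clingcon code):

def parse_opt_mode(arg):
    # split "<mode>[,<bound>...]" into the mode and the optional bounds
    parts = arg.split(',')
    mode, bounds = parts[0], [int(b) for b in parts[1:]]
    if mode not in ('opt', 'enum', 'optN', 'ignore'):
        raise ValueError(f'unknown opt-mode: {mode}')
    return mode, bounds

# e.g. parse_opt_mode('enum,10') -> ('enum', [10])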

Either the clingo option can be reused (it is currently not well defined how to combine Boolean and integer optimization), or a new "similar" option will be introduced.

Extra: think about how to combine clingo optimization with clingcon optimization.

@MaxOstrowski added the good first issue label on Jun 3, 2021
@MaxOstrowski
Member Author

Everything but optN seems simple to implement. optN requires a full search and afterwards another full search. This could be done using multi-shot solving.
There is an on_finish callback which could maybe be used, though I doubt it.
Using multi-shot solving has the big disadvantage that the option cannot be hidden behind a single solve call, as is done in clasp with the same option for Boolean optimization statements.
We already have a similar problem with the on_model callback used in clingcon. If the user does not set the on_model callback of the clingcon theory, optimization is broken. In general it would be cool if the XXXTheory class could handle/register all callbacks via the thy.register(ctl) call. The same goes for thy.prepare() and thy.rewrite_ast().
So instead of:

from clingo import Control
from clingo.ast import parse_string, ProgramBuilder
from clingcon import ClingconTheory

prg = '&sum { x } >= 1. &sum { x } <= 3.'

thy = ClingconTheory()
ctl = Control(['0'])
thy.register(ctl)
with ProgramBuilder(ctl) as bld:
    parse_string(prg, lambda ast: thy.rewrite_ast(ast, bld.add))

ctl.ground([('base', [])])
thy.prepare(ctl)
with ctl.solve(yield_=True, on_model=thy.on_model) as hnd:
    for mdl in hnd:
        print([f'{key}={val}' for key, val in thy.assignment(mdl.thread_id)])

the user should be able to do:

from clingo import Control
from clingcon import ClingconTheory

prg = '&sum { x } >= 1. &sum { x } <= 3.'

thy = ClingconTheory()
ctl = Control(['0'])
thy.register(ctl)

ctl.ground([('base', [])])

with ctl.solve(yield_=True) as hnd:
    for mdl in hnd:
        print(mdl)

Something like this could certainly be achieved with a changed Control interface that allows registering all these callbacks beforehand. A wrapper class with a different/simpler interface could also be created. @rkaminsk @wanko what's your opinion on the topic?

rkaminsk commented Jun 8, 2021

Enumerating all optimal models can only reasonably be implemented using multi-shot solving with the current API. This still holds true even with your completely unrelated suggestion to register callbacks earlier. We would have to provide clasp's enumerator interface via the API to implement advanced reasoning modes. This would require a lot of thinking and the help of Benjamin. It is not going to happen anytime soon.

Now regarding your second proposal: I will not implement this. It is true that the current version is more verbose when theories are used, but it gives the user full control over what happens when.

PS: If the callbacks bother you, then you can implement an extended Control object that takes care of this.
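
For illustration, such an extended Control object might look roughly like this (a minimal sketch; the TheoryControl class and its behaviour are hypothetical, not an existing interface):

from clingo import Control
from clingo.ast import parse_string, ProgramBuilder
from clingcon import ClingconTheory

class TheoryControl:
    # hypothetical wrapper that hides the theory callbacks
    def __init__(self, args, theory):
        self._ctl = Control(args)
        self._thy = theory
        self._thy.register(self._ctl)

    def add(self, prg):
        # rewrite theory atoms while adding the program
        with ProgramBuilder(self._ctl) as bld:
            parse_string(prg, lambda ast: self._thy.rewrite_ast(ast, bld.add))

    def ground(self, parts):
        self._ctl.ground(parts)

    def solve(self, **kwargs):
        # prepare the theory and pass its on_model callback automatically
        self._thy.prepare(self._ctl)
        kwargs.setdefault('on_model', self._thy.on_model)
        return self._ctl.solve(**kwargs)

ctl = TheoryControl(['0'], ClingconTheory())
ctl.add('&sum { x } >= 1. &sum { x } <= 3.')
ctl.ground([('base', [])])
with ctl.solve(yield_=True) as hnd:
    for mdl in hnd:
        print(mdl)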

@MaxOstrowski
Member Author

This was just meant as some brainstorming to identify missing features in the API.
My suggestion admittedly sounds a bit unrelated, but what I had in mind was a kind of wrapper that also allows modifying the solve method, such that thy.register() changes the solve call into, in this case, two solve calls to enumerate all models, using a callback.
This would even work with stacking theories and stacking solve functions. I agree that this is not happening anytime soon, nor is it the only way to make our system generic enough to be used with many different theories.

@MaxOstrowski
Member Author

I could start by implementing all the other "opt-modes", as they can easily be done.
And maybe even optN for the clingcon binary only. Do you think this is a good idea?

rkaminsk commented Jun 8, 2021

> This was just meant as some brainstorming to identify missing features in the API.
> My suggestion admittedly sounds a bit unrelated, but what I had in mind was a kind of wrapper that also allows modifying the solve method, such that thy.register() changes the solve call into, in this case, two solve calls to enumerate all models, using a callback.
> This would even work with stacking theories and stacking solve functions. I agree that this is not happening anytime soon, nor is it the only way to make our system generic enough to be used with many different theories.

You were describing something completely different than you are describing now. 😉 We could hand over solving (and also grounding) to the theory by simply adding another function. This is straightforward as long as just one theory wants to change how solving is done. Or we do something like asking the theory if another pass is needed; developers would then have to call solve again. From a user's perspective, the optN mode is probably the most convenient one to have. Having enum and <bound> would make it relatively easy for developers to achieve optN behavior in their applications.

Using the idea with the passes, this could be something like:

thy.prepare(ctl)
ctl.ground(...)
while thy.step(ctl):
  ctl.solve(...)

Multi-shot solving related information could be initialized/reset in prepare. For most theories the step function would return true for the first call and false for all later ones. Since the step function receives the Control object, the theory can do whatever it wants before solving.
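
A minimal sketch of this protocol for a theory that needs only a single pass (the class is hypothetical):

class SinglePassTheory:
    def prepare(self, ctl):
        # initialize/reset multi-shot related information here
        self._done = False

    def step(self, ctl):
        # receives the Control object, so the theory could modify the
        # program here before the next solve call; by default it simply
        # requests exactly one pass
        if self._done:
            return False
        self._done = True
        return True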

MaxOstrowski commented Jun 9, 2021

What about

ctl.ground([('base', [])])
thy.prepare(ctl)
with thy.solve(yield_=True, on_model=thy.on_model) as hnd:
    for mdl in hnd:
        ...

This would mean that the theory has its own solve method (with the same interface as the Control::solve) and can do whatever it wants.

If you want to combine different theories, then maybe:

ctl.ground([('base', [])])
thy.prepare(ctl)
with thy.solve(ctl.solve, yield_=True, on_model=thy.on_model) as hnd:
    for mdl in hnd:
        ...

is suitable. Then thy.solve can call ctl.solve via a function pointer.
This would also work for stacking theories, but gets messy fast ...
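
A rough sketch of that stacking idea, assuming a hypothetical WrappingTheory and the ctl object from above:

class WrappingTheory:
    def solve(self, solve_fn, **kwargs):
        # a theory could run extra passes or adjust the program here
        # before delegating to the wrapped solve function
        return solve_fn(**kwargs)

t1, t2 = WrappingTheory(), WrappingTheory()
# t1 wraps t2, which wraps the plain ctl.solve
with t1.solve(lambda **kw: t2.solve(ctl.solve, **kw), yield_=True) as hnd:
    for mdl in hnd:
        ...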

rkaminsk commented Jun 9, 2021

> What about thy.solve.

This falls into the "handing the solving to the theory" part of my comment. I think that the user will always have to know how to combine theories. Especially in view of multi-shot solving, returning a Handle class from the solve function will be difficult; it would probably have to be something different from the existing SolveHandle class.

@MaxOstrowski
Member Author

Current idea to handle optN and other cases for any theory would be:

while t1.step(ctl):
  ctl.solve(...)
while t2.step(ctl):
  ctl.solve(...)
while t3.step(ctl):
  ctl.solve(...)
ctl.solve(...)

Each theory has a step function that:

  1. declares that the theory now has time to post any preprocessing solve calls
  2. returns true if such solve calls exist
  3. returns false if no more such solve calls exist; the default behaviour should simply be to return false
  • for optN this means the first call to step() posts the normal minimization constraint and the corresponding solve call computes the optimum. Afterwards, the next step call returns false. The final solve call at the very end then should have a fixed constraint for the optimization bound and simply enumerate all models (see the sketch below).
  4. Other theories also get the opportunity to compute intermediate fixpoints for their optimization too
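
An illustrative sketch of the optN case under this protocol (the class and the helpers for posting the minimize constraint, reading the objective value, and fixing the bound are made up):

class OptNTheory:
    def prepare(self, ctl):
        self._calls = 0
        self._optimum = None

    def on_model(self, model):
        # remember the best objective value found so far (hypothetical helper)
        self._optimum = self._objective_value(model)

    def step(self, ctl):
        self._calls += 1
        if self._calls == 1:
            # first pass: post the normal minimization constraint; the
            # following solve call computes the optimum
            self._post_minimize(ctl)             # hypothetical helper
            return True
        # afterwards: fix the bound so that the final solve call at the
        # very end simply enumerates all optimal models
        if self._optimum is not None:
            self._fix_bound(ctl, self._optimum)  # hypothetical helper
        return False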

TODO: have a look at clasp's enumeration to maybe get a different interface idea.
EXTRA: Is there a way to combine this with Boolean optimization?

@rkaminsk added the enhancement and future labels and removed the good first issue and future labels on Feb 23, 2022