Feature: Zero-suppressed Decision Diagrams #128
Merged
Codecov Report

@@            Coverage Diff             @@
##              main      #128      +/-  ##
=============================================
- Coverage   96.868%   96.639%   -0.229%
=============================================
  Files           27        43       +16
  Lines         1884      2291      +407
=============================================
+ Hits          1825      2214      +389
- Misses          59        77       +18
This is done the 'slow' way, since we use the user-friendly push function that performs quite a few irrelevant verification checks. Some other day, we may want to inline the canonicity computation of the push function and explicitly zip the meta file.
This also removes the Buckets = 0 option for the Levelized Priority Queue, since it spawns a single bucket that is spurious and dead. If we need an unbucketed Levelized Priority Queue at some point (e.g. Issue #149), we should implement it as a completely new class.
Labels
✨ feature: New operation or other feature
✨ optimisation: It's all about speed / space
📁 zdd: Zero-suppressed Decision Diagrams
Adds support for Zero-suppressed Decision Diagrams, which are well suited for representing families of sparse subsets of some universe.
Features

Zero-Suppressed Decision Diagrams
- `zdd` and `__zdd` objects (similar to the `bdd` and `__bdd` objects) that take care of reference counting the files.

Functions
- `zdd_empty` to create Ø
- `zdd_null` to create { Ø }
- `zdd_ithvar` to create { {x_i} }
- `zdd_vars` to create { {x_i1, x_i2, ..., x_ik} }
- `zdd_singletons` to create { {x_i1}, {x_i2}, ..., {x_ik} }
- `zdd_powerset` to create { {x_i1}, {x_i2}, ..., {x_i1, x_i2}, {x_i1, x_i3}, ..., {x_i1, x_i2, ..., x_ik} }
- `zdd_sized_set` to create { s ∈ 2^vars | |s| ⊗ k }, where ⊗ is a binary predicate (`std::less`, `std::greater`, `std::equal_to`, and so on) and k is a fixed size.
- `zdd_binop` for binary operations on the sets:
  - `zdd_union` (`operator|`) to compute A ∪ B
  - `zdd_intsec` (`operator&`) to compute A ∩ B
  - `zdd_diff` (`operator-`) to compute A \ B
- `zdd_project` to compute π_{i1, ..., ik}(A) = { s \ {i1, ..., ik}^c | s ∈ A }
- `zdd_nodecount` and `zdd_varcount` functions
- `zdd_size` to compute |A|
- `zdd_equal` and `zdd_unequal` (operators `==` and `!=`)
- `zdd_subseteq` (`operator<=`), a cheaper version than computing `zdd_binop(a,b,imp_op)` and checking whether it collapsed to the true sink.
- `zdd_subset` (`operator<`), using `zdd_subseteq` and `zdd_unequal`.
- `zdd_disjoint`, a cheaper version than computing `zdd_intsec(a,b)` (i.e. `zdd_setop(a,b,and_op)`) and checking whether it collapsed to the false sink.

A short usage sketch of these functions is given below.
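To make the new interface concrete, here is a minimal sketch of how the functions above compose. The header path, the `adiar_init(...)` signature, the label-list argument style, and the return types are assumptions for illustration only; the `zdd_*` functions and operators themselves are the ones listed in this pull request.

```cpp
// Minimal usage sketch of the new ZDD interface. Header path, adiar_init
// signature and argument style are assumed, not confirmed by this PR.
#include <adiar/adiar.h>
#include <cstdint>

int main()
{
  adiar::adiar_init(128 * 1024 * 1024); // assumed: memory limit in bytes

  // { {x0}, {x1}, {x2} } and the powerset of {x0, x1}
  adiar::zdd A = adiar::zdd_singletons({0, 1, 2});
  adiar::zdd B = adiar::zdd_powerset({0, 1});

  adiar::zdd U = A | B; // zdd_union:  A ∪ B
  adiar::zdd I = A & B; // zdd_intsec: A ∩ B
  adiar::zdd D = A - B; // zdd_diff:   A \ B

  // zdd_disjoint is the cheaper alternative to computing D & I and
  // checking whether it collapsed to the false sink.
  const bool disjoint    = adiar::zdd_disjoint(D, I);
  const uint64_t familysz = adiar::zdd_size(U); // |A ∪ B|

  adiar::adiar_deinit();
  return (disjoint && familysz > 0) ? 0 : 1;
}
```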
Examples

The ZDD example provided is the Knight's Tour problem on an N x N board (both Hamiltonian path and cycle).
Documentation
Other changes

- Use the (undocumented) constructor for `tpie::priority_queue` to remove the 'hacky' initialization (see tpie #244).
- Fix `bdd_ithvar`, `bdd_nithvar`, `bdd_or`, and `bdd_and` not being canonical (bdd_sink, bdd_ithvar, ... are not canonical #129). Getting `bdd_counter` to be canonical would be a much bigger task and is not really worth it.
- Fix `bdd_ite` always marking its output as non-canonical (bdd_ite does not set canonicity in zip inputs case #139).
- Split the `arc_file` sink arcs into two files: the ones that are in-order and the ones that are out-of-order. This should improve the performance of all algorithms that may output sink arcs out-of-order, since a smaller number of elements needs to be sorted (`bdd_restrict`, `bdd_exists`, `bdd_forall`, `zdd_setop`). This leaves "Shortcut merge-conditional at compile-time when sink-arcs are known to be in-order" #132 to be resolved as an optimisation on the algorithms where we can guarantee the arcs to be in-order, i.e. `bdd_apply`, `bdd_ite`, etc.
- Fix canonicity still being true for one-node BDDs that do not use MAX_ID for their identifier (Manually constructed ithvar without MAX_ID is still 'canonical' #153).
- Redo the gathering of (optional) statistics from the feature/statistics branch (Add (optional) gathering of statistics #141).
- Clean up the duplicated memory computation for the `tpie::merge_sorter` in the wrapper class `external_sorter`. Interactions with TPIE's memory management are also moved into their designated place, `memory.h` (Move memory management into one place: memory.h #144).
- Add `adiar_initialized()` to check whether `adiar_init(...)` has been called without a corresponding `adiar_deinit()`. Adiar now also deliberately checks whether it is being initialized twice and skips the second initialization. A sketch of this guard follows the list.
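A minimal sketch of how that last point can be used, assuming only the function names from the bullet above (the memory argument to `adiar_init(...)` is an illustrative value):

```cpp
#include <adiar/adiar.h>

// Initialize Adiar exactly once, no matter how many callers reach this point.
// adiar_initialized(), adiar_init(...) and adiar_deinit() are the names from
// the bullet above; the memory argument is an assumed example value.
void ensure_adiar_initialized()
{
  if (!adiar::adiar_initialized()) {
    adiar::adiar_init(128 * 1024 * 1024);
  }
  // Even without this guard, Adiar itself now detects a second
  // adiar_init(...) call and skips re-initializing.
}
```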
Performance

This pull request includes a major refactoring. It is important that the generalisation of Apply with new features (not used by BDDs) does not have a major impact on performance. Hence, we rerun our BDD benchmarks.
27th of May

Consumer grade laptop (Alice). Order of obtaining the numbers: left-to-right.

14-Queens

[benchmark table: feature/zdd vs. main]

This is a 4% relative decrease in performance compared to 9c034fc.

Grendel Cluster

N-Queens

[benchmark table: feature/zdd vs. main (12-2020) vs. main (04-2021)]

This reaffirms the 4% slowdown on Alice.
Tic-Tac-Toe

[benchmark table: feature/zdd vs. main (12-2020)]

With that much deviance, I wonder whether the feature/zdd numbers were just taken on a "bad disk".

9th of June
Purely done on the consumer grade laptop (Alice). The order of obtaining the numbers: left-to-right.
14-Queens

[benchmark table: feature/zdd vs. main]

This is now only a 2.2% performance decrease!
17th of June
I tried to split the sink arc file in two based on the following idea: sink arcs that are output out-of-order could instead be placed in a separate file. The two files can then be merged during the Reduce, after the out-of-order one has been sorted. This allows the product construction to output fewer nodes, thereby saving on memory. It also improves the performance of the `bdd_restrict`, `bdd_exists`, and `bdd_forall` functions, since they also output arcs out-of-order.

This results in a constant need to merge the two files in `adiar::reduce`, so we need to know whether this solution, while pretty in code, causes a major decrease in performance. A rough sketch of that merge step is shown below.
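As an illustration of the idea only (all type and function names here are hypothetical stand-ins, not Adiar's actual internals): the in-order arcs stream straight through, while the out-of-order ones are sorted first and then merged in.

```cpp
#include <algorithm>
#include <cstdint>
#include <iterator>
#include <vector>

// Hypothetical stand-in for a sink arc; Adiar's real representation differs.
struct sink_arc { uint64_t source; bool value; };

// Merge the already in-order sink arcs with the out-of-order ones, which are
// sorted first. This mirrors the constant merge now needed inside the Reduce:
// only the (smaller) out-of-order file pays the cost of sorting.
std::vector<sink_arc> merge_sink_arcs(const std::vector<sink_arc>& in_order,
                                      std::vector<sink_arc> out_of_order)
{
  const auto by_source = [](const sink_arc& a, const sink_arc& b) {
    return a.source < b.source;
  };
  std::sort(out_of_order.begin(), out_of_order.end(), by_source);

  std::vector<sink_arc> merged;
  merged.reserve(in_order.size() + out_of_order.size());
  std::merge(in_order.begin(), in_order.end(),
             out_of_order.begin(), out_of_order.end(),
             std::back_inserter(merged), by_source);
  return merged;
}
```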
The measurements were again done purely on the consumer grade laptop (Alice). The order of obtaining the numbers: left-to-right.

14-Queens

[benchmark table: with vs. without the split sink-arc file]

The 1.7% slowdown from before (0.5% faster than on the 9th of June without any changes) turned into a 3.6% slowdown. So, it looks pretty good. I'll leave it as a good first issue to template the `arc_file`, `__bdd`, and `__zdd` objects on whether one can guarantee the arcs are in-order or they need to be merged from the two files.