
Suggestion to reduce number of run encodings/lookups/rotations needed in many cases while keeping memory use to a minimum #12

Open
thejevans opened this issue Jul 12, 2023 · 1 comment

@thejevans

The idea is to use the projections of the polycube onto each of the three axes to come up with a set of orientation rules.

For each axis:

  • First, take the 2D projection of the polycube within its bounding box along this axis using np.any() and compute the sum (area) of that projection
  • Next, take the two 1D projections of the 2D projection and compute the sum (length) of each

This gives a three-tiered hierarchy for orienting the polycube into a canonical rotation (see the sketch after this list):

  • 2D projection area
  • Larger 1D projection length
  • Smaller 1D projection length
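
A minimal numpy sketch of this per-axis key, assuming the polycube is stored as a 3D boolean array (the function and variable names here are mine, not from this repo):

```python
import numpy as np

def projection_key(cube: np.ndarray):
    """Per-axis (area, longer 1D length, shorter 1D length) tuples for a 3D bool array."""
    keys = []
    for axis in range(3):
        proj2d = cube.any(axis=axis)            # 2D projection along this axis
        area = int(proj2d.sum())                # tier 1: projection area
        len_a = int(proj2d.any(axis=0).sum())   # 1D projections of the 2D projection
        len_b = int(proj2d.any(axis=1).sum())
        keys.append((area, max(len_a, len_b), min(len_a, len_b)))  # tiers 2 and 3
    return keys
```

Sorting the axes by these tuples would then pick the canonical orientation, up to the degeneracies discussed below.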

Degeneracy still needs to be taken into account. Even in the best case, 4 run-encoding calculations and 4x memory use would still be required; with two pairs of degenerate faces, 8; and with all three degenerate, 24.

Combining this strategy with storing the degenerate orientations in the hash table, instead of computing the different orientations when testing a new candidate polycube, seems like a reasonable way to reduce the processing load while keeping memory use low.
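
Something like the following is what I have in mind, assuming a run-length `encode()`, a `canonical_orientation()` based on the projection hierarchy, and a `degenerate_orientations()` helper that yields only the rotations the hierarchy cannot distinguish (all three are hypothetical placeholders):

```python
seen = set()

def insert(cube):
    # store every orientation the projection hierarchy leaves ambiguous,
    # so lookups never have to enumerate rotations
    for rot in degenerate_orientations(canonical_orientation(cube)):
        seen.add(encode(rot))

def contains(candidate):
    # a new candidate needs only one canonical encoding and one lookup
    return encode(canonical_orientation(candidate)) in seen
```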

@Woodwalker

Nice suggestion!

I had the same idea, and I think it could help a lot. I would also suggest adding more layers of constraints to remove the possibility of degeneracy altogether. This could, for example, be done by calculating the center of gravity of the polycube and orienting it such that the center of gravity lies closest to the origin. I'm not sure it fixes every case, but it should help with a lot of them.
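
For example, one possible tie-break key (purely a sketch; the names are not from this repo) is the squared distance of the center of gravity from the bounding-box origin, choosing the degenerate orientation that minimizes it:

```python
import numpy as np

def com_distance_key(cube: np.ndarray) -> float:
    coords = np.argwhere(cube)          # (N, 3) indices of the filled cells
    com = coords.mean(axis=0)           # center of gravity in bounding-box coordinates
    return float(np.dot(com, com))      # squared distance to the origin
```

Among the orientations the projection hierarchy cannot separate, pick the one with the smallest key (any remaining ties would still need another rule).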

Maybe the projected-area information could also be used to reduce lookup time. If the list of all the n=15 cubes, for example, is split into sub-lists keyed by the largest projected area, then any given polycube would only need to be checked against the cubes in its sub-list. I don't know how bad this would be in terms of memory, but for larger cube sizes it could speed up the lookup by a lot.
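
As a rough sketch of what I mean (again, `encode()` is a placeholder for whatever run encoding is used), the hash table could be split into buckets keyed by the largest 2D projection area:

```python
from collections import defaultdict

buckets = defaultdict(set)

def largest_projection_area(cube):
    return max(int(cube.any(axis=a).sum()) for a in range(3))

def insert(cube):
    buckets[largest_projection_area(cube)].add(encode(cube))

def contains(candidate):
    # only the cubes sharing the candidate's largest projection area are checked
    return encode(candidate) in buckets[largest_projection_area(candidate)]
```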
