The idea is to use the projections of the polycube on each of the three axes to come up with a set of orientation rules.
For each axis:
1. Take the 2D projection of the bounding box along this axis using `np.any()` and compute the sum of the projection.
2. Take the two 1D projections of that 2D projection and compute the sum of each.

This gives a three-tiered hierarchy to orient into a canonical rotation with:
1. 2D projection area
2. Larger 1D projection length
3. Smaller 1D projection length
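The per-axis steps above can be sketched in NumPy. This is a minimal sketch, assuming the polycube is stored as a boolean 3D array trimmed to its bounding box; the function name `axis_keys` is mine, not from any existing code:

```python
import numpy as np

def axis_keys(poly):
    """For each axis, reduce the boolean voxel grid to its 2D projection
    with np.any(), then to its two 1D projections, and collect the three
    tier values described above."""
    keys = []
    for axis in range(3):
        proj2d = np.any(poly, axis=axis)           # 2D silhouette along this axis
        area = int(proj2d.sum())                   # tier 1: 2D projection area
        a = int(np.any(proj2d, axis=0).sum())      # the two 1D projection lengths
        b = int(np.any(proj2d, axis=1).sum())
        keys.append((area, max(a, b), min(a, b)))  # tiers 2 and 3
    return keys
```

Sorting the candidate rotations by these keys would then pick out the canonical orientation, up to the degenerate cases discussed next.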
Degeneracy still needs to be taken into account. Even in the best case, 4 run encodings would still need to be computed, with 4x the memory use; with two pairs of degenerate faces, 8; and with all three degenerate, 24.
Combine this strategy with storing the degenerate orientations in the hash table, instead of computing the different orientations when testing each new candidate polycube, and I think this would be a reasonable way to reduce the processing load while keeping memory use low.
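A rough sketch of the store-all-orientations side of that trade-off, where `encode` is only a stand-in for the actual run encoding and `remember`/`table` are hypothetical names:

```python
import numpy as np

def encode(poly):
    # Stand-in for the run encoding: shape plus raw voxel bytes.
    return poly.shape, np.ascontiguousarray(poly).tobytes()

def remember(table, degenerate_orientations):
    """Store the encoding of every orientation the projection rules leave
    ambiguous (4, 8, or 24 of them), so that a membership test on a new
    candidate needs to encode only one of its orientations."""
    for p in degenerate_orientations:
        table.add(encode(p))
```

A lookup then reduces to `encode(candidate) in table` for a single orientation of the candidate, trading hash-table memory for per-candidate rotation work.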
I had the same idea, and I think it could help a lot. I would also suggest adding more layers of constraints to remove the possibility of degeneracy altogether. For example, this could be done by calculating the center of gravity of the polycube and orienting it so that the center of gravity lies closest to the origin. I'm not sure this fixes every case, but it should help with a lot of them.
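The center-of-gravity tie-breaker could look something like this minimal sketch (the name `com_key` is an assumption, not from any existing code): among orientations that the projection keys cannot distinguish, pick the one with the smallest value.

```python
import numpy as np

def com_key(poly):
    """Squared distance from the centroid of the filled cells to the
    origin of the bounding box; orientations with equal projection keys
    could be ranked by this value, smallest first."""
    com = np.argwhere(poly).mean(axis=0)
    return float((com ** 2).sum())
```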
Maybe the projected-area information could be used to reduce lookup time too. If the list of all the n=15 cubes, for example, is split into sub-lists keyed by the largest projected area, then any given polycube would only need to be checked against the cubes in that sub-list. I don't know whether this would be terrible in terms of memory, but for larger cube sizes it could speed up the lookup time considerably.
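The bucketed lookup could be sketched as follows; `buckets`, `add_poly`, and `maybe_seen` are hypothetical names, and the encodings here are placeholders for the real run encodings:

```python
from collections import defaultdict

import numpy as np

def largest_area(poly):
    """Largest of the three 2D projection areas; used as the bucket key."""
    return max(int(np.any(poly, axis=ax).sum()) for ax in range(3))

buckets = defaultdict(set)

def add_poly(poly, encoding):
    buckets[largest_area(poly)].add(encoding)

def maybe_seen(poly, encoding):
    # Only the bucket matching this polycube's key needs to be searched.
    return encoding in buckets[largest_area(poly)]
```

Since the largest projected area is rotation-invariant, a candidate and its duplicate always land in the same bucket, so splitting the list this way cannot miss a match.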