
Jump-and-Walk #122

Open

wants to merge 1 commit into master
Conversation

therealbnut

This is just a proof-of-concept.

I was interested to see how the Jump-and-Walk algorithm would compare to your existing point-location methods. It's meant to perform better with high-density clusters of points, and it needs no pre-sorting or maintenance of data structures.

Understandably it's worse than bulk last-used because the vertices are sorted for bulk insertion.
[Screenshot: bulk-insertion benchmark results, 2025-01-22 12:25 pm]

It's consistently slower than the hierarchy for location, but requires no hierarchy management.
[Screenshot: point-location benchmark results, 2025-01-22 12:25 pm]

The jump sampling count could probably be optimised a bit (and the sample is meant to be chosen randomly rather than by stepping, but I wanted determinism). I don't currently think there's a clear enough advantage to pursue it further, but I thought I'd share the results in case you're interested (feel free to close this).
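The "jump" step with stepped (deterministic) sampling can be sketched roughly as below. This is an illustrative standalone sketch, not the PR's code: `squared_distance`, `jump_start`, and the stride choice are all assumptions for demonstration. The walk would then proceed through the triangulation from the returned vertex.

```rust
// Hypothetical sketch of the "jump" step: sample every `stride`-th vertex
// (stepped rather than random, for determinism) and start the walk from the
// sampled vertex closest to the query point.

fn squared_distance(a: (f64, f64), b: (f64, f64)) -> f64 {
    let (dx, dy) = (a.0 - b.0, a.1 - b.1);
    dx * dx + dy * dy
}

/// Pick the starting vertex for the walk: the nearest of roughly
/// n^(1/3) evenly strided sample vertices (the asymptotically optimal
/// sample count in 2D, up to constant factors).
fn jump_start(vertices: &[(f64, f64)], query: (f64, f64)) -> usize {
    let n = vertices.len();
    let samples = ((n as f64).cbrt().ceil() as usize).max(1); // ~n^(1/(d+1)), d = 2
    let stride = (n / samples).max(1);
    (0..n)
        .step_by(stride)
        .min_by(|&a, &b| {
            squared_distance(vertices[a], query)
                .partial_cmp(&squared_distance(vertices[b], query))
                .unwrap()
        })
        .expect("vertex list must be non-empty")
}

fn main() {
    // A 32-wide grid of 1000 points as toy input.
    let vertices: Vec<(f64, f64)> = (0..1000)
        .map(|i| ((i % 32) as f64, (i / 32) as f64))
        .collect();
    let start = jump_start(&vertices, (10.2, 7.9));
    println!("walk starts from vertex {start}");
}
```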

@Stoeoef
Owner

Stoeoef commented Feb 21, 2025

Interesting - thank you for sharing this. I haven't heard about jump-and-walk before.

I think comparing it with the hierarchy triangulation is a little moot though, as the O(log(n)) cost will eventually always beat the O(n) runtime (unless I'm overlooking something?). Would you be interested in comparing it with LastUsedVertexGenerator?

I think if it beats that one for randomly distributed points, then jump-and-walk would have its niche and would be a nice addition.

@therealbnut
Author

therealbnut commented Feb 21, 2025

The runtime is actually expected time $O(n^{\frac{1}{d+1}})$, see here.

While it may not beat O(log(n)) in theory, in practice it has no preprocessing, no data-structure memory or maintenance overhead, and low constant factors, so it is often competitive.
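To illustrate where the $O(n^{\frac{1}{d+1}})$ bound above comes from: jumping over $m$ samples costs $O(m)$, and the walk from the nearest sample then visits an expected $O((n/m)^{1/d})$ simplices; balancing the two terms gives $m \propto n^{1/(d+1)}$. A quick numeric sketch (constant factors omitted; the `cost` function is illustrative, not library code):

```rust
// Idealized jump-and-walk cost: O(m) for the jump over m samples plus an
// expected O((n/m)^(1/d)) steps for the walk. Constant factors are omitted.
fn cost(n: f64, m: f64, d: f64) -> f64 {
    m + (n / m).powf(1.0 / d)
}

fn main() {
    let (n, d) = (1_000_000.0_f64, 2.0_f64);
    let m_opt = n.powf(1.0 / (d + 1.0)); // n^(1/(d+1)) = 100 samples for n = 10^6
    // The balanced sample count beats both under- and over-sampling:
    assert!(cost(n, m_opt, d) < cost(n, m_opt / 10.0, d));
    assert!(cost(n, m_opt, d) < cost(n, m_opt * 10.0, d));
    println!("samples: {m_opt}, cost: {}", cost(n, m_opt, d));
}
```

With $n = 10^6$ and $d = 2$ both terms come out to 100, i.e. total cost 200, while sampling ten times fewer or ten times more points is strictly worse.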

It’s used by QHull, Triangle, and CGAL, I think mostly because of its simplicity for dynamically maintained points, where it outperforms binning on real-world data with concentrated clusters of points.

I don’t think it will outperform last-used vertex for bulk insertion: after the sort, the last-used vertex is a cheaper and better starting estimate than the “jump” step, and sorting is a cheap operation for bulk insertion. I think this could outperform your current methods in some situations, but it would require more changes to the code to remove unneeded preprocessing work (like sorting vertices).

I mostly shared it because the code is so simple to implement and try, and there may have been some low-hanging fruit. However, more work (and a better understanding of your code base) is needed to give it a true comparison. If you have more need for dynamically updated points in future, this might be worth revisiting.
