triangulator optimization #723
Comments
Yeah, that sounds good. And actually, I think Delaunay cost should work too - we only care within a bounded region: Delaunay only matters for points closer to the edge midpoint than its half-length. So we should still be able to safely screen out more distant points with the BVH.
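To make the bounded-region claim concrete, here is a minimal sketch (not from the codebase) of the screening predicate it implies: a point can only affect the Delaunay condition for an edge if it lies inside the edge's diametral circle, i.e. closer to the edge midpoint than half the edge length. Anything failing this cheap test can be culled by a broad-phase BVH query before any exact in-circle predicate runs.

```cpp
#include <array>
#include <cmath>

using Vec2 = std::array<double, 2>;

// Returns true if point p is close enough to edge (a, b) to possibly
// affect its Delaunay condition: inside the edge's diametral circle.
// Squared distances are compared to avoid a sqrt.
bool MayAffectDelaunay(const Vec2& a, const Vec2& b, const Vec2& p) {
  const Vec2 mid{(a[0] + b[0]) / 2, (a[1] + b[1]) / 2};
  const double halfLen2 =
      ((b[0] - a[0]) * (b[0] - a[0]) + (b[1] - a[1]) * (b[1] - a[1])) / 4;
  const double dist2 =
      (p[0] - mid[0]) * (p[0] - mid[0]) + (p[1] - mid[1]) * (p[1] - mid[1]);
  return dist2 < halfLen2;
}
```

Points rejected here are exactly the "more distant points" the comment says can be screened out safely.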
By the way, I wonder if it's possible to move some of the sort-by-Morton-code logic into the collider.
Or into its own header file. Still, it's important to sort the actual underlying data structures, since that gives us cache coherence. So the Morton codes are for more than just the collider.
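For context, a standard 2D Morton (Z-order) encoding is just bit interleaving of quantized coordinates; the sketch below is illustrative, not the project's actual implementation. Sorting the underlying point arrays by this key places spatially nearby points near each other in memory, which is the cache coherence mentioned above, and the same keys can seed BVH construction in the collider.

```cpp
#include <cstdint>

// Spread the low 16 bits of v so they occupy the even bit positions.
inline uint32_t SpreadBits(uint32_t v) {
  v &= 0x0000FFFF;
  v = (v | (v << 8)) & 0x00FF00FF;
  v = (v | (v << 4)) & 0x0F0F0F0F;
  v = (v | (v << 2)) & 0x33333333;
  v = (v | (v << 1)) & 0x55555555;
  return v;
}

// Interleave x and y bits: x gets the even positions, y the odd ones.
inline uint32_t MortonCode2D(uint32_t x, uint32_t y) {
  return SpreadBits(x) | (SpreadBits(y) << 1);
}
```

Sorting point indices by `MortonCode2D` of their quantized coordinates, then permuting the underlying arrays to match, is the usual way to get both the cache-friendly layout and a good BVH build order from one pass.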
I know how to do triangulator optimization very well, but where should its source file be imported?
Adding a comment from #751:
This is starting to hurt for some OpenSCAD usage too. It's also surprisingly slow in debug mode (not very scientific, just measuring OpenSCAD end-to-end processing time).
Yeah, debug mode adds a lot of extra checks - we should probably go through and see how many are necessary. And you do mean just…

Yeah, we should get this done - it shouldn't even really be so difficult. What kind of example is this where you see the slowdown? Care to make a…
The polygon in question is #831 - might be a tad big to add as a performance test.
Thx - looks very promising so far!
The triangulator takes O(n^2) time, which performs poorly as the number of points in a polygon increases. For example, for the woodgrain svg we had for #502, there are around 60k points in the polygon. The old triangulator (not as robust) takes 35ms to triangulate the 60k points, while our current triangulator takes around 11500ms (11.5s!). We should try to use a BVH to reduce the number of points to check, to avoid quadratic complexity in the general case (for degenerate cases the BVH can't help much, I guess).

- Make `Collider` generic and support a 2D `Rect`. This is not necessary, but will probably help with performance and is not hard to do.
- Add a `Remove` method to `Collider` to support point removal. We don't have to care about rebalancing because the height is bounded (at least in terms of complexity it does not matter; not sure about that in practice, and rebalancing is costly).
- `DelaunayCost`: we can just categorize ears into different categories - definitely valid, potentially valid, definitely invalid - according to the max cost over query results.

@elalish what do you think?
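A minimal sketch of the proposed three-way ear classification, with illustrative names (`EarStatus`, `ClassifyEar`, and the `tolerance` parameter are assumptions, not identifiers from the codebase): after a BVH query returns only the nearby points, the maximum cost over those results decides whether the ear can be clipped immediately, needs an exact re-check, or cannot be clipped as-is.

```cpp
enum class EarStatus { DefinitelyValid, PotentiallyValid, DefinitelyInvalid };

// maxCostOverQuery: the largest cost among points returned by the
// broad-phase (BVH) query for this ear.
// tolerance: models the geometric epsilon used by the triangulator.
EarStatus ClassifyEar(double maxCostOverQuery, double tolerance) {
  if (maxCostOverQuery < -tolerance) return EarStatus::DefinitelyValid;
  if (maxCostOverQuery > tolerance) return EarStatus::DefinitelyInvalid;
  // Within tolerance: ambiguous, needs an exact check before clipping.
  return EarStatus::PotentiallyValid;
}
```

The benefit of the three-way split is that only the `PotentiallyValid` band pays for exact predicates; the other two categories are resolved by the cheap broad-phase query alone.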