Performance descriptions or wiki elaboration #58
See #57 for rough estimates of NanoRT's performance compared to Embree, but it is recommended that you measure the performance on your side (and sharing the results would be appreciated). As far as we know, there is no OSS ray tracing library other than NanoRT that supports double precision, so it is difficult to say how performant double-precision NanoRT is relative to alternatives (in most cases double-precision NanoRT is fast enough, though).
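For reference, double precision in NanoRT is selected through the template parameter on the mesh, predicate, and accelerator types. This is a minimal sketch following the usage pattern shown in the NanoRT README; the function name and the mesh arrays are placeholders for your own data.

```cpp
#include "nanort.h"

// Sketch: building a double-precision BVH over an indexed triangle mesh.
// `vertices` holds xyz triples and `faces` holds 3 indices per triangle;
// only the template parameter selects the precision.
bool BuildDoublePrecisionBVH(const double *vertices, const unsigned int *faces,
                             unsigned int num_faces,
                             nanort::BVHAccel<double> *accel) {
  nanort::TriangleMesh<double> mesh(vertices, faces, sizeof(double) * 3);
  nanort::TriangleSAHPred<double> pred(vertices, faces, sizeof(double) * 3);
  nanort::BVHBuildOptions<double> options;  // default build options
  return accel->Build(num_faces, mesh, pred, options);
}
```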
Thanks, that was what I was looking for. Is the 3-4x slower figure referring to the double-precision calculations? If so, it seems like a reasonable expectation compared to Embree for double precision.
@cadop 3-4x is for single precision.
I am still working on some more tests and checking whether I can improve the way I implemented nanoRT, but here are my results so far in my own use case (mostly posting for reference for others, but also in case the numbers do or don't make sense to you; they should not be taken as a decisive metric). nanoRT is using doubles, and Embree is of course using floats. Times are only for the raycast loop (the timer is started after the BVH is created). Using a model with ~1000 vertices, 40,000 rays cast in a loop (single core):
Using a model with 320,068 vertices, 40,000 rays cast in a loop (single core):
Same model as above, but with 360,000 rays. (I expanded the grid used to cast rays, so some more of them may be missing/hitting compared to the previous case.)
Using a model with ~1,000,000 vertices, 40,000 rays cast in a loop (single core):
So, assuming I haven't messed up my integration, it seems like the model size is having a much bigger impact on performance than increasing the number of rays. Would this suggest the performance difference is really about BVH efficiency more than the double precision?
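In case it helps interpret the numbers above, this is roughly the shape of the measurement described: the BVH is built beforehand, and only the per-ray traversal loop is timed on a single core. The function name and the ray array are placeholders; the NanoRT calls follow the README usage.

```cpp
#include <chrono>
#include "nanort.h"

// Sketch of the timing loop described above: the BVH is already built,
// and only the per-ray Traverse() calls are measured. Returns elapsed
// milliseconds. Ray origins/directions come from a caller-provided array
// (placeholder for the actual ray grid).
double TimeRaycasts(nanort::BVHAccel<double> &accel,
                    const double *vertices, const unsigned int *faces,
                    const nanort::Ray<double> *rays, size_t num_rays) {
  nanort::TriangleIntersector<double> intersector(vertices, faces,
                                                  sizeof(double) * 3);
  size_t hits = 0;
  auto t0 = std::chrono::high_resolution_clock::now();
  for (size_t i = 0; i < num_rays; i++) {
    nanort::TriangleIntersection<double> isect;
    if (accel.Traverse(rays[i], intersector, &isect)) {
      hits++;  // keep a side effect so the loop is not optimized away
    }
  }
  auto t1 = std::chrono::high_resolution_clock::now();
  (void)hits;
  return std::chrono::duration<double, std::milli>(t1 - t0).count();
}
```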
One of the performance impacts would be memory bandwidth. Embree additionally uses quantized bounding boxes for its BVH (at least for curve primitives) to reduce the memory footprint. NanoRT always uses full double-precision bounding boxes for the BVH when the double-precision type is used. Also, there is room for a more efficient BVH build in NanoRT, especially by implementing spatial split BVH: #15
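To illustrate why quantization matters for memory bandwidth (this is a generic sketch, not Embree's actual node layout), child bounds can be stored as 8-bit offsets relative to the parent box, shrinking a double-precision AABB from 48 bytes to 6:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>

// Illustration only: quantize a child AABB into 8-bit coordinates relative
// to its parent bounds. The rounding is chosen so the quantized box always
// conservatively contains the original box.
struct QuantizedAABB {
  uint8_t lo[3], hi[3];
};

QuantizedAABB Quantize(const double parent_min[3], const double parent_max[3],
                       const double box_min[3], const double box_max[3]) {
  QuantizedAABB q;
  for (int a = 0; a < 3; a++) {
    const double extent = parent_max[a] - parent_min[a];
    const double scale = (extent > 0.0) ? 255.0 / extent : 0.0;
    q.lo[a] = static_cast<uint8_t>(
        std::max(0.0, std::floor((box_min[a] - parent_min[a]) * scale)));
    q.hi[a] = static_cast<uint8_t>(
        std::min(255.0, std::ceil((box_max[a] - parent_min[a]) * scale)));
  }
  return q;
}
```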
I did some initial performance comparisons between tiny_bvh and nanort; see the tiny_bvh repo, speedtest application. The findings are strange: for a 150k triangle model, nanort is about three orders of magnitude slower. Am I doing something wrong in my nanort integration? I am closely following the approach shown in the obj viewer example.
I looked at the readme and wiki, but I don't think performance is really covered that much beyond some mentions of "efficient ray intersection finding". Would it be possible to elaborate on the performance characteristics of nanort? I found nanort through the issue about Embree not supporting double precision. One of the reasons I was starting with Embree was their paper on its high performance; however, on the scientific computing side, accuracy is also important.
Are there any benchmarks, or even rough expectations, for the difference between a single ray-triangle intersection query against the BVH in nanort compared to Embree or other raytracers?