gltfpack: Double precision support for node translation values #583
Comments
Can you observe these differences during rendering? gltfpack and cgltf, the library it uses, represent everything internally with single-precision floating point numbers, and the values here require double precision to round-trip fully. The reason I ask whether this affects rendering is that renderers typically also end up truncating translations to single precision at some point during loading or transformation, although it's certainly possible to build a renderer that doesn't.
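To make the precision loss concrete, here is a small sketch (the coordinate value is invented for illustration, not taken from the attached file) of what happens when an Earth-scale translation component is stored as a 32-bit float:

```python
import struct

def to_f32(x: float) -> float:
    """Round-trip a Python float (64-bit) through 32-bit float storage."""
    return struct.unpack('f', struct.pack('f', x))[0]

# Hypothetical ECEF-scale translation component (~4500 km from the origin).
tx = 4517590.878906831
stored = to_f32(tx)

# In the range [2^22, 2^23) a float32 can only represent multiples of 0.5,
# so the fractional part of the translation is rounded away.
print(stored)            # 4517591.0
print(abs(stored - tx))  # ~0.12 m of round-off error
```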
Yes, I think there will be problems when rendering. Version 1.1 of the 3D Tiles specification says "The RTC_CENTER can be added to the translation component of the root node of the glTF asset," so all viewers that intend to support loading of 3D Tiles must support this way of offsetting back to the original coordinates. I have only tested this in CesiumJS, but it works fine there: no jittering, which you would otherwise expect with large coordinates like these. When producing a 3D tile, a common approach is to translate the asset from its original world coordinates to the origin. In earlier versions of 3D Tiles you would then store the offset back to real-world coordinates in CESIUM_RTC or RTC_CENTER; this has now been replaced by using the translation component of the root node instead. So imagine having multiple tiles, each with its own offset: without full precision there will be gaps between the tiles once they have been offset back to the original coordinates.
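To illustrate the gap problem described above, here is a hedged sketch (all coordinate values are invented for illustration) of two adjacent tiles whose world-space offsets get truncated to float32:

```python
import struct

def f32(x: float) -> float:
    """Round-trip a double through 32-bit float storage."""
    return struct.unpack('f', struct.pack('f', x))[0]

# Two hypothetical adjacent tiles that share an edge at world x = 4,517,600.2 m.
boundary = 4517600.2
offset_a = 4517500.3  # tile A's offset (root-node translation)
offset_b = 4517600.2  # tile B's offset

# The shared-edge vertex expressed in each tile's local space stays small
# and precise; the offsets carry the large magnitudes.
local_a = boundary - offset_a  # ~99.9
local_b = boundary - offset_b  # 0.0

# Once the offsets are truncated to float32, the two edges no longer meet.
gap = (f32(offset_a) + local_a) - (f32(offset_b) + local_b)
print(f"gap between tile edges: {gap:.3f} m")  # ~0.4 m at this magnitude
```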
It's a little odd that RTC_CENTER is specified as float32[3], as that implies that 32-bit floating point precision should be sufficient. I'm also curious whether the tiles need to be quantized to a specific grid where the offsets can be represented without round-off error, or if the intent is really for the root node to have what looks like nanometer-level precision... At any rate, I'd be interested in a full end-to-end example where multiple tiles get combined, to see whether this causes gaps or precision issues in the final render in the viewer. It's not clear to me if the problem is limited to the root node translation or if there are other components that would need higher precision during processing; changing this in cgltf is possible, but it's not obvious how much to change, as changing all floats to doubles is probably not a great idea. Also noting that you're using
@sweco-sekrsv If you are in control of the per-tile origin, you can make it float32-friendly early in your process using something like this. (Of course, the positions in your glTF should be relative to this new origin instead.)
I've taken this approach to accommodate tools in the ecosystem beyond my control. Even if the glTF specification gave clear guidance re: precision, you'd still find a mix of 32- and 64-bit matrix types in various glTF libraries/tools. I don't mean to say what meshoptimizer should do here re: precision, just that I found it best to build data that works in as many tools as possible. Hope this helps.
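As a concrete sketch of the float32-friendly-origin workaround described above (function and variable names are mine, not from any of the tools mentioned): snap the per-tile origin to the nearest float32-representable value early in the pipeline, and fold the tiny residual into the vertex positions, which stay near zero and keep full precision:

```python
import struct

def snap_to_f32(x: float) -> float:
    """Return the nearest value exactly representable as a 32-bit float."""
    return struct.unpack('f', struct.pack('f', x))[0]

# Hypothetical tile origin in world coordinates (values invented).
origin = [4517590.878906831, 557884.123456, 4403180.987654]

# The snapped origin survives any float32 round-trip (gltfpack, viewers, ...).
snapped = [snap_to_f32(c) for c in origin]

# The sub-meter residual is folded into the (small) vertex positions instead.
residual = [o - s for o, s in zip(origin, snapped)]
vertices = [[1.0, 2.0, 3.0]]  # toy local-space vertex
rebased = [[v + r for v, r in zip(vert, residual)] for vert in vertices]
```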
As @richard mentioned, this can be made float32-friendly before running gltfpack. Thank you for your suggestion, it works fine! However, I can still see tiny gaps when using quantized files (removing the -noq option), even if I choose -vp 16. I have attached a new example that shows this.
Using -vpf seems to fix the issue, so I can still benefit from quantization. So my flags for the working dataset are:
@zeux this is something I've just bumped into. I have a file that is "centered" via a root node that is very far from the origin, for example. The babylonjs viewer displays the original file correctly since that issue was resolved by using double precision for the matrices. However, after I process this file through gltfpack, element positions are noisy and the model appears broken. I'm not sure there's any way around needing the transforms to be handled internally with double precision. Thoughts?
Does this happen when

I haven't had time to look into the aforementioned quantization precision issues, but in general the first step here would be to somehow add double-precision support for transforms to https://github.com/jkuhlmann/cgltf without forcing the entire library to use double precision.
Thanks for the quick response @zeux. It does happen with

If you upload this glTF to the Babylon viewer you should see two cubes perfectly aligned with one another. After processing them with gltfpack, the cubes overlap.
Ok - yeah, the attached scene requires full double precision to support. I filed jkuhlmann/cgltf#228 for now. Short of the previously discussed workarounds in this thread where the objects are stored relative to a cleanly roundtrippable origin (e.g. if you want to keep objects around 7M meters away from the origin, parent them to a node that has a translation that is ~7M but representable as a 32-bit floating point number), the only route to support scenes like this without transform precision loss is to use double precision. |
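The parenting workaround above can be sketched like this (a minimal illustration with invented values, not gltfpack functionality): split the large translation into a float32-representable parent node plus a small residual child node, so their sum reproduces the original double-precision translation:

```python
import struct

def f32(x: float) -> float:
    """Nearest float32-representable value."""
    return struct.unpack('f', struct.pack('f', x))[0]

def split_translation(t):
    """Split a double translation into a float32-exact parent part and a
    small child residual; parent + child recovers t in double precision."""
    parent = [f32(c) for c in t]
    child = [c - p for c, p in zip(t, parent)]
    return parent, child

# Hypothetical object ~7M meters from the origin (values invented).
t = [7000000.123456789, 0.0, -3.25]
parent, child = split_translation(t)

# The parent survives float32 round-trips; the child stays so small that
# float32 keeps ~1e-8 relative precision on it (nanometers here).
assert parent == [f32(c) for c in parent]
assert all(p + c == x for p, c, x in zip(parent, child, t))
```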
@zeux thank you for creating that ticket. I'm not sure we can make both the root and children all cleanly round-trippable floating point numbers. Aside from getting that fix into cgltf, I think our next best option would be to collapse the transforms ourselves before we get to gltfpack.
@zeux my colleague made a pass at adding double support to cgltf in jkuhlmann/cgltf#229; I'd be curious what you think the next steps should be. We'd be happy to contribute a change to meshoptimizer, but I think we're a little less clear on the scope.
I think I ran into the same problem when trying to optimize avatars from the 100avatars.com website; there were gaps and aliasing in some of the models after optimizing their meshes. I see an alternative was proposed in jkuhlmann/cgltf#239, but it's not clear to me that a PR based on that suggestion was ever merged. Is that upstream package fix still a possibility?
Hi!
I noticed that the translation values of my matrix lack the original precision after gltfpack has been applied.
I'm using these options when compressing the asset (gltfpack 0.19): -noq -cc -kn
My original matrix is:
The matrix after gltfpack:
Is this possibly a bug, or can I solve this somehow with the settings? The asset being compressed is to be used as an asset in 3D Tiles, and the translation values in the matrix are important for moving the asset to the correct real-world location.
I attach an example.
matrix_precision.zip