Replies: 1 comment
-
Sorry, this is not possible with Meshroom as it is.
-
Hello.
I am developing an application where everything is based on a 3D model, which is actually a scan of an object. The scan is already done and I don't need to modify it. The difficult part is that I want to take N photos of the object and map those photos onto it as textures. Let me explain:
Imagine I have a scan of a car. One day I take a thermal photo of that car and want to preview, on the region of the car covered by the 2D photo, the thermal image wrapped onto the model as a texture.
There are three options for this:
1. Is there any way that, starting from a 3D model already textured with the object's RGB albedo, the photo can be analysed and mapped as a texture onto the region of the object it is looking at?
2. If not, is there any way to assign N control points (pairs of 2D image points and 3D model points) so that the texture is pasted onto the 3D model respecting those imposed correspondences?
3. Finally, if not, is there any way, given the 3D model and the photo, to recover the position and rotation of the camera relative to the object? In that case I do have a solution in mind (a rough sketch of this idea follows below).
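For reference, options 2 and 3 essentially describe a PnP (perspective-n-point) problem, which is not Meshroom-specific. Below is a minimal sketch, assuming OpenCV and NumPy are available: it recovers the camera pose from N 2D-3D control points and then projects a mesh vertex into the photo to derive texture coordinates. All numeric values, the intrinsics matrix `K`, and the vertex are made-up placeholders, not anything from this thread.

```python
# Minimal sketch of options 2 and 3: recover the camera pose from N 2D-3D
# control points (PnP), then project mesh vertices into the photo to get
# texture coordinates for the visible region.
# Assumptions: OpenCV (cv2) and NumPy installed; all values are placeholders.
import numpy as np
import cv2

# 3D control points on the scanned model (object coordinates, e.g. metres).
object_points = np.array([
    [0.00, 0.00, 0.00],
    [0.30, 0.00, 0.05],
    [0.30, 0.20, 0.00],
    [0.00, 0.20, 0.05],
    [0.15, 0.10, 0.10],
    [0.10, 0.05, 0.02],
], dtype=np.float64)

# Pinhole intrinsics of the (thermal) camera; these must be calibrated first.
K = np.array([[800.0,   0.0, 320.0],
              [0.0,   800.0, 240.0],
              [0.0,     0.0,   1.0]])
dist = np.zeros(5)  # assume negligible lens distortion for this sketch

# For a self-contained demo, synthesise the matching 2D pixel points from a
# known pose; in practice these would be clicked by the user in the photo.
rvec_true = np.array([[0.1], [0.2], [0.05]])
tvec_true = np.array([[0.0], [0.0], [1.0]])
image_points, _ = cv2.projectPoints(object_points, rvec_true, tvec_true, K, dist)
image_points = image_points.reshape(-1, 2)

# Option 3: camera rotation/position relative to the object.
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
R, _ = cv2.Rodrigues(rvec)
camera_center = (-R.T @ tvec).ravel()  # camera position in object coordinates
print("camera center:", camera_center)

# Option 2: with the pose known, any mesh vertex can be projected into the
# photo; its pixel coordinates (normalised by the image size) act as UVs into
# the thermal image. Occlusion/visibility still needs a separate check,
# e.g. by ray casting against the mesh.
vertex = np.array([[0.12, 0.08, 0.04]])  # placeholder mesh vertex
uv_px, _ = cv2.projectPoints(vertex, rvec, tvec, K, dist)
print("projected pixel:", uv_px.ravel())
```

With enough well-spread control points the recovered pose can then drive a projective texture mapping of the photo onto the scan; this is only an illustration of the idea, not a Meshroom feature.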
Thank you.
Translated with DeepL.com (free version)