Adding dimensionDst in copyTextureToTexture #30420

Open
Makio64 opened this issue Jan 29, 2025 · 9 comments

Comments

Makio64 (Contributor) commented Jan 29, 2025

Description

Add a dstDimension parameter to copyTextureToTexture to make dynamic texture drawing easier:

renderer.copyTextureToTexture( srcTexture, dstTexture, srcRegion, dstPosition, dstDimension, level )

Solution

Add dstDimension after dstPosition, or after level (to not break existing code).
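
A minimal sketch of how the proposed call could look, assuming dstDimension is a Vector2 giving the size the copied region should be drawn at (the parameter is hypothetical and not in the current API):

```js
import * as THREE from 'three';

// Hypothetical usage if dstDimension were added (not part of the current API).
const srcRegion = new THREE.Box2(
	new THREE.Vector2( 0, 0 ),
	new THREE.Vector2( 512, 512 )       // copy the full 512x512 illustration
);
const dstPosition = new THREE.Vector2( 64, 64 );    // where to place it in dstTexture
const dstDimension = new THREE.Vector2( 128, 128 ); // size it should be drawn at

renderer.copyTextureToTexture( srcTexture, dstTexture, srcRegion, dstPosition, dstDimension, 0 );
```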

Alternatives

I guess the current workaround is to resize the texture before drawing it?

Additional context

No response

gkjohnson (Collaborator) commented:

Is there a reason you need this functionality in copyTextureToTexture vs performing a render using a full screen quad? The copyTextureToTexture function copies data directly from the CPU to the target region in the case where a texture isn't already available on the GPU. The textures cannot be downsampled in that case, so this won't be easy to support in the general case.
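
For reference, a minimal sketch of the render-quad approach mentioned above, assuming srcTexture should be drawn at an arbitrary position and size into dstRenderTarget (all names here are illustrative, not part of any existing API):

```js
import * as THREE from 'three';

// Draw srcTexture into a render target at an arbitrary position and size
// by rendering a textured quad instead of copying texels directly.
const scene = new THREE.Scene();
const camera = new THREE.OrthographicCamera( 0, dstWidth, dstHeight, 0, - 1, 1 );

const quad = new THREE.Mesh(
	new THREE.PlaneGeometry( 1, 1 ),
	new THREE.MeshBasicMaterial( { map: srcTexture } )
);

// Position and scale in destination pixel coordinates: a 128x128 copy
// whose lower-left corner sits at (64, 64).
quad.scale.set( 128, 128, 1 );
quad.position.set( 64 + 64, 64 + 64, 0 );
scene.add( quad );

renderer.autoClear = false; // keep whatever is already in the target
renderer.setRenderTarget( dstRenderTarget );
renderer.render( scene, camera );
renderer.setRenderTarget( null );
```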

Makio64 (Contributor, Author) commented Jan 29, 2025

@gkjohnson: For example, I have an illustration and I want to draw it at different locations and at different sizes into another texture.

The problem is that if I can't specify the destination dimensions, the copies are drawn at the same size every time, so I have to load the illustration at several sizes or, as you say, do separate quad renders. I think this could easily be avoided with dstDimension, which seems to be a feature included in WebGPU when I check the docs.

I think this case doesn't apply to a DataTexture like in the current example, but rather to a loaded, initialized texture, and it would be great if it were supported :)

gkjohnson (Collaborator) commented:

For example, I have an illustration and I want to draw it at different locations and at different sizes into another texture.

Right, but I'm trying to understand why using a render quad to render the texture (which you can scale) is not good enough for your use case? Or using sprites / points? It doesn't seem like anything you've described can't already be done. At some point the change you're suggesting is just doing almost the same thing internally.

I think this could easily be avoided with dstDimension, which seems to be a feature included in WebGPU when I check the docs.

Can you explain this more?

I think this case doesn't apply to a DataTexture like in the current example, but rather to a loaded, initialized texture, and it would be great if it were supported :)

The limitations I listed don't just apply to data textures - they apply to any texture that has not been uploaded to the GPU already.

Makio64 (Contributor, Author) commented Jan 30, 2025

Yeah, I actually did it with a quad and a render target, but I thought there was a simpler / more efficient way to do it in WebGPU.

Can you explain this more?

I was checking the docs here, but maybe I'm mistaken and it's not as easy to implement as I think: https://developer.mozilla.org/en-US/docs/Web/API/GPUCommandEncoder/copyTextureToTexture#copy_texture_object_structure

The limitations I listed don't just apply to data textures - they apply to any texture that has not been uploaded to the GPU already.

Thanks for clarifying. In my case all textures are initialized using renderer.initTexture, but I think what I'm doing in this particular project is a rare case.

gkjohnson (Collaborator) commented:

https://developer.mozilla.org/en-US/docs/Web/API/GPUCommandEncoder/copyTextureToTexture#copy_texture_object_structure

That function doesn't look like it lets you specify a source and a destination resolution - just a single argument for the dimensions of the data being copied, other than the mip level, which is already supported by three.js' version. I understand the convenience, but I don't think there's much, if any, efficiency to be gained by adding a scale capability to the copyTextureToTexture function.
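
For reference, this is roughly what the raw WebGPU call looks like: one copySize shared by source and destination, so the copy moves texels but cannot rescale them (a sketch, assuming device, srcTexture and dstTexture are existing GPUDevice / GPUTexture objects):

```js
// Raw WebGPU copy: a single copySize applies to both source and destination,
// so the copied region keeps its dimensions - no scaling is possible here.
const encoder = device.createCommandEncoder();

encoder.copyTextureToTexture(
	{ texture: srcTexture, mipLevel: 0, origin: { x: 0, y: 0, z: 0 } },   // source
	{ texture: dstTexture, mipLevel: 0, origin: { x: 64, y: 64, z: 0 } }, // destination
	{ width: 256, height: 256, depthOrArrayLayers: 1 }                    // single copySize
);

device.queue.submit( [ encoder.finish() ] );
```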

gkjohnson (Collaborator) commented:

If we do wind up wanting to add this, I could see it working like the following:

  • Change the "dstPosition" parameter to a "dstRegion" parameter (take a Box2 or Box3), similar to "srcRegion".
  • If the regions are different sizes, then use blitFramebuffer with the src texture's scaling filter (a WebGL2 sketch follows below).
  • Throw an error if the src texture has not been uploaded to the GPU yet and the src / dst sizes are different.

For WebGLRenderer, at least, the changes here may be fairly small.
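
A minimal WebGL2 sketch of that blitFramebuffer path, assuming both textures already live on the GPU (gl, srcGLTexture and dstGLTexture are illustrative names, not three.js internals):

```js
// Scaled GPU-to-GPU copy with WebGL2: attach both textures to framebuffers,
// then blit between differently sized rectangles.
const readFbo = gl.createFramebuffer();
const drawFbo = gl.createFramebuffer();

gl.bindFramebuffer( gl.READ_FRAMEBUFFER, readFbo );
gl.framebufferTexture2D( gl.READ_FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, srcGLTexture, 0 );

gl.bindFramebuffer( gl.DRAW_FRAMEBUFFER, drawFbo );
gl.framebufferTexture2D( gl.DRAW_FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, dstGLTexture, 0 );

// The source and destination rectangles differ, so the blit rescales the copy.
gl.blitFramebuffer(
	0, 0, 512, 512,    // srcRegion
	64, 64, 192, 192,  // dstRegion (128x128, a downscaled copy)
	gl.COLOR_BUFFER_BIT,
	gl.LINEAR
);

gl.bindFramebuffer( gl.READ_FRAMEBUFFER, null );
gl.bindFramebuffer( gl.DRAW_FRAMEBUFFER, null );
```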

Makio64 (Contributor, Author) commented Jan 30, 2025

@gkjohnson That sounds perfect, and I think this feature fits well into three.js.

@Mugen87 @mrdoob @RenaudRohlinger, what's your vision on this?

Mugen87 (Collaborator) commented Jan 30, 2025

What @gkjohnson has suggested sounds good to me.

gkjohnson (Collaborator) commented:

@Makio64 if you'd like to make a PR, we can provide feedback and get it merged.
