
Questions about LPV injection and DX12 #8

Open
Xiyinyue opened this issue Jul 25, 2023 · 4 comments

Comments

@Xiyinyue

Good morning! It's me again, and this time I have these questions...

Question 1. Is

auto sample = reinterpret_cast<DXRSExampleGIScene*>(GetWindowLongPtr(hWnd, GWLP_USERDATA));

inside LRESULT CALLBACK WndProc(HWND hWnd, UINT message, WPARAM wParam, LPARAM lParam) meaningful?

The gsample variable is already global, so I can only guess the intent is to be more object-oriented and to make the code more portable.

Question 2. To generate the shadow map

we have a VSOnlyMain function, and I can't understand result.z *= result.w;
I guess the hardware does xyz/w for us after we output the float4.
To be honest, I don't know in which situations xyz/w is applied. I guess it is done when the output uses SV_Position instead of SV_Target0?

Question 3. The LPV injection

What do we do? We put all RSM texels into the LPV volume; in this program, 2048x2048 texels go into a 32x32x32 volume.
Obviously there are not enough cells, and this leads to old values in a cell being overwritten by new values, because more than one texel must land in the same cell.
So I did the same thing you do in propagation: blend! But the result is not ideal:
there is no indirect light!
Please see the comparison; the bottom one is your correct result, the top one is my wrong one.
[Screenshot: QQ截图20230725124538]
[Screenshot: QQ截图20230725124611]
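For reference, the usual fix for exactly this overwrite is to let the output-merger accumulate: one point is drawn per RSM texel, and the SH render targets are bound with additive blending (D3D12_BLEND_ONE / D3D12_BLEND_ONE), so texels that land in the same cell add up instead of replacing each other. A minimal sketch of the injection pixel shader under that setup; the input layout, semantic names, and the SH cosine-lobe helper are assumptions, not this project's exact code:

struct InjectionPSIn
{
    float4 cellPos : SV_Position; // target texel inside the current LPV slice
    float3 normal  : NORMAL;      // VPL normal read from the RSM
    float3 flux    : FLUX;        // VPL flux read from the RSM
};

// Clamped cosine lobe around n, projected to 2-band SH (the constants
// commonly used in LPV implementations).
float4 SHCosineLobe(float3 n)
{
    return float4(0.8862f, -1.0233f * n.y, 1.0233f * n.z, -1.0233f * n.x);
}

// Red channel; the green/blue targets are analogous. With additive blending
// bound, every texel that falls into this cell contributes instead of
// overwriting the previous value.
float4 PSInjectRed(InjectionPSIn input) : SV_Target0
{
    return SHCosineLobe(input.normal) * input.flux.r;
}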

Misc: if you don't have time, you can skip this!

Thank you for your project. Without it I would have broken down; Chinese tutorials for DX12 are too rare.
I also chose DX12 instead of OpenGL by mistake. I think beginners should choose OpenGL, because where I am, undergraduates don't have the time DX12 demands; it's too hard, I think. And to find a job you mostly just need OpenGL, which gives you more options... I still have no offer T_T
I will also write a tutorial about your project for all DX12 novices; I think it may make things easier for them to learn.
Finally, thank you very much, and I'm sorry to take up your time.

@Xiyinyue
Author

I have figured out question 2: SV_Position triggers the hardware xyz/w divide, and perspective projection needs xy's scale to depend on z; since a different setup is used here, z must not be over-divided, which is why it is pre-multiplied by w.
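A minimal sketch of the cancellation being described; the constant-buffer and matrix names are assumed, not the project's exact code:

cbuffer PerObject : register(b0)
{
    float4x4 LightViewProjection; // assumed name
};

float4 VSOnlyMain(float3 position : POSITION) : SV_Position
{
    float4 result = mul(float4(position, 1.0f), LightViewProjection);

    // The hardware divides SV_Position.xyz by w after this shader runs.
    // Pre-multiplying z by w makes that divide give back the original z,
    // so the depth written to the shadow map is the value computed here,
    // while x and y still receive the perspective divide they need.
    result.z *= result.w;
    return result;
}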

@steaklive
Owner

steaklive commented Aug 4, 2023

Hello @Xiyinyue ! Sorry that it took me so long to reply, I was on vacation:)

  1. We want to process various win events, and our DXRSExampleGIScene sort of serves as the window (see the SetWindowLongPtr calls in DXRS.cpp). GetWindowLongPtr returns the user data we stored for the current window, which we need to cast back to our object so that we can interact with it when processing events. I think reinterpret_cast is a valid approach here... But in any case, I am not a Win32 programming expert; I just thought this type of setup was good enough for this project.
  2. You answered this one yourself above.
  3. It's hard to tell what might be wrong in your version. I would debug the pixel shaders of that pass side by side in RenderDoc and compare the values, etc.

P.S. Thanks for the kind words! I never thought anyone would be interested in this project, to be honest:) However, I am still doubtful about this project being a good DX12 learning resource, as there are many other better projects out there... But it was indeed my playground when I started with DX12 and DXR.

In general, I think if you want to focus on some visual/rendering techniques, you might want to consider the easiest API possible (i.e., DX11 or OpenGL if you do not need hardware raytracing) or even a ready framework/engine. If you want to focus on learning new APIs, then visual/graphics stuff should not be that important in my opinion but building the architecture from scratch will be. Doing both at the same time is tricky and might require way more time to finish.

@Xiyinyue
Author

Xiyinyue commented Aug 4, 2023

Hello! In my ongoing learning, I think I have found the LPV's problem.

This may be worrying unnecessarily, because rendering just needs something that looks right rather than something physically right.
We just need an implementation; if you are interested in how to improve it, or want to know what defies common sense, consider what I am saying below. If you are not interested, just skip the LPV part.

  1. The original LPV from CryEngine uses a physically wrong approach, because it does not conform to the law of conservation of energy.
    If a voxel has propagated its radiance, its own radiance should then be lower than before, and its neighbors' higher; that is what physics does.
  2. Another reason, specific to your implementation, is that propagation does not read the previous iteration's 3D texture but always the primary (injected) texture, so photons do not travel farther with each propagation pass; see the sketch after this list. By common sense, that may be wrong.
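A minimal sketch of what point 2 suggests instead: "ping-ponging" between two volumes so pass N reads the output of pass N-1 and light moves one more cell each dispatch. All names here are assumed placeholders, not the project's, and a full LPV pass would also reproject each neighbor's SH lobe through the shared face:

Texture3D<float4>   LPVPrev : register(t0); // result of the previous propagation pass
RWTexture3D<float4> LPVNext : register(u0); // output of this pass

static const float PropagationFactor = 1.0f / 6.0f; // assumed per-neighbor attenuation

static const int3 NeighborOffsets[6] =
{
    int3( 1, 0, 0), int3(-1, 0, 0),
    int3( 0, 1, 0), int3( 0,-1, 0),
    int3( 0, 0, 1), int3( 0, 0,-1)
};

[numthreads(8, 8, 8)]
void CSPropagate(uint3 cell : SV_DispatchThreadID)
{
    // Gather from the 6 face neighbors of this cell.
    float4 gathered = 0.0f;

    [unroll]
    for (int i = 0; i < 6; ++i)
        gathered += LPVPrev[int3(cell) + NeighborOffsets[i]];

    // Because the input is the previous pass, not the injected volume,
    // radiance spreads farther with every dispatch; the CPU side swaps
    // LPVPrev/LPVNext between passes.
    LPVNext[cell] = LPVPrev[cell] + gathered * PropagationFactor;
}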

The VXGI part

Here are some questions confusing me:

  1. It's the same problem I met in the LPV:
    during voxelization, new values overwrite older values, and I haven't found a good way to solve it.
  2. I can't work out while (dist < MaxConeTraceDistance && color.a < 0.9f).
    It looks very reasonable: limiting the number of trace steps with MaxConeTraceDistance is normal,
    and color.a < 0.9f is normal because we don't want the accumulated light value to get too large.
    But where do AO and color.a actually change?
    All right, I found float3 weight = direction * direction;
    and
float4 anisoSample = 
    weight.x * ((posX) ? voxelTexturePosX.SampleLevel(LinearSampler, uv, anisoLevel) : voxelTextureNegX.SampleLevel(LinearSampler, uv, anisoLevel)) +
    weight.y * ((posY) ? voxelTexturePosY.SampleLevel(LinearSampler, uv, anisoLevel) : voxelTextureNegY.SampleLevel(LinearSampler, uv, anisoLevel)) +
    weight.z * ((posZ) ? voxelTexturePosZ.SampleLevel(LinearSampler, uv, anisoLevel) : voxelTextureNegZ.SampleLevel(LinearSampler, uv, anisoLevel));

We sample float4 values whose alpha is 1,
but after multiplying by weight.x/y/z the combined alpha may be a value below 1, so the termination target is only approached implicitly.
What I want to ask is: in what situation do we trace more steps because color.a < 0.9f, and does our AO then change more?

Let's look at the first direction used while tracing,
(0.0f, 1.0f, 0.0f):
we trace along the original normal direction,
and it gets a color.a value of 1.
With color += (1.0 - color.a) * voxelColor;
later samples then no longer contribute to the color.
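For reference, here is how the two loop conditions interact in a minimal front-to-back cone march. The constants and the sampling helper are assumptions (a single pre-filtered volume stands in for the six-direction weighted sample quoted above), not the project's exact code. A cone that immediately hits dense voxels saturates color.a and exits after a few steps; a cone pointing into empty space keeps alpha near 0 and marches all the way to MaxConeTraceDistance:

static const float VoxelSize            = 0.25f;  // assumed world-space voxel size
static const float MaxConeTraceDistance = 100.0f; // assumed limit from the loop condition
static const float WorldVolumeHalfSize  = 50.0f;  // assumed world-to-volume mapping

Texture3D<float4> VoxelTexture  : register(t0);
SamplerState      LinearSampler : register(s0);

// Simplified stand-in for the anisotropic lookup; the direction-dependent
// weighting from the snippet above is omitted here for brevity.
float4 SampleAnisotropicVoxel(float3 worldPos, float3 direction, float level)
{
    float3 uvw = worldPos / (2.0f * WorldVolumeHalfSize) + 0.5f;
    return VoxelTexture.SampleLevel(LinearSampler, uvw, level);
}

float4 TraceCone(float3 origin, float3 direction, float aperture)
{
    float4 color = 0.0f;
    float  dist  = VoxelSize; // start one voxel out to avoid self-sampling

    while (dist < MaxConeTraceDistance && color.a < 0.9f)
    {
        // The cone widens with distance, so sample an increasingly coarse mip.
        float diameter = max(VoxelSize, 2.0f * aperture * dist);
        float level    = log2(diameter / VoxelSize);

        float4 voxelColor = SampleAnisotropicVoxel(origin + dist * direction, direction, level);

        // Front-to-back compositing: empty space (alpha 0) adds nothing, so the
        // march keeps going; dense voxels push color.a toward 1, the (1 - color.a)
        // factor shrinks, and the alpha test above ends the loop early.
        color += (1.0f - color.a) * voxelColor;
        dist  += diameter * 0.5f;
    }

    return color;
}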

You may not follow what I am saying.
I just want to know what we do to control the number of tracing steps and why it works.
For example, does a direction far from the normal trace more steps, with AO changing more?
I'd like you to give an example of a situation that traces more steps, and why it traces more in that situation; does it depend on some physical law, or on some trick?

I will try to find some way to debug the voxels...
RenderDoc gives me a black texture when I check it; I think I should read its tutorials.

Hoping that you had a happy vacation!
I am sorry the formatting is not so easy to read, and I don't know your reading preferences.
My English is not so good; I apologize if any error confuses you.

@Xiyinyue
Author

Xiyinyue commented Aug 4, 2023

The LPV is not quite right, so what I mean is: when it looks OK, then it is OK; there is no need to care too much.
