DXGI ERROR: IDXGISwapChain::GetContainingOutput: The swapchain's adapter does not control the output on which the swapchain's window resides. #865
Comments
The advanced graphics settings UI can change the default adapter. There are also app opt-in mechanisms: both of the exports mentioned here achieve the same behavior on a standard dual-GPU laptop. The mechanism is to "reparent" the monitors from the GPU they are physically connected to onto the desired target GPU, since apps in the ecosystem expect DXGI adapter 0 to have monitors, and specifically the monitor designated as "primary" / located at desktop coordinates (0, 0). Cross-GPU fullscreen is only supported in this configuration. Note that windowed flip model optimizations like direct flip, independent flip, and access to multiplane overlay hardware are still available across GPUs even without the monitor layout changes. Note that our blog post on flip model says:
Thanks @jenatali for the information. I have already tried using the static symbol exports and can confirm they work (although I am not working on a laptop but on a dual-GPU desktop, it works nonetheless). I'll take a look at the graphics settings you mentioned. My question still remains, though: why/how does this work for the D3D12Fullscreen sample, seemingly automagically, without the need to add any export or change any settings in the OS?
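For context, the "static symbol exports" discussed here are presumably the well-known vendor opt-in exports for hybrid-GPU systems (NvOptimusEnablement / AmdPowerXpressRequestHighPerformance); a minimal sketch, assuming those are the exports meant:

```cpp
// Sketch (assumed exports): declaring these symbols in any one translation
// unit of the executable (not a DLL) asks hybrid-GPU drivers to prefer the
// discrete adapter for this process.
extern "C" {
    __declspec(dllexport) unsigned long NvOptimusEnablement = 0x00000001;   // NVIDIA Optimus
    __declspec(dllexport) int AmdPowerXpressRequestHighPerformance = 1;     // AMD PowerXpress
}
```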
The graphics settings UI is default-populated by an application list. Perhaps the D3D12Fullscreen sample's executable already matches an entry in that list?
Another easy mistake to make is naming your test project's exe something generic and unintentionally matching a preexisting entry in the default list deployed with the graphics driver.
Hello,
I am writing a DX12 sample application from scratch with support for fullscreen toggling, following along with the D3D12Fullscreen sample code. I am on a Windows 11 desktop with an NVIDIA GeForce RTX 2080 SUPER discrete GPU, an Intel(R) UHD Graphics 630 integrated GPU, and two monitors.
I noticed that, in my application, if the HDMI cable of one of the monitors is not connected to the NVIDIA card and I request a fullscreen switch using SetFullscreenState(TRUE, nullptr) while the application's window is on that monitor, it crashes, and I also get the error in the title in the logs. I have found a suggestion to suppress the message in the info queue, but the main issue is that, even if I suppress the info queue message, the SetFullscreenState operation actually fails with the HRESULT "The specified device interface or feature level is not supported on this system. (0x887A0004)" and crashes.
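For reference, suppressing that specific debug-layer message usually goes through the DXGI info queue; a minimal sketch, under the assumption that message ID 80 (the ID commonly filtered in Microsoft's DeviceResources helpers) still corresponds to this exact message in the current SDK:

```cpp
#include <dxgi1_3.h>
#include <dxgidebug.h>
#include <wrl/client.h>
#include <iterator>
// Link: dxgi.lib, dxguid.lib

// Hide the "swapchain's adapter does not control the output" message from the
// DXGI debug output. This only silences the log entry; it does not make the
// SetFullscreenState call succeed.
void HideContainingOutputWarning()
{
    Microsoft::WRL::ComPtr<IDXGIInfoQueue> dxgiInfoQueue;
    if (SUCCEEDED(DXGIGetDebugInterface1(0, IID_PPV_ARGS(&dxgiInfoQueue))))
    {
        DXGI_INFO_QUEUE_MESSAGE_ID hide[] = { 80 }; // assumed ID of this message
        DXGI_INFO_QUEUE_FILTER filter = {};
        filter.DenyList.NumIDs = static_cast<UINT>(std::size(hide));
        filter.DenyList.pIDList = hide;
        dxgiInfoQueue->AddStorageFilterEntries(DXGI_DEBUG_DXGI, &filter);
    }
}
```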
Doing some more digging, I found out that while selecting the adapters using EnumAdapterByGpuPreference with DXGI_GPU_PREFERENCE_HIGH_PERFORMANCE, when I enumerate the outputs per adapter, I get one IDXGIOutput for the discrete GPU and the other one for the integrated GPU, like so:
The device is then created using the NVIDIA GPU adapter.
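A minimal sketch of that kind of enumeration, assuming IDXGIFactory6 is available (the function and variable names here are illustrative, not taken from the issue's actual code):

```cpp
#include <dxgi1_6.h>
#include <wrl/client.h>
#include <cstdio>
// Link: dxgi.lib
using Microsoft::WRL::ComPtr;

// Enumerate adapters by performance preference and list the outputs
// (monitors) each adapter currently controls.
void ListAdaptersAndOutputs()
{
    ComPtr<IDXGIFactory6> factory;
    if (FAILED(CreateDXGIFactory2(0, IID_PPV_ARGS(&factory))))
        return;

    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0;
         factory->EnumAdapterByGpuPreference(i, DXGI_GPU_PREFERENCE_HIGH_PERFORMANCE,
                                             IID_PPV_ARGS(&adapter)) != DXGI_ERROR_NOT_FOUND;
         ++i)
    {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        wprintf(L"Adapter %u: %s\n", i, desc.Description);

        ComPtr<IDXGIOutput> output;
        for (UINT j = 0; adapter->EnumOutputs(j, &output) != DXGI_ERROR_NOT_FOUND; ++j)
        {
            DXGI_OUTPUT_DESC outDesc;
            output->GetDesc(&outDesc);
            wprintf(L"  Output %u: %s\n", j, outDesc.DeviceName);
        }
    }
}
```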
So I assume the fullscreen state request fails because I am issuing it on the swapchain while the window's bounds intersect an output not controlled by the GPU the swapchain was created on. That is basically what the debug layer log states, which makes sense, although the HRESULT error seems to be about something different.
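One way to confirm that interpretation is to ask the swapchain for the output containing the window before attempting the transition; a minimal sketch, assuming the swapchain already exists (the helper name is illustrative):

```cpp
#include <dxgi.h>
#include <wrl/client.h>

// If GetContainingOutput fails, the window sits on an output that the
// swapchain's adapter does not control, which matches the cross-adapter
// situation described above; requesting exclusive fullscreen will then fail.
HRESULT TryEnterFullscreen(IDXGISwapChain* swapChain)
{
    Microsoft::WRL::ComPtr<IDXGIOutput> containingOutput;
    HRESULT hr = swapChain->GetContainingOutput(&containingOutput);
    if (FAILED(hr))
    {
        return hr; // e.g. fall back to a borderless window instead
    }
    return swapChain->SetFullscreenState(TRUE, nullptr);
}
```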
The curious thing, which is what I am interested in understanding, is that the exact same code and hardware configuration works in the D3D12Fullscreen sample in every case! And it does because all outputs are somehow assigned to the NVIDIA discrete GPU instead. I have modified the original sample a bit to also enumerate the outputs per adapter, and I get this instead:
What's going on? Nothing obvious stands out in the Visual Studio project settings for the D3D12Fullscreen sample. It seems that the outputs are somehow forced to be assigned to the discrete GPU, no matter which card the monitors are physically connected to, but it's unclear to me how this behaviour is achieved.
Of course, if I connect both monitors physically to the NVIDIA card, everything works fine, but I'd like to understand how this is possible. What's the difference between my app and the original D3D12Fullscreen sample? It's definitely not in the code or in the way the factory, adapter, device, and swapchain are created and the fullscreen state is requested, as all of that is identical.