Change output device for running AudioContext #232
Conversation
and simplify the cpal backend code
Todo: the new render thread crashes because the audio graph must be reused.
We might end up with multiple render threads at the same time. Use a Mutex to prevent this.
@@ -1,25 +1,23 @@
//! Audio IO management API
I thoroughly rewrote this file, so I'm closing #51. (Edit: no I'm not, I misread the issue.)
/// This function operates synchronously and might block the current thread. An async version
/// is currently not implemented.
#[allow(clippy::needless_collect, clippy::missing_panics_doc)]
pub fn set_sink_id_sync(&self, sink_id: String) -> Result<(), Box<dyn Error>> {
This is the meat of the PR (a rough sketch follows the list):
- check if sink is already active
- validate sink
- acquire necessary locks (avoid issues with concurrent calls)
- shut down current render thread and recover the audio graph
- boot up new thread, and ship the audio graph
- release locks
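To make that sequence concrete, here is a hypothetical, self-contained model of the flow; the types (Graph, RenderThreadHandle, Context) and the error type are illustrative stand-ins, not the crate's real definitions.

```rust
use std::sync::Mutex;

struct Graph; // stand-in for the audio graph that must survive the swap

struct RenderThreadHandle {
    graph: Option<Graph>,
}

impl RenderThreadHandle {
    fn shutdown_and_recover_graph(self) -> Option<Graph> {
        // The real code asks the render thread to stop and receives the
        // graph back over a channel; here we just hand it over directly.
        self.graph
    }
}

struct Context {
    sink_id: Mutex<String>,
    render_thread: Mutex<Option<RenderThreadHandle>>,
}

impl Context {
    fn set_sink_id_sync(&self, sink_id: String) -> Result<(), String> {
        // 1. check if the sink is already active
        let mut current = self.sink_id.lock().unwrap();
        if *current == sink_id {
            return Ok(());
        }
        // 2. validate the sink (the real code would ask the audio host)
        if sink_id.is_empty() {
            return Err("unknown sink".to_string());
        }
        // 3. acquire the lock on the render thread slot; holding it for the
        //    rest of the function prevents concurrent calls from racing
        let mut slot = self.render_thread.lock().unwrap();
        // 4. shut down the current render thread and recover the audio graph
        let graph = slot.take().and_then(RenderThreadHandle::shutdown_and_recover_graph);
        // 5. boot up the new thread and ship the audio graph to it
        *slot = Some(RenderThreadHandle { graph });
        *current = sink_id;
        Ok(())
        // 6. both locks are released when the guards go out of scope
    }
}

fn main() {
    let ctx = Context {
        sink_id: Mutex::new("default".to_string()),
        render_thread: Mutex::new(Some(RenderThreadHandle { graph: Some(Graph) })),
    };
    ctx.set_sink_id_sync("virtual-passthrough".to_string()).unwrap();
}
```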
// hotswap the backend
let options = AudioContextOptions {
    sample_rate: Some(self.sample_rate()),
If this sample rate is not available on the new sink, bad things will happen. That is why #183 exists.
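As a side note on that concern, here is a hedged sketch of how the requested rate could be checked against the new device before the swap, using cpal's public API directly; this is not the crate's internal code, and it needs cpal as a dependency.

```rust
use cpal::traits::{DeviceTrait, HostTrait};

/// Returns true if `device` advertises an output config range containing
/// `sample_rate`.
fn supports_sample_rate(device: &cpal::Device, sample_rate: u32) -> bool {
    match device.supported_output_configs() {
        Ok(mut configs) => configs.any(|range| {
            range.min_sample_rate().0 <= sample_rate && sample_rate <= range.max_sample_rate().0
        }),
        Err(_) => false,
    }
}

fn main() {
    let host = cpal::default_host();
    if let Some(device) = host.default_output_device() {
        println!("44.1 kHz supported: {}", supports_sample_rate(&device, 44_100));
    }
}
```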
src/context/online.rs
pub fn output_latency(&self) -> f64 {
-    self.backend.output_latency()
+    self.backend.lock().unwrap().output_latency()
Not sure of the thread layout here, but can't this lock() be a problem regarding the audio thread?
That's something we might want to call periodically during rendering, I guess.
I realize now the backend naming is confusing. The backend object lives in the control thread, so locking is no issue. A better name would be backend_manager, because it passes instructions from the control thread (start, pause, close) and collects info from the actual backend in the render thread (latency, sink_id). I will update the PR by renaming this struct.
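To illustrate that layout, here is a rough, hypothetical sketch; the names and message variants are made up, not the crate's real ones. The manager lives in the control thread, forwards instructions over a channel, and reads back info that the render-thread side publishes through shared state.

```rust
use std::sync::mpsc::{channel, Sender};
use std::sync::{Arc, Mutex};

enum ControlMessage {
    Start,
    Pause,
    Close,
}

/// Info the render-thread side publishes and the control thread reads.
#[derive(Default)]
struct BackendInfo {
    output_latency: f64,
    sink_id: String,
}

/// Lives in the control thread; locking `info` never happens on the render
/// thread's hot path, so a Mutex is fine here.
struct BackendManager {
    ctrl_msg_send: Sender<ControlMessage>,
    info: Arc<Mutex<BackendInfo>>,
}

impl BackendManager {
    fn start(&self) {
        self.ctrl_msg_send.send(ControlMessage::Start).unwrap();
    }
    fn pause(&self) {
        self.ctrl_msg_send.send(ControlMessage::Pause).unwrap();
    }
    fn close(&self) {
        self.ctrl_msg_send.send(ControlMessage::Close).unwrap();
    }
    fn output_latency(&self) -> f64 {
        self.info.lock().unwrap().output_latency
    }
    fn sink_id(&self) -> String {
        self.info.lock().unwrap().sink_id.clone()
    }
}

fn main() {
    let (ctrl_msg_send, _ctrl_msg_recv) = channel();
    let manager = BackendManager {
        ctrl_msg_send,
        info: Arc::new(Mutex::new(BackendInfo::default())),
    };
    manager.start();
    manager.pause();
    manager.close();
    println!("latency: {} s, sink: {:?}", manager.output_latency(), manager.sink_id());
}
```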
let message = ControlMessage::Startup { graph };
ctrl_msg_send.send(message).unwrap();

self.base().set_state(AudioContextState::Running);
If the context was suspended, maybe we want to keep it suspended?
Actually, I didn't read the thing carefully, but the spec seems to say:
If wasRunning is true:
Set the [[rendering thread state]] on the AudioContext to "suspended".
https://webaudio.github.io/web-audio-api/#ref-for-dom-audiocontext-setsinkid%E2%91%A0
But as we already differ on the initial state, I don't really know...
Just keeping the state as it was could be a good tradeoff?
Yes, very good point. I will update the PR.
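A minimal sketch of what keeping the state as it was could look like, using a toy context; the real method, fields, and state transitions in the crate differ.

```rust
#[derive(Clone, Copy, PartialEq, Debug)]
enum AudioContextState {
    Suspended,
    Running,
}

struct Context {
    state: AudioContextState,
}

impl Context {
    fn set_sink_id_sync(&mut self, _sink_id: String) {
        let was_running = self.state == AudioContextState::Running;
        // ... shut down the old backend, boot the new one ...
        // Only flip back to Running if the context was running before;
        // a suspended context stays suspended after the device switch.
        if was_running {
            self.state = AudioContextState::Running;
        }
    }
}

fn main() {
    let mut ctx = Context { state: AudioContextState::Suspended };
    ctx.set_sink_id_sync("another-device".into());
    assert_eq!(ctx.state, AudioContextState::Suspended);
}
```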
That's quite a thing, I'm really not sure I understand everything (I'm pretty sure I don't, actually :), so maybe my comments are not very relevant. Actually, it makes me think it would be quite nice to have some graph of who lives where in terms of threads; I'm always a bit lost in this part of the code. In any case, the example seems to work on my side, switching between some virtual device and a real output.
Yes, I created a new issue for this: #235
This is to emphasize that the object lives in the control thread, not the render thread. This means we can block (Mutex::lock) without issues.
We need to handle an interesting edge case where the render thread never processed the Startup command (e.g. when it was suspended for its entire lifetime).
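Purely as an illustration of that edge case (all names here are made up): if the context was suspended for its whole lifetime, the Startup message carrying the graph may still sit unread in the control channel, so the graph has to be reclaimed from there rather than from the render thread.

```rust
use std::sync::mpsc::{channel, Receiver};

struct Graph;

#[allow(dead_code)]
enum ControlMessage {
    Startup { graph: Graph },
    Shutdown,
}

/// Drain pending messages; if an unprocessed Startup is found, reclaim its
/// graph instead of asking the render thread (which never saw it) for it.
fn recover_pending_graph(rx: &Receiver<ControlMessage>) -> Option<Graph> {
    let mut recovered = None;
    while let Ok(msg) = rx.try_recv() {
        if let ControlMessage::Startup { graph } = msg {
            recovered = Some(graph);
        }
    }
    recovered
}

fn main() {
    let (tx, rx) = channel();
    // The context was suspended immediately, so Startup was never consumed.
    tx.send(ControlMessage::Startup { graph: Graph }).unwrap();
    assert!(recover_pending_graph(&rx).is_some());
}
```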
This is the second of three PRs for #216.
You will notice a severe lack of tests for this complicated feature. That is because the functionality is only exposed in the online AudioContext, which needs an actual audio backend for now; that will be changed in PR number 3. You can test it with cargo run --release --example sink_id, but notice that cpal only lists your default audio output, so you won't be able to switch from headphones to the built-in speakers (on OSX). I added a virtual passthrough device to use for testing. Sad though. It is not really testable with the cubeb backend because it suffers from segfaults when closing a stream. Known issue #187 :(
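For reference, a quick way to see which output devices cpal exposes on your machine; this is plain cpal, not this crate's example, and it needs cpal in Cargo.toml.

```rust
use cpal::traits::{DeviceTrait, HostTrait};

fn main() {
    let host = cpal::default_host();
    match host.output_devices() {
        Ok(devices) => {
            for device in devices {
                println!("output device: {}", device.name().unwrap_or_default());
            }
        }
        Err(err) => eprintln!("could not enumerate output devices: {err}"),
    }
}
```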