Advice on asynchronous playback #525
For reference, this is the same question on SO: https://stackoverflow.com/q/78146705/

There is some talk about threading vs. …

You could fold all the sensor-handling code into the … But you can also keep the additional level of indirection with …

But the real problem is at a different place: you are appending to … And, as mentioned in the SO comments, you are never setting the `event`.

BTW, in case you are not aware, what you are doing is called "sonification"; you might find some interesting things when using this as a search term.
Thank you for your helpful comments. I just merged some of your examples together without understanding the purpose of some of the lines of code, which is why the event and loop aren't used! Could you explain how the sensor-handling code could be moved to the …?

I'll just expand quickly on the SO comments. I'm using …

How would you suggest using a queue in this situation? I guess I'd need to copy the sensor data into two queues for both consumer processes? Forgive me, I'm new to concurrency concepts.
This is totally untested, but I thought about something like this:

```python
with stream:
    async with asyncio.timeout(10):
        await sensor.read()
```
Yes, definitely.
Write to the queue after reading from the sensor, and read from the queue in the audio callback. Basically what you are already doing with …
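For illustration, a minimal sketch of that wiring, assuming a thread-safe `queue.Queue`, a NumPy output callback and a stand-in for the 40 Hz sensor (all names here are invented, not taken from this thread):

```python
import asyncio
import queue

import numpy as np
import sounddevice as sd

q = queue.Queue()   # thread-safe: filled by the asyncio side, drained in the audio callback
latest = 0.0        # last sensor value the callback has seen


def callback(outdata, frames, time, status):
    global latest
    try:
        latest = q.get_nowait()   # never block inside the audio callback
    except queue.Empty:
        pass                      # no new value yet: keep using the previous one
    # placeholder synthesis: noise whose level follows the sensor value
    outdata[:] = 0.1 * latest * np.random.randn(frames, 1)


async def read_sensor():
    """Stand-in for the real 40 Hz sensor read (illustrative only)."""
    await asyncio.sleep(1 / 40)
    return np.random.uniform(0, 1)


async def main():
    with sd.OutputStream(channels=1, callback=callback):
        for _ in range(400):                  # roughly 10 seconds of "sensor" data
            q.put_nowait(await read_sensor())


asyncio.run(main())
```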
No problem. It depends on whether the Bokeh thing runs in a separate thread. If yes, you should probably use another queue, but if not, you probably don't need one.
Okay, thanks again for your help. Sorry, just a couple more questions, and if I'm still without a clue I'll either stick to my original dodgy method or go and do some studying on concurrency in Python.

So you said the callback is done in another thread, so that means I can't use …? Would it be as simple as amending the callback so that it does …?

I've got one added complication. I have an external timer with four different states which it cycles through periodically (it spends a different amount of time in each state, and the whole cycle takes maybe 30 seconds). How the sound is generated depends on the state of this timer object. At the moment, I am just checking the state of the timer with a series of if-statements in the callback. Would it be better to set an asyncio event for a change in the timer state, and is it feasible to also be using queues to bring in the sensor data?
You have to use the appropriate queue depending on the situation. See examples/asyncio_generators.py for an example which uses both types of queues, hopefully correctly. Writing from an …
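The general pattern, sketched here as an illustration of the idea (not a copy of that example file): data leaving the audio callback is handed to an `asyncio.Queue` via `loop.call_soon_threadsafe()`, while data going the other way can use a plain thread-safe `queue.Queue`:

```python
import asyncio
import queue

import numpy as np
import sounddevice as sd

to_callback = queue.Queue()       # asyncio code -> audio callback: plain thread-safe queue
from_callback = asyncio.Queue()   # audio callback -> asyncio code: owned by the event loop


async def main():
    loop = asyncio.get_running_loop()
    latest_command = None

    def callback(indata, frames, time, status):
        nonlocal latest_command
        # The callback runs in another thread, so it must not touch the
        # asyncio.Queue directly; hand the put over to the event loop instead.
        loop.call_soon_threadsafe(from_callback.put_nowait, indata.copy())
        try:
            latest_command = to_callback.get_nowait()   # plain queue is fine across threads
            # a real program would act on latest_command here
        except queue.Empty:
            pass

    with sd.InputStream(channels=1, callback=callback):
        for _ in range(100):
            block = await from_callback.get()                   # consume recorded blocks here
            to_callback.put_nowait(float(np.abs(block).mean())) # send something back


asyncio.run(main())
```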
Something like that, but there might be a few more things that you'll have to adapt.
It depends on how exactly the timer is written and read. More specifically, whether it's thread-safe. An …

I guess using a queue would be possible. You can think about it as a "command queue": the timer writes commands into the queue, and the audio callback drains the queue and handles the new command(s) (if any) appropriately.
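A rough sketch of such a command queue, assuming the timer can call a hook function whenever it changes state (the hook and the state names are invented for the example):

```python
import queue

import numpy as np
import sounddevice as sd

commands = queue.Queue()   # timer -> audio callback ("command queue")
state = "A"                # current timer state as seen by the callback


def on_timer_state_change(new_state):
    """Hypothetical hook: have the timer call this whenever it enters a new state."""
    commands.put_nowait(new_state)


def callback(outdata, frames, time, status):
    global state
    while True:                       # drain all pending commands, keep the newest
        try:
            state = commands.get_nowait()
        except queue.Empty:
            break
    if state == "A":
        outdata.fill(0)               # e.g. silence in state A
    else:
        # each other state would get its own synthesis here
        outdata[:] = 0.05 * np.random.randn(frames, 1)


with sd.OutputStream(channels=1, callback=callback):
    on_timer_state_change("B")        # pretend the timer switched states
    sd.sleep(1000)
```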
Thanks for these suggestions. I managed to get both of the features working with queues. I do have a remaining question, but it's quite specific: what's the best approach to minimising the lag between the sensor and the audio?

Let's say the sensor reads in data roughly every …

An alternative is to set the blocksize smaller than …

Is all of this determined by the …?

*I realise you recommend not to set the blocksize in the documentation.
You will never get the sensor and the sound card synchronized (unless you have some kind of hardware-level synchronization like word clock, which I assume you don't), so you should implement your audio callback in such a way that it can handle more than one incoming value per block just as well as zero incoming values. Then you can experiment with different block sizes and hear what sounds best. BTW, you might want to do some parameter interpolation, otherwise changes in the sensor values might sound choppy (sometimes called "zipper noise").
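One possible way to do that interpolation (just a sketch, not part of sounddevice): ramp each parameter linearly from its previous value to the new one over the course of a block instead of jumping:

```python
import numpy as np


class SmoothedParam:
    """Ramp a parameter linearly across each audio block to avoid zipper noise."""

    def __init__(self, initial=0.0):
        self.current = initial
        self.target = initial

    def set(self, value):
        self.target = value           # call this whenever a new sensor value arrives

    def next_block(self, frames):
        ramp = np.linspace(self.current, self.target, frames, endpoint=False)
        self.current = self.target
        return ramp.reshape(-1, 1)    # column shape, ready to multiply (frames, 1) audio


# inside the audio callback, something like:
#     noise_level = smoothed_level.next_block(frames)
#     outdata[:] = tone_block + noise_level * noise_block
```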
You should be prepared for that situation.
If you always drain the queue, the average lag will not grow meaningfully. There will be some jitter though. That's natural in unsynchronized block-based processing.
No. You shouldn't block the audio callback. If there is no new value available, you have to come up with something. Most probably just use the previous value. Or do some fancy extrapolation.
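Combined with the drain-the-queue advice above, the relevant part of the callback could look like this sketch (same made-up names as in the earlier sketch):

```python
import queue

import numpy as np

q = queue.Queue()   # filled elsewhere with sensor values
latest = 0.0        # previous sensor value, reused when nothing new has arrived


def callback(outdata, frames, time, status):
    global latest
    while True:                 # take everything that arrived since the last block...
        try:
            latest = q.get_nowait()
        except queue.Empty:     # ...and stop, without blocking, once the queue is empty
            break
    # `latest` is now the newest sensor value, or simply the previous one if none arrived
    outdata[:] = 0.1 * latest * np.random.randn(frames, 1)
```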
I guess you mean the note in the `blocksize` documentation. I have just taken that nearly verbatim from the PortAudio docs: https://www.portaudio.com/docs/v19-doxydocs/portaudio_8h.html#a443ad16338191af364e3be988014cbbe

I'm (and the PortAudio docs are) not saying to never set the `blocksize`.
I'm using your module to provide "audio feedback" for a digital sensor. The idea is that I have a sensor with one-dimensional time-series data being read in at 40 Hz, and a target value that I want the sensor to read. If the sensor is close to that target, a pure sine wave is played; if it's not, the pure tone is superposed with white noise of an amplitude proportional to the error. The audio playback and sensor reading are done asynchronously.

I used your example of playing a sine wave and the asynchronous examples. What I've got actually works, but I don't fully understand the API and I'm certain I'm doing some very ugly stuff. Would just like a nudge in the right direction if that's ok! I have a minimal working example below; I should say that the actual callback function I'm using is a fair bit more complicated, but this gives the gist.
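Purely as an illustration of the setup described above (a pure tone plus white noise whose level scales with the error), here is a sketch with invented numbers; it is not the author's actual minimal example:

```python
import numpy as np
import sounddevice as sd

samplerate = 48000
frequency = 440.0          # the pure tone
target = 0.5               # desired sensor reading
sensor_value = 0.7         # would really come from the 40 Hz sensor via a queue
start_idx = 0


def callback(outdata, frames, time, status):
    global start_idx
    t = (start_idx + np.arange(frames)) / samplerate
    tone = 0.2 * np.sin(2 * np.pi * frequency * t)
    error = abs(sensor_value - target)        # bigger error -> louder noise
    noise = error * np.random.randn(frames)
    outdata[:, 0] = tone + noise              # may clip for large errors; only a sketch
    start_idx += frames


with sd.OutputStream(samplerate=samplerate, channels=1, callback=callback):
    sd.sleep(2000)
```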