Batch inference #9
Typically, to scale this model over large datasets, you would want to use a distributed cloud compute solution such as Google Dataflow. Unfortunately, scaling analysis is beyond the scope of this repository, but if you have a dataset that is reasonable to analyze on a single machine, then you can use the classes in this repository directly rather than relying on the CLI. Looking at https://github.com/PandoraMedia/music-audio-representations/blob/main/mule/analysis.py#L61-L68, you can see that the feature class instances (including the loaded model) persist across analyses and are simply cleared with each new call to
Hope this helps. Reach out if you have any further questions or issues with the above suggestion.
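To make the pattern concrete, here is a generic sketch of "load once, analyze many". Note that the class and method names below are hypothetical stand-ins, not the actual mule API — refer to the linked `analysis.py` for the real classes:

```python
# Illustrative sketch of the pattern described above: pay the expensive
# model-load cost once, then reuse the same extractor instance for every
# file. The names here are hypothetical stand-ins, NOT the mule API.

class FeatureExtractor:
    def __init__(self):
        # Expensive one-time setup (in mule, this is where the model loads).
        self.model_loads = 0
        self._model = self._load_model()

    def _load_model(self):
        self.model_loads += 1
        return object()  # stand-in for a real model object

    def analyze(self, path):
        # Per-file state is cleared on each call; the loaded model persists.
        return f"features({path})"  # stand-in for real feature extraction


extractor = FeatureExtractor()  # the model is loaded exactly once, here
results = {p: extractor.analyze(p) for p in ["a.wav", "b.wav", "c.wav"]}
```

The key point is that instantiation happens outside the loop, so iterating over a whole dataset costs one model load total rather than one per file (which is what invoking the CLI per file effectively does).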
Yeah, I came up with this solution, but I was wondering whether it is possible to load audio files into the model in batches to speed up the process — processing 10 audio files at a time, for example. Also, will the file loader understand mp3, or should I convert everything to wav first? And am I correct that converting everything to 16 kHz beforehand will also speed up the process?
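On the 16 kHz point: assuming the pipeline resamples internally (as the question implies), resampling once up front does avoid repeating that work on every run. The toy function below only illustrates the sample-rate arithmetic via linear interpolation — a real preprocessing pass should use ffmpeg, librosa, or soxr, which apply proper anti-aliasing filters before downsampling:

```python
import numpy as np

def resample_linear(y, sr_in, sr_out):
    """Toy linear-interpolation resampler -- for illustration only.
    Real pipelines should use ffmpeg/librosa/soxr instead, since naive
    interpolation does not anti-alias before downsampling."""
    n_out = int(round(len(y) * sr_out / sr_in))
    t_in = np.arange(len(y)) / sr_in    # input sample times (seconds)
    t_out = np.arange(n_out) / sr_out   # output sample times (seconds)
    return np.interp(t_out, t_in, y)

# One second of 44.1 kHz audio becomes 16000 samples at 16 kHz.
y_441 = np.sin(2 * np.pi * 440 * np.arange(44100) / 44100)
y_16k = resample_linear(y_441, 44100, 16000)
```

Doing this conversion once per file, ahead of time, trades a small amount of disk space for skipping the decode-and-resample step on every subsequent analysis run.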
Hey,

`mule analyze` works fine for a single file, but how do I process the whole dataset without reloading the model for every file?