[FEATURE] FLAC support to decrease file sizes #352
On one hand, I'm open to loading different audio file formats. At the bottom, the audio loading comes from. The issue with Colab, though (presumably the simplified trainer), is that I need to be able to tell which of the standardized files are being used so that I can set up training correctly (calibrate latency, do a train/test split that doesn't have data leakage, etc.), and I'd need to duplicate that work for these files, which aren't actually the exact same. So I'm not really keen on that part; this would be a CLI-only feature (or you'd be responsible for maintaining your own training script, which I'd totally encourage you to look into!)
Is it possible to create and check the checksums based on the raw audio itself rather than the file? E.g., a FLAC file stores such a checksum of the unencoded audio in its metadata. Would https://audiodiff.readthedocs.io/en/latest/ maybe work?
That's how it works currently 👍🏻
Coming back to this: I'm not going to implement it, because I don't think that it helps the core aim of this repo, which is to:
So the way I see it, loading FLAC files is a separate consideration. In fact, getting the audio data in general is separate (it doesn't even need to come from a file; it could be provided directly by a plug-in calling this code). At any rate, this should be something that someone could implement in their own project, or add to this project by forking the repo or writing an extension.
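The checksum idea discussed above (hashing the decoded samples rather than the file bytes, so the same audio matches regardless of container) could be sketched roughly like this. This is an illustration, not code from this repo: a FLAC-aware version would need a third-party decoder such as the `soundfile` package (an assumption); the standard library's `wave` module used here only handles WAV.

```python
# Sketch: checksum the decoded PCM frames, not the raw file bytes, so the
# result depends only on the audio content, not on how it was containerized.
import hashlib
import io
import wave

def frames_checksum(wav_file) -> str:
    # Hash the PCM frames plus the decode-relevant parameters (channels,
    # sample width, sample rate) so differing headers/tags don't matter.
    with wave.open(wav_file, "rb") as w:
        params = (w.getnchannels(), w.getsampwidth(), w.getframerate())
        frames = w.readframes(w.getnframes())
    h = hashlib.sha256()
    h.update(repr(params).encode())
    h.update(frames)
    return h.hexdigest()

# Build a small mono 16-bit WAV in memory and checksum it.
buf = io.BytesIO()
with wave.open(buf, "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(48000)
    w.writeframes(b"\x00\x01" * 1000)
buf.seek(0)
checksum = frames_checksum(buf)
```

With a FLAC decoder in place of `wave`, a losslessly re-encoded file would produce the same digest as the original WAV, which is what would let the trainer recognize a standardized file in either format.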
Thank you for the explanation; I can see your point.
It can save a lot of storage space, and Google Colab's upload speed is slow, so this would also help there.