Possible to use more num_layers? #34
Comments
@38github yes, it's definitely possible. Would you share your patches for https://github.com/AidaDSP/Automated-GuitarAmpModelling as well? I see your model still has 1 LSTM layer. It could be that modelToKeras.py needs to be adjusted too.
All I have done for now is the following. Is it the part that generates the JSON file that needs modification?
I wish that I could help out more than this.
@38github I would need the original JSON file prior to invoking modelToKeras.py. This script needs to gather the number of rec layers from the model file, since at the moment it expects an input file with only one rec layer.
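For reference, a minimal sketch of how the layer count could be inferred from the training output, assuming the saved best_model.json follows PyTorch's nn.LSTM state-dict key naming (rec.weight_ih_l0, rec.weight_ih_l1, ...) under a "state_dict" entry; this is a guess about the file layout, not the project's actual converter code:

```python
import json

def count_rec_layers(model_json_path):
    # Hypothetical helper, not part of modelToKeras.py.
    # Assumes the JSON stores a PyTorch-style state dict with keys such as
    # "rec.weight_ih_l0", "rec.weight_ih_l1", ... (one weight_ih per layer).
    with open(model_json_path) as f:
        model = json.load(f)
    state_dict = model.get("state_dict", model)
    layer_ids = {k.split("weight_ih_l")[-1] for k in state_dict if "weight_ih_l" in k}
    return len(layer_ids)
```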
Do you want me to train using the settings I used, skip exporting the model, and instead send you the best_model.json file?
@38github yes, please note that the training script produces more than one output file.
Usually we then calculate the ESR on the test dataset, where test != val; this lets us spot and avoid overfitting. The test ESR is also calculated without the PreEmph filter (it is a true generalization test). Please share both files and I will take care of adjusting the converter script. Also note that there will be a large increase in CPU usage with the network you propose; expect a 2x or 3x load. So I was wondering: instead of cascading REC layers, have you tried putting them in parallel? I would like to run this test but I have no time at the moment. It would be nice to have a comparison on the very same device and settings, to understand whether it performs better. Finally, please switch to the next branch, since I have added a nice Spectrogram view which really helps in judging model quality, more so than ESR alone, for example to understand whether the improvements in ESR are really worth it (our ears work in the frequency domain, not the time domain). Thanks a lot!
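To illustrate the cascaded-versus-parallel idea, here is a purely illustrative PyTorch sketch (not code from the training repo; input/hidden sizes are placeholders):

```python
import torch
import torch.nn as nn

# Cascaded (stacked) recurrent layers: the second LSTM processes the first one's output.
stacked = nn.LSTM(input_size=1, hidden_size=8, num_layers=2, batch_first=True)

class ParallelRec(nn.Module):
    """Two small LSTMs fed the same input in parallel, mixed by one linear layer."""
    def __init__(self, hidden_size=8):
        super().__init__()
        self.rec_a = nn.LSTM(1, hidden_size, batch_first=True)
        self.rec_b = nn.LSTM(1, hidden_size, batch_first=True)
        self.lin = nn.Linear(2 * hidden_size, 1)

    def forward(self, x):
        ya, _ = self.rec_a(x)
        yb, _ = self.rec_b(x)
        return self.lin(torch.cat([ya, yb], dim=-1))

x = torch.randn(1, 2048, 1)      # (batch, samples, channels)
y_parallel = ParallelRec()(x)    # same shape as x
```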
Here are tests that I have done with NAM and its LSTM implementation. I didn't try 1 num_layers though, only 2 and 3, but one day I will run tests with 1 num_layers and different hidden sizes. lstm_tests_2_num_layers_list.pdf I remember that using 2 num_layers with a hidden size of 8 gave much better results than 1 num_layers with a hidden size of 16, even though the CPU load was about the same. Is AIDA-X using 1 num_layers by default?
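As a rough sanity check of the "CPU was about the same" observation, a small sketch that counts PyTorch-style LSTM parameters (a crude proxy for per-sample compute, not a measurement of either plugin's actual load):

```python
def lstm_param_count(input_size, hidden_size, num_layers):
    # PyTorch LSTM: per layer, weight_ih + weight_hh + bias_ih + bias_hh,
    # each gate block having 4 * hidden_size rows.
    total = 0
    for layer in range(num_layers):
        in_size = input_size if layer == 0 else hidden_size
        total += 4 * hidden_size * (in_size + hidden_size) + 2 * 4 * hidden_size
    return total

print(lstm_param_count(1, 8, 2))    # 2 layers, hidden size 8  -> 928
print(lstm_param_count(1, 16, 1))   # 1 layer,  hidden size 16 -> 1216
```

By this crude measure the two configurations are in the same ballpark, which is at least consistent with the similar CPU usage reported above.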
Nice! I've also seen the companion thread, which I will add here for reference. I don't see any particular issues with adding support on the runtime side (plugin). So to proceed straight away I need:
If you prefer to input the NAM model file format, we will try to figure that out too ;)
I trained a model using 3 num_layers, which improved a fuzz sound compared to the default settings. With the defaults I got an ESR of 0.18, but with 3 num_layers I got 0.10, and after one more round of training it went down to 0.08.
The problem is that the plugin can't use it correctly. It does load the model but the sound is very quiet and does not have much distortion.
Is it possible to enable more num_layers, similar to how you enabled larger hidden_sizes?
3 num_layers file:
_LSTM-40-0_ESR_0p0886.zip
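For illustration, the kind of network implied by num_layers=3 might look like the sketch below; the hidden size of 40 is guessed from the file name and the skip connection is an assumption, so this is not the training repo's actual model class:

```python
import torch.nn as nn

class StackedLSTMAmp(nn.Module):
    # Hypothetical sketch of a 3-layer stacked LSTM amp model.
    def __init__(self, hidden_size=40, num_layers=3):
        super().__init__()
        self.rec = nn.LSTM(1, hidden_size, num_layers=num_layers, batch_first=True)
        self.lin = nn.Linear(hidden_size, 1)

    def forward(self, x):
        y, _ = self.rec(x)
        return self.lin(y) + x  # residual/skip connection (assumed here)
```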