mobilenet v3 train #3
Comments
I think you are right. I will modify the use of the ReLU layer. When training, I set the batch size to 64 and used 8 GPUs for parallel training. I will release the pre-trained model and train.prototxt ASAP.
Could you please release the train.prototxt and pre-trained model? Thanks a lot!
1 similar comment
Why did you use ReLU6 instead of ReLU after the scale layer in the blocks? I think the paper used ReLU. What batch size did you use when training? Can you release your train.prototxt and solver.prototxt? I can't reproduce your result. Thank you!
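For reference, the two activations being discussed differ only in an upper clamp at 6; ReLU6 is commonly used in the MobileNet family for robustness with low-precision inference. A minimal NumPy sketch of the difference (an illustration, not the repository's Caffe layer definitions):

```python
import numpy as np

def relu(x):
    """Standard ReLU: max(x, 0)."""
    return np.maximum(x, 0.0)

def relu6(x):
    """ReLU6: clamps activations to the range [0, 6]."""
    return np.minimum(np.maximum(x, 0.0), 6.0)

x = np.array([-3.0, 0.0, 2.5, 8.0])
print(relu(x))   # large positive values pass through unchanged
print(relu6(x))  # values above 6 are clipped to 6
```

The two functions agree on any input in [0, 6], so which one a given pre-trained model expects only matters when activations exceed that range.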