Hi, and congratulations on your work. It is astonishing how much ground UDA has gained over the years.
Looking over the code, I noticed that you changed some parts of the last layers of DeepLabv2. In particular, you added group normalization, ReLU activations, and dropout to the last layers. What was the inspiration behind these changes? How much did they contribute to the results? Similar methods have shown much-improved results using DeepLabv3. Is it fair to compare your modified model with other DeepLabv2 approaches without quantifying how much these changes contributed to the result?
Thank you for your time and consideration. I look forward to your response.
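
For readers unfamiliar with the first of the modifications mentioned above, here is a minimal sketch of what group normalization computes (this is a generic illustration of the operation, not code from the repository; the tensor shapes and group count are made up for the example):

```python
import numpy as np

def group_norm(x, num_groups, eps=1e-5):
    """Normalize x of shape (N, C, H, W) within groups of C // num_groups channels.

    Unlike batch norm, the statistics are computed per sample, so the result
    does not depend on batch size -- one common motivation for using it in
    segmentation models trained with small batches.
    """
    n, c, h, w = x.shape
    g = x.reshape(n, num_groups, c // num_groups, h, w)
    mean = g.mean(axis=(2, 3, 4), keepdims=True)   # per-sample, per-group mean
    var = g.var(axis=(2, 3, 4), keepdims=True)     # per-sample, per-group variance
    g = (g - mean) / np.sqrt(var + eps)
    return g.reshape(n, c, h, w)

# Illustrative shapes: a 2-image batch of 8-channel feature maps.
x = np.random.randn(2, 8, 4, 4)
y = group_norm(x, num_groups=4)
```

After the call, each of the 4 channel groups in each sample has approximately zero mean and unit variance; a learnable per-channel scale and shift (omitted here for brevity) would follow in a real layer.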