RandAugment implementation differences. #411
@bhack If those differences were discussed and accepted, maybe they could be described in the docstring. For example: the
I've corrected the wrongly mentioned ticket.
I see, #169 is closed, but there is no mention of whether it was decided to ignore or include it. Also, I can't find information/discussion regarding the presence of other ops:
Yes, I also think that some background information/choices could be linked to this kind of argument: #294 (comment)
I actually discussed this heavily with Ekin and am confident that our implementation is correct and that the others are actually NOT correct! I can take an action item to discuss this in a "why KerasCV" doc; I am actively working with several original authors to verify our implementations of various components.
@LukeWood thank you for the answer. I did not know about internal work or discussions and was simply worried that there are quite a few differences between
Also, even if it is more correct and documented, I suppose it could still be a risk for the community until we cover our "flavour" with full reproducibility of the training protocol.
I'm reading the `keras_cv` RandAugment implementation and feel like there are some differences from the TF implementation referenced in the paper. I'm still reading this layer, so I might not have understood everything, but here it goes:

- `Posterization` op in RandAugment - reference TF implementation.
- `Sharpness` op in RandAugment - this op is present as a Random layer in keras-cv.
- `Rotate` op in RandAugment - discussed also in "[AutoAugment] Add Rotate operation" #402.
- `shear` and `translate` ops - the reference TF implementation randomly negates magnitudes for these ops, and it looks like this is also used in RandAugment.
- `Random*` layers, e.g. `RandomBrightness` is used instead of `Brightness`, but the function behaves differently in keras and in the TF implementation. Perhaps this is marginal.

Are those differences intentional / are the missing operations not so important, or should they be added in the future?
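For context on two of the ops mentioned above, here is a minimal NumPy sketch of what posterization and random magnitude negation do. The helper names `posterize` and `randomly_negate` are hypothetical and do not correspond to the actual KerasCV or TF API:

```python
import random

import numpy as np

def posterize(image, bits):
    # Posterization keeps only the top `bits` bits of each uint8
    # channel value, e.g. with bits=4: 255 -> 240, 128 -> 128, 7 -> 0.
    shift = 8 - bits
    return (image >> shift) << shift

def randomly_negate(magnitude, prob=0.5):
    # The reference TF RandAugment flips the sign of shear/translate
    # magnitudes with 50% probability; this sketch mimics that behaviour.
    return -magnitude if random.random() < prob else magnitude

image = np.array([[255, 128, 7]], dtype=np.uint8)
posterized = posterize(image, 4)          # [[240, 128, 0]]
shear_magnitude = randomly_negate(0.3)    # 0.3 or -0.3
```

Without the random negation, shear and translate would only ever push pixels in one direction, which is presumably why the reference implementation includes it.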