Migrate AutoAugment and RandAugment to TensorFlow Addons. #1226
Comments
For the GSoC we probably need to think more generally about preprocessing.
@bhack can you please elaborate on what you mean by this? Thanks.
Many image operations in Keras are still not in addons or in
Also, about AutoKeras: could the policy be handled by AutoKeras, so that AutoAugment could be used more generally in other projects/experiments with AutoKeras + keras_preprocessing instead of being embedded in EfficientNet?
I don't think so. We could train models with various policies using this as a proof of concept.
I'm interested in it and I have benefited a lot from
So while looking at how AutoKeras handles hyperparameter containers, at first glance it seems like a suitable replacement for HParams. We initially decided not to move HParams when we left tf.contrib (even though it's convenient and works very well) since we didn't want to diverge from officially supported APIs. Adding KerasTuner as a dependency has its own challenges, but I'm wondering if there is a better way to align with the ecosystem by using it. @omalleyt12 @gabrieldemarmiesse do you have any thoughts about re-using KerasTuner's HyperParameter object (likely overkill for what's needed in AutoAugment)? Or just general thoughts on how AutoAugment in addons fits with the KerasTuner and Keras Preprocessing advances?
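For reference, the kind of container being debated is tiny; here is a plain-Python sketch (hypothetical, illustrating the idea only; this is neither the old tf.contrib HParams nor KerasTuner's HyperParameters API):

```python
# Minimal hyperparameter container sketch (hypothetical; not the
# tf.contrib HParams or KerasTuner HyperParameters API).
class HParams:
    def __init__(self, **kwargs):
        self._params = dict(kwargs)
        for name, value in kwargs.items():
            setattr(self, name, value)

    def override(self, **kwargs):
        """Replace existing values; reject unknown names."""
        for name, value in kwargs.items():
            if name not in self._params:
                raise KeyError(f"unknown hyperparameter: {name}")
            self._params[name] = value
            setattr(self, name, value)
        return self
```

Something this small may be all AutoAugment needs, which is why pulling in KerasTuner as a hard dependency can look like overkill.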
We also still have an AutoAugment policy variant in https://github.com/google-research/remixmatch
Should these augmentations be added as part of a submodule in addons?
I don't know. Probably… It had its own sub-"library" for augmentation 😄 https://github.com/google-research/remixmatch/tree/master/libml
/cc @carlini
The remixmatch repository is intended to faithfully reproduce the experiments of the corresponding ICLR'20 paper. We do not intend for this repository to be the source of truth for any particular implementation.
@seanpmorgan I don't think that we should depend on Keras-Tuner, except when implementing papers where the end result is a hyperparameter search algorithm. For papers that use a search algorithm to produce a result, like AutoAugment and RandAugment, we should hardcode the final numbers here and here and make sure our API/architecture is modular enough for other people to plug it into a hyperparameter search algorithm. In short, let's make it easy for users to change those values; it's up to them to plug our API into keras-tuner if they want.
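To illustrate the modularity argument, here is a rough sketch (hypothetical names; the defaults are only illustrative stand-ins for the published RandAugment N and M values) where the final numbers are hardcoded as defaults but stay plain arguments, so any search tool, keras-tuner included, can override them from outside:

```python
import random

# Illustrative RandAugment-style defaults, hardcoded as plain constants
# (N = number of ops applied per image, M = shared magnitude).
DEFAULT_NUM_LAYERS = 2
DEFAULT_MAGNITUDE = 10

def rand_augment(image, transforms, num_layers=DEFAULT_NUM_LAYERS,
                 magnitude=DEFAULT_MAGNITUDE, rng=random):
    """Apply `num_layers` uniformly sampled transforms at one magnitude.

    `transforms` is a list of callables (image, magnitude) -> image, so
    an external hyperparameter search can tune num_layers/magnitude
    without this module depending on any tuner library.
    """
    for _ in range(num_layers):
        op = rng.choice(transforms)
        image = op(image, magnitude)
    return image
```

A user who wants a learned policy simply wraps the two arguments in their search space; the addons code itself stays dependency-free.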
@gabrieldemarmiesse what do you think about the ReMixMatch AutoAugment variant that learns the policy during training?
@bhack could you expand? I'm not sure I understand your question.
@gabrieldemarmiesse I meant how we could organize policies more generally; I think that with more than just a first AutoAugment variant, we would need to organize the policies a little.
ReMixMatch's augmentation policy (CTA) is slightly different from standard augmentation policies because it requires integration with the training loop. At every minibatch step, the policy needs to be "trained" with a second minibatch of examples so that it can determine the magnitude of perturbations that are allowed.
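As a rough sketch of that idea (illustrative only; the authoritative implementation lives in the remixmatch repository linked above), the policy keeps a running weight per magnitude bin and is updated from a second minibatch at every step:

```python
# Hypothetical CTA-style policy sketch: each magnitude bin has a weight
# updated from how well the model's prediction on an augmented example
# matched its label; only well-scoring magnitudes are sampled.
class CTAPolicy:
    def __init__(self, num_bins, decay=0.99):
        self.weights = [1.0] * num_bins  # one weight per magnitude bin
        self.decay = decay

    def update(self, bin_index, match_score):
        # match_score in [0, 1], computed from the second ("policy")
        # minibatch at every training step.
        w = self.weights[bin_index]
        self.weights[bin_index] = self.decay * w + (1 - self.decay) * match_score

    def allowed_bins(self, threshold=0.8):
        # Magnitudes whose weight is below the threshold are considered
        # too destructive and are not sampled.
        return [i for i, w in enumerate(self.weights) if w > threshold]
```

This is exactly what makes CTA awkward for a stateless ops library: the `update` call has to sit inside the user's training loop.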
Yes, we also need to consider https://github.com/google-research/fixmatch, but it seems to me that it doesn't introduce new augmentation policies.
AutoAugment is also (is it just a copy?) in the EfficientDet repository released today: https://github.com/google/automl/blob/master/efficientdet/aug/autoaugment.py
FixMatch uses the same augmentations as ReMixMatch.
Thank you for the FixMatch confirmation. I see, though, that @mingxingtan added some extra AutoAugment operations in EfficientDet.
@dynamicwebpaige
@vinayvr11 there is an initial PR at #1275.
I'll make an issue regarding this, listing all image ops to add. We can also discuss how they can be handled more generally in
OK, thank you @bhack.
Hello @bhack, could you please help me find some more issues in TensorFlow Addons or tf.image so that I can mention them in my GSoC proposal?
@vinayvr11 these are general hints for every student. I also suggest you go ahead, as far as you can, with a PR so that mentors have a valid sample of your coding. I.e., if possible, take an operator from these referenced repositories that is not already expressed in TensorFlow (i.e. only in NumPy/PIL), so that mentors can get feedback on your TensorFlow coding ability beyond straight porting.
Thank you very much for this @bhack. Actually, I also found some loss functions and optimizers that are listed in tf.contrib but not in TensorFlow 2.x; can I also mention them in the proposal?
@bhack: could you please review my proposal? It would be a great help for me: https://docs.google.com/document/d/1mv32xoGI08JP1wcMiugTyVBzCee7dsyYK_5Uf6SjxEE/edit?usp=sharing
@vinayvr11 see @dynamicwebpaige's best practices on how to collect feedback.
@dynamicwebpaige I don't know how many slots we could have for similar tasks at GSoC, but another related "proxy task" could be image-text augmentation, like the CVPR 2020 work at https://github.com/Canjie-Luo/Text-Image-Augmentation/
@bhack @vinayvr11 please keep this thread focused on AutoAugment and RandAugment. Feel free to use direct messages or to open new issues if you think the topic has changed. Having an issue with 40+ messages makes the life of the maintainers quite hard.
@gabrieldemarmiesse it would probably have been better to open a Gitter channel dedicated to GSoC, separate from the addons Gitter, for this kind of thread, so that issues aren't forced off-topic. Google doesn't officially support any realtime chat channel, and the Google Summer of Code TensorFlow page still points to https://github.com/tensorflow/community (you'll also find off-topic GSoC issues there).
How are we going to coordinate with the image preprocessing that is landing in https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/keras/layers/preprocessing/image_preprocessing.py?
E.g. @haifeng-jin is using those operations in AutoKeras.
/cc @tanzhenyu, who per git blame is working on the image_preprocessing ops in Keras.
The image preprocessing ops only contain basic operations like rotate and translate; they don't have the 'advanced' ops like blend, bbox, or augment.
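For context, a "blend" op of the kind referenced here is just a clipped linear interpolation between two images. A minimal NumPy sketch (assuming float images in the [0, 255] range; the actual TensorFlow helper lives in the EfficientNet autoaugment.py linked in this issue):

```python
import numpy as np

def blend(image1, image2, factor):
    """Interpolate between two images: factor=0 -> image1, factor=1 -> image2.

    Assumes float arrays with values in [0, 255]; factors outside [0, 1]
    extrapolate and are then clipped back into range.
    """
    if factor == 0.0:
        return image1
    if factor == 1.0:
        return image2
    diff = image2.astype(np.float32) - image1.astype(np.float32)
    out = image1.astype(np.float32) + factor * diff
    return np.clip(out, 0.0, 255.0)
```

Ops like this are trivial individually, which is part of why duplicating them across repos is mostly a coordination problem rather than a technical one.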
I think we need to de-duplicate code as much as possible, not just because it is a waste of time but also because it is confusing for newcomers (users and candidate contributors).
Why upstream? It sounds like tfa.image is a complementary suite to tf.image?
I meant that, as TF Addons is historically the fusion of the TF contrib community and the keras-contrib community, we have two official upstream promotion paths, as you can see in point 3.
I agree with @gabrieldemarmiesse: tfa shouldn't rely on Keras-Tuner or include learnable policies; they could just be hard-coded. For an AutoAugment policies API, it seems a keras-image repo (which would rely on both tfa.image and Keras-Tuner, and which AutoKeras could rely on) would be a better fit IMHO.
I think we need to discuss this a little, because we also need to clarify the difference between contributing to keras.experimental and to TF Addons, both to route GSoC students, contributors, etc., and to avoid scattering implementations across multiple repos, since it is currently hard to get a monthly overview across the TensorFlow/Keras ecosystems.
Yep, we definitely need this to be modular and architectural. I will attend the next meeting.
We need to involve somebody from the google-research repo to be sure that Google's internal policies will not create a problem for that team to contribute to and feed a policy API under
Some follow-up on this: Francois and I discussed and decided that the new keras_cv will include autoaugment and randaugment. |
Closing, as this is now going to be implemented elsewhere in the ecosystem. Since we've already merged a few components, we can deprecate them as the replacements become available.
It's been close to a year with no word from on high, and keras_cv still looks pretty sparse. Has there been any news?
Describe the feature and the current behavior/state.
RandAugment and AutoAugment are both policies for enhanced image preprocessing that are included in EfficientNet, but they still use tf.contrib: https://github.com/tensorflow/tpu/blob/master/models/official/efficientnet/autoaugment.py
The only tf.contrib image operations they use, however, are rotate, translate, and transform, all of which have been included in TensorFlow Addons.

Relevant information
Are you willing to contribute it (yes/no):
No, but I am hoping that someone from the community will pick it up (potentially a Google Summer of Code student).
Are you willing to maintain it going forward? (yes/no):
Yes
Is there a relevant academic paper? (if so, where):
AutoAugment Reference: https://arxiv.org/abs/1805.09501
RandAugment Reference: https://arxiv.org/abs/1909.13719
Is there already an implementation in another framework? (if so, where):
See link above; this would be a standard migration from tf.contrib.

Was it part of tf.contrib? (if so, where):
Yes
Which API type would this fall under (layer, metric, optimizer, etc.)
Image
Who will benefit with this feature?
Anyone doing image preprocessing, especially for EfficientNet.
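The migration described above essentially comes down to swapping three op namespaces. An assumed mapping, useful as a checklist (the addons-side names should be verified against the tfa.image documentation):

```python
# Assumed tf.contrib -> TensorFlow Addons mapping for the three image
# ops that the EfficientNet autoaugment.py relies on; double-check the
# addons names against the tfa.image docs before migrating.
CONTRIB_TO_ADDONS = {
    "tf.contrib.image.rotate": "tfa.image.rotate",
    "tf.contrib.image.translate": "tfa.image.translate",
    "tf.contrib.image.transform": "tfa.image.transform",
}
```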