Vectorize Auto Contrast. #1382
Can you do the same with

@bhack Similar behaviour:

Are these on CPU or on GPU?

CPU

Is this the result of

Single value:
I suppose not: keras-cv/keras_cv/layers/preprocessing/random_contrast.py Lines 58 to 66 in 4e2d660
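For context, the scalar-vs-tuple distinction for `factor` is what the linked lines decide; here is a hypothetical sketch of how such an argument can be normalized (illustrative only, not the actual keras-cv source; the function name is made up):

```python
# Hypothetical sketch (not the keras-cv code): normalize a `factor`
# given as either a scalar or a (lower, upper) tuple into a
# contrast-sampling range around 1.0.
def parse_contrast_factor(factor):
    if isinstance(factor, (tuple, list)):
        lower, upper = factor
    else:
        # A single value is treated as a symmetric range around 0.
        lower, upper = -factor, factor
    return 1.0 + lower, 1.0 + upper

print(parse_contrast_factor(0.5))     # (0.5, 1.5)
print(parse_contrast_factor((0, 1)))  # (1.0, 2.0)
```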
So it could be interesting to test; please also try to use a realistic input size (WxH) for production use cases.
@bhack I don't think there is a bug:
Here is a basic script that I'm using:

```python
import time

import tensorflow as tf

# RandomContrast comes from keras_cv; VectorizedRandomContrast is the
# local vectorized implementation under test.
# layer = RandomContrast((0, 1))
layer = VectorizedRandomContrast((0, 1))

@tf.function
def graph_mode(inputs):
    return layer(inputs)

@tf.function(jit_compile=True)
def xla_mode(inputs):
    return layer(inputs)

for batch_size in range(8, 132, 8):
    input_shape = (batch_size, 224, 224, 3)
    batch = tf.random.uniform(input_shape)
    for _ in range(10):  # warm-up calls, not timed
        layer(batch)
    results = []
    for _ in range(100):
        batch = tf.random.uniform(input_shape)
        start = time.perf_counter()
        layer(batch)
        stop = time.perf_counter()
        results.append(stop - start)
    print(batch_size, float(tf.reduce_mean(results).numpy()))
```
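Note that the script defines `graph_mode` and `xla_mode` but only times the plain eager call; a minimal extension (my assumption, not part of the original script) that benchmarks the compiled paths with the same warm-up/measure pattern could look like this:

```python
# Sketch (not from the thread): time the tf.function and XLA-compiled
# wrappers defined above; the warm-up loop also triggers tracing.
def benchmark(fn, input_shape, warmup=10, iters=100):
    batch = tf.random.uniform(input_shape)
    for _ in range(warmup):
        fn(batch)
    times = []
    for _ in range(iters):
        batch = tf.random.uniform(input_shape)
        start = time.perf_counter()
        fn(batch)
        times.append(time.perf_counter() - start)
    return sum(times) / len(times)

for name, fn in [("graph", graph_mode), ("xla", xla_mode)]:
    print(name, benchmark(fn, (32, 224, 224, 3)))
```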
@bhack Sorry, my mistake in the previous comment. I'm using a tuple for factor, not a single value.
OK, sorry, my bad: you are right, it was a call, not a variable.
@sebastian-sz I still don't understand your code example:

```python
import tensorflow as tf
from tensorflow import keras
from keras_cv.layers.preprocessing import RandomContrast

layer = RandomContrast((0, 1))
input_shape = (20, 224, 224, 3)
batch = tf.random.uniform(input_shape)
layer(batch)
```

And if you add a debug print here: keras-cv/keras_cv/layers/preprocessing/random_contrast.py Lines 72 to 73 in b203717
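For illustration, one way such a debug print could be wired in (an assumption on my side; the hook and method name follow the keras-cv base-layer convention and may differ across versions):

```python
import tensorflow as tf
from keras_cv.layers.preprocessing import RandomContrast

# Sketch (assumption): wrap the layer's transformation sampling with a
# tf.print to see how often, and with what shape, the factor is drawn.
class DebugRandomContrast(RandomContrast):
    def get_random_transformation(self, **kwargs):
        factor = super().get_random_transformation(**kwargs)
        tf.print("sampled contrast factor:", factor)
        return factor

layer = DebugRandomContrast((0, 1))
layer(tf.random.uniform((4, 8, 8, 3)))  # one print per sampled factor
```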
@bhack Sorry for the confusion. In this case the bounds at keras-cv/keras_cv/layers/preprocessing/random_contrast.py Lines 59 to 61 in b203717 are passed to the uniform factor sampler: keras-cv/keras_cv/core/factor_sampler/uniform_factor_sampler.py Lines 39 to 47 in 4b08a7d

This sampler seems to be only a wrapper around tf.random.uniform. This is the value that can be printed via the debug print. Currently, in the case of the vectorized implementation, if we use this...
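A minimal sketch of such a sampler (my paraphrase of the linked code, not a verbatim copy):

```python
import tensorflow as tf

# Paraphrased sketch: a factor sampler that is essentially a thin
# wrapper around tf.random.uniform over [lower, upper).
class UniformFactorSampler:
    def __init__(self, lower, upper, seed=None):
        self.lower = lower
        self.upper = upper
        self.seed = seed

    def __call__(self, shape=(), dtype=tf.float32):
        return tf.random.uniform(
            shape, minval=self.lower, maxval=self.upper,
            seed=self.seed, dtype=dtype,
        )

sampler = UniformFactorSampler(1.0, 2.0)
print(sampler())      # one scalar factor per call
print(sampler((4,)))  # vectorized: one factor per image in a batch of 4
```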
So where is the within-batch randomization in RandomContrast?
@bhack Wait, it's not called per sample?

I have used that gist.
From the code it looks like it is sampling a new scalar per image: keras-cv/keras_cv/layers/preprocessing/base_image_augmentation_layer.py Lines 459 to 460 in b203717
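To make the per-image sampling concrete, here is a simplified sketch (my own, paraphrasing what the linked base-layer lines do) of mapping a single-image augmentation over the batch:

```python
import tensorflow as tf

# Simplified sketch: the non-vectorized path maps a single-image
# function over the batch, so a new scalar factor is drawn per image.
def augment_single_image(image):
    factor = tf.random.uniform((), minval=1.0, maxval=2.0)
    mean = tf.reduce_mean(image, axis=(0, 1), keepdims=True)
    return (image - mean) * factor + mean  # standard contrast adjustment

def augment_batch(images):
    return tf.map_fn(augment_single_image, images)

print(augment_batch(tf.random.uniform((4, 8, 8, 3))).shape)  # (4, 8, 8, 3)
```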
@sebastian-sz I suppose that...
@sebastian-sz Check the analysis at https://github.com/keras-team/keras-cv/issues#issuecomment-1424701740. That is why this benchmark seems so unexpected.
@sebastian-sz feel free to contribute this! Thanks a lot for the contribution! In general we should look to vectorize layers unless there’s a clear reason not to. |
Short Description
Testing the waters with this issue, as #1373 is merged.

How do you feel about migrating more layers to the vectorized approach? Below is a simple performance comparison of vectorized vs. current.

EDIT: misclicked and submitted the issue too early...
Eager performance is better.
Graph performance is the same.
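For reference, a minimal sketch of the vectorized idea (my own illustration, not the code from #1373 or from this issue): draw all per-image factors in one call and apply them by broadcasting, instead of mapping a single-image function over the batch.

```python
import tensorflow as tf

# Illustration only: sample one contrast factor per image in a single
# tf.random.uniform call and apply it via broadcasting.
def vectorized_random_contrast(images, lower=1.0, upper=2.0):
    batch_size = tf.shape(images)[0]
    factors = tf.random.uniform((batch_size, 1, 1, 1), lower, upper)
    # Per-image, per-channel mean, matching the usual contrast adjustment.
    means = tf.reduce_mean(images, axis=(1, 2), keepdims=True)
    return (images - means) * factors + means

print(vectorized_random_contrast(tf.random.uniform((32, 224, 224, 3))).shape)
```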