Alternative/current state of tf_cnn_benchmark #524
The benchmark is now unmaintained and untested. I do not recommend using it anymore. I think it still is functionally correct and I doubt it will perform worse than it previously did (but it's very possible I'm wrong). However, I recommend using the official models, as you pointed out.
Running the official models prints out the performance numbers, so they can be used as a benchmark. For example, you can run the official ResNet-50 model from source by following the instructions here with Method 2, navigating to the relevant directory, and running:

```
python train.py --logtostderr --model_dir=/tmp/model_dir --experiment=resnet_imagenet --mode=train --params_override=runtime.num_gpus=1,task.train_data.global_batch_size=64,task.train_data.input_path=<path-to-imagenet-tfrecords>/train*,task.validation_data.input_path=<path-to-imagenet-tfrecords>/valid* --config_file configs/experiments/image_classification/imagenet_resnet50_gpu.yaml
```

In the command above, you need to replace both instances of `<path-to-imagenet-tfrecords>` with the directory containing your ImageNet TFRecords.
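Since the official models log per-step timing rather than a single benchmark score, a small helper can turn those numbers into an images/sec figure for comparison. This is an illustrative sketch, not part of the models repository; the function name and the sample step durations below are made up for the example:

```python
# Illustrative helper (not from the official models repo): convert
# per-step wall-clock durations into a mean images/sec throughput.

def images_per_sec(step_seconds, global_batch_size, warmup_steps=1):
    """Mean throughput in images/sec, skipping warm-up steps.

    step_seconds: wall-clock duration (seconds) of each training step.
    global_batch_size: total batch size across all GPUs (e.g. 64 in the
        command above).
    warmup_steps: initial steps to discard, since they typically include
        graph tracing and autotuning overhead.
    """
    steady = step_seconds[warmup_steps:]
    if not steady:
        raise ValueError("need more measured steps than warmup_steps")
    return global_batch_size * len(steady) / sum(steady)

# Example: five measured steps at global batch size 64; the slow first
# (tracing) step is discarded before averaging.
print(images_per_sec([2.5, 0.10, 0.11, 0.10, 0.09], 64))  # -> 640.0
```

Discarding the first step matters in practice: TF2 traces the train function on the first call, so including it can understate steady-state throughput considerably.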
I miss you @reedwm
Hello community and devs,
a quick question from my side. I see that tf_cnn_benchmark is no longer actively maintained, which makes sense as a way to reduce the volume of code that must remain compatible with future TF versions. But I would like to understand whether this poses a severe issue for using the benchmark in the upcoming time. Is the code known to be incompatible, or to fall short of the expected performance, when used with, for instance, TF 2.8?
In other words: is tf_cnn_benchmark still in good shape, with only the promise of continued development and maintenance missing? Or is it already outdated?
And the documentation points towards the new TF2 models for benchmarking. Are you aware of an implementation of an actual benchmark based on those models that could serve as an alternative?
I would be happy to get a reply.
Cheers
Stefan