[RFC] v1.8.0 release #18800
@mxnet-label-bot update [RFC, Roadmap] |
These PRs related to the Partition API changes for Gluon support could also be added:
|
I would like to include the BatchNorm performance improvement PR for axis != 1 in 1.8. |
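For readers unfamiliar with the axis argument, here is a minimal sketch of the "axis != 1" case that PR targets (the shapes and layer below are purely illustrative, not code from the PR): with NHWC data the channel axis is 3 rather than the default 1.

```python
import mxnet as mx

# Hypothetical NHWC batch: channels sit on axis 3 instead of the default axis 1.
x = mx.nd.random.uniform(shape=(8, 32, 32, 16))

# BatchNorm over a non-default axis; per the PR title, this case is sped up by
# dispatching to the mkldnn/cudnn implementations instead of the generic one.
bn = mx.gluon.nn.BatchNorm(axis=3)
bn.initialize()
y = bn(x)
print(y.shape)  # (8, 32, 32, 16)
```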
A major feature of CUDA 11 and cuDNN 8.0 is support for the new A100 GPU and its TensorFloat-32 (TF32) mode of computation. I would like to include PR #18694, "Unittest tolerance handling improvements", which allows MXNet to use TF32 effectively. The PR also makes sensible adjustments to the unittest tolerances based on device context and dtype, ensuring A100 compatibility with our unittest suite. With cuDNN 8.0 also comes compatibility with CUDA Graph Capture. I would like to include a PR (near complete, but not yet submitted) that enables CUDA Graph use. This will permit MXNet to bypass much of the CPU preparation for launching identical kernel sequences, as are commonly seen in many deep learning training and inference environments. |
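As a rough sketch of the tolerance-handling idea (the helper and the specific tolerance values below are assumptions for illustration, not code from #18694): comparisons against reference results can use wider tolerances when the computation runs in a lower-precision mode such as float16, or float32 with TF32 active.

```python
import numpy as np
import mxnet as mx
from mxnet.test_utils import assert_almost_equal

def tolerances_for(dtype):
    """Illustrative per-dtype (rtol, atol) pairs; the values are made up for this sketch."""
    return {np.float16: (1e-2, 1e-2),   # loose: half precision
            np.float32: (1e-4, 1e-4),   # would be loosened further when TF32 is active
            np.float64: (1e-7, 1e-7)}[dtype]

dtype = np.float32
a = mx.nd.random.uniform(shape=(3, 4)).astype(dtype)
b = a + 1e-6                      # stand-in for the same result from another code path
rtol, atol = tolerances_for(dtype)
assert_almost_equal(a.asnumpy(), b.asnumpy(), rtol=rtol, atol=atol)
```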
|
I would like to include the update of oneDNN to v1.6: #18867. |
Hey @wkcn can you create a PR to backport this change to v1.x? |
Hi @samskalicky, I created PR #18910 to backport it : ) |
I would also like to see the "Duplicate subgraph input and output" fix in MXNet 1.8. If I am not mistaken, that PR had some test failures. I can do the PR or fix the old one if we approve it for 1.8 |
I would like to include one more PR, #18218, which helps us build MXNet on Windows. |
@stu1130 the PR you reference updates the CI configuration to work around a bug in NVIDIA CUDA. Unfortunately NVIDIA will not fix the bug in CUDA 10. Just backporting the PR may not help users building MXNet from source. It may be more helpful to add a paragraph to the installation docs recommending that Windows users build with CUDA 11 or manually patch their CUDA 10 installation to backport the thrust changes included in CUDA 11. But ideally NVIDIA would just fix the bugs in CUDA 10 as well, given this bug affects all CUDA 10 users on Windows (not just MXNet): NVIDIA/thrust#1090 |
@leezu There is a CUDA 11 update planned for 1.8. @DickJC123 is planning to do this work. Would we want #18218 anyway? |
@samskalicky generally MXNet supports the last 2 CUDA versions, so MXNet 1.8 would support CUDA 11 and CUDA 10.2 |
@leezu @samskalicky I agree with adding a doc for this and will raise the PR later. But I would prefer to also include the code change so users don't need to patch it manually, and so the DJL team doesn't forget to patch it, unless the patch causes another problem. |
Just to summarize, we're all on board with including #18218 in v1.8. Just wanted to clarify that this PR makes the CI more stable, but doesn't help users with the same problem after they install MXNet on Windows, since thrust is a dependency that comes with whatever version of CUDA a user has set up on their machine. If we can document this issue too that would be helpful, but that is separate from getting #18218 backported to v1.x. So @stu1130 go ahead and create the backport PR and we'll track that for v1.8. Thanks! |
@samskalicky what's the timeline for 1.8? Is anyone working on updating the wiki? |
Hi @pengzhao-intel we haven't set any dates for the 1.8 release yet. Thanks for bringing this up. I have been holding off while waiting on the vote on general@ to complete for the v1.7 release. But we should start to formalize the 1.8 plans now that we're moving forward with backporting PRs and have consensus from the community on having another 1.x release before 2.0. I would like to propose a feature freeze one month from today, on September 18th. Does that work for those who intend to submit PRs for the v1.8 release, @DickJC123 @stu1130 @Kh4L and others? Once we decide on a feature freeze date we'll work backwards from there, and I'll formally set up the release plan with @ChaiBapchya and @josephevans, who will be co-managing the v1.8 release with me. |
@samskalicky that works for me and for the TensorRT-related PRs I am backporting. |
A GitHub milestone seems to be a suitable tool for tracking release progress. I created one and added all the items mentioned so far so we can try it out: https://github.com/apache/incubator-mxnet/milestone/5 |
I'd like to include a fix for NaiveEngine::PushAsync #19122 |
@szha I found that training with mx.mod.Module with MXNET_BACKWARD_DO_MIRROR set to 1 takes more GPU memory than a Gluon HybridBlock, because when MXNET_BACKWARD_DO_MIRROR is set to 1, MXNET_USE_FUSION must also be set to 1 (it seems that relu gets fused). Does this mean that Gluon does not need MXNET_BACKWARD_DO_MIRROR? Or that we cannot generate a Symbol from a HybridBlock and must write the network with the pure symbol API? I tested the memory consumption with the following code:

```python
import mxnet as mx
import mxnet.autograd as ag


class NaiveDataset(object):
    """Synthetic dataset: even indices return a zeros image with class 0, odd indices a ones image with class 1."""
    def __len__(self):
        return 10000

    def __getitem__(self, idx):
        if idx % 2 == 0:
            label = mx.nd.zeros(shape=(1000,))
            label[0] = 1
            return mx.nd.array(mx.nd.zeros(shape=(3, 224, 224))), label
        else:
            label = mx.nd.zeros(shape=(1000,))
            label[1] = 1
            return mx.nd.array(mx.nd.ones(shape=(3, 224, 224))), label


def train_gluon_model_with_module():
    import os
    # os.environ["MXNET_BACKWARD_DO_MIRROR"] = "1"
    # os.environ["MXNET_USE_FUSION"] = "0"
    ctx_list = [mx.gpu(0)]
    from models.backbones.resnet._resnetv1b import resnet50_v1b
    net = resnet50_v1b(pretrained=False)
    # net = mx.gluon.model_zoo.vision.resnet50_v1(pretrained=False)
    net.initialize()
    _ = net(mx.nd.zeros(shape=(1, 3, 224, 224)))
    arg_params = {}
    aux_params = {}
    arg_params_collected = net.collect_params()
    for k in arg_params_collected:
        arg_params[k] = arg_params_collected[k].data(mx.cpu())
    for k in arg_params_collected:
        aux_params[k] = arg_params_collected[k].data(mx.cpu())
    data = mx.sym.var(name="data")
    sym = net(data)
    module = mx.mod.Module(sym, data_names=['data'], label_names=[], context=ctx_list)
    module.bind(data_shapes=[("data", (len(ctx_list) * 2, 3, 224, 224))])
    module.init_params(arg_params=arg_params, aux_params=aux_params, allow_missing=False, allow_extra=True)
    module.init_optimizer(force_init=True)
    train_loader = mx.gluon.data.DataLoader(dataset=NaiveDataset(), batch_size=100,
                                            num_workers=8, last_batch="discard", shuffle=True,
                                            thread_pool=False)
    for data_batch in train_loader:
        module_data_batch = mx.io.DataBatch(data=[data_batch[0], ], label=None)
        module.forward(module_data_batch, is_train=True)
        y_hat = module.get_outputs(merge_multi_context=True)
        label_list = mx.gluon.utils.split_and_load(data_batch[1], ctx_list=ctx_list, batch_axis=0)
        preds_list = mx.gluon.utils.split_and_load(y_hat[0], ctx_list=ctx_list, batch_axis=0)
        pred_grad_list = []
        for pred, label in zip(preds_list, label_list):  # type: mx.nd.NDArray, mx.nd.NDArray
            pred.attach_grad()
            label.attach_grad()
            with ag.record():
                pred_log_softmax = mx.nd.log_softmax(pred, axis=1)
                loss = pred_log_softmax * label * -1
            loss.backward()
            pred_grad_list.append(pred.grad)
        pred_gradients = mx.nd.concatenate(pred_grad_list, axis=0)
        module.backward([pred_gradients])
        module.update()
        print(loss.sum().asnumpy())
    mx.nd.waitall()


def train_gluon_model_with_gluon():
    ctx_list = [mx.gpu(0)]
    net = mx.gluon.model_zoo.vision.resnet50_v1(pretrained=False)
    net.initialize()
    net.collect_params().reset_ctx(ctx_list)
    net.hybridize(static_alloc=True)
    trainer = mx.gluon.Trainer(
        net.collect_params(),  # fix batchnorm, fix first stage, etc...
        'sgd',
        {
            'learning_rate': 1e-2
        },
    )
    train_loader = mx.gluon.data.DataLoader(dataset=NaiveDataset(), batch_size=100,
                                            num_workers=8, last_batch="discard", shuffle=True,
                                            thread_pool=False)
    for data_batch in train_loader:
        data_list = mx.gluon.utils.split_and_load(data_batch[0], ctx_list=ctx_list, batch_axis=0)
        label_list = mx.gluon.utils.split_and_load(data_batch[1], ctx_list=ctx_list, batch_axis=0)
        losses = []
        for data, label in zip(data_list, label_list):  # type: mx.nd.NDArray, mx.nd.NDArray
            with ag.record():
                y_hat = net(data)
                pred_log_softmax = mx.nd.log_softmax(y_hat, axis=1)
                loss = pred_log_softmax * label * -1
            losses.append(loss)
        ag.backward(losses)
        trainer.step(1)
        print(loss.sum().asnumpy())
    mx.nd.waitall()


if __name__ == '__main__':
    # train_gluon_model_with_module()
    train_gluon_model_with_gluon()
```

By default, train_gluon_model_with_module and train_gluon_model_with_gluon need almost the same amount of GPU memory, but if MXNET_BACKWARD_DO_MIRROR is set to 1 and MXNET_USE_FUSION is set to 0, train_gluon_model_with_module fails with an OOM exception. |
@kohillyang thanks for the update. Would you mind opening a separate issue to track debugging? If you do end up opening a PR with a fix please post back here so we can track that for the release. Thank you! |
@kohillyang mirror and fusion are two orthogonal techniques that help save memory, and Gluon (or specifically CachedOp) still needs to implement mirroring. I can help elaborate more once you open a tracking issue. |
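To make the orthogonality concrete, the two mechanisms are toggled by separate environment variables, which typically have to be set before MXNet is imported. A minimal sketch (the particular combination shown is only for illustration):

```python
import os

# Trade memory for recompute: selected activations are recomputed in the
# backward pass instead of being kept alive (symbolic executor feature).
os.environ["MXNET_BACKWARD_DO_MIRROR"] = "1"

# Pointwise operator fusion is an independent switch; it can be turned on or
# off regardless of the mirroring setting.
os.environ["MXNET_USE_FUSION"] = "0"

import mxnet as mx  # MXNet reads these settings when it initializes
```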
@samskalicky @szha Thank you very much. I created an issue for this #19133. |
Can we add the fix for ElemwiseSum too? #19200 |
Can someone please review this simple one? It's a backport of a bug fix first reported for v1.6: #19095 |
Please include the fix for GCC 8 with oneDNN v1.6.4: #19251 |
@samskalicky It seems that my merged BUG fix (for |
Hi @Neutron3529 it looks like #18423 was merged to the master branch. The v1.8.0 release (and rc0/rc1) branched off of the v1.x branch and since the PR was not backported it is not included in this release. |
How to merge the PR to 1.x? |
Can #19410 be merged? |
Is the fix for missing mkldnn headers (#18608) included? |
@bartekkuncer I think so: 2aa2702 |
Any timeline updates? The tables at https://cwiki.apache.org/confluence/display/MXNET/1.8.0+Release+Plan+and+Status seem out of date. |
The current status is here:
https://lists.apache.org/thread.html/r0bcc91647e8d199eb138e13644aa765167402e778227dbf7c6d50a84%40%3Cdev.mxnet.apache.org%3E |
Thanks for that |
@samskalicky @stu1130 @szha hello, could you please share the approximate plans for releasing 1.8.0 to pip? |
Hi @lgg, there is currently no date for pip. We're still working on making the pip packaging compliant with ASF. We'll send out an update on dev@ when we have more info. |
@samskalicky is there a timeline for mac wheels being available on pip? I noticed the post0 release of 1.8 for Linux on 3/30, but Mac wheels are still missing. |
@fhieber currently we only release Linux wheels as part of our release process. The Windows and Mac wheels are currently built by other community members (@yajiedesign for Windows, @szha for Mac) outside of the normal release process. At some point in the future we hope to integrate the building of these into the release process, but we're not there yet. |
I see, thanks. It would be helpful to make this clear on the website, where the macOS/Python/{CPU,GPU} selection still shows the pip option. |
I think we have been supporting Mac and Windows so far; it's just that @yajiedesign and I are part of the release process. I'm currently working on producing the Mac wheels for 1.8. By the way, I believe @access2rohit is currently making the CD for Mac available in the main repo in #19957. |
Mac 1.8 wheels should be available now. |
@yajiedesign hey, any idea when the 1.8 wheels for Windows will be available? |
Hi MXNet community,
Now that 1.7.0 release development is closed (and the release process is underway), I wanted to start a discussion around another 1.x-based release. There are many users that will continue to use 1.x for the foreseeable future while the community transitions to 2.x. Some examples are those using toolkits (e.g. GluonCV/NLP/etc.) that are pinned to a 1.x version of MXNet.
Are there features that we want to make available in a 1.8.0 release while the 2.0 transition is ongoing?
Feature freeze (code freeze) was September 18th.
Deferred items
Completed items
- [v1.x] backport Invoke mkldnn and cudnn BatchNorm when axis != 1 #18890 [@stu1130 merged]
- [v1.x] backport remove upper bound (#18857) #18910 [@wkcn merged]
- [1.x][submodule] Upgrade to oneDNN v1.6.1 #18867 [@bartekkuncer merged]
- [1.x] Backporting backward attributes inference from master #18895 [@Kh4L merged]
- [1.x] Backporting TensorRT-Gluon Partition API (and TensorRT 7 support) #18916 [@Kh4L merged]
- [1.x] Backporting #18779 to v1.x #18894 [@samskalicky merged]
- [1.x] Backport: Change Partition API's options_map to std::unordered_map #18929 #18964 [@Kh4L merged]
- 1.x: Stop packaging GPL libquadmath.so #19055 [@leezu merged]
- [v1.x] Update onnx support to work with onnx 1.7.0 with most CV models #19017 [@josephevans merged]
- [1.x] Fix race condition in NaiveEngine::PushAsync #19122 [@leezu merged]
- [1.x] TensorRT: add INT8 with calibration #19011 [@Kh4L merged]
- [1.x] Backport 'Update CUB and include it only for CUDA < 11 #18799' #18975 [@Kh4L merged]
- [1.x] Backport Fix for duplicate subgraph inputs/outputs (#16131) #19112 [@samskalicky merged]
- [1.x] Backport of intgemm #17559 #19099 [@kpuatamazon CI failures]
- Add cmake flag USE_FATBIN_COMPRESSION, ON by default #19123 [@DickJC123 merged]
- [v1.8][Port PR] Port padding fix #19167 [@sxjscience merged]
- [1.x] Backport Add cmake flag USE_FATBIN_COMPRESSION, ON by default (#19123) #19158 [@DickJC123 waiting on CI]
- [v1.x] Add new CI pipeline for building and testing with cuda 11.0. #19149 [@josephevans merged]
- [v1.x] Backport Unittest tolerance handling improvements (#18694). Also test seeding (#18762). #19148 [@DickJC123 merged]
- [v1.x] Backport Improve environment variable handling in unittests (#18424) #19173 [@DickJC123 merged]
- [1.x][FEATURE] CUDA graphs support #19142 [@ptrendx merged]