Conversation
@mxnet-label-bot add [pr-work-in-progress]
You can use this [script](https://github.com/Arsey/keras-transfer-learning-for-oxford102/blob/master/bootstrap.py) to download and organize your data into train, test, and validation sets. Simply run:
```python
python bootstrap.py
```
we should use DataLoaders/Transforms here?
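For reference, a minimal sketch of that suggestion, assuming the bootstrap script above has already organized the images into per-class folders (the `./data/train` path, batch size, and normalization constants are illustrative, not from the tutorial):
```python
from mxnet import gluon
from mxnet.gluon.data.vision import transforms

# Standard augmentation pipeline for ImageNet-pretrained models.
transform_train = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomFlipLeftRight(),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
])

# ImageFolderDataset infers labels from the per-class folder layout;
# DataLoader batches and shuffles it for training.
train_dataset = gluon.data.vision.ImageFolderDataset('./data/train')
train_data = gluon.data.DataLoader(
    train_dataset.transform_first(transform_train),
    batch_size=32, shuffle=True)
```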
@nswamy @ThomasDelteil addressed the feedback, could you take another look? Thanks!
@ThomasDelteil @nswamy @aaronmarkham Can you take a look again?
Spelling and grammar fixes have been provided.
This is a great tutorial. I do have a suggestion though: break it up. This will get filed under the gluon (Python) folder. Why not split it, so that the Python transfer learning material goes in the gluon folder and the CPP material goes in the CPP folder? Then, if you add further examples of how to take the new model and host it with MMS or use the Java API, you can link to those tutorials too. Otherwise, people browsing the CPP folder won't see this great example...
If you do decide to break it up, please apply/commit my spelling and grammar fixes first, so that those change requests don't get lost in the shuffle.
```python
import math
import mxnet as mx

iterations_per_epoch = math.ceil(num_batch)
# the learning rate changes at the following steps
lr_steps = [epoch * iterations_per_epoch for epoch in lr_epochs]
schedule = mx.lr_scheduler.MultiFactorScheduler(step=lr_steps, factor=lr_factor, base_lr=lr)
```
You can pass this schedule directly into your trainer rather than calling it on every iteration; check the tutorials on learning rate schedulers.
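For context, a minimal sketch of that suggestion, building on the `schedule` from the diff above (`net` and `lr` stand in for the tutorial's network and base learning rate):
```python
import mxnet as mx
from mxnet import gluon

# Attach the schedule to the optimizer; the trainer then adjusts the
# learning rate automatically at each update step, so no manual calls
# to the scheduler are needed inside the training loop.
sgd_optimizer = mx.optimizer.SGD(learning_rate=lr, lr_scheduler=schedule)
trainer = gluon.Trainer(net.collect_params(), optimizer=sgd_optimizer)
```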
Hi @aaronmarkham @ThomasDelteil, I have addressed your comments and separated this into two tutorials. Please let me know if it's good to go. Thanks!
@aaronmarkham @ThomasDelteil could you help merge if it looks good? Thanks!
* initial draft gluon tutorial
* add reference
* add cpp inference
* improve wording
* address pr comments
* add util functions on dataset
* move util file
* update link
* fix typo, add test
* allow download
* update wording
* update links
* address comments
* use lr scheduler with optimizer
* separate into 2 tutorials
* add c++ tutorial to test whitelist
@roywei Have you added this tutorial to the index? Can't find it on the website |
Description
This tutorial aims to give Gluon users an overview of the end-to-end workflow from training to inference. Since the workflow covers many steps, we use a simple fine-tuning example and provide links to more detailed tutorials on each specific topic.
The sections include:
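As a preview of the core step, here is a minimal sketch of fine-tuning with the Gluon model zoo (the choice of ResNet-50 and the class count are illustrative assumptions, not fixed by this description):
```python
import mxnet as mx
from mxnet import gluon, init
from mxnet.gluon.model_zoo import vision

num_classes = 102  # e.g. the Oxford-102 flowers dataset (assumption)
ctx = mx.cpu()

# Load an ImageNet-pretrained network and swap in a fresh output layer
# sized for the target dataset; only the new layer needs initialization.
finetune_net = vision.resnet50_v2(pretrained=True, ctx=ctx)
with finetune_net.name_scope():
    finetune_net.output = gluon.nn.Dense(num_classes)
finetune_net.output.initialize(init.Xavier(), ctx=ctx)
finetune_net.hybridize()
```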
Checklist
Essentials
Please feel free to remove inapplicable items for your PR.
Changes
Comments