[WIP] update documents #336

Draft
wants to merge 2 commits into master

52 changes: 52 additions & 0 deletions docs/api.rst
@@ -0,0 +1,52 @@
KungFu APIs
===========

KungFu provides high-level optimizer APIs that
allow you to transparently scale out training.
It also has a low-level API that makes it easy to implement
distributed training strategies.
The following is the public API we have released so far.

Distributed optimizers
----------------------

KungFu provides optimizers that implement various distributed training algorithms.
These optimizers transparently scale out training programs written with
`tf.train.Optimizer <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/train/Optimizer>`_
and `tf.keras.optimizers.Optimizer <https://www.tensorflow.org/api_docs/python/tf/keras/optimizers/Optimizer>`_.

.. automodule:: kungfu.tensorflow.optimizers
   :members:
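
For example, a KungFu optimizer wraps an existing TensorFlow optimizer and is
then used as a drop-in replacement. Below is a minimal sketch that assumes the
``SynchronousSGDOptimizer`` wrapper listed above:

.. code-block:: python

   import tensorflow as tf
   from kungfu.tensorflow.optimizers import SynchronousSGDOptimizer

   # A toy model: fit a single weight against a quadratic loss.
   w = tf.Variable(1.0)
   loss = tf.square(w - 3.0)

   # Build the local optimizer as usual, then wrap it so that
   # gradients are averaged across all workers before being applied.
   opt = tf.train.GradientDescentOptimizer(learning_rate=0.1)
   opt = SynchronousSGDOptimizer(opt)

   # The wrapped optimizer is used exactly like the original one.
   train_op = opt.minimize(loss)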

Global variable initializers
----------------------------

KungFu provides various initializers to help you synchronize
the global variables of distributed training workers at
the beginning of training. These initializers can be used
with ``tf.Session``, ``tf.estimator``, ``tf.GradientTape``
and ``tf.keras``.

.. automodule:: kungfu.tensorflow.initializer
   :members:
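
For example, with ``tf.Session`` you would run a broadcast once after the
local variable initializer. Below is a minimal sketch that assumes the
``BroadcastGlobalVariablesOp`` helper listed above:

.. code-block:: python

   import tensorflow as tf
   from kungfu.tensorflow.initializer import BroadcastGlobalVariablesOp

   # A toy model, trained with plain gradient descent.
   w = tf.Variable(1.0)
   loss = tf.square(w - 3.0)
   train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

   with tf.Session() as sess:
       sess.run(tf.global_variables_initializer())
       # Broadcast the variables of worker 0 so that every worker
       # starts training from the same model state.
       sess.run(BroadcastGlobalVariablesOp())
       for _ in range(10):
           sess.run(train_op)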

Cluster management
------------------

When scaling out training, you often need to adjust
the parameters of your training program, for example
to shard the training dataset or to scale the learning rate
of the optimizer. This can be done using the following
cluster management APIs.

.. automodule:: kungfu.python
   :members:
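
For example, a common pattern is to scale the learning rate and shard the
dataset by the cluster size. Below is a minimal sketch that assumes the
``current_cluster_size`` and ``current_rank`` functions listed above:

.. code-block:: python

   import tensorflow as tf
   from kungfu.python import current_cluster_size, current_rank

   # Scale the learning rate with the number of workers, a common
   # heuristic for synchronous SGD.
   learning_rate = 0.01 * current_cluster_size()

   # Shard the training data so that each worker reads a distinct subset.
   dataset = tf.data.Dataset.range(1000)
   dataset = dataset.shard(num_shards=current_cluster_size(),
                           index=current_rank())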

TensorFlow operators
--------------------

KungFu provides TensorFlow operators to help you implement
new distributed training optimizers.

.. automodule:: kungfu.tensorflow.ops
   :members:
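
For example, a new synchronous optimizer typically aggregates local gradients
with an all-reduce. Below is a minimal sketch that assumes the
``group_all_reduce`` operator listed above:

.. code-block:: python

   import tensorflow as tf
   from kungfu.python import current_cluster_size
   from kungfu.tensorflow.ops import group_all_reduce

   # Two toy gradient tensors produced by one worker.
   grads = [tf.constant([1.0, 2.0]), tf.constant([3.0])]

   # Sum each tensor across all workers, then divide by the cluster
   # size to obtain averaged gradients.
   summed_grads = group_all_reduce(grads)
   avg_grads = [g / current_cluster_size() for g in summed_grads]
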
53 changes: 3 additions & 50 deletions docs/index.rst
@@ -190,58 +190,11 @@ You simply pass an extra `with_keras` flag to both KungFu optimizers and
Keras callback to tell KungFu you are using Keras not TensorFlow.
Here is a full Keras training example: `Keras <https://github.com/lsds/KungFu/blob/master/examples/keras_mnist.py>`_

KungFu APIs
===========

KungFu has the high-level optimizer APIs that
allows you to transparently scale out training.
It also has a low-level API that allows an easy implementation
of distributed training strategies.
The following is the public API we released so far.
.. toctree::
   :maxdepth: 2

Distributed optimizers
----------------------

KungFu provides optimizers that implement various distributed training algorithms.
These optimizers are used for transparently scaling out the training of
`tf.train.Optimizer <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/train/Optimizer>`_
and `tf.keras.optimizers.Optimizer <https://www.tensorflow.org/api_docs/python/tf/keras/optimizers/Optimizer>`_

.. automodule:: kungfu.tensorflow.optimizers
:members:

Global variable initializers
----------------------------

KungFu provide various initializers to help you synchronize
the global variables of distributed training workers at
the beginning of training. These initializers are used
with ``tf.session``, ``tf.estimator``, ``tf.GradientTape``
and ``tf.keras``, respectively.

.. automodule:: kungfu.tensorflow.initializer
:members:

Cluster management
------------------

When scaling out training, you often want to adjust
the parameters of your training program, for example,
sharding the training dataset or scaling the learning rate
of the optimizer. This can be achieved using the following
cluster management APIs.

.. automodule:: kungfu.python
:members:

TensorFlow operators
--------------------

KungFu provides TensorFlow operators to help you realise
new distributed training optimizers.

.. automodule:: kungfu.tensorflow.ops
:members:
   api

Indices and tables
==================