docs(docker) Add ServerApps docs #3439

Merged: 6 commits, May 16, 2024

Changes from 1 commit
1 change: 1 addition & 0 deletions doc/source/contributor-how-to-build-docker-images.rst
@@ -124,6 +124,7 @@ well as the name and tag can be adapted to your needs. These values serve as examples

If you want to use your own base image instead of the official Flower base image, all you need to do
is set the ``BASE_REPOSITORY``, ``PYTHON_VERSION`` and ``UBUNTU_VERSION`` build arguments.

.. code-block:: bash

$ cd src/docker/superlink/
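# A sketch of a build invocation: BASE_REPOSITORY, PYTHON_VERSION and UBUNTU_VERSION are the
# build arguments named above, but the values and the image name/tag below are placeholders
# you would replace with your own.
$ docker build \
  --build-arg BASE_REPOSITORY=my_registry/my_base_image \
  --build-arg PYTHON_VERSION=3.11 \
  --build-arg UBUNTU_VERSION=22.04 \
  -t flwr_superlink:0.0.1 .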
185 changes: 170 additions & 15 deletions doc/source/how-to-run-flower-using-docker.rst
@@ -2,7 +2,7 @@ Run Flower using Docker
=======================

The simplest way to get started with Flower is by using the pre-made Docker images, which you can
find on `Docker Hub <https://hub.docker.com/u/flwr>`_.
find on `Docker Hub <https://hub.docker.com/u/flwr>`__.

Before you start, make sure that the Docker daemon is running:
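
One simple way to check is to run a command that talks to the daemon; ``docker info`` here is
only an example, any daemon-backed command will do:

.. code-block:: bash

$ docker info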

@@ -112,13 +112,16 @@ building your own SuperNode image.
.. important::

The SuperNode Docker image currently works only with the 1.9.0-nightly release. A stable version
will be available when Flower 1.9.0 (stable) gets released (ETA: May). A SuperNode nightly image must be paired with the corresponding
SuperLink nightly image released on the same day. To ensure the versions are in sync, using the concrete
tag, e.g., ``1.9.0.dev20240501`` instead of ``nightly`` is recommended.
will be available when Flower 1.9.0 (stable) gets released (ETA: May). A SuperNode nightly image
must be paired with the corresponding SuperLink and ServerApp nightly images released on the same
day. To ensure the versions are in sync, using the concrete tag, e.g., ``1.9.0.dev20240501``
instead of ``nightly`` is recommended.
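
For example, pulling a matching set of images could look like the sketch below; the date tag is
the one mentioned above and only serves as an illustration:

.. code-block:: bash

$ docker pull flwr/superlink:1.9.0.dev20240501
$ docker pull flwr/supernode:1.9.0.dev20240501
$ docker pull flwr/serverapp:1.9.0.dev20240501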

We will use the ``app-pytorch`` example, which you can find in
the Flower repository, to illustrate how you can dockerize your client-app.
Contributor: Rename client-app > ClientApp for consistency.


.. _SuperNode Prerequisites:

Prerequisites
~~~~~~~~~~~~~

@@ -148,16 +151,16 @@ Let's assume the following project layout:

$ tree .
.
├── client.py # client-app code
├── task.py # client-app code
├── requirements.txt # client-app dependencies
├── client.py # ClientApp code
├── task.py # ClientApp code
├── requirements.txt # ClientApp dependencies
└── <other files>

First, we need to create a Dockerfile in the directory where the ``ClientApp`` code is located.
If you use the ``app-pytorch`` example, create a new file called ``Dockerfile`` in
If you use the ``app-pytorch`` example, create a new file called ``Dockerfile.supernode`` in
``examples/app-pytorch``.

The ``Dockerfile`` contains the instructions that assemble the SuperNode image.
The ``Dockerfile.supernode`` contains the instructions that assemble the SuperNode image.

.. code-block:: dockerfile

@@ -172,20 +175,36 @@

In the first two lines, we instruct Docker to use the SuperNode image tagged ``nightly`` as a base
image and set our working directory to ``/app``. The following instructions will now be
executed in the ``/app`` directory. Next, we install the ``ClientApp`` dependencies by copying the
executed in the ``/app`` directory. Next, we install the ClientApp dependencies by copying the
``requirements.txt`` file into the image and run ``pip install``. In the last two lines,
we copy the ``ClientApp`` code (``client.py`` and ``task.py``) into the image and set the entry
we copy the ClientApp code (``client.py`` and ``task.py``) into the image and set the entry
point to ``flower-client-app``.

.. important::

If the requirements.txt contains the `flwr <https://pypi.org/project/flwr/>`__ or
`flwr-nightly <https://pypi.org/project/flwr-nightly/>`_ package, please ensure the version in
requirements.txt matches the Docker image version.
Contributor: AFAIK, we can skip installing flwr as long as the correct Docker image is used. Maybe we can rephrase as:

⚠️ Note that flwr is already installed in the flwr/supernode base image, so you only need to include other package dependencies in your requirements.txt, such as torch, tensorflow, etc ...

Member Author: I think the way you phrased it sounds better. I will go with that. The reason I included the note is because I'm afraid users develop locally with a requirements.txt containing the flwr package and once they're done they just copy the same file into the Docker image.

Contributor: I can see why that can be the case. Since this is a self-contained "How-to" page, I think it's important to start from the simplest setup. This basic Dockerfile can then be expanded by users for other more complicated server setups.


Stable:

- Docker image: ``supernode:1.9.0``
- requirements.txt: ``flwr[simulation]==1.9.0``

Nightly:

- Docker image: ``supernode:1.9.0.dev20240501``
- requirements.txt: ``flwr-nightly[simulation]==1.9.0.dev20240501``
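
As a sanity check, you can print the Flower version that ships inside the base image and compare
it with the version pinned in your requirements.txt; this is only a sketch and assumes ``python``
is on the image's ``PATH``:

.. code-block:: bash

$ docker run --rm --entrypoint python flwr/supernode:nightly \
  -c "import flwr; print(flwr.__version__)"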

Building the SuperNode Docker image
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Next, we build the SuperNode Docker image by running the following command in the directory where
Dockerfile and client-app code are located.
Dockerfile and ClientApp code are located.

.. code-block:: bash

$ docker build -t flwr_supernode:0.0.1 .
$ docker build -f Dockerfile.supernode -t flwr_supernode:0.0.1 .

We gave the image the name ``flwr_supernode``, and the tag ``0.0.1``. Remember that the values
chosen here only serve as an example. You can change them to your needs.
@@ -206,7 +225,7 @@ Let's break down each part of this command:

* ``docker run``: This is the command to run a new Docker container.
* ``--rm``: This option specifies that the container should be automatically removed when it stops.
* | ``flwr_supernode:0.0.1``: The name the tag of the Docker image to use.
* ``flwr_supernode:0.0.1``: The name and the tag of the Docker image to use.
* | ``client:app``: The object reference of the ``ClientApp`` (``<module>:<attribute>``).
| It points to the ``ClientApp`` that will be run inside the SuperNode container.
* ``--insecure``: This option enables insecure communication.
@@ -245,6 +264,142 @@ certificate within the container. Use the ``--certificates`` flag when starting the container.
--server 192.168.1.100:9092 \
--certificates ca.crt
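
For completeness, here is a sketch of the full SuperNode run command without SSL, assembled from
the options described in the breakdown above; the SuperLink address is only an example and needs
to be replaced with your own:

.. code-block:: bash

$ docker run --rm flwr_supernode:0.0.1 client:app \
  --insecure \
  --server 192.168.1.100:9092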

Flower ServerApp
----------------

The procedure for building and running a ServerApp image is almost identical to the SuperNode image.
A key difference is the additional argument in the ``ENTRYPOINT`` command of the ServerApp
Dockerfile.
Contributor: "A key difference is the different argument in the ENTRYPOINT command of the ServerApp Dockerfile."

The server:app argument can be passed in the same way as the client:app argument.

Member Author: I removed it.


Similar to the SuperNode image, the ServerApp Docker image comes with a pre-installed version of
Flower and serves as a base for building your own ServerApp image.

We will use the same ``app-pytorch`` example as we do in the Flower SuperNode section.
If you have not already done so, please follow the `SuperNode Prerequisites`_ before proceeding.
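
If you have not yet checked out the example, one way to get it is to clone the Flower repository
and change into the example's directory; the commands below are only an illustration:

.. code-block:: bash

$ git clone --depth=1 https://github.com/adap/flower.git
$ cd flower/examples/app-pytorch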


Creating a ServerApp Dockerfile
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Let's assume the following project layout:

.. code-block:: bash

$ tree .
.
├── server.py # ServerApp code
├── task.py # ServerApp code
├── requirements.txt # ServerApp dependencies
└── <other files>

Contributor: Technically, ServerApp does not need any other packages other than flwr. Let's remove requirements.txt as one of the required files in this tree.

Member Author: In the examples/app-pytorch I need the requirements.txt because the ServerApp code uses the pytorch package, but we can create our own example. wdyt?

Contributor: Yes, let's adapt the examples. In fact, the quickstart-pytorch example is a better example to base this "How-to" on. For context, the app-pytorch uses a server-side model parameter initialization - that's why PyTorch is a dependency for running the ServerApp. For the quickstart-pytorch, it is a client-side parameter initialization - so all ML-related computation is done in the ClientApp. Can you please update the references to quickstart-pytorch instead?

Member Author: Oh, I didn't know that. That makes sense! I will update it to use the quickstart-pytorch example instead.

First, we need to create a Dockerfile in the directory where the ``ServerApp`` code is located.
If you use the ``app-pytorch`` example, create a new file called ``Dockerfile.serverapp`` in
``examples/app-pytorch``.

The ``Dockerfile.serverapp`` contains the instructions that assemble the ServerApp image.

.. code-block:: dockerfile

FROM flwr/serverapp:1.8.0

WORKDIR /app
COPY requirements.txt .
RUN python -m pip install -U --no-cache-dir -r requirements.txt && pyenv rehash
COPY server.py task.py ./
ENTRYPOINT ["flower-server-app", "server:app"]

Contributor: We can remove requirements.txt and the installation of it in the ServerApp's Dockerfile.

Member Author: Done 👍

In the first two lines, we instruct Docker to use the ServerApp image tagged ``1.8.0`` as a base
image and set our working directory to ``/app``. The following instructions will now be
executed in the ``/app`` directory. Next, we install the ServerApp dependencies by copying the
``requirements.txt`` file into the image and run ``pip install``. In the last two lines,
we copy the ServerApp code (``server.py`` and ``task.py``) into the image and set the entry
point to ``flower-server-app`` with the argument ``server:app``. The argument is the object
reference of the ServerApp (``<module>:<attribute>``) that will be run inside the ServerApp
container.

.. important::

If the requirements.txt contains the `flwr <https://pypi.org/project/flwr/>`__ or
`flwr-nightly <https://pypi.org/project/flwr-nightly/>`_ package, please ensure the version in
requirements.txt matches the Docker image version.

Stable:

- Docker image: ``serverapp:1.8.0``
- requirements.txt: ``flwr[simulation]==1.8.0``

Nightly:

- Docker image: ``serverapp:1.9.0.dev20240501``
- requirements.txt: ``flwr-nightly[simulation]==1.9.0.dev20240501``
Contributor: Since flwr is already installed in the base image, we can skip this step. Let's remove these lines.


Building the ServerApp Docker image
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Next, we build the ServerApp Docker image by running the following command in the directory where
Dockerfile and ServerApp code are located.

.. code-block:: bash

$ docker build -f Dockerfile.serverapp -t flwr_serverapp:0.0.1 .

We gave the image the name ``flwr_serverapp``, and the tag ``0.0.1``. Remember that the values
chosen here only serve as an example. You can change them to your needs.


Running the ServerApp Docker image
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Now that we have built the ServerApp image, we can finally run it.

.. code-block:: bash

$ docker run --rm flwr_serverapp:0.0.1 \
--insecure \
--server 192.168.1.100:9091
Contributor: Should we adopt the Docker network approach here (and also for SuperLink and SuperNode)? I think it will be clearer overall since we can just use the --name instead of an IP address --server.

docker run \
  --network flwr-net \
  --rm flwr/serverapp:1.8.0 \
  --insecure --server flwr-superlink:9091

Member Author: I'm not sure since this only works if all containers are running on the same machine. We could add a note wdyt?

Contributor: That's true. Maybe something like the following?

💡 To test running Flower locally, use the --network argument and pass the name of the Docker network to run your ServerApps.

Or did you have another comment in mind?

Member Author: I took your comment and added a link to the Docker documentation on creating a bridge network.


Let's break down each part of this command:

* ``docker run``: This is the command to run a new Docker container.
* ``--rm``: This option specifies that the container should be automatically removed when it stops.
* ``flwr_serverapp:0.0.1``: The name and the tag of the Docker image to use.
* ``--insecure``: This option enables insecure communication.

.. attention::

The ``--insecure`` flag enables insecure communication (using HTTP, not HTTPS) and should only be
used for testing purposes. We strongly recommend enabling
`SSL <https://flower.ai/docs/framework/how-to-run-flower-using-docker.html#enabling-ssl-for-secure-connections>`_
when deploying to a production environment.

* | ``--server 192.168.1.100:9091``: This option specifies the address of the SuperLink's Driver
| API to connect to. Remember to update it with your SuperLink IP.
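
If the SuperLink and ServerApp containers run on the same machine, the Docker bridge network
approach suggested above is an alternative to hard-coding an IP address. The sketch below assumes
the SuperLink container was started on the same network under the name ``flwr-superlink``; the
network name is just a placeholder:

.. code-block:: bash

$ docker network create --driver bridge flwr-net
$ docker run --rm --network flwr-net flwr_serverapp:0.0.1 \
  --insecure \
  --server flwr-superlink:9091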

.. note::

Any argument that comes after the tag is passed to the Flower ServerApp binary.
To see all available flags that the ServerApp supports, run:

.. code-block:: bash

$ docker run --rm flwr/serverapp:1.8.0 --help

Enabling SSL for secure connections
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To enable SSL, we will need to mount a PEM-encoded root certificate into your ServerApp container.

Assuming the certificate already exists locally, we can use the flag ``-v`` to mount the local
certificate into the container's ``/app/`` directory. This allows the ServerApp to access the
certificate within the container. Use the ``--certificates`` flag when starting the container.

.. code-block:: bash

$ docker run --rm -v ./ca.crt:/app/ca.crt flwr_serverapp:0.0.1 \
--server 192.168.1.100:9091 \
--certificates ca.crt

Contributor: Let's use the full argument for consistency: docker run --rm --volume ...

Advanced Docker options
-----------------------

@@ -253,7 +408,7 @@ Using a different Flower version

If you want to use a different version of Flower, for example Flower nightly, you can do so by
changing the tag. All available versions are on
`Docker Hub <https://hub.docker.com/r/flwr/superlink/tags>`_.
`Docker Hub <https://hub.docker.com/r/flwr/superlink/tags>`__.
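
For example, switching to the nightly build is just a matter of changing the tag; the command
below is a sketch of pulling it ahead of time:

.. code-block:: bash

$ docker pull flwr/superlink:nightly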

Pinning a Docker image to a specific version
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~