diff --git a/doc/source/conf.py b/doc/source/conf.py index 2d303a0ab89..2eb1d7585a2 100644 --- a/doc/source/conf.py +++ b/doc/source/conf.py @@ -270,6 +270,11 @@ def find_test_modules(package_path): "contributor-explanation-architecture": "explanation-flower-architecture.html", "example-pytorch-from-centralized-to-federated": "tutorial-quickstart-pytorch.html", "example-fedbn-pytorch-from-centralized-to-federated": "how-to-implement-fedbn.html", + "how-to-configure-logging": "index.html", + "how-to-monitor-simulation": "how-to-run-simulations.html", + "fed/index": "index.html", + "fed/0000-20200102-fed-template": "index.html", + "fed/0001-20220311-flower-enhancement-doc": "index.html", } # -- Options for HTML output ------------------------------------------------- diff --git a/doc/source/fed/0000-20200102-fed-template.md b/doc/source/fed/0000-20200102-fed-template.md deleted file mode 100644 index 39031c4520f..00000000000 --- a/doc/source/fed/0000-20200102-fed-template.md +++ /dev/null @@ -1,60 +0,0 @@ ---- -fed-number: 0000 -title: FED Template -authors: ['@adap'] -creation-date: 2020-01-02 -last-updated: 2020-01-02 -status: provisional ---- - -# FED Template - -## Table of Contents - -- [Table of Contents](#table-of-contents) -- [Summary](#summary) -- [Motivation](#motivation) - - [Goals](#goals) - - [Non-Goals](#non-goals) -- [Proposal](#proposal) -- [Drawbacks](#drawbacks) -- [Alternatives Considered](#alternatives-considered) -- [Appendix](#appendix) - -## Summary - -\[TODO - sentence 1: summary of the problem\] - -\[TODO - sentence 2: summary of the solution\] - -## Motivation - -\[TODO\] - -### Goals - -\[TODO\] - -### Non-Goals - -\[TODO\] - -## Proposal - -\[TODO\] - -## Drawbacks - -\[TODO\] - -## Alternatives Considered - -### \[Alternative 1\] - -\[TODO\] - -### \[Alternative 2\] - -\[TODO\] - -## Appendix diff --git a/doc/source/fed/0001-20220311-flower-enhancement-doc.md b/doc/source/fed/0001-20220311-flower-enhancement-doc.md deleted file mode 100644 index 037142e36f8..00000000000 --- a/doc/source/fed/0001-20220311-flower-enhancement-doc.md +++ /dev/null @@ -1,143 +0,0 @@ ---- -fed-number: 0001 -title: Flower Enhancement Doc -authors: ['@nfnt', '@orlandohohmeier'] -creation-date: 2022-03-11 -last-updated: 2022-12-12 -status: provisional ---- - -# Flower Enhancement Doc - -## Table of Contents - -- [Table of Contents](#table-of-contents) -- [Summary](#summary) -- [Motivation](#motivation) - - [Goals](#goals) - - [Non-Goals](#non-goals) -- [Proposal](#proposal) - - [Enhancement Doc Template](#enhancement-doc-template) - - [Metadata](#metadata) - - [Workflow](#workflow) -- [Drawbacks](#drawbacks) -- [Alternatives Considered](#alternatives-considered) - - [GitHub Issues](#github-issues) - - [Google Docs](#google-docs) - -## Summary - -A Flower Enhancement is a standardized development process to - -- provide a common structure for proposing larger changes -- ensure that the motivation for a change is clear -- persist project information in a version control system -- document the motivation for impactful user-facing changes -- reserve GitHub issues for tracking work in flight -- ensure community participants can successfully drive changes to completion across one or more releases while stakeholders are adequately represented throughout the process - -Hence, an Enhancement Doc combines aspects of - -- a feature, and effort-tracking document -- a product requirements document -- a design document - -into one file, which is created incrementally in collaboration with the 
community.
-
-## Motivation
-
-For far-reaching changes or features proposed to Flower, an abstraction beyond a single GitHub issue or pull request is required to understand and communicate upcoming changes to the project.
-
-The purpose of this process is to reduce the amount of "tribal knowledge" in our community. By moving decisions from Slack threads, video calls, and hallway conversations into a well-tracked artifact, this process aims to enhance communication and discoverability.
-
-### Goals
-
-Roughly any larger, user-facing enhancement should follow the Enhancement process. If an enhancement would be described in either written or verbal communication to anyone besides the author or developer, then consider creating an Enhancement Doc.
-
-Similarly, any technical effort (refactoring, major architectural change) that will impact a large section of the development community should also be communicated widely. The Enhancement process is suited for this even if it will have zero impact on the typical user or operator.
-
-### Non-Goals
-
-For small changes and additions, going through the Enhancement process would be time-consuming and unnecessary. This includes, for example, adding new Federated Learning algorithms, as these only add features without changing how Flower works or is used.
-
-Enhancements are different from feature requests, as they already provide a laid-out path for implementation and are championed by members of the community.
-
-## Proposal
-
-An Enhancement is captured in a Markdown file that follows a defined template and a workflow to review and store enhancement docs for reference — the Enhancement Doc.
-
-### Enhancement Doc Template
-
-Each enhancement doc is provided as a Markdown file with the following structure:
-
-- Metadata (as [described below](#metadata) in the form of a YAML preamble)
-- Title (same as in metadata)
-- Table of Contents (if needed)
-- Summary
-- Motivation
-  - Goals
-  - Non-Goals
-- Proposal
-  - Notes/Constraints/Caveats (optional)
-- Design Details (optional)
-  - Graduation Criteria
-  - Upgrade/Downgrade Strategy (if applicable)
-- Drawbacks
-- Alternatives Considered
-
-As a reference, this document follows the above structure.
-
-### Metadata
-
-- **fed-number** (Required) - The `fed-number` of the last Flower Enhancement Doc + 1. With this number, it becomes easy to reference other proposals.
-- **title** (Required) - The title of the proposal in plain language.
-- **status** (Required) - The current status of the proposal. See [workflow](#workflow) for the possible states.
-- **authors** (Required) - A list of authors of the proposal. These are simply their GitHub IDs.
-- **creation-date** (Required) - The date that the proposal was first submitted in a PR.
-- **last-updated** (Optional) - The date that the proposal was last changed significantly.
-- **see-also** (Optional) - A list of other proposals that are relevant to this one.
-- **replaces** (Optional) - A list of proposals that this one replaces.
-- **superseded-by** (Optional) - A list of proposals that this one supersedes.
-
-### Workflow
-
-The idea forming the enhancement should already have been discussed or pitched in the community. As such, it needs a champion, usually the author, who shepherds the enhancement. This person also has to find committers to Flower willing to review the proposal.
-
-New enhancements are checked in with a file name in the form of `NNNN-YYYYMMDD-enhancement-title.md`, with `NNNN` being the Flower Enhancement Doc number, to `enhancements`.
All enhancements start in `provisional` state as part of a pull request. Discussions are done as part of the pull request review. - -Once an enhancement has been reviewed and approved, its status is changed to `implementable`. The actual implementation is then done in separate pull requests. These pull requests should mention the respective enhancement as part of their description. After the implementation is done, the proposal status is changed to `implemented`. - -Under certain conditions, other states are possible. An Enhancement has the following states: - -- `provisional`: The enhancement has been proposed and is actively being defined. This is the starting state while the proposal is being fleshed out and actively defined and discussed. -- `implementable`: The enhancement has been reviewed and approved. -- `implemented`: The enhancement has been implemented and is no longer actively changed. -- `deferred`: The enhancement is proposed but not actively being worked on. -- `rejected`: The authors and reviewers have decided that this enhancement is not moving forward. -- `withdrawn`: The authors have withdrawn the enhancement. -- `replaced`: The enhancement has been replaced by a new enhancement. - -## Drawbacks - -Adding an additional process to the ones already provided by GitHub (Issues and Pull Requests) adds more complexity and can be a barrier for potential first-time contributors. - -Expanding the proposal template beyond the single-sentence description currently required in the features issue template may be a heavy burden for non-native English speakers. - -## Alternatives Considered - -### GitHub Issues - -Using GitHub Issues for these kinds of enhancements is doable. One could use, for example, tags, to differentiate and filter them from other issues. The main issue is in discussing and reviewing an enhancement: GitHub issues only have a single thread for comments. Enhancements usually have multiple threads of discussion at the same time for various parts of the doc. Managing these multiple discussions can be confusing when using GitHub Issues. - -### Google Docs - -Google Docs allow for multiple threads of discussions. But as Google Docs are hosted outside the project, their discoverability by the community needs to be taken care of. A list of links to all proposals has to be managed and made available for the community. Compared to shipping proposals as part of Flower's repository, the potential for missing links is much higher. diff --git a/doc/source/fed/index.md b/doc/source/fed/index.md deleted file mode 100644 index 4f680d9367c..00000000000 --- a/doc/source/fed/index.md +++ /dev/null @@ -1,9 +0,0 @@ -# FED - Flower Enhancement Doc - -```{toctree} ---- -maxdepth: 1 ---- -0000-20200102-fed-template.md -0001-20220311-flower-enhancement-doc -``` diff --git a/doc/source/how-to-configure-logging.rst b/doc/source/how-to-configure-logging.rst deleted file mode 100644 index bb7461390b4..00000000000 --- a/doc/source/how-to-configure-logging.rst +++ /dev/null @@ -1,148 +0,0 @@ -Configure logging -================= - -The Flower logger keeps track of all core events that take place in federated learning -workloads. It presents information by default following a standard message format: - -.. code-block:: python - - DEFAULT_FORMATTER = logging.Formatter( - "%(levelname)s %(name)s %(asctime)s | %(filename)s:%(lineno)d | %(message)s" - ) - -containing relevant information including: log message level (e.g. 
``INFO``, ``DEBUG``), a timestamp, the file and line where the logging call was made, as well as the log message itself. In this way, the logger would typically display information on your terminal as follows:
-
-.. code-block:: bash
-
-    ...
-    INFO flwr 2023-07-15 15:32:30,935 | server.py:125 | fit progress: (3, 392.5575705766678, {'accuracy': 0.2898}, 13.781953627998519)
-    DEBUG flwr 2023-07-15 15:32:30,935 | server.py:173 | evaluate_round 3: strategy sampled 25 clients (out of 100)
-    DEBUG flwr 2023-07-15 15:32:31,388 | server.py:187 | evaluate_round 3 received 25 results and 0 failures
-    DEBUG flwr 2023-07-15 15:32:31,388 | server.py:222 | fit_round 4: strategy sampled 10 clients (out of 100)
-    DEBUG flwr 2023-07-15 15:32:32,429 | server.py:236 | fit_round 4 received 10 results and 0 failures
-    INFO flwr 2023-07-15 15:32:33,516 | server.py:125 | fit progress: (4, 370.3378576040268, {'accuracy': 0.3294}, 16.36216809399957)
-    DEBUG flwr 2023-07-15 15:32:33,516 | server.py:173 | evaluate_round 4: strategy sampled 25 clients (out of 100)
-    DEBUG flwr 2023-07-15 15:32:33,966 | server.py:187 | evaluate_round 4 received 25 results and 0 failures
-    DEBUG flwr 2023-07-15 15:32:33,966 | server.py:222 | fit_round 5: strategy sampled 10 clients (out of 100)
-    DEBUG flwr 2023-07-15 15:32:34,997 | server.py:236 | fit_round 5 received 10 results and 0 failures
-    INFO flwr 2023-07-15 15:32:36,118 | server.py:125 | fit progress: (5, 358.6936808824539, {'accuracy': 0.3467}, 18.964264554999318)
-    ...
-
-Saving log to file
-------------------
-
-By default, the Flower log is output to the terminal from which you launch your Federated Learning workload. This applies to both gRPC-based federation (i.e. when you do ``fl.server.start_server``) and when using the ``VirtualClientEngine`` (i.e. when you do ``fl.simulation.start_simulation``). In some situations you might want to save this log to disk. You can do so by calling the ``fl.common.logger.configure()`` function. For example:
-
-.. code-block:: python
-
-    import flwr as fl
-
-    ...
-
-    # In your main file, before launching your experiment,
-    # add an identifier to your logger and
-    # specify the name of the file where the log should be written
-    fl.common.logger.configure(identifier="myFlowerExperiment", filename="log.txt")
-
-    # then start your workload
-    fl.simulation.start_simulation(...)  # or fl.server.start_server(...)
-
-With the above, Flower will record the log you see on your terminal in ``log.txt``. This file will be created in the same directory from which you are running the code. If we inspect it, we see that the log above is also recorded, but with each line prefixed with the ``identifier``:
-
-.. code-block:: bash
-
-    ...
- myFlowerExperiment | INFO flwr 2023-07-15 15:32:30,935 | server.py:125 | fit progress: (3, 392.5575705766678, {'accuracy': 0.2898}, 13.781953627998519) - myFlowerExperiment | DEBUG flwr 2023-07-15 15:32:30,935 | server.py:173 | evaluate_round 3: strategy sampled 25 clients (out of 100) - myFlowerExperiment | DEBUG flwr 2023-07-15 15:32:31,388 | server.py:187 | evaluate_round 3 received 25 results and 0 failures - myFlowerExperiment | DEBUG flwr 2023-07-15 15:32:31,388 | server.py:222 | fit_round 4: strategy sampled 10 clients (out of 100) - myFlowerExperiment | DEBUG flwr 2023-07-15 15:32:32,429 | server.py:236 | fit_round 4 received 10 results and 0 failures - myFlowerExperiment | INFO flwr 2023-07-15 15:32:33,516 | server.py:125 | fit progress: (4, 370.3378576040268, {'accuracy': 0.3294}, 16.36216809399957) - myFlowerExperiment | DEBUG flwr 2023-07-15 15:32:33,516 | server.py:173 | evaluate_round 4: strategy sampled 25 clients (out of 100) - myFlowerExperiment | DEBUG flwr 2023-07-15 15:32:33,966 | server.py:187 | evaluate_round 4 received 25 results and 0 failures - myFlowerExperiment | DEBUG flwr 2023-07-15 15:32:33,966 | server.py:222 | fit_round 5: strategy sampled 10 clients (out of 100) - myFlowerExperiment | DEBUG flwr 2023-07-15 15:32:34,997 | server.py:236 | fit_round 5 received 10 results and 0 failures - myFlowerExperiment | INFO flwr 2023-07-15 15:32:36,118 | server.py:125 | fit progress: (5, 358.6936808824539, {'accuracy': 0.3467}, 18.964264554999318) - ... - -Log your own messages ---------------------- - -You might expand the information shown by default with the Flower logger by adding more -messages relevant to your application. You can achieve this easily as follows. - -.. code-block:: python - - # in the python file you want to add custom messages to the Flower log - from logging import INFO, DEBUG - from flwr.common.logger import log - - # For example, let's say you want to add to the log some info about the training on your client for debugging purposes - - - class FlowerClient(fl.client.NumPyClient): - def __init__( - self, - cid: int, - # ... - ): - self.cid = cid - self.net = net - # ... - - def fit(self, parameters, config): - log(INFO, f"Printing a custom INFO message at the start of fit() :)") - - set_params(self.net, parameters) - - log(DEBUG, f"Client {self.cid} is doing fit() with config: {config}") - - # ... - -In this way your logger will show, in addition to the default messages, the ones -introduced by the clients as specified above. - -.. code-block:: bash - - ... 
-    INFO flwr 2023-07-15 16:18:21,726 | server.py:89 | Initializing global parameters
-    INFO flwr 2023-07-15 16:18:21,726 | server.py:276 | Requesting initial parameters from one random client
-    INFO flwr 2023-07-15 16:18:22,511 | server.py:280 | Received initial parameters from one random client
-    INFO flwr 2023-07-15 16:18:22,511 | server.py:91 | Evaluating initial parameters
-    INFO flwr 2023-07-15 16:18:25,200 | server.py:94 | initial parameters (loss, other metrics): 461.2934241294861, {'accuracy': 0.0998}
-    INFO flwr 2023-07-15 16:18:25,200 | server.py:104 | FL starting
-    DEBUG flwr 2023-07-15 16:18:25,200 | server.py:222 | fit_round 1: strategy sampled 10 clients (out of 100)
-    INFO flwr 2023-07-15 16:18:26,391 | main.py:64 | Printing a custom INFO message :)
-    DEBUG flwr 2023-07-15 16:18:26,391 | main.py:63 | Client 44 is doing fit() with config: {'epochs': 5, 'batch_size': 64}
-    INFO flwr 2023-07-15 16:18:26,391 | main.py:64 | Printing a custom INFO message :)
-    DEBUG flwr 2023-07-15 16:18:28,464 | main.py:63 | Client 99 is doing fit() with config: {'epochs': 5, 'batch_size': 64}
-    INFO flwr 2023-07-15 16:18:28,465 | main.py:64 | Printing a custom INFO message :)
-    DEBUG flwr 2023-07-15 16:18:28,519 | main.py:63 | Client 67 is doing fit() with config: {'epochs': 5, 'batch_size': 64}
-    INFO flwr 2023-07-15 16:18:28,519 | main.py:64 | Printing a custom INFO message :)
-    DEBUG flwr 2023-07-15 16:18:28,615 | main.py:63 | Client 11 is doing fit() with config: {'epochs': 5, 'batch_size': 64}
-    INFO flwr 2023-07-15 16:18:28,615 | main.py:64 | Printing a custom INFO message :)
-    DEBUG flwr 2023-07-15 16:18:28,617 | main.py:63 | Client 13 is doing fit() with config: {'epochs': 5, 'batch_size': 64}
-    ...
-
-Log to a remote service
------------------------
-
-The ``fl.common.logger.configure`` function also allows specifying a host to which logs can be pushed (via ``POST``) through a native Python ``logging.handlers.HTTPHandler``. This is a particularly useful feature in ``gRPC``-based Federated Learning workloads where otherwise gathering logs from all entities (i.e. the server and the clients) might be cumbersome. Note that in Flower simulation, the server automatically displays all logs. You can still specify an ``HTTPHandler`` should you wish to back up or analyze the logs somewhere else.
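-
-As a minimal sketch, the ``host`` value below is a placeholder for a log-collection endpoint you operate yourself, not a service provided by Flower:
-
-.. code-block:: python
-
-    import flwr as fl
-
-    # Assumption: a service you run listens on localhost:8081 and accepts the
-    # POST requests emitted by Python's logging.handlers.HTTPHandler
-    fl.common.logger.configure(identifier="myFlowerExperiment", host="localhost:8081")
-
-    # then start your workload as usual
-    fl.simulation.start_simulation(...)  # or fl.server.start_server(...)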
diff --git a/doc/source/how-to-design-stateful-clients.rst b/doc/source/how-to-design-stateful-clients.rst
index fc2755eb9c0..8e3fe8c09b4 100644
--- a/doc/source/how-to-design-stateful-clients.rst
+++ b/doc/source/how-to-design-stateful-clients.rst
@@ -1,4 +1,4 @@
-Design Stateful ClientApps
+Design stateful ClientApps
 ==========================
 
 .. _array: ref-api/flwr.common.Array.html
diff --git a/doc/source/how-to-monitor-simulation.rst b/doc/source/how-to-monitor-simulation.rst
deleted file mode 100644
index f540e22a6a7..00000000000
--- a/doc/source/how-to-monitor-simulation.rst
+++ /dev/null
@@ -1,261 +0,0 @@
-Monitor simulation
-==================
-
-Flower allows you to monitor system resources while running your simulation. Moreover, the Flower simulation engine is powerful and enables you to decide how to allocate resources on a per-client basis and to constrain the total usage. Insights from resource consumption can help you make smarter decisions and speed up the execution time.
-
-The specific instructions assume you are using macOS and have the Homebrew package manager installed.
-
-Downloads
----------
-
-.. code-block:: bash
-
-    brew install prometheus grafana
-
-Prometheus is used for data collection, while Grafana will enable you to visualize the collected data. They are both well integrated with Ray, which Flower uses under the hood.
-
-Overwrite the configuration files (depending on your device, they might be installed at a different path).
-
-If you are on an M1 Mac, it should be:
-
-.. code-block:: bash
-
-    /opt/homebrew/etc/prometheus.yml
-    /opt/homebrew/etc/grafana/grafana.ini
-
-On the previous generation Intel Mac devices, it should be:
-
-.. code-block:: bash
-
-    /usr/local/etc/prometheus.yml
-    /usr/local/etc/grafana/grafana.ini
-
-Open the respective configuration files and change them. Depending on your device, use one of the two following commands:
-
-.. code-block:: bash
-
-    # M1 macOS
-    open /opt/homebrew/etc/prometheus.yml
-
-    # Intel macOS
-    open /usr/local/etc/prometheus.yml
-
-and then delete all the text in the file and paste the new Prometheus config you see below. You may adjust the time intervals to your requirements:
-
-.. code-block:: yaml
-
-    global:
-      scrape_interval: 1s
-      evaluation_interval: 1s
-
-    scrape_configs:
-    # Scrape from each ray node as defined in the service_discovery.json provided by ray.
-    - job_name: 'ray'
-      file_sd_configs:
-      - files:
-        - '/tmp/ray/prom_metrics_service_discovery.json'
-
-Now, after you have edited the Prometheus configuration, do the same with the Grafana configuration files. Open those using one of the following commands as before:
-
-.. code-block:: bash
-
-    # M1 macOS
-    open /opt/homebrew/etc/grafana/grafana.ini
-
-    # Intel macOS
-    open /usr/local/etc/grafana/grafana.ini
-
-Your terminal editor should open and allow you to apply the following configuration as before.
-
-.. code-block:: ini
-
-    [security]
-    allow_embedding = true
-
-    [auth.anonymous]
-    enabled = true
-    org_name = Main Org.
-    org_role = Viewer
-
-    [paths]
-    provisioning = /tmp/ray/session_latest/metrics/grafana/provisioning
-
-Congratulations, you just downloaded all the necessary software needed for metrics tracking. Now, let's start it.
-
-Tracking metrics
-----------------
-
-Before running your Flower simulation, you have to start the monitoring tools you have just installed and configured.
-
-.. code-block:: bash
-
-    brew services start prometheus
-    brew services start grafana
-
-Please include the following argument in your Python code when starting a simulation.
-
-.. code-block:: python
-
-    fl.simulation.start_simulation(
-        # ...
-        # all the args you used before
-        # ...
-        ray_init_args={"include_dashboard": True}
-    )
-
-Now, you are ready to start your workload.
-
-Shortly after the simulation starts, you should see the following logs in your terminal:
-
-.. code-block:: bash
-
-    2023-01-20 16:22:58,620 INFO worker.py:1529 -- Started a local Ray instance. View the dashboard at http://127.0.0.1:8265
-
-You can look at everything at http://127.0.0.1:8265.
-
-This is the Ray Dashboard. You can navigate to Metrics (on the left panel, the lowest option).
-
-Alternatively, you can view the metrics in Grafana by clicking "View in Grafana" in the top-right corner. Please note that the Ray dashboard is only accessible during the simulation. After the simulation ends, you can only use Grafana to explore the metrics. You can start Grafana by going to ``http://localhost:3000/``.
-
-After you finish the visualization, stop Prometheus and Grafana. This is important, as they will otherwise keep blocking, for example, port ``3000`` on your machine for as long as they are running.
-
-.. code-block:: bash
-
-    brew services stop prometheus
-    brew services stop grafana
-
-Resource allocation
--------------------
-
-To allocate system resources to simulation clients efficiently on your own, you must understand how the Ray library works.
-
-Initially, the simulation (which Ray handles under the hood) starts by default with all the available resources on the system, which it shares among the clients. This doesn't mean that it divides the resources equally among all of them, nor that model training happens on all of them simultaneously. You will learn more about that in a later part of this guide. You can check the system resources by running the following:
-
-.. code-block:: python
-
-    import ray
-
-    ray.available_resources()
-
-In Google Colab, the result you see might be similar to this:
-
-.. code-block:: bash
-
-    {'memory': 8020104807.0,
-     'GPU': 1.0,
-     'object_store_memory': 4010052403.0,
-     'CPU': 2.0,
-     'accelerator_type:T4': 1.0,
-     'node:172.28.0.2': 1.0}
-
-However, you can overwrite the defaults. When starting a simulation, do the following (you don't need to overwrite all of them):
-
-.. code-block:: python
-
-    num_cpus = 2
-    num_gpus = 1
-    ram_memory = 16_000 * 1024 * 1024  # 16 GB
-    fl.simulation.start_simulation(
-        # ...
-        # all the args you were specifying before
-        # ...
-        ray_init_args={
-            "include_dashboard": True,  # we need this one for tracking
-            "num_cpus": num_cpus,
-            "num_gpus": num_gpus,
-            "memory": ram_memory,
-        }
-    )
-
-Let's also specify the resources for a single client.
-
-.. code-block:: python
-
-    # Total resources for simulation
-    num_cpus = 4
-    num_gpus = 1
-    ram_memory = 16_000 * 1024 * 1024  # 16 GB
-
-    # Single client resources
-    client_num_cpus = 2
-    client_num_gpus = 1
-
-    fl.simulation.start_simulation(
-        # ...
-        # all the args you were specifying before
-        # ...
-        ray_init_args={
-            "include_dashboard": True,  # we need this one for tracking
-            "num_cpus": num_cpus,
-            "num_gpus": num_gpus,
-            "memory": ram_memory,
-        },
-        # The argument below is new
-        client_resources={
-            "num_cpus": client_num_cpus,
-            "num_gpus": client_num_gpus,
-        },
-    )
-
-Now comes the crucial part. Ray will only start a new client when it has all of that client's required resources available, so clients run in parallel only when the resources allow it.
-
-In the example above, only one client will be run, so your clients won't run concurrently. Setting ``client_num_gpus = 0.5`` would allow running two clients and therefore enable them to run concurrently. Be careful not to require more resources than available. If you specified ``client_num_gpus = 2``, the simulation wouldn't start (even if you had 2 GPUs but decided to set 1 in ``ray_init_args``).
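-
-As a rough sanity check, here is a minimal sketch (using the example values above and assuming CPUs and GPUs are the only limiting resources) of how many clients can run concurrently:
-
-.. code-block:: python
-
-    # Example totals passed to ray_init_args
-    num_cpus = 4
-    num_gpus = 1
-
-    # Example per-client requirements passed to client_resources
-    client_num_cpus = 2
-    client_num_gpus = 1
-
-    # Ray can only run as many clients in parallel as every resource allows
-    max_concurrent_clients = min(
-        num_cpus // client_num_cpus,      # 4 CPUs / 2 CPUs per client = 2
-        int(num_gpus / client_num_gpus),  # 1 GPU / 1 GPU per client = 1
-    )
-    print(max_concurrent_clients)  # 1, so clients run one after another
-
-With ``client_num_gpus = 0.5``, the GPU term becomes ``int(1 / 0.5) = 2``, matching the two concurrent clients mentioned above.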
-
-FAQ
----
-
-Q: I don't see any metrics logged.
-
-A: The timeframe might not be properly set. The setting is in the top right corner ("Last 30 minutes" by default). Please change the timeframe to reflect the period when the simulation was running.
-
-Q: I see “Grafana server not detected. Please make sure the Grafana server is running and refresh this page” after going to the Metrics tab in Ray Dashboard.
-
-A: You probably don't have Grafana running. Please check the running services:
-
-.. code-block:: bash
-
-    brew services list
-
-Q: I see "This site can't be reached" when going to http://127.0.0.1:8265.
-
-A: Either the simulation has already finished, or you still need to start Prometheus.
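-
-If you are unsure which case applies, a quick check (assuming the Homebrew setup described above) is to list the services and restart the monitoring stack if needed:
-
-.. code-block:: bash
-
-    # Check whether Prometheus and Grafana are running
-    brew services list
-
-    # (Re)start them if they are not
-    brew services restart prometheus
-    brew services restart grafana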
- -Resources ---------- - -Ray Dashboard: https://docs.ray.io/en/latest/ray-observability/getting-started.html - -Ray Metrics: https://docs.ray.io/en/latest/cluster/metrics.html diff --git a/doc/source/index.rst b/doc/source/index.rst index 5ffb4c23855..dd8e5853467 100644 --- a/doc/source/index.rst +++ b/doc/source/index.rst @@ -94,8 +94,6 @@ Problem-oriented how-to guides show step-by-step how to achieve a specific goal. how-to-aggregate-evaluation-results how-to-save-and-load-model-checkpoints how-to-run-simulations - how-to-monitor-simulation - how-to-configure-logging how-to-enable-ssl-connections how-to-use-built-in-mods how-to-use-differential-privacy @@ -130,7 +128,7 @@ Information-oriented API reference and other reference material. :caption: API reference :recursive: - flwr + flwr .. toctree:: :maxdepth: 2 @@ -181,7 +179,6 @@ along the way. :maxdepth: 1 :caption: Contributor references - fed/index contributor-ref-good-first-contributions contributor-ref-secure-aggregation-protocols diff --git a/doc/source/ref-api-cli.rst b/doc/source/ref-api-cli.rst index 81724cac4c9..01f07f0893a 100644 --- a/doc/source/ref-api-cli.rst +++ b/doc/source/ref-api-cli.rst @@ -1,10 +1,13 @@ Flower CLI reference ==================== +Basic Commands +-------------- + .. _flwr-apiref: ``flwr`` CLI ------------- +~~~~~~~~~~~~ .. click:: flwr.cli.app:typer_click_object :prog: flwr @@ -13,7 +16,7 @@ Flower CLI reference .. _flower-superlink-apiref: ``flower-superlink`` --------------------- +~~~~~~~~~~~~~~~~~~~~ .. argparse:: :module: flwr.server.app @@ -23,7 +26,7 @@ Flower CLI reference .. _flower-supernode-apiref: ``flower-supernode`` --------------------- +~~~~~~~~~~~~~~~~~~~~ .. argparse:: :module: flwr.client.supernode.app @@ -31,12 +34,12 @@ Flower CLI reference :prog: flower-supernode Advanced Commands -================= +----------------- .. _flwr-serverapp-apiref: ``flwr-serverapp`` ------------------- +~~~~~~~~~~~~~~~~~~ .. argparse:: :module: flwr.server.serverapp.app @@ -46,7 +49,7 @@ Advanced Commands .. _flwr-clientapp-apiref: ``flwr-clientapp`` ------------------- +~~~~~~~~~~~~~~~~~~ .. argparse:: :module: flwr.client.clientapp.app @@ -54,12 +57,12 @@ Advanced Commands :prog: flwr-clientapp Technical Commands -================== +------------------ .. _flower-simulation-apiref: ``flower-simulation`` ---------------------- +~~~~~~~~~~~~~~~~~~~~~ .. argparse:: :module: flwr.simulation.run_simulation @@ -67,12 +70,12 @@ Technical Commands :prog: flower-simulation Deprecated Commands -=================== +------------------- .. _flower-server-app-apiref: ``flower-server-app`` ---------------------- +~~~~~~~~~~~~~~~~~~~~~ .. warning:: @@ -82,7 +85,7 @@ Deprecated Commands .. _flower-superexec-apiref: ``flower-superexec`` --------------------- +~~~~~~~~~~~~~~~~~~~~ .. warning::