Moneo is a distributed GPU system monitor for AI workflows. It orchestrates metric collection (DCGMI + Prometheus DB) and visualization (Grafana) across multi-GPU/node systems, providing useful insight into workflow- and system-level characterization.
Moneo offers flexibility with 3 deployment methods:
- The preferred method, using Azure Managed Prometheus/Grafana with Moneo Linux services for collection (Headless deployment)
- Using Azure Application Insights/Azure Monitor Workspace (AMW) (Headless deployment w/ App Insights)
- Using the Moneo CLI with a dedicated headnode to host local Prometheus/Grafana servers (Local Grafana deployment)
Moneo Headless Method:
Metrics
There are five categories of metrics that Moneo monitors:
- GPU Counters
  - Compute/Memory Utilization
  - SM and Memory Clock frequency
  - Temperature
  - Power
  - ECC Counts (Nvidia)
  - GPU Throttling (Nvidia)
  - XID code (Nvidia)
- GPU Profiling Counters
  - SM Activity
  - Memory DRAM Activity
  - NVLink Activity
  - PCIe Rate
- InfiniBand Network Counters
  - IB TX/RX rate
  - IB Port errors
  - IB Link Flap
- CPU Counters
  - Utilization
  - Clock frequency
- Memory
  - Utilization
Grafana Dashboards
- Menu: List of available dashboards.
  Note: When viewing GPU dashboards, make sure to note whether you are using Nvidia or AMD GPU nodes and select the proper dashboard.
- Cluster View: Contains min, max, and average across devices for GPU/IB metrics per VM.
- GPU Device Counters: Detailed view of node-level GPU counters.
- GPU Profiling Counters: Node-level profiling metrics; these require additional overhead which may affect workload performance. Tensor, FP16, FP32, and FP64 activity are disabled by default but can be switched on via CLI command.
- InfiniBand Network Counters: Detailed view of node-level IB network metrics.
- Node View: Detailed view of node-level CPU, Memory, and Network metrics.
Prerequisites:
- Python >= 3.7 installed
- OS Support:
  - Ubuntu 18.04, 20.04, 22.04
  - AlmaLinux 8.6
- For Local Grafana deployments (Note: not applicable if using Azure Managed Grafana/Prometheus):
  - docker 20.10.23 (may work with other versions, but this is the tested version)
  - parallel-ssh 2.3.1-2 (may work with other versions, but this is the tested version)
  - Manager node must be able to ssh to itself (a quick way to sanity-check these prerequisites is sketched after this list)
- Nvidia architectures supported (only for Nvidia GPU monitoring):
  - Volta
  - Ampere
  - Hopper
- Installed with the install script at time of deployment (if not already installed):
  - DCGM 3.1.6 (for Nvidia deployments)
  - Check the install scripts for the various Python packages installed.
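As a convenience, here is a minimal, hedged sketch (not part of Moneo's documented tooling) for sanity-checking the prerequisites above on an Ubuntu manager node; adjust commands for AlmaLinux.

```bash
# Hedged sanity checks for the prerequisites listed above (Ubuntu manager node assumed).
python3 --version                         # expect Python >= 3.7
docker --version                          # tested with 20.10.23
which parallel-ssh                        # provided by the pssh / parallel-ssh package
ssh -o BatchMode=yes localhost hostname   # manager node must be able to ssh to itself
dcgmi --version                           # only relevant for Nvidia GPU monitoring
```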
Get the code:
- Clone Moneo from GitHub.

    # get the code
    git clone https://github.com/Azure/Moneo.git
    cd Moneo
    # install dependency
    sudo apt-get install pssh
Note: If you are using an Azure Ubuntu HPC-AI VM image, you can find Moneo at this path: /opt/azurehpc/tools/Moneo
The moneo_config.json file can be used to specify certain deployment settings prior to Moneo deployment.
There are 4 groups of configurations (a minimal example sketch follows this list):
- exporter_conf - This applies to all deployments. See the following settings:
  - gpu_sample_interval - Sample rate per minute for the Nvidia GPU exporter. Choices are [1, 2, 30, 60, 120, 600], with 60 samples per minute as the default.
  - gpu_profiling - Switches on additional profiling metrics (Tensor, FP16, FP32, and FP64). Choices are true/false, with false as the default.
  - Note: These settings may have an impact on performance. The default settings were chosen to minimize impact.
- prom_config - This group of settings applies to the Headless deployment method. Refer to the Headless Deployment Guide for usage.
- geneva_config - Applies to Geneva deployments. Refer to the Geneva deployment guide for usage.
- publisher_config - Applies to both the Geneva and Azure Monitor agent deployment methods. See the Geneva deployment or Azure Monitor Agent deployment guides for usage.
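For illustration only, the sketch below shows what the exporter_conf portion of moneo_config.json could look like with the documented defaults. The exact field nesting is an assumption, so treat the moneo_config.json shipped in the repo as authoritative; the prom_config, geneva_config, and publisher_config groups are deployment-specific and are covered in the guides linked above.

```bash
# Illustrative sketch only: writes an example exporter_conf with the documented
# defaults (60 samples/min, profiling off). Field nesting is assumed, not verified.
cat > /tmp/moneo_config_example.json <<'EOF'
{
  "exporter_conf": {
    "gpu_sample_interval": 60,
    "gpu_profiling": false
  }
}
EOF
```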
The preferred way to deploy Moneo is the headless method using Azure Managed Grafana and Prometheus resources.
Complete the steps listed here: Headless Deployment Guide
This method requires deploying a head node to host the local Prometheus database and Grafana server.
- The headnode must have enough storage available to facilitate data collection.
- Grafana and Prometheus are accessed via a web browser. Ensure the headnode IP is reachable from your browser (a quick reachability check is sketched below).
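As a quick sanity check, a sketch assuming the default ports and the standard Grafana/Prometheus health endpoints:

```bash
# Replace master-ip-or-domain with your headnode address.
curl -sf http://master-ip-or-domain:3000/api/health   # Grafana health endpoint
curl -sf http://master-ip-or-domain:9090/-/healthy    # Prometheus health endpoint
```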
Complete the steps listed here: Local Grafana Deployment Guide
The Moneo CLI provides an alternative way to deploy and update Moneo manager and worker nodes. Although the Linux services are preferred, the CLI is a convenient way to control Moneo directly.
python3 moneo.py [-d/--deploy] [-c hostfile] {manager,workers,full}
python3 moneo.py [-s/--shutdown] [-c hostfile] {manager,workers,full}
python3 moneo.py [-j JOB_ID ] [-c hostfile]
- e.g.
python3 moneo.py -d -c ./hostfile full
Note: For more options check the Moneo help menu
python3 moneo.py --help
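For example, a minimal end-to-end sketch; the hostfile contents below are placeholders, and pssh-style host files simply list one hostname or IP per line:

```bash
# Hypothetical two-node hostfile; replace with your own node names or IPs.
cat > hostfile <<'EOF'
gpu-node-000
gpu-node-001
EOF

# Deploy manager and workers, then shut everything down when monitoring is done.
python3 moneo.py --deploy -c ./hostfile full
python3 moneo.py --shutdown -c ./hostfile full
```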
- For Azure Managed Grafana the dashboards can be accessed via the endpoint provided on the resource overview.
- For Moneo CLI deployment with a dedicated head node the Grafana portal can be reached via browser: http://master-ip-or-domain:3000
- If Azure Monitor is used, navigate to the Azure Monitor Workspace in the Azure portal.
- Headless Deployment Guide
- Local Grafana Deployment Guide
- To get started with job level filtering see: Job Level Filtering
- Slurm epilog/prolog integration: Slurm example
- To deploy moneo-worker inside a container: Moneo-exporter
- To integrate Moneo with Azure Application Insights dashboard see: Azure Monitor
- To expose customized metrics using a custom exporter: Custom Exporter
- For Geneva ingestion (internal Microsoft) see: Geneva
- The NVIDIA exporter may conflict with DCGMI.
  There are two modes for DCGM: embedded mode and standalone mode.
  If DCGM is started in embedded mode (e.g., `nv-hostengine -n`, using the no-daemon option `-n`), the exporter will use the DCGM agent while DCGMI may return an error. According to NVIDIA, it is recommended to start DCGM in standalone mode as a daemon, so that multiple clients such as the exporter and DCGMI can interact with DCGM at the same time.
Generally, NVIDIA prefers this mode of operation, as it provides the most flexibility and lowest maintenance cost to users.
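A minimal sketch of that recommendation, assuming DCGM is installed and you have root access on the node:

```bash
# Stop any host engine that was started in embedded/no-daemon mode.
sudo nv-hostengine -t

# Start the host engine in standalone mode; without -n it runs as a daemon,
# so the Moneo exporter and dcgmi can connect to it at the same time.
sudo nv-hostengine

# Confirm dcgmi can reach the host engine and enumerate GPUs.
dcgmi discovery -l
```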
- Moneo will attempt to install a tested version of DCGM if it is not present on the worker nodes; this step is skipped if DCGM is already installed. In some instances the installed DCGM may be too old, which may cause the Nvidia exporter to fail. In this case it is recommended that DCGM be upgraded to at least version 2.4.4 (a version-check sketch is shown below). To view which exporters are running on a worker, just run
ps -eaf | grep python3
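To check whether the installed DCGM is new enough, a hedged sketch (the package name assumes NVIDIA's standard Ubuntu packaging):

```bash
# Report the DCGM version; upgrade via NVIDIA's install instructions if it is older than 2.4.4.
dcgmi --version

# On Ubuntu, the installed package can also be inspected directly.
dpkg -l | grep datacenter-gpu-manager
```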
- For Managed Grafana (headless) deployments:
  - Verify that the user managed identity is assigned to the VM resource.
  - Verify that the prerequisite configuration file (Moneo/moneo_config.json) is configured correctly on each worker node.
  - On the worker nodes, verify the functionality of the Prometheus agent remote write:
    - Check the Prometheus docker container with `sudo docker logs prometheus | grep 'Done replaying WAL'`. The result will look like this:

      ts=2023-08-07T07:25:49.636Z caller=dedupe.go:112 component=remote level=info remote_name=6ac237 url="<ingestion_endpoint>" msg="Done replaying WAL" duration=8.339998173s

  - Check that Azure Grafana is linked to the Azure Prometheus workspace.
- For deployments with a headnode:
  - Verify that the Grafana and Prometheus containers are running (see the sketch below):
    - Check in a browser: http://master-ip-or-domain:3000 (Grafana), http://master-ip-or-domain:9090 (Prometheus)
    - On the manager node terminal, run `sudo docker container ls`
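For example, a quick listing sketch (the container names in the comments are illustrative; your image names and tags may differ):

```bash
# Confirm both monitoring containers are up on the head/manager node.
sudo docker container ls --format '{{.Names}}\t{{.Status}}'
# Expect entries along the lines of:
#   prometheus   Up ...
#   grafana      Up ...
```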
- All deployments:
This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.
When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.
This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party's policies.