From 4b7f5469bfba5d4f974cd73222eea790cf9035a1 Mon Sep 17 00:00:00 2001
From: kghamilton89 <29099829+kghamilton89@users.noreply.github.com>
Date: Fri, 29 Mar 2024 23:28:33 +0200
Subject: [PATCH] repair directory path

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index a3579e1..1ef4cf8 100644
--- a/README.md
+++ b/README.md
@@ -330,7 +330,7 @@ For example, suppose we have a directory called ``my_image_datasets``. We would
 ### Local training
 If you wish to debug your code or setup before launching a distributed training run, we provide the functionality to do so by running the pretraining script locally on a multi-GPU (or single-GPU) machine, however, reproducing our results requires launching distributed training.
-The single-machine implementation starts from the [app/main.py](appmain.py), which parses the experiment config file and runs the pretraining locally on a multi-GPU (or single-GPU) machine.
+The single-machine implementation starts from the [app/main.py](app/main.py), which parses the experiment config file and runs the pretraining locally on a multi-GPU (or single-GPU) machine.
 For example, to run V-JEPA pretraining on GPUs "0", "1", and "2" on a local machine using the config [configs/pretrain/vitl16.yaml](configs/pretrain/vitl16.yaml), type the command:
 ```bash
 python -m app.main \