
What is the relationship between samples_per_gpu and workers_per_gpu and sample_ratio? #19

Open
joeyslv opened this issue May 10, 2023 · 5 comments

Comments

@joeyslv

joeyslv commented May 10, 2023

I can only run with the defaults:

sample_ratio=[1, 4]
samples_per_gpu=4
workers_per_gpu=4

But when I try to increase the batch size a bit by setting samples_per_gpu=8, the program cannot run and a len() error occurs. Can you tell me the relationship between these three parameters and how labeled and unlabeled data are sampled in this project? Thank you very much.

@Adamdad
Owner

Adamdad commented May 10, 2023

There are three distinct concepts to understand:

  1. sample_ratio=[1, 4] indicates the ratio of labeled to unlabeled samples within a single GPU. For instance, sample_ratio=[1, 4] means that each per-GPU batch contains 1 labeled and 4 unlabeled samples.
  2. samples_per_gpu=5 refers to the total number of samples per GPU, regardless of whether they are labeled or unlabeled. In fact, sum(sample_ratio) == samples_per_gpu (see the sketch after the config below).
  3. workers_per_gpu=5 determines the number of dataloader worker processes used to load the data. The optimal number of workers per GPU depends on your server setup. By default, we set workers_per_gpu equal to samples_per_gpu, but you can reduce this value if your server has limited CPU resources; if necessary, set it to 1 or 0.

data = dict(
    samples_per_gpu=5,
    workers_per_gpu=5,
    train=dict(
        _delete_=True,
        type="SemiDataset",
        sup=dict(
            type="CocoDataset",
            ann_file="data/coco_semi/semi_supervised/instances_train2017.${fold}@${percent}.json",
            img_prefix="data/coco/train2017/",
            pipeline=train_pipeline,
        ),
        unsup=dict(
            type="CocoDataset",
            ann_file="data/coco_semi/semi_supervised/instances_train2017.${fold}@${percent}-unlabeled.json",
            img_prefix="data/coco/train2017/",
            pipeline=unsup_pipeline,
            filter_empty_gt=False,
        ),
    ),
    val=dict(
        img_prefix="data/coco/val2017/",
        ann_file="data/coco/annotations/instances_val2017.json",
        pipeline=test_pipeline,
    ),
    test=dict(
        pipeline=test_pipeline,
        img_prefix="data/coco/val2017/",
        ann_file="data/coco/annotations/instances_val2017.json",
    ),
    sampler=dict(
        train=dict(
            type="SemiBalanceSampler",
            sample_ratio=[1, 4],
            by_prob=False,
            # at_least_one=True,
            epoch_length=7330,
        )
    ),
)
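
To make the relationship concrete, here is a minimal Python sketch. It is not the repository's actual SemiBalanceSampler, only an illustration of the sum(sample_ratio) == samples_per_gpu constraint described above; the values and the compose_batch helper are made up for the example.

# Illustrative sketch only: how sample_ratio, samples_per_gpu and
# workers_per_gpu relate. Not the actual SemiBalanceSampler code.
sample_ratio = [1, 4]    # labeled : unlabeled images per GPU
samples_per_gpu = 5      # total images per GPU per iteration
workers_per_gpu = 5      # dataloader worker processes per GPU (tune for your CPU)

# The sampler assumes each per-GPU batch is exactly sup + unsup samples.
assert sum(sample_ratio) == samples_per_gpu, \
    "samples_per_gpu must equal sum(sample_ratio), e.g. 8 -> a ratio like [2, 6]"

def compose_batch(sup_indices, unsup_indices):
    """Pick sample_ratio[0] labeled and sample_ratio[1] unlabeled indices."""
    n_sup, n_unsup = sample_ratio
    return sup_indices[:n_sup] + unsup_indices[:n_unsup]

# Example: one per-GPU batch of 1 labeled + 4 unlabeled indices.
print(compose_batch(list(range(10)), list(range(100, 200))))
# [0, 100, 101, 102, 103]

In other words, if you raise samples_per_gpu, adjust sample_ratio as well so that the two stay consistent.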

@X-KL
Copy link

X-KL commented Oct 7, 2024

Hello, I have been reproducing this paper recently and have encountered the issue you mentioned in another post: loss = 0. I trained on the COCO dataset; before iteration 10000 the unsup loss and total loss were both 0, after 10000 the loss was normal for a while, but it soon became 0 again, and unsup_gmm_thr also became 0. Could we discuss this issue with you?
[two screenshots of the training logs attached]

@huangnana1


Have you solved this problem? I am running into the same issue.

@Adamdad
Owner

Adamdad commented Oct 9, 2024

Hello @huangnana1 and @X-KL, how many GPUs are you using, and did you change any values in the configs?

@X-KL

X-KL commented Oct 11, 2024 via email
