how to compose the train_dataset with two different pipelines #1346
-
In MMEditing, data.train in the config (and likewise data.val and data.test) has a corresponding pipeline, e.g. train_pipeline. Can data.train consist of two different datasets, each with its own pipeline?
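For context, what the question describes would look roughly like the following in the MMEngine-based 1.x config style: a ConcatDataset wrapping two dataset configs, each carrying its own pipeline. The dataset type, data roots, and pipeline contents below are placeholders, not anything confirmed in this thread.

# two pipelines, one per sub-dataset; the transforms are placeholders
pipeline_a = [dict(type='LoadImageFromFile'), dict(type='RandomFlip', prob=0.5)]
pipeline_b = [dict(type='LoadImageFromFile'), dict(type='RandomResize', scale=(256, 256))]

# ConcatDataset (from MMEngine) concatenates the two datasets while each
# sub-dataset config keeps its own pipeline.
train_dataloader = dict(
    batch_size=16,
    num_workers=4,
    sampler=dict(type='DefaultSampler', shuffle=True),
    dataset=dict(
        type='ConcatDataset',
        datasets=[
            dict(type='BasicImageDataset', data_root='data/part_a',
                 pipeline=pipeline_a),
            dict(type='BasicImageDataset', data_root='data/part_b',
                 pipeline=pipeline_b),
        ]))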
Replies: 2 comments 1 reply
-
Hi, do you mean using two pipelines for a single dataloader?
-
@hadesfgh , if the transform is randomly applied to each sample, you can use the RandomApply transform. If you want to apply specific transforms to specific samples, you can decide whether to perform a data transformation based on the file name (or other metainfo). A demo implementation is provided as follows:

from mmcv.transforms import BaseTransform

class SampleSpecTransform(BaseTransform):

    def __init__(self, filter_key, *args, **kwargs):
        # a key (e.g. a filename substring) used to decide which samples
        # this transform applies to
        self.filter_key = filter_key
        ...

    def transform(self, results):
        # get some metainfo in some way
        img_path = results['img_path']
        # apply the transformation only to the matching samples;
        # _do_transform is a placeholder for the actual operation
        if self.filter_key in img_path:
            results = self._do_transform(results)
        return results
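To use such a transform from a pipeline config, it also has to be registered. The following is only a sketch under assumptions not stated in this thread: it uses the root MMEngine TRANSFORMS registry (MMEditing would normally register into its own TRANSFORMS registry), and 'part_a' and the surrounding pipeline entry are placeholders.

from mmengine.registry import TRANSFORMS

# Register the class defined above so pipeline configs can refer to it by
# name; in a real project this is usually done with the
# @TRANSFORMS.register_module() decorator on the class itself.
TRANSFORMS.register_module(module=SampleSpecTransform)

# Only samples whose img_path contains 'part_a' go through the
# sample-specific transform; all other samples pass through untouched.
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='SampleSpecTransform', filter_key='part_a'),
]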
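For the first option in the reply above (randomly applying a transform to each sample), here is a minimal sketch using mmcv's RandomApply wrapper. The wrapped RandomFlip, the probabilities, and the dummy sample are illustrative choices, not taken from this thread.

import numpy as np
from mmcv.transforms import RandomApply

# Wrap RandomFlip so that the wrapped sub-pipeline runs for roughly 30% of
# the samples and is skipped for the rest.
maybe_flip = RandomApply(
    transforms=[dict(type='RandomFlip', prob=1.0, direction='horizontal')],
    prob=0.3)

# Minimal usage on a dummy sample; inside a pipeline config the same thing
# is written as dict(type='RandomApply', transforms=[...], prob=0.3).
results = dict(img=np.zeros((8, 8, 3), dtype=np.uint8))
results = maybe_flip(results)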