Update docs. #1152
@@ -1,2 +1,12 @@
English | [简体中文](faq_cn.md)

# FAQ

coming soon

## Q1: How to load the weight parameters of the pre-trained model locally?

* **Answer**: The recommended configuration parameters of each model are stored in the yaml files of the model folders under PaddleSeg/configs. For example, one configuration of ANN is given in /PaddleSeg/configs/ann/ann_resnet50_os8_cityscapes_1024x512_80k.yml, as shown below:

![](./faq_imgs/ann_config.png)

> The red part in the figure is the location of the pre-trained parameter file of the backbone network. **Note**: by default the pre-trained parameters we provide are downloaded directly via an https link. If you have the pre-trained parameters of the backbone network locally, replace `pretrained` under `backbone` in the yaml file with the absolute or relative path where they are stored; a relative path is resolved against the directory from which `train.py` is executed.

> The green part in the figure is the location of the pre-trained parameter file of the segmentation network. If you have the pre-trained parameters of the segmentation network locally, replace `pretrained` in the yaml file with the absolute or relative path where they are stored.
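A minimal sketch of what the relevant yaml fragment might look like after switching to local files; the paths and the backbone type shown here are hypothetical placeholders rather than values taken from the official config:

```yaml
backbone:
  type: ResNet50_vd                                          # illustrative backbone type
  pretrained: /path/to/local/resnet50_backbone.pdparams      # red part: backbone weights (absolute or relative path)
pretrained: output/ann_resnet50_cityscapes/model.pdparams    # green part: segmentation network weights
```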
@@ -0,0 +1,12 @@
简体中文 | [English](faq.md)

# FAQ

## Q1: How does PaddleSeg load the weight parameters of a pre-trained model from a local path?

* **Answer**: The recommended configuration parameters of PaddleSeg models are stored in the yaml files of the model folders under PaddleSeg/configs. For example, one configuration of ANN is given in /PaddleSeg/configs/ann/ann_resnet50_os8_cityscapes_1024x512_80k.yml, as shown below:

![](./faq_imgs/ann_config.png)

> The red part in the figure is the location of the pre-trained parameter file of the backbone network. **Note**: by default the pre-trained parameters we provide are downloaded directly via an https link. If you have the pre-trained parameters of the backbone network locally, replace `pretrained` under `backbone` in the yaml file with the absolute or relative path where they are stored; a relative path is resolved against the directory from which `train.py` is executed.

> The green part in the figure is the location of the pre-trained parameter file of the segmentation network. If you have the pre-trained parameters of the segmentation network locally, replace `pretrained` in the yaml file with the absolute or relative path where they are stored.
Review comment: change "absolute path" to "absolute or relative path".
Review comment: apply the same change to the English version.
Reply: Done.
@@ -0,0 +1,28 @@
简体中文 | [English](BCELoss_en.md)

# [BCELoss](../../../paddleseg/models/losses/binary_cross_entropy_loss.py)

Binary cross entropy is suitable for binary classification and multi-label classification tasks. Taking the probability distribution of the annotation map as the reference, the binary semantic segmentation model is evaluated via the KL divergence; by Gibbs' inequality, the cross entropy of the two distributions is no less than the entropy of the annotation distribution. When computing BCELoss, the entropy of the annotation distribution is usually ignored (it is a constant), so the loss reduces to the KL-divergence term.

```python
class paddleseg.models.losses.BCELoss(
    weight = None,
    pos_weight = None,
    ignore_index = 255,
    edge_label = False
)
```

## BCELoss usage guidance

### Args
* **weight** (Tensor | str, optional): A manual rescaling weight for the loss of each batch element. If a 1D Tensor is given, its size is `[N, ]` and its data type is float32 or float64; if a str is given, it must be 'dynamic', in which case the weight is computed dynamically from the binary cross entropy in every step. *Default: ``None``*
* **pos_weight** (float | str, optional): The weight of positive examples. If a str is given, it must be 'dynamic', in which case the weight is computed dynamically in every step. *Default: ``None``*
* **ignore_index** (int64, optional): A pixel value in the annotation map to be ignored, which does not contribute to the input gradient. When some pixels cannot be annotated (or are hard to annotate), they can be marked with a specific gray value; the corresponding pixels of the input image are then excluded from the loss computation. *Default: ``255``*
* **edge_label** (bool, optional): Whether to use edge labels. *Default: ``False``*
@@ -0,0 +1,31 @@
English | [简体中文](BCELoss_cn.md)

# [BCELoss](../../../paddleseg/models/losses/binary_cross_entropy_loss.py)

Binary cross entropy is suitable for binary classification and multi-label classification tasks. Taking the probability distribution of the annotation map as the reference, the binary semantic segmentation model is evaluated via the KL divergence; by Gibbs' inequality, the cross entropy of the two distributions is no less than the entropy of the annotation distribution. When computing BCELoss, the entropy of the annotation distribution is usually ignored (it is a constant), so the loss reduces to the KL-divergence term.

```python
class paddleseg.models.losses.BCELoss(
    weight = None,
    pos_weight = None,
    ignore_index = 255,
    edge_label = False
)
```

## BCELoss usage guidance

### Args
* **weight** (Tensor | str, optional): A manual rescaling weight given to the loss of each batch element. If a 1D Tensor is given, its size is `[N, ]` and its data type is float32 or float64; if a str is given, it must be 'dynamic', in which case the weight is computed dynamically in every step. *Default: ``None``*
* **pos_weight** (float | str, optional): A weight for positive examples. If a str is given, it must be 'dynamic', in which case the weight is computed dynamically in every step. *Default: ``None``*
* **ignore_index** (int64, optional): A pixel value in the annotation map to be ignored, which does not contribute to the input gradient. When some pixels cannot be annotated (or are hard to annotate), they can be marked with a specific gray value; the corresponding pixels of the input image are then excluded from the loss computation. *Default: ``255``*
* **edge_label** (bool, optional): Whether to use edge labels. *Default: ``False``*
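A minimal usage sketch, assuming binary logits and labels of shape `[N, 1, H, W]` and the common `loss_fn(logit, label)` call convention of PaddleSeg losses; the shapes and dtypes are illustrative only:

```python
import paddle
from paddleseg.models.losses import BCELoss

# Illustrative tensors: 2 images, 1 channel (binary task), 64x64 pixels.
logit = paddle.randn([2, 1, 64, 64])                          # raw, unnormalized predictions
label = paddle.randint(0, 2, [2, 1, 64, 64], dtype='int64')   # binary ground truth

bce = BCELoss()            # defaults: weight=None, pos_weight=None, ignore_index=255
loss = bce(logit, label)   # scalar Tensor
print(float(loss))
```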
@@ -0,0 +1,20 @@
简体中文 | [English](BootstrappedCrossEntropyLoss_en.md)

## [BootstrappedCrossEntropyLoss](../../../paddleseg/models/losses/bootstrapped_cross_entropy.py)

Bootstrapping first uses the labeled samples to build an initial classifier, then iteratively classifies the unlabeled samples, and uses the expanded training data to extract new seed rules for the unlabeled samples.

[Reference](https://arxiv.org/pdf/1412.6596.pdf)

```python
class paddleseg.models.losses.BootstrappedCrossEntropyLoss(
    min_K,
    loss_th,
    weight = None,
    ignore_index = 255
)
```

## Bootstrapped cross entropy loss usage guidance

### Args
* **min_K** (int): The minimum number of pixels counted in the loss computation.
* **loss_th** (float): The loss threshold. Only losses larger than the threshold are counted.
* **weight** (tuple | list, optional): The weight for different classes. *Default: ``None``*
* **ignore_index** (int, optional): A pixel value in the annotation map to be ignored, which does not contribute to the input gradient. *Default: ``255``*
@@ -0,0 +1,24 @@
English | [简体中文](BootstrappedCrossEntropyLoss_cn.md)

## [BootstrappedCrossEntropyLoss](../../../paddleseg/models/losses/bootstrapped_cross_entropy.py)

Bootstrapping first uses the labeled samples to build an initial classifier, then iteratively classifies the unlabeled samples, and uses the expanded training data to extract new seed rules for the unlabeled samples.

[Paper](https://arxiv.org/pdf/1412.6596.pdf)

```python
class paddleseg.models.losses.BootstrappedCrossEntropyLoss(
    min_K,
    loss_th,
    weight = None,
    ignore_index = 255
)
```

## Bootstrapped cross entropy loss usage guidance

### Args
* **min_K** (int): The minimum number of pixels to be counted in the loss computation.
* **loss_th** (float): The loss threshold. Only losses larger than the threshold are counted.
* **weight** (tuple | list, optional): The weight for different classes. *Default: ``None``*
* **ignore_index** (int, optional): A pixel value in the annotation map to be ignored, which does not contribute to the input gradient. When some pixels cannot be annotated (or are hard to annotate), they can be marked with a specific gray value; the corresponding pixels of the input image are then excluded from the loss computation. *Default: ``255``*
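A minimal usage sketch, assuming multi-class logits of shape `[N, C, H, W]`, integer labels of shape `[N, H, W]`, and the common `loss_fn(logit, label)` call convention; the `min_K` and `loss_th` values are arbitrary examples:

```python
import paddle
from paddleseg.models.losses import BootstrappedCrossEntropyLoss

# Illustrative tensors: 2 images, 19 classes, 64x64 pixels.
logit = paddle.randn([2, 19, 64, 64])
label = paddle.randint(0, 19, [2, 64, 64], dtype='int64')

# Always keep at least 4096 pixels per image and count only per-pixel losses above 0.3.
bs_ce = BootstrappedCrossEntropyLoss(min_K=4096, loss_th=0.3)
loss = bs_ce(logit, label)
print(float(loss))
```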
@@ -0,0 +1,21 @@
简体中文 | [English](CrossEntropyLoss_en.md)

## [CrossEntropyLoss](../../../paddleseg/models/losses/cross_entropy_loss.py)

The cross entropy (CE) loss has become one of the most popular loss functions because of its simplicity and effectiveness, and it allows the weights of different pixel classes to be adjusted. In many semantic segmentation tasks, cross entropy relies on a sufficient number of objective function evaluations to accurately estimate the optimal parameters of the underlying distribution.
CrossEntropyLoss is commonly used for multi-class segmentation tasks. It measures the difference between two probability distributions and can therefore describe the gap between the current model and the target model (during training, we treat the given annotations as the ground-truth distribution). Note: logistic regression in machine learning is a special case of this cross entropy.

```python
class paddleseg.models.losses.CrossEntropyLoss(
    weight = None,
    ignore_index = 255,
    top_k_percent_pixels = 1.0
)
```

## Cross entropy loss usage guidance

### Args
* **weight** (tuple | list | ndarray | Tensor, optional): A manual rescaling weight for the loss of each pixel class. Its length must equal the number of classes. It can be used to balance the classes when, for example, the class distribution is imbalanced. *Default: ``None``*
* **ignore_index** (int64, optional): A pixel value in the annotation map to be ignored, which does not contribute to the input gradient. When some pixels cannot be annotated (or are hard to annotate), they can be marked with a specific gray value; the corresponding pixels of the input image are then excluded from the loss computation. *Default: ``255``*
* **top_k_percent_pixels** (float, optional): A value in [0.0, 1.0]. When it is < 1.0, only the loss of the top k percent of pixels (e.g., the top 20% of pixels) is computed, which is useful for hard pixel mining. *Default: ``1.0``*
@@ -0,0 +1,23 @@
English | [简体中文](CrossEntropyLoss_cn.md)

## [CrossEntropyLoss](../../../paddleseg/models/losses/cross_entropy_loss.py)

The cross entropy (CE) loss has become one of the most popular loss functions because of its simplicity and effectiveness, and it allows the weights of different pixel classes to be adjusted. In many semantic segmentation tasks, cross entropy relies on a sufficient number of objective function evaluations to accurately estimate the optimal parameters of the underlying distribution.
CrossEntropyLoss is often used for multi-class segmentation tasks. It describes the difference between two probability distributions and can therefore be used to measure the gap between the current model and the target model (during training, we treat the given annotations as the ground-truth distribution). Note: logistic regression in machine learning is a special case of this cross entropy.

```python
class paddleseg.models.losses.CrossEntropyLoss(
    weight = None,
    ignore_index = 255,
    top_k_percent_pixels = 1.0
)
```

## Cross entropy loss usage guidance

### Args
* **weight** (tuple | list | ndarray | Tensor, optional): A manual rescaling weight given to each class. Its length must equal the number of classes. It can be used to balance the classes when, for example, the class distribution is imbalanced. *Default: ``None``*
* **ignore_index** (int64, optional): A pixel value in the annotation map to be ignored, which does not contribute to the input gradient. When some pixels cannot be annotated (or are hard to annotate), they can be marked with a specific gray value; the corresponding pixels of the input image are then excluded from the loss computation. *Default: ``255``*
* **top_k_percent_pixels** (float, optional): A value in [0.0, 1.0]. When it is < 1.0, only the loss of the top k percent of pixels (e.g., the top 20% of pixels) is computed, which is useful for hard pixel mining. *Default: ``1.0``*
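A minimal usage sketch, assuming logits of shape `[N, C, H, W]`, integer labels of shape `[N, H, W]`, and the common `loss_fn(logit, label)` call convention:

```python
import paddle
from paddleseg.models.losses import CrossEntropyLoss

logit = paddle.randn([2, 19, 64, 64])                       # 19-class logits
label = paddle.randint(0, 19, [2, 64, 64], dtype='int64')   # per-pixel class ids

# Plain cross entropy over all pixels.
ce = CrossEntropyLoss()
loss_all = ce(logit, label)

# Hard pixel mining: only the 20% highest-loss pixels contribute.
ce_topk = CrossEntropyLoss(top_k_percent_pixels=0.2)
loss_topk = ce_topk(logit, label)
print(float(loss_all), float(loss_topk))
```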
@@ -0,0 +1,16 @@
简体中文 | [English](DiceLoss_en.md)

## [DiceLoss](../../../paddleseg/models/losses/dice_loss.py)

Dice Loss is widely used in medical image segmentation tasks. The Dice coefficient measures the similarity between two sets; in semantic segmentation it can be understood as the similarity between the current model's prediction and the ground truth. The Dice loss is computed by taking the element-wise product of the predicted segmentation map and the GT segmentation map, summing the result over all positions, and outputting 1 - Dice, i.e. Dice Loss = 1 - 2|X∩Y| / (|X| + |Y|). A Laplace smoothing coefficient can be added to both the numerator and the denominator to avoid division by zero and reduce overfitting, i.e. Dice_smooth = 1 - 2(|X∩Y| + smooth) / (|X| + |Y| + smooth).

```python
class paddleseg.models.losses.DiceLoss(
    ignore_index = 255,
    smooth = 0.
)
```

## Dice loss usage guidance

### Args
* **ignore_index** (int64, optional): A pixel value in the annotation map to be ignored, which does not contribute to the input gradient. When some pixels cannot be annotated (or are hard to annotate), they can be marked with a specific gray value; the corresponding pixels of the input image are then excluded from the loss computation. *Default: ``255``*
* **smooth** (float, optional): A smoothing term added to prevent division by zero. A larger value (Laplace smoothing) can also be set to avoid overfitting. *Default: ``0``*
@@ -0,0 +1,19 @@
English | [简体中文](DiceLoss_cn.md)

## [DiceLoss](../../../paddleseg/models/losses/dice_loss.py)

Dice Loss is widely used in medical image segmentation tasks. The Dice coefficient measures the similarity between two sets; in semantic segmentation it can be understood as the similarity between the current model's prediction and the ground truth. The Dice loss is computed by taking the element-wise product of the predicted segmentation map and the GT segmentation map, summing the result over all positions, and outputting 1 - Dice, i.e. Dice Loss = 1 - 2|X∩Y| / (|X| + |Y|). A Laplace smoothing coefficient can be added to both the numerator and the denominator to avoid division by zero and reduce overfitting, i.e. Dice_smooth = 1 - 2(|X∩Y| + smooth) / (|X| + |Y| + smooth).

```python
class paddleseg.models.losses.DiceLoss(
    ignore_index = 255,
    smooth = 0.
)
```

## Dice loss usage guidance

### Args
* **ignore_index** (int64, optional): A pixel value in the annotation map to be ignored, which does not contribute to the input gradient. When some pixels cannot be annotated (or are hard to annotate), they can be marked with a specific gray value; the corresponding pixels of the input image are then excluded from the loss computation. *Default: ``255``*
* **smooth** (float, optional): A smoothing term added to prevent division by zero. A larger value (Laplace smoothing) can also be set to avoid overfitting. *Default: ``0``*
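A minimal usage sketch, assuming logits of shape `[N, C, H, W]`, integer labels of shape `[N, H, W]`, and the common `loss_fn(logit, label)` call convention; `smooth=1.0` is an illustrative choice:

```python
import paddle
from paddleseg.models.losses import DiceLoss

logit = paddle.randn([2, 2, 64, 64])                        # 2-class logits
label = paddle.randint(0, 2, [2, 64, 64], dtype='int64')    # per-pixel class ids

# smooth > 0 avoids division by zero when a class is absent from the image.
dice = DiceLoss(smooth=1.0)
loss = dice(logit, label)
print(float(loss))
```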
@@ -0,0 +1,16 @@
简体中文 | [English](DualTaskLoss_en.md)

## [DualTaskLoss](../../../paddleseg/models/losses/gscnn_dual_task_loss.py)

DualTaskLoss enforces dual-task consistency as a constraint on the model (e.g. in semi-supervised learning), and aims to strengthen the consistency between multiple tasks.

```python
class paddleseg.models.losses.DualTaskLoss(
    ignore_index = 255,
    tau = 0.5
)
```

## Dual task loss usage guidance

### Args
* **ignore_index** (int64): A pixel value in the annotation map to be ignored, which does not contribute to the input gradient. When some pixels cannot be annotated (or are hard to annotate), they can be marked with a specific gray value; the corresponding pixels of the input image are then excluded from the loss computation. *Default: ``255``*
* **tau** (float): The tau of the Gumbel softmax samples. *Default: ``0.5``*
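A minimal construction sketch. DualTaskLoss comes from GSCNN and is normally wired into training through the loss section of a config file; the direct `loss_fn(logit, label)` call and the tensor shapes below are assumptions for illustration only:

```python
import paddle
from paddleseg.models.losses import DualTaskLoss

logit = paddle.randn([2, 19, 64, 64])                       # segmentation logits
label = paddle.randint(0, 19, [2, 64, 64], dtype='int64')   # per-pixel class ids

# tau is the temperature of the Gumbel-softmax sampling used internally.
dual_task = DualTaskLoss(tau=0.5)
loss = dual_task(logit, label)
print(float(loss))
```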
Review comment: change "否则,你必须" ("Otherwise, you must") to "或者,你可以" ("Alternatively, you can").
Reply: Done.