Sibaja 2 #3697


Open: wants to merge 44 commits into base: main
Changes from all commits (44 commits)
f95fabc
DeepGlobePy
A01781042 May 31, 2024
34cd949
Resize images
AnYelg May 31, 2024
61a68ce
Upload of right size
AnYelg May 31, 2024
7aab358
Merge pull request #1 from A01781042/yela
A01781042 May 31, 2024
7d21d81
Adding unet
AnYelg Jun 1, 2024
7d5dfdd
Merge pull request #2 from A01781042/yela
AnYelg Jun 1, 2024
5d42080
Mean and Std added
EmiSib Jun 1, 2024
07e20e3
rm work-dir
EmiSib Jun 1, 2024
89148d5
Merge pull request #3 from A01781042/sibaja
A01781042 Jun 3, 2024
e5688a5
Merge pull request #4 from A01781042/main
A01781042 Jun 3, 2024
a30aea1
CCNET
EmiSib Jun 3, 2024
0d69683
Merge branch 'main' into sibaja
EmiSib Jun 3, 2024
cd8f9d7
Merge remote-tracking branch 'origin/main' into main
EmiSib Jun 3, 2024
230379d
FCN Model
AnYelg Jun 3, 2024
eeaee77
Merge pull request #5 from A01781042/yela
AnYelg Jun 3, 2024
3340e2c
hr18
A01781042 Jun 3, 2024
281593a
Merge pull request #6 from A01781042/octa
A01781042 Jun 3, 2024
9d33f4d
CCNET config
EmiSib Jun 4, 2024
160704f
CCNET config
EmiSib Jun 4, 2024
fb7a5b9
Merge remote-tracking branch 'origin/sibaja' into sibaja
EmiSib Jun 4, 2024
0080718
Merge branch 'sibaja' into main
EmiSib Jun 4, 2024
7b800d4
CCNET v.2
EmiSib Jun 4, 2024
5c23ab1
class
A01781042 Jun 4, 2024
ec1d8dc
Merge pull request #8 from A01781042/octa
A01781042 Jun 4, 2024
a9597f3
dataconfig
A01781042 Jun 4, 2024
55a4e69
Merge pull request #9 from A01781042/octa
A01781042 Jun 4, 2024
caf83e3
Mean and STD change, dataset added, scale resized
EmiSib Jun 4, 2024
39f019f
tensorboard
A01781042 Jun 5, 2024
347dca0
batch
A01781042 Jun 5, 2024
f4f3331
GcNet-DeepLab
A01781042 Jun 5, 2024
97091a5
modelfix
A01781042 Jun 5, 2024
66d3569
model_test
A01781042 Jun 5, 2024
26064f8
del
A01781042 Jun 5, 2024
9c28a6b
Predicciones1
AnYelg Jun 5, 2024
2960dad
Merge pull request #11 from A01781042/main
A01781042 Jun 5, 2024
b533298
Final Predictions FCN
AnYelg Jun 5, 2024
dccc8ae
Delete colorfile
AnYelg Jun 5, 2024
7abe519
Merge pull request #12 from A01781042/yela
A01781042 Jun 5, 2024
696ea80
PredictionsHRDeepFCN
A01781042 Jun 5, 2024
0d30a3e
Predictions
A01781042 Jun 5, 2024
eff3831
Merge pull request #14 from A01781042/Octavio
A01781042 Jun 5, 2024
539d1b6
PrediccionesCCNET
EmiSib Jun 5, 2024
b8bcb34
work-dir -> Deeplabplus and CCNET
EmiSib Jun 5, 2024
41aecd0
DeepLabPlus fix
EmiSib Jun 5, 2024
Binary file added PrediccionesCCNET/201966_sat.jpg
Binary file added PrediccionesCCNET/298983_sat.jpg
Binary file added PrediccionesCCNET/650673_sat.jpg
Binary file added PrediccionesCCNET/802551_sat.jpg
Binary file added PrediccionesCCNET/896504_sat.jpg
Binary file added PrediccionesDeepLab/201966_sat.jpg
Binary file added PrediccionesDeepLab/298983_sat.jpg
Binary file added PrediccionesDeepLab/650673_sat.jpg
Binary file added PrediccionesDeepLab/802551_sat.jpg
Binary file added PrediccionesDeepLab/896504_sat.jpg
Binary file added PrediccionesFCN/201966_sat.jpg
Binary file added PrediccionesFCN/298983_sat.jpg
Binary file added PrediccionesFCN/650673_sat.jpg
Binary file added PrediccionesFCN/802551_sat.jpg
Binary file added PrediccionesFCN/896504_sat.jpg
Binary file added PrediccionesHRNet/201966_sat.jpg
Binary file added PrediccionesHRNet/298983_sat.jpg
Binary file added PrediccionesHRNet/650673_sat.jpg
Binary file added PrediccionesHRNet/802551_sat.jpg
Binary file added PrediccionesHRNet/896504_sat.jpg
Binary file added Test_model.zip
Binary file added Test_model/201966_sat.jpg
Binary file added Test_model/298983_sat.jpg
Binary file added Test_model/650673_sat.jpg
Binary file added Test_model/802551_sat.jpg
Binary file added Test_model/896504_sat.jpg
69 changes: 69 additions & 0 deletions configs/_base_/datasets/deepGlobe.py
@@ -0,0 +1,69 @@
# configs/_base_/datasets/deepGlobe.py
# dataset settings
dataset_type = 'DeepGlobeDataset'
data_root = 'data/deepglobe_ds/'
crop_size = (256, 256)
train_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='LoadAnnotations'),
dict(
type='RandomResize',
scale=(512, 512),
ratio_range=(0.5, 2.0),
keep_ratio=True),
dict(type='RandomCrop', crop_size=crop_size, cat_max_ratio=0.75),
dict(type='RandomFlip', prob=0.5),
dict(type='PhotoMetricDistortion'),
dict(type='PackSegInputs')
]
test_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='Resize', scale=(512, 512), keep_ratio=True),
# load annotations after ``Resize`` because the ground truth
# does not need the resize transform
dict(type='LoadAnnotations'),
dict(type='PackSegInputs')
]
img_ratios = [0.5, 0.75, 1.0, 1.25, 1.5, 1.75]
tta_pipeline = [
dict(type='LoadImageFromFile', backend_args=None),
dict(
type='TestTimeAug',
transforms=[
[
dict(type='Resize', scale_factor=r, keep_ratio=True)
for r in img_ratios
],
[
dict(type='RandomFlip', prob=0., direction='horizontal'),
dict(type='RandomFlip', prob=1., direction='horizontal')
], [dict(type='LoadAnnotations')], [dict(type='PackSegInputs')]
])
]
train_dataloader = dict(
batch_size=32,
num_workers=4,
persistent_workers=True,
sampler=dict(type='InfiniteSampler', shuffle=True),
dataset=dict(
type=dataset_type,
data_root=data_root,
data_prefix=dict(
img_path='img_dir/train_sat', seg_map_path='ann_dir/train_mask_grayscale'),
pipeline=train_pipeline))
val_dataloader = dict(
batch_size=16,
num_workers=4,
persistent_workers=True,
sampler=dict(type='DefaultSampler', shuffle=False),
dataset=dict(
type=dataset_type,
data_root=data_root,
data_prefix=dict(
img_path='img_dir/val_sat', seg_map_path='ann_dir/val_mask_grayscale'),
pipeline=test_pipeline))
test_dataloader = val_dataloader

val_evaluator = dict(type='IoUMetric', iou_metrics=['mIoU'])
test_evaluator = val_evaluator
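The dataloaders above assume a fixed layout under `data/deepglobe_ds/`. A minimal pre-flight check can catch a missing or misnamed directory before a 10k-iteration run starts (a sketch; the helper is ours and not part of this PR):

```python
import os

# Paths taken from the dataset config above.
DATA_ROOT = 'data/deepglobe_ds'
EXPECTED = [
    'img_dir/train_sat', 'ann_dir/train_mask_grayscale',
    'img_dir/val_sat', 'ann_dir/val_mask_grayscale',
]

def missing_dirs(root=DATA_ROOT):
    """Return the expected sub-directories that are absent under root."""
    return [d for d in EXPECTED if not os.path.isdir(os.path.join(root, d))]

if __name__ == '__main__':
    absent = missing_dirs()
    if absent:
        print('Missing directories:', absent)
```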
4 changes: 3 additions & 1 deletion configs/_base_/default_runtime.py
@@ -4,7 +4,9 @@
mp_cfg=dict(mp_start_method='fork', opencv_num_threads=0),
dist_cfg=dict(backend='nccl'),
)
vis_backends = [dict(type='LocalVisBackend')]
vis_backends = [dict(type='LocalVisBackend'),
dict(type='TensorboardVisBackend')]

visualizer = dict(
type='SegLocalVisualizer', vis_backends=vis_backends, name='visualizer')
log_processor = dict(by_epoch=False)
@@ -2,8 +2,8 @@
norm_cfg = dict(type='SyncBN', requires_grad=True)
data_preprocessor = dict(
type='SegDataPreProcessor',
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
mean=[0.4082, 0.3791, 0.2815],
std=[0.1451, 0.1116, 0.1013],
bgr_to_rgb=True,
pad_val=0,
seg_pad_val=255)
@@ -29,7 +29,7 @@
channels=512,
recurrence=2,
dropout_ratio=0.1,
num_classes=19,
num_classes=7,
norm_cfg=norm_cfg,
align_corners=False,
loss_decode=dict(
@@ -42,7 +42,7 @@
num_convs=1,
concat_input=False,
dropout_ratio=0.1,
num_classes=19,
num_classes=7,
norm_cfg=norm_cfg,
align_corners=False,
loss_decode=dict(
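The hunks above replace the ImageNet mean/std with dataset statistics on a 0-1 scale. A minimal sketch of how such per-channel statistics can be computed (synthetic stand-in arrays here; in practice one would load the `*_sat.jpg` training tiles). Note that the replaced defaults (123.675, …) are on a 0-255 scale, so values computed on [0, 1] may need multiplying by 255 to match what the preprocessor expects:

```python
import numpy as np

def channel_mean_std(images):
    """Per-channel mean/std over HxWx3 uint8 arrays, on a 0-1 scale."""
    pixels = np.concatenate([img.reshape(-1, 3) / 255.0 for img in images])
    return pixels.mean(axis=0), pixels.std(axis=0)

# Synthetic stand-ins; in practice load the *_sat.jpg training tiles.
imgs = [np.full((4, 4, 3), 128, dtype=np.uint8),
        np.zeros((4, 4, 3), dtype=np.uint8)]
mean, std = channel_mean_std(imgs)
# Multiply by 255 if the preprocessor expects 0-255 statistics,
# as the replaced ImageNet defaults suggest it does.
```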
9 changes: 5 additions & 4 deletions configs/_base_/models/deeplabv3_r50-d8.py
@@ -1,9 +1,10 @@
#configs/_base_/models/deeplabv3_r50-d8.py
# model settings
norm_cfg = dict(type='SyncBN', requires_grad=True)
data_preprocessor = dict(
type='SegDataPreProcessor',
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
mean=[0.4082, 0.3791, 0.2815],
std=[0.1451, 0.1116, 0.1013],
bgr_to_rgb=True,
pad_val=0,
seg_pad_val=255)
@@ -29,7 +30,7 @@
channels=512,
dilations=(1, 12, 24, 36),
dropout_ratio=0.1,
num_classes=19,
num_classes=7,
norm_cfg=norm_cfg,
align_corners=False,
loss_decode=dict(
@@ -42,7 +43,7 @@
num_convs=1,
concat_input=False,
dropout_ratio=0.1,
num_classes=19,
num_classes=7,
norm_cfg=norm_cfg,
align_corners=False,
loss_decode=dict(
53 changes: 53 additions & 0 deletions configs/_base_/models/deeplabv3_r50-d8_deepGlobe.py
@@ -0,0 +1,53 @@
# configs/_base_/models/deeplabv3_r50-d8_deepGlobe.py
# model settings
norm_cfg = dict(type='SyncBN', requires_grad=True)
data_preprocessor = dict(
type='SegDataPreProcessor',
mean=[0.4082, 0.3791, 0.2815],
std=[0.1451, 0.1116, 0.1013],
bgr_to_rgb=True,
pad_val=0,
seg_pad_val=255)
model = dict(
type='EncoderDecoder',
data_preprocessor=data_preprocessor,
pretrained='open-mmlab://resnet50_v1c',
backbone=dict(
type='ResNetV1c',
depth=50,
num_stages=4,
out_indices=(0, 1, 2, 3),
dilations=(1, 1, 2, 4),
strides=(1, 2, 1, 1),
norm_cfg=norm_cfg,
norm_eval=False,
style='pytorch',
contract_dilation=True),
decode_head=dict(
type='ASPPHead',
in_channels=2048,
in_index=3,
channels=512,
dilations=(1, 12, 24, 36),
dropout_ratio=0.1,
num_classes=7,
norm_cfg=norm_cfg,
align_corners=False,
loss_decode=dict(
type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)),
auxiliary_head=dict(
type='FCNHead',
in_channels=1024,
in_index=2,
channels=256,
num_convs=1,
concat_input=False,
dropout_ratio=0.1,
num_classes=7,
norm_cfg=norm_cfg,
align_corners=False,
loss_decode=dict(
type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)),
# model training and testing settings
train_cfg=dict(),
test_cfg=dict(mode='whole'))
54 changes: 54 additions & 0 deletions configs/_base_/models/deeplabv3plus_r50-d8_deepglobe.py
@@ -0,0 +1,54 @@
# model settings
norm_cfg = dict(type='SyncBN', requires_grad=True)
data_preprocessor = dict(
type='SegDataPreProcessor',
mean=[0.4082, 0.3791, 0.2815],
std=[0.1451, 0.1116, 0.1013],
bgr_to_rgb=True,
pad_val=0,
seg_pad_val=255)
model = dict(
type='EncoderDecoder',
data_preprocessor=data_preprocessor,
pretrained='open-mmlab://resnet50_v1c',
backbone=dict(
type='ResNetV1c',
depth=50,
num_stages=4,
out_indices=(0, 1, 2, 3),
dilations=(1, 1, 2, 4),
strides=(1, 2, 1, 1),
norm_cfg=norm_cfg,
norm_eval=False,
style='pytorch',
contract_dilation=True),
decode_head=dict(
type='DepthwiseSeparableASPPHead',
in_channels=2048,
in_index=3,
channels=512,
dilations=(1, 12, 24, 36),
c1_in_channels=256,
c1_channels=48,
dropout_ratio=0.1,
num_classes=7,
norm_cfg=norm_cfg,
align_corners=False,
loss_decode=dict(
type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)),
auxiliary_head=dict(
type='FCNHead',
in_channels=1024,
in_index=2,
channels=256,
num_convs=1,
concat_input=False,
dropout_ratio=0.1,
num_classes=7,
norm_cfg=norm_cfg,
align_corners=False,
loss_decode=dict(
type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)),
# model training and testing settings
train_cfg=dict(),
test_cfg=dict(mode='whole'))
60 changes: 60 additions & 0 deletions configs/_base_/models/fcn_hr18_deepGlobe.py
@@ -0,0 +1,60 @@
# model settings
norm_cfg = dict(type='SyncBN', requires_grad=True)
data_preprocessor = dict(
type='SegDataPreProcessor',
mean=[0.4082, 0.3791, 0.2815],
std=[0.1351, 0.1022, 0.0931],
bgr_to_rgb=True,
pad_val=0,
seg_pad_val=255)
model = dict(
type='EncoderDecoder',
data_preprocessor=data_preprocessor,
pretrained='open-mmlab://msra/hrnetv2_w18',
backbone=dict(
type='HRNet',
norm_cfg=norm_cfg,
norm_eval=False,
extra=dict(
stage1=dict(
num_modules=1,
num_branches=1,
block='BOTTLENECK',
num_blocks=(4, ),
num_channels=(64, )),
stage2=dict(
num_modules=1,
num_branches=2,
block='BASIC',
num_blocks=(4, 4),
num_channels=(18, 36)),
stage3=dict(
num_modules=4,
num_branches=3,
block='BASIC',
num_blocks=(4, 4, 4),
num_channels=(18, 36, 72)),
stage4=dict(
num_modules=3,
num_branches=4,
block='BASIC',
num_blocks=(4, 4, 4, 4),
num_channels=(18, 36, 72, 144)))),
decode_head=dict(
type='FCNHead',
in_channels=[18, 36, 72, 144],
in_index=(0, 1, 2, 3),
channels=sum([18, 36, 72, 144]),
input_transform='resize_concat',
kernel_size=1,
num_convs=1,
concat_input=False,
dropout_ratio=-1,
num_classes=7,
norm_cfg=norm_cfg,
align_corners=False,
loss_decode=dict(
type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)),
# model training and testing settings
train_cfg=dict(),
test_cfg=dict(mode='whole'))
53 changes: 53 additions & 0 deletions configs/_base_/models/fcn_r50-d8deepglobe.py
@@ -0,0 +1,53 @@
# model settings
norm_cfg = dict(type='SyncBN', requires_grad=True)
data_preprocessor = dict(
type='SegDataPreProcessor',
mean=[0.4082, 0.3791, 0.2815],
std=[0.1351, 0.1022, 0.0931],
bgr_to_rgb=True,
pad_val=0,
seg_pad_val=255)
model = dict(
type='EncoderDecoder',
data_preprocessor=data_preprocessor,
pretrained='open-mmlab://resnet50_v1c',
backbone=dict(
type='ResNetV1c',
depth=50,
num_stages=4,
out_indices=(0, 1, 2, 3),
dilations=(1, 1, 2, 4),
strides=(1, 2, 1, 1),
norm_cfg=norm_cfg,
norm_eval=False,
style='pytorch',
contract_dilation=True),
decode_head=dict(
type='FCNHead',
in_channels=2048,
in_index=3,
channels=512,
num_convs=2,
concat_input=True,
dropout_ratio=0.1,
num_classes=7,
norm_cfg=norm_cfg,
align_corners=False,
loss_decode=dict(
type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)),
auxiliary_head=dict(
type='FCNHead',
in_channels=1024,
in_index=2,
channels=256,
num_convs=1,
concat_input=False,
dropout_ratio=0.1,
num_classes=7,
norm_cfg=norm_cfg,
align_corners=False,
loss_decode=dict(
type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)),
# model training and testing settings
train_cfg=dict(),
test_cfg=dict(mode='whole'))
54 changes: 54 additions & 0 deletions configs/_base_/models/gcnet_r50-d8_deepGlobe.py
@@ -0,0 +1,54 @@
# model settings
norm_cfg = dict(type='SyncBN', requires_grad=True)
data_preprocessor = dict(
type='SegDataPreProcessor',
mean=[0.4082, 0.3791, 0.2815],
std=[0.1451, 0.1116, 0.1013],
bgr_to_rgb=True,
pad_val=0,
seg_pad_val=255)
model = dict(
type='EncoderDecoder',
data_preprocessor=data_preprocessor,
pretrained='open-mmlab://resnet50_v1c',
backbone=dict(
type='ResNetV1c',
depth=50,
num_stages=4,
out_indices=(0, 1, 2, 3),
dilations=(1, 1, 2, 4),
strides=(1, 2, 1, 1),
norm_cfg=norm_cfg,
norm_eval=False,
style='pytorch',
contract_dilation=True),
decode_head=dict(
type='GCHead',
in_channels=2048,
in_index=3,
channels=512,
ratio=1 / 4.,
pooling_type='att',
fusion_types=('channel_add', ),
dropout_ratio=0.1,
num_classes=7,
norm_cfg=norm_cfg,
align_corners=False,
loss_decode=dict(
type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)),
auxiliary_head=dict(
type='FCNHead',
in_channels=1024,
in_index=2,
channels=256,
num_convs=1,
concat_input=False,
dropout_ratio=0.1,
num_classes=7,
norm_cfg=norm_cfg,
align_corners=False,
loss_decode=dict(
type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)),
# model training and testing settings
train_cfg=dict(),
test_cfg=dict(mode='whole'))
58 changes: 58 additions & 0 deletions configs/_base_/models/pspnet_unet_deepglobe_s5-d16.py
@@ -0,0 +1,58 @@
# model settings
norm_cfg = dict(type='SyncBN', requires_grad=True)
data_preprocessor = dict(
type='SegDataPreProcessor',
mean=[0.4082, 0.3791, 0.2815],
std=[0.1351, 0.1022, 0.0931],
bgr_to_rgb=True,
pad_val=0,
seg_pad_val=255)
model = dict(
type='EncoderDecoder',
data_preprocessor=data_preprocessor,
pretrained=None,
backbone=dict(
type='UNet',
in_channels=3,
base_channels=64,
num_stages=5,
strides=(1, 1, 1, 1, 1),
enc_num_convs=(2, 2, 2, 2, 2),
dec_num_convs=(2, 2, 2, 2),
downsamples=(True, True, True, True),
enc_dilations=(1, 1, 1, 1, 1),
dec_dilations=(1, 1, 1, 1),
with_cp=False,
conv_cfg=None,
norm_cfg=norm_cfg,
act_cfg=dict(type='ReLU'),
upsample_cfg=dict(type='InterpConv'),
norm_eval=False),
decode_head=dict(
type='PSPHead',
in_channels=64,
in_index=4,
channels=16,
pool_scales=(1, 2, 3, 6),
dropout_ratio=0.1,
num_classes=7,
norm_cfg=norm_cfg,
align_corners=False,
loss_decode=dict(
type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)),
auxiliary_head=dict(
type='FCNHead',
in_channels=128,
in_index=3,
channels=256,
num_convs=1,
concat_input=False,
dropout_ratio=0.1,
num_classes=7,
norm_cfg=norm_cfg,
align_corners=False,
loss_decode=dict(
type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)),
# model training and testing settings
train_cfg=dict(),
test_cfg=dict(mode='slide', crop_size=(256, 256), stride=(170, 170)))
4 changes: 2 additions & 2 deletions configs/_base_/schedules/schedule_40k.py
@@ -8,11 +8,11 @@
eta_min=1e-4,
power=0.9,
begin=0,
end=40000,
end=10000,
by_epoch=False)
]
# training schedule for 40k
train_cfg = dict(type='IterBasedTrainLoop', max_iters=40000, val_interval=4000)
train_cfg = dict(type='IterBasedTrainLoop', max_iters=10000, val_interval=1000)
val_cfg = dict(type='ValLoop')
test_cfg = dict(type='TestLoop')
default_hooks = dict(
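The schedule above now ends at 10,000 iterations instead of 40,000. PolyLR decays the learning rate roughly as lr_t = (lr_0 - eta_min)(1 - t/end)^power + eta_min; a small helper makes the effect of the shorter `end` easy to inspect (a sketch of the closed form, not mmengine's implementation):

```python
def poly_lr(base_lr, step, total_steps, power=0.9, eta_min=1e-4):
    """Closed-form polynomial LR decay, matching the PolyLR parameters above."""
    factor = max(1.0 - step / total_steps, 0.0) ** power
    return (base_lr - eta_min) * factor + eta_min

# With base lr 0.01 and end=10000: starts at 0.01, decays to eta_min=1e-4.
start_lr = poly_lr(0.01, 0, 10000)
final_lr = poly_lr(0.01, 10000, 10000)
```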
7 changes: 7 additions & 0 deletions configs/ccnet/ccnet_r50-d8_4xb2-40k_deepglobe-256x256.py
@@ -0,0 +1,7 @@
_base_ = [
'../_base_/models/ccnet_r50-d8_deepglobe_5.py', '../_base_/datasets/deepGlobe.py',
'../_base_/default_runtime.py', '../_base_/schedules/schedule_40k.py'
]
crop_size = (256, 256)
data_preprocessor = dict(size=crop_size)
model = dict(data_preprocessor=data_preprocessor)
@@ -0,0 +1,8 @@
#configs/deeplabv3/deeplabv3_r50-d8_4xb2-40k_deepglobe-512x1024.py
_base_ = [
'../_base_/models/deeplabv3_r50-d8_deepGlobe.py', '../_base_/datasets/deepGlobe.py',
'../_base_/default_runtime.py', '../_base_/schedules/schedule_40k.py'
]
crop_size = (256, 256)
data_preprocessor = dict(size=crop_size)
model = dict(data_preprocessor=data_preprocessor)
@@ -0,0 +1,8 @@
_base_ = [
'../_base_/models/deeplabv3plus_r50-d8_deepglobe.py',
'../_base_/datasets/deepglobe.py', '../_base_/default_runtime.py',
'../_base_/schedules/schedule_40k.py'
]
crop_size = (256, 256)
data_preprocessor = dict(size=crop_size)
model = dict(data_preprocessor=data_preprocessor)
7 changes: 7 additions & 0 deletions configs/fcn/fcn_r18-d8_4xb2-80k_deepglobe-512x1024.py
@@ -0,0 +1,7 @@
_base_ = [
'../_base_/models/fcn_r50-d8deepglobe.py', '../_base_/datasets/deepGlobe.py',
'../_base_/default_runtime.py', '../_base_/schedules/schedule_40k.py'
]
crop_size = (256, 256)
data_preprocessor = dict(size=crop_size)
model = dict(data_preprocessor=data_preprocessor)
@@ -1,7 +1,7 @@
_base_ = [
'../_base_/models/ccnet_r50-d8.py', '../_base_/datasets/cityscapes.py',
'../_base_/models/gcnet_r50-d8.py', '../_base_/datasets/deepGlobe.py',
'../_base_/default_runtime.py', '../_base_/schedules/schedule_40k.py'
]
crop_size = (512, 1024)
crop_size = (256, 256)
data_preprocessor = dict(size=crop_size)
model = dict(data_preprocessor=data_preprocessor)
7 changes: 7 additions & 0 deletions configs/hrnet/fcn_hr18_4xb2-40k_deepglobe-512x1024.py
@@ -0,0 +1,7 @@
_base_ = [
'../_base_/models/fcn_hr18_deepGlobe.py', '../_base_/datasets/deepGlobe.py',
'../_base_/default_runtime.py', '../_base_/schedules/schedule_40k.py'
]
crop_size = (256, 256)
data_preprocessor = dict(size=crop_size)
model = dict(data_preprocessor=data_preprocessor)
9 changes: 9 additions & 0 deletions configs/unet/unet-s5-d16_pspnet_4xb4-40k_deepglobe-256x256.py
@@ -0,0 +1,9 @@
_base_ = [
'../_base_/models/pspnet_unet_deepglobe_s5-d16.py', '../_base_/datasets/deepGlobe.py',
'../_base_/default_runtime.py', '../_base_/schedules/schedule_40k.py'
]
crop_size = (256, 256)
data_preprocessor = dict(size=crop_size)
model = dict(
data_preprocessor=data_preprocessor,
test_cfg=dict(crop_size=(256, 256), stride=(85, 85)))
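This config runs slide inference with 256x256 crops and stride 85. The number of window positions per spatial dimension follows the usual ceil-based grid (a sketch mirroring mmseg's slide-inference grid computation; the 512 input size is an assumption taken from the test pipeline's `Resize` scale):

```python
import math

def slide_grids(size, crop, stride):
    """Window positions along one spatial dimension for slide inference."""
    return max(math.ceil((size - crop) / stride), 0) + 1

n = slide_grids(512, 256, 85)   # positions per dimension
total = n * n                   # overlapping windows per image
```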
Empty file modified docs/en/stat.py
100755 → 100644
Empty file.
Empty file modified docs/zh_cn/stat.py
100755 → 100644
Empty file.
39 changes: 15 additions & 24 deletions mmseg/configs/_base_/schedules/schedule_40k.py
@@ -1,34 +1,25 @@
# Copyright (c) OpenMMLab. All rights reserved.
from mmengine.hooks import (CheckpointHook, DistSamplerSeedHook, IterTimerHook,
LoggerHook, ParamSchedulerHook)
from mmengine.optim.optimizer.optimizer_wrapper import OptimWrapper
from mmengine.optim.scheduler.lr_scheduler import PolyLR
from mmengine.runner.loops import IterBasedTrainLoop, TestLoop, ValLoop
from torch.optim.sgd import SGD

from mmseg.engine import SegVisualizationHook

#configs/_base_/schedules/schedule_40k.py
# optimizer
optimizer = dict(type=SGD, lr=0.01, momentum=0.9, weight_decay=0.0005)
optim_wrapper = dict(type=OptimWrapper, optimizer=optimizer, clip_grad=None)

optimizer = dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0005)
optim_wrapper = dict(type='OptimWrapper', optimizer=optimizer, clip_grad=None)
# learning policy
param_scheduler = [
dict(
type=PolyLR,
type='PolyLR',
eta_min=1e-4,
power=0.9,
begin=0,
end=40000,
end=10000,
by_epoch=False)
]
# training schedule for 40k
train_cfg = dict(type=IterBasedTrainLoop, max_iters=40000, val_interval=4000)
val_cfg = dict(type=ValLoop)
test_cfg = dict(type=TestLoop)
train_cfg = dict(type='IterBasedTrainLoop', max_iters=10000, val_interval=1000)
val_cfg = dict(type='ValLoop')
test_cfg = dict(type='TestLoop')
default_hooks = dict(
timer=dict(type=IterTimerHook),
logger=dict(type=LoggerHook, interval=50, log_metric_by_epoch=False),
param_scheduler=dict(type=ParamSchedulerHook),
checkpoint=dict(type=CheckpointHook, by_epoch=False, interval=4000),
sampler_seed=dict(type=DistSamplerSeedHook),
visualization=dict(type=SegVisualizationHook))
timer=dict(type='IterTimerHook'),
logger=dict(type='LoggerHook', interval=50, log_metric_by_epoch=False),
param_scheduler=dict(type='ParamSchedulerHook'),
checkpoint=dict(type='CheckpointHook', by_epoch=False, interval=1000),
sampler_seed=dict(type='DistSamplerSeedHook'),
visualization=dict(type='SegVisualizationHook'))
4 changes: 3 additions & 1 deletion mmseg/datasets/__init__.py
@@ -1,3 +1,4 @@
#mmseg/datasets/__init__.py
# Copyright (c) OpenMMLab. All rights reserved.
# yapf: disable
from .ade import ADE20KDataset
@@ -26,6 +27,7 @@
from .refuge import REFUGEDataset
from .stare import STAREDataset
from .synapse import SynapseDataset
from .deepGlobe import DeepGlobeDataset
# yapf: disable
from .transforms import (CLAHE, AdjustGamma, Albu, BioMedical3DPad,
BioMedical3DRandomCrop, BioMedical3DRandomFlip,
@@ -61,5 +63,5 @@
'MapillaryDataset_v2', 'Albu', 'LEVIRCDDataset',
'LoadMultipleRSImageFromFile', 'LoadSingleRSImageFromFile',
'ConcatCDInput', 'BaseCDDataset', 'DSDLSegDataset', 'BDD100KDataset',
'NYUDataset', 'HSIDrive20Dataset'
'NYUDataset', 'HSIDrive20Dataset', 'DeepGlobeDataset'
]
48 changes: 48 additions & 0 deletions mmseg/datasets/deepGlobe.py
@@ -0,0 +1,48 @@
#mmseg/datasets/deepGlobe.py
# Copyright (c) OpenMMLab. All rights reserved.
from mmseg.registry import DATASETS
from .basesegdataset import BaseSegDataset

@DATASETS.register_module()
class DeepGlobeDataset(BaseSegDataset):
"""Deep Globe Dataset.
The ``img_suffix`` is fixed to '.jpg' and ``seg_map_suffix`` is
fixed to '_t.png' for Cityscapes dataset.
"""
METAINFO = dict(
classes=('Urban', 'Agriculture', 'Range', 'Forest', 'Water', 'Barren',
'Unknown'),
palette=[[0,255,255], [255,255,0], [255,0,255], [0,255,0],
[0,0,255], [255,255,255], [1,1,1]
])

class_dict={
"1": "Urban",
"2": "Agriculture",
"3": "Range",
"4": "Forest",
"5": "Water",
"6": "Barren",
"7": "Unknown"
}
color_map = [
[0,255,255], [255,255,0], [255,0,255], [0,255,0],
[0,0,255], [255,255,255], [1,1,1]
]

def __init__(self,
img_suffix='_sat.jpg',
seg_map_suffix='_mask.png',
**kwargs) -> None:
super().__init__(
img_suffix=img_suffix, seg_map_suffix=seg_map_suffix, **kwargs)
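The dataloaders train on `*_mask_grayscale` directories, but DeepGlobe ships RGB masks; the conversion script itself is not part of this diff. A hedged sketch of that step, using the palette order from `METAINFO` above (index == training label):

```python
import numpy as np

# Palette order matches METAINFO above.
PALETTE = [[0, 255, 255], [255, 255, 0], [255, 0, 255], [0, 255, 0],
           [0, 0, 255], [255, 255, 255], [1, 1, 1]]

def rgb_mask_to_index(mask):
    """Map an HxWx3 DeepGlobe color mask to an HxW class-index mask."""
    out = np.full(mask.shape[:2], 255, dtype=np.uint8)  # 255 = ignore index
    for idx, color in enumerate(PALETTE):
        out[np.all(mask == color, axis=-1)] = idx
    return out
```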
2 changes: 1 addition & 1 deletion mmseg/utils/__init__.py
@@ -2,7 +2,7 @@
# yapf: disable
from .class_names import (ade_classes, ade_palette, bdd100k_classes,
bdd100k_palette, cityscapes_classes,
cityscapes_palette, cocostuff_classes,
cityscapes_palette, deepGlobe_classes, deepGlobe_palette, cocostuff_classes,
cocostuff_palette, dataset_aliases, get_classes,
get_palette, isaid_classes, isaid_palette,
loveda_classes, loveda_palette, potsdam_classes,
12 changes: 12 additions & 0 deletions mmseg/utils/class_names.py
@@ -463,6 +463,12 @@ def bdd100k_classes():
'bicycle'
]

def deepGlobe_classes():
return [
'Urban', 'Agriculture', 'Range', 'Forest', 'Water', 'Barren',
'Unknown'
]


def bdd100k_palette():
"""bdd100k palette for external use(same with cityscapes)"""
@@ -487,9 +493,15 @@ def hsidrive_palette():
[0, 0, 255], [102, 51, 0], [255, 255, 0], [0, 207, 250],
[255, 166, 0], [0, 204, 204]]

def deepGlobe_palette():
"""DeepGlobe palette for external use."""
return [[0,255,255], [255,255,0], [255,0,255], [0,255,0],
[0,0,255], [255,255,255], [1,1,1]]


dataset_aliases = {
'cityscapes': ['cityscapes'],
'deepGlobe': ['deepGlobe'],
'ade': ['ade', 'ade20k'],
'voc': ['voc', 'pascal_voc', 'voc12', 'voc12aug'],
'pcontext': ['pcontext', 'pascal_context', 'voc2010'],
21 empty files added (filenames not rendered).
Empty file modified setup.py
100755 → 100644
Empty file.
Empty file modified tests/data/biomedical.nii.gz
100755 → 100644
Empty file.
Empty file modified tests/data/biomedical_ann.nii.gz
100755 → 100644
Empty file.
Empty file modified tests/data/dataset.json
100755 → 100644
Empty file.
Empty file modified tests/data/dsdl_seg/config.py
100755 → 100644
Empty file.
Empty file modified tests/data/dsdl_seg/defs/class-dom.yaml
100755 → 100644
Empty file.
Empty file modified tests/data/dsdl_seg/defs/segmentation-def.yaml
100755 → 100644
Empty file.
Empty file modified tests/data/dsdl_seg/set-train/train.yaml
100755 → 100644
Empty file.
Empty file modified tests/data/dsdl_seg/set-train/train_samples.json
100755 → 100644
Empty file.
Empty file modified tools/dist_test.sh
100755 → 100644
Empty file.
Empty file modified tools/dist_train.sh
100755 → 100644
Empty file.
78 changes: 78 additions & 0 deletions tools/model_test.py
@@ -0,0 +1,78 @@


import os
import mmcv
import numpy as np
import cv2
from mmseg.apis import init_model, inference_model

# Configuration and checkpoint paths
CONFIG_FILE = '/Users/octaviofenollosa/GithubWeb/AIDeep/mmsegmentationEQ2/configs/hrnet/fcn_hr18_4xb2-40k_deepglobe-512x1024.py'
CHECKPOINT_FILE = '/Users/octaviofenollosa/GithubWeb/AIDeep/mmsegmentationEQ2/work-dir/HRNet/iter_10000.pth'
DEVICE = 'cpu'



# Custom color palette
PALETTE = np.array([
[0, 255, 255],
[255, 255, 0],
[255, 0, 255],
[0, 255, 0],
[0, 0, 255],
[255, 255, 255],
[0, 0, 0]
], dtype=np.uint8)

# Directories
IMAGE_DIR = '/Users/octaviofenollosa/GithubWeb/AIDeep/mmsegmentationEQ2/Test_model'
OUTPUT_DIR = '/Users/octaviofenollosa/GithubWeb/AIDeep/mmsegmentationEQ2/PrediccionesHRNet'

def initialize_model(config_path, checkpoint_path, device):
"""Initialize the segmentation model."""
return init_model(config_path, checkpoint_path, device=device)

def create_output_directory(directory):
"""Create the output directory if it doesn't exist."""
if not os.path.exists(directory):
os.makedirs(directory)

def get_image_paths(directory):
"""Retrieve image paths from the directory, skipping non-image files."""
return [os.path.join(directory, name) for name in sorted(os.listdir(directory))
if name.lower().endswith(('.jpg', '.jpeg', '.png'))]

def postprocess_mask(mask):
"""Post-process the predicted mask."""
# Apply morphological operations to refine the mask
kernel = np.ones((3, 3), np.uint8)
refined_mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
return refined_mask

def process_image(model, img_path):
"""Run inference on a single image and return the color mask."""
result = inference_model(model, img_path)
mask = result.pred_sem_seg.data.cpu().numpy().astype(np.uint8).squeeze(0)
mask = postprocess_mask(mask)
return PALETTE[mask]

def save_color_mask(color_mask, output_path):
"""Save the color mask to the specified path."""
mmcv.imwrite(color_mask, output_path)

def main():
"""Main function to run the segmentation inference."""
model = initialize_model(CONFIG_FILE, CHECKPOINT_FILE, DEVICE)
create_output_directory(OUTPUT_DIR)

for img_path in get_image_paths(IMAGE_DIR):
color_mask = process_image(model, img_path)
if color_mask.size > 0:
output_path = os.path.join(OUTPUT_DIR, os.path.basename(img_path))
save_color_mask(color_mask, output_path)
else:
print(f"Skipping {img_path} due to invalid mask")

print(f"Inference completed. Results saved to: {OUTPUT_DIR}")

if __name__ == "__main__":
main()
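For context on the `return PALETTE[mask]` step above: NumPy advanced indexing maps each class ID in the (H, W) mask to its palette row, producing an (H, W, 3) color image in one vectorized lookup. A minimal sketch with a truncated toy palette (not the script's full 7-class palette):

```python
import numpy as np

# Toy palette: one RGB row per class index
PALETTE = np.array([
    [0, 255, 255],   # class 0
    [255, 255, 0],   # class 1
    [0, 0, 0],       # class 2
], dtype=np.uint8)

mask = np.array([[0, 1],
                 [2, 0]], dtype=np.uint8)  # toy 2x2 prediction

# Advanced indexing: every class ID selects its RGB row
color = PALETTE[mask]
print(color.shape)  # (2, 2, 3)
```

The same pattern scales to full-resolution masks without any Python-level loop.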
Empty file modified tools/slurm_test.sh
100755 → 100644
Empty file.
Empty file modified tools/slurm_train.sh
100755 → 100644
Empty file.
221 changes: 221 additions & 0 deletions work-dir/CCNET/20240605_012655/vis_data/20240605_012655.json

Large diffs are not rendered by default.

294 changes: 294 additions & 0 deletions work-dir/CCNET/20240605_012655/vis_data/config.py
@@ -0,0 +1,294 @@
crop_size = (
256,
256,
)
data_preprocessor = dict(
bgr_to_rgb=True,
mean=[
0.4082,
0.3791,
0.2815,
],
pad_val=0,
seg_pad_val=255,
size=(
256,
256,
),
std=[
0.1451,
0.1116,
0.1013,
],
type='SegDataPreProcessor')
data_root = 'data/deepglobe_ds/'
dataset_type = 'DeepGlobeDataset'
default_hooks = dict(
checkpoint=dict(by_epoch=False, interval=4000, type='CheckpointHook'),
logger=dict(interval=50, log_metric_by_epoch=False, type='LoggerHook'),
param_scheduler=dict(type='ParamSchedulerHook'),
sampler_seed=dict(type='DistSamplerSeedHook'),
timer=dict(type='IterTimerHook'),
visualization=dict(type='SegVisualizationHook'))
default_scope = 'mmseg'
env_cfg = dict(
cudnn_benchmark=True,
dist_cfg=dict(backend='nccl'),
mp_cfg=dict(mp_start_method='fork', opencv_num_threads=0))
img_ratios = [
0.5,
0.75,
1.0,
1.25,
1.5,
1.75,
]
launcher = 'none'
load_from = None
log_level = 'INFO'
log_processor = dict(by_epoch=False)
model = dict(
auxiliary_head=dict(
align_corners=False,
channels=256,
concat_input=False,
dropout_ratio=0.1,
in_channels=1024,
in_index=2,
loss_decode=dict(
loss_weight=0.4, type='CrossEntropyLoss', use_sigmoid=False),
norm_cfg=dict(requires_grad=True, type='SyncBN'),
num_classes=7,
num_convs=1,
type='FCNHead'),
backbone=dict(
contract_dilation=True,
depth=50,
dilations=(
1,
1,
2,
4,
),
norm_cfg=dict(requires_grad=True, type='SyncBN'),
norm_eval=False,
num_stages=4,
out_indices=(
0,
1,
2,
3,
),
strides=(
1,
2,
1,
1,
),
style='pytorch',
type='ResNetV1c'),
data_preprocessor=dict(
bgr_to_rgb=True,
mean=[
0.4082,
0.3791,
0.2815,
],
pad_val=0,
seg_pad_val=255,
size=(
256,
256,
),
std=[
0.1451,
0.1116,
0.1013,
],
type='SegDataPreProcessor'),
decode_head=dict(
align_corners=False,
channels=512,
dropout_ratio=0.1,
in_channels=2048,
in_index=3,
loss_decode=dict(
loss_weight=1.0, type='CrossEntropyLoss', use_sigmoid=False),
norm_cfg=dict(requires_grad=True, type='SyncBN'),
num_classes=7,
recurrence=2,
type='CCHead'),
pretrained='open-mmlab://resnet50_v1c',
test_cfg=dict(mode='whole'),
train_cfg=dict(),
type='EncoderDecoder')
norm_cfg = dict(requires_grad=True, type='SyncBN')
optim_wrapper = dict(
clip_grad=None,
optimizer=dict(lr=0.01, momentum=0.9, type='SGD', weight_decay=0.0005),
type='OptimWrapper')
optimizer = dict(lr=0.01, momentum=0.9, type='SGD', weight_decay=0.0005)
param_scheduler = [
dict(
begin=0,
by_epoch=False,
end=10000,
eta_min=0.0001,
power=0.9,
type='PolyLR'),
]
resume = False
test_cfg = dict(type='TestLoop')
test_dataloader = dict(
batch_size=8,
dataset=dict(
data_prefix=dict(
img_path='img_dir/val_sat',
seg_map_path='ann_dir/val_mask_grayscale'),
data_root='data/deepglobe_ds/',
pipeline=[
dict(type='LoadImageFromFile'),
dict(keep_ratio=True, scale=(
512,
512,
), type='Resize'),
dict(type='LoadAnnotations'),
dict(type='PackSegInputs'),
],
type='DeepGlobeDataset'),
num_workers=4,
persistent_workers=True,
sampler=dict(shuffle=False, type='DefaultSampler'))
test_evaluator = dict(
iou_metrics=[
'mIoU',
], type='IoUMetric')
test_pipeline = [
dict(type='LoadImageFromFile'),
dict(keep_ratio=True, scale=(
512,
512,
), type='Resize'),
dict(type='LoadAnnotations'),
dict(type='PackSegInputs'),
]
train_cfg = dict(max_iters=10000, type='IterBasedTrainLoop', val_interval=500)
train_dataloader = dict(
batch_size=32,
dataset=dict(
data_prefix=dict(
img_path='img_dir/train_sat',
seg_map_path='ann_dir/train_mask_grayscale'),
data_root='data/deepglobe_ds/',
pipeline=[
dict(type='LoadImageFromFile'),
dict(type='LoadAnnotations'),
dict(
keep_ratio=True,
ratio_range=(
0.5,
2.0,
),
scale=(
512,
512,
),
type='RandomResize'),
dict(
cat_max_ratio=0.75, crop_size=(
256,
256,
), type='RandomCrop'),
dict(prob=0.5, type='RandomFlip'),
dict(type='PhotoMetricDistortion'),
dict(type='PackSegInputs'),
],
type='DeepGlobeDataset'),
num_workers=4,
persistent_workers=True,
sampler=dict(shuffle=True, type='InfiniteSampler'))
train_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='LoadAnnotations'),
dict(
keep_ratio=True,
ratio_range=(
0.5,
2.0,
),
scale=(
512,
512,
),
type='RandomResize'),
dict(cat_max_ratio=0.75, crop_size=(
256,
256,
), type='RandomCrop'),
dict(prob=0.5, type='RandomFlip'),
dict(type='PhotoMetricDistortion'),
dict(type='PackSegInputs'),
]
tta_model = dict(type='SegTTAModel')
tta_pipeline = [
dict(backend_args=None, type='LoadImageFromFile'),
dict(
transforms=[
[
dict(keep_ratio=True, scale_factor=0.5, type='Resize'),
dict(keep_ratio=True, scale_factor=0.75, type='Resize'),
dict(keep_ratio=True, scale_factor=1.0, type='Resize'),
dict(keep_ratio=True, scale_factor=1.25, type='Resize'),
dict(keep_ratio=True, scale_factor=1.5, type='Resize'),
dict(keep_ratio=True, scale_factor=1.75, type='Resize'),
],
[
dict(direction='horizontal', prob=0.0, type='RandomFlip'),
dict(direction='horizontal', prob=1.0, type='RandomFlip'),
],
[
dict(type='LoadAnnotations'),
],
[
dict(type='PackSegInputs'),
],
],
type='TestTimeAug'),
]
val_cfg = dict(type='ValLoop')
val_dataloader = dict(
batch_size=8,
dataset=dict(
data_prefix=dict(
img_path='img_dir/val_sat',
seg_map_path='ann_dir/val_mask_grayscale'),
data_root='data/deepglobe_ds/',
pipeline=[
dict(type='LoadImageFromFile'),
dict(keep_ratio=True, scale=(
512,
512,
), type='Resize'),
dict(type='LoadAnnotations'),
dict(type='PackSegInputs'),
],
type='DeepGlobeDataset'),
num_workers=4,
persistent_workers=True,
sampler=dict(shuffle=False, type='DefaultSampler'))
val_evaluator = dict(
iou_metrics=[
'mIoU',
], type='IoUMetric')
vis_backends = [
dict(type='LocalVisBackend'),
dict(type='TensorboardVisBackend'),
]
visualizer = dict(
name='visualizer',
type='SegLocalVisualizer',
vis_backends=[
dict(type='LocalVisBackend'),
dict(type='TensorboardVisBackend'),
])
work_dir = '/content/drive/MyDrive/Equipo2AI/CCNET'
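The `mean`/`std` values in this config (0.4082 …, 0.1451 …) look like per-channel statistics computed over images scaled to [0, 1]; that is an assumption about how they were produced, not something stated in the PR. A minimal sketch of such a computation:

```python
import numpy as np

def channel_stats(images):
    # Per-channel mean/std over an (N, H, W, 3) stack of float images
    # in [0, 1] -- a hypothetical shape, matching the assumption above.
    pixels = images.reshape(-1, 3)
    return pixels.mean(axis=0), pixels.std(axis=0)

# Toy stand-in for the DeepGlobe training images
imgs = np.random.default_rng(0).random((4, 8, 8, 3))
mean, std = channel_stats(imgs)
print(mean.shape, std.shape)  # (3,) (3,)
```

Whichever scale is used, the preprocessor's mean/std must match the range of the images it actually receives.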
Binary file not shown.
221 changes: 221 additions & 0 deletions work-dir/CCNET/20240605_012655/vis_data/scalars.json

Large diffs are not rendered by default.

294 changes: 294 additions & 0 deletions work-dir/CCNET/ccnet_r50-d8_4xb2-40k_deepglobe-256x256.py
@@ -0,0 +1,294 @@
crop_size = (
256,
256,
)
data_preprocessor = dict(
bgr_to_rgb=True,
mean=[
0.4082,
0.3791,
0.2815,
],
pad_val=0,
seg_pad_val=255,
size=(
256,
256,
),
std=[
0.1451,
0.1116,
0.1013,
],
type='SegDataPreProcessor')
data_root = 'data/deepglobe_ds/'
dataset_type = 'DeepGlobeDataset'
default_hooks = dict(
checkpoint=dict(by_epoch=False, interval=4000, type='CheckpointHook'),
logger=dict(interval=50, log_metric_by_epoch=False, type='LoggerHook'),
param_scheduler=dict(type='ParamSchedulerHook'),
sampler_seed=dict(type='DistSamplerSeedHook'),
timer=dict(type='IterTimerHook'),
visualization=dict(type='SegVisualizationHook'))
default_scope = 'mmseg'
env_cfg = dict(
cudnn_benchmark=True,
dist_cfg=dict(backend='nccl'),
mp_cfg=dict(mp_start_method='fork', opencv_num_threads=0))
img_ratios = [
0.5,
0.75,
1.0,
1.25,
1.5,
1.75,
]
launcher = 'none'
load_from = None
log_level = 'INFO'
log_processor = dict(by_epoch=False)
model = dict(
auxiliary_head=dict(
align_corners=False,
channels=256,
concat_input=False,
dropout_ratio=0.1,
in_channels=1024,
in_index=2,
loss_decode=dict(
loss_weight=0.4, type='CrossEntropyLoss', use_sigmoid=False),
norm_cfg=dict(requires_grad=True, type='SyncBN'),
num_classes=7,
num_convs=1,
type='FCNHead'),
backbone=dict(
contract_dilation=True,
depth=50,
dilations=(
1,
1,
2,
4,
),
norm_cfg=dict(requires_grad=True, type='SyncBN'),
norm_eval=False,
num_stages=4,
out_indices=(
0,
1,
2,
3,
),
strides=(
1,
2,
1,
1,
),
style='pytorch',
type='ResNetV1c'),
data_preprocessor=dict(
bgr_to_rgb=True,
mean=[
0.4082,
0.3791,
0.2815,
],
pad_val=0,
seg_pad_val=255,
size=(
256,
256,
),
std=[
0.1451,
0.1116,
0.1013,
],
type='SegDataPreProcessor'),
decode_head=dict(
align_corners=False,
channels=512,
dropout_ratio=0.1,
in_channels=2048,
in_index=3,
loss_decode=dict(
loss_weight=1.0, type='CrossEntropyLoss', use_sigmoid=False),
norm_cfg=dict(requires_grad=True, type='SyncBN'),
num_classes=7,
recurrence=2,
type='CCHead'),
pretrained='open-mmlab://resnet50_v1c',
test_cfg=dict(mode='whole'),
train_cfg=dict(),
type='EncoderDecoder')
norm_cfg = dict(requires_grad=True, type='SyncBN')
optim_wrapper = dict(
clip_grad=None,
optimizer=dict(lr=0.01, momentum=0.9, type='SGD', weight_decay=0.0005),
type='OptimWrapper')
optimizer = dict(lr=0.01, momentum=0.9, type='SGD', weight_decay=0.0005)
param_scheduler = [
dict(
begin=0,
by_epoch=False,
end=10000,
eta_min=0.0001,
power=0.9,
type='PolyLR'),
]
resume = False
test_cfg = dict(type='TestLoop')
test_dataloader = dict(
batch_size=8,
dataset=dict(
data_prefix=dict(
img_path='img_dir/val_sat',
seg_map_path='ann_dir/val_mask_grayscale'),
data_root='data/deepglobe_ds/',
pipeline=[
dict(type='LoadImageFromFile'),
dict(keep_ratio=True, scale=(
512,
512,
), type='Resize'),
dict(type='LoadAnnotations'),
dict(type='PackSegInputs'),
],
type='DeepGlobeDataset'),
num_workers=4,
persistent_workers=True,
sampler=dict(shuffle=False, type='DefaultSampler'))
test_evaluator = dict(
iou_metrics=[
'mIoU',
], type='IoUMetric')
test_pipeline = [
dict(type='LoadImageFromFile'),
dict(keep_ratio=True, scale=(
512,
512,
), type='Resize'),
dict(type='LoadAnnotations'),
dict(type='PackSegInputs'),
]
train_cfg = dict(max_iters=10000, type='IterBasedTrainLoop', val_interval=500)
train_dataloader = dict(
batch_size=32,
dataset=dict(
data_prefix=dict(
img_path='img_dir/train_sat',
seg_map_path='ann_dir/train_mask_grayscale'),
data_root='data/deepglobe_ds/',
pipeline=[
dict(type='LoadImageFromFile'),
dict(type='LoadAnnotations'),
dict(
keep_ratio=True,
ratio_range=(
0.5,
2.0,
),
scale=(
512,
512,
),
type='RandomResize'),
dict(
cat_max_ratio=0.75, crop_size=(
256,
256,
), type='RandomCrop'),
dict(prob=0.5, type='RandomFlip'),
dict(type='PhotoMetricDistortion'),
dict(type='PackSegInputs'),
],
type='DeepGlobeDataset'),
num_workers=4,
persistent_workers=True,
sampler=dict(shuffle=True, type='InfiniteSampler'))
train_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='LoadAnnotations'),
dict(
keep_ratio=True,
ratio_range=(
0.5,
2.0,
),
scale=(
512,
512,
),
type='RandomResize'),
dict(cat_max_ratio=0.75, crop_size=(
256,
256,
), type='RandomCrop'),
dict(prob=0.5, type='RandomFlip'),
dict(type='PhotoMetricDistortion'),
dict(type='PackSegInputs'),
]
tta_model = dict(type='SegTTAModel')
tta_pipeline = [
dict(backend_args=None, type='LoadImageFromFile'),
dict(
transforms=[
[
dict(keep_ratio=True, scale_factor=0.5, type='Resize'),
dict(keep_ratio=True, scale_factor=0.75, type='Resize'),
dict(keep_ratio=True, scale_factor=1.0, type='Resize'),
dict(keep_ratio=True, scale_factor=1.25, type='Resize'),
dict(keep_ratio=True, scale_factor=1.5, type='Resize'),
dict(keep_ratio=True, scale_factor=1.75, type='Resize'),
],
[
dict(direction='horizontal', prob=0.0, type='RandomFlip'),
dict(direction='horizontal', prob=1.0, type='RandomFlip'),
],
[
dict(type='LoadAnnotations'),
],
[
dict(type='PackSegInputs'),
],
],
type='TestTimeAug'),
]
val_cfg = dict(type='ValLoop')
val_dataloader = dict(
batch_size=8,
dataset=dict(
data_prefix=dict(
img_path='img_dir/val_sat',
seg_map_path='ann_dir/val_mask_grayscale'),
data_root='data/deepglobe_ds/',
pipeline=[
dict(type='LoadImageFromFile'),
dict(keep_ratio=True, scale=(
512,
512,
), type='Resize'),
dict(type='LoadAnnotations'),
dict(type='PackSegInputs'),
],
type='DeepGlobeDataset'),
num_workers=4,
persistent_workers=True,
sampler=dict(shuffle=False, type='DefaultSampler'))
val_evaluator = dict(
iou_metrics=[
'mIoU',
], type='IoUMetric')
vis_backends = [
dict(type='LocalVisBackend'),
dict(type='TensorboardVisBackend'),
]
visualizer = dict(
name='visualizer',
type='SegLocalVisualizer',
vis_backends=[
dict(type='LocalVisBackend'),
dict(type='TensorboardVisBackend'),
])
work_dir = '/content/drive/MyDrive/Equipo2AI/CCNET'
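The `param_scheduler` above configures `PolyLR` with `end=10000`, `power=0.9`, `eta_min=0.0001`. Under the usual polynomial-decay formula, lr = (lr0 - eta_min) * (1 - iter/max_iter)^power + eta_min; the sketch below follows that formula, not mmengine's exact code:

```python
def poly_lr(base_lr, eta_min, cur_iter, max_iter, power=0.9):
    # Polynomial decay from base_lr at iter 0 down to eta_min at max_iter
    coeff = (1 - cur_iter / max_iter) ** power
    return (base_lr - eta_min) * coeff + eta_min

lr_start = poly_lr(0.01, 0.0001, 0, 10000)       # equals base_lr
lr_end = poly_lr(0.01, 0.0001, 10000, 10000)     # equals eta_min
lr_mid = poly_lr(0.01, 0.0001, 5000, 10000)      # somewhere in between
```

With `by_epoch=False`, the decay is stepped per iteration over the 10 000-iteration run.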
211 changes: 211 additions & 0 deletions work-dir/DeepLab/20240605_012836/vis_data/20240605_012836.json

Large diffs are not rendered by default.

299 changes: 299 additions & 0 deletions work-dir/DeepLab/20240605_012836/vis_data/config.py
@@ -0,0 +1,299 @@
crop_size = (
256,
256,
)
data_preprocessor = dict(
bgr_to_rgb=True,
mean=[
0.4082,
0.3791,
0.2815,
],
pad_val=0,
seg_pad_val=255,
size=(
256,
256,
),
std=[
0.1351,
0.1022,
0.0931,
],
type='SegDataPreProcessor')
data_root = 'data/deepglobe_ds/'
dataset_type = 'DeepGlobeDataset'
default_hooks = dict(
checkpoint=dict(by_epoch=False, interval=4000, type='CheckpointHook'),
logger=dict(interval=50, log_metric_by_epoch=False, type='LoggerHook'),
param_scheduler=dict(type='ParamSchedulerHook'),
sampler_seed=dict(type='DistSamplerSeedHook'),
timer=dict(type='IterTimerHook'),
visualization=dict(type='SegVisualizationHook'))
default_scope = 'mmseg'
env_cfg = dict(
cudnn_benchmark=True,
dist_cfg=dict(backend='nccl'),
mp_cfg=dict(mp_start_method='fork', opencv_num_threads=0))
img_ratios = [
0.5,
0.75,
1.0,
1.25,
1.5,
1.75,
]
launcher = 'none'
load_from = None
log_level = 'INFO'
log_processor = dict(by_epoch=False)
model = dict(
auxiliary_head=dict(
align_corners=False,
channels=256,
concat_input=False,
dropout_ratio=0.1,
in_channels=1024,
in_index=2,
loss_decode=dict(
loss_weight=0.4, type='CrossEntropyLoss', use_sigmoid=False),
norm_cfg=dict(requires_grad=True, type='SyncBN'),
num_classes=7,
num_convs=1,
type='FCNHead'),
backbone=dict(
contract_dilation=True,
depth=50,
dilations=(
1,
1,
2,
4,
),
norm_cfg=dict(requires_grad=True, type='SyncBN'),
norm_eval=False,
num_stages=4,
out_indices=(
0,
1,
2,
3,
),
strides=(
1,
2,
1,
1,
),
style='pytorch',
type='ResNetV1c'),
data_preprocessor=dict(
bgr_to_rgb=True,
mean=[
0.4082,
0.3791,
0.2815,
],
pad_val=0,
seg_pad_val=255,
size=(
256,
256,
),
std=[
0.1351,
0.1022,
0.0931,
],
type='SegDataPreProcessor'),
decode_head=dict(
align_corners=False,
channels=512,
dilations=(
1,
12,
24,
36,
),
dropout_ratio=0.1,
in_channels=2048,
in_index=3,
loss_decode=dict(
loss_weight=1.0, type='CrossEntropyLoss', use_sigmoid=False),
norm_cfg=dict(requires_grad=True, type='SyncBN'),
num_classes=7,
type='ASPPHead'),
pretrained='open-mmlab://resnet50_v1c',
test_cfg=dict(mode='whole'),
train_cfg=dict(),
type='EncoderDecoder')
norm_cfg = dict(requires_grad=True, type='SyncBN')
optim_wrapper = dict(
clip_grad=None,
optimizer=dict(lr=0.01, momentum=0.9, type='SGD', weight_decay=0.0005),
type='OptimWrapper')
optimizer = dict(lr=0.01, momentum=0.9, type='SGD', weight_decay=0.0005)
param_scheduler = [
dict(
begin=0,
by_epoch=False,
end=10000,
eta_min=0.0001,
power=0.9,
type='PolyLR'),
]
resume = False
test_cfg = dict(type='TestLoop')
test_dataloader = dict(
batch_size=8,
dataset=dict(
data_prefix=dict(
img_path='img_dir/val_sat',
seg_map_path='ann_dir/val_mask_grayscale'),
data_root='data/deepglobe_ds/',
pipeline=[
dict(type='LoadImageFromFile'),
dict(keep_ratio=True, scale=(
512,
512,
), type='Resize'),
dict(type='LoadAnnotations'),
dict(type='PackSegInputs'),
],
type='DeepGlobeDataset'),
num_workers=4,
persistent_workers=True,
sampler=dict(shuffle=False, type='DefaultSampler'))
test_evaluator = dict(
iou_metrics=[
'mIoU',
], type='IoUMetric')
test_pipeline = [
dict(type='LoadImageFromFile'),
dict(keep_ratio=True, scale=(
512,
512,
), type='Resize'),
dict(type='LoadAnnotations'),
dict(type='PackSegInputs'),
]
train_cfg = dict(max_iters=10000, type='IterBasedTrainLoop', val_interval=1000)
train_dataloader = dict(
batch_size=32,
dataset=dict(
data_prefix=dict(
img_path='img_dir/train_sat',
seg_map_path='ann_dir/train_mask_grayscale'),
data_root='data/deepglobe_ds/',
pipeline=[
dict(type='LoadImageFromFile'),
dict(type='LoadAnnotations'),
dict(
keep_ratio=True,
ratio_range=(
0.5,
2.0,
),
scale=(
512,
512,
),
type='RandomResize'),
dict(
cat_max_ratio=0.75, crop_size=(
256,
256,
), type='RandomCrop'),
dict(prob=0.5, type='RandomFlip'),
dict(type='PhotoMetricDistortion'),
dict(type='PackSegInputs'),
],
type='DeepGlobeDataset'),
num_workers=4,
persistent_workers=True,
sampler=dict(shuffle=True, type='InfiniteSampler'))
train_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='LoadAnnotations'),
dict(
keep_ratio=True,
ratio_range=(
0.5,
2.0,
),
scale=(
512,
512,
),
type='RandomResize'),
dict(cat_max_ratio=0.75, crop_size=(
256,
256,
), type='RandomCrop'),
dict(prob=0.5, type='RandomFlip'),
dict(type='PhotoMetricDistortion'),
dict(type='PackSegInputs'),
]
tta_model = dict(type='SegTTAModel')
tta_pipeline = [
dict(backend_args=None, type='LoadImageFromFile'),
dict(
transforms=[
[
dict(keep_ratio=True, scale_factor=0.5, type='Resize'),
dict(keep_ratio=True, scale_factor=0.75, type='Resize'),
dict(keep_ratio=True, scale_factor=1.0, type='Resize'),
dict(keep_ratio=True, scale_factor=1.25, type='Resize'),
dict(keep_ratio=True, scale_factor=1.5, type='Resize'),
dict(keep_ratio=True, scale_factor=1.75, type='Resize'),
],
[
dict(direction='horizontal', prob=0.0, type='RandomFlip'),
dict(direction='horizontal', prob=1.0, type='RandomFlip'),
],
[
dict(type='LoadAnnotations'),
],
[
dict(type='PackSegInputs'),
],
],
type='TestTimeAug'),
]
val_cfg = dict(type='ValLoop')
val_dataloader = dict(
batch_size=8,
dataset=dict(
data_prefix=dict(
img_path='img_dir/val_sat',
seg_map_path='ann_dir/val_mask_grayscale'),
data_root='data/deepglobe_ds/',
pipeline=[
dict(type='LoadImageFromFile'),
dict(keep_ratio=True, scale=(
512,
512,
), type='Resize'),
dict(type='LoadAnnotations'),
dict(type='PackSegInputs'),
],
type='DeepGlobeDataset'),
num_workers=4,
persistent_workers=True,
sampler=dict(shuffle=False, type='DefaultSampler'))
val_evaluator = dict(
iou_metrics=[
'mIoU',
], type='IoUMetric')
vis_backends = [
dict(type='LocalVisBackend'),
dict(type='TensorboardVisBackend'),
]
visualizer = dict(
name='visualizer',
type='SegLocalVisualizer',
vis_backends=[
dict(type='LocalVisBackend'),
dict(type='TensorboardVisBackend'),
])
work_dir = '/content/drive/MyDrive/Equipo2AI/work-dir/DeepLab'
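In the train pipeline above, `RandomResize` with `scale=(512, 512)` and `ratio_range=(0.5, 2.0)` rescales the base size by a sampled ratio, so the pre-crop image lands between 256 and 1024 pixels per side. A sketch of that sampling, under the assumption that the ratio is drawn uniformly:

```python
import random

def sample_scale(base=(512, 512), ratio_range=(0.5, 2.0)):
    # Sample one ratio and apply it to both sides of the base scale
    # (keep_ratio=True preserves the image's own aspect ratio downstream).
    r = random.uniform(*ratio_range)
    return (int(base[0] * r), int(base[1] * r))

random.seed(0)
scales = [sample_scale() for _ in range(5)]  # e.g. sizes in [256, 1024]
```

The subsequent `RandomCrop` then cuts a fixed 256x256 patch out of the rescaled image.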
Binary file not shown.
211 changes: 211 additions & 0 deletions work-dir/DeepLab/20240605_012836/vis_data/scalars.json

Large diffs are not rendered by default.

299 changes: 299 additions & 0 deletions work-dir/DeepLab/deeplabv3_r50-d8_4xb2-40k_deepglobe-512x1024.py
@@ -0,0 +1,299 @@
crop_size = (
256,
256,
)
data_preprocessor = dict(
bgr_to_rgb=True,
mean=[
0.4082,
0.3791,
0.2815,
],
pad_val=0,
seg_pad_val=255,
size=(
256,
256,
),
std=[
0.1351,
0.1022,
0.0931,
],
type='SegDataPreProcessor')
data_root = 'data/deepglobe_ds/'
dataset_type = 'DeepGlobeDataset'
default_hooks = dict(
checkpoint=dict(by_epoch=False, interval=4000, type='CheckpointHook'),
logger=dict(interval=50, log_metric_by_epoch=False, type='LoggerHook'),
param_scheduler=dict(type='ParamSchedulerHook'),
sampler_seed=dict(type='DistSamplerSeedHook'),
timer=dict(type='IterTimerHook'),
visualization=dict(type='SegVisualizationHook'))
default_scope = 'mmseg'
env_cfg = dict(
cudnn_benchmark=True,
dist_cfg=dict(backend='nccl'),
mp_cfg=dict(mp_start_method='fork', opencv_num_threads=0))
img_ratios = [
0.5,
0.75,
1.0,
1.25,
1.5,
1.75,
]
launcher = 'none'
load_from = None
log_level = 'INFO'
log_processor = dict(by_epoch=False)
model = dict(
auxiliary_head=dict(
align_corners=False,
channels=256,
concat_input=False,
dropout_ratio=0.1,
in_channels=1024,
in_index=2,
loss_decode=dict(
loss_weight=0.4, type='CrossEntropyLoss', use_sigmoid=False),
norm_cfg=dict(requires_grad=True, type='SyncBN'),
num_classes=7,
num_convs=1,
type='FCNHead'),
backbone=dict(
contract_dilation=True,
depth=50,
dilations=(
1,
1,
2,
4,
),
norm_cfg=dict(requires_grad=True, type='SyncBN'),
norm_eval=False,
num_stages=4,
out_indices=(
0,
1,
2,
3,
),
strides=(
1,
2,
1,
1,
),
style='pytorch',
type='ResNetV1c'),
data_preprocessor=dict(
bgr_to_rgb=True,
mean=[
0.4082,
0.3791,
0.2815,
],
pad_val=0,
seg_pad_val=255,
size=(
256,
256,
),
std=[
0.1351,
0.1022,
0.0931,
],
type='SegDataPreProcessor'),
decode_head=dict(
align_corners=False,
channels=512,
dilations=(
1,
12,
24,
36,
),
dropout_ratio=0.1,
in_channels=2048,
in_index=3,
loss_decode=dict(
loss_weight=1.0, type='CrossEntropyLoss', use_sigmoid=False),
norm_cfg=dict(requires_grad=True, type='SyncBN'),
num_classes=7,
type='ASPPHead'),
pretrained='open-mmlab://resnet50_v1c',
test_cfg=dict(mode='whole'),
train_cfg=dict(),
type='EncoderDecoder')
norm_cfg = dict(requires_grad=True, type='SyncBN')
optim_wrapper = dict(
clip_grad=None,
optimizer=dict(lr=0.01, momentum=0.9, type='SGD', weight_decay=0.0005),
type='OptimWrapper')
optimizer = dict(lr=0.01, momentum=0.9, type='SGD', weight_decay=0.0005)
param_scheduler = [
dict(
begin=0,
by_epoch=False,
end=10000,
eta_min=0.0001,
power=0.9,
type='PolyLR'),
]
resume = False
test_cfg = dict(type='TestLoop')
test_dataloader = dict(
batch_size=8,
dataset=dict(
data_prefix=dict(
img_path='img_dir/val_sat',
seg_map_path='ann_dir/val_mask_grayscale'),
data_root='data/deepglobe_ds/',
pipeline=[
dict(type='LoadImageFromFile'),
dict(keep_ratio=True, scale=(
512,
512,
), type='Resize'),
dict(type='LoadAnnotations'),
dict(type='PackSegInputs'),
],
type='DeepGlobeDataset'),
num_workers=4,
persistent_workers=True,
sampler=dict(shuffle=False, type='DefaultSampler'))
test_evaluator = dict(
iou_metrics=[
'mIoU',
], type='IoUMetric')
test_pipeline = [
dict(type='LoadImageFromFile'),
dict(keep_ratio=True, scale=(
512,
512,
), type='Resize'),
dict(type='LoadAnnotations'),
dict(type='PackSegInputs'),
]
train_cfg = dict(max_iters=10000, type='IterBasedTrainLoop', val_interval=1000)
train_dataloader = dict(
batch_size=32,
dataset=dict(
data_prefix=dict(
img_path='img_dir/train_sat',
seg_map_path='ann_dir/train_mask_grayscale'),
data_root='data/deepglobe_ds/',
pipeline=[
dict(type='LoadImageFromFile'),
dict(type='LoadAnnotations'),
dict(
keep_ratio=True,
ratio_range=(
0.5,
2.0,
),
scale=(
512,
512,
),
type='RandomResize'),
dict(
cat_max_ratio=0.75, crop_size=(
256,
256,
), type='RandomCrop'),
dict(prob=0.5, type='RandomFlip'),
dict(type='PhotoMetricDistortion'),
dict(type='PackSegInputs'),
],
type='DeepGlobeDataset'),
num_workers=4,
persistent_workers=True,
sampler=dict(shuffle=True, type='InfiniteSampler'))
train_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='LoadAnnotations'),
dict(
keep_ratio=True,
ratio_range=(
0.5,
2.0,
),
scale=(
512,
512,
),
type='RandomResize'),
dict(cat_max_ratio=0.75, crop_size=(
256,
256,
), type='RandomCrop'),
dict(prob=0.5, type='RandomFlip'),
dict(type='PhotoMetricDistortion'),
dict(type='PackSegInputs'),
]
tta_model = dict(type='SegTTAModel')
tta_pipeline = [
dict(backend_args=None, type='LoadImageFromFile'),
dict(
transforms=[
[
dict(keep_ratio=True, scale_factor=0.5, type='Resize'),
dict(keep_ratio=True, scale_factor=0.75, type='Resize'),
dict(keep_ratio=True, scale_factor=1.0, type='Resize'),
dict(keep_ratio=True, scale_factor=1.25, type='Resize'),
dict(keep_ratio=True, scale_factor=1.5, type='Resize'),
dict(keep_ratio=True, scale_factor=1.75, type='Resize'),
],
[
dict(direction='horizontal', prob=0.0, type='RandomFlip'),
dict(direction='horizontal', prob=1.0, type='RandomFlip'),
],
[
dict(type='LoadAnnotations'),
],
[
dict(type='PackSegInputs'),
],
],
type='TestTimeAug'),
]
val_cfg = dict(type='ValLoop')
val_dataloader = dict(
batch_size=8,
dataset=dict(
data_prefix=dict(
img_path='img_dir/val_sat',
seg_map_path='ann_dir/val_mask_grayscale'),
data_root='data/deepglobe_ds/',
pipeline=[
dict(type='LoadImageFromFile'),
dict(keep_ratio=True, scale=(
512,
512,
), type='Resize'),
dict(type='LoadAnnotations'),
dict(type='PackSegInputs'),
],
type='DeepGlobeDataset'),
num_workers=4,
persistent_workers=True,
sampler=dict(shuffle=False, type='DefaultSampler'))
val_evaluator = dict(
iou_metrics=[
'mIoU',
], type='IoUMetric')
vis_backends = [
dict(type='LocalVisBackend'),
dict(type='TensorboardVisBackend'),
]
visualizer = dict(
name='visualizer',
type='SegLocalVisualizer',
vis_backends=[
dict(type='LocalVisBackend'),
dict(type='TensorboardVisBackend'),
])
work_dir = '/content/drive/MyDrive/Equipo2AI/work-dir/DeepLab'
1 change: 1 addition & 0 deletions work-dir/DeepLab/last_checkpoint
@@ -0,0 +1 @@
/content/drive/MyDrive/Equipo2AI/work-dir/DeepLab/iter_10000.pth
221 changes: 221 additions & 0 deletions work-dir/DeepLabPlus/20240605_191501/vis_data/20240605_191501.json

Large diffs are not rendered by default.

301 changes: 301 additions & 0 deletions work-dir/DeepLabPlus/20240605_191501/vis_data/config.py
@@ -0,0 +1,301 @@
crop_size = (
256,
256,
)
data_preprocessor = dict(
bgr_to_rgb=True,
mean=[
0.4082,
0.3791,
0.2815,
],
pad_val=0,
seg_pad_val=255,
size=(
256,
256,
),
std=[
0.1451,
0.1116,
0.1013,
],
type='SegDataPreProcessor')
data_root = 'data/deepglobe_ds/'
dataset_type = 'DeepGlobeDataset'
default_hooks = dict(
checkpoint=dict(by_epoch=False, interval=4000, type='CheckpointHook'),
logger=dict(interval=50, log_metric_by_epoch=False, type='LoggerHook'),
param_scheduler=dict(type='ParamSchedulerHook'),
sampler_seed=dict(type='DistSamplerSeedHook'),
timer=dict(type='IterTimerHook'),
visualization=dict(type='SegVisualizationHook'))
default_scope = 'mmseg'
env_cfg = dict(
cudnn_benchmark=True,
dist_cfg=dict(backend='nccl'),
mp_cfg=dict(mp_start_method='fork', opencv_num_threads=0))
img_ratios = [
0.5,
0.75,
1.0,
1.25,
1.5,
1.75,
]
launcher = 'none'
load_from = None
log_level = 'INFO'
log_processor = dict(by_epoch=False)
model = dict(
auxiliary_head=dict(
align_corners=False,
channels=256,
concat_input=False,
dropout_ratio=0.1,
in_channels=1024,
in_index=2,
loss_decode=dict(
loss_weight=0.4, type='CrossEntropyLoss', use_sigmoid=False),
norm_cfg=dict(requires_grad=True, type='SyncBN'),
num_classes=7,
num_convs=1,
type='FCNHead'),
backbone=dict(
contract_dilation=True,
depth=50,
dilations=(
1,
1,
2,
4,
),
norm_cfg=dict(requires_grad=True, type='SyncBN'),
norm_eval=False,
num_stages=4,
out_indices=(
0,
1,
2,
3,
),
strides=(
1,
2,
1,
1,
),
style='pytorch',
type='ResNetV1c'),
data_preprocessor=dict(
bgr_to_rgb=True,
mean=[
0.4082,
0.3791,
0.2815,
],
pad_val=0,
seg_pad_val=255,
size=(
256,
256,
),
std=[
0.1451,
0.1116,
0.1013,
],
type='SegDataPreProcessor'),
decode_head=dict(
align_corners=False,
c1_channels=48,
c1_in_channels=256,
channels=512,
dilations=(
1,
12,
24,
36,
),
dropout_ratio=0.1,
in_channels=2048,
in_index=3,
loss_decode=dict(
loss_weight=1.0, type='CrossEntropyLoss', use_sigmoid=False),
norm_cfg=dict(requires_grad=True, type='SyncBN'),
num_classes=7,
type='DepthwiseSeparableASPPHead'),
pretrained='open-mmlab://resnet50_v1c',
test_cfg=dict(mode='whole'),
train_cfg=dict(),
type='EncoderDecoder')
norm_cfg = dict(requires_grad=True, type='SyncBN')
optim_wrapper = dict(
clip_grad=None,
optimizer=dict(lr=0.01, momentum=0.9, type='SGD', weight_decay=0.0005),
type='OptimWrapper')
optimizer = dict(lr=0.01, momentum=0.9, type='SGD', weight_decay=0.0005)
param_scheduler = [
dict(
begin=0,
by_epoch=False,
end=10000,
eta_min=0.0001,
power=0.9,
type='PolyLR'),
]
resume = False
test_cfg = dict(type='TestLoop')
test_dataloader = dict(
batch_size=8,
dataset=dict(
data_prefix=dict(
img_path='img_dir/val_sat',
seg_map_path='ann_dir/val_mask_grayscale'),
data_root='data/deepglobe_ds/',
pipeline=[
dict(type='LoadImageFromFile'),
dict(keep_ratio=True, scale=(
512,
512,
), type='Resize'),
dict(type='LoadAnnotations'),
dict(type='PackSegInputs'),
],
type='DeepGlobeDataset'),
num_workers=4,
persistent_workers=True,
sampler=dict(shuffle=False, type='DefaultSampler'))
test_evaluator = dict(
iou_metrics=[
'mIoU',
], type='IoUMetric')
test_pipeline = [
dict(type='LoadImageFromFile'),
dict(keep_ratio=True, scale=(
512,
512,
), type='Resize'),
dict(type='LoadAnnotations'),
dict(type='PackSegInputs'),
]
train_cfg = dict(max_iters=10000, type='IterBasedTrainLoop', val_interval=500)
train_dataloader = dict(
batch_size=32,
dataset=dict(
data_prefix=dict(
img_path='img_dir/train_sat',
seg_map_path='ann_dir/train_mask_grayscale'),
data_root='data/deepglobe_ds/',
pipeline=[
dict(type='LoadImageFromFile'),
dict(type='LoadAnnotations'),
dict(
keep_ratio=True,
ratio_range=(
0.5,
2.0,
),
scale=(
512,
512,
),
type='RandomResize'),
dict(
cat_max_ratio=0.75, crop_size=(
256,
256,
), type='RandomCrop'),
dict(prob=0.5, type='RandomFlip'),
dict(type='PhotoMetricDistortion'),
dict(type='PackSegInputs'),
],
type='DeepGlobeDataset'),
num_workers=4,
persistent_workers=True,
sampler=dict(shuffle=True, type='InfiniteSampler'))
train_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='LoadAnnotations'),
dict(
keep_ratio=True,
ratio_range=(
0.5,
2.0,
),
scale=(
512,
512,
),
type='RandomResize'),
dict(cat_max_ratio=0.75, crop_size=(
256,
256,
), type='RandomCrop'),
dict(prob=0.5, type='RandomFlip'),
dict(type='PhotoMetricDistortion'),
dict(type='PackSegInputs'),
]
tta_model = dict(type='SegTTAModel')
tta_pipeline = [
dict(backend_args=None, type='LoadImageFromFile'),
dict(
transforms=[
[
dict(keep_ratio=True, scale_factor=0.5, type='Resize'),
dict(keep_ratio=True, scale_factor=0.75, type='Resize'),
dict(keep_ratio=True, scale_factor=1.0, type='Resize'),
dict(keep_ratio=True, scale_factor=1.25, type='Resize'),
dict(keep_ratio=True, scale_factor=1.5, type='Resize'),
dict(keep_ratio=True, scale_factor=1.75, type='Resize'),
],
[
dict(direction='horizontal', prob=0.0, type='RandomFlip'),
dict(direction='horizontal', prob=1.0, type='RandomFlip'),
],
[
dict(type='LoadAnnotations'),
],
[
dict(type='PackSegInputs'),
],
],
type='TestTimeAug'),
]
val_cfg = dict(type='ValLoop')
val_dataloader = dict(
batch_size=8,
dataset=dict(
data_prefix=dict(
img_path='img_dir/val_sat',
seg_map_path='ann_dir/val_mask_grayscale'),
data_root='data/deepglobe_ds/',
pipeline=[
dict(type='LoadImageFromFile'),
dict(keep_ratio=True, scale=(
512,
512,
), type='Resize'),
dict(type='LoadAnnotations'),
dict(type='PackSegInputs'),
],
type='DeepGlobeDataset'),
num_workers=4,
persistent_workers=True,
sampler=dict(shuffle=False, type='DefaultSampler'))
val_evaluator = dict(
iou_metrics=[
'mIoU',
], type='IoUMetric')
vis_backends = [
dict(type='LocalVisBackend'),
dict(type='TensorboardVisBackend'),
]
visualizer = dict(
name='visualizer',
type='SegLocalVisualizer',
vis_backends=[
dict(type='LocalVisBackend'),
dict(type='TensorboardVisBackend'),
])
work_dir = '/content/drive/MyDrive/Equipo2AI/DeepLabPlus_2'
Binary file not shown.
221 changes: 221 additions & 0 deletions work-dir/DeepLabPlus/20240605_191501/vis_data/scalars.json

Large diffs are not rendered by default.

@@ -0,0 +1,301 @@
crop_size = (
256,
256,
)
data_preprocessor = dict(
bgr_to_rgb=True,
mean=[
0.4082,
0.3791,
0.2815,
],
pad_val=0,
seg_pad_val=255,
size=(
256,
256,
),
std=[
0.1451,
0.1116,
0.1013,
],
type='SegDataPreProcessor')
data_root = 'data/deepglobe_ds/'
dataset_type = 'DeepGlobeDataset'
default_hooks = dict(
checkpoint=dict(by_epoch=False, interval=4000, type='CheckpointHook'),
logger=dict(interval=50, log_metric_by_epoch=False, type='LoggerHook'),
param_scheduler=dict(type='ParamSchedulerHook'),
sampler_seed=dict(type='DistSamplerSeedHook'),
timer=dict(type='IterTimerHook'),
visualization=dict(type='SegVisualizationHook'))
default_scope = 'mmseg'
env_cfg = dict(
cudnn_benchmark=True,
dist_cfg=dict(backend='nccl'),
mp_cfg=dict(mp_start_method='fork', opencv_num_threads=0))
img_ratios = [
0.5,
0.75,
1.0,
1.25,
1.5,
1.75,
]
launcher = 'none'
load_from = None
log_level = 'INFO'
log_processor = dict(by_epoch=False)
model = dict(
auxiliary_head=dict(
align_corners=False,
channels=256,
concat_input=False,
dropout_ratio=0.1,
in_channels=1024,
in_index=2,
loss_decode=dict(
loss_weight=0.4, type='CrossEntropyLoss', use_sigmoid=False),
norm_cfg=dict(requires_grad=True, type='SyncBN'),
num_classes=7,
num_convs=1,
type='FCNHead'),
backbone=dict(
contract_dilation=True,
depth=50,
dilations=(
1,
1,
2,
4,
),
norm_cfg=dict(requires_grad=True, type='SyncBN'),
norm_eval=False,
num_stages=4,
out_indices=(
0,
1,
2,
3,
),
strides=(
1,
2,
1,
1,
),
style='pytorch',
type='ResNetV1c'),
data_preprocessor=dict(
bgr_to_rgb=True,
mean=[
0.4082,
0.3791,
0.2815,
],
pad_val=0,
seg_pad_val=255,
size=(
256,
256,
),
std=[
0.1451,
0.1116,
0.1013,
],
type='SegDataPreProcessor'),
decode_head=dict(
align_corners=False,
c1_channels=48,
c1_in_channels=256,
channels=512,
dilations=(
1,
12,
24,
36,
),
dropout_ratio=0.1,
in_channels=2048,
in_index=3,
loss_decode=dict(
loss_weight=1.0, type='CrossEntropyLoss', use_sigmoid=False),
norm_cfg=dict(requires_grad=True, type='SyncBN'),
num_classes=7,
type='DepthwiseSeparableASPPHead'),
pretrained='open-mmlab://resnet50_v1c',
test_cfg=dict(mode='whole'),
train_cfg=dict(),
type='EncoderDecoder')
norm_cfg = dict(requires_grad=True, type='SyncBN')
optim_wrapper = dict(
clip_grad=None,
optimizer=dict(lr=0.01, momentum=0.9, type='SGD', weight_decay=0.0005),
type='OptimWrapper')
optimizer = dict(lr=0.01, momentum=0.9, type='SGD', weight_decay=0.0005)
param_scheduler = [
dict(
begin=0,
by_epoch=False,
end=10000,
eta_min=0.0001,
power=0.9,
type='PolyLR'),
]
resume = False
test_cfg = dict(type='TestLoop')
test_dataloader = dict(
batch_size=8,
dataset=dict(
data_prefix=dict(
img_path='img_dir/val_sat',
seg_map_path='ann_dir/val_mask_grayscale'),
data_root='data/deepglobe_ds/',
pipeline=[
dict(type='LoadImageFromFile'),
dict(keep_ratio=True, scale=(
512,
512,
), type='Resize'),
dict(type='LoadAnnotations'),
dict(type='PackSegInputs'),
],
type='DeepGlobeDataset'),
num_workers=4,
persistent_workers=True,
sampler=dict(shuffle=False, type='DefaultSampler'))
test_evaluator = dict(
iou_metrics=[
'mIoU',
], type='IoUMetric')
test_pipeline = [
dict(type='LoadImageFromFile'),
dict(keep_ratio=True, scale=(
512,
512,
), type='Resize'),
dict(type='LoadAnnotations'),
dict(type='PackSegInputs'),
]
train_cfg = dict(max_iters=10000, type='IterBasedTrainLoop', val_interval=500)
train_dataloader = dict(
batch_size=32,
dataset=dict(
data_prefix=dict(
img_path='img_dir/train_sat',
seg_map_path='ann_dir/train_mask_grayscale'),
data_root='data/deepglobe_ds/',
pipeline=[
dict(type='LoadImageFromFile'),
dict(type='LoadAnnotations'),
dict(
keep_ratio=True,
ratio_range=(
0.5,
2.0,
),
scale=(
512,
512,
),
type='RandomResize'),
dict(
cat_max_ratio=0.75, crop_size=(
256,
256,
), type='RandomCrop'),
dict(prob=0.5, type='RandomFlip'),
dict(type='PhotoMetricDistortion'),
dict(type='PackSegInputs'),
],
type='DeepGlobeDataset'),
num_workers=4,
persistent_workers=True,
sampler=dict(shuffle=True, type='InfiniteSampler'))
train_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='LoadAnnotations'),
dict(
keep_ratio=True,
ratio_range=(
0.5,
2.0,
),
scale=(
512,
512,
),
type='RandomResize'),
dict(cat_max_ratio=0.75, crop_size=(
256,
256,
), type='RandomCrop'),
dict(prob=0.5, type='RandomFlip'),
dict(type='PhotoMetricDistortion'),
dict(type='PackSegInputs'),
]
tta_model = dict(type='SegTTAModel')
tta_pipeline = [
dict(backend_args=None, type='LoadImageFromFile'),
dict(
transforms=[
[
dict(keep_ratio=True, scale_factor=0.5, type='Resize'),
dict(keep_ratio=True, scale_factor=0.75, type='Resize'),
dict(keep_ratio=True, scale_factor=1.0, type='Resize'),
dict(keep_ratio=True, scale_factor=1.25, type='Resize'),
dict(keep_ratio=True, scale_factor=1.5, type='Resize'),
dict(keep_ratio=True, scale_factor=1.75, type='Resize'),
],
[
dict(direction='horizontal', prob=0.0, type='RandomFlip'),
dict(direction='horizontal', prob=1.0, type='RandomFlip'),
],
[
dict(type='LoadAnnotations'),
],
[
dict(type='PackSegInputs'),
],
],
type='TestTimeAug'),
]
val_cfg = dict(type='ValLoop')
val_dataloader = dict(
batch_size=8,
dataset=dict(
data_prefix=dict(
img_path='img_dir/val_sat',
seg_map_path='ann_dir/val_mask_grayscale'),
data_root='data/deepglobe_ds/',
pipeline=[
dict(type='LoadImageFromFile'),
dict(keep_ratio=True, scale=(
512,
512,
), type='Resize'),
dict(type='LoadAnnotations'),
dict(type='PackSegInputs'),
],
type='DeepGlobeDataset'),
num_workers=4,
persistent_workers=True,
sampler=dict(shuffle=False, type='DefaultSampler'))
val_evaluator = dict(
iou_metrics=[
'mIoU',
], type='IoUMetric')
vis_backends = [
dict(type='LocalVisBackend'),
dict(type='TensorboardVisBackend'),
]
visualizer = dict(
name='visualizer',
type='SegLocalVisualizer',
vis_backends=[
dict(type='LocalVisBackend'),
dict(type='TensorboardVisBackend'),
])
work_dir = '/content/drive/MyDrive/Equipo2AI/DeepLabPlus_2'
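The `param_scheduler` in the config above decays the learning rate polynomially with `power=0.9` from the base `lr=0.01` down to `eta_min=0.0001` over `end=10000` iterations. A pure-Python sketch of that curve (the formula is my reading of a standard poly schedule, assumed to match mmengine's `PolyLR`):

```python
def poly_lr(step, base_lr=0.01, eta_min=0.0001, end=10000, power=0.9):
    """Polynomial decay: base_lr at step 0, eta_min at step `end`."""
    frac = min(step, end) / end
    return eta_min + (base_lr - eta_min) * (1 - frac) ** power

# LR at the start, midpoint, and end of the 10k-iteration run.
print(poly_lr(0), poly_lr(5000), poly_lr(10000))
```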
211 changes: 211 additions & 0 deletions work-dir/HRNet/20240605_163912/vis_data/20240605_163912.json

Large diffs are not rendered by default.

321 changes: 321 additions & 0 deletions work-dir/HRNet/20240605_163912/vis_data/config.py
@@ -0,0 +1,321 @@
crop_size = (
256,
256,
)
data_preprocessor = dict(
bgr_to_rgb=True,
mean=[
0.4082,
0.3791,
0.2815,
],
pad_val=0,
seg_pad_val=255,
size=(
256,
256,
),
std=[
0.1351,
0.1022,
0.0931,
],
type='SegDataPreProcessor')
data_root = 'data/deepglobe_ds/'
dataset_type = 'DeepGlobeDataset'
default_hooks = dict(
checkpoint=dict(by_epoch=False, interval=4000, type='CheckpointHook'),
logger=dict(interval=50, log_metric_by_epoch=False, type='LoggerHook'),
param_scheduler=dict(type='ParamSchedulerHook'),
sampler_seed=dict(type='DistSamplerSeedHook'),
timer=dict(type='IterTimerHook'),
visualization=dict(type='SegVisualizationHook'))
default_scope = 'mmseg'
env_cfg = dict(
cudnn_benchmark=True,
dist_cfg=dict(backend='nccl'),
mp_cfg=dict(mp_start_method='fork', opencv_num_threads=0))
img_ratios = [
0.5,
0.75,
1.0,
1.25,
1.5,
1.75,
]
launcher = 'none'
load_from = None
log_level = 'INFO'
log_processor = dict(by_epoch=False)
model = dict(
backbone=dict(
extra=dict(
stage1=dict(
block='BOTTLENECK',
num_blocks=(4, ),
num_branches=1,
num_channels=(64, ),
num_modules=1),
stage2=dict(
block='BASIC',
num_blocks=(
4,
4,
),
num_branches=2,
num_channels=(
18,
36,
),
num_modules=1),
stage3=dict(
block='BASIC',
num_blocks=(
4,
4,
4,
),
num_branches=3,
num_channels=(
18,
36,
72,
),
num_modules=4),
stage4=dict(
block='BASIC',
num_blocks=(
4,
4,
4,
4,
),
num_branches=4,
num_channels=(
18,
36,
72,
144,
),
num_modules=3)),
norm_cfg=dict(requires_grad=True, type='SyncBN'),
norm_eval=False,
type='HRNet'),
data_preprocessor=dict(
bgr_to_rgb=True,
mean=[
0.4082,
0.3791,
0.2815,
],
pad_val=0,
seg_pad_val=255,
size=(
256,
256,
),
std=[
0.1351,
0.1022,
0.0931,
],
type='SegDataPreProcessor'),
decode_head=dict(
align_corners=False,
channels=270,
concat_input=False,
dropout_ratio=-1,
in_channels=[
18,
36,
72,
144,
],
in_index=(
0,
1,
2,
3,
),
input_transform='resize_concat',
kernel_size=1,
loss_decode=dict(
loss_weight=1.0, type='CrossEntropyLoss', use_sigmoid=False),
norm_cfg=dict(requires_grad=True, type='SyncBN'),
num_classes=7,
num_convs=1,
type='FCNHead'),
pretrained='open-mmlab://msra/hrnetv2_w18',
test_cfg=dict(mode='whole'),
train_cfg=dict(),
type='EncoderDecoder')
norm_cfg = dict(requires_grad=True, type='SyncBN')
optim_wrapper = dict(
clip_grad=None,
optimizer=dict(lr=0.01, momentum=0.9, type='SGD', weight_decay=0.0005),
type='OptimWrapper')
optimizer = dict(lr=0.01, momentum=0.9, type='SGD', weight_decay=0.0005)
param_scheduler = [
dict(
begin=0,
by_epoch=False,
end=10000,
eta_min=0.0001,
power=0.9,
type='PolyLR'),
]
resume = False
test_cfg = dict(type='TestLoop')
test_dataloader = dict(
batch_size=16,
dataset=dict(
data_prefix=dict(
img_path='img_dir/val_sat',
seg_map_path='ann_dir/val_mask_grayscale'),
data_root='data/deepglobe_ds/',
pipeline=[
dict(type='LoadImageFromFile'),
dict(keep_ratio=True, scale=(
512,
512,
), type='Resize'),
dict(type='LoadAnnotations'),
dict(type='PackSegInputs'),
],
type='DeepGlobeDataset'),
num_workers=4,
persistent_workers=True,
sampler=dict(shuffle=False, type='DefaultSampler'))
test_evaluator = dict(
iou_metrics=[
'mIoU',
], type='IoUMetric')
test_pipeline = [
dict(type='LoadImageFromFile'),
dict(keep_ratio=True, scale=(
512,
512,
), type='Resize'),
dict(type='LoadAnnotations'),
dict(type='PackSegInputs'),
]
train_cfg = dict(max_iters=10000, type='IterBasedTrainLoop', val_interval=1000)
train_dataloader = dict(
batch_size=32,
dataset=dict(
data_prefix=dict(
img_path='img_dir/train_sat',
seg_map_path='ann_dir/train_mask_grayscale'),
data_root='data/deepglobe_ds/',
pipeline=[
dict(type='LoadImageFromFile'),
dict(type='LoadAnnotations'),
dict(
keep_ratio=True,
ratio_range=(
0.5,
2.0,
),
scale=(
512,
512,
),
type='RandomResize'),
dict(
cat_max_ratio=0.75, crop_size=(
256,
256,
), type='RandomCrop'),
dict(prob=0.5, type='RandomFlip'),
dict(type='PhotoMetricDistortion'),
dict(type='PackSegInputs'),
],
type='DeepGlobeDataset'),
num_workers=4,
persistent_workers=True,
sampler=dict(shuffle=True, type='InfiniteSampler'))
train_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='LoadAnnotations'),
dict(
keep_ratio=True,
ratio_range=(
0.5,
2.0,
),
scale=(
512,
512,
),
type='RandomResize'),
dict(cat_max_ratio=0.75, crop_size=(
256,
256,
), type='RandomCrop'),
dict(prob=0.5, type='RandomFlip'),
dict(type='PhotoMetricDistortion'),
dict(type='PackSegInputs'),
]
tta_model = dict(type='SegTTAModel')
tta_pipeline = [
dict(backend_args=None, type='LoadImageFromFile'),
dict(
transforms=[
[
dict(keep_ratio=True, scale_factor=0.5, type='Resize'),
dict(keep_ratio=True, scale_factor=0.75, type='Resize'),
dict(keep_ratio=True, scale_factor=1.0, type='Resize'),
dict(keep_ratio=True, scale_factor=1.25, type='Resize'),
dict(keep_ratio=True, scale_factor=1.5, type='Resize'),
dict(keep_ratio=True, scale_factor=1.75, type='Resize'),
],
[
dict(direction='horizontal', prob=0.0, type='RandomFlip'),
dict(direction='horizontal', prob=1.0, type='RandomFlip'),
],
[
dict(type='LoadAnnotations'),
],
[
dict(type='PackSegInputs'),
],
],
type='TestTimeAug'),
]
val_cfg = dict(type='ValLoop')
val_dataloader = dict(
batch_size=16,
dataset=dict(
data_prefix=dict(
img_path='img_dir/val_sat',
seg_map_path='ann_dir/val_mask_grayscale'),
data_root='data/deepglobe_ds/',
pipeline=[
dict(type='LoadImageFromFile'),
dict(keep_ratio=True, scale=(
512,
512,
), type='Resize'),
dict(type='LoadAnnotations'),
dict(type='PackSegInputs'),
],
type='DeepGlobeDataset'),
num_workers=4,
persistent_workers=True,
sampler=dict(shuffle=False, type='DefaultSampler'))
val_evaluator = dict(
iou_metrics=[
'mIoU',
], type='IoUMetric')
vis_backends = [
dict(type='LocalVisBackend'),
dict(type='TensorboardVisBackend'),
]
visualizer = dict(
name='visualizer',
type='SegLocalVisualizer',
vis_backends=[
dict(type='LocalVisBackend'),
dict(type='TensorboardVisBackend'),
])
work_dir = '/content/drive/MyDrive/Equipo2AI/work-dir'
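The `tta_pipeline` above enumerates six resize scales crossed with two horizontal-flip states. A minimal numpy sketch of the flip half of that test-time augmentation — the average-after-unflipping logic is my assumption about what `SegTTAModel` effectively computes, not a copy of its implementation:

```python
import numpy as np

def flip_tta(logits_fn, image):
    """Average class logits over the identity and horizontal-flip views.

    logits_fn maps an (H, W, C) image to (num_classes, H, W) logits.
    The flipped prediction is flipped back before averaging so that
    each pixel's logits line up with the unflipped prediction.
    """
    plain = logits_fn(image)
    flipped = logits_fn(image[:, ::-1])[:, :, ::-1]  # predict, then un-flip
    return (plain + flipped) / 2

# Toy "model": one logit channel per input channel (identity mapping).
rng = np.random.default_rng(1)
img = rng.random((8, 8, 3))
fn = lambda x: np.transpose(x, (2, 0, 1))
out = flip_tta(fn, img)
```

Because the toy model is flip-equivariant, the averaged output equals the plain prediction; for a real segmentor the two views differ and averaging smooths the logits.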
Binary file not shown.
211 changes: 211 additions & 0 deletions work-dir/HRNet/20240605_163912/vis_data/scalars.json

Large diffs are not rendered by default.

319 changes: 319 additions & 0 deletions work-dir/HRNet/fcn_hr18_4xb2-40k_deepglobe-512x1024.py
@@ -0,0 +1,319 @@
crop_size = (
256,
256,
)
data_preprocessor = dict(
bgr_to_rgb=True,
mean=[
0.4082,
0.3791,
0.2815,
],
pad_val=0,
seg_pad_val=255,
size=(
256,
256,
),
std=[
0.1351,
0.1022,
0.0931,
],
type='SegDataPreProcessor')
data_root = 'data/deepglobe_ds/'
dataset_type = 'DeepGlobeDataset'
default_hooks = dict(
checkpoint=dict(by_epoch=False, interval=4000, type='CheckpointHook'),
logger=dict(interval=50, log_metric_by_epoch=False, type='LoggerHook'),
param_scheduler=dict(type='ParamSchedulerHook'),
sampler_seed=dict(type='DistSamplerSeedHook'),
timer=dict(type='IterTimerHook'),
visualization=dict(type='SegVisualizationHook'))
default_scope = 'mmseg'
env_cfg = dict(
cudnn_benchmark=True,
dist_cfg=dict(backend='nccl'),
mp_cfg=dict(mp_start_method='fork', opencv_num_threads=0))
img_ratios = [
0.5,
0.75,
1.0,
1.25,
1.5,
1.75,
]
launcher = 'none'
load_from = None
log_level = 'INFO'
log_processor = dict(by_epoch=False)
model = dict(
backbone=dict(
extra=dict(
stage1=dict(
block='BOTTLENECK',
num_blocks=(4, ),
num_branches=1,
num_channels=(64, ),
num_modules=1),
stage2=dict(
block='BASIC',
num_blocks=(
4,
4,
),
num_branches=2,
num_channels=(
18,
36,
),
num_modules=1),
stage3=dict(
block='BASIC',
num_blocks=(
4,
4,
4,
),
num_branches=3,
num_channels=(
18,
36,
72,
),
num_modules=4),
stage4=dict(
block='BASIC',
num_blocks=(
4,
4,
4,
4,
),
num_branches=4,
num_channels=(
18,
36,
72,
144,
),
num_modules=3)),
norm_cfg=dict(requires_grad=True, type='SyncBN'),
norm_eval=False,
type='HRNet'),
data_preprocessor=dict(
bgr_to_rgb=True,
mean=[
0.4082,
0.3791,
0.2815,
],
pad_val=0,
seg_pad_val=255,
size=(
256,
256,
),
std=[
0.1351,
0.1022,
0.0931,
],
type='SegDataPreProcessor'),
decode_head=dict(
align_corners=False,
channels=270,
concat_input=False,
dropout_ratio=-1,
in_channels=[
18,
36,
72,
144,
],
in_index=(
0,
1,
2,
3,
),
input_transform='resize_concat',
kernel_size=1,
loss_decode=dict(
loss_weight=1.0, type='CrossEntropyLoss', use_sigmoid=False),
norm_cfg=dict(requires_grad=True, type='SyncBN'),
num_classes=7,
num_convs=1,
type='FCNHead'),
pretrained='open-mmlab://msra/hrnetv2_w18',
test_cfg=dict(mode='whole'),
train_cfg=dict(),
type='EncoderDecoder')
norm_cfg = dict(requires_grad=True, type='SyncBN')
optim_wrapper = dict(
clip_grad=None,
optimizer=dict(lr=0.01, momentum=0.9, type='SGD', weight_decay=0.0005),
type='OptimWrapper')
optimizer = dict(lr=0.01, momentum=0.9, type='SGD', weight_decay=0.0005)
param_scheduler = [
dict(
begin=0,
by_epoch=False,
end=10000,
eta_min=0.0001,
power=0.9,
type='PolyLR'),
]
resume = False
test_cfg = dict(type='TestLoop')
test_dataloader = dict(
batch_size=16,
dataset=dict(
data_prefix=dict(
img_path='img_dir/val_sat',
seg_map_path='ann_dir/val_mask_grayscale'),
data_root='data/deepglobe_ds/',
pipeline=[
dict(type='LoadImageFromFile'),
dict(keep_ratio=True, scale=(
512,
512,
), type='Resize'),
dict(type='LoadAnnotations'),
dict(type='PackSegInputs'),
],
type='DeepGlobeDataset'),
num_workers=4,
persistent_workers=True,
sampler=dict(shuffle=False, type='DefaultSampler'))
test_evaluator = dict(
iou_metrics=[
'mIoU',
], type='IoUMetric')
test_pipeline = [
dict(type='LoadImageFromFile'),
dict(keep_ratio=True, scale=(
512,
512,
), type='Resize'),
dict(type='LoadAnnotations'),
dict(type='PackSegInputs'),
]
train_cfg = dict(max_iters=10000, type='IterBasedTrainLoop', val_interval=1000)
train_dataloader = dict(
batch_size=32,
dataset=dict(
data_prefix=dict(
img_path='img_dir/train_sat',
seg_map_path='ann_dir/train_mask_grayscale'),
data_root='data/deepglobe_ds/',
pipeline=[
dict(type='LoadImageFromFile'),
dict(type='LoadAnnotations'),
dict(
keep_ratio=True,
ratio_range=(
0.5,
2.0,
),
scale=(
512,
512,
),
type='RandomResize'),
dict(
cat_max_ratio=0.75, crop_size=(
256,
256,
), type='RandomCrop'),
dict(prob=0.5, type='RandomFlip'),
dict(type='PhotoMetricDistortion'),
dict(type='PackSegInputs'),
],
type='DeepGlobeDataset'),
num_workers=4,
persistent_workers=True,
sampler=dict(shuffle=True, type='InfiniteSampler'))
train_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='LoadAnnotations'),
dict(
keep_ratio=True,
ratio_range=(
0.5,
2.0,
),
scale=(
512,
512,
),
type='RandomResize'),
dict(cat_max_ratio=0.75, crop_size=(
256,
256,
), type='RandomCrop'),
dict(prob=0.5, type='RandomFlip'),
dict(type='PhotoMetricDistortion'),
dict(type='PackSegInputs'),
]
tta_model = dict(type='SegTTAModel')
tta_pipeline = [
dict(backend_args=None, type='LoadImageFromFile'),
dict(
transforms=[
[
dict(keep_ratio=True, scale_factor=0.5, type='Resize'),
dict(keep_ratio=True, scale_factor=0.75, type='Resize'),
dict(keep_ratio=True, scale_factor=1.0, type='Resize'),
dict(keep_ratio=True, scale_factor=1.25, type='Resize'),
dict(keep_ratio=True, scale_factor=1.5, type='Resize'),
dict(keep_ratio=True, scale_factor=1.75, type='Resize'),
],
[
dict(direction='horizontal', prob=0.0, type='RandomFlip'),
dict(direction='horizontal', prob=1.0, type='RandomFlip'),
],
[
dict(type='LoadAnnotations'),
],
[
dict(type='PackSegInputs'),
],
],
type='TestTimeAug'),
]
val_cfg = dict(type='ValLoop')
val_dataloader = dict(
batch_size=16,
dataset=dict(
data_prefix=dict(
img_path='img_dir/val_sat',
seg_map_path='ann_dir/val_mask_grayscale'),
data_root='data/deepglobe_ds/',
pipeline=[
dict(type='LoadImageFromFile'),
dict(keep_ratio=True, scale=(
512,
512,
), type='Resize'),
dict(type='LoadAnnotations'),
dict(type='PackSegInputs'),
],
type='DeepGlobeDataset'),
num_workers=4,
persistent_workers=True,
sampler=dict(shuffle=False, type='DefaultSampler'))
val_evaluator = dict(
iou_metrics=[
'mIoU',
], type='IoUMetric')
vis_backends = [
dict(type='LocalVisBackend'),
]
visualizer = dict(
name='visualizer',
type='SegLocalVisualizer',
vis_backends=[
dict(type='LocalVisBackend'),
])
work_dir = '/content/drive/MyDrive/Equipo2AI/work-dir'
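With `max_iters=10000`, `val_interval=1000`, and a checkpoint `interval` of 4000, the loop above validates every 1000 iterations and checkpoints at 4000 and 8000, plus the final iteration (mmengine's `CheckpointHook` saves the last iteration by default, which is consistent with the `iter_10000.pth` path in `last_checkpoint` below). A small sketch of that schedule:

```python
def schedule(max_iters=10000, val_interval=1000, ckpt_interval=4000,
             save_last=True):
    """Iterations at which validation and checkpointing fire."""
    vals = list(range(val_interval, max_iters + 1, val_interval))
    ckpts = list(range(ckpt_interval, max_iters + 1, ckpt_interval))
    if save_last and max_iters not in ckpts:
        ckpts.append(max_iters)  # final save, e.g. iter_10000.pth
    return vals, ckpts

vals, ckpts = schedule()
print(vals)
print(ckpts)
```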
1 change: 1 addition & 0 deletions work-dir/HRNet/last_checkpoint
@@ -0,0 +1 @@
/content/drive/MyDrive/Equipo2AI/work-dir/iter_10000.pth
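The `last_checkpoint` file above is a one-line marker that mmengine's checkpoint hook keeps pointing at the newest weights; resuming amounts to reading it and feeding the path to `load_from`/`resume`. A self-contained sketch, using a temporary directory in place of the real work dir:

```python
import os
import tempfile

# Stand-in for <work_dir>/last_checkpoint as written during training.
with tempfile.TemporaryDirectory() as work_dir:
    marker = os.path.join(work_dir, "last_checkpoint")
    with open(marker, "w") as f:
        f.write("/content/drive/MyDrive/Equipo2AI/work-dir/iter_10000.pth")

    # Read back the path of the latest checkpoint to resume from.
    with open(marker) as f:
        ckpt_path = f.read().strip()

print(ckpt_path)
```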