
YOLOv5 Segmentation Dataloader Updates #2188

Merged: 213 commits, Feb 12, 2021

Commits
9c6cc85
Update C3 module
glenn-jocher Dec 16, 2020
b62f891
Update C3 module
glenn-jocher Dec 16, 2020
81d4f2e
Update C3 module
glenn-jocher Dec 16, 2020
51762f6
Update C3 module
glenn-jocher Dec 16, 2020
a6d84e1
update
glenn-jocher Dec 17, 2020
78380e6
update
glenn-jocher Dec 17, 2020
decb34f
merge master
glenn-jocher Dec 17, 2020
b61827f
update
glenn-jocher Dec 18, 2020
f79a47b
update
glenn-jocher Dec 19, 2020
71cbe4c
update
glenn-jocher Dec 19, 2020
7fad46e
update
glenn-jocher Dec 21, 2020
84f2fec
update
glenn-jocher Dec 21, 2020
9bb09fc
update
glenn-jocher Dec 21, 2020
aa8ac51
update
glenn-jocher Dec 21, 2020
1c8ce04
update
glenn-jocher Dec 21, 2020
5316dba
updates
glenn-jocher Dec 22, 2020
163a7fa
updates
glenn-jocher Dec 22, 2020
7007a51
updates
glenn-jocher Dec 22, 2020
7541a5c
updates
glenn-jocher Dec 22, 2020
b61145d
updates
glenn-jocher Dec 22, 2020
c3aaede
updates
glenn-jocher Dec 22, 2020
05632dc
updates
glenn-jocher Dec 22, 2020
0020026
updates
glenn-jocher Dec 22, 2020
834c5a9
updates
glenn-jocher Dec 22, 2020
07154cb
updates
glenn-jocher Dec 22, 2020
98e74f0
update
glenn-jocher Dec 23, 2020
efb21cf
update
glenn-jocher Dec 23, 2020
201197b
update
glenn-jocher Dec 23, 2020
5ae91fe
update
glenn-jocher Dec 23, 2020
1809003
updates
glenn-jocher Dec 23, 2020
1f4c726
updates
glenn-jocher Dec 23, 2020
c6db527
updates
glenn-jocher Dec 23, 2020
3cd90e5
updates
glenn-jocher Dec 24, 2020
afdaf21
update
glenn-jocher Dec 25, 2020
dfe78d5
update
glenn-jocher Dec 25, 2020
873ba73
update
glenn-jocher Dec 25, 2020
a864ea7
update
glenn-jocher Dec 25, 2020
6c36cb2
update
glenn-jocher Dec 25, 2020
1f07c84
update
glenn-jocher Dec 25, 2020
bfd92ee
update
glenn-jocher Dec 26, 2020
ed3b0d3
update
glenn-jocher Dec 27, 2020
4e53f3f
update
glenn-jocher Dec 27, 2020
0a16ea4
update
glenn-jocher Dec 27, 2020
8868fca
update
glenn-jocher Dec 27, 2020
b5f0e72
update
glenn-jocher Dec 28, 2020
abb5a3e
update
glenn-jocher Dec 28, 2020
d241ed9
update
glenn-jocher Dec 28, 2020
7a6e953
update
glenn-jocher Dec 28, 2020
11228d0
update
glenn-jocher Dec 28, 2020
3a225f3
update
glenn-jocher Dec 28, 2020
89f9965
update
glenn-jocher Dec 28, 2020
49abc1e
update
glenn-jocher Dec 28, 2020
4f8f445
update
glenn-jocher Dec 28, 2020
2d5d00a
update
glenn-jocher Dec 28, 2020
d9e5535
update
glenn-jocher Dec 28, 2020
e48e07f
update
glenn-jocher Dec 28, 2020
e9b5e55
update
glenn-jocher Dec 28, 2020
fff0174
update
glenn-jocher Dec 28, 2020
fb9c3c0
update
glenn-jocher Dec 28, 2020
afef0ce
update
glenn-jocher Dec 29, 2020
4acc617
update
glenn-jocher Dec 29, 2020
6c996dc
update
glenn-jocher Dec 29, 2020
3275999
update
glenn-jocher Dec 29, 2020
d4a2880
update
glenn-jocher Dec 30, 2020
52d4082
update
glenn-jocher Dec 30, 2020
318b9cb
update
glenn-jocher Dec 30, 2020
0737f83
update
glenn-jocher Dec 30, 2020
67270cd
update
glenn-jocher Dec 30, 2020
4059db7
update
glenn-jocher Dec 30, 2020
821cce5
update
glenn-jocher Dec 30, 2020
3c48bc0
update
glenn-jocher Dec 30, 2020
826b805
merge master
glenn-jocher Dec 30, 2020
5e69e1f
update
glenn-jocher Dec 30, 2020
9f7f70c
update
glenn-jocher Dec 30, 2020
fe83861
update
glenn-jocher Dec 30, 2020
914cef6
update datasets
glenn-jocher Dec 30, 2020
ef51189
update
glenn-jocher Dec 30, 2020
5a29bc3
update
glenn-jocher Dec 30, 2020
ad671cc
update
glenn-jocher Dec 31, 2020
66b8553
update attempt_downlaod()
glenn-jocher Dec 31, 2020
ac010e1
merge
glenn-jocher Dec 31, 2020
f316064
merge
glenn-jocher Dec 31, 2020
75a5bc0
update
glenn-jocher Dec 31, 2020
f29df1e
update
glenn-jocher Dec 31, 2020
8151347
update
glenn-jocher Dec 31, 2020
ddf749c
update
glenn-jocher Dec 31, 2020
cc68004
update
glenn-jocher Dec 31, 2020
fee9d79
update
glenn-jocher Dec 31, 2020
7609b23
update
glenn-jocher Dec 31, 2020
a487d5f
update
glenn-jocher Dec 31, 2020
8807a56
update
glenn-jocher Dec 31, 2020
d960574
update
glenn-jocher Dec 31, 2020
884149b
parameterize eps
glenn-jocher Jan 1, 2021
a19e111
comments
glenn-jocher Jan 2, 2021
5fee648
gs-multiple
glenn-jocher Jan 2, 2021
6314183
dependabot config
glenn-jocher Jan 2, 2021
bc9ab3a
update
glenn-jocher Jan 2, 2021
e3b4eb7
max_nms implemented
glenn-jocher Jan 2, 2021
8e9127d
Create one_cycle() function
glenn-jocher Jan 4, 2021
32927db
Create one_cycle() function
glenn-jocher Jan 4, 2021
27500f2
update
glenn-jocher Jan 5, 2021
18671c8
update
glenn-jocher Jan 5, 2021
693d4e7
update
glenn-jocher Jan 5, 2021
2a87b2b
update
glenn-jocher Jan 5, 2021
2c0af2f
update
glenn-jocher Jan 5, 2021
8bee44d
update
glenn-jocher Jan 6, 2021
4c476c4
update
glenn-jocher Jan 6, 2021
09efacc
merge
glenn-jocher Jan 8, 2021
804bcc9
update
glenn-jocher Jan 8, 2021
f89b5dd
update
glenn-jocher Jan 8, 2021
195f52a
update
glenn-jocher Jan 9, 2021
e6fa569
update
glenn-jocher Jan 9, 2021
c28b137
update
glenn-jocher Jan 9, 2021
37ae230
update
glenn-jocher Jan 9, 2021
8527084
update
glenn-jocher Jan 9, 2021
64a5979
merge
glenn-jocher Jan 9, 2021
bb79ac8
update
glenn-jocher Jan 10, 2021
0026841
update
glenn-jocher Jan 10, 2021
6f5edca
update
glenn-jocher Jan 10, 2021
6b04c8e
update
glenn-jocher Jan 10, 2021
d130465
update
glenn-jocher Jan 10, 2021
0e44515
GitHub API rate limit fix
glenn-jocher Jan 11, 2021
8964be3
merge
glenn-jocher Jan 12, 2021
1178b3e
update
glenn-jocher Jan 12, 2021
db3bf74
merge
glenn-jocher Jan 13, 2021
0df1bde
merge
glenn-jocher Jan 13, 2021
2c59301
ComputeLoss
glenn-jocher Jan 13, 2021
9bea325
ComputeLoss
glenn-jocher Jan 13, 2021
b241471
ComputeLoss
glenn-jocher Jan 13, 2021
a37ba34
ComputeLoss
glenn-jocher Jan 13, 2021
18b7853
ComputeLoss
glenn-jocher Jan 13, 2021
2faf553
ComputeLoss
glenn-jocher Jan 13, 2021
6d26e7f
ComputeLoss
glenn-jocher Jan 13, 2021
9a67af2
ComputeLoss
glenn-jocher Jan 13, 2021
e2ab588
ComputeLoss
glenn-jocher Jan 13, 2021
e22c68e
ComputeLoss
glenn-jocher Jan 13, 2021
1e6896a
ComputeLoss
glenn-jocher Jan 14, 2021
70bf7b9
merge
glenn-jocher Jan 15, 2021
86e8624
astuple
glenn-jocher Jan 15, 2021
c9bd7d1
epochs
glenn-jocher Jan 15, 2021
0f5dfc8
update
glenn-jocher Jan 15, 2021
640b7bc
update
glenn-jocher Jan 15, 2021
046ad74
merge
glenn-jocher Jan 15, 2021
9784fbb
merge
glenn-jocher Jan 15, 2021
a1e27e6
ComputeLoss()
glenn-jocher Jan 15, 2021
001cb28
Add ComputeLoss() class
glenn-jocher Jan 15, 2021
20f64f4
update
glenn-jocher Jan 15, 2021
270aab4
update
glenn-jocher Jan 15, 2021
0704da5
update
glenn-jocher Jan 16, 2021
8d94cf9
update
glenn-jocher Jan 16, 2021
fb768e6
update
glenn-jocher Jan 16, 2021
d0c901b
update
glenn-jocher Jan 16, 2021
342ad43
update
glenn-jocher Jan 16, 2021
7846e41
update
glenn-jocher Jan 16, 2021
e02d1a8
update
glenn-jocher Jan 16, 2021
8c38674
update
glenn-jocher Jan 16, 2021
51a2b79
update
glenn-jocher Jan 16, 2021
269c2d7
merge
glenn-jocher Jan 17, 2021
125cacf
merge
glenn-jocher Jan 18, 2021
3d72f58
merge
glenn-jocher Jan 18, 2021
f208117
merge
glenn-jocher Jan 18, 2021
d19e3af
merge
glenn-jocher Jan 20, 2021
bb2cf2e
merge
glenn-jocher Jan 21, 2021
73c0f7a
merge
glenn-jocher Jan 21, 2021
5b3597d
merge
glenn-jocher Jan 21, 2021
f18175b
merge
glenn-jocher Jan 21, 2021
92edf7d
merge
glenn-jocher Jan 24, 2021
5c55f8c
merge
glenn-jocher Jan 25, 2021
0376e3e
merge master
glenn-jocher Jan 27, 2021
dcd1c20
merge master
glenn-jocher Jan 27, 2021
a18654a
update
glenn-jocher Jan 27, 2021
7c2bffb
update
glenn-jocher Jan 27, 2021
ed62203
merge master
glenn-jocher Jan 28, 2021
2b058ee
update
glenn-jocher Jan 28, 2021
699e1f9
update
glenn-jocher Jan 28, 2021
fd59fe5
Update to colors.TABLEAU_COLORS
glenn-jocher Jan 29, 2021
8a9b864
commit=tag == tags[-1]
glenn-jocher Jan 29, 2021
c1705d1
merge master
glenn-jocher Jan 31, 2021
8e9b73b
Update cudnn.benchmark
glenn-jocher Feb 1, 2021
1b2733c
update
glenn-jocher Feb 1, 2021
411363a
update
glenn-jocher Feb 1, 2021
8820fd3
update
glenn-jocher Feb 1, 2021
f906698
update
glenn-jocher Feb 1, 2021
9762cdd
merge
glenn-jocher Feb 2, 2021
daba287
update
glenn-jocher Feb 5, 2021
a7fc41e
updates
glenn-jocher Feb 5, 2021
c5c1c40
updates
glenn-jocher Feb 6, 2021
24ee803
updates
glenn-jocher Feb 6, 2021
629b69a
updates
glenn-jocher Feb 6, 2021
64f1c3d
updates
glenn-jocher Feb 6, 2021
dbd0cf9
updates
glenn-jocher Feb 7, 2021
b551457
updates
glenn-jocher Feb 9, 2021
275021b
updates
glenn-jocher Feb 9, 2021
7564b60
updates
glenn-jocher Feb 9, 2021
27bc6b0
updates
glenn-jocher Feb 9, 2021
273f088
merge C6
glenn-jocher Feb 9, 2021
abfe6c5
updates
glenn-jocher Feb 9, 2021
6b29107
update
glenn-jocher Feb 11, 2021
9c5e384
merge master
glenn-jocher Feb 12, 2021
6ad18e8
update
glenn-jocher Feb 12, 2021
5a1e677
update
glenn-jocher Feb 12, 2021
0c57c47
update
glenn-jocher Feb 12, 2021
73e2971
update
glenn-jocher Feb 12, 2021
301a359
mosaic9
glenn-jocher Feb 12, 2021
ebc79d8
update
glenn-jocher Feb 12, 2021
f1c3e52
update
glenn-jocher Feb 12, 2021
73669a0
update
glenn-jocher Feb 12, 2021
90d5026
update
glenn-jocher Feb 12, 2021
d36268a
update
glenn-jocher Feb 12, 2021
ffe8a01
update
glenn-jocher Feb 12, 2021
1934fe8
institute cache versioning
glenn-jocher Feb 12, 2021
d11487e
only display on existing cache
glenn-jocher Feb 12, 2021
4ca9c31
reverse cache exists booleans
glenn-jocher Feb 12, 2021
2 changes: 1 addition & 1 deletion data/scripts/get_coco.sh
@@ -10,7 +10,7 @@
# Download/unzip labels
d='../' # unzip directory
url=https://github.com/ultralytics/yolov5/releases/download/v1.0/
-f='coco2017labels.zip' # 68 MB
+f='coco2017labels.zip' # or 'coco2017labels-segments.zip', 68 MB
echo 'Downloading' $url$f ' ...'
curl -L $url$f -o $f && unzip -q $f -d $d && rm $f & # download, unzip, remove in background

134 changes: 76 additions & 58 deletions utils/datasets.py
@@ -20,7 +20,8 @@
from torch.utils.data import Dataset
from tqdm import tqdm

-from utils.general import xyxy2xywh, xywh2xyxy, xywhn2xyxy, clean_str
+from utils.general import xyxy2xywh, xywh2xyxy, xywhn2xyxy, xyn2xy, segment2box, segments2boxes, resample_segments, \
+    clean_str
from utils.torch_utils import torch_distributed_zero_first

# Parameters
@@ -374,21 +375,23 @@ def __init__(self, path, img_size=640, batch_size=16, augment=False, hyp=None, r
        self.label_files = img2label_paths(self.img_files)  # labels
        cache_path = (p if p.is_file() else Path(self.label_files[0]).parent).with_suffix('.cache')  # cached labels
        if cache_path.is_file():
-            cache = torch.load(cache_path)  # load
-            if cache['hash'] != get_hash(self.label_files + self.img_files) or 'results' not in cache:  # changed
-                cache = self.cache_labels(cache_path, prefix)  # re-cache
+            cache, exists = torch.load(cache_path), True  # load
+            if cache['hash'] != get_hash(self.label_files + self.img_files) or 'version' not in cache:  # changed
+                cache, exists = self.cache_labels(cache_path, prefix), False  # re-cache
        else:
-            cache = self.cache_labels(cache_path, prefix)  # cache
+            cache, exists = self.cache_labels(cache_path, prefix), False  # cache

        # Display cache
-        [nf, nm, ne, nc, n] = cache.pop('results')  # found, missing, empty, corrupted, total
-        desc = f"Scanning '{cache_path}' for images and labels... {nf} found, {nm} missing, {ne} empty, {nc} corrupted"
-        tqdm(None, desc=prefix + desc, total=n, initial=n)
+        nf, nm, ne, nc, n = cache.pop('results')  # found, missing, empty, corrupted, total
+        if exists:
+            d = f"Scanning '{cache_path}' for images and labels... {nf} found, {nm} missing, {ne} empty, {nc} corrupted"
+            tqdm(None, desc=prefix + d, total=n, initial=n)  # display cache results
        assert nf > 0 or not augment, f'{prefix}No labels in {cache_path}. Can not train without labels. See {help_url}'

        # Read cache
        cache.pop('hash')  # remove hash
-        labels, shapes = zip(*cache.values())
+        cache.pop('version')  # remove version
+        labels, shapes, self.segments = zip(*cache.values())
        self.labels = list(labels)
        self.shapes = np.array(shapes, dtype=np.float64)
        self.img_files = list(cache.keys())  # update
@@ -451,14 +454,20 @@ def cache_labels(self, path=Path('./labels.cache'), prefix=''):
                im = Image.open(im_file)
                im.verify()  # PIL verify
                shape = exif_size(im)  # image size
+                segments = []  # instance segments
                assert (shape[0] > 9) & (shape[1] > 9), f'image size {shape} <10 pixels'
                assert im.format.lower() in img_formats, f'invalid image format {im.format}'

                # verify labels
                if os.path.isfile(lb_file):
                    nf += 1  # label found
                    with open(lb_file, 'r') as f:
-                        l = np.array([x.split() for x in f.read().strip().splitlines()], dtype=np.float32)  # labels
+                        l = [x.split() for x in f.read().strip().splitlines()]
+                        if any([len(x) > 8 for x in l]):  # is segment
+                            classes = np.array([x[0] for x in l], dtype=np.float32)
+                            segments = [np.array(x[1:], dtype=np.float32).reshape(-1, 2) for x in l]  # (cls, xy1...)
+                            l = np.concatenate((classes.reshape(-1, 1), segments2boxes(segments)), 1)  # (cls, xywh)
+                        l = np.array(l, dtype=np.float32)
                    if len(l):
                        assert l.shape[1] == 5, 'labels require 5 columns each'
                        assert (l >= 0).all(), 'negative labels'
@@ -470,7 +479,7 @@ def cache_labels(self, path=Path('./labels.cache'), prefix=''):
                else:
                    nm += 1  # label missing
                    l = np.zeros((0, 5), dtype=np.float32)
-                x[im_file] = [l, shape]
+                x[im_file] = [l, shape, segments]
            except Exception as e:
                nc += 1
                print(f'{prefix}WARNING: Ignoring corrupted image and/or label {im_file}: {e}')
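The segment-parsing branch above can be exercised on its own: a label row with more than 8 numbers is read as `cls x1 y1 x2 y2 ...` (a polygon), and its bounding box replaces the points. The sketch below is self-contained, so it inlines a standalone `segments2boxes` (including the `xyxy2xywh` step the PR delegates to `utils/general.py`); the sample label line is hypothetical.

```python
import numpy as np

def segments2boxes(segments):
    # Polygon segments -> xywh boxes via min/max over the points,
    # mirroring the helper this PR adds to utils/general.py.
    boxes = []
    for s in segments:
        x, y = s.T  # segment xy
        boxes.append([x.min(), y.min(), x.max(), y.max()])  # xyxy
    boxes = np.array(boxes, dtype=np.float32)
    xywh = boxes.copy()
    xywh[:, 0] = (boxes[:, 0] + boxes[:, 2]) / 2  # center x
    xywh[:, 1] = (boxes[:, 1] + boxes[:, 3]) / 2  # center y
    xywh[:, 2] = boxes[:, 2] - boxes[:, 0]        # width
    xywh[:, 3] = boxes[:, 3] - boxes[:, 1]        # height
    return xywh

# One hypothetical segment label: class 0 plus a 4-point normalized polygon
line = '0 0.1 0.2 0.5 0.2 0.5 0.8 0.1 0.8'
l = [line.split()]
if any(len(x) > 8 for x in l):  # is segment
    classes = np.array([x[0] for x in l], dtype=np.float32)
    segments = [np.array(x[1:], dtype=np.float32).reshape(-1, 2) for x in l]
    l = np.concatenate((classes.reshape(-1, 1), segments2boxes(segments)), 1)
```

The result is one `(cls, x-center, y-center, w, h)` row, so downstream box-based training code is unchanged.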
@@ -482,7 +491,8 @@ def cache_labels(self, path=Path('./labels.cache'), prefix=''):
            print(f'{prefix}WARNING: No labels found in {path}. See {help_url}')

        x['hash'] = get_hash(self.label_files + self.img_files)
-        x['results'] = [nf, nm, ne, nc, i + 1]
+        x['results'] = nf, nm, ne, nc, i + 1
+        x['version'] = 0.1  # cache version
        torch.save(x, path)  # save for next time
        logging.info(f'{prefix}New cache created: {path}')
        return x
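The cache-versioning idea introduced here (and in the `'version' not in cache` check in `__init__`) can be sketched independently: a cache is reused only if both its hash and its version match, otherwise the expensive label scan re-runs. Everything below is a hypothetical stand-in for illustration — `load_or_build` and this `get_hash` are not the repo's API, and `pickle` replaces `torch.save` so the snippet needs only the standard library plus nothing else.

```python
import hashlib
import pickle
import tempfile
from pathlib import Path

CACHE_VERSION = 0.1  # bump to invalidate every existing cache file

def get_hash(paths):
    # Stand-in for utils.general.get_hash; any value that changes
    # whenever the dataset file list changes will do.
    return hashlib.md5(''.join(sorted(paths)).encode()).hexdigest()

def load_or_build(cache_path, files, build):
    p = Path(cache_path)
    if p.is_file():
        cache = pickle.loads(p.read_bytes())
        # Re-scan when the dataset changed or the cache predates versioning
        if cache.get('version') == CACHE_VERSION and cache.get('hash') == get_hash(files):
            return cache, True  # exists: caller may display cached scan results
    cache = build(files)  # expensive label scan
    cache['hash'] = get_hash(files)
    cache['version'] = CACHE_VERSION
    p.write_bytes(pickle.dumps(cache))
    return cache, False

# First call scans and writes the cache; the second is a pure cache hit;
# changing the file list forces a re-scan.
cache_file = Path(tempfile.mkdtemp()) / 'labels.cache'
scans = []
build = lambda files: (scans.append(1), {'results': (len(files), 0, 0, 0, len(files))})[1]
_, hit1 = load_or_build(cache_file, ['a.jpg', 'b.jpg'], build)
_, hit2 = load_or_build(cache_file, ['a.jpg', 'b.jpg'], build)
_, hit3 = load_or_build(cache_file, ['a.jpg', 'c.jpg'], build)
```

The version key is what lets this PR change the cache schema (adding `segments` per image) without older `.cache` files being misread: they simply fail the version check and are rebuilt.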
@@ -652,7 +662,7 @@ def hist_equalize(img, clahe=True, bgr=False):
def load_mosaic(self, index):
    # loads images in a 4-mosaic

-    labels4 = []
+    labels4, segments4 = [], []
    s = self.img_size
    yc, xc = [int(random.uniform(-x, 2 * s + x)) for x in self.mosaic_border]  # mosaic center x, y
    indices = [index] + [self.indices[random.randint(0, self.n - 1)] for _ in range(3)]  # 3 additional image indices
@@ -680,19 +690,21 @@ def load_mosaic(self, index):
        padh = y1a - y1b

        # Labels
-        labels = self.labels[index].copy()
+        labels, segments = self.labels[index].copy(), self.segments[index].copy()
        if labels.size:
            labels[:, 1:] = xywhn2xyxy(labels[:, 1:], w, h, padw, padh)  # normalized xywh to pixel xyxy format
+            segments = [xyn2xy(x, w, h, padw, padh) for x in segments]
        labels4.append(labels)
+        segments4.extend(segments)

    # Concat/clip labels
-    if len(labels4):
-        labels4 = np.concatenate(labels4, 0)
-        np.clip(labels4[:, 1:], 0, 2 * s, out=labels4[:, 1:])  # use with random_perspective
-        # img4, labels4 = replicate(img4, labels4)  # replicate
+    labels4 = np.concatenate(labels4, 0)
+    for x in (labels4[:, 1:], *segments4):
+        np.clip(x, 0, 2 * s, out=x)  # clip when using random_perspective()
+    # img4, labels4 = replicate(img4, labels4)  # replicate

    # Augment
-    img4, labels4 = random_perspective(img4, labels4,
+    img4, labels4 = random_perspective(img4, labels4, segments4,
                                       degrees=self.hyp['degrees'],
                                       translate=self.hyp['translate'],
                                       scale=self.hyp['scale'],
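The coordinate bookkeeping in the hunk above — boxes via `xywhn2xyxy`, segment points via the new `xyn2xy`, then a shared clip to the double-size mosaic canvas — can be checked numerically. This is a minimal sketch with both helpers re-implemented standalone; the pad values simulate one hypothetical tile placement (e.g. a top-right tile offset by `s` in x).

```python
import numpy as np

def xywhn2xyxy(x, w=640, h=640, padw=0, padh=0):
    # Normalized xywh -> pixel xyxy, as in utils/general.py
    y = np.copy(x)
    y[:, 0] = w * (x[:, 0] - x[:, 2] / 2) + padw  # top left x
    y[:, 1] = h * (x[:, 1] - x[:, 3] / 2) + padh  # top left y
    y[:, 2] = w * (x[:, 0] + x[:, 2] / 2) + padw  # bottom right x
    y[:, 3] = h * (x[:, 1] + x[:, 3] / 2) + padh  # bottom right y
    return y

def xyn2xy(x, w=640, h=640, padw=0, padh=0):
    # Normalized (n,2) segment points -> pixel points, as in utils/general.py
    y = np.copy(x)
    y[:, 0] = w * x[:, 0] + padw
    y[:, 1] = h * x[:, 1] + padh
    return y

s = 640                 # mosaic tile size
padw, padh = s, 0       # hypothetical tile offset within the 2s x 2s canvas
labels = np.array([[0.5, 0.5, 0.2, 0.2]])                 # one normalized xywh box
segment = np.array([[0.4, 0.4], [0.6, 0.4], [0.5, 0.6]])  # one normalized polygon
boxes = xywhn2xyxy(labels, s, s, padw, padh)
points = xyn2xy(segment, s, s, padw, padh)
for x in (boxes, points):
    np.clip(x, 0, 2 * s, out=x)  # keep coordinates inside the mosaic canvas
```

Boxes and segment points go through the same clip loop, which is exactly the `for x in (labels4[:, 1:], *segments4)` pattern above.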
@@ -706,7 +718,7 @@ def load_mosaic(self, index):
def load_mosaic9(self, index):
    # loads images in a 9-mosaic

-    labels9 = []
+    labels9, segments9 = [], []
    s = self.img_size
    indices = [index] + [self.indices[random.randint(0, self.n - 1)] for _ in range(8)]  # 8 additional image indices
    for i, index in enumerate(indices):
@@ -739,30 +751,34 @@ def load_mosaic9(self, index):
        x1, y1, x2, y2 = [max(x, 0) for x in c]  # allocate coords

        # Labels
-        labels = self.labels[index].copy()
+        labels, segments = self.labels[index].copy(), self.segments[index].copy()
        if labels.size:
            labels[:, 1:] = xywhn2xyxy(labels[:, 1:], w, h, padx, pady)  # normalized xywh to pixel xyxy format
+            segments = [xyn2xy(x, w, h, padx, pady) for x in segments]
        labels9.append(labels)
+        segments9.extend(segments)

        # Image
        img9[y1:y2, x1:x2] = img[y1 - pady:, x1 - padx:]  # img9[ymin:ymax, xmin:xmax]
        hp, wp = h, w  # height, width previous

    # Offset
-    yc, xc = [int(random.uniform(0, s)) for x in self.mosaic_border]  # mosaic center x, y
+    yc, xc = [int(random.uniform(0, s)) for _ in self.mosaic_border]  # mosaic center x, y
    img9 = img9[yc:yc + 2 * s, xc:xc + 2 * s]

    # Concat/clip labels
-    if len(labels9):
-        labels9 = np.concatenate(labels9, 0)
-        labels9[:, [1, 3]] -= xc
-        labels9[:, [2, 4]] -= yc
+    labels9 = np.concatenate(labels9, 0)
+    labels9[:, [1, 3]] -= xc
+    labels9[:, [2, 4]] -= yc
+    c = np.array([xc, yc])  # centers
+    segments9 = [x - c for x in segments9]

-        np.clip(labels9[:, 1:], 0, 2 * s, out=labels9[:, 1:])  # use with random_perspective
-        # img9, labels9 = replicate(img9, labels9)  # replicate
+    for x in (labels9[:, 1:], *segments9):
+        np.clip(x, 0, 2 * s, out=x)  # clip when using random_perspective()
+    # img9, labels9 = replicate(img9, labels9)  # replicate

    # Augment
-    img9, labels9 = random_perspective(img9, labels9,
+    img9, labels9 = random_perspective(img9, labels9, segments9,
                                       degrees=self.hyp['degrees'],
                                       translate=self.hyp['translate'],
                                       scale=self.hyp['scale'],
@@ -823,7 +839,8 @@ def letterbox(img, new_shape=(640, 640), color=(114, 114, 114), auto=True, scale
    return img, ratio, (dw, dh)


-def random_perspective(img, targets=(), degrees=10, translate=.1, scale=.1, shear=10, perspective=0.0, border=(0, 0)):
+def random_perspective(img, targets=(), segments=(), degrees=10, translate=.1, scale=.1, shear=10, perspective=0.0,
+                       border=(0, 0)):
    # torchvision.transforms.RandomAffine(degrees=(-10, 10), translate=(.1, .1), scale=(.9, 1.1), shear=(-10, 10))
    # targets = [cls, xyxy]

@@ -875,37 +892,38 @@ def random_perspective(img, targets=(), degrees=10, translate=.1, scale=.1, shea
    # Transform label coordinates
    n = len(targets)
    if n:
-        # warp points
-        xy = np.ones((n * 4, 3))
-        xy[:, :2] = targets[:, [1, 2, 3, 4, 1, 4, 3, 2]].reshape(n * 4, 2)  # x1y1, x2y2, x1y2, x2y1
-        xy = xy @ M.T  # transform
-        if perspective:
-            xy = (xy[:, :2] / xy[:, 2:3]).reshape(n, 8)  # rescale
-        else:  # affine
-            xy = xy[:, :2].reshape(n, 8)
-
-        # create new boxes
-        x = xy[:, [0, 2, 4, 6]]
-        y = xy[:, [1, 3, 5, 7]]
-        xy = np.concatenate((x.min(1), y.min(1), x.max(1), y.max(1))).reshape(4, n).T
-
-        # # apply angle-based reduction of bounding boxes
-        # radians = a * math.pi / 180
-        # reduction = max(abs(math.sin(radians)), abs(math.cos(radians))) ** 0.5
-        # x = (xy[:, 2] + xy[:, 0]) / 2
-        # y = (xy[:, 3] + xy[:, 1]) / 2
-        # w = (xy[:, 2] - xy[:, 0]) * reduction
-        # h = (xy[:, 3] - xy[:, 1]) * reduction
-        # xy = np.concatenate((x - w / 2, y - h / 2, x + w / 2, y + h / 2)).reshape(4, n).T
-
-        # clip boxes
-        xy[:, [0, 2]] = xy[:, [0, 2]].clip(0, width)
-        xy[:, [1, 3]] = xy[:, [1, 3]].clip(0, height)
+        use_segments = any(x.any() for x in segments)
+        new = np.zeros((n, 4))
+        if use_segments:  # warp segments
+            segments = resample_segments(segments)  # upsample
+            for i, segment in enumerate(segments):
+                xy = np.ones((len(segment), 3))
+                xy[:, :2] = segment
+                xy = xy @ M.T  # transform
+                xy = xy[:, :2] / xy[:, 2:3] if perspective else xy[:, :2]  # perspective rescale or affine
+
+                # clip
+                new[i] = segment2box(xy, width, height)
+
+        else:  # warp boxes
+            xy = np.ones((n * 4, 3))
+            xy[:, :2] = targets[:, [1, 2, 3, 4, 1, 4, 3, 2]].reshape(n * 4, 2)  # x1y1, x2y2, x1y2, x2y1
+            xy = xy @ M.T  # transform
+            xy = (xy[:, :2] / xy[:, 2:3] if perspective else xy[:, :2]).reshape(n, 8)  # perspective rescale or affine
+
+            # create new boxes
+            x = xy[:, [0, 2, 4, 6]]
+            y = xy[:, [1, 3, 5, 7]]
+            new = np.concatenate((x.min(1), y.min(1), x.max(1), y.max(1))).reshape(4, n).T
+
+            # clip
+            new[:, [0, 2]] = new[:, [0, 2]].clip(0, width)
+            new[:, [1, 3]] = new[:, [1, 3]].clip(0, height)

        # filter candidates
-        i = box_candidates(box1=targets[:, 1:5].T * s, box2=xy.T)
+        i = box_candidates(box1=targets[:, 1:5].T * s, box2=new.T, area_thr=0.01 if use_segments else 0.10)
        targets = targets[i]
-        targets[:, 1:5] = xy[i]
+        targets[:, 1:5] = new[i]

    return img, targets

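The key payoff of the new segment branch in `random_perspective` is that warping the polygon points and re-boxing them yields tighter boxes than warping the four box corners, because points that land outside the image are dropped before the min/max. A minimal sketch of that path, with `segment2box` re-implemented standalone (returning a flat 4-vector for simplicity) and a pure-translation 3x3 matrix chosen so the result is easy to verify by hand:

```python
import numpy as np

def segment2box(segment, width=640, height=640):
    # Keep only points inside the image, then take their bounding box,
    # mirroring the helper this PR adds to utils/general.py (simplified shape).
    x, y = segment.T
    inside = (x >= 0) & (y >= 0) & (x <= width) & (y <= height)
    x, y = x[inside], y[inside]
    return np.array([x.min(), y.min(), x.max(), y.max()]) if any(x) else np.zeros(4)

# 3x3 homogeneous transform: translate by (+10, -5), no perspective terms
M = np.array([[1., 0., 10.],
              [0., 1., -5.],
              [0., 0., 1.]])
segment = np.array([[0., 0.], [100., 0.], [100., 50.], [0., 50.]])  # a rectangle
xy = np.ones((len(segment), 3))
xy[:, :2] = segment
xy = xy @ M.T               # warp points, as in the segment branch above
xy = xy[:, :2] / xy[:, 2:3]  # perspective divide (a no-op for this affine M)
box = segment2box(xy, 640, 640)
```

The two top points end up at y = -5, outside the image, so they are discarded and the box collapses to the surviving bottom edge — behavior a corner-warp of the original box could not reproduce.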
36 changes: 35 additions & 1 deletion utils/general.py
@@ -225,7 +225,7 @@ def xywh2xyxy(x):
    return y


-def xywhn2xyxy(x, w=640, h=640, padw=32, padh=32):
+def xywhn2xyxy(x, w=640, h=640, padw=0, padh=0):
    # Convert nx4 boxes from [x, y, w, h] normalized to [x1, y1, x2, y2] where xy1=top-left, xy2=bottom-right
    y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x)
    y[:, 0] = w * (x[:, 0] - x[:, 2] / 2) + padw  # top left x
@@ -235,6 +235,40 @@ def xywhn2xyxy(x, w=640, h=640, padw=32, padh=32):
    return y


+def xyn2xy(x, w=640, h=640, padw=0, padh=0):
+    # Convert normalized segments into pixel segments, shape (n,2)
+    y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x)
+    y[:, 0] = w * x[:, 0] + padw  # top left x
+    y[:, 1] = h * x[:, 1] + padh  # top left y
+    return y
+
+
+def segment2box(segment, width=640, height=640):
+    # Convert 1 segment label to 1 box label, applying inside-image constraint, i.e. (xy1, xy2, ...) to (xyxy)
+    x, y = segment.T  # segment xy
+    inside = (x >= 0) & (y >= 0) & (x <= width) & (y <= height)
+    x, y, = x[inside], y[inside]
+    return np.array([x.min(), y.min(), x.max(), y.max()]) if any(x) else np.zeros((1, 4))  # cls, xyxy
+
+
+def segments2boxes(segments):
+    # Convert segment labels to box labels, i.e. (cls, xy1, xy2, ...) to (cls, xywh)
+    boxes = []
+    for s in segments:
+        x, y = s.T  # segment xy
+        boxes.append([x.min(), y.min(), x.max(), y.max()])  # cls, xyxy
+    return xyxy2xywh(np.array(boxes))  # cls, xywh
+
+
+def resample_segments(segments, n=1000):
+    # Up-sample an (n,2) segment
+    for i, s in enumerate(segments):
+        x = np.linspace(0, len(s) - 1, n)
+        xp = np.arange(len(s))
+        segments[i] = np.concatenate([np.interp(x, xp, s[:, i]) for i in range(2)]).reshape(2, -1).T  # segment xy
+    return segments


def scale_coords(img1_shape, coords, img0_shape, ratio_pad=None):
    # Rescale coords (xyxy) from img1_shape to img0_shape
    if ratio_pad is None:  # calculate from img0_shape
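Of the new helpers, `resample_segments` is the least obvious: it up-samples each polygon to `n` evenly spaced points by linearly interpolating each coordinate over the point index with `np.interp`, so that the warp-then-rebox step sees a dense outline rather than a few corners. A standalone sketch with a tiny `n` for readability (the inner index is renamed to `j`; Python 3 comprehension scoping makes the original's reuse of `i` harmless, but the rename is clearer):

```python
import numpy as np

def resample_segments(segments, n=1000):
    # Up-sample each (m,2) polygon to n evenly spaced points by linear
    # interpolation over the point index, as in utils/general.py.
    for i, s in enumerate(segments):
        x = np.linspace(0, len(s) - 1, n)   # fractional point indices
        xp = np.arange(len(s))              # original point indices
        segments[i] = np.concatenate(
            [np.interp(x, xp, s[:, j]) for j in range(2)]).reshape(2, -1).T
    return segments

seg = [np.array([[0., 0.], [10., 0.], [10., 10.]])]  # a 2-edge polyline
out = resample_segments(seg, n=5)[0]                 # 5 points along it
```

Note the interpolation is over point index, not arc length, so segments with unevenly spaced input points get proportionally uneven sampling — acceptable here since the result only feeds a min/max bounding-box computation.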
2 changes: 1 addition & 1 deletion utils/loss.py
@@ -105,7 +105,7 @@ def __init__(self, model, autobalance=False):
            BCEcls, BCEobj = FocalLoss(BCEcls, g), FocalLoss(BCEobj, g)

        det = model.module.model[-1] if is_parallel(model) else model.model[-1]  # Detect() module
-        self.balance = {3: [3.67, 1.0, 0.43], 4: [4.0, 1.0, 0.25, 0.06], 5: [4.0, 1.0, 0.25, 0.06, .02]}[det.nl]
+        self.balance = {3: [4.0, 1.0, 0.4], 4: [4.0, 1.0, 0.25, 0.06], 5: [4.0, 1.0, 0.25, 0.06, .02]}[det.nl]
        self.ssi = (det.stride == 16).nonzero(as_tuple=False).item()  # stride 16 index
        self.BCEcls, self.BCEobj, self.gr, self.hyp, self.autobalance = BCEcls, BCEobj, model.gr, h, autobalance
        for k in 'na', 'nc', 'nl', 'anchors':
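The one-line loss change above retunes the 3-layer objectness balance from `[3.67, 1.0, 0.43]` to `[4.0, 1.0, 0.4]`; the table is keyed by the number of detection layers `det.nl`, with the highest-resolution layer (smallest objects) weighted most. A toy sketch of how such weights combine per-layer losses — `weighted_obj_loss` and the flat per-layer loss values are hypothetical, whereas the real `ComputeLoss` accumulates BCE objectness losses layer by layer:

```python
# Objectness balance weights keyed by number of detection layers (det.nl),
# matching the table in utils/loss.py after this PR.
balance_by_nl = {3: [4.0, 1.0, 0.4],
                 4: [4.0, 1.0, 0.25, 0.06],
                 5: [4.0, 1.0, 0.25, 0.06, .02]}

def weighted_obj_loss(per_layer_losses):
    # Hypothetical helper: weight and sum per-layer objectness losses.
    balance = balance_by_nl[len(per_layer_losses)]
    return sum(b * l for b, l in zip(balance, per_layer_losses))

loss = weighted_obj_loss([0.5, 0.5, 0.5])  # e.g. equal raw loss on P3, P4, P5
```

With equal raw losses, the weighting makes the stride-8 layer dominate, which is the intended bias toward small-object recall.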