How to visualize feature mapping on YOLOV5s? #3089

Closed
Mashood3624 opened this issue May 9, 2021 · 26 comments · Fixed by #3804
Labels
question (Further information is requested) · Stale (stale and scheduled for closing soon)

Comments

@Mashood3624

❔Question

I have been searching for hours but I cannot find a single repo or issue on GitHub that explains how to visualize the feature maps of a YOLO model.

Sample: [feature map image]


Mashood3624 added the question label on May 9, 2021
@github-actions
Contributor

github-actions bot commented May 10, 2021

👋 Hello @Mashood3624, thank you for your interest in 🚀 YOLOv5! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.

If this is a 🐛 Bug Report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we cannot help you.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset images, training logs, screenshots, and a public link to online W&B logging if available.

For business inquiries or professional support requests please visit https://www.ultralytics.com or email Glenn Jocher at glenn.jocher@ultralytics.com.

Requirements

Python 3.8 or later with all requirements.txt dependencies installed, including torch>=1.7. To install run:

$ pip install -r requirements.txt

Environments

YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

  • Google Colab and Kaggle notebooks with free GPU
  • Google Cloud Deep Learning VM (see GCP Quickstart Guide)
  • Amazon Deep Learning AMI (see AWS Quickstart Guide)
  • Docker Image (see Docker Quickstart Guide)

Status

CI CPU testing

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training (train.py), testing (test.py), inference (detect.py) and export (export.py) on macOS, Windows, and Ubuntu every 24 hours and on every commit.

@glenn-jocher
Member

@Mashood3624 you should be able to visualize data anywhere in the model by inserting your own plotting routines wherever you want. The model forward method is probably a good place to start:

yolov5/models/yolo.py

Lines 72 to 118 in 525f4f8

class Model(nn.Module):
    def __init__(self, cfg='yolov5s.yaml', ch=3, nc=None, anchors=None):  # model, input channels, number of classes
        super(Model, self).__init__()
        if isinstance(cfg, dict):
            self.yaml = cfg  # model dict
        else:  # is *.yaml
            import yaml  # for torch hub
            self.yaml_file = Path(cfg).name
            with open(cfg) as f:
                self.yaml = yaml.safe_load(f)  # model dict

        # Define model
        ch = self.yaml['ch'] = self.yaml.get('ch', ch)  # input channels
        if nc and nc != self.yaml['nc']:
            logger.info(f"Overriding model.yaml nc={self.yaml['nc']} with nc={nc}")
            self.yaml['nc'] = nc  # override yaml value
        if anchors:
            logger.info(f'Overriding model.yaml anchors with anchors={anchors}')
            self.yaml['anchors'] = round(anchors)  # override yaml value
        self.model, self.save = parse_model(deepcopy(self.yaml), ch=[ch])  # model, savelist
        self.names = [str(i) for i in range(self.yaml['nc'])]  # default names
        self.inplace = self.yaml.get('inplace', True)
        # logger.info([x.shape for x in self.forward(torch.zeros(1, ch, 64, 64))])

        # Build strides, anchors
        m = self.model[-1]  # Detect()
        if isinstance(m, Detect):
            s = 256  # 2x min stride
            m.inplace = self.inplace
            m.stride = torch.tensor([s / x.shape[-2] for x in self.forward(torch.zeros(1, ch, s, s))])  # forward
            m.anchors /= m.stride.view(-1, 1, 1)
            check_anchor_order(m)
            self.stride = m.stride
            self._initialize_biases()  # only run once
            # logger.info('Strides: %s' % m.stride.tolist())

        # Init weights, biases
        initialize_weights(self)
        self.info()
        logger.info('')

    def forward(self, x, augment=False, profile=False):
        if augment:
            return self.forward_augment(x)  # augmented inference, None
        else:
            return self.forward_once(x, profile)  # single-scale inference, train
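
If you prefer not to edit yolo.py at all, a PyTorch forward hook works too. Below is a minimal sketch assuming the torch.hub YOLOv5s model; the layer index and the model.model.model attribute path are illustrative assumptions that can vary between YOLOv5 versions:

import math
import matplotlib.pyplot as plt
import torch

model = torch.hub.load('ultralytics/yolov5', 'yolov5s')  # pretrained YOLOv5s from PyTorch Hub

features = {}
def hook(module, inputs, output):
    features['maps'] = output.detach().cpu()  # (batch, channels, h, w) activation

layer = model.model.model[4]                  # illustrative backbone layer; path may differ by version
handle = layer.register_forward_hook(hook)
model(torch.zeros(1, 3, 640, 640))            # dummy forward pass to trigger the hook
handle.remove()

fmap = features['maps'][0]                    # first image in the batch
n = min(16, fmap.shape[0])                    # plot the first 16 channels
fig, ax = plt.subplots(math.ceil(n / 4), 4, tight_layout=True)
for i, a in enumerate(ax.ravel()):
    if i < n:
        a.imshow(fmap[i], cmap='gray')        # each channel is a single grayscale map
    a.axis('off')
plt.show()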

@thanhphuchuynh

> @Mashood3624 you should be able to visualize data anywhere in the model by inserting your own plotting routines wherever you want. The model forward method is probably a good place to start: [models/yolo.py lines 72 to 118, quoted above]

I tried it and it worked! thanks 👍

for ii in range(len(x[i][0].permute(3, 0, 1, 2))):
    plt.subplot(3,7,ii+1)
    b =  np.array(255*x[i][0].permute(3, 0, 1, 2)[ii, :, :, :].permute(1, 2, 0).cpu().detach().numpy()[:, :, ::-1])
    plt.imshow(cv2.cvtColor(b.astype('uint8'), cv2.COLOR_BGR2RGB))
    plt.axis("off")
plt.show()

@glenn-jocher
Member

Oh great! Though I think there may be a bug in your plotting code, as the feature maps lack any concept of RGB or BGR 'colors'. You may have confused the anchor dimension for a color dimension.

@thanhphuchuynh

After I removed the RGB/BGR conversion from my code, the result:
[image]

@glenn-jocher
Member

@thanhphuchuynh the feature maps have no concept of RGB or BGR as I said, but your images are in color. The channel dimension that inputs 3 colors is stretched to over a thousand channels eventually, but none of them are paired in 3s for color.
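
To illustrate the point, here is a minimal corrected sketch that plots each output channel as an independent grayscale map instead of grouping the anchor dimension as BGR. It assumes x[i][0] has the Detect-head layout (anchors, h, w, outputs), as implied by the permute calls in the snippet above:

import matplotlib.pyplot as plt

fmap = x[i][0].permute(3, 0, 1, 2).detach().cpu()  # (outputs, anchors, h, w) -- assumed layout
for ii in range(min(21, fmap.shape[0])):           # same 3x7 grid as the snippet above
    plt.subplot(3, 7, ii + 1)
    plt.imshow(fmap[ii, 0], cmap='gray')           # one output channel, first anchor, as grayscale
    plt.axis('off')
plt.show()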

@thanhphuchuynh

I tried again and here is the result, thanks sir <3
[image]

@glenn-jocher
Member

@thanhphuchuynh interesting!

@wanghangege

> @Mashood3624 you should be able to visualize data anywhere in the model by inserting your own plotting routines wherever you want. The model forward method is probably a good place to start: [models/yolo.py lines 72 to 118, quoted above]
>
> I tried it and it worked! thanks 👍 [plotting snippet quoted above]

Hello, I'm a novice. I'd like to ask: where should the visualization code you posted be inserted?

@thanhphuchuynh

> @Mashood3624 you should be able to visualize data anywhere in the model by inserting your own plotting routines wherever you want. The model forward method is probably a good place to start: [models/yolo.py lines 72 to 118, quoted above]
>
> I tried it and it worked! thanks 👍 [plotting snippet quoted above]
>
> Hello, I'm a novice. I'd like to ask: where should the visualization code you posted be inserted?

In the model forward method. You should add your own code there.

@GiorgioSgl

> @Mashood3624 you should be able to visualize data anywhere in the model by inserting your own plotting routines wherever you want. The model forward method is probably a good place to start: [models/yolo.py lines 72 to 118, quoted above]
>
> I tried it and it worked! thanks 👍 [plotting snippet quoted above]
>
> Hello, I'm a novice. I'd like to ask: where should the visualization code you posted be inserted?

Have you figured out how to plot the feature maps?

@wanghangege

> @Mashood3624 you should be able to visualize data anywhere in the model by inserting your own plotting routines wherever you want. The model forward method is probably a good place to start: [models/yolo.py lines 72 to 118, quoted above]
>
> I tried it and it worked! thanks 👍 [plotting snippet quoted above]
>
> Hello, I'm a novice. I'd like to ask: where should the visualization code you posted be inserted?
>
> Have you figured out how to plot the feature maps?

Yes, I understand now. You just need to add the visualization code in the forward pass.

@Mashood3624
Author

Hi, I have tried implementing feature maps from @thanhphuchuynh's repo, but they don't look right. @glenn-jocher, can you confirm whether this is correct?

[screenshot]

https://github.com/ThanhPhucHuynh/yolov5_xray/blob/a52b9934b2912840c7df6a4e60c5910aeda96e68/models/yolo.py

Here are some pictures:

[three feature map images]

@glenn-jocher
Member

@Mashood3624 that's incorrect.

Feature maps are square for 640x640 inputs and there are more than a thousand of them in a YOLOv5l output.

@glenn-jocher
Member

glenn-jocher commented Jun 28, 2021

@Mashood3624 good news 😃! Feature map visualization was added ✅ in PR #3804 by @Zigars today. This allows you to visualize feature maps from any part of the model from any function (e.g. detect.py, train.py, test.py). Feature maps are saved as *.png files in the runs/features/exp directory. To turn on feature visualization, set feature_vis=True in the model forward method and define the layer you want to visualize (the default is the SPP layer).

yolov5/models/yolo.py

Lines 158 to 160 in 20d45aa

if feature_vis and m.type == 'models.common.SPP':
    feature_visualization(x, m.type, m.i)
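
To visualize a different stage, one could swap the module-type string in that condition; for example (the models.common.C3 string is an assumption based on the module class names in models/common.py):

if feature_vis and m.type == 'models.common.C3':  # target C3 blocks instead of SPP
    feature_visualization(x, m.type, m.i)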

To receive this update:

  • Git – git pull from within your yolov5/ directory, or git clone https://github.com/ultralytics/yolov5 again
  • PyTorch Hub – force-reload with model = torch.hub.load('ultralytics/yolov5', 'yolov5s', force_reload=True)
  • Notebooks – view the updated notebooks (Colab, Kaggle)
  • Docker – sudo docker pull ultralytics/yolov5:latest to update your image

Thank you for spotting this issue and informing us of the problem. Please let us know if this update resolves the issue for you, and feel free to inform us of any other issues you discover or feature requests that come to mind. Happy trainings with YOLOv5 🚀!

[layer_8_SPP_features.png]

glenn-jocher linked a pull request on Jun 28, 2021 that will close this issue
@Mashood3624
Author

Thank you @glenn-jocher 😊

@github-actions
Contributor

github-actions bot commented Jul 29, 2021

👋 Hello, this issue has been automatically marked as stale because it has not had recent activity. Please note it will be closed if no further activity occurs.


Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!

Thank you for your contributions to YOLOv5 🚀 and Vision AI ⭐!

github-actions bot added the Stale label on Jul 29, 2021
github-actions bot closed this as completed on Aug 3, 2021
@ControllableGeneration

ControllableGeneration commented Aug 3, 2021

1. I realize that if not os.path.isdir(save_dir): os.mkdir(save_dir) needs to be added at line 432 of utils/plots.py, otherwise you get a file-or-directory-not-found error.
2. You should also call plt.close() after plt.savefig() to avoid opening too many figures.
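
For reference, a minimal sketch of the guard described in point 1, written with pathlib (the save_dir value is an assumption matching the runs/features/exp path mentioned earlier; the exact insertion point in utils/plots.py is the commenter's claim, not verified here):

from pathlib import Path

save_dir = Path('runs/features/exp')          # directory the feature maps are written to (assumed)
save_dir.mkdir(parents=True, exist_ok=True)   # create it if missing; no error if it already exists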

@glenn-jocher
Member

@ZiyuBao making directories is handled in detect.py here:

yolov5/detect.py

Lines 113 to 114 in 2d99063

visualize = increment_path(save_dir / Path(path).stem, mkdir=True) if visualize else False
pred = model(img, augment=augment, visualize=visualize)[0]

matplotlib is actually placed in a mode where it doesn't produce any windows here, so I think we can skip plt.close() calls

yolov5/utils/plots.py

Lines 20 to 22 in 2d99063

# Settings
matplotlib.rc('font', **{'size': 11})
matplotlib.use('Agg') # for writing to files only

@noreenanwar

Hi, how can we visualize the class probability map after detection?

@glenn-jocher
Member

glenn-jocher commented Jun 26, 2022

👋 Hello! Thanks for asking about feature visualization. YOLOv5 🚀 features can be visualized through all stages of the model from input to output. To visualize features from a given source run detect.py with the --visualize flag:

python detect.py --weights yolov5s.pt --source data/images/bus.jpg --visualize

An example Notebook visualizing bus.jpg features with YOLOv5s is shown below:

[Colab / Kaggle notebook links; example screenshot]

All stages are visualized by default, each with its own PNG showing the first 32 feature maps output from that stage. You can open any PNG for a closer look. For example the first 32 feature maps of the Focus() layer output are shown in stage0_Focus_features.png:

[stage0_Focus_features.png]

Feature maps may be customized by updating the feature_visualization() function in utils/plots.py:

yolov5/utils/plots.py

Lines 403 to 427 in bb5ebc2

def feature_visualization(x, module_type, stage, n=32, save_dir=Path('runs/detect/exp')):
    """
    x:              Features to be visualized
    module_type:    Module type
    stage:          Module stage within model
    n:              Maximum number of feature maps to plot
    save_dir:       Directory to save results
    """
    if 'Detect' not in module_type:
        batch, channels, height, width = x.shape  # batch, channels, height, width
        if height > 1 and width > 1:
            f = f"stage{stage}_{module_type.split('.')[-1]}_features.png"  # filename

            blocks = torch.chunk(x[0].cpu(), channels, dim=0)  # select batch index 0, block by channels
            n = min(n, channels)  # number of plots
            fig, ax = plt.subplots(math.ceil(n / 8), 8, tight_layout=True)  # 8 rows x n/8 cols
            ax = ax.ravel()
            plt.subplots_adjust(wspace=0.05, hspace=0.05)
            for i in range(n):
                ax[i].imshow(blocks[i].squeeze())  # cmap='gray'
                ax[i].axis('off')

            print(f'Saving {save_dir / f}... ({n}/{channels})')
            plt.savefig(save_dir / f, dpi=300, bbox_inches='tight')
            plt.close()
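
For example, to save up to 64 maps instead of 32, one could call the function directly on a captured activation (the tensor x, the stage number, and the save path below are illustrative assumptions, not values from this thread):

from pathlib import Path

save_dir = Path('runs/detect/exp')
save_dir.mkdir(parents=True, exist_ok=True)   # the function expects the directory to exist
# x: a (1, C, H, W) activation captured from the model
feature_visualization(x, 'models.common.SPP', 8, n=64, save_dir=save_dir)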

Good luck 🍀 and let us know if you have any other questions!

@noreenanwar

Hi, thanks for your reply.
I want to visualize only the detected objects. Is it possible to get only the class probability maps?

@glenn-jocher
Member

@noreenanwar class probability vectors are stripped in the NMS function, you can access the full vectors there:

yolov5/utils/general.py

Lines 776 to 875 in 0c13240

def non_max_suppression(prediction,
                        conf_thres=0.25,
                        iou_thres=0.45,
                        classes=None,
                        agnostic=False,
                        multi_label=False,
                        labels=(),
                        max_det=300):
    """Non-Maximum Suppression (NMS) on inference results to reject overlapping bounding boxes

    Returns:
         list of detections, on (n,6) tensor per image [xyxy, conf, cls]
    """

    bs = prediction.shape[0]  # batch size
    nc = prediction.shape[2] - 5  # number of classes
    xc = prediction[..., 4] > conf_thres  # candidates

    # Checks
    assert 0 <= conf_thres <= 1, f'Invalid Confidence threshold {conf_thres}, valid values are between 0.0 and 1.0'
    assert 0 <= iou_thres <= 1, f'Invalid IoU {iou_thres}, valid values are between 0.0 and 1.0'

    # Settings
    # min_wh = 2  # (pixels) minimum box width and height
    max_wh = 7680  # (pixels) maximum box width and height
    max_nms = 30000  # maximum number of boxes into torchvision.ops.nms()
    time_limit = 0.3 + 0.03 * bs  # seconds to quit after
    redundant = True  # require redundant detections
    multi_label &= nc > 1  # multiple labels per box (adds 0.5ms/img)
    merge = False  # use merge-NMS

    t = time.time()
    output = [torch.zeros((0, 6), device=prediction.device)] * bs
    for xi, x in enumerate(prediction):  # image index, image inference
        # Apply constraints
        # x[((x[..., 2:4] < min_wh) | (x[..., 2:4] > max_wh)).any(1), 4] = 0  # width-height
        x = x[xc[xi]]  # confidence

        # Cat apriori labels if autolabelling
        if labels and len(labels[xi]):
            lb = labels[xi]
            v = torch.zeros((len(lb), nc + 5), device=x.device)
            v[:, :4] = lb[:, 1:5]  # box
            v[:, 4] = 1.0  # conf
            v[range(len(lb)), lb[:, 0].long() + 5] = 1.0  # cls
            x = torch.cat((x, v), 0)

        # If none remain process next image
        if not x.shape[0]:
            continue

        # Compute conf
        x[:, 5:] *= x[:, 4:5]  # conf = obj_conf * cls_conf

        # Box (center x, center y, width, height) to (x1, y1, x2, y2)
        box = xywh2xyxy(x[:, :4])

        # Detections matrix nx6 (xyxy, conf, cls)
        if multi_label:
            i, j = (x[:, 5:] > conf_thres).nonzero(as_tuple=False).T
            x = torch.cat((box[i], x[i, j + 5, None], j[:, None].float()), 1)
        else:  # best class only
            conf, j = x[:, 5:].max(1, keepdim=True)
            x = torch.cat((box, conf, j.float()), 1)[conf.view(-1) > conf_thres]

        # Filter by class
        if classes is not None:
            x = x[(x[:, 5:6] == torch.tensor(classes, device=x.device)).any(1)]

        # Apply finite constraint
        # if not torch.isfinite(x).all():
        #     x = x[torch.isfinite(x).all(1)]

        # Check shape
        n = x.shape[0]  # number of boxes
        if not n:  # no boxes
            continue
        elif n > max_nms:  # excess boxes
            x = x[x[:, 4].argsort(descending=True)[:max_nms]]  # sort by confidence

        # Batched NMS
        c = x[:, 5:6] * (0 if agnostic else max_wh)  # classes
        boxes, scores = x[:, :4] + c, x[:, 4]  # boxes (offset by class), scores
        i = torchvision.ops.nms(boxes, scores, iou_thres)  # NMS
        if i.shape[0] > max_det:  # limit detections
            i = i[:max_det]
        if merge and (1 < n < 3E3):  # Merge NMS (boxes merged using weighted mean)
            # update boxes as boxes(i,4) = weights(i,n) * boxes(n,4)
            iou = box_iou(boxes[i], boxes) > iou_thres  # iou matrix
            weights = iou * scores[None]  # box weights
            x[i, :4] = torch.mm(weights, x[:, :4]).float() / weights.sum(1, keepdim=True)  # merged boxes
            if redundant:
                i = i[iou.sum(1) > 1]  # require redundancy

        output[xi] = x[i]
        if (time.time() - t) > time_limit:
            LOGGER.warning(f'WARNING: NMS time limit {time_limit:.3f}s exceeded')
            break  # time limit exceeded

    return output
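
As a sketch of that suggestion: the full (n, nc) class-probability matrix is available inside the per-image loop right after the conf = obj_conf * cls_conf step, before it is collapsed into a single (conf, cls) pair. One could stash it there for later inspection. The collector list below is a hypothetical addition, and mapping rows to final detections would also require applying the same filtering and sorting that x undergoes afterwards:

# hypothetical addition inside non_max_suppression(), per image xi,
# immediately after: x[:, 5:] *= x[:, 4:5]  # conf = obj_conf * cls_conf
all_class_probs = []                       # collector, one entry per image (define before the loop)
all_class_probs.append(x[:, 5:].clone())   # full per-box class probabilities for image xi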

@noreenanwar

Should I add the following lines to this file?

visualize = increment_path(save_dir / Path(path).stem, mkdir=True) if visualize else False
pred = model(img, augment=augment, visualize=visualize)[0]

@glenn-jocher
Member

@noreenanwar yes, you can customize the visualization output by adding those lines to the file. Make sure to also define the save_dir variable, which should hold the path where you want to save the visualizations. This will allow you to generate and save only the desired class probability maps. Let me know if you need further help with this!
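
As a minimal sketch of that setup (increment_path comes from utils/general.py; the runs/detect base path and variable names are assumptions matching the snippet quoted earlier):

from pathlib import Path
from utils.general import increment_path

save_dir = increment_path(Path('runs/detect') / 'exp', mkdir=True)  # e.g. runs/detect/exp, exp2, ...
visualize = increment_path(save_dir / Path(path).stem, mkdir=True)  # per-image feature directory
pred = model(img, augment=augment, visualize=visualize)[0]          # forward pass with visualization on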

@Muhammad-237

> @Mashood3624 you should be able to visualize data anywhere in the model by inserting your own plotting routines wherever you want. The model forward method is probably a good place to start: [models/yolo.py lines 72 to 118, quoted above]
>
> I tried it and it worked! thanks 👍 [plotting snippet quoted above]

How did you make these changes? Can you show me step by step? I don't understand what you did to generate the feature maps.
