Deep Learning and Object Detection Series Tutorial 10-300: Training Your First Faster R-CNN Model with PyTorch (Code Snippets)


@Author:Runsen

Last time we introduced the Faster R-CNN model; today we will train our first Faster R-CNN model.

This post shows how to use a Faster R-CNN model on a fruit image dataset.

The code is adapted from the PyTorch documentation tutorial and a Kaggle notebook:

  • https://pytorch.org/tutorials/intermediate/torchvision_tutorial.html
  • https://www.kaggle.com/yerramvarun/fine-tuning-faster-rcnn-using-pytorch/

This is the best Faster R-CNN tutorial I have seen so far.

Dataset source: https://www.kaggle.com/mbkinaci/fruit-images-for-object-detection

Much of the object detection boilerplate is the same across projects and would otherwise have to be written by hand. torchvision's reference code already provides it, so we clone the repository and copy the helper files into the working directory:

git clone https://github.com/pytorch/vision.git

cp vision/references/detection/utils.py ./
cp vision/references/detection/transforms.py ./
cp vision/references/detection/coco_eval.py ./
cp vision/references/detection/engine.py ./
cp vision/references/detection/coco_utils.py ./

In the downloaded dataset, the train and test folders contain matching .xml annotation files and .jpg images.
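The annotations are in PASCAL VOC XML format. As a quick sanity check, you can parse one annotation and print its objects (a minimal sketch; the file name train/apple_1.xml is just an assumed example):

from xml.etree import ElementTree as et

# hypothetical example file; substitute any .xml from the train folder
tree = et.parse('train/apple_1.xml')
root = tree.getroot()

for member in root.findall('object'):
    name = member.find('name').text
    bndbox = member.find('bndbox')
    coords = [int(bndbox.find(tag).text) for tag in ('xmin', 'ymin', 'xmax', 'ymax')]
    print(name, coords)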

import os
import numpy as np
import cv2
import torch
import matplotlib.patches as patches
import albumentations as A
from albumentations.pytorch.transforms import ToTensorV2
from matplotlib import pyplot as plt
from torch.utils.data import Dataset
from xml.etree import ElementTree as et
from torchvision import transforms as torchtrans

# helpers copied from vision/references/detection
import utils
from engine import train_one_epoch, evaluate

class FruitImagesDataset(torch.utils.data.Dataset):
    def __init__(self, files_dir, width, height, transforms=None):
        self.transforms = transforms
        self.files_dir = files_dir
        self.height = height
        self.width = width


        self.imgs = [image for image in sorted(os.listdir(files_dir))
                     if image[-4:] == '.jpg']

        # index 0 is reserved for the background class
        self.classes = ['background', 'apple', 'banana', 'orange']

    def __getitem__(self, idx):

        img_name = self.imgs[idx]
        image_path = os.path.join(self.files_dir, img_name)

        # reading the images and converting them to correct size and color
        img = cv2.imread(image_path)
        img_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB).astype(np.float32)
        img_res = cv2.resize(img_rgb, (self.width, self.height),
                             interpolation=cv2.INTER_AREA)
        # dividing by 255 to scale pixel values to [0, 1]
        img_res /= 255.0

        # annotation file
        annot_filename = img_name[:-4] + '.xml'
        annot_file_path = os.path.join(self.files_dir, annot_filename)

        boxes = []
        labels = []
        tree = et.parse(annot_file_path)
        root = tree.getroot()

        # cv2 image gives size as height x width
        wt = img.shape[1]
        ht = img.shape[0]

        # box coordinates for xml files are extracted and corrected for image size given
        for member in root.findall('object'):
            labels.append(self.classes.index(member.find('name').text))

            # bounding box
            xmin = int(member.find('bndbox').find('xmin').text)
            xmax = int(member.find('bndbox').find('xmax').text)

            ymin = int(member.find('bndbox').find('ymin').text)
            ymax = int(member.find('bndbox').find('ymax').text)

            xmin_corr = (xmin / wt) * self.width
            xmax_corr = (xmax / wt) * self.width
            ymin_corr = (ymin / ht) * self.height
            ymax_corr = (ymax / ht) * self.height

            boxes.append([xmin_corr, ymin_corr, xmax_corr, ymax_corr])

        # convert boxes into a torch.Tensor
        boxes = torch.as_tensor(boxes, dtype=torch.float32)

        # getting the areas of the boxes
        area = (boxes[:, 3] - boxes[:, 1]) * (boxes[:, 2] - boxes[:, 0])

        # suppose all instances are not crowd
        iscrowd = torch.zeros((boxes.shape[0],), dtype=torch.int64)

        labels = torch.as_tensor(labels, dtype=torch.int64)

        target = {}
        target["boxes"] = boxes
        target["labels"] = labels
        target["area"] = area
        target["iscrowd"] = iscrowd
        # image_id
        image_id = torch.tensor([idx])
        target["image_id"] = image_id

        if self.transforms:
            sample = self.transforms(image=img_res,
                                     bboxes=target['boxes'],
                                     labels=labels)

            img_res = sample['image']
            target['boxes'] = torch.Tensor(sample['bboxes'])
        return img_res, target

    def __len__(self):
        return len(self.imgs)

# convert a torch tensor back to a PIL image
def torch_to_pil(img):
    return torchtrans.ToPILImage()(img).convert('RGB')

def plot_img_bbox(img, target):
    fig, a = plt.subplots(1, 1)
    fig.set_size_inches(5, 5)
    a.imshow(img)
    for box in (target['boxes']):
        x, y, width, height = box[0], box[1], box[2] - box[0], box[3] - box[1]
        rect = patches.Rectangle((x, y),
                                 width, height,
                                 linewidth=2,
                                 edgecolor='r',
                                 facecolor='none')

        a.add_patch(rect)
    plt.show()



files_dir = '../input/fruit-images-for-object-detection/train_zip/train'
test_dir = '../input/fruit-images-for-object-detection/test_zip/test'

dataset = FruitImagesDataset(files_dir, 480, 480)

img, target = dataset[78]
print(img.shape, '\n', target)
plot_img_bbox(torch_to_pil(img), target)

Running this prints the image shape and target dictionary, and plots the image with its ground-truth boxes.

In torchvision, the Faster R-CNN predictor head is imported with from torchvision.models.detection.faster_rcnn import FastRCNNPredictor:

import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

def get_object_detection_model(num_classes):
    # load a model pre-trained on COCO (this downloads the weights)
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
    # get the number of input features for the classifier
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    # replace the pre-trained head with a new one
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    return model

Augmentation for object detection differs from ordinary image augmentation: after every transform, the bounding boxes must still be aligned with the objects.

Here we add a random horizontal flip for training; Albumentations adjusts the boxes to match.

def get_transform(train):
    if train:
        return A.Compose([
            A.HorizontalFlip(0.5),
            ToTensorV2(p=1.0)
        ], bbox_params={'format': 'pascal_voc', 'label_fields': ['labels']})
    else:
        return A.Compose([
            ToTensorV2(p=1.0)
        ], bbox_params={'format': 'pascal_voc', 'label_fields': ['labels']})
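Passing bbox_params as a plain dict works in older Albumentations releases; newer versions expect an A.BboxParams object. A minimal equivalent sketch, if your installed version complains:

def get_transform(train):
    # apply the random flip only during training
    transform_list = [A.HorizontalFlip(0.5)] if train else []
    transform_list.append(ToTensorV2(p=1.0))
    return A.Compose(
        transform_list,
        bbox_params=A.BboxParams(format='pascal_voc', label_fields=['labels']))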

Now let's prepare the datasets and data loaders for training and testing.

dataset = FruitImagesDataset(files_dir, 480, 480, transforms=get_transform(train=True))
dataset_test = FruitImagesDataset(files_dir, 480, 480, transforms=get_transform(train=False))

# split the dataset in train and test set
torch.manual_seed(1)
indices = torch.randperm(len(dataset)).tolist()

# train test split
test_split = 0.2
tsize = int(len(dataset)*test_split)
dataset = torch.utils.data.Subset(dataset, indices[:-tsize])
dataset_test = torch.utils.data.Subset(dataset_test, indices[-tsize:])

# define training and validation data loaders
data_loader = torch.utils.data.DataLoader(
    dataset, batch_size=10, shuffle=True, num_workers=4,
    collate_fn=utils.collate_fn)

data_loader_test = torch.utils.data.DataLoader(
    dataset_test, batch_size=10, shuffle=False, num_workers=4,
    collate_fn=utils.collate_fn)
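utils.collate_fn comes from the copied reference code. Detection batches cannot be stacked into a single tensor because each image has a different number of boxes, so the collate function just groups images and targets into tuples. A minimal equivalent, if you prefer not to depend on the copied file:

def collate_fn(batch):
    # keep images and targets as tuples instead of stacking tensors,
    # since each sample has a variable number of boxes
    return tuple(zip(*batch))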

Prepare the model:

# to train on gpu if selected.
device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')

num_classes = 4  # 3 fruit classes + background

# get the model using our helper function
model = get_object_detection_model(num_classes)

# move model to the right device
model.to(device)

# construct an optimizer
params = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(params, lr=0.005,
                            momentum=0.9, weight_decay=0.0005)

# and a learning rate scheduler which decreases the learning rate by
# 10x every 3 epochs
lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer,
                                               step_size=3,
                                               gamma=0.1)

# training for 10 epochs
num_epochs = 10

for epoch in range(num_epochs):
    # training for one epoch
    train_one_epoch(model, optimizer, data_loader, device, epoch, print_freq=10)
    # update the learning rate
    lr_scheduler.step()
    # evaluate on the test dataset
    evaluate(model, data_loader_test, device=device)
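After training, you will likely want to persist the fine-tuned weights; a minimal sketch (the file name fasterrcnn_fruit.pth is just an assumed example):

# save the fine-tuned weights for later inference
torch.save(model.state_dict(), 'fasterrcnn_fruit.pth')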


Torchvision provides a utility for applying non-maximum suppression (NMS) to our predictions; let's wrap it in an apply_nms function.

def apply_nms(orig_prediction, iou_thresh=0.3):
    
    # torchvision returns the indices of the bboxes to keep
    keep = torchvision.ops.nms(orig_prediction['boxes'], orig_prediction['scores'], iou_thresh)
    
    final_prediction = orig_prediction
    final_prediction['boxes'] = final_prediction['boxes'][keep]
    final_prediction['scores'] = final_prediction['scores'][keep]
    final_prediction['labels'] = final_prediction['labels'][keep]
    
    return final_prediction


Let's take one image from the test dataset and see how the model performs.

First, we compare how many bounding boxes the model predicts against the ground truth.

# pick one image from the test set
img, target = dataset_test[5]
# put the model in evaluation mode
model.eval()
with torch.no_grad():
    prediction = model([img.to(device)])[0]
    
print('predicted #boxes: ', len(prediction['labels']))
print('real #boxes: ', len(target['labels']))

predicted #boxes:  14
real #boxes:  1

Ground truth:

print('EXPECTED OUTPUT')
plot_img_bbox(torch_to_pil(img), target)

print('MODEL OUTPUT')
plot_img_bbox(torch_to_pil(img), prediction)

You can see that the model predicts many overlapping boxes for each apple. Let's apply NMS and look at the final output.

nms_prediction = apply_nms(prediction, iou_thresh=0.2)
print('NMS APPLIED MODEL OUTPUT')
plot_img_bbox(torch_to_pil(img), nms_prediction)
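Besides NMS, it is common to also drop low-confidence detections before plotting; a minimal sketch (the 0.5 threshold is an assumed value worth tuning):

def filter_by_score(prediction, score_thresh=0.5):
    # keep only detections whose confidence exceeds the threshold
    keep = prediction['scores'] >= score_thresh
    return {k: v[keep] for k, v in prediction.items()}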


The algorithm and code logic follow the best Faster R-CNN tutorial I have seen so far:

https://www.kaggle.com/yerramvarun/fine-tuning-faster-rcnn-using-pytorch/

This Faster R-CNN setup is demanding on hardware; even on the company's GPU it can run out of memory, mainly because num_workers=4 in the DataLoader spawns multiple loading processes. Reducing num_workers or the batch size lowers the memory footprint.

How to fine-tune the Faster R-CNN model and its ResNet-50 backbone, how to change the training configuration (image size, optimizer, learning rate), and how to make better use of Albumentations are all worth exploring; a sketch follows below.
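For example, a sketch of assumed tweaks (not part of the original code): torchvision forwards min_size/max_size keyword arguments to the internal GeneralizedRCNNTransform, and the optimizer can be swapped freely:

# resize inputs to roughly our 480x480 images instead of the 800/1333 defaults
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(
    pretrained=True, min_size=480, max_size=480)

# swap SGD for Adam with a smaller learning rate
params = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.Adam(params, lr=1e-4)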

Finally, here is the full Faster R-CNN network structure. Note that the box predictor reflects our four classes: cls_score has out_features=4 and bbox_pred has out_features=16 (four box coordinates per class).

FasterRCNN(
  (transform): GeneralizedRCNNTransform(
      Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
      Resize(min_size=(800,), max_size=1333, mode='bilinear')
  )
  (backbone): BackboneWithFPN(
    (body): IntermediateLayerGetter(
      (conv1): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
      (bn1): FrozenBatchNorm2d(64, eps=0.0)
      (relu): ReLU(inplace=True)
      (maxpool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
      (layer1): Sequential(
        (0): Bottleneck(
          (conv1): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn1): FrozenBatchNorm2d(64, eps=0.0)
          (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (bn2): FrozenBatchNorm2d(64, eps=0.0)
          (conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn3): FrozenBatchNorm2d(256, eps=0.0)
          (relu): ReLU(inplace=True)
          (downsample): Sequential(
            (0): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (1): FrozenBatchNorm2d(256, eps=0.0)
          )
        )
        (1): Bottleneck(
          (conv1): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn1): FrozenBatchNorm2d(64, eps=0.0)
          (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (bn2): FrozenBatchNorm2d(64, eps=0.0)
          (conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn3): FrozenBatchNorm2d(256, eps=0.0)
          (relu): ReLU(inplace=True)
        )
        (2): Bottleneck(
          (conv1): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn1): FrozenBatchNorm2d(64, eps=0.0)
          (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (bn2): FrozenBatchNorm2d(64, eps=0.0)
          (conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn3): FrozenBatchNorm2d(256, eps=0.0)
          (relu): ReLU(inplace=True)
        )
      )
      (layer2): Sequential(
        (0): Bottleneck(
          (conv1): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn1): FrozenBatchNorm2d(128, eps=0.0)
          (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
          (bn2): FrozenBatchNorm2d(128, eps=0.0)
          (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn3): FrozenBatchNorm2d(512, eps=0.0)
          (relu): ReLU(inplace=True)
          (downsample): Sequential(
            (0): Conv2d(256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False)
            (1): FrozenBatchNorm2d(512, eps=0.0)
          )
        )
        (1): Bottleneck(
          (conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn1): FrozenBatchNorm2d(128, eps=0.0)
          (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (bn2): FrozenBatchNorm2d(128, eps=0.0)
          (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn3): FrozenBatchNorm2d(512, eps=0.0)
          (relu): ReLU(inplace=True)
        )
        (2): Bottleneck(
          (conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn1): FrozenBatchNorm2d(128, eps=0.0)
          (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (bn2): FrozenBatchNorm2d(128, eps=0.0)
          (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn3): FrozenBatchNorm2d(512, eps=0.0)
          (relu): ReLU(inplace=True)
        )
        (3): Bottleneck(
          (conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn1): FrozenBatchNorm2d(128, eps=0.0)
          (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (bn2): FrozenBatchNorm2d(128, eps=0.0)
          (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn3): FrozenBatchNorm2d(512, eps=0.0)
          (relu): ReLU(inplace=True)
        )
      )
      (layer3): Sequential(
        (0): Bottleneck(
          (conv1): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn1): FrozenBatchNorm2d(256, eps=0.0)
          (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
          (bn2): FrozenBatchNorm2d(256, eps=0.0)
          (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn3): FrozenBatchNorm2d(1024, eps=0.0)
          (relu): ReLU(inplace=True)
          (downsample): Sequential(
            (0): Conv2d(512, 1024, kernel_size=(1, 1), stride=(2, 2), bias=False)
            (1): FrozenBatchNorm2d(1024, eps=0.0)
          )
        )
        (1): Bottleneck(
          (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn1): FrozenBatchNorm2d(256, eps=0.0)
          (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (bn2): FrozenBatchNorm2d(256, eps=0.0)
          (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn3): FrozenBatchNorm2d(1024, eps=0.0)
          (relu): ReLU(inplace=True)
        )
        (2): Bottleneck(
          (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn1): FrozenBatchNorm2d(256, eps=0.0)
          (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (bn2): FrozenBatchNorm2d(256, eps=0.0)
          (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn3): FrozenBatchNorm2d(1024, eps=0.0)
          (relu): ReLU(inplace=True)
        )
        (3): Bottleneck(
          (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn1): FrozenBatchNorm2d(256, eps=0.0)
          (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (bn2): FrozenBatchNorm2d(256, eps=0.0)
          (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn3): FrozenBatchNorm2d(1024, eps=0.0)
          (relu): ReLU(inplace=True)
        )
        (4): Bottleneck(
          (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn1): FrozenBatchNorm2d(256, eps=0.0)
          (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (bn2): FrozenBatchNorm2d(256, eps=0.0)
          (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn3): FrozenBatchNorm2d(1024, eps=0.0)
          (relu): ReLU(inplace=True)
        )
        (5): Bottleneck(
          (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn1): FrozenBatchNorm2d(256, eps=0.0)
          (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (bn2): FrozenBatchNorm2d(256, eps=0.0)
          (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn3): FrozenBatchNorm2d(1024, eps=0.0)
          (relu): ReLU(inplace=True)
        )
      )
      (layer4): Sequential(
        (0): Bottleneck(
          (conv1): Conv2d(1024, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn1): FrozenBatchNorm2d(512, eps=0.0)
          (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
          (bn2): FrozenBatchNorm2d(512, eps=0.0)
          (conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn3): FrozenBatchNorm2d(2048, eps=0.0)
          (relu): ReLU(inplace=True)
          (downsample): Sequential(
            (0): Conv2d(1024, 2048, kernel_size=(1, 1), stride=(2, 2), bias=False)
            (1): FrozenBatchNorm2d(2048, eps=0.0)
          )
        )
        (1): Bottleneck(
          (conv1): Conv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn1): FrozenBatchNorm2d(512, eps=0.0)
          (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (bn2): FrozenBatchNorm2d(512, eps=0.0)
          (conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn3): FrozenBatchNorm2d(2048, eps=0.0)
          (relu): ReLU(inplace=True)
        )
        (2): Bottleneck(
          (conv1): Conv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn1): FrozenBatchNorm2d(512, eps=0.0)
          (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (bn2): FrozenBatchNorm2d(512, eps=0.0)
          (conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn3): FrozenBatchNorm2d(2048, eps=0.0)
          (relu): ReLU(inplace=True)
        )
      )
    )
    (fpn): FeaturePyramidNetwork(
      (inner_blocks): ModuleList(
        (0): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1))
        (1): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1))
        (2): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1))
        (3): Conv2d(2048, 256, kernel_size=(1, 1), stride=(1, 1))
      )
      (layer_blocks): ModuleList(
        (0): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (3): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      )
      (extra_blocks): LastLevelMaxPool()
    )
  )
  (rpn): RegionProposalNetwork(
    (anchor_generator): AnchorGenerator()
    (head): RPNHead(
      (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (cls_logits): Conv2d(256, 3, kernel_size=(1, 1), stride=(1, 1))
      (bbox_pred): Conv2d(256, 12, kernel_size=(1, 1), stride=(1, 1))
    )
  )
  (roi_heads): RoIHeads(
    (box_roi_pool): MultiScaleRoIAlign(featmap_names=['0', '1', '2', '3'], output_size=(7, 7), sampling_ratio=2)
    (box_head): TwoMLPHead(
      (fc6): Linear(in_features=12544, out_features=1024, bias=True)
      (fc7): Linear(in_features=1024, out_features=1024, bias=True)
    )
    (box_predictor): FastRCNNPredictor(
      (cls_score): Linear(in_features=1024, out_features=4, bias=True)
      (bbox_pred): Linear(in_features=1024, out_features=16, bias=True)
    )
  )
)
