TensorFlow object detection with a pretrained model: using the pretrained model (code snippets)

vactor vactor     2023-01-09     437


1. Running the example

Official tutorial: https://github.com/tensorflow/models/blob/master/research/object_detection/object_detection_tutorial.ipynb. I kept running into problems and never got it working, so for now I am using code that someone else has already written.

A version of the code that works on Ubuntu: https://gitee.com/bubbleit/JianDanWuTiShiBie. Run it with Python 2; Python 3 may cause problems.

That code comes from https://gitee.com/talengu/JianDanWuTiShiBie/tree/master, with some adjustments and modifications of my own. The code is in the ODtest.py file, and the pretrained model is stored in /ssd_mobilenet_v1_coco_11_06_2017.
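If the /ssd_mobilenet_v1_coco_11_06_2017 directory is missing, here is a sketch of downloading and unpacking the model with the standard library; the download URL follows the naming used in the official tutorial and is an assumption on my part, so it may have moved:

import tarfile
import urllib.request   # on Python 2, use urllib.urlretrieve instead

MODEL_FILE = 'ssd_mobilenet_v1_coco_11_06_2017.tar.gz'
DOWNLOAD_BASE = 'http://download.tensorflow.org/models/object_detection/'   # assumed location

urllib.request.urlretrieve(DOWNLOAD_BASE + MODEL_FILE, MODEL_FILE)
with tarfile.open(MODEL_FILE) as tar:
    tar.extractall('.')   # unpacks ssd_mobilenet_v1_coco_11_06_2017/frozen_inference_graph.pb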

The original code is as follows:

import numpy as np
from matplotlib import pyplot as plt
import os
import tensorflow as tf
from PIL import Image
from utils import label_map_util
from utils import visualization_utils as vis_util

import datetime
# Suppress TensorFlow warnings
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'

detection_graph = tf.Graph()

# Load the model data ----------------------------------------------------------------------------------------------
def loading():

    with detection_graph.as_default():
        od_graph_def = tf.GraphDef()
        PATH_TO_CKPT = 'ssd_mobilenet_v1_coco_11_06_2017' + '/frozen_inference_graph.pb'
        with tf.gfile.GFile(PATH_TO_CKPT, 'rb') as fid:
            serialized_graph = fid.read()
            od_graph_def.ParseFromString(serialized_graph)
            tf.import_graph_def(od_graph_def, name='')
    return detection_graph



# Detection ---------------------------------------------------------------------------------------------------------
def load_image_into_numpy_array(image):
    (im_width, im_height) = image.size
    return np.array(image.getdata()).reshape(
        (im_height, im_width, 3)).astype(np.uint8)
# List of the strings that are used to add the correct label for each box.
PATH_TO_LABELS = os.path.join('data', 'mscoco_label_map.pbtxt')
label_map = label_map_util.load_labelmap(PATH_TO_LABELS)
categories = label_map_util.convert_label_map_to_categories(label_map, max_num_classes=90, use_display_name=True)
category_index = label_map_util.create_category_index(categories)

def Detection(image_path="images/image1.jpg"):
    loading()
    with detection_graph.as_default():
        with tf.Session(graph=detection_graph) as sess:
            # for image_path in TEST_IMAGE_PATHS:
            image = Image.open(image_path)

            # the array based representation of the image will be used later in order to prepare the
            # result image with boxes and labels on it.
            image_np = load_image_into_numpy_array(image)

            # Expand dimensions since the model expects images to have shape: [1, None, None, 3]
            image_np_expanded = np.expand_dims(image_np, axis=0)
            image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')

            # Each box represents a part of the image where a particular object was detected.
            boxes = detection_graph.get_tensor_by_name('detection_boxes:0')

            # Each score represents the level of confidence for each of the objects.
            # The score is shown on the result image, together with the class label.
            scores = detection_graph.get_tensor_by_name('detection_scores:0')
            classes = detection_graph.get_tensor_by_name('detection_classes:0')
            num_detections = detection_graph.get_tensor_by_name('num_detections:0')

            # Actual detection.
            (boxes, scores, classes, num_detections) = sess.run(
                [boxes, scores, classes, num_detections],
                feed_dict={image_tensor: image_np_expanded})

            # Visualization of the detection results: draw the boxes and labels on the image.
            vis_util.visualize_boxes_and_labels_on_image_array(
                 image_np,
                 np.squeeze(boxes),
                 np.squeeze(classes).astype(np.int32),
                 np.squeeze(scores),
                 category_index,
                 use_normalized_coordinates=True,
                 line_thickness=8)
            # Print the top three results
            for i in range(3):
                if classes[0][i] in category_index.keys():
                    class_name = category_index[classes[0][i]]['name']
                else:
                    class_name = 'N/A'
                print("object: %s  probability: %s" % (class_name, scores[0][i]))
                
            # Show the image with matplotlib
            # Size, in inches, of the output images.
            IMAGE_SIZE = (20, 12)
            plt.figure(figsize=IMAGE_SIZE)
            plt.imshow(image_np)
            plt.show()



# Run
Detection()

After git cloning it locally, running it produced a few errors.

Problem 1

Error message: UnicodeDecodeError: 'ascii' codec can't decode byte 0xe5 in position 1: ordinal not in range(128)

Solution (reference: https://www.cnblogs.com/QuLory/p/3615584.html):

The key error is the Unicode decoding failure on the last line above. According to what I found online, the file is being read with the default ascii codec instead of utf8, which causes the error.

Adding the following lines to the code fixes it:

import sys
reload(sys)                      # Python 2 only
sys.setdefaultencoding('utf8')
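Note that reload(sys) and sys.setdefaultencoding only exist in Python 2. If the same UnicodeDecodeError appears under Python 3, the usual alternative is to read the offending text file with an explicit encoding; a minimal sketch, using the label map file purely as an example:

import io

# read with an explicit encoding instead of relying on the platform default
with io.open('data/mscoco_label_map.pbtxt', 'r', encoding='utf-8') as f:
    label_map_text = f.read()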

Problem 2

Error message: _tkinter.TclError: no display name and no $DISPLAY environment variable. Full traceback:

Traceback (most recent call last):
  File "ODtest.py", line 103, in <module>
    Detection()
  File "ODtest.py", line 96, in Detection
    plt.figure(figsize=IMAGE_SIZE)
  File "/usr/local/lib/python2.7/dist-packages/matplotlib/pyplot.py", line 533, in figure
    **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/matplotlib/backend_bases.py", line 161, in new_figure_manager
    return cls.new_figure_manager_given_figure(num, fig)
  File "/usr/local/lib/python2.7/dist-packages/matplotlib/backends/_backend_tk.py", line 1046, in new_figure_manager_given_figure
    window = Tk.Tk(className="matplotlib")
  File "/usr/lib/python2.7/lib-tk/Tkinter.py", line 1822, in __init__
    self.tk = _tkinter.create(screenName, baseName, className, interactive, wantobjects, useTk, sync, use)
_tkinter.TclError: no display name and no $DISPLAY environment variable

Solution (reference: https://blog.csdn.net/qq_22194315/article/details/77984423):

Code-only fix

This is the fix most people get from Q&A sites such as Stack Overflow: before importing pyplot or pylab, switch matplotlib's backend to "Agg". Here is the code:

# do this before importing pylab or pyplot
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
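One thing to note about this fix: with the Agg backend there is no display window, so plt.show() at the end of the script will not pop anything up. A minimal sketch of writing the figure to a file instead (the stand-in image and the output filename are my own additions, not part of the original script):

import numpy as np
import matplotlib
matplotlib.use('Agg')                     # must be set before importing pyplot
from matplotlib import pyplot as plt

image_np = np.zeros((480, 640, 3), dtype=np.uint8)   # stand-in for the annotated detection image
plt.figure(figsize=(20, 12))
plt.imshow(image_np)
plt.savefig('result.png')                 # Agg cannot open a window, so save to disk instead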

The modified code is:

#!/usr/bin/python
# -*- coding: utf-8 -*-

import numpy as np
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot 
from matplotlib import pyplot as plt
import os
import tensorflow as tf
from PIL import Image
from utils import label_map_util
from utils import visualization_utils as vis_util

import datetime
# Fix the default encoding (Python 2 only)
import sys
reload(sys)
sys.setdefaultencoding('utf8')

# Suppress TensorFlow warnings
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'

detection_graph = tf.Graph()

# Load the model data ----------------------------------------------------------------------------------------------
def loading():

    with detection_graph.as_default():
        od_graph_def = tf.GraphDef()
        PATH_TO_CKPT = 'ssd_mobilenet_v1_coco_11_06_2017' + '/frozen_inference_graph.pb'
        with tf.gfile.GFile(PATH_TO_CKPT, 'rb') as fid:
            serialized_graph = fid.read()
            od_graph_def.ParseFromString(serialized_graph)
            tf.import_graph_def(od_graph_def, name='')
    return detection_graph



# Detection ---------------------------------------------------------------------------------------------------------
def load_image_into_numpy_array(image):
    (im_width, im_height) = image.size
    return np.array(image.getdata()).reshape(
        (im_height, im_width, 3)).astype(np.uint8)
# List of the strings that are used to add the correct label for each box.
PATH_TO_LABELS = os.path.join('data', 'mscoco_label_map.pbtxt')
label_map = label_map_util.load_labelmap(PATH_TO_LABELS)
categories = label_map_util.convert_label_map_to_categories(label_map, max_num_classes=90, use_display_name=True)
category_index = label_map_util.create_category_index(categories)

def Detection(image_path="images/image1.jpg"):
    loading()
    with detection_graph.as_default():
        with tf.Session(graph=detection_graph) as sess:
            # for image_path in TEST_IMAGE_PATHS:
            image = Image.open(image_path)

            # the array based representation of the image will be used later in order to prepare the
            # result image with boxes and labels on it.
            image_np = load_image_into_numpy_array(image)

            # Expand dimensions since the model expects images to have shape: [1, None, None, 3]
            image_np_expanded = np.expand_dims(image_np, axis=0)
            image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')

            # Each box represents a part of the image where a particular object was detected.
            boxes = detection_graph.get_tensor_by_name('detection_boxes:0')

            # Each score represents the level of confidence for each of the objects.
            # The score is shown on the result image, together with the class label.
            scores = detection_graph.get_tensor_by_name('detection_scores:0')
            classes = detection_graph.get_tensor_by_name('detection_classes:0')
            num_detections = detection_graph.get_tensor_by_name('num_detections:0')

            # Actual detection.
            (boxes, scores, classes, num_detections) = sess.run(
                [boxes, scores, classes, num_detections],
                feed_dict={image_tensor: image_np_expanded})

            # Visualization of the detection results: draw the boxes and labels on the image.
            vis_util.visualize_boxes_and_labels_on_image_array(
                 image_np,
                 np.squeeze(boxes),
                 np.squeeze(classes).astype(np.int32),
                 np.squeeze(scores),
                 category_index,
                 use_normalized_coordinates=True,
                 line_thickness=8)
            # Print the top three results
            for i in range(3):
                if classes[0][i] in category_index.keys():
                    class_name = category_index[classes[0][i]]['name']
                else:
                    class_name = 'N/A'
                print("object: %s  probability: %s" % (class_name, scores[0][i]))
                
            # Show the image with matplotlib
            # Size, in inches, of the output images.
            IMAGE_SIZE = (20, 12)
            plt.figure(figsize=IMAGE_SIZE)
            plt.imshow(image_np)
            plt.show()



# Run
Detection()

Run result:

[screenshot of the detection output]

If all goes well, all that is left is to add a timing function and call the already-downloaded pretrained model.
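For the timing, here is a minimal sketch of what I have in mind, using the datetime module the script already imports; it would replace the bare Detection() call at the end of ODtest.py (the printed message format is my own choice):

import datetime

start = datetime.datetime.now()
Detection("images/image1.jpg")   # the script's default test image
elapsed = (datetime.datetime.now() - start).total_seconds()
print("detection took %.2f seconds" % elapsed)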

 

2. Using and training the model

