MATLAB Simulation of 3D Point Cloud Object Classification and Recognition Based on PointNet (Code Snippet)

fpga和matlab    2022-12-05


1. Software Version

MATLAB 2021a

2. System Overview

The PointNet network structure adopted here is shown in the figure below:

In the overall network structure, the first stage is set abstraction, which partitions the point cloud into local regions and extracts features from them. As the figure shows, a set abstraction block consists of three layers: a Sampling layer, a Grouping layer, and a PointNet layer. The Sampling layer selects the region center points using the FPS algorithm; the Grouping layer groups the points around each center using the MRG or MSG strategy; finally, a PointNet is applied to the grouped points to extract features. In the MSG configuration, the first set abstraction level selects 512 center points, uses ball radii of 0.1, 0.2, and 0.4, and caps the number of points per ball at 16, 32, and 128, respectively.
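
As a concrete reference, the sketch below collects the MSG hyperparameters quoted above into a MATLAB struct; the struct and field names are purely illustrative and not part of the simulation code.

sa1.NumCentroids = 512;            % center points selected by FPS
sa1.Radii        = [0.1 0.2 0.4];  % ball radii of the three MSG scales
sa1.MaxPoints    = [16 32 128];    % point cap per ball at each scale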

Sampling layer

The sampling layer selects a series of points from the input point cloud; these points define the centers of the local regions. The sampling algorithm is iterative farthest point sampling (FPS): a point is first chosen at random, the point farthest from it is selected next, and the iteration continues until the required number of points has been chosen. Compared with random sampling, FPS covers the whole point cloud more completely through the sampled region centers.
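
For illustration, here is a minimal MATLAB sketch of FPS, assuming pts is an N-by-3 point matrix; the helper name is hypothetical and not part of the example code.

function idx = farthestPointSampling(pts,k)
% Select k region centers from an N-by-3 point matrix via iterative FPS.
N = size(pts,1);
idx = zeros(k,1);
idx(1) = randi(N);                            % random starting point
minDist = sum((pts - pts(idx(1),:)).^2,2);    % squared distance to selected set
for i = 2:k
    [~,idx(i)] = max(minDist);                % farthest point from the set so far
    d = sum((pts - pts(idx(i),:)).^2,2);
    minDist = min(minDist,d);                 % keep nearest-center distances
end
end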

Grouping layer

The purpose of this layer is to construct the local regions from which features are then extracted. The idea is to use neighboring points; the paper uses a neighborhood ball rather than KNN because a ball guarantees a fixed region scale, with distance as the main criterion.
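
A minimal ball-query sketch in the same spirit (the helper name is hypothetical, and padding by replication is an assumed convention for keeping a fixed group size): it gathers at most maxPts point indices within the given radius of a center.

function nbrIdx = ballQuery(pts,center,radius,maxPts)
% Return up to maxPts indices of points inside the ball around center.
d2 = sum((pts - center).^2,2);          % squared distance to the center
nbrIdx = find(d2 <= radius^2,maxPts);   % at most maxPts points in the ball
if isempty(nbrIdx)
    [~,nbrIdx] = min(d2);               % fall back to the nearest point
end
nbrIdx(end+1:maxPts) = nbrIdx(1);       % replicate to a fixed group size
end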

PointNet layer

As for extracting local features from a point cloud, the original PointNet already does this well. In PointNet++, therefore, the original PointNet network becomes a sub-network of PointNet++ and is applied hierarchically, extracting features level by level.
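
The core PointNet operation on one group can be sketched in a few lines: the same MLP (here a single hypothetical 3-to-64 layer with random weights) is applied to every point, and max pooling over points produces a group feature that is invariant to point ordering.

groupPts = randn(32,3);                    % one local group: 32 points, xyz
W = randn(64,3); b = randn(64,1);          % shared one-layer MLP, 3 -> 64
pointFeat = max(W*groupPts' + b,0);        % ReLU; 64-by-32 per-point features
groupFeat = max(pointFeat,[],2);           % max pool over points -> 64-by-1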

3. Core Program (Excerpt)

clc;
clear;
close all;
warning off;
addpath(genpath(pwd));
rng('default')
%****************************************************************************
%For more MATLAB and FPGA content, see the CSDN blog of "fpga和matlab":
%MATLAB/FPGA project development cooperation
%https://blog.csdn.net/ccsss22?type=blog
%****************************************************************************
dsTrain = PtCloudClassificationDatastore('train');
dsVal = PtCloudClassificationDatastore('test');

ptCloud = pcread('Chair.ply');
label = 'Chair';
figure;pcshow(ptCloud)
xlabel("X");ylabel("Y");zlabel("Z");title(label)

% Extract the label and point count of each observation
dsLabelCounts = transform(dsTrain,@(data){data{2} data{1}.Count});
labelCounts = readall(dsLabelCounts);
labels = vertcat(labelCounts{:,1});
counts = vertcat(labelCounts{:,2});
figure;histogram(labels);title('class distribution')


rng(0)
[G,classes] = findgroups(labels);
numObservations = splitapply(@numel,labels,G);
desiredNumObservationsPerClass = max(numObservations);
filesOverSample=[];
for i=1:numel(classes)
    if i==1
        targetFiles = dsTrain.Files(1:numObservations(i));
    else
        % Files are grouped by class, so index by cumulative class counts
        targetFiles = dsTrain.Files(sum(numObservations(1:i-1))+1:sum(numObservations(1:i)));
    end
    % Randomly replicate the point clouds belonging to the infrequent classes
    files = randReplicateFiles(targetFiles,desiredNumObservationsPerClass);
    filesOverSample = vertcat(filesOverSample,files');
end
dsTrain.Files=filesOverSample;

 

dsTrain.Files = dsTrain.Files(randperm(length(dsTrain.Files)));



dsTrain.MiniBatchSize = 32;
dsVal.MiniBatchSize = dsTrain.MiniBatchSize;


dsTrain = transform(dsTrain,@augmentPointCloud);

data = preview(dsTrain);
ptCloud = data{1,1};
label = data{1,2};

figure;pcshow(ptCloud.Location,[0 0 1],"MarkerSize",40,"VerticalAxisDir","down")
xlabel("X");ylabel("Y");zlabel("Z");title(label)


minPointCount = splitapply(@min,counts,G);
maxPointCount = splitapply(@max,counts,G);
meanPointCount = splitapply(@(x)round(mean(x)),counts,G);
stats = table(classes,numObservations,minPointCount,maxPointCount,meanPointCount)

numPoints = 1000;
dsTrain = transform(dsTrain,@(data)selectPoints(data,numPoints));
dsVal = transform(dsVal,@(data)selectPoints(data,numPoints));

dsTrain = transform(dsTrain,@preprocessPointCloud);
dsVal = transform(dsVal,@preprocessPointCloud);

data = preview(dsTrain);
figure;pcshow(data{1,1},[0 0 1],"MarkerSize",40,"VerticalAxisDir","down");
xlabel("X");ylabel("Y");zlabel("Z");title(data{1,2})


inputChannelSize = 3;
hiddenChannelSize1 = [64,128];
hiddenChannelSize2 = 256;
[parameters.InputTransform, state.InputTransform] = initializeTransform(inputChannelSize,hiddenChannelSize1,hiddenChannelSize2);

inputChannelSize = 3;
hiddenChannelSize = [64 64];
[parameters.SharedMLP1,state.SharedMLP1] = initializeSharedMLP(inputChannelSize,hiddenChannelSize);

inputChannelSize = 64;
hiddenChannelSize1 = [64,128];
hiddenChannelSize2 = 256;
[parameters.FeatureTransform, state.FeatureTransform] = initializeTransform(inputChannelSize,hiddenChannelSize1,hiddenChannelSize2);

inputChannelSize = 64;
hiddenChannelSize = 64;
[parameters.SharedMLP2,state.SharedMLP2] = initializeSharedMLP(inputChannelSize,hiddenChannelSize);


inputChannelSize = 64;
hiddenChannelSize = [512,256];
numClasses = numel(classes);
[parameters.ClassificationMLP, state.ClassificationMLP] = initializeClassificationMLP(inputChannelSize,hiddenChannelSize,numClasses);

numEpochs = 60;
learnRate = 0.001;
l2Regularization = 0.1;
learnRateDropPeriod = 15;
learnRateDropFactor = 0.5;

gradientDecayFactor = 0.9;
squaredGradientDecayFactor = 0.999;
avgGradients = [];
avgSquaredGradients = [];

[lossPlotter, trainAccPlotter,valAccPlotter] = initializeTrainingProgressPlot;
% Number of classes
numClasses = numel(classes);
% Initialize the iterations
iteration = 0;
% To calculate the time for training
start = tic;
% Loop over the epochs
for epoch = 1:numEpochs
    
    % Reset training and validation datastores.
    reset(dsTrain);
    reset(dsVal);
    
    % Iterate through data set.
    while hasdata(dsTrain) % if no data to read, exit the loop to start the next epoch
        iteration = iteration + 1;        
        % Read data.
        data = read(dsTrain);        
        % Create batch.
        [XTrain,YTrain] = batchData(data,classes);        
        % Evaluate the model gradients and loss using dlfeval and the
        % modelGradients function.
        [gradients, loss, state, acc] = dlfeval(@modelGradients,XTrain,YTrain,parameters,state);
        % L2 regularization.
        gradients = dlupdate(@(g,p) g + l2Regularization*p,gradients,parameters);
        % Update the network parameters using the Adam optimizer.
        [parameters, avgGradients, avgSquaredGradients] = adamupdate(parameters, gradients, ...
            avgGradients, avgSquaredGradients, iteration,learnRate,gradientDecayFactor, squaredGradientDecayFactor);
        % Update the training progress.
        D = duration(0,0,toc(start),"Format","hh:mm:ss");
        title(lossPlotter.Parent,"Epoch: " + epoch + ", Elapsed: " + string(D))
        addpoints(lossPlotter,iteration,double(gather(extractdata(loss))))
        addpoints(trainAccPlotter,iteration,acc);
        drawnow
    end
    
    % Create confusion matrix 
    cmat = sparse(numClasses,numClasses);
    % Classify the validation data to monitor the training process
    while hasdata(dsVal)                
        data = read(dsVal); % Get the next batch of data.
        [XVal,YVal] = batchData(data,classes);% Create batch.        
        % Compute label predictions.
        isTrainingVal = 0; %Set at zero for validation data
        YPred = pointnetClassifier(XVal,parameters,state,isTrainingVal);
        
        % Choose prediction with highest score as the class label for
        % XTest.
        [~,YValLabel] = max(YVal,[],1);
        [~,YPredLabel] = max(YPred,[],1);
        cmat = aggreateConfusionMetric(cmat,YValLabel,YPredLabel);% Update the confusion matrix
    end
    % Update training progress plot with average classification accuracy.
    acc = sum(diag(cmat))./sum(cmat,"all");
    addpoints(valAccPlotter,iteration,acc);
    % Update the learning rate
    if mod(epoch,learnRateDropPeriod) == 0
        learnRate = learnRate * learnRateDropFactor;
    end   
    reset(dsTrain); % Reset the training data since all the training data were already read 
    % Shuffle the data at every epoch
    dsTrain.UnderlyingDatastore.Files = dsTrain.UnderlyingDatastore.Files(randperm(length(dsTrain.UnderlyingDatastore.Files)));
    reset(dsVal);
end


cmat = sparse(numClasses,numClasses); % Initialize the confusion matrix as an all-zero sparse matrix
reset(dsVal); % Reset the validation data
data = readall(dsVal); % Read all validation data
[XVal,YVal] = batchData(data,classes); % Create batch.
% Classify the validation data using the helper function pointnetClassifier
YPred = pointnetClassifier(XVal,parameters,state,isTrainingVal);
% Choose prediction with highest score as the class label for
% XTest.
[~,YValLabel] = max(YVal,[],1);
[~,YPredLabel] = max(YPred,[],1);

% Collect confusion metrics.
cmat = aggreateConfusionMetric(cmat,YValLabel,YPredLabel);
figure;chart = confusionchart(cmat,classes);

acc = sum(diag(cmat))./sum(cmat,"all")






4. Simulation Results


5. References

[1] Qi C. R., Su H., Mo K., et al. PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation[C]// 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2017.


...多学者提出了大量研究方法,这些方法主要有以下两类:基于栅格图方法的地面去除研究、基于三维激光雷达原始扫描线数据的地面去除研究。    通过激光雷达扫描得到的点云包含大部分地面点,常用的栅格图方... 查看详情