Binocular Stereo Vision: ZED 2 ROS Stereo Vision Development, Theory and Practice 2 (Code Snippets)

Tech blog of Hao WANG · 2023-01-13

Keywords:

ROS - ZED Node

To start a ZED ROS node you can use the following commands in a shell console:

  • ZED:
$ roslaunch zed_wrapper zed.launch
  • ZED Mini:
$ roslaunch zed_wrapper zedm.launch
  • ZED 2:
$ roslaunch zed_wrapper zed2.launch
  • ZED 2i:
$ roslaunch zed_wrapper zed2i.launch
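
The launch files also accept arguments that override the default behavior. As a minimal sketch (assuming the launch file exposes an svo_file argument, as recent zed_wrapper releases do), you can replay a recorded SVO file instead of opening the live camera:

# Replay a recorded SVO file instead of the live camera (svo_file argument assumed)
$ roslaunch zed_wrapper zed2.launch svo_file:=/path/to/recording.svo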

Published Topics

The ZED node publishes data to the following topics:

  • Left camera

    • rgb/image_rect_color: Color rectified image (left RGB image by default)
    • rgb/image_rect_gray: Grayscale rectified image (left RGB image by default)
    • rgb_raw/image_raw_color: Color unrectified image (left RGB image by default)
    • rgb_raw/image_raw_gray: Grayscale unrectified image (left RGB image by default)
    • rgb/camera_info: Color camera calibration data
    • rgb_raw/camera_info: Color unrectified camera calibration data
    • left/image_rect_color: Left camera color rectified image
    • left/image_rect_gray: Left camera grayscale rectified image
    • left_raw/image_raw_color: Left camera color unrectified image
    • left_raw/image_raw_gray: Left camera grayscale unrectified image
    • left/camera_info: Left camera calibration data
    • left_raw/camera_info: Left unrectified camera calibration data
  • Right camera

    • right/image_rect_color: Color rectified right image
    • right_raw/image_raw_color: Color unrectified right image
    • right/image_rect_gray: Grayscale rectified right image
    • right_raw/image_raw_gray: Grayscale unrectified right image
    • right/camera_info: Right camera calibration data
    • right_raw/camera_info: Right unrectified camera calibration data
  • Stereo pair

    • stereo/image_rect_color: stereo rectified pair images side-by-side
    • stereo_raw/image_raw_color: stereo unrectified pair images side-by-side

    Note: to retrieve the camera parameters you can subscribe to the topics left/camera_info, right/camera_info, left_raw/camera_info and right_raw/camera_info.

  • Depth and point cloud

    • depth/depth_registered: Depth map image registered on left image (32-bit float in meters by default)
    • depth/camera_info: Depth camera calibration data
    • point_cloud/cloud_registered: Registered color point cloud
    • confidence/confidence_map: Confidence image (floating point values to be used in your own algorithms)
    • disparity/disparity_image: Disparity image
  • Tracking

    • odom: Absolute 3D position and orientation relative to the Odometry frame (pure visual odometry for ZED, visual-inertial for ZED-M and ZED 2)
    • pose: Absolute 3D position and orientation relative to the Map frame (Sensor Fusion algorithm + SLAM + Loop closure)
    • pose_with_covariance: Camera pose referred to Map frame with covariance
    • path_odom: Sequence of camera odometry poses in Map frame
    • path_map: Sequence of camera poses in Map frame
  • Mapping

    • mapping/fused_cloud: Fused color point cloud

    Note: published only if mapping is enabled, see mapping/mapping_enabled parameter

  • Sensor Data

    • imu/data: Accelerometer, gyroscope, and orientation data in Earth frame [only ZED-M and ZED 2]
    • imu/data_raw: Accelerometer and gyroscope data in Earth frame [only ZED-M and ZED 2]
    • imu/mag: Calibrated magnetometer data [only ZED 2]
    • atm_press: Atmospheric pressure data [only ZED 2]
    • temperature/imu: Temperature of the IMU sensor [only ZED 2]
    • temperature/left: Temperature of the left camera sensor [only ZED 2]
    • temperature/right: Temperature of the right camera sensor [only ZED 2]
    • left_cam_imu_transform: Transform from left camera sensor to IMU sensor position
  • Object Detection [only ZED 2]

    • objects: array of the detected/tracked objects for each camera frame [only ZED 2]
    • object_markers: array of markers of the detected/tracked objects to be rendered in Rviz [only ZED 2]
  • Diagnostic

    • /diagnostics: ROS diagnostic message for ZED cameras
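
Once the node is running, the standard ROS CLI tools are enough for a first check of these topics. The commands below are a sketch assuming the default zed2.launch namespace /zed2/zed_node; the prefix changes with the camera_name parameter:

# List every topic published under the node namespace
$ rostopic list /zed2/zed_node

# Measure the actual publishing rate of the rectified left image
$ rostopic hz /zed2/zed_node/rgb/image_rect_color

# Print a single odometry message and exit
$ rostopic echo -n 1 /zed2/zed_node/odom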

ZED parameters

You can specify the parameters used by the ZED node by modifying the values in the following files:

  • params/common.yaml: parameters common to all camera models
  • params/zed.yaml: parameters for the ZED camera
  • params/zedm.yaml: parameters for the ZED Mini camera
  • params/zed2.yaml: parameters for the ZED 2 camera
  • params/zed2i.yaml: parameters for the ZED 2i camera
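
After launch, the values loaded from these files end up on the ROS parameter server, so you can verify what the node actually read with rosparam. A sketch, again assuming the default /zed2/zed_node namespace:

# List every parameter loaded by the node
$ rosparam list /zed2/zed_node

# Read back a single value, e.g. the camera model (parameter path assumed)
$ rosparam get /zed2/zed_node/general/camera_model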

GENERAL PARAMETERS

Parameters with prefix general

  • camera_name: A custom name for the ZED camera, used as namespace and prefix for camera TF frames [string, default=zed]
  • camera_model: Type of Stereolabs camera [zed: ZED, zedm: ZED-M, zed2: ZED 2, zed2i: ZED 2i]
  • camera_flip: Flip the camera data if it is mounted upside down [true, false]
  • zed_id: Select a ZED camera by its ID; useful when multiple cameras are connected. The ID is ignored if an SVO path is specified [int, default=0]
  • serial_number: Select a ZED camera by its serial number [int, default=0]
  • resolution: Set the ZED camera resolution [0: HD2K, 1: HD1080, 2: HD720, 3: VGA]
  • grab_frame_rate: Set the ZED camera grabbing framerate [int]
  • gpu_id: Select a GPU device for depth computation [int, default=-1 (best device found)]
  • base_frame: Frame_id of the frame that indicates the reference base of the robot [string, default=base_link]
  • verbose: Enable/disable the verbosity of the SDK [true, false]
  • svo_compression: Set the SVO compression mode [0: LOSSLESS (PNG/ZSTD), 1: H264 (AVCHD), 2: H265 (HEVC)]
  • self_calib: Enable/disable self-calibration at startup [true, false]

VIDEO PARAMETERS

Parameters with prefix video

  • img_downsample_factor: Resample factor for images, in [0.01, 1.0]. The SDK works with native image sizes but publishes rescaled images [double, default=1.0]
  • extrinsic_in_camera_frame: If false, the extrinsic parameters in camera_info use the ROS native frame (X FORWARD, Z UP) instead of the camera frame (Z FORWARD, Y DOWN); true keeps the old behavior of versions < v3.1 [true, false]

DEPTH PARAMETERS

Parameters with prefix depth

  • quality: Select the depth map quality [0: NONE, 1: PERFORMANCE, 2: MEDIUM, 3: QUALITY, 4: ULTRA]
  • sensing_mode: Select the depth sensing mode (change only for VR/AR applications) [0: STANDARD, 1: FILL]
  • depth_stabilization: Enable depth stabilization. Stabilizing the depth requires additional computation because it enables tracking [0: disabled, 1: enabled]
  • openni_depth_mode: Convert 32-bit depth in meters to 16-bit in millimeters [0: 32-bit float meters, 1: 16-bit unsigned millimeters]
  • depth_downsample_factor: Resample factor for depth data matrices, in [0.01, 1.0]. The SDK works with native data sizes but publishes rescaled matrices (depth map, point cloud, …) [double, default=1.0]
  • min_depth: Minimum value allowed for depth measures [min: 0.3 (ZED) or 0.1 (ZED-M), max: 3.0]. Note: reducing this value requires more computational power and GPU memory; with limited compute power, increasing it can give better performance
  • max_depth: Maximum value allowed for depth measures [min: 1.0, max: 30.0]. Values beyond this limit are reported as TOO_FAR

POSITION PARAMETERS

Parameters with prefix pos_tracking

  • pos_tracking_enabled: Enable/disable positional tracking from the start [true, false]
  • publish_tf: Enable/disable publishing of the TF frames [true, false]
  • publish_map_tf: Enable/disable publishing of the map TF frame [true, false]
  • map_frame: Frame_id of the pose message [string, default=map]
  • odometry_frame: Frame_id of the odom message [string, default=odom]
  • area_memory_db_path: Path of the database file for loop closure and relocalization, containing the learned visual information about the environment [string, default='']
  • save_area_memory_db_on_exit: Save the "known visual features" map to the path indicated by area_memory_db_path when the node is correctly closed [true, false]
  • area_memory: Enable loop closure [true, false]
  • floor_alignment: Indicates if the floor must be used as origin for height measures [true, false]
  • initial_base_pose: Initial reference pose [vector, default=[0.0,0.0,0.0, 0.0,0.0,0.0] -> [X, Y, Z, R, P, Y]]
  • init_odom_with_first_valid_pose: Indicates if the odometry must be initialized with the first valid pose received from the tracking algorithm [true, false]
  • path_pub_rate: Publishing frequency (Hz) of the trajectory messages [float, default=2.0]
  • path_max_count: Maximum number of poses kept in the pose arrays (-1 for infinite) [int, default=-1]
  • two_d_mode: Force navigation on a plane. If true, the Z value is fixed to fixed_z_value and roll and pitch to zero [true, false]
  • fixed_z_value: Value used for the Z coordinate if two_d_mode is true [double, default=0.0]

MAPPING PARAMETERS

Parameters with prefix mapping

  • mapping_enabled: Enable/disable the mapping module [true, false]
  • resolution: Resolution of the fused point cloud, in [0.01, 0.2] [double, default=0.1]
  • max_mapping_range: Maximum depth range while mapping, in meters (-1 for automatic calculation), in [2.0, 20.0] [double, default=-1]
  • fused_pointcloud_freq: Publishing frequency (Hz) of the 3D map as a fused point cloud [double, default=1.0]

SENSORS PARAMETERS (ONLY ZED-M, ZED 2, AND ZED 2I)

Parameters with prefix sensors

  • sensors_timestamp_sync: Synchronize the sensors message timestamp with the latest received frame [true, false]
  • publish_imu_tf: Publish the IMU -> <cam_name>_left_camera_frame TF [true, false]

OBJECT DETECTION PARAMETERS (ONLY ZED 2 AND ZED 2I)

Parameters with prefix object_detection

  • od_enabled: Enable/disable the Object Detection module when the node starts [true, false]
  • model: Object Detection model to be used [0: MULTI_CLASS_BOX, 1: MULTI_CLASS_BOX_ACCURATE, 2: HUMAN_BODY_FAST, 3: HUMAN_BODY_ACCURATE, 4: MULTI_CLASS_BOX_MEDIUM, 5: HUMAN_BODY_MEDIUM]
  • confidence_threshold: Minimum value of the detection confidence of an object [int, in [0,100]]
  • max_range: Maximum detection range [double, default=15.0]
  • object_tracking_enabled: Enable/disable tracking of the detected objects [true, false]
  • body_fitting: Enable/disable body fitting for HUMAN_BODY models [true, false]
  • mc_people: Enable/disable detection of persons for MULTI_CLASS_BOX models [true, false]
  • mc_vehicle: Enable/disable detection of vehicles for MULTI_CLASS_BOX models [true, false]
  • mc_bag: Enable/disable detection of bags for MULTI_CLASS_BOX models [true, false]
  • mc_animal: Enable/disable detection of animals for MULTI_CLASS_BOX models [true, false]
  • mc_electronics: Enable/disable detection of electronic devices for MULTI_CLASS_BOX models [true, false]
  • mc_fruit_vegetable: Enable/disable detection of fruits and vegetables for MULTI_CLASS_BOX models [true, false]

Dynamic parameters

The ZED node lets you reconfigure these parameters dynamically:

  • pub_frame_rate: Publishing frequency of the video and depth images (equal to or less than the grab_frame_rate value) [float, in [0.1,100.0]]
  • depth_confidence: Threshold to reject depth values based on their confidence. Each depth pixel has a corresponding confidence value; a lower threshold means more confidence and precision (but less density), while a higher one reduces filtering (more density, less certainty). A value of 100 allows confidences from 0 to 100 (no filtering); 90 allows 10 to 100 (filtering out the lowest-confidence values); 30 allows 70 to 100 (keeping only the highest-confidence values and lowering the density of the depth map). By default the threshold is 100, meaning no depth pixel is rejected [int, in [1,100]]
  • depth_texture_conf: Threshold to reject depth values based on their textureness confidence. A lower value means more confidence and precision (but less density); a higher value reduces filtering (more density, less certainty). By default the threshold is 100, meaning no depth pixel is rejected [int, in [1,100]]
  • point_cloud_freq: Publishing frequency of the point cloud (equal to or less than the frame rate value) [float, in [0.1,100.0]]
  • brightness: Defines the brightness control [int, in [0,8]]
  • contrast: Defines the contrast control [int, in [0,8]]
  • hue: Defines the hue control [int, in [0,11]]
  • saturation: Defines the saturation control [int, in [0,8]]
  • sharpness: Defines the sharpness control [int, in [0,8]]
  • gamma: Defines the gamma control [int, in [1,9]]
  • auto_exposure_gain: Defines whether gain and exposure are in automatic mode [true, false]
  • gain: Defines the gain control (only if auto_exposure_gain is false) [int, in [0,100]]
  • exposure: Defines the exposure control (only if auto_exposure_gain is false) [int, in [0,100]]
  • auto_whitebalance: Defines whether the white balance is in automatic mode [true, false]
  • whitebalance_temperature: Defines the color temperature value (x100) [int, in [42,65]]

To modify a dynamic parameter, you can use the GUI provided by the rqt stack:

$ rosrun rqt_reconfigure rqt_reconfigure
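
Besides the GUI, dynamic parameters can be changed from the command line with the standard dynamic_reconfigure tools. A hedged example, assuming the default node name /zed2/zed_node:

# Print the current dynamic configuration of the node
$ rosrun dynamic_reconfigure dynparam get /zed2/zed_node

# Keep only high-confidence depth pixels by lowering the threshold
$ rosrun dynamic_reconfigure dynparam set /zed2/zed_node depth_confidence 50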

 

Transform frames

The ZED ROS wrapper broadcasts multiple coordinate frames that each provide information about the camera’s position and orientation. If needed, the reference frames can be changed in the launch file.

  • base_frame is the current position and orientation of the reference base of the robot
  • <camera_name>_camera_center is the current position and orientation of ZED, determined by visual odometry and the tracking module
  • <camera_name>_right_camera is the position and orientation of the ZED’s right camera
  • <camera_name>_right_camera_optical is the position and orientation of the ZED’s right camera optical frame
  • <camera_name>_left_camera is the position and orientation of the ZED’s left camera
  • <camera_name>_left_camera_optical is the position and orientation of the ZED’s left camera optical frame
  • <camera_name>_imu_link is the origin of the inertial data frame (ZED-M and ZED2 only)
  • <camera_name>_mag_link is the origin of the magnetometer frame (ZED2 only)
  • <camera_name>_baro_link is the origin of the barometer frame (ZED2 only)
  • <camera_name>_temp_left_link is the origin of the left temperature frame (ZED2 only)
  • <camera_name>_temp_right_link is the origin of the right temperature frame (ZED2 only)

For RVIZ compatibility, the root frame map_frame is called map. The TF tree generated by the zed_wrapper reflects the standard described in REP 105. The odometry frame is updated using only the "visual odometry" information. The map frame is updated using the tracking algorithm provided by the Stereolabs SDK, fusing the inertial information from the IMU sensor when using a ZED-M camera.

map_frame (`map`)
└─odometry_frame (`odom`)
   └─base_frame (`base_link`)
      └─camera_frame (`<camera_name>_camera_center`)
      |  └─left_camera_frame (`<camera_name>_left_camera_frame`)
      |  |     └─left_camera_optical_frame (`<camera_name>_left_camera_optical_frame`)
      |  |     └─left_temperature_frame (`<camera_name>_temp_left_link`) (*only ZED 2 and ZED2i*)
      |  └─right_camera_frame (`<camera_name>_right_camera_frame`)
      |        └─right_camera_optical_frame (`<camera_name>_right_camera_optical_frame`)
      |        └─right_temperature_frame (`<camera_name>_temp_right_link`) (*only ZED 2 and ZED2i*)
      └─imu_frame (`<camera_name>_imu_link`) (*only ZED-M and ZED 2 and ZED2i*)
      └─magnetometer_frame (`<camera_name>_mag_link`) (*only ZED 2 and ZED2i*)
      └─barometer_frame (`<camera_name>_baro_link`) (*only ZED 2 and ZED2i*)
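
The broadcast frames can be inspected with the standard TF tools. A short sketch, assuming camera_name=zed2 so that the frame names match the tree above:

# Visualize the live TF tree as a graph
$ rosrun rqt_tf_tree rqt_tf_tree

# Print the transform from the map frame to the left camera frame
$ rosrun tf tf_echo map zed2_left_camera_frame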

ZED-M

The ZED-M provides the same information as the ZED, plus the inertial data from the IMU sensor. The IMU data are used internally to generate the pose in the Map frame with the Tracking sensor fusion algorithm.

Note: The initial pose in Odometry frame can be set to the first pose received by the Tracking module by setting the parameter init_odom_with_first_valid_pose to true.

ZED 2 / ZED 2i

The ZED 2 and ZED 2i provide the same information as the ZED and the ZED-M, plus data from a new set of sensors: magnetometer data for better, absolute information about the yaw angle; atmospheric pressure information, used to estimate the height of the camera with respect to a reference point; and temperature information to check the camera health in a complex robotic system.

Services

The ZED node provides the following services:

  • reset_tracking: Restarts the Tracking module setting the initial pose to the value available in the param server.
  • reset_odometry: Resets the odometry values eliminating the drift due to the Visual Odometry algorithm, setting the new odometry value to the latest camera pose received from the tracking module.
  • set_pose: Sets the current pose of the camera to the value passed as parameters -> X, Y, Z [m], R, P, Y [rad].
  • save_area_memory: Triggers the saving of the current area memory used for relocalization, if area memory is enabled.
  • set_led_status: Sets the status of the blue led -> True: LED ON, False: LED OFF.
  • toggle_led: Toggles the status of the blue led, returning the new status.
  • start_svo_recording: Starts recording an SVO file. If no filename is provided the default zed.svo is used. If no path is provided with the filename the default recording folder is ~/.ros/.
  • stop_svo_recording: Stops an active SVO recording.
  • start_remote_stream: Starts streaming over network to allow processing of ZED data on a remote machine. See Remote streaming.
  • stop_remote_stream: Stops streaming over network.
  • start_3d_mapping: Starts the Spatial Mapping processing. See Spatial Mapping.
  • stop_3d_mapping: Stops the Spatial Mapping processing (works even with automatic start from configuration file).
  • save_3d_map: Triggers the saving of the current 3D fused point cloud if the mapping module is enabled.
  • start_object_detection: Starts the Object Detection processing. See Object Detection. Note: returns an error if not using a ZED 2 camera.
  • stop_object_detection: Stops the Object Detection processing (works even with automatic start from configuration file).
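
All of these services live under the node namespace and can be exercised with rosservice. The exact request fields depend on the zed_interfaces service definitions, so it is safest to query them first; the calls below are a sketch that assumes the default /zed2/zed_node namespace and an svo_filename request field:

# Discover the services exposed by the node and their request fields
$ rosservice list /zed2/zed_node
$ rosservice args /zed2/zed_node/start_svo_recording

# Start and stop an SVO recording (field name assumed, check with rosservice args)
$ rosservice call /zed2/zed_node/start_svo_recording "svo_filename: '/tmp/test.svo'"
$ rosservice call /zed2/zed_node/stop_svo_recording "{}"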

Note: currently the H26x SVO compression uses the resolution and the real FPS to calculate the bitrate, which can lead to quality issues when the FPS is low. On NVIDIA Jetson TX1 and Jetson TX2 the FPS is quite low at the beginning of the grab loop, so it is better to wait for a few grab calls before calling start_svo_recording, so that the camera FPS has stabilized at the requested value (15 FPS or more).

Remote streaming

SDK v2.8 introduced the capability to acquire ZED data and stream it to a remote machine over the network. This feature is useful when the ZED is connected to a machine with limited CUDA capabilities while high-definition analysis is required. Starting the streaming without activating any other feature (i.e. depth processing, positional tracking, point cloud publishing, …) requires only the processing power for H264 or H265 compression.

To start streaming the service start_remote_stream must be called with the following parameters:

  • codec (def. 0): Defines the codec used for streaming [0: AVCHD (H264), 1: HEVC (H265)]. Note: if HEVC (H265) is used, make sure the receiving host is compatible with HEVC decoding (basically a Pascal NVIDIA card); if not, prefer AVCHD (H264), since every compatible NVIDIA card supports AVCHD decoding
  • port (def. 30000): Defines the port the data will be streamed on. Note: the port must be an even number; any odd number will be rejected
  • bitrate (def. 2000): Defines the streaming bitrate in Kbits/s
  • gop_size (def. -1): Defines the GOP size in frames. Note: if the value is set to -1, the GOP size will match 2 seconds, depending on the camera FPS. The GOP size determines the maximum distance between IDR/I-frames; a very high GOP size yields slightly more efficient compression, especially on static scenes, but can cause more latency if IDR/I-frame packets are lost during streaming. The maximum allowed value is 256 (frames)
  • adaptative_bitrate (def. False): Enable/disable adaptive bitrate. Note: the bitrate is adjusted according to the number of packets lost during streaming; if activated, the bitrate can vary between [bitrate/4, bitrate]. Currently, adaptive bitrate only works when the sending device is an NVIDIA Jetson (X1, X2, Xavier, Nano)

RECEIVE A REMOTE STREAM

To acquire the streaming on a remote machine start a ZED node with the following command:

$ roslaunch zed_wrapper zed.launch stream:=<sender_IP>:<port>

For example:

$ roslaunch zed_wrapper zed.launch stream:=192.168.1.127:30000

Note: the stream contains only visual information. When using a ZED-M camera, the inertial topics are not part of the stream; they must be subscribed on the sender node using standard ROS methods.

Spatial Mapping

The ZED node provides a basic mapping module that publishes a 3D map of the environment as a 3D color point cloud (mapping/fused_cloud).

Spatial Mapping is disabled by default because it is a computationally heavy process. It can be enabled by setting the parameter mapping/mapping_enabled to true in the file params/common.yaml.

Spatial Mapping can be enabled manually at any time using the start_3d_mapping service.

To reduce the computational load, use a value higher than 0.1 m for the mapping resolution and decrease fused_pointcloud_freq to lower the publishing frequency of the point cloud topic.

The generated 3D map can be saved as a point cloud in OBJ, binary OBJ, or PLY format by calling the save_3d_map service.
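
A hedged command-line sketch, assuming the default /zed2/zed_node namespace (the request fields of the mapping services mirror the mapping parameters above and should be checked with rosservice args before calling):

# Inspect the request fields of the mapping services
$ rosservice args /zed2/zed_node/start_3d_mapping
$ rosservice args /zed2/zed_node/save_3d_map

# Monitor the fused point cloud while mapping is active
$ rostopic hz /zed2/zed_node/mapping/fused_cloud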

Object Detection

Using the ZED 2 or ZED 2i camera unlocks the Object Detection module, which detects and tracks human skeletons and objects.

Object Detection is disabled by default because it is a computationally heavy process. It can be enabled automatically when the node starts by setting the parameter object_detection/od_enabled to true in the file params/zed2.yaml or params/zed2i.yaml.

If Object Detection is disabled, it can be enabled manually at any time using the start_object_detection service. See the Object Detection parameters for more information about the meaning of each parameter used with the service.

The list of detected objects is published as a custom topic of type ObjectStamped, defined in the package zed_interfaces. For details about the topic fields, please read the Object Detection full documentation.

OBJECT TRACKING

Enabling Object Tracking provides frame-by-frame information about the status of each detected object. The same object is identified with the same ID in all the frames after its first detection, and its position is estimated if an occlusion occurs.

Each detected object reports one of four tracking states:

  • OFF - The tracking is not yet initialized; the object ID is not usable
  • OK - The object is tracked
  • SEARCHING - The object could not be detected in the image and is potentially occluded; the trajectory is estimated
  • TERMINATE - This is the last searching state of the track; the track will be deleted in the next retrieveObjects call
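
The tracking state travels with each detected object on the objects topic, so a quick way to watch it is to echo that topic while hiding the large arrays. A sketch assuming the default namespace; the exact message layout is defined in zed_interfaces:

# Print detected objects without the bulky arrays (e.g. bounding-box corners)
$ rostopic echo --noarr /zed2/zed_node/objects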

Diagnostic

The ZED node publishes diagnostic information that a robotic system can consume through a diagnostic_aggregator node.

Using the Runtime Monitor plugin of rqt, it is possible to get all the diagnostic information and check that the node is working as expected.

A brief explanation of each field:

  • Component: name of the diagnostic component
  • Message: summary of the status of the ZED node
  • HardwareID: Model of the ZED camera and its serial number
  • Capture: grabbing frequency (if video or depth data are subscribed) and the percentage with respect to the camera frame rate
  • Processing time: time in seconds spent processing the data, and the time limit to achieve the maximum frame rate
  • Playing SVO: (visible only if playing an SVO file) current frame position in the SVO file over the total frame count
  • Depth status: indicates if depth processing is performed
  • Point Cloud: point cloud publishing frequency (if there is at least one subscriber) and the percentage with respect to the camera frame rate
  • Floor Detection: if floor detection is enabled, indicates if the floor has been detected and the camera position correctly initialized
  • Tracking status: indicates the status of the positional tracking, if enabled
  • Object data processing: if Object Detection is enabled, indicates the time required to process the data of the detected objects
  • IMU: the publishing frequency of the IMU topics, if the camera is a ZED Mini and there is at least one subscriber
  • Left CMOS Temp.: (only ZED 2) the temperature of the CMOS of the left camera sensor [-273.15°C if not valid]
  • Right CMOS Temp.: (only ZED 2) the temperature of the CMOS of the right camera sensor [-273.15°C if not valid]
  • SVO Recording: indicates if the SVO recording is active
  • SVO Compression time: average time spent on frame compressing
  • SVO Compression ratio: average frame compression ratio
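
Both the raw diagnostic stream and the rqt monitor mentioned above are available from the command line; a minimal sketch:

# Raw diagnostic messages published by the node
$ rostopic echo /diagnostics

# GUI runtime monitor
$ rosrun rqt_runtime_monitor rqt_runtime_monitor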

Using multiple ZEDs

It is possible to use multiple ZED cameras with ROS. Simply launch the node with the zed_multi_cam.launch file:

$ roslaunch zed_wrapper zed_multi_cam.launch

Assigning a GPU to a camera

To improve performance, you can specify the gpu_id of the graphics card that will be used for the depth computation in the launch file. The default value (-1) selects the GPU with the highest number of CUDA cores. When using multiple ZEDs, you can assign each camera to a GPU to increase performance.

Limitations

PERFORMANCE

This wrapper lets you quickly prototype applications and interface the ZED with other sensors and packages available in ROS. However, the ROS layer introduces significant latency and a performance hit. If performance is a major concern for your application, please consider using the ZED SDK library.

MULTIPLE ZEDS

The ZED camera uses the maximum bandwidth provided by USB 3.0 to output video. When using multiple ZEDs, you may need to reduce camera framerate and resolution to avoid corrupted frames (green or purple frames). You can also use multiple GPUs to load-balance computations and improve performance.
