Deploying VINS-Fusion with a RealSense D455 on the Jetson Orin NX

1. Preparation

1.1 Installing ROS Noetic

Jetson boards usually ship with ROS Noetic already set up for Ubuntu 20.04, so all that is needed is to start roscore and run the built-in turtlesim demo to verify the installation.
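A minimal verification with the standard ROS commands (turtlesim ships with a desktop ROS Noetic install) might look like this:

```shell
# Terminal 1: start the ROS master
roscore

# Terminal 2: the turtlesim window should pop up
rosrun turtlesim turtlesim_node

# Terminal 3: drive the turtle with the arrow keys
rosrun turtlesim turtle_teleop_key
```

If the turtle window appears and responds to the keyboard, the ROS installation is working.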

1.2 Preparing the D455

First, confirm that realsense-viewer displays the camera streams normally.

Then bring up the D455 with the realsense-ros package, and use rviz to display the published topics and view the images.
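As a sketch, the bring-up and checks might look like the following (topic names depend on your launch options):

```shell
# in the realsense-ros workspace
source devel/setup.bash
roslaunch realsense2_camera rs_camera.launch

# in a second terminal: confirm the camera topics are being published
rostopic list | grep /camera
rostopic hz /camera/color/image_raw

# visualize: add an Image display in rviz subscribed to /camera/color/image_raw
rosrun rviz rviz
```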

1.3 Calibration

Use the Kalibr toolbox to calibrate the intrinsics and extrinsics of the D455's stereo pair, as well as the extrinsics between the stereo cameras and the built-in IMU. The IMU's bias random walk and Gaussian white noise also need to be calibrated.
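As a rough sketch (the bag names, target file, and topic names below are placeholders, not the exact ones I used; see the linked post for the full procedure), the two Kalibr calibrations are invoked like this:

```shell
# 1) stereo intrinsics + stereo extrinsics, from a bag of the AprilGrid target
kalibr_calibrate_cameras --bag stereo_calib.bag \
    --topics /camera/infra1/image_rect_raw /camera/infra2/image_rect_raw \
    --models pinhole-radtan pinhole-radtan \
    --target april_6x6.yaml

# 2) camera-IMU extrinsics, reusing the camchain produced by step 1
kalibr_calibrate_imu_camera --bag imu_calib.bag \
    --cam stereo_calib-camchain.yaml \
    --imu imu.yaml \
    --target april_6x6.yaml
```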

The calibration procedure is described in another post of mine:

Ubuntu20.04下IMU内参+d455内外参+IMU与d455外参标定 (CSDN blog)

After calibration, it is best to collect all the results in one folder so they are easy to find.

1.4 Installing the required libraries

These include OpenCV, Ceres, g2o, Eigen3, and so on.

For installing Ceres 1.14.0, see:

Ubuntu20.04安装Ceres1.14.0 (CSDN blog)

For installing g2o, see:

Ubuntu环境下安装g2o (CSDN blog)

g2o的安装及初步使用 (CSDN blog)

The second post includes a CSDN-hosted mirror of the GitHub repository, which helps when GitHub is unreachable without a proxy.

2. Deploying VINS-Fusion

2.1 Preparation

In the directory containing rs_camera.launch, create a new rs_vins_fusion.launch, copy the contents of rs_camera.launch into it, and adjust the parameters. The modified file is:

<launch>
  <arg name="serial_no"           default=""/>
  <arg name="usb_port_id"         default=""/>
  <arg name="device_type"         default=""/>
  <arg name="json_file_path"      default=""/>
  <arg name="camera"              default="camera"/>
  <arg name="tf_prefix"           default="$(arg camera)"/>
  <arg name="external_manager"    default="false"/>
  <arg name="manager"             default="realsense2_camera_manager"/>
  <arg name="output"              default="screen"/>
  <arg name="respawn"              default="false"/>

  <arg name="fisheye_width"       default="-1"/>
  <arg name="fisheye_height"      default="-1"/>
  <arg name="enable_fisheye"      default="false"/>

  <arg name="depth_width"         default="848"/>
  <arg name="depth_height"        default="480"/>
  <arg name="enable_depth"        default="true"/>

  <arg name="confidence_width"    default="-1"/>
  <arg name="confidence_height"   default="-1"/>
  <arg name="enable_confidence"   default="true"/>
  <arg name="confidence_fps"      default="-1"/>

  <arg name="infra_width"         default="848"/>
  <arg name="infra_height"        default="480"/>
  <arg name="enable_infra"        default="true"/>
  <arg name="enable_infra1"       default="true"/>
  <arg name="enable_infra2"       default="true"/>
  <arg name="infra_rgb"           default="false"/>

  <arg name="color_width"         default="848"/>
  <arg name="color_height"        default="480"/>
  <arg name="enable_color"        default="true"/>

  <arg name="fisheye_fps"         default="-1"/>
  <arg name="depth_fps"           default="30"/>
  <arg name="infra_fps"           default="30"/>
  <arg name="color_fps"           default="30"/>
  <arg name="gyro_fps"            default="200"/>
  <arg name="accel_fps"           default="200"/>
  <arg name="enable_gyro"         default="true"/>
  <arg name="enable_accel"        default="true"/>

  <arg name="enable_pointcloud"         default="false"/>
  <arg name="pointcloud_texture_stream" default="RS2_STREAM_COLOR"/>
  <arg name="pointcloud_texture_index"  default="0"/>
  <arg name="allow_no_texture_points"   default="false"/>
  <arg name="ordered_pc"                default="false"/>

  <arg name="enable_sync"               default="true"/>
  <arg name="align_depth"               default="false"/>

  <arg name="publish_tf"                default="true"/>
  <arg name="tf_publish_rate"           default="0"/>

  <arg name="filters"                   default=""/>
  <arg name="clip_distance"             default="-2"/>
  <arg name="linear_accel_cov"          default="0.01"/>
  <arg name="initial_reset"             default="false"/>
  <arg name="reconnect_timeout"         default="6.0"/>
  <arg name="wait_for_device_timeout"   default="-1.0"/>
  <arg name="unite_imu_method"          default="linear_interpolation"/>
  <arg name="topic_odom_in"             default="odom_in"/>
  <arg name="calib_odom_file"           default=""/>
  <arg name="publish_odom_tf"           default="true"/>

  <arg name="stereo_module/exposure/1"  default="7500"/>
  <arg name="stereo_module/gain/1"      default="16"/>
  <arg name="stereo_module/exposure/2"  default="1"/>
  <arg name="stereo_module/gain/2"      default="16"/>
  
  

  <group ns="$(arg camera)">
    <include file="$(find realsense2_camera)/launch/includes/nodelet.launch.xml">
      <arg name="tf_prefix"                value="$(arg tf_prefix)"/>
      <arg name="external_manager"         value="$(arg external_manager)"/>
      <arg name="manager"                  value="$(arg manager)"/>
      <arg name="output"                   value="$(arg output)"/>
      <arg name="respawn"                  value="$(arg respawn)"/>
      <arg name="serial_no"                value="$(arg serial_no)"/>
      <arg name="usb_port_id"              value="$(arg usb_port_id)"/>
      <arg name="device_type"              value="$(arg device_type)"/>
      <arg name="json_file_path"           value="$(arg json_file_path)"/>

      <arg name="enable_pointcloud"        value="$(arg enable_pointcloud)"/>
      <arg name="pointcloud_texture_stream" value="$(arg pointcloud_texture_stream)"/>
      <arg name="pointcloud_texture_index"  value="$(arg pointcloud_texture_index)"/>
      <arg name="enable_sync"              value="$(arg enable_sync)"/>
      <arg name="align_depth"              value="$(arg align_depth)"/>

      <arg name="fisheye_width"            value="$(arg fisheye_width)"/>
      <arg name="fisheye_height"           value="$(arg fisheye_height)"/>
      <arg name="enable_fisheye"           value="$(arg enable_fisheye)"/>

      <arg name="depth_width"              value="$(arg depth_width)"/>
      <arg name="depth_height"             value="$(arg depth_height)"/>
      <arg name="enable_depth"             value="$(arg enable_depth)"/>

      <arg name="confidence_width"         value="$(arg confidence_width)"/>
      <arg name="confidence_height"        value="$(arg confidence_height)"/>
      <arg name="enable_confidence"        value="$(arg enable_confidence)"/>
      <arg name="confidence_fps"           value="$(arg confidence_fps)"/>

      <arg name="color_width"              value="$(arg color_width)"/>
      <arg name="color_height"             value="$(arg color_height)"/>
      <arg name="enable_color"             value="$(arg enable_color)"/>

      <arg name="infra_width"              value="$(arg infra_width)"/>
      <arg name="infra_height"             value="$(arg infra_height)"/>
      <arg name="enable_infra"             value="$(arg enable_infra)"/>
      <arg name="enable_infra1"            value="$(arg enable_infra1)"/>
      <arg name="enable_infra2"            value="$(arg enable_infra2)"/>
      <arg name="infra_rgb"                value="$(arg infra_rgb)"/>

      <arg name="fisheye_fps"              value="$(arg fisheye_fps)"/>
      <arg name="depth_fps"                value="$(arg depth_fps)"/>
      <arg name="infra_fps"                value="$(arg infra_fps)"/>
      <arg name="color_fps"                value="$(arg color_fps)"/>
      <arg name="gyro_fps"                 value="$(arg gyro_fps)"/>
      <arg name="accel_fps"                value="$(arg accel_fps)"/>
      <arg name="enable_gyro"              value="$(arg enable_gyro)"/>
      <arg name="enable_accel"             value="$(arg enable_accel)"/>

      <arg name="publish_tf"               value="$(arg publish_tf)"/>
      <arg name="tf_publish_rate"          value="$(arg tf_publish_rate)"/>

      <arg name="filters"                  value="$(arg filters)"/>
      <arg name="clip_distance"            value="$(arg clip_distance)"/>
      <arg name="linear_accel_cov"         value="$(arg linear_accel_cov)"/>
      <arg name="initial_reset"            value="$(arg initial_reset)"/>
      <arg name="reconnect_timeout"        value="$(arg reconnect_timeout)"/>
      <arg name="wait_for_device_timeout"  value="$(arg wait_for_device_timeout)"/>
      <arg name="unite_imu_method"         value="$(arg unite_imu_method)"/>
      <arg name="topic_odom_in"            value="$(arg topic_odom_in)"/>
      <arg name="calib_odom_file"          value="$(arg calib_odom_file)"/>
      <arg name="publish_odom_tf"          value="$(arg publish_odom_tf)"/>
      <arg name="stereo_module/exposure/1" value="$(arg stereo_module/exposure/1)"/>
      <arg name="stereo_module/gain/1"     value="$(arg stereo_module/gain/1)"/>
      <arg name="stereo_module/exposure/2" value="$(arg stereo_module/exposure/2)"/>
      <arg name="stereo_module/gain/2"     value="$(arg stereo_module/gain/2)"/>

      <arg name="allow_no_texture_points"  value="$(arg allow_no_texture_points)"/>
      <arg name="ordered_pc"               value="$(arg ordered_pc)"/>
      
    </include>
  </group>
</launch>

Next, edit the intrinsics of the left and right cameras, the extrinsics between them, the IMU intrinsics, and the camera-IMU extrinsics. These live in the config files under /home/jetson/vins_ws/src/vins-fusion-master/config/realsense-d455. I renamed the original realsense-d435i folder to realsense-d455 to match our camera.

Start with realsense_stereo_imu_config.yaml. When editing the extrinsic matrices, change only the numbers; do not alter the matrix layout in any way, or parsing will fail later! At this point I had not yet modified the extrinsics, because it was not clear whether the calibration output and the matrix required here are inverses of each other.

Note: these three files can only be modified after section 2.2 has been completed! Useful references:

Realsence D455标定并运行Vins-Fusion (CSDN blog)
RealSense D455的标定并运行VINS-FUSION (CSDN blog)

D455+VINS-Fusion+surfelmapping 稠密建图(一) (CSDN blog)

My imu_stereo_640-camchain-imucam.yaml is as follows:

cam0:
  T_cam_imu:
  - [0.9998851348340306, -0.0021921047276981247, -0.014997060205074563, 0.036842277585409956]
  - [0.002177439333697627, 0.9999971352144796, -0.0009941432398879824, -0.007225974700776494]
  - [0.01499919650780981, 0.0009613738586793749, 0.9998870435526322, 0.006851651766817668]
  - [0.0, 0.0, 0.0, 1.0]
  camera_model: pinhole
  distortion_coeffs: [0.03895290441617228, 0.0003692506119585448, -0.0013914282494255352, -0.006030565804081987]
  distortion_model: radtan
  intrinsics: [460.7062328758583, 455.4640464436609, 422.6278853333841, 240.8869376879919]
  resolution: [848, 480]
  rostopic: /infra_left
  timeshift_cam_imu: -0.0029392271760237025
cam1:
  T_cam_imu:
  - [0.9999252623958573, -0.0013857955755283706, 0.01214698288461059, -0.05711704780261238]
  - [0.00139169505633966, 0.9999989177150971, -0.0004772352721477469, -0.007042647147316498]
  - [-0.012146308387585767, 0.0004941045007568492, 0.9999261087966932, 0.009396884003985403]
  - [0.0, 0.0, 0.0, 1.0]
  T_cn_cnm1:
  - [0.9996312745900379, 0.0007794091506576187, 0.027142354040736682, -0.09412607867133031]
  - [-0.0007934100300518533, 0.9999995577044534, 0.0005050657604811013, 0.00020909485529962252]
  - [-0.027141948382918956, -0.0005264145458366489, 0.9996314508486142, 0.0035439247459540207]
  - [0.0, 0.0, 0.0, 1.0]
  camera_model: pinhole
  distortion_coeffs: [0.03074930453668952, 0.021929389608515797, -0.0011136804576574138, -0.009039816962162986]
  distortion_model: radtan
  intrinsics: [460.59214846502533, 457.1370164914071, 410.1124036762654, 240.57108314961997]
  resolution: [848, 480]
  rostopic: /infra_right
  timeshift_cam_imu: -0.0030586273364197098

Note: the rotation in T_cam_imu above is the IMU-to-camera transform, while VINS expects the camera-to-IMU (body) transform, so the matrix must be inverted. For a homogeneous transform [R|t] the inverse is [Rᵀ|−Rᵀt], with the bottom row still 0, 0, 0, 1; since R here is almost the identity, this amounts to transposing the upper-left 3×3 block and flipping the sign of the translation, which is what the config below does.
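The inversion is easy to check numerically. A small sketch with numpy, using cam0's T_cam_imu from the camchain file above:

```python
import numpy as np

# T_cam_imu for cam0, copied from the Kalibr camchain file above
T_cam_imu = np.array([
    [0.9998851348340306, -0.0021921047276981247, -0.014997060205074563, 0.036842277585409956],
    [0.002177439333697627, 0.9999971352144796, -0.0009941432398879824, -0.007225974700776494],
    [0.01499919650780981, 0.0009613738586793749, 0.9998870435526322, 0.006851651766817668],
    [0.0, 0.0, 0.0, 1.0],
])

# Inverse of a homogeneous transform: R -> R^T, t -> -R^T t
R, t = T_cam_imu[:3, :3], T_cam_imu[:3, 3]
T_imu_cam = np.eye(4)
T_imu_cam[:3, :3] = R.T
T_imu_cam[:3, 3] = -R.T @ t

print(T_imu_cam)  # this is what goes into body_T_cam0
print(np.allclose(T_imu_cam @ T_cam_imu, np.eye(4)))  # True
```

Because R is so close to the identity here, simply negating the translation is an excellent approximation, which is why body_T_cam0 below looks like a sign-flipped copy of the matrix above; the exact translation is −Rᵀt.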

Alternatively, the inverse can be read directly from imu_stereo_640-results-imucam.txt, though the matrices there are printed with less precision.

The modified yaml file:

%YAML:1.0

#common parameters
#support: 1 imu 1 cam; 1 imu 2 cam; 2 cam;
imu: 1         
num_of_cam: 2  

imu_topic: "/camera/imu"
image0_topic: "/camera/infra1/image_rect_raw"
image1_topic: "/camera/infra2/image_rect_raw"
output_path: "/home/zj/output/"

cam0_calib: "left.yaml"
cam1_calib: "right.yaml"
image_width: 848
image_height: 480
   

# Extrinsic parameter between IMU and Camera.
estimate_extrinsic: 0   # 0  Have an accurate extrinsic parameters. We will trust the following imu^R_cam, imu^T_cam, don't change it.
                        # 1  Have an initial guess about extrinsic parameters. We will optimize around your initial guess.
#transform from camera frame to IMU (body) frame
body_T_cam0: !!opencv-matrix
   rows: 4
   cols: 4
   dt: d
   data: [0.9998851348340306, 0.002177439333697627, 0.01499919650780981, -0.036842277585409956,
       -0.0021921047276981247, 0.9999971352144796, 0.0009613738586793749, 0.007225974700776494,
       -0.014997060205074563, -0.0009941432398879824, 0.9998870435526322, -0.006851651766817668,
        0.0, 0.0, 0.0, 1.0]


body_T_cam1: !!opencv-matrix
   rows: 4
   cols: 4
   dt: d
   data: [0.9999252623958573, 0.00139169505633966,  -0.012146308387585767, 0.05711704780261238,
       -0.0013857955755283706, 0.9999989177150971, 0.0004941045007568492, 0.007042647147316498,
       0.01214698288461059, -0.0004772352721477469, 0.9999261087966932, -0.009396884003985403,
        0.0, 0.0, 0.0, 1.0]


#Multiple thread support
multiple_thread: 1

#feature tracker parameters
max_cnt: 150            # max feature number in feature tracking
min_dist: 30            # min distance between two features 
freq: 10                # frequency (Hz) of the published tracking result. At least 10 Hz for good estimation. If set to 0, the frequency will match the raw image
F_threshold: 1.0        # ransac threshold (pixel)
show_track: 1           # publish tracking image as topic
flow_back: 1            # perform forward and backward optical flow to improve feature tracking accuracy

#optimization parameters
max_solver_time: 0.04  # max solver iteration time (s), to guarantee real time
max_num_iterations: 8   # max solver iterations, to guarantee real time
keyframe_parallax: 10.0 # keyframe selection threshold (pixel)

#imu parameters       The more accurate parameters you provide, the better performance
acc_n: 1.2016768615517539e-02        # accelerometer measurement noise standard deviation. #0.2   0.04
gyr_n: 2.9607246176650919e-03      # gyroscope measurement noise standard deviation.     #0.05  0.004
acc_w: 2.9698273035189197e-04      # accelerometer bias random walk noise standard deviation.  #0.002
gyr_w: 3.1833671764598549e-05    # gyroscope bias random walk noise standard deviation.     #4.0e-5
g_norm: 9.78921469         # gravity magnitude

#unsynchronization parameters
estimate_td: 1                      # online estimate time offset between camera and imu
td: -0.004528525521911893           # initial value of time offset. unit: s. raw image clock + td = real image clock (IMU clock)

#loop closure parameters
load_previous_pose_graph: 0        # load and reuse previous pose graph; load from 'pose_graph_save_path'
pose_graph_save_path: "/home/zj/output/pose_graph/" # save and load path
save_image: 0                   # save images in the pose graph for visualization purposes; set to 0 to disable

Then left.yaml:

%YAML:1.0
---
model_type: PINHOLE
camera_name: camera
image_width: 848
image_height: 480
distortion_parameters:
   k1: 0.03895290441617228
   k2: 0.0003692506119585448
   p1: -0.0013914282494255352
   p2: -0.006030565804081987
projection_parameters:
   fx: 460.7062328758583
   fy: 455.4640464436609
   cx: 422.6278853333841
   cy: 240.8869376879919

Finally, right.yaml:

%YAML:1.0
---
model_type: PINHOLE
camera_name: camera
image_width: 848
image_height: 480
distortion_parameters:
   k1: 0.03074930453668952
   k2: 0.021929389608515797
   p1: -0.0011136804576574138
   p2: -0.009039816962162986
projection_parameters:
   fx: 460.59214846502533
   fy: 457.1370164914071
   cx: 410.1124036762654
   cy: 240.57108314961997

2.2 Building and deploying the algorithm

Repository link:

mirrors / hkust-aerial-robotics / vins-fusion · GitCode

Download the master branch directly from the site.

Create a workspace and put the extracted package directly under its src directory:

mkdir -p vins_ws/src
cd vins_ws

After fetching the package, build in the workspace root:

catkin_make

The build hits many problems, mostly around OpenCV versions; work through them one at a time. Useful references:

ubuntu安装opencv及vinsmono (CSDN blog)

在Ubuntu20.04运行VINS-Fusion_euroc_mono.yaml 段错误 (CSDN blog)

Ubuntu20.04运行Vins-fusion (CSDN blog)

Problem 1: add the following headers to Chessboard.h, located at /home/jetson/VINS_ws/src/vins-fusion-master/camera_models/include/camodocal/chessboard:

#include <opencv2/imgproc/types_c.h>
#include <opencv2/calib3d/calib3d_c.h>

Likewise, add these headers to CameraCalibration.h at /home/jetson/VINS_ws/src/vins-fusion-master/camera_models/include/camodocal/calib:

#include <opencv2/imgproc/types_c.h>
#include <opencv2/imgproc/imgproc_c.h>

Problem 2: some names were renamed in the OpenCV 4 API; just update them in the offending files.

Replace the CV_FONT_HERSHEY_SIMPLEX constant with cv::FONT_HERSHEY_SIMPLEX wherever it is reported.

Another error:

error: ‘CV_RGB2GRAY’ was not declared in this scope
   53 |       cv::cvtColor(image, aux, CV_RGB2GRAY);
      |                                ^~~~~~~~~~~

Fix: add #include <opencv2/imgproc/types_c.h> to the offending file.

Another error:

error: ‘CV_LOAD_IMAGE_GRAYSCALE’ was not declared in this scope
  125 |    imLeft = cv::imread(leftImagePath,  CV_LOAD_IMAGE_GRAYSCALE );
      |                                        ^~~~~~~~~~~~~~~~~~~~~~~

Fix:

In OpenCV 4, CV_LOAD_IMAGE_GRAYSCALE no longer exists; checking the OpenCV API shows it was renamed to cv::IMREAD_GRAYSCALE, so substitute that.

One more class of errors comes from cv_bridge; fixing the two OpenCV paths inside cv_bridgeConfig.cmake resolves it.

ROS Ubuntu20.04多版本opencv运行及bug解决 (CSDN blog)

The build now succeeds.

Launch commands:

Terminal 1, in the realsense-ros workspace, launches the configured stereo camera + IMU:

source ./devel/setup.bash
roslaunch realsense2_camera rs_vins_fusion.launch

Terminal 2, in the vins_ws workspace, starts rviz:

source ./devel/setup.bash
roslaunch vins vins_rviz.launch

Terminal 3, also in the vins_ws workspace, runs the estimator node:

source ./devel/setup.bash
rosrun vins vins_node  /home/jetson/vins_ws/src/vins-fusion-master/config/realsense_d455/realsense_stereo_imu_config.yaml

Problem: as soon as the camera launch file runs, the vins_node terminal aborts with:

terminate called after throwing an instance of 'cv::Exception'
  what():  OpenCV(4.5.4) /home/ubuntu/build_opencv/opencv/modules/core/src/matrix.cpp:250: error: (-215:Assertion failed) s >= 0 in function 'setSize'

Aborted (core dumped)

Fix: this looked like an OpenCV problem, so I tried switching to a different OpenCV version.

For switching OpenCV versions, see another post of mine:

Ubuntu20.04下OpenCV4.2.0安装以及多版本切换 (CSDN blog)

After switching to OpenCV 4.2.0, edit the CMakeLists.txt of the three VINS packages: add the line below and change find_package(OpenCV REQUIRED) accordingly:

set(OpenCV_DIR /usr/local/opencv420/lib/cmake/opencv4)
find_package(OpenCV 4.2.0 REQUIRED)

Note: it is best to make the same change in the CMakeLists of the realsense_ws workspace as well, otherwise running the camera later can cause problems. The modification is identical.

3. Results

It finally runs successfully.
