Learning Raw-OS on the or1200 (Simulation Environment)

        Feels like I've just been coasting at work lately. What can you do... my own fault, I suppose, but still, the pay raises went around and somehow I wasn't on the list. How is a guy supposed to keep going like this?

        Anyway, enough of that, the more you talk about it the more it hurts. Starting with this post we get into learning the raw-os APIs, which will probably take about two months, because I have never touched an operating system before and there is a lot to catch up on. My degree was in communications engineering, so it was circuits all day: amplifiers, filters, PLLs, AGC, over and over. And now I've ended up in a software job. That's how resource allocation seems to work around here, but part of it is on me too, so no point just being bitter about it.

        In this post let's look at how to verify raw-os programs on the openrisc architecture. For the experiment environment we'll stick with the official ubuntu image, so we don't have to compile the dreaded cross-tools by hand, which is a huge pain to do manually.



        What the project provides officially is a vbox (VirtualBox) image, and pretty much everything in it is open source. Once you open it up it looks like this, mostly documentation inside. For setting up the environment, refer to the earlier post 《or1200软件环境搭建》 (on setting up the or1200 software environment). If all you want is to learn the kernel APIs that raw-os provides, you can skip building the SoC on the FPGA for now, which makes the learning go a lot faster.

        Personally though, I'm not a fan of doing this through the GUI. On linux the command line is king, so let's log in with linux in text mode, much like an ubuntu server install. Why the official image isn't based on the server version I don't know, probably to suit most people's taste.

        OK, so edit the grub file, as shown below.

        The command is: sudo gedit /etc/default/grub



        Then find these two lines:

        GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"

        GRUB_CMDLINE_LINUX=""


        and change them to:

        GRUB_CMDLINE_LINUX_DEFAULT="quiet splash text"

        GRUB_CMDLINE_LINUX="vga=789"


        As for the "vga=789" parameter, there is a VESA framebuffer mode table behind it; 789 selects 800x600 at 24-bit colour. It's only there to get a usable console resolution; google the mode table if you want other values.
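        For reference, the commonly quoted vga= codes come from that VESA mode table. The values below are from memory and are worth double-checking against the kernel framebuffer documentation for your own setup:

                     640x480   800x600   1024x768   1280x1024
         8-bit       769       771       773        775
        16-bit       785       788       791        794
        24-bit       786       789       792        795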

        OK, save the file, then regenerate the grub configuration.

        Command: sudo update-grub
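        For what it's worth, on ubuntu update-grub is just a thin wrapper around grub-mkconfig, so the step above amounts to roughly:

            sudo update-grub    # effectively: grub-mkconfig -o /boot/grub/grub.cfg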



        OK, at this point you can reboot the system.

        Command: sudo reboot


        When it comes back up you'll find you no longer land in the GUI; the screen should just be black. Activate a console from the keyboard.

        Command: any of Ctrl+Alt+F1 through F6 works. By default 6 tty consoles are started; here, use Ctrl+Alt+F2 to switch to tty2.

        Username and password are both: openrisc


        Once logged in you're in linux's plain command-line mode. Here we can verify the raw-os programs we write purely in software, without any hardware platform, and below we'll walk through one example of how to use this environment.

        As for whether txj (the Raw-OS author) will eventually pick the openrisc architecture on an FPGA platform as Raw-OS's next direction, either way I'll fully support Mr. Chen's grand plan. The example programs I write along the way will double as introductory material for learning raw-os, and of course I hope everyone will come and show some support.


        Enough talk, let's see how to use this environment.

        The tty environment is basically the same as a terminal inside the GUI. The one thing you lose is graphical copy and paste, which is annoying: a long command has to be typed out by hand. But that's fine, typing is the one skill a code monkey has; or you can put the long commands into a shell script (copied over from the windows side) and run them in one go, as sketched a couple of paragraphs down.

        Incidentally, this environment also forces us to use the shell and VIM. If you want to move to linux for real, it's best to make yourself use these tools.
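        As a minimal sketch of that idea, keep the whole build-and-run sequence in a script on the shared folder and just invoke it from the tty. The path and project directory below are placeholders, adjust them to your own setup:

            #!/bin/sh
            # run once from the tty instead of retyping the long commands
            # /mnt/share/raw-os-demo is a placeholder path, point it at your project
            cd /mnt/share/raw-os-demo || exit 1
            make clean
            make
            sim -f or1ksim.cfg boot.or32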


        Now for an example. This one is a timer demo; this week I'll be moving on to task synchronization and communication, starting with semaphores and then going deeper step by step.


        1. First, mount the shared folder from windows. The mount directory is whatever fits your own setup.
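        If you're using the VirtualBox shared-folder mechanism, mounting it from the tty looks roughly like this (the share name "share" and the mount point are assumptions, use your own):

            sudo mkdir -p /mnt/share
            sudo mount -t vboxsf share /mnt/share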



        2. Next, have a look at the directory layout of the raw-os project inside the shared folder.

        The earlier blog on porting raw-os already covered this directory structure, so I won't go over it again here.



        3. Then open the or1ksim.cfg file. This is the configuration file that or1ksim, the software simulator, uses to model the behaviour of an openrisc system on linux. or1ksim is already installed in the official image, so here I'll just describe how to use it.

        Find the uart section and modify it as follows, so the log output shows up in the tty console.
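        A sketch of what that section can look like. The key line is the channel setting, which routes the simulated UART to the simulator's own stdin/stdout so the output lands in your tty; the base address and irq below are just typical ORPSoC-style values, keep whatever your project already uses:

            section uart
              enabled  = 1
              baseaddr = 0x90000000
              irq      = 2
              channel  = "fd:0,1"
              jitter   = -1
            end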



        4. Then go back to the tty. The project is already written, so just run make. In the tty it builds noticeably faster than in a GUI terminal, which is nice.


        At the end it produces the boot.or32 file, which is the executable format or1ksim understands.


        5. Type the command sim -f or1ksim.cfg boot.or32 and hit enter, and you can watch the code run. Verifying code in software like this beats having to drag out a dev board every single time.
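        For completeness, the run step once more; if memory serves, Ctrl+C drops you into or1ksim's interactive prompt, from which quit ends the simulation (worth confirming against the or1ksim documentation):

            sim -f or1ksim.cfg boot.or32
            # Ctrl+C -> (sim) prompt, then type: quit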



        The reason for all this is that on the software side we're planning to put Raw-OS into a product, so right now I'm learning and validating Raw-OS. Since I'm responsible for all of the software, the choice of technology doesn't have to go through management review, hah. The part is a TI cortex-M4 microcontroller. TI is being pretty annoying though: the chip is supposedly in production yet the lead time is 18 months. Then again, that's fine, it gives me the chance to rewrite the software from scratch and run it on Raw-OS.

        I hope everyone keeps supporting the development of Raw-OS; I'll quietly keep doing my bit too.
