Dobot Magician + RealSense D435i hand-eye calibration (extrinsics)

I wanted to use the lab's Dobot Magician to test my own algorithm. Before testing, I first had to determine the pose matrix of the D435i relative to the Dobot's base frame, and I figured I could write a program that spits out the pose matrix in one run. That idea led to several days of stepping into pits. There are some resources online, but they all broke in various ways when I tried to use them, so I pieced together something (I think) easier to understand from those materials. Below is a record of the pits and how I filled them, so that others can avoid them.
Part 1: Pipeline
1. Use the RealSense SDK (pyrealsense2) to grab image data.
2. Use OpenCV's cv2.aruco module, with an ArUco marker stuck on the Dobot's end effector, to obtain the marker's position in the camera frame.
3. Program the Dobot (via its API) to move through 8 calibration points.
4. Solve the resulting linear system for the transform by least squares (pseudo-inverse) to get the pose matrix; see the sketch right after this list.
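
To make step 4 concrete, here is a minimal sketch of the solve with random placeholder points (not measured data):

import numpy as np

# fit a 4x4 homogeneous transform T such that P_arm ≈ T @ P_cam,
# given N >= 4 non-coplanar point pairs (here N = 8, random placeholders)
cam_pts = np.random.rand(3, 8)            # camera-frame points, one per column (m)
arm_pts = np.random.rand(3, 8) * 100      # arm-frame points, one per column (mm)

cam_h = np.vstack([cam_pts, np.ones(8)])  # homogeneous 4x8
arm_h = np.vstack([arm_pts, np.ones(8)])

# least-squares solve of T @ cam_h = arm_h  =>  T = arm_h @ pinv(cam_h)
T = arm_h @ np.linalg.pinv(cam_h)
print(T)                                  # the 4x4 camera-to-arm pose matrix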

Part 2: Filling the pits
I found a good resource someone had already put together (https://www.it610.com/article/1175085231611392000.htm). It was written in Python, and since I wanted to save effort I decided to do it in Python as well.
Pit 1: the dynamic library in Dobot's Python API for Ubuntu cannot be found
I originally wanted to do everything under Ubuntu, but the dynamic library in Dobot's Python API for Ubuntu could not be found no matter what I tried (figure below, taken from a post on the Dobot forum).
[Figure: the library-not-found error, screenshot from a Dobot forum thread]

I tried the Linux libDobotDll.so from the dobotdll folder on Dobot's official site, but it did not work, possibly a version issue. The libDobotDll.so from the ROS demo did not work either, although it does run fine inside the ROS demo itself, so again probably a version mismatch. Digging through the Dobot forum, several threads report the same problem with no official reply (side rant: the forum gets almost no official replies at all); one user said he eventually e-mailed Dobot directly and got a working .so file. ROS also ships an aruco package, so doing the whole thing in ROS would probably have been straightforward, but that is hindsight; since I just wanted to reuse the existing Python resource, I did not try ROS. After every attempt failed, I simply switched to Windows 7, where this problem does not exist.
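
If you hit the same wall, loading the library directly with ctypes usually prints a more informative error than the API wrapper does; a quick debugging sketch (the path here is my assumption, adjust it to wherever your .so lives):

import ctypes

try:
    ctypes.CDLL("./libDobotDll.so")
except OSError as e:
    print(e)  # typically names the missing dependency or an architecture mismatch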
Pit 2: pyrealsense2 Python version problem
At first I ignored the version requirement and installed Python 3.8. After installing the pyrealsense2 package from PyCharm, the program crashed outright every time it read from the RealSense. After much back and forth, various posts said only Python 3.6 was supported, so I went back to 3.6, but it still crashed instantly, which was thoroughly frustrating. I then wondered whether the installed package itself was the problem, so I copied the Python bindings that the RealSense SDK installer had put on disk into Python 3.6's lib\site-packages, overwriting the package PyCharm had installed. Sure enough, the installed package was the culprit.
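
A sanity check worth running before blaming the interpreter: verify that pyrealsense2 imports cleanly and actually enumerates the camera (a minimal sketch):

import pyrealsense2 as rs

ctx = rs.context()
for dev in ctx.query_devices():
    print(dev.get_info(rs.camera_info.name),
          dev.get_info(rs.camera_info.serial_number))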
Pit 3: "illegal access" when sending commands to the Dobot
When running the Dobot program and sending it commands, I got an illegal-access error, which made me suspect the wrong port, yet the connect call in the program reported success. Testing things one by one, Dobot's official Python demo ran fine, but copying the demo code into a new file under my project still raised the illegal access. Bizarre. The only option left was to step through with the debugger, and it turned out that the DobotDllType.py the demo stepped into was not the same as the one in my project: mine had been found on GitHub (the resource I followed did not include this file), while the demo's ships with the official download. It was presumably a 32-bit vs. 64-bit difference. Copying the demo's file into my project finally let me send commands to the Dobot.
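
Since this smells like a 32-bit vs. 64-bit mismatch, it is worth checking your interpreter's bitness and picking the matching DobotDll / DobotDllType.py (a one-line sketch):

import struct

print(struct.calcsize("P") * 8, "bit")  # prints 32 or 64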
Pit 4: the original resource used two threads. I have no multithreading background, so I rewrote it as a single thread. The code is below; it has not been tidied up, it just gets the job done. The results:

Image_to_Arm
-------------------
Expected: [209, -140, 75]
Result: [ 206.92765176 -139.50130895   75.09675625]
Expected: [205, -92, 64]
Result: [206.15440382 -93.04199169  64.2760236 ]
Expected: [236, 24, 29]
Result: [235.21704529  24.25993536  29.03191212]
Expected: [169, 30, 53]
Result: [168.58617494  30.01540248  53.04496766]
Expected: [262, 41, 7]
Result: [260.72704713  40.56181579   7.35622151]
Expected: [221, 4, 67]
Result: [221.47604745   4.21964259  66.86120285]
Expected: [214, -29, 125]
Result: [215.01937538 -28.88518771 124.78165033]
Expected: [237, -38, -12]
Result: [238.89225425 -37.62830786 -12.44873431]
-------------------
Arm_to_Image
-------------------
Expected: [-0.11113452 -0.04092187  0.37600002]
Result: [-0.11193481 -0.03970785  0.37425096]
Expected: [-0.05280176 -0.03155927  0.377     ]
Result: [-0.05139119 -0.03194701  0.37806281]
Expected: [0.09206092 0.01579761 0.35600001]
Result: [0.09166938 0.0162513  0.35534533]
Expected: [ 0.10412183 -0.04093695  0.40200001]
Result: [ 0.10406784 -0.04067172  0.40165461]
Expected: [0.11099508 0.04827704 0.34300002]
Result: [0.11144238 0.04928318 0.34199833]
Expected: [ 0.06675044 -0.02382139  0.35000002]
Result: [ 0.06651353 -0.02420318  0.35037833]
Expected: [ 0.02369131 -0.07649451  0.32800001]
Result: [ 0.02362968 -0.07724047  0.32881026]
Expected: [0.01647647 0.05169865 0.38300002]
Result: [0.01616295 0.05027507 0.38449947]

You can see there is some error; it comes from the Dobot's mechanical tolerance, the ArUco detection, and how precisely the marker was stuck on.
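
To put numbers on it, here is a quick sketch using the first pair from the table above:

import numpy as np

expected = np.array([209, -140, 75])
result = np.array([206.92765176, -139.50130895, 75.09675625])
print("per-axis error (mm):", result - expected)
print("Euclidean error (mm):", np.linalg.norm(result - expected))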
The full program:

import time
import pyrealsense2 as rs
import numpy as np
import cv2
# if cv2.aruco is missing, install opencv-contrib-python (see the notes above)
import cv2.aruco as aruco
import DobotDllType as dType

# configure the camera streams and start the pipeline
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
profile = pipeline.start(config)
align_to = rs.stream.color
align = rs.align(align_to)

# get the rgb frame and the depth frame aligned to it
def get_aligned_images():
    frames = pipeline.wait_for_frames()
    aligned_frames = align.process(frames)
    aligned_depth_frame = aligned_frames.get_depth_frame()
    color_frame = aligned_frames.get_color_frame()

    # get the Intel RealSense intrinsics
    intr = color_frame.profile.as_video_stream_profile().intrinsics
    # intrinsic matrix as an ndarray, so OpenCV can use it directly
    intr_matrix = np.array([
        [intr.fx, 0, intr.ppx], [0, intr.fy, intr.ppy], [0, 0, 1]
    ])
    # depth image, 16-bit
    depth_image = np.asanyarray(aligned_depth_frame.get_data())
    # depth image, 8-bit (zero depth rendered as far/white)
    depth_image_8bit = cv2.convertScaleAbs(depth_image, alpha=0.03)
    pos = np.where(depth_image_8bit == 0)
    depth_image_8bit[pos] = 255
    # rgb image
    color_image = np.asanyarray(color_frame.get_data())
    # returns: rgb image, depth image, intrinsics, intrinsic matrix,
    # distortion coefficients (intr.coeffs), aligned depth frame
    return color_image, depth_image, intr, intr_matrix, np.array(intr.coeffs), aligned_depth_frame


def center_aruco():
    center, color_image = None, None
    rgb, depth, intr, intr_matrix, intr_coeffs, aligned_depth_frame = get_aligned_images()
    # get the dictionary: 5x5 markers, 50 ids -- set this to match the marker you printed
    aruco_dict = aruco.Dictionary_get(aruco.DICT_5X5_50)
    # create detector parameters
    parameters = aruco.DetectorParameters_create()
    # inputs: rgb image, aruco dictionary, camera intrinsics, distortion coefficients
    corners, ids, rejected_img_points = aruco.detectMarkers(rgb, aruco_dict, parameters=parameters,
                                                            cameraMatrix=intr_matrix, distCoeff=intr_coeffs)
    if len(corners) != 0:
        # marker center in pixels = mean of its four corners
        point = np.average(corners[0][0], axis=0)
        # get_distance expects integer pixel indices
        depth = aligned_depth_frame.get_distance(int(point[0]), int(point[1]))
        if depth != 0:
            # deproject pixel + depth to a 3D point in the camera frame, see:
            # https://github.com/IntelRealSense/librealsense/wiki/Projection-in-RealSense-SDK-2.0#point-coordinates
            x, y, z = rs.rs2_deproject_pixel_to_point(intr, [point[0], point[1]], depth)
            center = [x, y, z]
            color_image = aruco.drawDetectedMarkers(rgb, corners)
    else:
        print("aruco marker not found!")
    return center, color_image



def dobot_connect():
    # map connect status codes to readable strings
    CON_STR = {
        dType.DobotConnect.DobotConnect_NoError: "DobotConnect_NoError",
        dType.DobotConnect.DobotConnect_NotFound: "DobotConnect_NotFound",
        dType.DobotConnect.DobotConnect_Occupied: "DobotConnect_Occupied"}

    # load the Dobot DLL
    api = dType.load()

    # Search
    # print(dType.SearchDobot(api))
    # connect to the Dobot -- adjust the COM port to your machine
    state = dType.ConnectDobot(api, "COM3", 115200)[0]

    print("Connect status:", CON_STR[state])
    return api, state

def dobot_init(api, state):
    if state == dType.DobotConnect.DobotConnect_NoError:
        # clean the command queue and start executing it
        dType.SetQueuedCmdClear(api)
        dType.SetQueuedCmdStartExec(api)
        # async motion parameter setting (defaults used here)
        # dType.SetHOMEParams(api, 145, -20, 118, 0, isQueued = 1)
        # dType.SetPTPJointParams(api, 200, 200, 200, 200, 200, 200, 200, 200, isQueued = 1)
        # dType.SetPTPCommonParams(api, 100, 100, isQueued = 1)

        # async home
        # dType.SetHOMECmd(api, temp = 0, isQueued = 1)
        # move to an initial pose
        dType.SetPTPCmd(api, 1, 145, -20, 118, 0, isQueued=1)


if __name__ == "__main__":
    # eight calibration points in the arm's base frame (x, y, z, r)
    default_cali_points = [[209, -140, 75, 0], [205, -92, 64, 0],
                           [236, 24, 29, 0], [169, 30, 53, 0],
                           [262, 41, 7, 0], [221, 4, 67, 0],
                           [214, -29, 125, 0], [237, -38, -12, 0]]
    np_cali_points = np.array(default_cali_points)
    # 4xN homogeneous arm coordinates, one point per column
    arm_cord = np.column_stack(
        (np_cali_points[:, 0:3], np.ones(np_cali_points.shape[0]).T)).T
    centers = np.ones(arm_cord.shape)

    api, state = dobot_connect()
    dobot_init(api, state)

    if state == dType.DobotConnect.DobotConnect_NoError:
        for ind, pt in enumerate(default_cali_points):
            print("Current point:", pt)
            queuedCmdIndex = dType.SetPTPCmd(
                api, 1, pt[0], pt[1], pt[2], pt[3], isQueued=1)
            # wait until the queued move has executed, then let the arm settle
            while dType.GetQueuedCmdCurrentIndex(api) != queuedCmdIndex:
                time.sleep(2)
            time.sleep(3)
            center, color_image = center_aruco()
            if center is not None:
                centers[0:3, ind] = center
                print("centers is ", center)
            else:
                print("no aruco!")

    # least-squares fit: arm = T @ camera  =>  T = arm @ pinv(camera)
    image_to_arm = np.dot(arm_cord, np.linalg.pinv(centers))
    arm_to_image = np.linalg.pinv(image_to_arm)
    # dType.SetPTPCmd(api, 1, 217, 0, 154, 0, isQueued=0)
    dType.SetQueuedCmdStopExec(api)

    print("Finished")
    print("Image to arm transform:\n", image_to_arm)
    print("Arm to Image transform:\n", arm_to_image)
    print("Sanity Test:")

    print("-------------------")
    print("Image_to_Arm")
    print("-------------------")
    for ind, pt in enumerate(centers.T):
        print("Expected:", default_cali_points[ind][0:3])
        print("Result:", np.dot(image_to_arm, np.array(pt))[0:3])

    print("-------------------")
    print("Arm_to_Image")
    print("-------------------")
    for ind, pt in enumerate(default_cali_points):
        print("Expected:", centers.T[ind][0:3])
        pt[3] = 1  # homogeneous coordinate (replaces the r angle in the point list)
        print("Result:", np.dot(arm_to_image, np.array(pt))[0:3])

    cv2.destroyAllWindows()
    pipeline.stop()
    time.sleep(1)
    dType.DisconnectDobot(api)

Get DobotDllType.py yourself from the Python demo on Dobot's official site, and make sure the version (bitness) matches!

Grasping with the Dobot arm is achieved mainly by transforming between the arm's base coordinate frame and its tool coordinate frame. A vision-guided arm carries a camera for visual recognition; the camera can be mounted on the arm itself or somewhere in the surrounding environment. To pin down the positional relationship between camera and arm, a hand-eye calibration is performed, which yields the transformation matrix between the arm's coordinate frame and the camera's coordinate frame [1][2]. Grasping also involves the conversion between the camera coordinate frame and the image coordinate frame: the image frame's origin is the intersection of the camera's optical axis with the imaging plane, and the conversion between the two frames is again expressed as a transformation matrix [3]. The camera itself forms images by focusing light onto a sensor through its optics; the sensor converts the light into electrical signals, which are then processed and stored as the image data that drives the grasp.

References:
[1] Dobot magician机械臂抓取实战---前言. https://blog.youkuaiyun.com/weixin_64037619/article/details/130691848
[2][3] Dobot magician机械臂抓取实战---手眼标定(1). https://blog.youkuaiyun.com/weixin_64037619/article/details/131161932
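
As a footnote to the camera-to-image conversion described above, here is a pinhole-model sketch (the intrinsic values are made up for illustration, not the D435i's actual calibration):

import numpy as np

# intrinsic matrix K: fx, fy are focal lengths in pixels, (cx, cy) is the
# principal point where the optical axis meets the image plane
K = np.array([[615.0,   0.0, 320.0],
              [  0.0, 615.0, 240.0],
              [  0.0,   0.0,   1.0]])
p_cam = np.array([0.05, -0.03, 0.40])  # a point in the camera frame (meters)
uvw = K @ p_cam
u, v = uvw[:2] / uvw[2]                # its pixel coordinates on the image plane
print(u, v)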