After receiving a captured frame (vsFrame->pFrame) with avcodec_receive_frame(vsFrame->pCodecCtx, vsFrame->pFrame), convert it to the NDI frame struct NDIlib_video_frame_v2_t and send it with NDIlib_send_send_video_v2. The process is as follows:
AVFrame <--> NDIlib_video_frame_v2_t
1 Before sending, create an NDIlib_video_frame_v2_t struct
NDIlib_video_frame_v2_t m_NDIVideoFrame;
m_NDIVideoFrame.xres = w;
m_NDIVideoFrame.yres = h;
m_NDIVideoFrame.FourCC = NDIlib_FourCC_type_BGRA;
m_NDIVideoFrame.line_stride_in_bytes = w * 4;   // tightly packed BGRA rows, 4 bytes per pixel
m_NDIVideoFrame.p_data = (uint8_t*)malloc(w * h * 4);
2 Screen capture: the receive call inside the capture loop (FFmpeg screen-capture code)
avcodec_receive_frame(vsFrame->pCodecCtx, vsFrame->pFrame)
3 Convert the captured frame and copy it into .p_data of the NDIlib_video_frame_v2_t struct
// Allocate this temporary AVFrame outside the capture loop and reuse it per frame.
AVFrame *pRGBA = av_frame_alloc();
// align = 1 gives tightly packed rows (linesize[0] == xres * 4), so one memcpy into p_data is safe.
// The pixel format must match the NDI FourCC chosen above (BGRA here).
av_image_alloc(pRGBA->data, pRGBA->linesize, m_NDIVideoFrame.xres, m_NDIVideoFrame.yres, AV_PIX_FMT_BGRA, 1);
// Source format is whatever the decoder actually outputs; destination is BGRA to match the FourCC.
struct SwsContext *img_convert_ctx = sws_getContext(m_NDIVideoFrame.xres, m_NDIVideoFrame.yres, vsFrame->pCodecCtx->pix_fmt,
    m_NDIVideoFrame.xres, m_NDIVideoFrame.yres, AV_PIX_FMT_BGRA, SWS_BICUBIC, NULL, NULL, NULL);
sws_scale(img_convert_ctx, (const uint8_t* const*)vsFrame->pFrame->data, vsFrame->pFrame->linesize, 0, m_NDIVideoFrame.yres,
    pRGBA->data, pRGBA->linesize);
int len = m_NDIVideoFrame.xres * m_NDIVideoFrame.yres * 4;
memcpy_s(m_NDIVideoFrame.p_data, len, pRGBA->data[0], len);
NDIlib_send_send_video_v2(m_NDISend, &m_NDIVideoFrame);
// Cleanup: run these after the capture loop ends, not once per frame.
av_freep(&pRGBA->data[0]);
av_frame_free(&pRGBA);
sws_freeContext(img_convert_ctx);
There are two reasons a conversion is needed instead of copying the decoder's buffer directly:
(1) avcodec_receive_frame() pads each row for alignment (64-byte alignment observed in my tests on a 64-bit platform), so pFrame->linesize[0] can be larger than xres * 4.
(2) NDIlib_send_send_video_v2() reads exactly xres * yres * 4 bytes from NDIlib_video_frame_v2_t.p_data (4 is the number of bytes per BGRA pixel), truncating anything beyond that, so the rows handed to it must be tightly packed.