FFmpeg Source Code Analysis: av_seek_frame()


Function Prototype

av_seek_frame() is an FFmpeg function, declared in libavformat/avformat.h, that seeks to a given frame in a media file.

Its declaration is:

/**
 * Seek to the keyframe at timestamp.
 * 'timestamp' in 'stream_index'.
 *
 * @param s media file handle
 * @param stream_index If stream_index is (-1), a default
 * stream is selected, and timestamp is automatically converted
 * from AV_TIME_BASE units to the stream specific time_base.
 * @param timestamp Timestamp in AVStream.time_base units
 *        or, if no stream is specified, in AV_TIME_BASE units.
 * @param flags flags which select direction and seeking mode
 * @return >= 0 on success
 */
int av_seek_frame(AVFormatContext *s, int stream_index, 
					int64_t timestamp, int flags);

Parameters:

  1. AVFormatContext *s: the context of the opened media file.

  2. stream_index: the index of the stream to seek in. Which index is the video stream and which is the audio stream depends on the file and should be taken from the stream list; passing -1 selects a default stream, and the timestamp is then interpreted in AV_TIME_BASE units.

  3. timestamp: the target timestamp. In FFmpeg, timestamps are expressed in units of a time base (time_base); multiplying a timestamp by its time_base gives the actual time in seconds.

  4. flags: selects the seek direction and mode. The parameter has the following four flag values:

    #define AVSEEK_FLAG_BACKWARD 1 ///< seek backward
    #define AVSEEK_FLAG_BYTE     2 ///< seeking based on position in bytes
    #define AVSEEK_FLAG_ANY      4 ///< seek to any frame, even non-keyframes
    #define AVSEEK_FLAG_FRAME    8 ///< seeking based on frame number
    

    In detail:

    • AVSEEK_FLAG_BACKWARD: seek to the nearest keyframe at or before the requested timestamp.
    • AVSEEK_FLAG_BYTE: seek by byte position; with this flag the timestamp is interpreted as a byte offset into the file.
    • AVSEEK_FLAG_ANY: seek to any frame, not necessarily a keyframe; decoding from a non-keyframe may show corrupted frames until the next keyframe arrives.
    • AVSEEK_FLAG_FRAME: the timestamp is interpreted as a frame number rather than a time value (not supported by all demuxers).

    In addition, flags can be combined with bitwise OR, although not every combination is meaningful.

Return value: a value greater than or equal to 0 means the seek succeeded; a negative value means it failed.

Note that when seeking with av_seek_frame(), the stream information of the input format context must already have been read completely (e.g. via avformat_find_stream_info()); otherwise the seek may fail.

Usage Details

  • Seek baseline: the current playback position; during playback you therefore need to record the DTS or PTS of the audio/video streams in real time.
  • Seek time unit: when seeking, the time unit is the time_base of the stream identified by the stream index.
  • Seek offset: how far to seek forward or backward; from this offset and the current position you can compute the target time of the seek.
  • Seek carrier: whether to seek by the audio or the video stream. Seeking on one of the streams is sufficient; there is no need to seek the audio and video streams separately.
  • Post-seek flush: after seeking, and before resuming normal playback, flush the codec's internal buffers and also clear any audio/video packets cached in buffers you manage yourself.

Example: a Looping Player with Fast-Forward and Rewind

Reference: https://blog.youkuaiyun.com/zhaoyun_zzz/article/details/89406087

Providing unistd.h: https://www.cnblogs.com/lutaishi/p/13436265.html
Create a new unistd.h with the following content:

/** This file is part of the Mingw32 package.
 *  unistd.h maps     (roughly) to io.h
 */
#ifndef _UNISTD_H
#define _UNISTD_H
#include <io.h>
#include <process.h>
#endif /* _UNISTD_H */

Put it under the include directory of the Visual Studio VC folder; on my machine the path is C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\include.

Providing pthread.h: https://blog.youkuaiyun.com/Hexa_H/article/details/130691024

// Simplest FFmpeg Seek Player.cpp : defines the entry point for the console application.
//

#include "stdafx.h"
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <windows.h>
#define __STDC_CONSTANT_MACROS

#ifdef __cplusplus
extern "C"
{
#endif
#include <libavutil/time.h>
#include <libavutil/imgutils.h>
#include <libavutil/mathematics.h>
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavdevice/avdevice.h>
#include <libswscale/swscale.h>
#include <libswresample/swresample.h>
#include <SDL2/SDL.h>

#include <errno.h>

#include <unistd.h>
#include <assert.h>
#include <pthread.h>
#include <semaphore.h>

#ifdef __cplusplus
};
#endif

// Link errors:
// LNK2019 unresolved external symbol __imp__fprintf referenced in function _ShowError
// LNK2019 unresolved external symbol __imp____iob_func referenced in function _ShowError

// Workaround:
// The prebuilt libraries were built with an older compiler than the current one; rebuilding
// them with VS2017 would fix it, but their source is not available. In VS2015 the definitions
// of stdin/stdout/stderr changed, which is why the old __iob_func import fails, so we
// define our own __iob_func[] array as { *stdin, *stdout, *stderr }.
#pragma comment(lib,"legacy_stdio_definitions.lib")
extern "C"
{
	FILE __iob_func[3] = { *stdin, *stdout, *stderr };
}

#define MAX_AUDIO_FRAME_SIZE 192000 // 1 second of 48khz 32bit audio

#define PACKET_ARRAY_SIZE			(60)
typedef struct __PacketStruct
{
	AVPacket Packet;
	int64_t dts;
	int64_t pts;
	int state;
}PacketStruct;

typedef struct
{
	unsigned int rIndex;
	unsigned int wIndex;
	PacketStruct PacketArray[PACKET_ARRAY_SIZE];
}PacketArrayStruct;

typedef struct __AudioCtrlStruct
{
	AVFormatContext	*pFormatCtx;
	AVStream 		*pStream;
	AVCodec			*pCodec;
	AVCodecContext	*pCodecCtx;
	SwrContext 		*pConvertCtx;

	Uint8  	*audio_chunk;
	Sint32  audio_len;
	Uint8  	*audio_pos;
	int 	AudioIndex;
	int 	AudioCnt;
	uint64_t AudioOutChannelLayout;
	int out_nb_samples;				// nb_samples: AAC-1024 MP3-1152
	AVSampleFormat out_sample_fmt;
	int out_sample_rate;
	int out_channels;
	int out_buffer_size;
	unsigned char* pAudioOutBuffer;

	sem_t frame_put;
	sem_t frame_get;

	PacketArrayStruct 	Audio;
}AudioCtrlStruct;


typedef struct __VideoCtrlStruct
{
	AVFormatContext	*pFormatCtx;
	AVStream 		*pStream;
	AVCodec			*pCodec;
	AVCodecContext	*pCodecCtx;
	SwsContext 		*pConvertCtx;
	AVFrame			*pVideoFrame, *pFrameYUV;
	unsigned char 	*pVideoOutBuffer;
	int 			VideoIndex;
	int 			VideoCnt;
	int 			RefreshTime;
	int screen_w, screen_h;
	SDL_Window *screen;
	SDL_Renderer* sdlRenderer;
	SDL_Texture* sdlTexture;
	SDL_Rect sdlRect;
	SDL_Thread *video_tid;

	sem_t frame_put;
	sem_t video_refresh;
	PacketArrayStruct Video;
}VideoCtrlStruct;


//Refresh Event
#define SFM_REFRESH_VIDEO_EVENT  	(SDL_USEREVENT + 1)
#define SFM_REFRESH_AUDIO_EVENT  	(SDL_USEREVENT + 2)
#define SFM_BREAK_EVENT  			(SDL_USEREVENT + 3)

int thread_exit = 0;
int thread_pause = 0;
int audio_pause = 0;		// 1 = audio playback paused, 0 = playing
int video_pause = 0;		// 1 = video playback paused, 0 = playing
SDL_Keycode CurKeyCode;		// which seek key was pressed: right arrow = fast-forward, left arrow = rewind
int CurKeyProcess;			// whether the key-triggered seek has been handled: 0 = pending, 1 = handled
int64_t CurVideoDts;	// DTS of the video packet currently being played
int64_t CurVideoPts;	// PTS of the video packet currently being played

int64_t CurAudioDts;	// DTS of the audio packet currently being played
int64_t CurAudioPts;	// PTS of the audio packet currently being played

int64_t DstAudioDts;	// target audio DTS computed when seeking
int64_t DstAudioPts;	// target audio PTS computed when seeking
int64_t DstVideoDts;	// target video DTS computed when seeking
int64_t DstVideoPts;	// target video PTS computed when seeking

VideoCtrlStruct VideoCtrl;
AudioCtrlStruct AudioCtrl;
//video time_base.num:1, time_base.den:16, avg_frame_rate.num:8, avg_frame_rate.den:1
//audio time_base.num:1, time_base.den:48000, avg_frame_rate.num:0, avg_frame_rate.den:0
int IsPacketArrayFull(PacketArrayStruct* p)
{
	int i = 0;
	i = p->wIndex % PACKET_ARRAY_SIZE;
	if (p->PacketArray[i].state != 0) return 1;

	return 0;
}

int IsPacketArrayEmpty(PacketArrayStruct* p)
{
	int i = 0;
	i = p->rIndex % PACKET_ARRAY_SIZE;
	if (p->PacketArray[i].state == 0) return 1;

	return 0;
}

int PacketArrayClear(PacketArrayStruct* p)
{
	int i = 0;
	for (i = 0; i < PACKET_ARRAY_SIZE; i++)
	{
		if (p->PacketArray[i].state != 0)
		{
			av_packet_unref(&p->PacketArray[i].Packet);
			p->PacketArray[i].state = 0;
		}
	}
	p->rIndex = 0;
	p->wIndex = 0;

	return 0;
}

int SDL_event_thread(void *opaque)
{
	SDL_Event event;

	while (1)
	{
		SDL_WaitEvent(&event);
		if (event.type == SDL_KEYDOWN)
		{
			//Pause
			if (event.key.keysym.sym == SDLK_SPACE)
			{
				thread_pause = !thread_pause;
				printf("video got pause event!\n");
			}
			if (event.key.keysym.sym == SDLK_RIGHT)
			{
				thread_pause = !thread_pause;
				CurKeyProcess = 0;
				CurKeyCode = SDLK_RIGHT;
				printf("video got right key event!\n");
			}

			if (event.key.keysym.sym == SDLK_LEFT)
			{
				thread_pause = !thread_pause;
				CurKeyProcess = 0;
				CurKeyCode = SDLK_LEFT;
				printf("video got left key event!\n");
			}

		}
		else if (event.type == SDL_QUIT)
		{
			thread_exit = 1;
			printf("------------------------------>video got SDL_QUIT event!\n");
			break;
		}
		else if (event.type == SFM_BREAK_EVENT)
		{
			break;
		}
	}

	printf("---------> SDL_event_thread end !!!! \n");
	return 0;
}

int video_refresh_thread(void *opaque)
{
	while (1)
	{
		if (thread_exit) break;
		if (thread_pause)
		{
			SDL_Delay(40);
			continue;
		}
		//SDL_Delay(40);
		Sleep(VideoCtrl.RefreshTime / 1000);	// RefreshTime is in microseconds; Win32 Sleep() takes milliseconds
		sem_post(&VideoCtrl.video_refresh);
	}
	printf("---------> video_refresh_thread end !!!! \n");
	return 0;
}

static void *thread_audio(void *arg)
{
	AVCodecContext	*pAudioCodecCtx;
	AVFrame			*pAudioFrame;
	unsigned char 	*pAudioOutBuffer;
	AVPacket 		*Packet;
	int 			i, ret, GotAudioPicture;

	struct SwrContext *AudioConvertCtx;

	AudioCtrlStruct* AudioCtrl = (AudioCtrlStruct*)arg;

	pAudioCodecCtx = AudioCtrl->pCodecCtx;
	pAudioOutBuffer = AudioCtrl->pAudioOutBuffer;
	AudioConvertCtx = AudioCtrl->pConvertCtx;
	printf("---------> thread_audio start !!!! \n");
	pAudioFrame = av_frame_alloc();
	while (1)
	{
		if (thread_exit) break;
		if (thread_pause)
		{
			audio_pause = 1;
			Sleep(10);	// poll the pause flag every 10 ms (Win32 Sleep() takes milliseconds)
			continue;
		}

		if (IsPacketArrayEmpty(&AudioCtrl->Audio))
		{
			SDL_Delay(1);
			printf("---------> thread_audio empty !!!! \n");
			continue;
		}
		audio_pause = 0;
		i = AudioCtrl->Audio.rIndex;
		Packet = &AudioCtrl->Audio.PacketArray[i].Packet;
		CurAudioDts = AudioCtrl->Audio.PacketArray[i].dts;
		CurAudioPts = AudioCtrl->Audio.PacketArray[i].pts;
		if (Packet->stream_index == AudioCtrl->AudioIndex)
		{
			// ret = avcodec_decode_audio4(pAudioCodecCtx, pAudioFrame, &GotAudioPicture, Packet);
			ret = avcodec_send_packet(pAudioCodecCtx, Packet);
			if (ret < 0)
			{
				printf("Error in decoding audio frame.\n");
				return 0;
			}
			while (avcodec_receive_frame(pAudioCodecCtx, pAudioFrame) == 0)
			{
				swr_convert(AudioConvertCtx, &pAudioOutBuffer, MAX_AUDIO_FRAME_SIZE,
					(const uint8_t **)pAudioFrame->data, pAudioFrame->nb_samples);
				printf("Audio index:%5d\t pts:%lld\t packet size:%d, pFrame->nb_samples:%d\n",
					AudioCtrl->AudioCnt, (long long)Packet->pts, Packet->size, pAudioFrame->nb_samples);

				AudioCtrl->AudioCnt++;
			}

			while (AudioCtrl->audio_len > 0)//Wait until finish
				SDL_Delay(1);

			//Set audio buffer (PCM data)
			AudioCtrl->audio_chunk = (Uint8 *)pAudioOutBuffer;
			AudioCtrl->audio_pos = AudioCtrl->audio_chunk;
			AudioCtrl->audio_len = AudioCtrl->out_buffer_size;

			av_packet_unref(Packet);

			AudioCtrl->Audio.PacketArray[i].state = 0;
			i++;
			if (i >= PACKET_ARRAY_SIZE) i = 0;
			AudioCtrl->Audio.rIndex = i;
		}
	}

	printf("---------> thread_audio end !!!! \n");
	return 0;
}

static void *thread_video(void *arg)
{
	//AVFormatContext	*pFormatCtx;
	AVCodecContext	*pVideoCodecCtx;
	//AVCodec			*pVideoCodec;
	AVFrame			*pVideoFrame, *pFrameYUV;
	//unsigned char 	*pVideoOutBuffer;
	AVPacket 		*Packet;
	int 			i, ret, GotPicture;
	struct SwsContext *VideoConvertCtx;

	VideoCtrlStruct* VideoCtrl = (VideoCtrlStruct*)arg;

	pVideoCodecCtx = VideoCtrl->pCodecCtx;
	//pVideoOutBuffer = VideoCtrl->pVideoOutBuffer;
	VideoConvertCtx = VideoCtrl->pConvertCtx;
	pVideoFrame = VideoCtrl->pVideoFrame;
	pFrameYUV = VideoCtrl->pFrameYUV;
	printf("---------> thread_video start !!!! \n");
	while (1)
	{
		if (thread_exit) break;
		if (thread_pause)
		{
			video_pause = 1;
			Sleep(10);	// poll the pause flag every 10 ms (Win32 Sleep() takes milliseconds)
			continue;
		}

		if (IsPacketArrayEmpty(&VideoCtrl->Video))
		{
			SDL_Delay(1);
			continue;
		}
		video_pause = 0;
		i = VideoCtrl->Video.rIndex;
		Packet = &VideoCtrl->Video.PacketArray[i].Packet;
		CurVideoDts = VideoCtrl->Video.PacketArray[i].dts;
		CurVideoPts = VideoCtrl->Video.PacketArray[i].pts;
		if (Packet->stream_index == VideoCtrl->VideoIndex)
		{
			// ret = avcodec_decode_video2(pVideoCodecCtx, pVideoFrame, &GotPicture, Packet);
			ret = avcodec_send_packet(pVideoCodecCtx, Packet);
			if (ret != 0)
			{
				printf("Video Decode Error.\n");
				return 0;
			}
			GotPicture = avcodec_receive_frame(pVideoCodecCtx, pVideoFrame);
			printf("Video index:%5d\t dts:%lld\t pts:%lld\t packet size:%d, GotVideoPicture:%d\n",
				VideoCtrl->VideoCnt, (long long)Packet->dts, (long long)Packet->pts, Packet->size, GotPicture);
			//			printf("Video index:%5d\t pFrame->pkt_dts:%ld, pFrame->pkt_pts:%ld, pFrame->pts:%ld, pFrame->pict_type:%d, "
			//					"pFrame->best_effort_timestamp:%ld, pFrame->pkt_pos:%ld, pVideoFrame->pkt_duration:%ld\n",
			//					VideoCtrl->VideoCnt, pVideoFrame->pkt_dts, pVideoFrame->pkt_pts, pVideoFrame->pts,
			//					pVideoFrame->pict_type, pVideoFrame->best_effort_timestamp,
			//					pVideoFrame->pkt_pos, pVideoFrame->pkt_duration);
			VideoCtrl->VideoCnt++;
			if (GotPicture == 0)
			{
				sws_scale(VideoConvertCtx, (const unsigned char* const*)pVideoFrame->data,
					pVideoFrame->linesize, 0, pVideoCodecCtx->height, pFrameYUV->data, pFrameYUV->linesize);

				sem_wait(&VideoCtrl->video_refresh);
				//SDL---------------------------
				SDL_UpdateTexture(VideoCtrl->sdlTexture, NULL, pFrameYUV->data[0], pFrameYUV->linesize[0]);
				SDL_RenderClear(VideoCtrl->sdlRenderer);
				//SDL_RenderCopy( sdlRenderer, sdlTexture, &sdlRect, &sdlRect );
				SDL_RenderCopy(VideoCtrl->sdlRenderer, VideoCtrl->sdlTexture, NULL, NULL);
				SDL_RenderPresent(VideoCtrl->sdlRenderer);
				//SDL End-----------------------
			}

			av_packet_unref(Packet);
			VideoCtrl->Video.PacketArray[i].state = 0;
			i++;
			if (i >= PACKET_ARRAY_SIZE) i = 0;
			VideoCtrl->Video.rIndex = i;
		}
	}
	printf("---------> thread_video end !!!! \n");
	return 0;
}

/* The audio function callback takes the following parameters:
* stream: A pointer to the audio buffer to be filled
* len: The length (in bytes) of the audio buffer
*/
void fill_audio(void *udata, Uint8 *stream, int len)
{
	AudioCtrlStruct* AudioCtrl = (AudioCtrlStruct*)udata;
	//SDL 2.0
	SDL_memset(stream, 0, len);
	if (AudioCtrl->audio_len == 0) return;

	len = (len > AudioCtrl->audio_len ? AudioCtrl->audio_len : len);	/*  Mix  as  much  data  as  possible  */

	SDL_MixAudio(stream, AudioCtrl->audio_pos, len, SDL_MIX_MAXVOLUME);
	AudioCtrl->audio_pos += len;
	AudioCtrl->audio_len -= len;
}

AVFormatContext	*pFormatCtx;
AVCodecContext	*pVideoCodecCtx, *pAudioCodecCtx;
AVCodec			*pVideoCodec, *pAudioCodec;
AVCodecParameters *pVideoCodecParameters, *pAudioCodecParameters;
AVPacket		*Packet;
unsigned char 	*pVideoOutBuffer, *pAudioOutBuffer;

int 			ret;
unsigned int    i;
pthread_t 		audio_tid, video_tid;

uint64_t AudioOutChannelLayout;
//nb_samples: AAC-1024 MP3-1152
int out_nb_samples;
AVSampleFormat out_sample_fmt;
int out_sample_rate;
int out_channels;
//Out Buffer Size
int out_buffer_size;

//------------SDL----------------

struct SwsContext *VideoConvertCtx;
struct SwrContext *AudioConvertCtx;
int VideoIndex, VideoCnt;
int AudioIndex, AudioCnt;

int main(int argc, char* argv[])
{
	char *filepath = "cuc_ieschool.mp4";
	memset(&AudioCtrl, 0, sizeof(AudioCtrlStruct));
	memset(&VideoCtrl, 0, sizeof(VideoCtrlStruct));

	sem_init(&VideoCtrl.video_refresh, 0, 0);
	sem_init(&VideoCtrl.frame_put, 0, 0);
	sem_init(&AudioCtrl.frame_put, 0, 0);
	thread_exit = 0;
	thread_pause = 0;
	CurKeyProcess = 1;
	CurKeyCode = 0;
	// av_register_all();
	avformat_network_init();
	pFormatCtx = avformat_alloc_context();

	if (avformat_open_input(&pFormatCtx, filepath, NULL, NULL) != 0)
	{
		printf("Couldn't open input stream.\n");
		return -1;
	}
	if (avformat_find_stream_info(pFormatCtx, NULL) < 0)
	{
		printf("Couldn't find stream information.\n");
		return -1;
	}

	VideoIndex = -1;
	AudioIndex = -1;
	for (i = 0; i < pFormatCtx->nb_streams; i++)
	{
		if (pFormatCtx->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_VIDEO)
		{
			VideoIndex = i;
			// print the video stream's time base and frame rate
			printf("video time_base.num:%d, time_base.den:%d, avg_frame_rate.num:%d, avg_frame_rate.den:%d\n",
				pFormatCtx->streams[VideoIndex]->time_base.num,
				pFormatCtx->streams[VideoIndex]->time_base.den,
				pFormatCtx->streams[VideoIndex]->avg_frame_rate.num,
				pFormatCtx->streams[VideoIndex]->avg_frame_rate.den);
		}

		if (pFormatCtx->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_AUDIO)
		{
			AudioIndex = i;
			// print the audio stream's time base and frame rate
			//pFormatCtx->streams[AudioIndex]->time_base.den <<= 1;
			printf("audio time_base.num:%d, time_base.den:%d, avg_frame_rate.num:%d, avg_frame_rate.den:%d\n",
				pFormatCtx->streams[AudioIndex]->time_base.num,
				pFormatCtx->streams[AudioIndex]->time_base.den,
				pFormatCtx->streams[AudioIndex]->avg_frame_rate.num,
				pFormatCtx->streams[AudioIndex]->avg_frame_rate.den);
		}
	}

	if (VideoIndex != -1)
	{	
		// set up the video decoder context
		pVideoCodec = avcodec_find_decoder(pFormatCtx->streams[VideoIndex]->codecpar->codec_id);
		if (pVideoCodec == NULL)	// check before allocating a context from it
		{
			printf("Video Codec not found.\n");
			return -1;
		}
		pVideoCodecCtx = avcodec_alloc_context3(pVideoCodec);
		pVideoCodecParameters = pFormatCtx->streams[VideoIndex]->codecpar;
		avcodec_parameters_to_context(pVideoCodecCtx, pVideoCodecParameters);
		if (avcodec_open2(pVideoCodecCtx, pVideoCodec, NULL) < 0)
		{
			printf("Could not open video codec.\n");
			return -1;
		}

		// prepare video
		VideoCtrl.pVideoFrame = av_frame_alloc();
		VideoCtrl.pFrameYUV = av_frame_alloc();

		ret = av_image_get_buffer_size(AV_PIX_FMT_YUV420P, pVideoCodecCtx->width, pVideoCodecCtx->height, 1);
		pVideoOutBuffer = (unsigned char *)av_malloc(ret);
		av_image_fill_arrays(VideoCtrl.pFrameYUV->data, VideoCtrl.pFrameYUV->linesize, pVideoOutBuffer,
			AV_PIX_FMT_YUV420P, pVideoCodecCtx->width, pVideoCodecCtx->height, 1);

		VideoConvertCtx = sws_getContext(pVideoCodecCtx->width, pVideoCodecCtx->height, pVideoCodecCtx->pix_fmt,
			pVideoCodecCtx->width, pVideoCodecCtx->height,
			AV_PIX_FMT_YUV420P, SWS_BICUBIC, NULL, NULL, NULL);

		VideoCtrl.pFormatCtx = pFormatCtx;
		VideoCtrl.pStream = pFormatCtx->streams[VideoIndex];
		VideoCtrl.pCodec = pVideoCodec;
		// VideoCtrl.pCodecCtx = pFormatCtx->streams[VideoIndex]->codec;
		pVideoCodecParameters = pFormatCtx->streams[VideoIndex]->codecpar;
		AVCodecContext *pVideoCodecContext = avcodec_alloc_context3(NULL);
		avcodec_parameters_to_context(pVideoCodecContext, pVideoCodecParameters);
		VideoCtrl.pCodecCtx = pVideoCodecContext;
		VideoCtrl.pConvertCtx = VideoConvertCtx;
		VideoCtrl.pVideoOutBuffer = pVideoOutBuffer;
		VideoCtrl.VideoIndex = VideoIndex;

		if (pFormatCtx->streams[VideoIndex]->avg_frame_rate.num == 0 ||
			pFormatCtx->streams[VideoIndex]->avg_frame_rate.den == 0)
		{
			VideoCtrl.RefreshTime = 40000;	// default frame interval: 40000 microseconds (25 fps)
		}
		else
		{
			VideoCtrl.RefreshTime = 1000000 * pFormatCtx->streams[VideoIndex]->avg_frame_rate.den;
			VideoCtrl.RefreshTime /= pFormatCtx->streams[VideoIndex]->avg_frame_rate.num;
		}
		printf("VideoCtrl.RefreshTime:%d\n", VideoCtrl.RefreshTime);
	}
	else
	{
		printf("Didn't find a video stream.\n");
	}

	if (AudioIndex != -1)
	{	// set up the audio decoder context
		// pAudioCodecCtx = pFormatCtx->streams[AudioIndex]->codec;
		pAudioCodec = avcodec_find_decoder(pFormatCtx->streams[AudioIndex]->codecpar->codec_id);
		if (pAudioCodec == NULL)	// check before allocating a context from it
		{
			printf("Audio Codec not found.\n");
			return -1;
		}
		pAudioCodecCtx = avcodec_alloc_context3(pAudioCodec);
		pAudioCodecParameters = pFormatCtx->streams[AudioIndex]->codecpar;
		avcodec_parameters_to_context(pAudioCodecCtx, pAudioCodecParameters);
		if (avcodec_open2(pAudioCodecCtx, pAudioCodec, NULL) < 0)
		{
			printf("Could not open audio codec.\n");
			return -1;
		}
		// prepare Out Audio Param
		AudioOutChannelLayout = AV_CH_LAYOUT_STEREO;
		out_nb_samples = pAudioCodecCtx->frame_size * 2;	//nb_samples: AAC-1024 MP3-1152
		out_sample_fmt = AV_SAMPLE_FMT_S16;
		out_sample_rate = pAudioCodecCtx->sample_rate * 2;
		// The output rate must be derived from pAudioCodecCtx->sample_rate; using an unrelated
		// value under- or over-samples the audio and causes audible noise during playback.
		out_channels = av_get_channel_layout_nb_channels(AudioOutChannelLayout);
		out_buffer_size = av_samples_get_buffer_size(NULL, out_channels, out_nb_samples, out_sample_fmt, 1);

		//mp3:out_nb_samples:1152, out_channels:2, out_buffer_size:4608, pCodecCtx->channels:2
		//aac:out_nb_samples:1024, out_channels:2, out_buffer_size:4096, pCodecCtx->channels:2
		printf("out_nb_samples:%d, out_channels:%d, out_buffer_size:%d, pCodecCtx->channels:%d\n",
			out_nb_samples, out_channels, out_buffer_size, pAudioCodecCtx->channels);
		pAudioOutBuffer = (uint8_t *)av_malloc(MAX_AUDIO_FRAME_SIZE * 2);

		//FIX:Some Codec's Context Information is missing
		int64_t in_channel_layout = av_get_default_channel_layout(pAudioCodecCtx->channels);
		//Swr
		AudioConvertCtx = swr_alloc();
		AudioConvertCtx = swr_alloc_set_opts(AudioConvertCtx, AudioOutChannelLayout,
			out_sample_fmt, out_sample_rate,
			in_channel_layout, pAudioCodecCtx->sample_fmt,
			pAudioCodecCtx->sample_rate, 0, NULL);
		swr_init(AudioConvertCtx);

		AudioCtrl.pFormatCtx = pFormatCtx;
		AudioCtrl.pStream = pFormatCtx->streams[AudioIndex];
		AudioCtrl.pCodec = pAudioCodec;
		// AudioCtrl.pCodecCtx = pFormatCtx->streams[AudioIndex]->codec;
		pAudioCodecParameters = pFormatCtx->streams[AudioIndex]->codecpar;
		AVCodecContext *pAudioCodecContext = avcodec_alloc_context3(NULL);
		avcodec_parameters_to_context(pAudioCodecContext, pAudioCodecParameters);
		AudioCtrl.pCodecCtx = pAudioCodecContext;
		AudioCtrl.pConvertCtx = AudioConvertCtx;

		AudioCtrl.AudioOutChannelLayout = AudioOutChannelLayout;
		AudioCtrl.out_nb_samples = out_nb_samples;
		AudioCtrl.out_sample_fmt = out_sample_fmt;
		AudioCtrl.out_sample_rate = out_sample_rate;
		AudioCtrl.out_channels = out_channels;
		AudioCtrl.out_buffer_size = out_buffer_size;
		AudioCtrl.pAudioOutBuffer = pAudioOutBuffer;
		AudioCtrl.AudioIndex = AudioIndex;
	}
	else
	{
		printf("Didn't find a audio stream.\n");
	}

	//Output Info-----------------------------
	printf("---------------- File Information ---------------\n");
	av_dump_format(pFormatCtx, 0, filepath, 0);
	printf("-------------- File Information end -------------\n");

	if (SDL_Init(SDL_INIT_VIDEO | SDL_INIT_AUDIO | SDL_INIT_TIMER))
	{
		printf("Could not initialize SDL - %s\n", SDL_GetError());
		return -1;
	}

	if (VideoIndex != -1)
	{
		//SDL 2.0 Support for multiple windows
		//SDL_VideoSpec
		VideoCtrl.screen_w = pVideoCodecCtx->width;
		VideoCtrl.screen_h = pVideoCodecCtx->height;
		VideoCtrl.screen = SDL_CreateWindow("Simplest ffmpeg player's Window", SDL_WINDOWPOS_UNDEFINED,
			SDL_WINDOWPOS_UNDEFINED, VideoCtrl.screen_w, VideoCtrl.screen_h, SDL_WINDOW_OPENGL);

		if (!VideoCtrl.screen)
		{
			printf("SDL: could not create window - exiting:%s\n", SDL_GetError());
			return -1;
		}
		VideoCtrl.sdlRenderer = SDL_CreateRenderer(VideoCtrl.screen, -1, 0);
		//IYUV: Y + U + V  (3 planes)
		//YV12: Y + V + U  (3 planes)
		VideoCtrl.sdlTexture = SDL_CreateTexture(VideoCtrl.sdlRenderer, SDL_PIXELFORMAT_IYUV, SDL_TEXTUREACCESS_STREAMING,
			pVideoCodecCtx->width, pVideoCodecCtx->height);

		VideoCtrl.sdlRect.x = 0;
		VideoCtrl.sdlRect.y = 0;
		VideoCtrl.sdlRect.w = VideoCtrl.screen_w;
		VideoCtrl.sdlRect.h = VideoCtrl.screen_h;

		VideoCtrl.video_tid = SDL_CreateThread(video_refresh_thread, NULL, NULL);
		ret = pthread_create(&video_tid, NULL, thread_video, &VideoCtrl);
		if (ret)
		{
			printf("create thr_rvs video thread failed, error = %d \n", ret);
			return -1;
		}
	}

	if (AudioIndex != -1)
	{
		//SDL_AudioSpec
		SDL_AudioSpec AudioSpec;
		AudioSpec.freq = out_sample_rate;
		AudioSpec.format = AUDIO_S16SYS;
		AudioSpec.channels = out_channels;
		AudioSpec.silence = 0;
		AudioSpec.samples = out_nb_samples;
		AudioSpec.callback = fill_audio;
		AudioSpec.userdata = (void*)&AudioCtrl;

		if (SDL_OpenAudio(&AudioSpec, NULL) < 0)
		{
			printf("can't open audio.\n");
			return -1;
		}

		ret = pthread_create(&audio_tid, NULL, thread_audio, &AudioCtrl);
		if (ret)
		{
			printf("create thr_rvs video thread failed, error = %d \n", ret);
			return -1;
		}
		SDL_PauseAudio(0);
	}

	SDL_Thread *event_tid;
	event_tid = SDL_CreateThread(SDL_event_thread, NULL, NULL);

	VideoCnt = 0;
	AudioCnt = 0;
	Packet = av_packet_alloc();		// av_init_packet() is deprecated; av_packet_alloc() returns an initialized packet

	while (1)
	{
		if (thread_pause)
		{
			if ((CurKeyProcess == 0) && video_pause && audio_pause)
			{
				switch (CurKeyCode)
				{
				case SDLK_RIGHT:
					//DstAudioDts = CurAudioDts + (int64_t) (3 / av_q2d(AudioCtrl.pStream->time_base));
					// time_base.num is almost always 1, so instead of floating-point math we
					// can simply multiply the offset in seconds by time_base's denominator
					DstAudioDts = CurAudioDts + 3 * AudioCtrl.pStream->time_base.den;
					DstVideoDts = CurVideoDts + 3 * VideoCtrl.pStream->time_base.den;
					// DstAudioDts is in time_base units, so AVSEEK_FLAG_FRAME (which would make
					// av_seek_frame() interpret it as a frame number) is not passed here
					ret = av_seek_frame(pFormatCtx, AudioIndex, DstAudioDts, 0);
					//ret = av_seek_frame(pFormatCtx, VideoIndex, DstVideoDts, 0);
					printf("SDLK_RIGHT av_seek_frame ret = %d, CurAudioDts:%lld, CurVideoDts:%lld, DstVideoDts:%lld, DstAudioDts:%lld\n",
						ret, (long long)CurAudioDts, (long long)CurVideoDts, (long long)DstVideoDts, (long long)DstAudioDts);
					avcodec_flush_buffers(AudioCtrl.pCodecCtx);
					avcodec_flush_buffers(VideoCtrl.pCodecCtx);
					PacketArrayClear(&VideoCtrl.Video);
					PacketArrayClear(&AudioCtrl.Audio);
					break;
				case SDLK_LEFT:
					DstAudioDts = CurAudioDts - 3 * AudioCtrl.pStream->time_base.den;
					DstVideoDts = CurVideoDts - 3 * VideoCtrl.pStream->time_base.den;
					if (DstAudioDts < 0) DstAudioDts = 0;
					if (DstVideoDts < 0) DstVideoDts = 0;
					ret = av_seek_frame(pFormatCtx, AudioIndex, DstAudioDts, AVSEEK_FLAG_BACKWARD);
					//ret = av_seek_frame(pFormatCtx, VideoIndex, DstVideoDts, AVSEEK_FLAG_BACKWARD);
					printf("SDLK_LEFT av_seek_frame ret = %d, CurAudioDts:%lld, CurVideoDts:%lld, DstVideoDts:%lld, DstAudioDts:%lld\n",
						ret, (long long)CurAudioDts, (long long)CurVideoDts, (long long)DstVideoDts, (long long)DstAudioDts);
					avcodec_flush_buffers(AudioCtrl.pCodecCtx);
					avcodec_flush_buffers(VideoCtrl.pCodecCtx);
					PacketArrayClear(&VideoCtrl.Video);
					PacketArrayClear(&AudioCtrl.Audio);
					break;
				default:
					break;
				}
				CurKeyProcess = 1;
				thread_pause = !thread_pause;
			}
			Sleep(10);	// poll every 10 ms while paused (Win32 Sleep() takes milliseconds)
			continue;
		}
		if (av_read_frame(pFormatCtx, Packet) < 0)
		{
			//			thread_exit = 1;
			//			SDL_Event event;
			//			event.type = SFM_BREAK_EVENT;
			//			SDL_PushEvent(&event);
			//			printf("---------> av_read_frame < 0, thread_exit = 1  !!!\n");
			//			break;
			av_seek_frame(pFormatCtx, AudioIndex, 0, AVSEEK_FLAG_BACKWARD);	// end of file: loop back to the start
			continue;
		}
		if (Packet->stream_index == VideoIndex)
		{
			if (VideoCtrl.Video.wIndex >= PACKET_ARRAY_SIZE)
			{
				VideoCtrl.Video.wIndex = 0;
			}
			while (IsPacketArrayFull(&VideoCtrl.Video))
			{
				Sleep(5);	// wait for the decoder thread to drain a slot (Sleep() takes milliseconds)
				//printf("---------> VideoCtrl.Video.PacketArray FULL !!!\n");
			}
			i = VideoCtrl.Video.wIndex;
			VideoCtrl.Video.PacketArray[i].Packet = *Packet;
			VideoCtrl.Video.PacketArray[i].dts = Packet->dts;
			VideoCtrl.Video.PacketArray[i].pts = Packet->pts;
			VideoCtrl.Video.PacketArray[i].state = 1;
			VideoCtrl.Video.wIndex++;
			//printf("VideoCtrl packet put,dts:%ld, pts:%ld, VideoCnt:%d\n", Packet->dts, Packet->pts, VideoCnt++);
		}

		if (Packet->stream_index == AudioIndex)
		{
			if (AudioCtrl.Audio.wIndex >= PACKET_ARRAY_SIZE)
			{
				AudioCtrl.Audio.wIndex = 0;
			}
			while (IsPacketArrayFull(&AudioCtrl.Audio))
			{
				Sleep(5);	// wait for the decoder thread to drain a slot (Sleep() takes milliseconds)
				//printf("---------> AudioCtrl.Audio.PacketArray FULL !!!\n");
			}
			i = AudioCtrl.Audio.wIndex;
			AudioCtrl.Audio.PacketArray[i].Packet = *Packet;
			AudioCtrl.Audio.PacketArray[i].dts = Packet->dts;
			AudioCtrl.Audio.PacketArray[i].pts = Packet->pts;
			AudioCtrl.Audio.PacketArray[i].state = 1;
			AudioCtrl.Audio.wIndex++;
			//printf("AudioCtrl.frame_put, AudioCnt:%d\n", AudioCnt++);
		}

		if (thread_exit) break;
	}

	SDL_WaitThread(event_tid, NULL);
	SDL_WaitThread(VideoCtrl.video_tid, NULL);
	pthread_join(audio_tid, NULL);

	pthread_join(video_tid, NULL);
	SDL_CloseAudio();//Close SDL
	SDL_Quit();
	swr_free(&AudioConvertCtx);
	sws_freeContext(VideoConvertCtx);
	av_free(pVideoOutBuffer);
	avcodec_close(pVideoCodecCtx);
	av_free(pAudioOutBuffer);
	avcodec_close(pAudioCodecCtx);
	avformat_close_input(&pFormatCtx);

	return 0;
}

References

  1. https://blog.youkuaiyun.com/zhaoyun_zzz/article/details/89406087
  2. https://blog.youkuaiyun.com/logani/article/details/128378138
  3. https://www.cnblogs.com/yongdaimi/p/15796060.html
  4. https://wenku.youkuaiyun.com/answer/4cb7a683e83842d292ba68e38e62a748
  5. https://blog.youkuaiyun.com/u011686167/article/details/123029573
  6. https://blog.youkuaiyun.com/bolitongyue/article/details/130806031
  7. https://blog.youkuaiyun.com/e891377/article/details/126705854