FIFO Generator v2.3

1. Introduction

The Xilinx FIFO Generator LogiCORE is a first-in, first-out memory queue for sequential storage and retrieval. For any FIFO requirement, the core provides an optimized solution that achieves maximum performance (up to 350 MHz) with minimum resources. Users can configure the core as needed through the Xilinx CORE Generator, including its width, depth, status flags, memory type, and write-to-read port aspect ratio.
This document describes the functionality of the FIFO Generator core, details its interface parameters, and defines its input and output signals. For a more detailed description of the core's design, refer to the FIFO Generator User Guide.
Key features:
1. FIFO depths up to 4,194,304 words;
2. FIFO data widths from 1 to 256 bits;
3. Asymmetric aspect ratios (write-to-read port ratios from 1:8 to 8:1);
4. Asynchronous or synchronous clock domains;
5. Selectable memory type (block RAM, distributed RAM, shift registers, or the Virtex-4 built-in FIFO);
6. First-word fall-through (FWFT);
7. Full and empty status flags, plus almost-full and almost-empty flags asserted one word before full/empty;
8. Programmable full and empty flags, set through user-defined constants or dedicated input ports;
9. Optional handshaking signals;
10. Fully configurable through the Xilinx CORE Generator system.

2. Applications

In digital design, FIFO behavior is frequently needed to manage data-movement tasks such as clock-domain crossing, low-latency buffering, and bus-width conversion. Figure 1 highlights one of the many configurations the FIFO Generator supports: a FIFO that spans two clock domains and whose write data bus is four times as wide as its read data bus. The FIFO Generator produces an optimized solution tailored to such specific requirements.
[Figure 1]
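The clock-domain-crossing role illustrated in Figure 1 can be sketched in software. This is an analogy only, not anything generated by the core: threads stand in for the two clock domains, and a bounded `queue.Queue` stands in for the dual-clock FIFO's safe handoff.

```python
import queue
import threading

# Producer and consumer run concurrently, like logic in two clock domains;
# the bounded queue provides the safe handoff that a dual-clock FIFO gives
# in hardware: put() blocks when full, get() blocks when empty.
fifo = queue.Queue(maxsize=4)
received = []

def producer():
    for word in range(8):
        fifo.put(word)               # stalls while the FIFO is full

def consumer():
    for _ in range(8):
        received.append(fifo.get())  # stalls while the FIFO is empty

writer = threading.Thread(target=producer)
reader = threading.Thread(target=consumer)
writer.start(); reader.start()
writer.join(); reader.join()
# Words cross the "domain boundary" in order and none are lost.
```

The bounded capacity is the essential point: back-pressure on the writer when full and on the reader when empty is exactly what the hardware full/empty flags provide.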

3. Functional Overview

Clock Implementations
The FIFO Generator can configure a FIFO with either independent clocks or a common clock for read and write operations. With independent clocks, the user can drive the read and write interfaces from separate clock domains; with a common clock, the generator produces a core optimized for buffering data within a single clock domain.

Virtex-4 Built-in FIFO
The FIFO Generator supports the Virtex-4 built-in FIFO, enabling large FIFOs by cascading the built-in primitives in both width and depth. The core also extends the capability of the built-in FIFO macro by generating optional status flags that the macro itself does not implement.

First-Word Fall-Through (FWFT)
First-word fall-through (FWFT) makes the next word in the FIFO available without a read operation: as data is written into the FIFO, the first word automatically appears on the output bus (DOUT). FWFT is useful for low-latency data access and for throttling the flow of read data. FWFT is supported in the independent-clock configurations built from block RAM and distributed RAM.
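The fall-through behavior can be illustrated with a small software model. This is a hypothetical sketch for intuition only; in hardware, FWFT is a property of the DOUT path, not a software queue.

```python
from collections import deque

class FwftFifo:
    """Sketch of first-word fall-through: the oldest word is presented
    on dout as soon as it is written, before any read is issued."""

    def __init__(self):
        self.mem = deque()

    def write(self, word):
        self.mem.append(word)

    @property
    def dout(self):
        # In standard mode, dout is not valid until after a read; in FWFT
        # mode, the first word "falls through" to the output automatically.
        return self.mem[0] if self.mem else None

    def read(self):
        # A read acknowledges the word already visible on dout and
        # advances the output to the next stored word.
        return self.mem.popleft() if self.mem else None
```

This is why FWFT suits low-latency use: a consumer can inspect `dout` and decide whether to accept the word before committing to the read.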

Memory Types
The FIFO Generator can build a FIFO from block RAM, distributed RAM, shift registers, or the Virtex-4 built-in FIFO, and the generated cores are backward compatible. The table below lists the applicable configurations:
[Table 1]

Asymmetric Aspect Ratios
The generated FIFO can read and write through ports of different widths, converting the data width automatically. Supported aspect ratios range from 1:8 to 8:1. This capability is available for block RAM FIFOs configured with independent read and write clocks.
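The automatic width conversion can be sketched as follows, using the 4:1 write-to-read ratio from the Figure 1 example (32-bit writes, 8-bit reads). The byte ordering shown (most-significant byte read first) is an assumption of this sketch, not taken from the data sheet.

```python
class WidthConvertingFifo:
    """Sketch of an asymmetric-aspect-ratio FIFO: each wide write word
    is automatically split into several narrow read words."""

    def __init__(self, wr_width=32, rd_width=8):
        assert wr_width % rd_width == 0, "ratio must be a whole number"
        self.ratio = wr_width // rd_width   # e.g. 4 for a 4:1 ratio
        self.rd_width = rd_width
        self.mem = []

    def write(self, word):
        # Split one wide write word into `ratio` narrow read words,
        # most-significant slice first (an assumed ordering).
        mask = (1 << self.rd_width) - 1
        for i in reversed(range(self.ratio)):
            self.mem.append((word >> (i * self.rd_width)) & mask)

    def read(self):
        return self.mem.pop(0)
```

A ratio below 1:1 (narrow writes, wide reads) would work the same way in reverse, accumulating several write words into one read word.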

FIFO Generator Configurations
Table 2 summarizes the supported memory and clock configurations.
[Table 2]
Independent Clocks: Block RAM and Distributed RAM
This implementation lets the user select block RAM or distributed RAM while supporting independent clock domains for read and write access; read operations are synchronous to the read clock and write operations to the write clock. Configurable options include asymmetric aspect ratios (different read and write port widths), status flags (full, almost full, empty, and almost empty), and programmable full and empty flags with user-defined thresholds. Read and write data counts, each relative to its own clock domain, indicate the number of words stored in the FIFO. In addition, optional handshaking and error flags are available (write acknowledge, overflow, valid, and underflow).
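The flag and handshake semantics described above can be modeled in software. This is an illustrative sketch with assumed threshold semantics (programmable full asserts at or above its threshold, programmable empty at or below its threshold), not the generated RTL.

```python
from collections import deque

class BehavioralFifo:
    """Software model of a FIFO with full/empty, programmable
    threshold flags, and handshake/error outcomes."""

    def __init__(self, depth, prog_full_thresh, prog_empty_thresh):
        self.depth = depth
        self.prog_full_thresh = prog_full_thresh
        self.prog_empty_thresh = prog_empty_thresh
        self.mem = deque()

    @property
    def full(self):
        return len(self.mem) == self.depth

    @property
    def empty(self):
        return len(self.mem) == 0

    @property
    def prog_full(self):
        # Assumed semantics: asserted once occupancy reaches the threshold.
        return len(self.mem) >= self.prog_full_thresh

    @property
    def prog_empty(self):
        # Assumed semantics: asserted while occupancy is at or below it.
        return len(self.mem) <= self.prog_empty_thresh

    def write(self, word):
        """True = write acknowledged; False = overflow (write attempted
        while full, data discarded)."""
        if self.full:
            return False
        self.mem.append(word)
        return True

    def read(self):
        """Returns (valid, word); valid is False on underflow."""
        if self.empty:
            return False, None
        return True, self.mem.popleft()
```

The data count of the hardware core corresponds to `len(self.mem)` here; in the real core there are two counts, each synchronized to its own clock domain.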

Independent Clocks: Virtex-4 Built-in FIFO
Available in the Virtex-4 architecture, this implementation lets the user select the built-in FIFO, with read and write operations synchronized to their respective clocks. Configurable options include status flags (full and empty) and programmable full and empty flags with user-defined thresholds. In addition, optional handshaking and error flags are available (write acknowledge, overflow, valid, and underflow).

Common Clock: Block RAM, Distributed RAM, Shift Register
In this implementation the user can select block RAM, distributed RAM, or shift registers, with read and write access synchronous to a common clock. Configurable options include status flags (full, almost full, empty, and almost empty) and programmable full and empty flags with user-defined thresholds. Optional handshaking and error flags are available (write acknowledge, overflow, valid, and underflow), along with a data count that reports the number of words stored in the FIFO.

Common Clock: Virtex-4 Built-in FIFO
In the Virtex-4 architecture, this implementation lets the user select the built-in FIFO with read and write access synchronous to a common clock. Configurable options include status flags (full and empty) and programmable full and empty flags with user-defined thresholds. Optional handshaking and error flags are available (write acknowledge, overflow, valid, and underflow).

FIFO Generator Options
Table 3 summarizes the FIFO Generator options supported for each clock configuration and memory type. For more detailed information, refer to the FIFO Generator User Guide.
[Table 3]

FIFO Interface
The following sections define the FIFO interface. Figure 2 illustrates the standard and optional signals of a FIFO core with independent read and write clocks.
[Figure 2]

Interface Signals: FIFOs with Independent Clocks

Table 4 defines the RST signal, which resets the entire core logic (both the read and write clock domains). RST is an asynchronous input that is synchronized internally by the core; the user must reset the core before operating it. For details, refer to the FIFO Generator User Guide.
[Table 4]

Table 5 defines the write interface for FIFOs with independent clocks. The write-interface signals are grouped into required and optional signals, and all of them are synchronous to WR_CLK.
[Table 5]

Table 6 defines the read interface for FIFOs with independent clocks. The read-interface signals are grouped into required and optional signals, and all of them are synchronous to RD_CLK.
[Table 6]

Interface Signals: FIFOs with a Common Clock
Table 7 defines the interface for FIFOs with a common read/write clock, covering both standard and optional signals; all signals except reset are synchronous to the common CLK. For more detailed information, refer to the FIFO Generator User Guide.
[Table 7]

Resource Utilization and Performance
A FIFO's performance and resource utilization depend on the configuration and options selected when the core is customized. The tables below report the resources required by maximum-performance FIFO configurations. Table 8 covers configurations with no optional features; results were benchmarked on Virtex-II (2v3000-5) and Virtex-4 (4vlx15-11) devices.
[Table 8]

Table 9 covers FIFO configurations with programmable-threshold inputs, benchmarked on the same Virtex-II (2v3000-5) and Virtex-4 (4vlx15-11) devices.
[Table 9]

Table 10 lists results for FIFOs configured as Virtex-4 built-in FIFOs with no optional features.
[Table 10]
