<3> Filter Streams & Data Streams & Streams in Memory

This article takes a detailed look at Java's filter streams, data streams, and in-memory streams: how filter streams preprocess data, how data streams read and write the various data types, and how to use SequenceInputStream, ByteArrayInputStream, and the piped streams.
[b]6. Filter Streams[/b]
Filter input streams read data from a [b]pre-existing[/b] input stream (such as a FileInputStream) and can [b]preprocess[/b] the data before passing it on to the client program. Filter output streams write data to a pre-existing output stream (such as a FileOutputStream) and can transform the data [b]before[/b] it reaches the underlying stream. [color=red]Multiple filters can be stacked on top of a single underlying stream.[/color] Filter streams can be used for encryption, compression, buffering, and more.

The word "filter" comes from water filters. A water filter sits between the pipe and the water source; [b]a stream filter sits between the source of the data and its ultimate destination[/b] and can apply a specific algorithm to the data as it passes through.

[b]6.1 The Filter Stream Classes[/b]
java.io.FilterInputStream and java.io.FilterOutputStream are the superclasses of the concrete filter stream implementations.
public class FilterInputStream extends InputStream 
public class FilterOutputStream extends OutputStream

Each constructor specifies the underlying stream from which the filter stream reads or to which it writes data. Note that FilterInputStream's constructor is [b]protected[/b], while FilterOutputStream's is public:
protected FilterInputStream(InputStream in)
public FilterOutputStream(OutputStream out)
The underlying stream is stored in a protected field:
protected InputStream in
protected OutputStream out
[b][color=red]Because FilterInputStream's constructor is protected, FilterInputStream objects can only be created through its subclasses (the filter stream classes are an application of the Decorator pattern).[/color][/b]
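For example (a minimal sketch, assuming a file named data.bin that begins with a 4-byte int), filters are stacked simply by passing one stream to another's constructor, which is exactly how the Decorator pattern composes behavior:

import java.io.*;

public class FilterStacking
{
    public static void main(String[] args) throws IOException
    {
        // DataInputStream -> BufferedInputStream -> FileInputStream:
        // two filters stacked on a single underlying stream.
        try (DataInputStream dis = new DataInputStream(
                new BufferedInputStream(new FileInputStream("data.bin"))))
        {
            int value = dis.readInt(); // reads 4 bytes through both filters
            System.out.println(value);
        }
    }
}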

FilterOutputStream's close() method is implemented essentially as follows (the real implementation also calls flush() before closing):
public void close() throws IOException
{
    out.close();
}

[color=red]Closing a filter stream therefore also closes the underlying stream. This holds for both filter input and filter output streams, no matter how many layers of filters are nested.[/color]
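A short sketch of this behavior (the file name is arbitrary): closing only the outermost filter is enough, because each close() delegates to the stream it wraps.

import java.io.*;

public class ClosePropagation
{
    public static void main(String[] args) throws IOException
    {
        FileOutputStream fos = new FileOutputStream("out.bin");
        BufferedOutputStream bos = new BufferedOutputStream(fos);
        DataOutputStream dos = new DataOutputStream(bos);

        dos.writeInt(42);
        dos.close(); // closes dos, which closes bos (flushing it), which closes fos
    }
}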

[b]6.2 The Filter Stream Subclasses[/b]
The java.io package contains many useful filter stream classes. BufferedInputStream and BufferedOutputStream buffer the data they read and write, so the application does not have to hit the underlying stream's native methods on every read or write. In many cases this improves performance.

The java.io.PrintStream class (System.out and System.err are PrintStreams) makes it easy to print primitive values, objects, and strings. It converts characters to bytes using the platform's default character encoding. A PrintStream can also be used as a filter over another output stream; layered on a FileOutputStream, for example, it provides a simple way to write text to a file.
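For instance (a minimal sketch; the file name is arbitrary):

import java.io.*;

public class PrintToFile
{
    public static void main(String[] args) throws IOException
    {
        // PrintStream layered as a filter over a FileOutputStream
        try (PrintStream ps = new PrintStream(new FileOutputStream("log.txt")))
        {
            ps.println("hello"); // strings
            ps.println(3.14);    // primitive values, converted using the default charset
        }
    }
}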

The DataInputStream and DataOutputStream classes read and write Java's primitive data types in a machine-independent way (big-endian byte order for the integer types, IEEE 754 for floats and doubles, and a modified UTF-8 for Unicode strings).

[b]6.3 Buffered Streams[/b]
A buffered input stream reads more data than is immediately needed into a buffer ([b]an internal byte array[/b]). When the stream's read() method is called, the data comes from the buffer rather than the underlying stream; only when the buffered data runs out does the buffered stream refill the buffer with the next block from the underlying stream. Similarly, a buffered output stream accumulates data in an internal byte array, [b]and writes it to the underlying output stream in one go only when the buffer fills up or flush() is called explicitly.[/b]

BufferedInputStream and BufferedOutputStream each have two constructors:
public BufferedInputStream(InputStream in) 
public BufferedInputStream(InputStream in, int size)
public BufferedOutputStream(OutputStream out)
public BufferedOutputStream(OutputStream out, int size)

The first argument is the actual underlying stream; the second argument, size, is the buffer size in bytes. [b]If no size is given, an implementation-dependent default is used (2048 bytes in the JDKs this text was written against; modern JDKs default to 8192).[/b] The best buffer size depends on the platform and is usually related to the disk's block size. Less than 512 bytes is probably too small, and more than 4096 may be too much. For unreliable network connections, a smaller buffer tends to work better:
URL u = new URL("http://java.developer.com"); 
BufferedInputStream bis = new BufferedInputStream(u.openStream(), 256);
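As a concrete illustration (a sketch; the file names are placeholders), buffering turns a byte-by-byte copy loop into block reads and writes against the underlying streams:

import java.io.*;

public class BufferedCopy
{
    public static void main(String[] args) throws IOException
    {
        try (BufferedInputStream in =
                 new BufferedInputStream(new FileInputStream("src.bin"), 4096);
             BufferedOutputStream out =
                 new BufferedOutputStream(new FileOutputStream("dst.bin"), 4096))
        {
            int b;
            while ((b = in.read()) != -1) // served from the 4096-byte buffer
            {
                out.write(b);             // accumulated in the output buffer
            }
        } // close() flushes the output buffer and closes the underlying streams
    }
}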


[b]7. Data Streams[/b]
Data streams read and write strings, integers, floats, and other data types. The java.io.DataInputStream and java.io.DataOutputStream classes read and write the primitive data types (boolean, int, double, and so on) in a platform-independent way.

[b]7.1 The Data Stream Classes[/b]
java.io.DataInputStream and java.io.DataOutputStream are subclasses of FilterInputStream and FilterOutputStream, respectively:
public class DataInputStream extends FilterInputStream implements DataInput 
public class DataOutputStream extends FilterOutputStream implements DataOutput


[b]7.1.1 The DataInput and DataOutput Interfaces [/b]
The java.io.DataInput interface declares 15 methods for reading various kinds of data:
public abstract boolean readBoolean() throws IOException 
public abstract byte readByte() throws IOException
public abstract int readUnsignedByte() throws IOException
public abstract short readShort() throws IOException
public abstract int readUnsignedShort() throws IOException
public abstract char readChar() throws IOException
public abstract int readInt() throws IOException
public abstract long readLong() throws IOException
public abstract float readFloat() throws IOException
public abstract double readDouble() throws IOException
public abstract String readLine() throws IOException
public abstract String readUTF() throws IOException
public abstract void readFully(byte[] data) throws IOException
public abstract void readFully(byte[] data, int offset, int length) throws IOException
public abstract int skipBytes(int n) throws IOException


Similarly, the java.io.DataOutput interface declares 14 methods, most of which mirror those of DataInput:
public abstract void write(int b) throws IOException 
public abstract void write(byte[] data) throws IOException
public abstract void write(byte[] data, int offset, int length) throws IOException
public abstract void writeBoolean(boolean v) throws IOException
public abstract void writeByte(int b) throws IOException
public abstract void writeShort(int s) throws IOException
public abstract void writeChar(int c) throws IOException
public abstract void writeInt(int i) throws IOException
public abstract void writeLong(long l) throws IOException
public abstract void writeFloat(float f) throws IOException
public abstract void writeDouble(double d) throws IOException
public abstract void writeBytes(String s) throws IOException
public abstract void writeChars(String s) throws IOException
public abstract void writeUTF(String s) throws IOException


Note that the writeBytes() and writeChars() methods have no matching readBytes() or readChars() methods in DataInput. The correspondence between the DataOutput and DataInput methods is as follows:

DataOutput method                     Corresponding DataInput method
write(int b)                          readByte() / readUnsignedByte()
write(byte[] data)                    readFully(byte[] data)
write(byte[] data, int off, int len)  readFully(byte[] data, int off, int len)
writeBoolean(boolean v)               readBoolean()
writeByte(int b)                      readByte() / readUnsignedByte()
writeShort(int s)                     readShort() / readUnsignedShort()
writeChar(int c)                      readChar()
writeInt(int i)                       readInt()
writeLong(long l)                     readLong()
writeFloat(float f)                   readFloat()
writeDouble(double d)                 readDouble()
writeBytes(String s)                  (no counterpart)
writeChars(String s)                  (no counterpart)
writeUTF(String s)                    readUTF()

[b]7.1.2 Constructors[/b]
The DataInputStream and DataOutputStream constructors are:
public DataInputStream(InputStream in) 
public DataOutputStream(OutputStream out)

Usage example:
DataInputStream dis = new DataInputStream(new FileInputStream("data.txt")); 
DataOutputStream dos = new DataOutputStream(new FileOutputStream("output.dat"));
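A round-trip sketch (the file name is arbitrary): primitives written with DataOutputStream must be read back with DataInputStream in exactly the same order.

import java.io.*;

public class DataRoundTrip
{
    public static void main(String[] args) throws IOException
    {
        try (DataOutputStream dos = new DataOutputStream(new FileOutputStream("output.dat")))
        {
            dos.writeInt(7);
            dos.writeDouble(Math.PI);
            dos.writeUTF("caf\u00e9"); // length-prefixed, modified UTF-8
        }
        try (DataInputStream dis = new DataInputStream(new FileInputStream("output.dat")))
        {
            int i = dis.readInt();       // same order as written
            double d = dis.readDouble();
            String s = dis.readUTF();
            System.out.println(i + " " + d + " " + s);
        }
    }
}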


[b]8. Streams in Memory[/b]
The input and output discussed so far flows between a Java program and the outside world. This chapter adds three more variations on the theme. [b]Sequence input streams[/b] chain several input streams together so that they appear to be a single stream.
[b]Byte array streams[/b] store data in byte arrays and read data back from byte arrays.
Finally, [b]piped input and output streams[/b] let the output of one thread become the input of another thread.

[b]8.1 Sequence Input Streams[/b]
The java.io.SequenceInputStream class connects multiple input streams together in a particular order:
public class SequenceInputStream extends InputStream
Reading from a SequenceInputStream returns all the data from the first stream, then all the data from the second, and so on through the last stream. The class has two constructors:
public SequenceInputStream(Enumeration<? extends InputStream> e)
public SequenceInputStream(InputStream in1, InputStream in2)


Example:
try
{
    URL u1 = new URL("http://java.sun.com/");
    URL u2 = new URL("http://www.altavista.com");
    SequenceInputStream sin = new SequenceInputStream(u1.openStream(),
                                                      u2.openStream());
    // read from sin as if the two responses were a single stream...
}
catch (IOException e)
{
    // handle the error
}
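To chain more than two streams, the Enumeration-based constructor can be used (a sketch; the file names are placeholders):

import java.io.*;
import java.util.*;

public class SequenceDemo
{
    public static void main(String[] args) throws IOException
    {
        Vector<InputStream> streams = new Vector<>();
        streams.add(new FileInputStream("part1.bin"));
        streams.add(new FileInputStream("part2.bin"));
        streams.add(new FileInputStream("part3.bin"));

        // Reads part1, then part2, then part3, as if they were one stream.
        try (SequenceInputStream sin = new SequenceInputStream(streams.elements()))
        {
            int b;
            while ((b = sin.read()) != -1)
            {
                System.out.write(b);
            }
            System.out.flush();
        }
    }
}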


[b]8.2 Byte Array Streams[/b]
It is sometimes convenient to use stream methods on data held in a byte array. For example, you might receive an array of raw bytes that needs to be interpreted as doubles. The quickest way is to use a DataInputStream, but before creating one you first need a raw byte stream to supply the bytes; that is exactly what the java.io.ByteArrayInputStream class provides.

[b]8.2.1 Byte Array Input Streams[/b]
The ByteArrayInputStream class reads data from a byte array using the methods of InputStream:
public class ByteArrayInputStream extends InputStream
It has two constructors, whose arguments supply the data source:
public ByteArrayInputStream(byte[] buffer) 
public ByteArrayInputStream(byte[] buffer, int offset, int length)
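Continuing the motivating example above (a sketch; receiveBytes() is a hypothetical stand-in for whatever actually supplies the raw bytes), a DataInputStream layered over a ByteArrayInputStream interprets the array as big-endian doubles:

import java.io.*;

public class BytesToDoubles
{
    public static void main(String[] args) throws IOException
    {
        byte[] raw = receiveBytes(); // hypothetical source of raw bytes
        DataInputStream dis = new DataInputStream(new ByteArrayInputStream(raw));
        while (dis.available() >= 8) // 8 bytes per double
        {
            System.out.println(dis.readDouble());
        }
    }

    // Stand-in: builds a sample array the same way section 8.2.2 does.
    private static byte[] receiveBytes() throws IOException
    {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream dos = new DataOutputStream(bos);
        dos.writeDouble(1.5);
        dos.writeDouble(2.5);
        return bos.toByteArray();
    }
}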


[b]8.2.2 Byte Array Output Streams[/b]
The ByteArrayOutputStream class writes data into a byte array using the methods of java.io.OutputStream:
public class ByteArrayOutputStream extends OutputStream
It has two constructors:
public ByteArrayOutputStream() 
public ByteArrayOutputStream(int size)

The no-argument constructor starts with a 32-byte buffer, which grows as needed. The toByteArray() method returns a byte array containing the data written so far:
public synchronized byte[] toByteArray()

The class also has toString() methods that convert the bytes to a string. The no-argument version uses the platform's default encoding; the second lets you specify the encoding yourself:
public String toString() 
public String toString(String encoding) throws UnsupportedEncodingException


For example, one way to convert doubles into bytes is to wrap a DataOutputStream filter around a ByteArrayOutputStream and write the doubles into the byte array:
ByteArrayOutputStream bos = new ByteArrayOutputStream(1024);
DataOutputStream dos = new DataOutputStream(bos);

for (int r = 1; r < 1024; r++)
{
    dos.writeDouble(r * 2.0 * Math.PI); // writeDouble() can throw IOException
}
byte[] result = bos.toByteArray(); // retrieve the accumulated bytes


[b]8.3 Communicating Between Threads with Piped Streams[/b]
The java.io.PipedInputStream and java.io.PipedOutputStream classes provide a convenient way to pass streamed data between threads.

public class PipedInputStream extends InputStream
public class PipedOutputStream extends OutputStream


PipedInputStream has two constructors:
public PipedInputStream()
public PipedInputStream(PipedOutputStream source) throws IOException
The no-argument constructor creates a piped input stream that is not yet connected to a piped output stream; the second creates one that is already connected to the given output stream.
Likewise, PipedOutputStream has two constructors:
public PipedOutputStream(PipedInputStream sink) throws IOException
public PipedOutputStream()

Usage examples (two equivalent ways to create a connected pair):
// connect at construction time, starting from the output end:
PipedOutputStream pout = new PipedOutputStream();
PipedInputStream pin = new PipedInputStream(pout);
// or starting from the input end:
PipedInputStream pin2 = new PipedInputStream();
PipedOutputStream pout2 = new PipedOutputStream(pin2);


Or create both ends unconnected and join them afterwards with connect():
PipedInputStream pin = new PipedInputStream(); 
PipedOutputStream pout = new PipedOutputStream();
pin.connect(pout);
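Putting it together (a minimal sketch): a producer thread writes into the PipedOutputStream while the main thread reads the same bytes from the connected PipedInputStream. The two ends should live in different threads; reading and writing a pipe from a single thread risks deadlock.

import java.io.*;

public class PipeDemo
{
    public static void main(String[] args) throws IOException
    {
        PipedOutputStream pout = new PipedOutputStream();
        PipedInputStream pin = new PipedInputStream(pout);

        Thread producer = new Thread(() -> {
            try
            {
                for (int i = 0; i < 5; i++)
                {
                    pout.write(i); // blocks if the pipe's internal buffer is full
                }
                pout.close();      // signals end-of-stream to the reader
            }
            catch (IOException e)
            {
                e.printStackTrace();
            }
        });
        producer.start();

        int b;
        while ((b = pin.read()) != -1) // blocks until the producer writes
        {
            System.out.println("read: " + b);
        }
        pin.close();
    }
}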


Reference:
Java I/O, Elliotte Rusty Harold (O'Reilly).