Analysis of FFmpeg's parse_options function

This article walks through FFmpeg's `parse_options` function and its related helpers, including `parse_option`, `opt_input_file`, and the `do_video_out` function inside `output_packet`. It focuses on the parsing flow and on the role of key structures such as `AVFormatContext` in the protocol-handling, demuxing, and decoding layers. FFmpeg's structures fall into categories such as protocol handling, demuxing, decoding, and data storage, each with its own key structures and responsibilities.


Analysis of void parse_options(void *optctx, int argc, char **argv, const OptionDef *options, void (*parse_arg_function)(void *, const char*))

Time to analyze this function at last; judging from its doc comment, it parses the command-line arguments:
/**
 * Parse the command line arguments.
 *
 * @param optctx an opaque options context
 * @param options Array with the definitions required to interpret every
 * option of the form: -option_name [argument]
 * @param parse_arg_function Name of the function called to process every
 * argument without a leading option name flag. NULL if such arguments do
 * not have to be processed.
 */
The comment makes the purpose and the parameters clear, so let's read the source.
In main() the function is invoked as: parse_options(NULL, argc, argv, options, opt_output_file);

The source, from cmdutils.c:
void parse_options(void *optctx, int argc, char **argv, const OptionDef *options,
                   void (*parse_arg_function)(void *, const char*))
{
    const char *opt;
    int optindex, handleoptions = 1, ret;
    /* perform system-dependent conversions for arguments list */
    prepare_app_arguments(&argc, &argv);  // effectively a no-op here (only does work on some platforms)
    /* parse options */
    optindex = 1;
    while (optindex < argc) {
        opt = argv[optindex++]; // read the current argument, then advance optindex
        if (handleoptions && opt[0] == '-' && opt[1] != '\0') { // arguments starting with '-' are options; anything else is an option value or a bare argument, e.g. in "-i a.png", "-i" means input and "a.png" is the input filename
            if (opt[1] == '-' && opt[2] == '\0') {  // a bare "--" disables option handling for the rest of the command line; note how the handleoptions flag makes this work
                handleoptions = 0;
                continue;
            }
            opt++;  // strip the leading '-': e.g. opt == "-i" becomes "i"
            if ((ret = parse_option(optctx, opt, argv[optindex], options)) < 0) // analyzed in detail below
                exit_program(1);
            optindex += ret; // parse_option returns 1 if it consumed the following argument (e.g. the filename after "-i"), 0 otherwise, so this skips past the consumed value
        } else {
            if (parse_arg_function) // handles arguments without a leading option flag, e.g. the output filename
                parse_arg_function(optctx, opt);
        }
    }
}


Now let's look at the parse_option(optctx, opt, argv[optindex], options) function.

int parse_option(void *optctx, const char *opt, const char *arg,  const OptionDef *options)

/**
 * Parse one given option.
 *
 * @return on success 1 if arg was consumed, 0 otherwise; negative number on error
 */

  The comment states the function's purpose clearly: parse one given option.
  From the call site above, the first parameter is NULL (ignore it); opt is the option name with the "-" stripped, and arg is the value that follows it. For "-i a.png" the call is parse_option(NULL, "i", "a.png", options).
  The source:
int parse_option(void *optctx, const char *opt, const char *arg,
                 const OptionDef *options)
{
    const OptionDef *po;
    int bool_val = 1;
    int *dstcount;
    void *dst;
    po = find_option(options, opt); // locate this option in the options array and return a pointer to its entry
    if (!po->name && opt[0] == 'n' && opt[1] == 'o') { // handle the "no"-prefixed form of boolean options
        /* handle 'no' bool option */
        po = find_option(options, opt + 2);
        if (!(po->name && (po->flags & OPT_BOOL)))
            goto unknown_opt;
        bool_val = 0;
    }
    if (!po->name) // not found; fall back to the "default" entry
        po = find_option(options, "default");
    if (!po->name) {
unknown_opt:
        av_log(NULL, AV_LOG_ERROR, "Unrecognized option '%s'\n", opt);
        return AVERROR(EINVAL);
    }
    if (po->flags & HAS_ARG && !arg) {  // the option requires a trailing value, e.g. for "-i k" arg == "k"
        av_log(NULL, AV_LOG_ERROR, "Missing argument for option '%s'\n", opt);
        return AVERROR(EINVAL);
    }
    /* new-style options contain an offset into optctx, old-style address of
     * a global var*/
    // if the option has OPT_OFFSET or OPT_SPEC set, dst points to the field at offset po->u.off inside optctx ((uint8_t *)optctx + po->u.off); otherwise dst points at the global slot po->u.dst_ptr
    // to keep things concrete, assume the command "ffmpeg -i test.png kuan.mp4" (convert one image into a video); for "-i" the options array contains { "i", HAS_ARG, {(void*)opt_input_file}, "input file name", "filename" }, so neither flag is set and dst simply aliases the union slot holding the opt_input_file pointer; it is never dereferenced on this path
    dst = po->flags & (OPT_OFFSET | OPT_SPEC) ? (uint8_t *)optctx + po->u.off
                                              : po->u.dst_ptr;
    // given that example command, this block is not executed
    if (po->flags & OPT_SPEC) {
        SpecifierOpt **so = dst;
        char *p = strchr(opt, ':');
        dstcount = (int *)(so + 1);
        *so = grow_array(*so, sizeof(**so), dstcount, *dstcount + 1);
        (*so)[*dstcount - 1].specifier = av_strdup(p ? p + 1 : "");
        dst = &(*so)[*dstcount - 1].u;
    }
    if (po->flags & OPT_STRING) {
        char *str;
        str = av_strdup(arg);
        *(char **)dst = str;
    } else if (po->flags & OPT_BOOL) {
        *(int *)dst = bool_val;
    } else if (po->flags & OPT_INT) {
        *(int *)dst = parse_number_or_die(opt, arg, OPT_INT64, INT_MIN, INT_MAX);
    } else if (po->flags & OPT_INT64) {
        *(int64_t *)dst = parse_number_or_die(opt, arg, OPT_INT64, INT64_MIN, INT64_MAX);
    } else if (po->flags & OPT_TIME) {
        *(int64_t *)dst = parse_time_or_die(opt, arg, 1);
    } else if (po->flags & OPT_FLOAT) {
        *(float *)dst = parse_number_or_die(opt, arg, OPT_FLOAT, -INFINITY, INFINITY);
    } else if (po->flags & OPT_DOUBLE) {
        *(double *)dst = parse_number_or_die(opt, arg, OPT_DOUBLE, -INFINITY, INFINITY);
    } else if (po->u.func_arg) { // this branch is the one taken for "-i": po->u.func_arg(opt, arg) is invoked; everything below keeps assuming "ffmpeg -i a.png kuan.mp4", so adapt the analysis to your own command as needed
        int ret = po->flags & OPT_FUNC2 ? po->u.func2_arg(optctx, opt, arg)
                                        : po->u.func_arg(opt, arg); // for "-i" the function pointer is opt_input_file, analyzed below; other options install different handlers
        if (ret < 0) {
            av_log(NULL, AV_LOG_ERROR,
                   "Failed to set value '%s' for option '%s'\n", arg, opt);
            return ret;
        }
    }
    if (po->flags & OPT_EXIT)
        exit_program(0);
    return !!(po->flags & HAS_ARG);
}


Given the command "ffmpeg -i a.png kuan.mp4", the next function to analyze is opt_input_file; with this example, the call is opt_input_file("i", "a.png").

a. Analysis of opt_input_file (each command has its own handler function)

Note: the code analysis below assumes the command "ffmpeg -i a.png kuan.mp4".

Let's look at opt_input_file; it is defined in ffmpeg.c and declared static.
Reading the source line by line:
static int opt_input_file(const char *opt, const char *filename)
{
    AVFormatContext *ic; // see the separate post on this structure
    AVInputFormat *file_iformat = NULL;
    int err, i, ret, rfps, rfps_base;
    int64_t timestamp;
    uint8_t buf[128];
    AVDictionary **opts;
    int orig_nb_streams;                     // number of streams before avformat_find_stream_info

    if (last_asked_format) {   // not executed in the scenario analyzed here
        if (!(file_iformat = av_find_input_format(last_asked_format))) {
            fprintf(stderr, "Unknown input format: '%s'\n", last_asked_format);
            exit_program(1);
        }
        last_asked_format = NULL;
    }

    if (!strcmp(filename, "-")) // not executed
        filename = "pipe:";
   // equivalent to: using_stdin = using_stdin || !strncmp(filename, "pipe:", 5) || !strcmp(filename, "/dev/stdin"); for our command using_stdin stays 0
   using_stdin |= !strncmp(filename, "pipe:", 5) ||    // consulted later, when opening the output file, to decide whether to prompt before overwriting
                    !strcmp(filename, "/dev/stdin");

    /* get default parameters from command line */
    ic = avformat_alloc_context(); // allocate and initialize the input context
    if (!ic) { // allocation can fail, so check
        print_error(filename, AVERROR(ENOMEM));
        exit_program(1);
    }
    if (audio_sample_rate) {
        snprintf(buf, sizeof(buf), "%d", audio_sample_rate);
        av_dict_set(&format_opts, "sample_rate", buf, 0);
    }
    if (audio_channels) {
        snprintf(buf, sizeof(buf), "%d", audio_channels);
        av_dict_set(&format_opts, "channels", buf, 0);
    }
    if (frame_rate.num) {
        snprintf(buf, sizeof(buf), "%d/%d", frame_rate.num, frame_rate.den);
        av_dict_set(&format_opts, "framerate", buf, 0);
    }
    if (frame_width && frame_height) {
        snprintf(buf, sizeof(buf), "%dx%d", frame_width, frame_height);
        av_dict_set(&format_opts, "video_size", buf, 0);
    } // none of the blocks above applies to our example command
    if (frame_pix_fmt != PIX_FMT_NONE)
        av_dict_set(&format_opts, "pixel_format", av_get_pix_fmt_name(frame_pix_fmt), 0);
    // record the forced audio, video, and subtitle codec ids, if any
    ic->video_codec_id   =
        find_codec_or_die(video_codec_name   , AVMEDIA_TYPE_VIDEO   , 0);
    ic->audio_codec_id   =
        find_codec_or_die(audio_codec_name   , AVMEDIA_TYPE_AUDIO   , 0);
    ic->subtitle_codec_id=
        find_codec_or_die(subtitle_codec_name, AVMEDIA_TYPE_SUBTITLE, 0); // subtitle codec id
    ic->flags |= AVFMT_FLAG_NONBLOCK;

    /* open the input file with generic libav function */
    err = avformat_open_input(&ic, filename, file_iformat, &format_opts); // open the input stream and read the header; the codecs are not opened yet
    if (err < 0) {    // error path
        print_error(filename, err);
        exit_program(1);
    }
    assert_avoptions(format_opts); // skip

    if(opt_programid) {// skip: no -programid was given
        int i, j;
        int found=0;
        for(i=0; i<ic->nb_streams; i++){
            ic->streams[i]->discard= AVDISCARD_ALL;
        }
        for(i=0; i<ic->nb_programs; i++){
            AVProgram *p= ic->programs[i];
            if(p->id != opt_programid){
                p->discard = AVDISCARD_ALL;
            }else{
                found=1; 
                for(j=0; j<p->nb_stream_indexes; j++){
                    ic->streams[p->stream_index[j]]->discard= AVDISCARD_DEFAULT;
                }
            }
        }
        if(!found){
            fprintf(stderr, "Specified program id not found\n");
            exit_program(1);
        }
        opt_programid=0;
    }

    if (loop_input) {
        av_log(NULL, AV_LOG_WARNING, "-loop_input is deprecated, use -loop 1\n");
        ic->loop_input = loop_input;
    }

    /* Set AVCodecContext options for avformat_find_stream_info */
    opts = setup_find_stream_info_opts(ic, codec_opts);
    orig_nb_streams = ic->nb_streams;

    /* If not enough info to get the stream parameters, we decode the
       first frames to get it. (used in mpeg case for example) */
    ret = avformat_find_stream_info(ic, opts); // as the comment above explains, decode a few frames when the header lacks stream parameters; here it returns 0
    if (ret < 0 && verbose >= 0) {
        fprintf(stderr, "%s: could not find codec parameters\n", filename);
        av_close_input_file(ic);
        exit_program(1);
    }

    timestamp = start_time;
    /* add the stream start time */
    if (ic->start_time != AV_NOPTS_VALUE)
        timestamp += ic->start_time;

    /* if seeking requested, we execute it */
    if (start_time != 0) {
        ret = av_seek_frame(ic, -1, timestamp, AVSEEK_FLAG_BACKWARD);
        if (ret < 0) {
            fprintf(stderr, "%s: could not seek to position %0.3f\n",
                    filename, (double)timestamp / AV_TIME_BASE);
        }
        /* reset seek info */
        start_time = 0;
    }

    /* update the current parameters so that they match the one of the input stream */
    for(i=0;i<ic->nb_streams;i++) {
        AVStream *st = ic->streams[i];
        AVCodecContext *dec = st->codec;
        InputStream *ist;

        dec->thread_count = thread_count;

        input_streams = grow_array(input_streams, sizeof(*input_streams), &nb_input_streams, nb_input_streams + 1); // worth reading carefully: the array is reallocated and the code below indexes the newly added slot; if the pointer arithmetic is unclear, so is the rest
        ist = &input_streams[nb_input_streams - 1];
        ist->st = st;
        ist->file_index = nb_input_files;
        ist->discard = 1;
        ist->opts = filter_codec_opts(codec_opts, ist->st->codec->codec_id, ic, st);

        if (i < nb_ts_scale)
            ist->ts_scale = ts_scale[i];

        switch (dec->codec_type) {
        case AVMEDIA_TYPE_AUDIO:
            ist->dec = avcodec_find_decoder_by_name(audio_codec_name);
            if(audio_disable)
                st->discard= AVDISCARD_ALL;
            break;
        case AVMEDIA_TYPE_VIDEO:
            ist->dec = avcodec_find_decoder_by_name(video_codec_name);
            rfps      = ic->streams[i]->r_frame_rate.num;
            rfps_base = ic->streams[i]->r_frame_rate.den;
            if (dec->lowres) {
                dec->flags |= CODEC_FLAG_EMU_EDGE;
                dec->height >>= dec->lowres;
                dec->width  >>= dec->lowres;
            }
            if(me_threshold)
                dec->debug |= FF_DEBUG_MV;

            if (dec->time_base.den != rfps*dec->ticks_per_frame || dec->time_base.num != rfps_base) {

                if (verbose >= 0)
                    fprintf(stderr,"\nSeems stream %d codec frame rate differs from container frame rate: %2.2f (%d/%d) -> %2.2f (%d/%d)\n",
                            i, (float)dec->time_base.den / dec->time_base.num, dec->time_base.den, dec->time_base.num,

                    (float)rfps / rfps_base, rfps, rfps_base);
            }

            if(video_disable)
                st->discard= AVDISCARD_ALL;
            else if(video_discard)
                st->discard= video_discard;
            break;
        case AVMEDIA_TYPE_DATA:
            break;
        case AVMEDIA_TYPE_SUBTITLE:
            ist->dec = avcodec_find_decoder_by_name(subtitle_codec_name);
            if(subtitle_disable)
                st->discard = AVDISCARD_ALL;
            break;
        case AVMEDIA_TYPE_ATTACHMENT:
        case AVMEDIA_TYPE_UNKNOWN:
            break;
        default:
            abort();
        }
    }

    /* dump the file content */
    if (verbose >= 0)
        av_dump_format(ic, nb_input_files, filename, 0);  // prints the basic stream info; a library function we won't dig into

    input_files = grow_array(input_files, sizeof(*input_files), &nb_input_files, nb_input_files + 1);// the same dynamic-array pattern again
    input_files[nb_input_files - 1].ctx        = ic;
    input_files[nb_input_files - 1].ist_index  = nb_input_streams - ic->nb_streams;
    input_files[nb_input_files - 1].ts_offset  = input_ts_offset - (copy_ts ? 0 : timestamp);
    input_files[nb_input_files - 1].nb_streams = ic->nb_streams;

    frame_rate    = (AVRational){0, 0};
    frame_pix_fmt = PIX_FMT_NONE;
    frame_height = 0;
    frame_width  = 0;
    audio_sample_rate = 0;
    audio_channels    = 0;
    audio_sample_fmt  = AV_SAMPLE_FMT_NONE;
    av_freep(&ts_scale);
    nb_ts_scale = 0;

    for (i = 0; i < orig_nb_streams; i++)
        av_dict_free(&opts[i]);
    av_freep(&opts);
    av_freep(&video_codec_name);
    av_freep(&audio_codec_name);
    av_freep(&subtitle_codec_name);
    uninit_opts();
    init_opts();
    return 0;
}

Having read opt_input_file (still assuming the command "ffmpeg -i a.png kuan.mp4"), its real job is to fill in input_files; the grow_array pointer-offset technique it relies on is important.
Debugging shows that the AVFormatContext ic gets filled in, but it carries no encoder or decoder state yet: the AVStream members hold no concrete decoding info, i.e. the AVCodec pointers are still 0.




opt_output_file: parse_options ultimately lands in this function

From the analysis above, parse_options itself mostly just validates the arguments; the function that matters is this output-file handler, so let's analyze it properly.

The source:
static void opt_output_file(void *optctx, const char *filename)
{
    AVFormatContext *oc; // analyzed below
    int err, use_video, use_audio, use_subtitle, use_data;
    int input_has_video, input_has_audio, input_has_subtitle, input_has_data;
    AVOutputFormat *file_oformat;
    if (!strcmp(filename, "-"))
        filename = "pipe:";
    oc = avformat_alloc_context(); // allocate and initialize
    if (!oc) {
        print_error(filename, AVERROR(ENOMEM));
        exit_program(1);
    }
    if (last_asked_format) { // skip; go straight to the else branch
        file_oformat = av_guess_format(last_asked_format, NULL, NULL);
        if (!file_oformat) {
            fprintf(stderr, "Requested output format '%s' is not a suitable output format\n", last_asked_format);
            exit_program(1);
        }
        last_asked_format = NULL;
    } else {
        file_oformat = av_guess_format(NULL, filename, NULL); // a library function; per its doc comment: "Return the output format in the list of registered output formats which best matches the provided parameters, or return NULL if there is no match." In other words, pick the output format for the file, provided that format has been registered
        if (!file_oformat) {
            fprintf(stderr, "Unable to find a suitable output format for '%s'\n",
                    filename);
            exit_program(1);
        }
    }
    oc->oformat = file_oformat;
    av_strlcpy(oc->filename, filename, sizeof(oc->filename));// an FFmpeg library function that copies a string; it is used instead of strcpy because it truncates safely to the destination buffer size
    if (!strcmp(file_oformat->name, "ffm") &&
        av_strstart(filename, "http:", NULL)) {  // skipped here; jump straight to the else branch
        /* special case for files sent to avserver: we get the stream
           parameters from avserver */
        int err = read_avserver_streams(oc, filename);
        if (err < 0) {
            print_error(filename, err);
            exit_program(1);
        }
    } else {
        use_video = file_oformat->video_codec != CODEC_ID_NONE || video_stream_copy || video_codec_name;
        use_audio = file_oformat->audio_codec != CODEC_ID_NONE || audio_stream_copy || audio_codec_name;
        use_subtitle = file_oformat->subtitle_codec != CODEC_ID_NONE || subtitle_stream_copy || subtitle_codec_name;
        use_data = data_stream_copy ||  data_codec_name; /* XXX once generic data codec will be available add a ->data_codec reference and use it here */
        /* disable if no corresponding type found */
        check_inputs(&input_has_video,
                     &input_has_audio,
                     &input_has_subtitle,
                     &input_has_data);
        if (!input_has_video)
            use_video = 0;
        if (!input_has_audio)
            use_audio = 0;
        if (!input_has_subtitle)
            use_subtitle = 0;
        if (!input_has_data)
            use_data = 0;
        /* manual disable */
        if (audio_disable)    use_audio    = 0;
        if (video_disable)    use_video    = 0;
        if (subtitle_disable) use_subtitle = 0;
        if (data_disable)     use_data     = 0;
        if (use_video)    new_video_stream(oc, nb_output_files);
        if (use_audio)    new_audio_stream(oc, nb_output_files);
        if (use_subtitle) new_subtitle_stream(oc, nb_output_files);
        if (use_data)     new_data_stream(oc, nb_output_files);
        av_dict_copy(&oc->metadata, metadata, 0);
        av_dict_free(&metadata);
    }
    av_dict_copy(&output_opts[nb_output_files], format_opts, 0);
    output_files[nb_output_files++] = oc;
    /* check filename in case of an image number is expected */
    if (oc->oformat->flags & AVFMT_NEEDNUMBER) {
        if (!av_filename_number_test(oc->filename)) {
            print_error(oc->filename, AVERROR(EINVAL));
            exit_program(1);
        }
    }
    if (!(oc->oformat->flags & AVFMT_NOFILE)) {
        /* test if it already exists to avoid losing precious files */
        if (!file_overwrite &&
            (strchr(filename, ':') == NULL ||
             filename[1] == ':' ||
             av_strstart(filename, "file:", NULL))) {
            if (avio_check(filename, 0) == 0) {
                if (!using_stdin) {  // using_stdin was set back when the input file was handled
                    fprintf(stderr,"File '%s' already exists. Overwrite ? [y/N] ", filename);
                    fflush(stderr);
                    if (!read_yesno()) {
                        fprintf(stderr, "Not overwriting - exiting\n");
                        exit_program(1);
                    }
                }
                else {
                    fprintf(stderr,"File '%s' already exists. Exiting.\n", filename);
                    exit_program(1);
                }
            }
        }
        /* open the file */
        if ((err = avio_open(&oc->pb, filename, AVIO_FLAG_WRITE)) < 0) {
            print_error(filename, err);
            exit_program(1);
        }
    }
    oc->preload= (int)(mux_preload*AV_TIME_BASE);
    oc->max_delay= (int)(mux_max_delay*AV_TIME_BASE);
    if (loop_output >= 0) {
        av_log(NULL, AV_LOG_WARNING, "-loop_output is deprecated, use -loop\n");
        oc->loop_output = loop_output;
    }
    oc->flags |= AVFMT_FLAG_NONBLOCK;
    frame_rate    = (AVRational){0, 0};
    frame_width   = 0;
    frame_height  = 0;
    audio_sample_rate = 0;
    audio_channels    = 0;
    audio_sample_fmt  = AV_SAMPLE_FMT_NONE;
    av_freep(&forced_key_frames);
    uninit_opts();
    init_opts();
}
The core of this function is the single line below. Debugging shows that the output AVStream's codec context is populated, i.e. the encoder setup happens during this output stage. In the end the function accomplishes one thing: filling in AVFormatContext *oc and storing it:
 output_files[nb_output_files++] = oc;

The transcoding function transcode

We have finally reached the last of the three important functions.
The source:
/*
 * The following code is the main loop of the file converter
 */
static int transcode(AVFormatContext **output_files,
                     int nb_output_files,
                     InputFile *input_files,
                     int nb_input_files,
                     StreamMap *stream_maps, int nb_stream_maps)
{
    int ret = 0, i, j, k, n, nb_ostreams = 0;
    AVFormatContext *is, *os;
    AVCodecContext *codec, *icodec;
    OutputStream *ost, **ost_table = NULL;
    InputStream *ist;
    char error[1024];
    int want_sdp = 1;
    uint8_t no_packet[MAX_FILES]={0};
    int no_packet_count=0;

    if (rate_emu)  // rate_emu == 0 for our command
        for (i = 0; i < nb_input_streams; i++)
            input_streams[i].start = av_gettime();

    /* output stream init */ // initialize the output streams
    nb_ostreams = 0;
    for(i=0;i<nb_output_files;i++) {
        os = output_files[i];   // note the double-pointer indexing
        if (!os->nb_streams && !(os->oformat->flags & AVFMT_NOSTREAMS)) { // skipped for our command
            av_dump_format(output_files[i], i, output_files[i]->filename, 1);
            fprintf(stderr, "Output file #%d does not contain any stream\n", i);
            ret = AVERROR(EINVAL);
            goto fail;
        }
        nb_ostreams += os->nb_streams;
    }
    if (nb_stream_maps > 0 && nb_stream_maps != nb_ostreams) { // nb_stream_maps == 0 here
        fprintf(stderr, "Number of stream maps must match number of output streams\n");
        ret = AVERROR(EINVAL);
        goto fail;
    }

    /* Sanity check the mapping args -- do the input files & streams exist? */
    for(i=0;i<nb_stream_maps;i++) {
        int fi = stream_maps[i].file_index;
        int si = stream_maps[i].stream_index;

        if (fi < 0 || fi > nb_input_files - 1 ||
            si < 0 || si > input_files[fi].nb_streams - 1) {
            fprintf(stderr,"Could not find input stream #%d.%d\n", fi, si);
            ret = AVERROR(EINVAL);
            goto fail;
        }
        fi = stream_maps[i].sync_file_index;
        si = stream_maps[i].sync_stream_index;
        if (fi < 0 || fi > nb_input_files - 1 ||
            si < 0 || si > input_files[fi].nb_streams - 1) {
            fprintf(stderr,"Could not find sync stream #%d.%d\n", fi, si);
            ret = AVERROR(EINVAL);
            goto fail;
        }
    }

    ost_table = av_mallocz(sizeof(OutputStream *) * nb_ostreams);
    if (!ost_table)
        goto fail;
    n = 0;
    for(k=0;k<nb_output_files;k++) {
        os = output_files[k];
        for(i=0;i<os->nb_streams;i++,n++) {
            int found;
            ost = ost_table[n] = output_streams_for_file[k][i];
            if (nb_stream_maps > 0) {
                ost->source_index = input_files[stream_maps[n].file_index].ist_index +
                    stream_maps[n].stream_index;

                /* Sanity check that the stream types match */
                if (input_streams[ost->source_index].st->codec->codec_type != ost->st->codec->codec_type) {
                    int i= ost->file_index;
                    av_dump_format(output_files[i], i, output_files[i]->filename, 1);
                    fprintf(stderr, "Codec type mismatch for mapping #%d.%d -> #%d.%d\n",
                        stream_maps[n].file_index, stream_maps[n].stream_index,
                        ost->file_index, ost->index);
                    exit_program(1);
                }

            } else {
                int best_nb_frames=-1;
                /* get corresponding input stream index : we select the first one with the right type */
                found = 0;
                for (j = 0; j < nb_input_streams; j++) {
                    int skip=0;
                    ist = &input_streams[j];
                    if(opt_programid){
                        int pi,si;
                        AVFormatContext *f = input_files[ist->file_index].ctx;
                        skip=1;
                        for(pi=0; pi<f->nb_programs; pi++){
                            AVProgram *p= f->programs[pi];
                            if(p->id == opt_programid)
                                for(si=0; si<p->nb_stream_indexes; si++){
                                    if(f->streams[ p->stream_index[si] ] == ist->st)
                                        skip=0;
                                }
                        }
                    }
                    if (ist->discard && ist->st->discard != AVDISCARD_ALL && !skip &&
                        ist->st->codec->codec_type == ost->st->codec->codec_type) {
                        if(best_nb_frames < ist->st->codec_info_nb_frames){
                            best_nb_frames= ist->st->codec_info_nb_frames;
                            ost->source_index = j;
                            found = 1;
                        }
                    }
                }

                if (!found) {
                    if(! opt_programid) {
                        /* try again and reuse existing stream */
                        for (j = 0; j < nb_input_streams; j++) {
                            ist = &input_streams[j];
                            if (   ist->st->codec->codec_type == ost->st->codec->codec_type
                                && ist->st->discard != AVDISCARD_ALL) {
                                ost->source_index = j;
                                found = 1;
                            }
                        }
                    }
                    if (!found) {
                        int i= ost->file_index;
                        av_dump_format(output_files[i], i, output_files[i]->filename, 1);
                        fprintf(stderr, "Could not find input stream matching output stream #%d.%d\n",
                                ost->file_index, ost->index);
                        exit_program(1);
                    }
                }
            }
            ist = &input_streams[ost->source_index];
            ist->discard = 0;
            ost->sync_ist = (nb_stream_maps > 0) ?
                &input_streams[input_files[stream_maps[n].sync_file_index].ist_index +
                         stream_maps[n].sync_stream_index] : ist;
        }
    }

    /* for each output stream, we compute the right encoding parameters */
    for(i=0;i<nb_ostreams;i++) { // compute the right encoding parameters for each output stream; with a single output stream this is simpler than it looks
        ost = ost_table[i];
        os = output_files[ost->file_index];
        ist = &input_streams[ost->source_index];

        codec = ost->st->codec;
        icodec = ist->st->codec;

        if (metadata_streams_autocopy)
            av_dict_copy(&ost->st->metadata, ist->st->metadata,
                         AV_DICT_DONT_OVERWRITE);

        ost->st->disposition = ist->st->disposition;
        codec->bits_per_raw_sample= icodec->bits_per_raw_sample;
        codec->chroma_sample_location = icodec->chroma_sample_location;

        if (ost->st->stream_copy) {
            uint64_t extra_size = (uint64_t)icodec->extradata_size + FF_INPUT_BUFFER_PADDING_SIZE;

            if (extra_size > INT_MAX)
                goto fail;

            /* if stream_copy is selected, no need to decode or encode */
            codec->codec_id = icodec->codec_id;
            codec->codec_type = icodec->codec_type;

            if(!codec->codec_tag){
                if(   !os->oformat->codec_tag
                   || av_codec_get_id (os->oformat->codec_tag, icodec->codec_tag) == codec->codec_id
                   || av_codec_get_tag(os->oformat->codec_tag, icodec->codec_id) <= 0)
                    codec->codec_tag = icodec->codec_tag;
            }

            codec->bit_rate = icodec->bit_rate;
            codec->rc_max_rate    = icodec->rc_max_rate;
            codec->rc_buffer_size = icodec->rc_buffer_size;
            codec->extradata= av_mallocz(extra_size);
            if (!codec->extradata)
                goto fail;
            memcpy(codec->extradata, icodec->extradata, icodec->extradata_size);
            codec->extradata_size= icodec->extradata_size;
            if(!copy_tb && av_q2d(icodec->time_base)*icodec->ticks_per_frame > av_q2d(ist->st->time_base) && av_q2d(ist->st->time_base) < 1.0/500){
                codec->time_base = icodec->time_base;
                codec->time_base.num *= icodec->ticks_per_frame;
                av_reduce(&codec->time_base.num, &codec->time_base.den,
                          codec->time_base.num, codec->time_base.den, INT_MAX);
            }else
                codec->time_base = ist->st->time_base;
            switch(codec->codec_type) {
            case AVMEDIA_TYPE_AUDIO:
                if(audio_volume != 256) {
                    fprintf(stderr,"-acodec copy and -vol are incompatible (frames are not decoded)\n");
                    exit_program(1);
                }
                codec->channel_layout = icodec->channel_layout;
                codec->sample_rate = icodec->sample_rate;
                codec->channels = icodec->channels;
                codec->frame_size = icodec->frame_size;
                codec->audio_service_type = icodec->audio_service_type;
                codec->block_align= icodec->block_align;
                if(codec->block_align == 1 && codec->codec_id == CODEC_ID_MP3)
                    codec->block_align= 0;
                if(codec->codec_id == CODEC_ID_AC3)
                    codec->block_align= 0;
                break;
            case AVMEDIA_TYPE_VIDEO:
                codec->pix_fmt = icodec->pix_fmt;
                codec->width = icodec->width;
                codec->height = icodec->height;
                codec->has_b_frames = icodec->has_b_frames;
                if (!codec->sample_aspect_ratio.num) {
                    codec->sample_aspect_ratio =
                    ost->st->sample_aspect_ratio =
                        ist->st->sample_aspect_ratio.num ? ist->st->sample_aspect_ratio :
                        ist->st->codec->sample_aspect_ratio.num ?
                        ist->st->codec->sample_aspect_ratio : (AVRational){0, 1};
                }
                break;
            case AVMEDIA_TYPE_SUBTITLE:
                codec->width = icodec->width;
                codec->height = icodec->height;
                break;
            case AVMEDIA_TYPE_DATA:
                break;
            default:
                abort();
            }
        } else {
            if (!ost->enc)
                ost->enc = avcodec_find_encoder(ost->st->codec->codec_id);
            switch(codec->codec_type) {
            case AVMEDIA_TYPE_AUDIO:
                ost->fifo= av_fifo_alloc(1024);
                if(!ost->fifo)
                    goto fail;
                ost->reformat_pair = MAKE_SFMT_PAIR(AV_SAMPLE_FMT_NONE,AV_SAMPLE_FMT_NONE);
                if (!codec->sample_rate) {
                    codec->sample_rate = icodec->sample_rate;
                    if (icodec->lowres)
                        codec->sample_rate >>= icodec->lowres;
                }
                choose_sample_rate(ost->st, ost->enc);
                codec->time_base = (AVRational){1, codec->sample_rate};
                if (codec->sample_fmt == AV_SAMPLE_FMT_NONE)
                    codec->sample_fmt = icodec->sample_fmt;
                choose_sample_fmt(ost->st, ost->enc);
                if (!codec->channels)
                    codec->channels = icodec->channels;
                codec->channel_layout = icodec->channel_layout;
                if (av_get_channel_layout_nb_channels(codec->channel_layout) != codec->channels)
                    codec->channel_layout = 0;
                ost->audio_resample = codec->sample_rate != icodec->sample_rate || audio_sync_method > 1;
                icodec->request_channels = codec->channels;
                ist->decoding_needed = 1;
                ost->encoding_needed = 1;
                ost->resample_sample_fmt  = icodec->sample_fmt;
                ost->resample_sample_rate = icodec->sample_rate;
                ost->resample_channels    = icodec->channels;
                break;
            case AVMEDIA_TYPE_VIDEO:
                if (codec->pix_fmt == PIX_FMT_NONE)
                    codec->pix_fmt = icodec->pix_fmt;
                choose_pixel_fmt(ost->st, ost->enc);

                if (ost->st->codec->pix_fmt == PIX_FMT_NONE) {
                    fprintf(stderr, "Video pixel format is unknown, stream cannot be encoded\n");
                    exit_program(1);
                }

                if (!codec->width || !codec->height) {
                    codec->width  = icodec->width;
                    codec->height = icodec->height;
                }

                ost->video_resample = codec->width   != icodec->width  ||
                                      codec->height  != icodec->height ||
                                      codec->pix_fmt != icodec->pix_fmt;
                if (ost->video_resample) {
#if !CONFIG_AVFILTER
                    avcodec_get_frame_defaults(&ost->pict_tmp);
                    if(avpicture_alloc((AVPicture*)&ost->pict_tmp, codec->pix_fmt,
                                       codec->width, codec->height)) {
                        fprintf(stderr, "Cannot allocate temp picture, check pix fmt\n");
                        exit_program(1);
                    }
                    ost->img_resample_ctx = sws_getContext(
                        icodec->width,
                        icodec->height,
                        icodec->pix_fmt,
                        codec->width,
                        codec->height,
                        codec->pix_fmt,
                        ost->sws_flags, NULL, NULL, NULL);
                    if (ost->img_resample_ctx == NULL) {
                        fprintf(stderr, "Cannot get resampling context\n");
                        exit_program(1);
                    }
#endif
                    codec->bits_per_raw_sample= 0;
                }

                ost->resample_height = icodec->height;
                ost->resample_width  = icodec->width;
                ost->resample_pix_fmt= icodec->pix_fmt;
                ost->encoding_needed = 1;
                ist->decoding_needed = 1;

                if (!ost->frame_rate.num)
                    ost->frame_rate = ist->st->r_frame_rate.num ? ist->st->r_frame_rate : (AVRational){25,1};
                if (ost->enc && ost->enc->supported_framerates && !force_fps) {
                    int idx = av_find_nearest_q_idx(ost->frame_rate, ost->enc->supported_framerates);
                    ost->frame_rate = ost->enc->supported_framerates[idx];
                }
                codec->time_base = (AVRational){ost->frame_rate.den, ost->frame_rate.num};

#if CONFIG_AVFILTER
                if (configure_video_filters(ist, ost)) {
                    fprintf(stderr, "Error opening filters!\n");
                    exit(1);
                }
#endif
                break;
            case AVMEDIA_TYPE_SUBTITLE:
                ost->encoding_needed = 1;
                ist->decoding_needed = 1;
                break;
            default:
                abort();
                break;
            }
            /* two pass mode */
            if (ost->encoding_needed &&
                (codec->flags & (CODEC_FLAG_PASS1 | CODEC_FLAG_PASS2))) {
                char logfilename[1024];
                FILE *f;

                snprintf(logfilename, sizeof(logfilename), "%s-%d.log",
                         pass_logfilename_prefix ? pass_logfilename_prefix : DEFAULT_PASS_LOGFILENAME_PREFIX,
                         i);
                if (codec->flags & CODEC_FLAG_PASS1) {
                    f = fopen(logfilename, "wb");
                    if (!f) {
                        fprintf(stderr, "Cannot write log file '%s' for pass-1 encoding: %s\n", logfilename, strerror(errno));
                        exit_program(1);
                    }
                    ost->logfile = f;
                } else {
                    char  *logbuffer;
                    size_t logbuffer_size;
                    if (cmdutils_read_file(logfilename, &logbuffer, &logbuffer_size) < 0) {
                        fprintf(stderr, "Error reading log file '%s' for pass-2 encoding\n", logfilename);
                        exit_program(1);
                    }
                    codec->stats_in = logbuffer;
                }
            }
        }
        if(codec->codec_type == AVMEDIA_TYPE_VIDEO){
            int size= codec->width * codec->height;
            bit_buffer_size= FFMAX(bit_buffer_size, 6*size + 200);
        }
    }

    if (!bit_buffer)
        bit_buffer = av_malloc(bit_buffer_size);
    if (!bit_buffer) {
        fprintf(stderr, "Cannot allocate %d bytes output buffer\n",
                bit_buffer_size);
        ret = AVERROR(ENOMEM);
        goto fail;
    }

    /* open each encoder */   // open the encoder for every output stream that needs encoding
    for(i=0;i<nb_ostreams;i++) {
        ost = ost_table[i];
        if (ost->encoding_needed) {
            AVCodec *codec = ost->enc;
            AVCodecContext *dec = input_streams[ost->source_index].st->codec;
            if (!codec) {
                snprintf(error, sizeof(error), "Encoder (codec id %d) not found for output stream #%d.%d",
                         ost->st->codec->codec_id, ost->file_index, ost->index);
                ret = AVERROR(EINVAL);
                goto dump_format;
            }
            if (dec->subtitle_header) {
                ost->st->codec->subtitle_header = av_malloc(dec->subtitle_header_size);
                if (!ost->st->codec->subtitle_header) {
                    ret = AVERROR(ENOMEM);
                    goto dump_format;
                }
                memcpy(ost->st->codec->subtitle_header, dec->subtitle_header, dec->subtitle_header_size);
                ost->st->codec->subtitle_header_size = dec->subtitle_header_size;
            }
            if (avcodec_open2(ost->st->codec, codec, &ost->opts) < 0) {
                snprintf(error, sizeof(error), "Error while opening encoder for output stream #%d.%d - maybe incorrect parameters such as bit_rate, rate, width or height",
                        ost->file_index, ost->index);
                ret = AVERROR(EINVAL);
                goto dump_format;
            }
            assert_codec_experimental(ost->st->codec, 1);
            assert_avoptions(ost->opts);
            if (ost->st->codec->bit_rate && ost->st->codec->bit_rate < 1000)
                av_log(NULL, AV_LOG_WARNING, "The bitrate parameter is set too low."
                                             "It takes bits/s as argument, not kbits/s\n");
            extra_size += ost->st->codec->extradata_size;
        }
    }

    /* open each decoder */ // open the decoder for every input stream that needs decoding
    for (i = 0; i < nb_input_streams; i++) {
        ist = &input_streams[i];
        if (ist->decoding_needed) {
            AVCodec *codec = ist->dec;
            if (!codec)
                codec = avcodec_find_decoder(ist->st->codec->codec_id);
            if (!codec) {
                snprintf(error, sizeof(error), "Decoder (codec id %d) not found for input stream #%d.%d",
                        ist->st->codec->codec_id, ist->file_index, ist->st->index);
                ret = AVERROR(EINVAL);
                goto dump_format;
            }

            /* update requested sample format for the decoder based on the
               corresponding encoder sample format */
            for (j = 0; j < nb_ostreams; j++) {
                ost = ost_table[j];
                if (ost->source_index == i) {
                    update_sample_fmt(ist->st->codec, codec, ost->st->codec);
                    break;
                }
            }

            if (avcodec_open2(ist->st->codec, codec, &ist->opts) < 0) {
                snprintf(error, sizeof(error), "Error while opening decoder for input stream #%d.%d",
                        ist->file_index, ist->st->index);
                ret = AVERROR(EINVAL);
                goto dump_format;
            }
            assert_codec_experimental(ist->st->codec, 0);
            assert_avoptions(ost->opts);
        }
    }

    /* init pts */
    for (i = 0; i < nb_input_streams; i++) {
        AVStream *st;
        ist = &input_streams[i];
        st= ist->st;
        ist->pts = st->avg_frame_rate.num ? - st->codec->has_b_frames*AV_TIME_BASE / av_q2d(st->avg_frame_rate) : 0;
        ist->next_pts = AV_NOPTS_VALUE;
        init_pts_correction(&ist->pts_ctx);
        ist->is_start = 1;
    }

    /* set meta data information from input file if required */
    for (i=0;i<nb_meta_data_maps;i++) {
        AVFormatContext *files[2];
        AVDictionary    **meta[2];
        int j;

#define METADATA_CHECK_INDEX(index, nb_elems, desc)\
        if ((index) < 0 || (index) >= (nb_elems)) {\
            snprintf(error, sizeof(error), "Invalid %s index %d while processing metadata maps\n",\
                     (desc), (index));\
            ret = AVERROR(EINVAL);\
            goto dump_format;\
        }

        int out_file_index = meta_data_maps[i][0].file;
        int in_file_index = meta_data_maps[i][1].file;
        if (in_file_index < 0 || out_file_index < 0)
            continue;
        METADATA_CHECK_INDEX(out_file_index, nb_output_files, "output file")
        METADATA_CHECK_INDEX(in_file_index, nb_input_files, "input file")

        files[0] = output_files[out_file_index];
        files[1] = input_files[in_file_index].ctx;

        for (j = 0; j < 2; j++) {
            MetadataMap *map = &meta_data_maps[i][j];

            switch (map->type) {
            case 'g':
                meta[j] = &files[j]->metadata;
                break;
            case 's':
                METADATA_CHECK_INDEX(map->index, files[j]->nb_streams, "stream")
                meta[j] = &files[j]->streams[map->index]->metadata;
                break;
            case 'c':
                METADATA_CHECK_INDEX(map->index, files[j]->nb_chapters, "chapter")
                meta[j] = &files[j]->chapters[map->index]->metadata;
                break;
            case 'p':
                METADATA_CHECK_INDEX(map->index, files[j]->nb_programs, "program")
                meta[j] = &files[j]->programs[map->index]->metadata;
                break;
            }
        }

        av_dict_copy(meta[0], *meta[1], AV_DICT_DONT_OVERWRITE);
    }

    /* copy global metadata by default */
    if (metadata_global_autocopy) {

        for (i = 0; i < nb_output_files; i++)
            av_dict_copy(&output_files[i]->metadata, input_files[0].ctx->metadata,
                         AV_DICT_DONT_OVERWRITE);
    }

    /* copy chapters according to chapter maps */
    for (i = 0; i < nb_chapter_maps; i++) {           // chapter mapping; safe to skim
        int infile  = chapter_maps[i].in_file;
        int outfile = chapter_maps[i].out_file;

        if (infile < 0 || outfile < 0)
            continue;
        if (infile >= nb_input_files) {
            snprintf(error, sizeof(error), "Invalid input file index %d in chapter mapping.\n", infile);
            ret = AVERROR(EINVAL);
            goto dump_format;
        }
        if (outfile >= nb_output_files) {
            snprintf(error, sizeof(error), "Invalid output file index %d in chapter mapping.\n",outfile);
            ret = AVERROR(EINVAL);
            goto dump_format;
        }
        copy_chapters(infile, outfile);
    }

    /* copy chapters from the first input file that has them*/
    if (!nb_chapter_maps)
        for (i = 0; i < nb_input_files; i++) {
            if (!input_files[i].ctx->nb_chapters)
                continue;

            for (j = 0; j < nb_output_files; j++)
                if ((ret = copy_chapters(i, j)) < 0)
                    goto dump_format;
            break;
        }

    /* open files and write file headers */
    for(i=0;i<nb_output_files;i++) {
        os = output_files[i];   // fetch the i-th output format context
        if (avformat_write_header(os, &output_opts[i]) < 0) {
            snprintf(error, sizeof(error), "Could not write header for output file #%d (incorrect codec parameters ?)", i);
            ret = AVERROR(EINVAL);
            goto dump_format;
        }
        assert_avoptions(output_opts[i]);
        if (strcmp(output_files[i]->oformat->name, "rtp")) {
            want_sdp = 0;
        }
    }

 dump_format:
    /* dump the file output parameters - cannot be done before in case
       of stream copy */
    for(i=0;i<nb_output_files;i++) {
        av_dump_format(output_files[i], i, output_files[i]->filename, 1);
    }

    /* dump the stream mapping */
    if (verbose >= 0) {
        fprintf(stderr, "Stream mapping:\n");
        for(i=0;i<nb_ostreams;i++) {
            ost = ost_table[i];
            fprintf(stderr, "  Stream #%d.%d -> #%d.%d",
                    input_streams[ost->source_index].file_index,
                    input_streams[ost->source_index].st->index,
                    ost->file_index,
                    ost->index);
            if (ost->sync_ist != &input_streams[ost->source_index])
                fprintf(stderr, " [sync #%d.%d]",
                        ost->sync_ist->file_index,
                        ost->sync_ist->st->index);
            fprintf(stderr, "\n");
        }
    }

    if (ret) {
        fprintf(stderr, "%s\n", error);
        goto fail;
    }

    if (want_sdp) {
        print_sdp(output_files, nb_output_files);
    }

    if (verbose >= 0)
        fprintf(stderr, "Press ctrl-c to stop encoding\n");
    term_init();

    timer_start = av_gettime();
    // the transcoding main loop starts here; this is the heart of the transcoder
    for(; received_sigterm == 0;) {
        int file_index, ist_index;
        AVPacket pkt;
        double ipts_min;
        double opts_min;

    redo:
        ipts_min= 1e100;
        opts_min= 1e100;

        /* select the stream that we must read now by looking at the
           smallest output pts */
        file_index = -1;
        for(i=0;i<nb_ostreams;i++) {
            double ipts, opts;
            ost = ost_table[i];
            os = output_files[ost->file_index];
            ist = &input_streams[ost->source_index];
            if(ist->is_past_recording_time || no_packet[ist->file_index])
                continue;
            opts = ost->st->pts.val * av_q2d(ost->st->time_base);
            ipts = (double)ist->pts;
            if (!input_files[ist->file_index].eof_reached){
                if(ipts < ipts_min) {
                    ipts_min = ipts;
                    if(input_sync ) file_index = ist->file_index;
                }
                if(opts < opts_min) {
                    opts_min = opts;
                    if(!input_sync) file_index = ist->file_index;
                }
            }
            if(ost->frame_number >= max_frames[ost->st->codec->codec_type]){
                file_index= -1;
                break;
            }
        }
        /* if none is left, we are finished */
        if (file_index < 0) {
            if(no_packet_count){
                no_packet_count=0;
                memset(no_packet, 0, sizeof(no_packet));
                usleep(10000);
                continue;
            }
            break;
        }

        /* finish if limit size exhausted */
        if (limit_filesize != 0 && limit_filesize <= avio_tell(output_files[0]->pb))
            break;

        /* read a packet from the selected input and route it to the output */
        is = input_files[file_index].ctx; // the format context of the chosen input file
        ret= av_read_frame(is, &pkt); // read one packet of compressed data
        if(ret == AVERROR(EAGAIN)){ // no packet available right now (non-blocking inputs); retry later
            no_packet[file_index]=1;
            no_packet_count++;
            continue;
        }
        if (ret < 0) {
            input_files[file_index].eof_reached = 1; // av_read_frame failed, which normally means end of file
            if (opt_shortest)
                break;
            else
                continue;
        }

        no_packet_count=0;
        memset(no_packet, 0, sizeof(no_packet));

        if (do_pkt_dump) {
            av_pkt_dump_log2(NULL, AV_LOG_DEBUG, &pkt, do_hex_dump,   // debug dump, safe to skip
                             is->streams[pkt.stream_index]);
        }
        /* the following test is needed in case new streams appear
           dynamically in the stream: we ignore them */
        if (pkt.stream_index >= input_files[file_index].nb_streams)
            goto discard_packet;
        ist_index = input_files[file_index].ist_index + pkt.stream_index;
        ist = &input_streams[ist_index];
        if (ist->discard)
            goto discard_packet;

        if (pkt.dts != AV_NOPTS_VALUE)
            pkt.dts += av_rescale_q(input_files[ist->file_index].ts_offset, AV_TIME_BASE_Q, ist->st->time_base);
        if (pkt.pts != AV_NOPTS_VALUE)
            pkt.pts += av_rescale_q(input_files[ist->file_index].ts_offset, AV_TIME_BASE_Q, ist->st->time_base);

        if (ist->ts_scale) {
            if(pkt.pts != AV_NOPTS_VALUE)
                pkt.pts *= ist->ts_scale;
            if(pkt.dts != AV_NOPTS_VALUE)
                pkt.dts *= ist->ts_scale;
        }

//        fprintf(stderr, "next:%"PRId64" dts:%"PRId64" off:%"PRId64" %d\n", ist->next_pts, pkt.dts, input_files[ist->file_index].ts_offset, ist->st->codec->codec_type);
        if (pkt.dts != AV_NOPTS_VALUE && ist->next_pts != AV_NOPTS_VALUE   // discontinuity handling, only for AVFMT_TS_DISCONT formats
            && (is->iformat->flags & AVFMT_TS_DISCONT)) {
            int64_t pkt_dts= av_rescale_q(pkt.dts, ist->st->time_base, AV_TIME_BASE_Q);
            int64_t delta= pkt_dts - ist->next_pts;
            if((FFABS(delta) > 1LL*dts_delta_threshold*AV_TIME_BASE || pkt_dts+1<ist->pts)&& !copy_ts){
                input_files[ist->file_index].ts_offset -= delta;
                if (verbose > 2)
                    fprintf(stderr, "timestamp discontinuity %"PRId64", new offset= %"PRId64"\n",
                            delta, input_files[ist->file_index].ts_offset);
                pkt.dts-= av_rescale_q(delta, AV_TIME_BASE_Q, ist->st->time_base);
                if(pkt.pts != AV_NOPTS_VALUE)
                    pkt.pts-= av_rescale_q(delta, AV_TIME_BASE_Q, ist->st->time_base);
            }
        }

        /* finish if recording time exhausted */
        if (recording_time != INT64_MAX &&
            av_compare_ts(pkt.pts, ist->st->time_base, recording_time + start_time, (AVRational){1, 1000000}) >= 0) {
            ist->is_past_recording_time = 1;
            goto discard_packet;
        }

        //fprintf(stderr,"read #%d.%d size=%d\n", ist->file_index, ist->st->index, pkt.size);
        if (output_packet(ist, ist_index, ost_table, nb_ostreams, &pkt) < 0) {   // the key call: output_packet() decodes this packet, re-encodes it and appends the result to the output file; the actual transcoding happens here

            if (verbose >= 0)
                fprintf(stderr, "Error while decoding stream #%d.%d\n",
                        ist->file_index, ist->st->index);
            if (exit_on_error)
                exit_program(1);
            av_free_packet(&pkt);
            goto redo;
        }

    discard_packet:
        av_free_packet(&pkt);

        /* dump report by using the output first video and audio streams */
        print_report(output_files, ost_table, nb_ostreams, 0);
    }
    /* at the end of stream, we must flush the decoder buffers */
    for (i = 0; i < nb_input_streams; i++) {
        ist = &input_streams[i];
        if (ist->decoding_needed) {
            output_packet(ist, i, ost_table, nb_ostreams, NULL);
        }
    }

    term_exit();

    /* write the trailer if needed and close file */
    for(i=0;i<nb_output_files;i++) {
        os = output_files[i];
        av_write_trailer(os);
    }

    /* dump report by using the first video and audio streams */
    print_report(output_files, ost_table, nb_ostreams, 1); // print the final report; transcoding is complete

    /* close each encoder */
    for(i=0;i<nb_ostreams;i++) {
        ost = ost_table[i];
        if (ost->encoding_needed) {
            av_freep(&ost->st->codec->stats_in);
            avcodec_close(ost->st->codec);
        }
#if CONFIG_AVFILTER
        avfilter_graph_free(&ost->graph);
#endif
    }

    /* close each decoder */
    for (i = 0; i < nb_input_streams; i++) {
        ist = &input_streams[i];
        if (ist->decoding_needed) {
            avcodec_close(ist->st->codec);
        }
    }

    /* finished ! */
    ret = 0;

 fail:
    av_freep(&bit_buffer);

    if (ost_table) {
        for(i=0;i<nb_ostreams;i++) {
            ost = ost_table[i];
            if (ost) {
                if (ost->st->stream_copy)
                    av_freep(&ost->st->codec->extradata);
                if (ost->logfile) {
                    fclose(ost->logfile);
                    ost->logfile = NULL;
                }
                av_fifo_free(ost->fifo); /* works even if fifo is not
                                             initialized but set to zero */
                av_freep(&ost->st->codec->subtitle_header);
                av_free(ost->pict_tmp.data[0]);
                av_free(ost->forced_kf_pts);
                if (ost->video_resample)
                    sws_freeContext(ost->img_resample_ctx);
                if (ost->resample)
                    audio_resample_close(ost->resample);
                if (ost->reformat_ctx)
                    av_audio_convert_free(ost->reformat_ctx);
                av_dict_free(&ost->opts);
                av_free(ost);
            }
        }
        av_free(ost_table);
    }
    return ret;
}

Having gone through the whole function, the transcoding process boils down to: read one packet from the input stream, then write that packet (after re-encoding) to the end of the output stream. The two key calls are av_read_frame(is, &pkt) and output_packet(ist, ist_index, ost_table, nb_ostreams, &pkt). Since av_read_frame belongs to the FFmpeg library itself, the remaining piece to analyze is output_packet; there is no way around this function.


    Analysis of output_packet

static int output_packet(InputStream *ist, int ist_index,
                         OutputStream **ost_table, int nb_ostreams,
                         const AVPacket *pkt)
{
    AVFormatContext *os;
    OutputStream *ost;
    int ret, i;
    int got_output;
    AVFrame picture;
    void *buffer_to_free = NULL;
    static unsigned int samples_size= 0;
    AVSubtitle subtitle, *subtitle_to_free;
    int64_t pkt_pts = AV_NOPTS_VALUE;
#if CONFIG_AVFILTER
    int frame_available;
#endif
    float quality;

    AVPacket avpkt;
    int bps = av_get_bytes_per_sample(ist->st->codec->sample_fmt);

    if(ist->next_pts == AV_NOPTS_VALUE)
        ist->next_pts= ist->pts;

    if (pkt == NULL) {
        /* EOF handling */
        av_init_packet(&avpkt);
        avpkt.data = NULL;
        avpkt.size = 0;
        goto handle_eof;
    } else {
        avpkt = *pkt;
    }

    if(pkt->dts != AV_NOPTS_VALUE)
        ist->next_pts = ist->pts = av_rescale_q(pkt->dts, ist->st->time_base, AV_TIME_BASE_Q);
    if(pkt->pts != AV_NOPTS_VALUE)
        pkt_pts = av_rescale_q(pkt->pts, ist->st->time_base, AV_TIME_BASE_Q);

    // while we have more to decode, or while the decoder did output something on EOF
    while (avpkt.size > 0 || (!pkt && got_output)) {
        uint8_t *data_buf, *decoded_data_buf;
        int data_size, decoded_data_size;
    handle_eof:
        ist->pts= ist->next_pts;

        if(avpkt.size && avpkt.size != pkt->size &&
           ((!ist->showed_multi_packet_warning && verbose>0) || verbose>1)){
            fprintf(stderr, "Multiple frames in a packet from stream %d\n", pkt->stream_index);
            ist->showed_multi_packet_warning=1;
        }

        /* decode the packet if needed */        // i.e. only decode when the stream actually requires it
        decoded_data_buf = NULL; /* fail safe */
        decoded_data_size= 0;
        data_buf  = avpkt.data;
        data_size = avpkt.size;
        subtitle_to_free = NULL;
        if (ist->decoding_needed) { // this packet must be decoded; decoding_needed is set on the InputStream when decoding is required
            switch(ist->st->codec->codec_type) { // dispatch on media type; for now we focus on the video path
            case AVMEDIA_TYPE_AUDIO:{
                if(pkt && samples_size < FFMAX(pkt->size*sizeof(*samples), AVCODEC_MAX_AUDIO_FRAME_SIZE)) {
                    samples_size = FFMAX(pkt->size*sizeof(*samples), AVCODEC_MAX_AUDIO_FRAME_SIZE);
                    av_free(samples);
                    samples= av_malloc(samples_size);
                }
                decoded_data_size= samples_size;
                    /* XXX: could avoid copy if PCM 16 bits with same
                       endianness as CPU */
                ret = avcodec_decode_audio3(ist->st->codec, samples, &decoded_data_size,
                                            &avpkt);
                if (ret < 0)
                    return ret;
                avpkt.data += ret;
                avpkt.size -= ret;
                data_size   = ret;
                got_output  = decoded_data_size > 0;
                /* Some bug in mpeg audio decoder gives */
                /* decoded_data_size < 0, it seems they are overflows */
                if (!got_output) {
                    /* no audio frame */
                    continue;
                }
                decoded_data_buf = (uint8_t *)samples;
                ist->next_pts += ((int64_t)AV_TIME_BASE/bps * decoded_data_size) /
                    (ist->st->codec->sample_rate * ist->st->codec->channels);
                break;}
            case AVMEDIA_TYPE_VIDEO:
                    decoded_data_size = (ist->st->codec->width * ist->st->codec->height * 3) / 2;
                    /* XXX: allocate picture correctly */
                    avcodec_get_frame_defaults(&picture);
                    avpkt.pts = pkt_pts;
                    avpkt.dts = ist->pts;
                    pkt_pts = AV_NOPTS_VALUE;

                    ret = avcodec_decode_video2(ist->st->codec,
                                                &picture, &got_output, &avpkt);   // the key call: decode the frame of size avpkt->size from avpkt->data into picture

                    quality = same_quality ? picture.quality : 0;
                    if (ret < 0)
                        return ret;
                    if (!got_output) {
                        /* no picture yet */
                        goto discard_packet;
                    }
                    ist->next_pts = ist->pts = guess_correct_pts(&ist->pts_ctx, picture.pkt_pts, picture.pkt_dts);
                    if (ist->st->codec->time_base.num != 0) {
                        int ticks= ist->st->parser ? ist->st->parser->repeat_pict+1 : ist->st->codec->ticks_per_frame;
                        ist->next_pts += ((int64_t)AV_TIME_BASE *
                                          ist->st->codec->time_base.num * ticks) /
                            ist->st->codec->time_base.den;
                    }
                    avpkt.size = 0;
                    buffer_to_free = NULL;
                    pre_process_video_frame(ist, (AVPicture *)&picture, &buffer_to_free);
                    break;
            case AVMEDIA_TYPE_SUBTITLE:
                ret = avcodec_decode_subtitle2(ist->st->codec,
                                               &subtitle, &got_output, &avpkt);
                if (ret < 0)
                    return ret;
                if (!got_output) {
                    goto discard_packet;
                }
                subtitle_to_free = &subtitle;
                avpkt.size = 0;
                break;
            default:
                return -1;
            }
        } else {
            switch(ist->st->codec->codec_type) {
            case AVMEDIA_TYPE_AUDIO:
                ist->next_pts += ((int64_t)AV_TIME_BASE * ist->st->codec->frame_size) /
                    ist->st->codec->sample_rate;
                break;
            case AVMEDIA_TYPE_VIDEO:
                if (ist->st->codec->time_base.num != 0) {
                    int ticks= ist->st->parser ? ist->st->parser->repeat_pict+1 : ist->st->codec->ticks_per_frame;
                    ist->next_pts += ((int64_t)AV_TIME_BASE *
                                      ist->st->codec->time_base.num * ticks) /
                        ist->st->codec->time_base.den;
                }
                break;
            }
            ret = avpkt.size;
            avpkt.size = 0;
        }

#if CONFIG_AVFILTER
        if (ist->st->codec->codec_type == AVMEDIA_TYPE_VIDEO) {
            for (i = 0; i < nb_ostreams; i++) {
                ost = ost_table[i];
                if (ost->input_video_filter && ost->source_index == ist_index) {
                    AVRational sar;
                    if (ist->st->sample_aspect_ratio.num)
                        sar = ist->st->sample_aspect_ratio;
                    else
                        sar = ist->st->codec->sample_aspect_ratio;
                    // add it to be filtered
                    av_vsrc_buffer_add_frame(ost->input_video_filter, &picture,
                                             ist->pts,
                                             sar);
                }
            }
        }
#endif

        // preprocess audio (volume)
        if (ist->st->codec->codec_type == AVMEDIA_TYPE_AUDIO) {
            if (audio_volume != 256) {
                short *volp;
                volp = samples;
                for(i=0;i<(decoded_data_size / sizeof(short));i++) {
                    int v = ((*volp) * audio_volume + 128) >> 8;
                    if (v < -32768) v = -32768;
                    if (v >  32767) v = 32767;
                    *volp++ = v;
                }
            }
        }

        /* frame rate emulation */
        if (rate_emu) {
            int64_t pts = av_rescale(ist->pts, 1000000, AV_TIME_BASE);
            int64_t now = av_gettime() - ist->start;
            if (pts > now)
                usleep(pts - now);
        }
        /* if output time reached then transcode raw format,
           encode packets and output them */   // the encoding half starts here
        if (start_time == 0 || ist->pts >= start_time)
            for(i=0;i<nb_ostreams;i++) {
                int frame_size;

                ost = ost_table[i];
                if (ost->source_index == ist_index) {
#if CONFIG_AVFILTER
                frame_available = ist->st->codec->codec_type != AVMEDIA_TYPE_VIDEO ||
                    !ost->output_video_filter || avfilter_poll_frame(ost->output_video_filter->inputs[0]);
                while (frame_available) {
                    AVRational ist_pts_tb;
                    if (ist->st->codec->codec_type == AVMEDIA_TYPE_VIDEO && ost->output_video_filter)
                        get_filtered_video_frame(ost->output_video_filter, &picture, &ost->picref, &ist_pts_tb);
                    if (ost->picref)
                        ist->pts = av_rescale_q(ost->picref->pts, ist_pts_tb, AV_TIME_BASE_Q);
#endif
                    os = output_files[ost->file_index];

                    /* set the input output pts pairs */
                    //ost->sync_ipts = (double)(ist->pts + input_files[ist->file_index].ts_offset - start_time)/ AV_TIME_BASE;

                    if (ost->encoding_needed) {  // does this output stream need re-encoding?
                        av_assert0(ist->decoding_needed);
                        switch(ost->st->codec->codec_type) {
                        case AVMEDIA_TYPE_AUDIO:
                            do_audio_out(os, ost, ist, decoded_data_buf, decoded_data_size);
                            break;
                        case AVMEDIA_TYPE_VIDEO: // the path we care about
#if CONFIG_AVFILTER
                            if (ost->picref->video && !ost->frame_aspect_ratio)
                                ost->st->codec->sample_aspect_ratio = ost->picref->video->pixel_aspect;
#endif
                            do_video_out(os, ost, ist, &picture, &frame_size,
                                         same_quality ? quality : ost->st->codec->global_quality); // do_video_out() encodes the picture and writes it out; that function is next on the list
                            if (vstats_filename && frame_size)
                                do_video_stats(os, ost, frame_size);
                            break;
                        case AVMEDIA_TYPE_SUBTITLE:
                            do_subtitle_out(os, ost, ist, &subtitle,
                                            pkt->pts);
                            break;
                        default:
                            abort();
                        }
                    } else {
                        AVFrame avframe; //FIXME/XXX remove this
                        AVPacket opkt;
                        int64_t ost_tb_start_time= av_rescale_q(start_time, AV_TIME_BASE_Q, ost->st->time_base);

                        av_init_packet(&opkt);

                        if ((!ost->frame_number && !(pkt->flags & AV_PKT_FLAG_KEY)) && !copy_initial_nonkeyframes)
#if !CONFIG_AVFILTER
                            continue;
#else
                            goto cont;
#endif

                        /* no reencoding needed : output the packet directly */
                        /* force the input stream PTS */

                        avcodec_get_frame_defaults(&avframe);
                        ost->st->codec->coded_frame= &avframe;
                        avframe.key_frame = pkt->flags & AV_PKT_FLAG_KEY;

                        if(ost->st->codec->codec_type == AVMEDIA_TYPE_AUDIO)
                            audio_size += data_size;
                        else if (ost->st->codec->codec_type == AVMEDIA_TYPE_VIDEO) {
                            video_size += data_size;
                            ost->sync_opts++;
                        }

                        opkt.stream_index= ost->index;
                        if(pkt->pts != AV_NOPTS_VALUE)
                            opkt.pts= av_rescale_q(pkt->pts, ist->st->time_base, ost->st->time_base) - ost_tb_start_time;
                        else
                            opkt.pts= AV_NOPTS_VALUE;

                        if (pkt->dts == AV_NOPTS_VALUE)
                            opkt.dts = av_rescale_q(ist->pts, AV_TIME_BASE_Q, ost->st->time_base);
                        else
                            opkt.dts = av_rescale_q(pkt->dts, ist->st->time_base, ost->st->time_base);
                        opkt.dts -= ost_tb_start_time;

                        opkt.duration = av_rescale_q(pkt->duration, ist->st->time_base, ost->st->time_base);
                        opkt.flags= pkt->flags;

                        //FIXME remove the following 2 lines they shall be replaced by the bitstream filters
                        if(   ost->st->codec->codec_id != CODEC_ID_H264
                           && ost->st->codec->codec_id != CODEC_ID_MPEG1VIDEO
                           && ost->st->codec->codec_id != CODEC_ID_MPEG2VIDEO
                           ) {
                            if(av_parser_change(ist->st->parser, ost->st->codec, &opkt.data, &opkt.size, data_buf, data_size, pkt->flags & AV_PKT_FLAG_KEY))
                                opkt.destruct= av_destruct_packet;
                        } else {
                            opkt.data = data_buf;
                            opkt.size = data_size;
                        }

                        write_frame(os, &opkt, ost->st->codec, ost->bitstream_filters);
                        ost->st->codec->frame_number++;
                        ost->frame_number++;
                        av_free_packet(&opkt);
                    }
#if CONFIG_AVFILTER
                    cont:
                    frame_available = (ist->st->codec->codec_type == AVMEDIA_TYPE_VIDEO) &&
                                       ost->output_video_filter && avfilter_poll_frame(ost->output_video_filter->inputs[0]);
                    if (ost->picref)
                        avfilter_unref_buffer(ost->picref);
                }
#endif
                }
            }

        av_free(buffer_to_free);
        /* XXX: allocate the subtitles in the codec ? */
        if (subtitle_to_free) {
            avsubtitle_free(subtitle_to_free);
            subtitle_to_free = NULL;
        }
    }
 discard_packet:
    if (pkt == NULL) {
        /* EOF handling */

        for(i=0;i<nb_ostreams;i++) {
            ost = ost_table[i];
            if (ost->source_index == ist_index) {
                AVCodecContext *enc= ost->st->codec;
                os = output_files[ost->file_index];

                if(ost->st->codec->codec_type == AVMEDIA_TYPE_AUDIO && enc->frame_size <=1)
                    continue;
                if(ost->st->codec->codec_type == AVMEDIA_TYPE_VIDEO && (os->oformat->flags & AVFMT_RAWPICTURE))
                    continue;

                if (ost->encoding_needed) {
                    for(;;) {
                        AVPacket pkt;
                        int fifo_bytes;
                        av_init_packet(&pkt);
                        pkt.stream_index= ost->index;

                        switch(ost->st->codec->codec_type) {
                        case AVMEDIA_TYPE_AUDIO:
                            fifo_bytes = av_fifo_size(ost->fifo);
                            ret = 0;
                            /* encode any samples remaining in fifo */
                            if (fifo_bytes > 0) {
                                int osize = av_get_bytes_per_sample(enc->sample_fmt);
                                int fs_tmp = enc->frame_size;

                                av_fifo_generic_read(ost->fifo, audio_buf, fifo_bytes, NULL);
                                if (enc->codec->capabilities & CODEC_CAP_SMALL_LAST_FRAME) {
                                    enc->frame_size = fifo_bytes / (osize * enc->channels);
                                } else { /* pad */
                                    int frame_bytes = enc->frame_size*osize*enc->channels;
                                    if (allocated_audio_buf_size < frame_bytes)
                                        exit_program(1);
                                    generate_silence(audio_buf+fifo_bytes, enc->sample_fmt, frame_bytes - fifo_bytes);
                                }

                                ret = avcodec_encode_audio(enc, bit_buffer, bit_buffer_size, (short *)audio_buf);
                                pkt.duration = av_rescale((int64_t)enc->frame_size*ost->st->time_base.den,
                                                          ost->st->time_base.num, enc->sample_rate);
                                enc->frame_size = fs_tmp;
                            }
                            if(ret <= 0) {
                                ret = avcodec_encode_audio(enc, bit_buffer, bit_buffer_size, NULL);
                            }
                            if (ret < 0) {
                                fprintf(stderr, "Audio encoding failed\n");
                                exit_program(1);
                            }
                            audio_size += ret;
                            pkt.flags |= AV_PKT_FLAG_KEY;
                            break;
                        case AVMEDIA_TYPE_VIDEO:
                            ret = avcodec_encode_video(enc, bit_buffer, bit_buffer_size, NULL);
                            if (ret < 0) {
                                fprintf(stderr, "Video encoding failed\n");
                                exit_program(1);
                            }
                            video_size += ret;
                            if(enc->coded_frame && enc->coded_frame->key_frame)
                                pkt.flags |= AV_PKT_FLAG_KEY;
                            if (ost->logfile && enc->stats_out) {
                                fprintf(ost->logfile, "%s", enc->stats_out);
                            }
                            break;
                        default:
                            ret=-1;
                        }

                        if(ret<=0)
                            break;
                        pkt.data= bit_buffer;
                        pkt.size= ret;
                        if(enc->coded_frame && enc->coded_frame->pts != AV_NOPTS_VALUE)
                            pkt.pts= av_rescale_q(enc->coded_frame->pts, enc->time_base, ost->st->time_base);
                        write_frame(os, &pkt, ost->st->codec, ost->bitstream_filters);
                    }
                }
            }
        }
    }

    return 0;
}

At its core, this function is a decode -> encode pipeline. Setting audio aside, the two key calls are avcodec_decode_video2 -> do_video_out. avcodec_decode_video2 is a library function, so we will not dig into its internals here; knowing how to use it is enough. do_video_out, however, is not a library function, so let's look at it next.

Analysis of the do_video_out function

static void do_video_out(AVFormatContext *s,
                         OutputStream *ost,
                         InputStream *ist,
                         AVFrame *in_picture,
                         int *frame_size, float quality)
{
    int nb_frames, i, ret, resample_changed;
    AVFrame *final_picture, *formatted_picture;
    AVCodecContext *enc, *dec;
    double sync_ipts;

    enc = ost->st->codec;
    dec = ist->st->codec;

    sync_ipts = get_sync_ipts(ost) / av_q2d(enc->time_base);

    /* by default, we output a single frame */
    nb_frames = 1;

    *frame_size = 0;

    if(video_sync_method){
        double vdelta = sync_ipts - ost->sync_opts;
        //FIXME set to 0.5 after we fix some dts/pts bugs like in avidec.c
        if (vdelta < -1.1)
            nb_frames = 0;
        else if (video_sync_method == 2 || (video_sync_method<0 && (s->oformat->flags & AVFMT_VARIABLE_FPS))){
            if(vdelta<=-0.6){
                nb_frames=0;
            }else if(vdelta>0.6)
                ost->sync_opts= lrintf(sync_ipts);
        }else if (vdelta > 1.1)
            nb_frames = lrintf(vdelta);
//fprintf(stderr, "vdelta:%f, ost->sync_opts:%"PRId64", ost->sync_ipts:%f nb_frames:%d\n", vdelta, ost->sync_opts, get_sync_ipts(ost), nb_frames);
        if (nb_frames == 0){
            ++nb_frames_drop;
            if (verbose>2)
                fprintf(stderr, "*** drop!\n");
        }else if (nb_frames > 1) {
            nb_frames_dup += nb_frames - 1;
            if (verbose>2)
                fprintf(stderr, "*** %d dup!\n", nb_frames-1);
        }
    }else
        ost->sync_opts= lrintf(sync_ipts);

    nb_frames= FFMIN(nb_frames, max_frames[AVMEDIA_TYPE_VIDEO] - ost->frame_number);
    if (nb_frames <= 0)
        return;

    formatted_picture = in_picture;
    final_picture = formatted_picture;

    resample_changed = ost->resample_width   != dec->width  ||
                       ost->resample_height  != dec->height ||
                       ost->resample_pix_fmt != dec->pix_fmt;

    if (resample_changed) {
        av_log(NULL, AV_LOG_INFO,
               "Input stream #%d.%d frame changed from size:%dx%d fmt:%s to size:%dx%d fmt:%s\n",
               ist->file_index, ist->st->index,
               ost->resample_width, ost->resample_height, av_get_pix_fmt_name(ost->resample_pix_fmt),
               dec->width         , dec->height         , av_get_pix_fmt_name(dec->pix_fmt));
        if(!ost->video_resample)
            exit_program(1);
    }

#if !CONFIG_AVFILTER
    if (ost->video_resample) {
        final_picture = &ost->pict_tmp;
        if (resample_changed) {
            /* initialize a new scaler context */
            sws_freeContext(ost->img_resample_ctx);
            ost->img_resample_ctx = sws_getContext(
                ist->st->codec->width,
                ist->st->codec->height,
                ist->st->codec->pix_fmt,
                ost->st->codec->width,
                ost->st->codec->height,
                ost->st->codec->pix_fmt,
                ost->sws_flags, NULL, NULL, NULL);
            if (ost->img_resample_ctx == NULL) {
                fprintf(stderr, "Cannot get resampling context\n");
                exit_program(1);
            }
        }
        sws_scale(ost->img_resample_ctx, formatted_picture->data, formatted_picture->linesize,
              0, ost->resample_height, final_picture->data, final_picture->linesize);
    }
#endif

    /* duplicates frame if needed */
    for(i=0;i<nb_frames;i++) {
        AVPacket pkt;
        av_init_packet(&pkt);
        pkt.stream_index= ost->index;

        if (s->oformat->flags & AVFMT_RAWPICTURE) {
            /* raw pictures are written as AVPicture structure to
               avoid any copies. We support temporarily the older
               method. */
            AVFrame* old_frame = enc->coded_frame;
            enc->coded_frame = dec->coded_frame; //FIXME/XXX remove this hack
            pkt.data= (uint8_t *)final_picture;
            pkt.size=  sizeof(AVPicture);
            pkt.pts= av_rescale_q(ost->sync_opts, enc->time_base, ost->st->time_base);
            pkt.flags |= AV_PKT_FLAG_KEY;

            write_frame(s, &pkt, ost->st->codec, ost->bitstream_filters);
            enc->coded_frame = old_frame;
        } else {
            AVFrame big_picture;

            big_picture= *final_picture;
            /* better than nothing: use input picture interlaced
               settings */
            big_picture.interlaced_frame = in_picture->interlaced_frame;
            if (ost->st->codec->flags & (CODEC_FLAG_INTERLACED_DCT|CODEC_FLAG_INTERLACED_ME)) {
                if(top_field_first == -1)
                    big_picture.top_field_first = in_picture->top_field_first;
                else
                    big_picture.top_field_first = top_field_first;
            }

            /* handles sameq here. This is not correct because it may
               not be a global option */
            big_picture.quality = quality;
            if(!me_threshold)
                big_picture.pict_type = 0;
//            big_picture.pts = AV_NOPTS_VALUE;
            big_picture.pts= ost->sync_opts;
//            big_picture.pts= av_rescale(ost->sync_opts, AV_TIME_BASE*(int64_t)enc->time_base.num, enc->time_base.den);
//av_log(NULL, AV_LOG_DEBUG, "%"PRId64" -> encoder\n", ost->sync_opts);
            if (ost->forced_kf_index < ost->forced_kf_count &&
                big_picture.pts >= ost->forced_kf_pts[ost->forced_kf_index]) {
                big_picture.pict_type = AV_PICTURE_TYPE_I;
                ost->forced_kf_index++;
            }
            ret = avcodec_encode_video(enc,
                                       bit_buffer, bit_buffer_size,
                                       &big_picture);    // encode one frame
            if (ret < 0) {
                fprintf(stderr, "Video encoding failed\n");
                exit_program(1);
            }

            if(ret>0){
                pkt.data= bit_buffer;
                pkt.size= ret;
                if(enc->coded_frame->pts != AV_NOPTS_VALUE)
                    pkt.pts= av_rescale_q(enc->coded_frame->pts, enc->time_base, ost->st->time_base);
/*av_log(NULL, AV_LOG_DEBUG, "encoder -> %"PRId64"/%"PRId64"\n",
   pkt.pts != AV_NOPTS_VALUE ? av_rescale(pkt.pts, enc->time_base.den, AV_TIME_BASE*(int64_t)enc->time_base.num) : -1,
   pkt.dts != AV_NOPTS_VALUE ? av_rescale(pkt.dts, enc->time_base.den, AV_TIME_BASE*(int64_t)enc->time_base.num) : -1);*/

                if(enc->coded_frame->key_frame)
                    pkt.flags |= AV_PKT_FLAG_KEY;
                write_frame(s, &pkt, ost->st->codec, ost->bitstream_filters);
                *frame_size = ret;
                video_size += ret;
                //fprintf(stderr,"\nFrame: %3d size: %5d type: %d",
                //        enc->frame_number-1, ret, enc->pict_type);
                /* if two pass, output log */
                if (ost->logfile && enc->stats_out) {
                    fprintf(ost->logfile, "%s", enc->stats_out);
                }
            }
        }
        ost->sync_opts++;
        ost->frame_number++;
    }
}
The essential call in the function above is avcodec_encode_video: everything else is bookkeeping around encoding one frame.


Working back through all of these functions from start to finish, it turns out only a handful really matter; the bulk of the code exists to handle the many corner cases. The important functions, stage by stage:

1. avformat_open_input — "Open an input stream and read the header. The codecs are not opened. The stream must be closed with av_close_input_file()." In other words, it opens an input stream and reads its header, but does not open any codec.
     (file-open stage)
2. av_guess_format — determines the output stream's format: "Return the output format in the list of registered output formats which best matches the provided parameters, or return NULL if there is no match."
    (output-file stage)
3. av_read_frame (read one packet) | avcodec_decode_video2 (decode one frame) | avcodec_encode_video (encode one frame)
     (transcoding stage)
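Put together, the three stages form the familiar transcoding skeleton. As pseudocode over the old (pre-0.9) API used in these listings, with error handling and codec setup omitted, the flow is roughly:

```
// pseudocode sketch, not compilable as-is
avformat_open_input(&ic, input_filename, NULL, NULL); // 1. open input, read header
ofmt = av_guess_format(NULL, output_filename, NULL);  // 2. pick output container

while (av_read_frame(ic, &pkt) >= 0) {                // 3. one packet at a time
    avcodec_decode_video2(dec_ctx, frame, &got, &pkt);     // packet -> frame
    if (got)
        do_video_out(oc, ost, ist, frame, &size, quality); // frame -> packet -> file
}
```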





 a. AVFormatContext

One blog post introduces this structure, though not in much detail; it also contains a diagram of how the various structures relate, which is worth consulting.

The content below comes from a blog post at http://blog.youkuaiyun.com/leixiaohua1020/article/details/11693997

FFMPEG has a great many structures. The key ones fall into the following categories:

a)        Protocol layer (http, rtsp, rtmp, mms)

AVIOContext, URLProtocol and URLContext mainly store the type and state of the protocol used by the audio/video stream. Each protocol corresponds to one URLProtocol structure. (Note: in FFMPEG a plain file is also treated as a protocol, "file".)

b)        Demuxing (flv, avi, rmvb, mp4)

AVFormatContext mainly stores the information contained in the container format; AVInputFormat stores the container format used by the input. Each container format corresponds to one AVInputFormat structure.

c)        Decoding (h264, mpeg2, aac, mp3)

Each AVStream stores the data of one video/audio stream; each AVStream has an AVCodecContext storing how that stream is decoded; each AVCodecContext points to an AVCodec, the decoder for that stream. Each decoder corresponds to one AVCodec structure.

d) Data storage

For video, each structure usually holds one frame; for audio it may hold several frames.

Data before decoding: AVPacket

Data after decoding: AVFrame


The relationships between them are shown in a diagram in the original post (figure not reproduced here).