ffmpeg (2): On AVFMT_NOFILE

This article takes a close look at the role of the AVFMT_NOFILE flag in FFmpeg and where it applies, in particular for special devices that do not go through standard file I/O. It also covers the related usage details of AVIOContext and AVIOInterruptCB.

Reposted from: http://blog.youkuaiyun.com/xiruanliuwei/article/details/20325467


While reading the FFmpeg source code you keep running into the flag AVFMT_NOFILE. It is an important flag, defined in avformat.h:

#define AVFMT_NOFILE        0x0001

Searching the whole tree for AVFMT_NOFILE shows that a number of AVInputFormat variables, such as ff_alsa_demuxer, ff_bktr_demuxer, ff_dshow_demuxer, ff_dv1394_demuxer, ff_fbdev_demuxer and ff_image2_demuxer, have the flags member of their struct set to AVFMT_NOFILE. And it is not only AVInputFormat: some AVOutputFormat variables set AVFMT_NOFILE in flags as well. Among the search results, the most detailed explanation is in avdevice.h:

/**
 * @defgroup lavd Special devices muxing/demuxing library
 * @{
 * Libavdevice is a complementary library to @ref libavf "libavformat". It
 * provides various "special" platform-specific muxers and demuxers, e.g. for
 * grabbing devices, audio capture and playback etc. As a consequence, the
 * (de)muxers in libavdevice are of the AVFMT_NOFILE type (they use their own
 * I/O functions). The filename passed to avformat_open_input() often does not
 * refer to an actually existing file, but has some special device-specific
 * meaning - e.g. for x11grab it is the display name.
 *
 * To use libavdevice, simply call avdevice_register_all() to register all
 * compiled muxers and demuxers. They all use standard libavformat API.
 * @}
 */

The libavdevice directory complements libavformat: it provides various platform-specific "special" muxers and demuxers, e.g. for grabbing devices, audio capture and playback. As a consequence, the muxers and demuxers in libavdevice are all of the AVFMT_NOFILE type and use their own I/O functions. The filename passed to avformat_open_input() therefore usually does not refer to an actually existing file but has a device-specific meaning; for x11grab it is the X11 display name (e.g. ":0.0").
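To make this concrete, here is a minimal sketch (not from the original post) of opening an AVFMT_NOFILE demuxer. It assumes an FFmpeg build that includes libavdevice with the x11grab input, and ":0.0" is just a typical display name:

```c
/* Sketch: opening an AVFMT_NOFILE device demuxer (x11grab).
 * The ":0.0" argument is a display name, not a file path. */
#include <libavdevice/avdevice.h>
#include <libavformat/avformat.h>

static int open_screen_grab(AVFormatContext **fmt_ctx)
{
    avdevice_register_all();  /* registers the libavdevice (de)muxers */

    const AVInputFormat *ifmt = av_find_input_format("x11grab");
    if (!ifmt)
        return AVERROR_DEMUXER_NOT_FOUND;

    /* ifmt->flags has AVFMT_NOFILE set, so no AVIOContext (pb) is created;
     * the demuxer talks to the X server through its own I/O. */
    return avformat_open_input(fmt_ctx, ":0.0", ifmt, NULL);
}
```

The same pattern applies to the other device demuxers listed above; only the format name and the device-specific "filename" change.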


In the AVFormatContext struct:

    /**
     * I/O context.
     *
     * decoding: either set by the user before avformat_open_input() (then
     * the user must close it manually) or set by avformat_open_input().
     * encoding: set by the user.
     *
     * Do NOT set this field if AVFMT_NOFILE flag is set in
     * iformat/oformat.flags. In such a case, the (de)muxer will handle
     * I/O in some other way and this field will be NULL.
     */
    AVIOContext *pb;


AVIOContext is the context for byte-stream input/output. When a muxer or demuxer has AVFMT_NOFILE set in its flags member, the pb field must not be set: the (de)muxer handles its I/O in some other way and pb stays NULL.
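For contrast, when a format does read through an AVIOContext, the user may install pb before calling avformat_open_input(). The sketch below is illustrative only: my_read, the 4 KiB buffer size and the opaque pointer are assumptions, and AVFMT_FLAG_CUSTOM_IO tells libavformat that the user owns pb:

```c
#include <libavformat/avformat.h>
#include <libavutil/mem.h>

/* Illustrative read callback: reports EOF immediately. A real one would
 * fill buf from 'opaque' and return the number of bytes read. */
static int my_read(void *opaque, uint8_t *buf, int buf_size)
{
    (void)opaque; (void)buf; (void)buf_size;
    return AVERROR_EOF;
}

static int open_with_custom_pb(AVFormatContext **fmt_ctx, void *opaque)
{
    const int buf_size = 4096;                /* arbitrary I/O buffer size */
    unsigned char *buf = av_malloc(buf_size);
    if (!buf)
        return AVERROR(ENOMEM);

    AVIOContext *pb = avio_alloc_context(buf, buf_size, 0 /* read-only */,
                                         opaque, my_read, NULL, NULL);
    if (!pb) {
        av_free(buf);
        return AVERROR(ENOMEM);
    }

    *fmt_ctx = avformat_alloc_context();
    if (!*fmt_ctx)
        return AVERROR(ENOMEM);               /* (pb leak ignored in sketch) */
    (*fmt_ctx)->pb = pb;                      /* user-set, so the user closes it */
    (*fmt_ctx)->flags |= AVFMT_FLAG_CUSTOM_IO;
    return avformat_open_input(fmt_ctx, NULL, NULL, NULL);
}
```

Passing NULL as the url is fine here because probing reads through the user-supplied pb instead.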


    /**
     * Custom interrupt callbacks for the I/O layer.
     *
     * decoding: set by the user before avformat_open_input().
     * encoding: set by the user before avformat_write_header()
     * (mainly useful for AVFMT_NOFILE formats). The callback
     * should also be passed to avio_open2() if it's used to
     * open the file.
     */
    AVIOInterruptCB interrupt_callback;


So interrupt_callback is mainly useful for formats whose flags have AVFMT_NOFILE set; when avio_open2() is used to open the file, the same callback should also be passed to that function.

/**
 * Callback for checking whether to abort blocking functions.
 * AVERROR_EXIT is returned in this case by the interrupted
 * function. During blocking operations, callback is called with
 * opaque as parameter. If the callback returns 1, the
 * blocking operation will be aborted.
 *
 * No members can be added to this struct without a major bump, if
 * new elements have been added after this struct in AVFormatContext
 * or AVIOContext.
 */
typedef struct AVIOInterruptCB {
    int (*callback)(void*);
    void *opaque;
} AVIOInterruptCB;


In addition, there are these two probing functions:

/**
 * Guess the file format.
 *
 * @param is_opened Whether the file is already opened; determines whether
 *                  demuxers with or without AVFMT_NOFILE are probed.
 */
AVInputFormat *av_probe_input_format(AVProbeData *pd, int is_opened);


/**
 * Guess the file format.
 *
 * @param is_opened Whether the file is already opened; determines whether
 *                  demuxers with or without AVFMT_NOFILE are probed.
 * @param score_max A probe score larger than this is required to accept a
 *                  detection; the variable is set to the actual detection
 *                  score afterwards.
 *                  If the score is <= AVPROBE_SCORE_MAX / 4 it is recommended
 *                  to retry with a larger probe buffer.
 */
AVInputFormat *av_probe_input_format2(AVProbeData *pd, int is_opened, int *score_max);


The is_opened parameter selects which demuxers are probed: while the input is not yet opened, only the AVFMT_NOFILE demuxers are considered; once it is opened, only the file-based ones are.


Regarding avio_open2():

/**
 * Create and initialize a AVIOContext for accessing the
 * resource indicated by url.
 * @note When the resource indicated by url has been opened in
 * read+write mode, the AVIOContext can be used only for writing.
 *
 * @param s Used to return the pointer to the created AVIOContext.
 * In case of failure the pointed to value is set to NULL.
 * @param flags flags which control how the resource indicated by url
 * is to be opened
 * @param int_cb an interrupt callback to be used at the protocols level
 * @param options  A dictionary filled with protocol-private options. On return
 * this parameter will be destroyed and replaced with a dict containing options
 * that were not found. May be NULL.
 * @return >= 0 in case of success, a negative value corresponding to an
 * AVERROR code in case of failure
 */
int avio_open2(AVIOContext **s, const char *url, int flags, const AVIOInterruptCB *int_cb, AVDictionary **options);


The const AVIOInterruptCB *int_cb parameter travels down through several layers of calls until, in url_alloc_for_protocol(), it is assigned to the interrupt_callback member of the newly allocated URLContext:

static int url_alloc_for_protocol(URLContext **puc, struct URLProtocol *up,
                                  const char *filename, int flags,
                                  const AVIOInterruptCB *int_cb)
{
    URLContext *uc;

    ...

    uc = av_mallocz(sizeof(URLContext) + strlen(filename) + 1);

    ...

    if (int_cb)
        uc->interrupt_callback = *int_cb;

    ...
}


What remains to be done is to work out the avformat input-open flow in the two cases, with and without AVFMT_NOFILE, and what the differences between them are.

There is also the question of how AVIOInterruptCB is actually used.

