[Repost] Notes: SDL

Can ffmpeg + SDL play multiple videos at the same time?

As the title asks - has anyone managed to pull this off?

jinux, posted 2013-9-27 16:20:35

Solved it. The key points:

1. Use SDL2: ffplay.c uses SDL 1.2.5, which does not support multiple display windows.
   ffplay.c's queue_picture(), video_image_display(), and related functions need to be modified.

2. SDL2's renderer does not support multi-threaded use (rendering from several threads at once causes memory conflicts). The fix is to patch SDL2's Direct3D code (I'm on Windows): in SDL_render_d3d.c, inside D3D_CreateRenderer(), add D3DCREATE_MULTITHREADED (multithreading support) to the parameters of the IDirect3D9_CreateDevice() call.

3. Start playback from multiple threads.



http://bbs.youkuaiyun.com/topics/390678889



Tutorial 04: Spawning Threads

Code: tutorial04.c

Overview

Last time we added audio support by taking advantage of SDL's audio functions. SDL started a thread that made callbacks to a function we defined every time it needed audio. Now we're going to do the same sort of thing with the video display. This makes the code more modular and easier to work with - especially when we want to add syncing. So where do we start?

First we notice that our main function is handling an awful lot: it's running through the event loop, reading in packets, and decoding the video. So what we're going to do is split all those apart: we're going to have a thread that will be responsible for decoding the packets; these packets will then be added to the queue and read by the corresponding audio and video threads. The audio thread we have already set up the way we want it; the video thread will be a little more complicated since we have to display the video ourselves. We will add the actual display code to the main loop. But instead of just displaying video every time we loop, we will integrate the video display into the event loop. The idea is to decode the video, save the resulting frame in another queue, then create a custom event (FF_REFRESH_EVENT) that we add to the event system; when our event loop sees this event, it will display the next frame in the queue. Here's a handy ASCII art illustration of what is going on:

 ________ audio  _______      _____
|        | pkts |       |    |     | to spkr
| DECODE |----->| AUDIO |--->| SDL |-->
|________|      |_______|    |_____|
    |  video     _______
    |   pkts    |       |
    +---------->| VIDEO |
 ________       |_______|   _______
|       |          |       |       |
| EVENT |          +------>| VIDEO | to mon.
| LOOP  |----------------->| DISP. |-->
|_______|<---FF_REFRESH----|_______|
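The decode-thread-to-queue handoff in the diagram can be sketched as a minimal blocking queue. This is an illustrative sketch, not the tutorial's code: it uses POSIX threads in place of SDL_mutex/SDL_cond, a plain int standing in for AVPacket, and made-up names (pktq_*):

```c
#include <pthread.h>
#include <stdlib.h>

/* Toy packet queue: a linked list guarded by a mutex, with a
 * condition variable so a reader can block until data arrives. */
typedef struct PktNode {
    int pkt;                   /* stand-in for AVPacket */
    struct PktNode *next;
} PktNode;

typedef struct PktQueue {
    PktNode *first, *last;
    int nb_packets;
    pthread_mutex_t mutex;
    pthread_cond_t cond;
} PktQueue;

void pktq_init(PktQueue *q) {
    q->first = q->last = NULL;
    q->nb_packets = 0;
    pthread_mutex_init(&q->mutex, NULL);
    pthread_cond_init(&q->cond, NULL);
}

int pktq_put(PktQueue *q, int pkt) {
    PktNode *node = malloc(sizeof(*node));
    if (!node) return -1;
    node->pkt = pkt;
    node->next = NULL;
    pthread_mutex_lock(&q->mutex);
    if (!q->last) q->first = node;      /* queue was empty */
    else q->last->next = node;
    q->last = node;
    q->nb_packets++;
    pthread_cond_signal(&q->cond);      /* wake a blocked reader */
    pthread_mutex_unlock(&q->mutex);
    return 0;
}

/* Blocks until a packet is available, like packet_queue_get(..., 1). */
int pktq_get(PktQueue *q, int *pkt) {
    pthread_mutex_lock(&q->mutex);
    while (!q->first)
        pthread_cond_wait(&q->cond, &q->mutex);
    PktNode *node = q->first;
    q->first = node->next;
    if (!q->first) q->last = NULL;      /* queue is now empty */
    q->nb_packets--;
    *pkt = node->pkt;
    free(node);
    pthread_mutex_unlock(&q->mutex);
    return 0;
}
```

The real PacketQueue additionally tracks a byte size (for the backpressure check in decode_thread) and aborts the wait when the quit flag is raised.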
The main purpose of controlling the video display via the event loop is that, using an SDL_Delay thread, we can control exactly when the next video frame shows up on the screen. When we finally sync the video in the next tutorial, it will be a simple matter to add the code that schedules the next video refresh so the right picture is shown on the screen at the right time.

Simplifying Code

We're also going to clean up the code a bit. We have all this audio and video codec information, and we're going to be adding queues and buffers and who knows what else. All this stuff is for one logical unit, viz. the movie. So we're going to make a large struct that will hold all that information, called VideoState.

typedef struct VideoState {

  AVFormatContext *pFormatCtx;
  int             videoStream, audioStream;
  AVStream        *audio_st;
  AVCodecContext  *audio_ctx;
  PacketQueue     audioq;
  uint8_t         audio_buf[(AVCODEC_MAX_AUDIO_FRAME_SIZE * 3) / 2];
  unsigned int    audio_buf_size;
  unsigned int    audio_buf_index;
  AVPacket        audio_pkt;
  uint8_t         *audio_pkt_data;
  int             audio_pkt_size;
  AVStream        *video_st;
  AVCodecContext  *video_ctx;
  PacketQueue     videoq;

  VideoPicture    pictq[VIDEO_PICTURE_QUEUE_SIZE];
  int             pictq_size, pictq_rindex, pictq_windex;
  SDL_mutex       *pictq_mutex;
  SDL_cond        *pictq_cond;
  
  SDL_Thread      *parse_tid;
  SDL_Thread      *video_tid;

  char            filename[1024];
  int             quit;
} VideoState;
Here we see a glimpse of what we're going to get to. First we see the basic information - the format context and the indices of the audio and video stream, and the corresponding AVStream objects. Then we can see that we've moved some of those audio buffers into this structure. These (audio_buf, audio_buf_size, etc.) hold information about audio that was still waiting to be played (or the lack thereof). We've added another queue for the video, and a buffer (which will be used as a queue; we don't need any fancy queueing stuff for this) for the decoded frames (saved as an overlay). The VideoPicture struct is of our own creation (we'll see what's in it when we come to it). We also notice that we've allocated pointers for the two extra threads we will create, and the quit flag and the filename of the movie.

So now we take it all the way back to the main function to see how this changes our program. Let's set up our VideoState struct:

int main(int argc, char *argv[]) {

  SDL_Event       event;

  VideoState      *is;

  is = av_mallocz(sizeof(VideoState));
av_mallocz() is a nice function that will allocate memory for us and zero it out.
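As a rough illustration only (not ffmpeg's actual implementation, which also guarantees alignment suitable for SIMD code), av_mallocz behaves like a zeroing allocator - essentially calloc:

```c
#include <stdlib.h>

/* Sketch of av_mallocz() semantics: allocate `size` bytes and
 * zero-fill them, so every field of the returned struct starts
 * out as 0/NULL. calloc does exactly this. */
static void *mallocz_sketch(size_t size) {
    return calloc(1, size);
}
```

This is why the tutorial can skip explicitly initializing most VideoState fields: everything starts at zero.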

Then we'll initialize our locks for the display buffer (pictq). The event loop calls our display function - the display function, remember, will be pulling pre-decoded frames from pictq. At the same time, our video decoder will be putting information into it - we don't know who will get there first. Hopefully you recognize that this is a classic race condition. So we allocate the locks now, before we start any threads. Let's also copy the filename of our movie into our VideoState.

av_strlcpy(is->filename, argv[1], sizeof(is->filename));

is->pictq_mutex = SDL_CreateMutex();
is->pictq_cond = SDL_CreateCond();
av_strlcpy is a function from ffmpeg that does some extra bounds checking beyond strncpy.
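The semantics are roughly those of BSD strlcpy: copy at most size-1 bytes, always NUL-terminate, and return the length of the source so the caller can detect truncation - unlike strncpy, which may leave the destination unterminated. A sketch (not ffmpeg's actual code):

```c
#include <string.h>

/* Sketch of av_strlcpy semantics: truncating copy that always
 * NUL-terminates. A return value >= size signals truncation. */
static size_t strlcpy_sketch(char *dst, const char *src, size_t size) {
    size_t len = strlen(src);
    if (size) {
        size_t n = len < size - 1 ? len : size - 1;
        memcpy(dst, src, n);
        dst[n] = '\0';
    }
    return len;
}
```

So copying argv[1] into is->filename can never overflow the 1024-byte buffer or leave it unterminated.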

Our First Thread

Now let's finally launch our threads and get the real work done:

schedule_refresh(is, 40);

is->parse_tid = SDL_CreateThread(decode_thread, is);
if(!is->parse_tid) {
  av_free(is);
  return -1;
}
schedule_refresh is a function we will define later. What it basically does is tell the system to push a FF_REFRESH_EVENT after the specified number of milliseconds. This will in turn call the video refresh function when we see it in the event queue. But for now, let's look at SDL_CreateThread().

SDL_CreateThread() does just that - it spawns a new thread that has complete access to all the memory of the original process, and starts the thread running on the function we give it. It will also pass that function user-defined data. In this case, we're calling decode_thread() with our VideoState struct attached. The first half of the function has nothing new; it simply does the work of opening the file and finding the index of the audio and video streams. The only thing we do differently is save the format context in our big struct. After we've found our stream indices, we call another function that we will define, stream_component_open(). This is a pretty natural way to split things up, and since we do a lot of similar things to set up the video and audio codec, we reuse some code by making this a function.

The stream_component_open() function is where we will find our codec decoder, set up our audio options, save important information to our big struct, and launch our audio and video threads. This is where we would also insert other options, such as forcing the codec instead of autodetecting it and so forth. Here it is:

int stream_component_open(VideoState *is, int stream_index) {

  AVFormatContext *pFormatCtx = is->pFormatCtx;
  AVCodecContext *codecCtx;
  AVCodec *codec;
  SDL_AudioSpec wanted_spec, spec;

  if(stream_index < 0 || stream_index >= pFormatCtx->nb_streams) {
    return -1;
  }

  codec = avcodec_find_decoder(pFormatCtx->streams[stream_index]->codec->codec_id);
  if(!codec) {
    fprintf(stderr, "Unsupported codec!\n");
    return -1;
  }

  codecCtx = avcodec_alloc_context3(codec);
  if(avcodec_copy_context(codecCtx, pFormatCtx->streams[stream_index]->codec) != 0) {
    fprintf(stderr, "Couldn't copy codec context");
    return -1; // Error copying codec context
  }


  if(codecCtx->codec_type == AVMEDIA_TYPE_AUDIO) {
    // Set audio settings from codec info
    wanted_spec.freq = codecCtx->sample_rate;
    /* ...etc... */
    wanted_spec.callback = audio_callback;
    wanted_spec.userdata = is;
    
    if(SDL_OpenAudio(&wanted_spec, &spec) < 0) {
      fprintf(stderr, "SDL_OpenAudio: %s\n", SDL_GetError());
      return -1;
    }
  }
  if(avcodec_open2(codecCtx, codec, NULL) < 0) {
    fprintf(stderr, "Unsupported codec!\n");
    return -1;
  }

  switch(codecCtx->codec_type) {
  case AVMEDIA_TYPE_AUDIO:
    is->audioStream = stream_index;
    is->audio_st = pFormatCtx->streams[stream_index];
    is->audio_ctx = codecCtx;
    is->audio_buf_size = 0;
    is->audio_buf_index = 0;
    memset(&is->audio_pkt, 0, sizeof(is->audio_pkt));
    packet_queue_init(&is->audioq);
    SDL_PauseAudio(0);
    break;
  case AVMEDIA_TYPE_VIDEO:
    is->videoStream = stream_index;
    is->video_st = pFormatCtx->streams[stream_index];
    is->video_ctx = codecCtx;
    
    packet_queue_init(&is->videoq);
    is->video_tid = SDL_CreateThread(video_thread, is);
    is->sws_ctx = sws_getContext(is->video_st->codec->width, is->video_st->codec->height,
				 is->video_st->codec->pix_fmt, is->video_st->codec->width,
				 is->video_st->codec->height, PIX_FMT_YUV420P,
				 SWS_BILINEAR, NULL, NULL, NULL
				 );
    break;
  default:
    break;
  }
}

This is pretty much the same as the code we had before, except now it's generalized for audio and video. Notice that instead of aCodecCtx, we've set up our big struct as the userdata for our audio callback. We've also saved the streams themselves as audio_st and video_st. We also have added our video queue and set it up in the same way we set up our audio queue. Most of the point is to launch the video and audio threads. These bits do it:
    SDL_PauseAudio(0);
    break;

/* ...... */

    is->video_tid = SDL_CreateThread(video_thread, is);
We remember SDL_PauseAudio() from last time, and SDL_CreateThread() is used in exactly the same way as before. We'll get back to our video_thread() function.

Before that, let's go back to the second half of our decode_thread() function. It's basically just a for loop that will read in a packet and put it on the right queue:

  for(;;) {
    if(is->quit) {
      break;
    }
    // seek stuff goes here
    if(is->audioq.size > MAX_AUDIOQ_SIZE ||
       is->videoq.size > MAX_VIDEOQ_SIZE) {
      SDL_Delay(10);
      continue;
    }
    if(av_read_frame(is->pFormatCtx, packet) < 0) {
      if((is->pFormatCtx->pb->error) == 0) {
	SDL_Delay(100); /* no error; wait for user input */
	continue;
      } else {
	break;
      }
    }
    // Is this a packet from the video stream?
    if(packet->stream_index == is->videoStream) {
      packet_queue_put(&is->videoq, packet);
    } else if(packet->stream_index == is->audioStream) {
      packet_queue_put(&is->audioq, packet);
    } else {
      av_free_packet(packet);
    }
  }
Nothing really new here, except that we now have a max size for our audio and video queue, and we've added a check for read errors. The format context has a ByteIOContext struct inside it called pb. ByteIOContext is the structure that basically keeps all the low-level file information in it.

After our for loop, we have all the code for waiting for the rest of the program to end or informing it that we've ended. This code is instructive because it shows us how we push events - something we'll have to do later to display the video.

  while(!is->quit) {
    SDL_Delay(100);
  }

 fail:
  if(1){
    SDL_Event event;
    event.type = FF_QUIT_EVENT;
    event.user.data1 = is;
    SDL_PushEvent(&event);
  }
  return 0;
We get values for user events by using the SDL constant SDL_USEREVENT. The first user event should be assigned the value SDL_USEREVENT, the next SDL_USEREVENT + 1, and so on. FF_QUIT_EVENT is defined in our program as SDL_USEREVENT + 2. We can also pass user data if we like, and here we pass our pointer to the big struct. Finally we call SDL_PushEvent(). In our event loop switch, we just put this by the SDL_QUIT_EVENT section we had before. We'll see our event loop in more detail later; for now, just be assured that when we push the FF_QUIT_EVENT, we'll catch it later and raise our quit flag.

Getting the Frame: video_thread

After we have our codec prepared, we start our video thread. This thread reads in packets from the video queue, decodes the video into frames, and then calls a queue_picture function to put the processed frame onto a picture queue:

int video_thread(void *arg) {
  VideoState *is = (VideoState *)arg;
  AVPacket pkt1, *packet = &pkt1;
  int frameFinished;
  AVFrame *pFrame;

  pFrame = av_frame_alloc();

  for(;;) {
    if(packet_queue_get(&is->videoq, packet, 1) < 0) {
      // means we quit getting packets
      break;
    }
    // Decode video frame
    avcodec_decode_video2(is->video_st->codec, pFrame, &frameFinished, packet);

    // Did we get a video frame?
    if(frameFinished) {
      if(queue_picture(is, pFrame) < 0) {
	break;
      }
    }
    av_free_packet(packet);
  }
  av_free(pFrame);
  return 0;
}
Most of this function should be familiar by this point. We've moved our avcodec_decode_video2 call here, just replacing some of the arguments; for example, we have the AVStream stored in our big struct, so we get our codec from there. We just keep getting packets from our video queue until someone tells us to quit or we encounter an error.

Queueing the Frame

Let's look at the function that stores our decoded frame, pFrame, in our picture queue. Since our picture queue is an SDL overlay (presumably to allow the video display function to do as little calculation as possible), we need to convert our frame into that format. The data we store in the picture queue is a struct of our making:

typedef struct VideoPicture {
  SDL_Overlay *bmp;
  int width, height; /* source height & width */
  int allocated;
} VideoPicture;
Our big struct has a buffer of these in it where we can store them. However, we need to allocate the SDL_Overlay ourselves (notice the allocated flag that will indicate whether we have done so or not).

To use this queue, we have two pointers - the writing index and the reading index. We also keep track of how many actual pictures are in the buffer. To write to the queue, we're going to first wait for our buffer to clear out so we have space to store our VideoPicture. Then we check whether we have already allocated the overlay at our writing index. If not, we'll have to allocate some space. We also have to reallocate the buffer if the size of the window has changed!

int queue_picture(VideoState *is, AVFrame *pFrame) {

  VideoPicture *vp;
  int dst_pix_fmt;
  AVPicture pict;

  /* wait until we have space for a new pic */
  SDL_LockMutex(is->pictq_mutex);
  while(is->pictq_size >= VIDEO_PICTURE_QUEUE_SIZE &&
	!is->quit) {
    SDL_CondWait(is->pictq_cond, is->pictq_mutex);
  }
  SDL_UnlockMutex(is->pictq_mutex);

  if(is->quit)
    return -1;

  // windex is set to 0 initially
  vp = &is->pictq[is->pictq_windex];

  /* allocate or resize the buffer! */
  if(!vp->bmp ||
     vp->width != is->video_st->codec->width ||
     vp->height != is->video_st->codec->height) {
    SDL_Event event;

    vp->allocated = 0;
    alloc_picture(is);
    if(is->quit) {
      return -1;
    }
  }

Let's look at the alloc_picture() function:

void alloc_picture(void *userdata) {

  VideoState *is = (VideoState *)userdata;
  VideoPicture *vp;

  vp = &is->pictq[is->pictq_windex];
  if(vp->bmp) {
    // we already have one make another, bigger/smaller
    SDL_FreeYUVOverlay(vp->bmp);
  }
  // Allocate a place to put our YUV image on that screen
  SDL_LockMutex(screen_mutex);
  vp->bmp = SDL_CreateYUVOverlay(is->video_st->codec->width,
				 is->video_st->codec->height,
				 SDL_YV12_OVERLAY,
				 screen);
  SDL_UnlockMutex(screen_mutex);
  vp->width = is->video_st->codec->width;
  vp->height = is->video_st->codec->height;  
  vp->allocated = 1;
}
You should recognize the SDL_CreateYUVOverlay function that we've moved from our main loop to this section. This code should be fairly self-explanatory by now. However, now we have a mutex lock around it, because two threads cannot write information to the screen at the same time! This will prevent our alloc_picture function from stepping on the toes of the function that will display the picture. (We've created this lock as a global variable and initialized it in main(); see the code.) Remember that we save the width and height in the VideoPicture structure because we need to make sure that our video size doesn't change for some reason.

Okay, we're all settled and we have our YUV overlay allocated and ready to receive a picture. Let's go back toqueue_picture and look at the code to copy the frame into the overlay. You should recognize that part of it:

int queue_picture(VideoState *is, AVFrame *pFrame) {

  /* Allocate a frame if we need it... */
  /* ... */
  /* We have a place to put our picture on the queue */

  if(vp->bmp) {

    SDL_LockYUVOverlay(vp->bmp);
    
    dst_pix_fmt = PIX_FMT_YUV420P;
    /* point pict at the queue */

    pict.data[0] = vp->bmp->pixels[0];
    pict.data[1] = vp->bmp->pixels[2];
    pict.data[2] = vp->bmp->pixels[1];
    
    pict.linesize[0] = vp->bmp->pitches[0];
    pict.linesize[1] = vp->bmp->pitches[2];
    pict.linesize[2] = vp->bmp->pitches[1];
    
    // Convert the image into YUV format that SDL uses
    sws_scale(is->sws_ctx, (uint8_t const * const *)pFrame->data,
	      pFrame->linesize, 0, is->video_st->codec->height,
	      pict.data, pict.linesize);
    
    SDL_UnlockYUVOverlay(vp->bmp);
    /* now we inform our display thread that we have a pic ready */
    if(++is->pictq_windex == VIDEO_PICTURE_QUEUE_SIZE) {
      is->pictq_windex = 0;
    }
    SDL_LockMutex(is->pictq_mutex);
    is->pictq_size++;
    SDL_UnlockMutex(is->pictq_mutex);
  }
  return 0;
}
The majority of this part is simply the code we used earlier to fill the YUV overlay with our frame. The last bit is simply "adding" our value onto the queue. The queue works by adding onto it until it is full, and reading from it as long as there is something on it. Therefore everything depends upon the is->pictq_size value, requiring us to lock it. So what we do here is increment the write pointer (and rollover if necessary), then lock the queue and increase its size. Now our reader will know there is more information on the queue, and if this makes our queue full, our writer will know about it.
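The index arithmetic described here can be isolated from SDL entirely. This toy ring buffer is an illustration, not the tutorial's code: plain ints stand in for VideoPicture, and full/empty conditions return -1 instead of blocking on pictq_cond the way the real code does:

```c
#define PICTQ_SIZE 3   /* illustrative; the tutorial uses 1 */

typedef struct PictRing {
    int pict[PICTQ_SIZE];      /* stand-ins for VideoPicture */
    int size, rindex, windex;  /* count, read index, write index */
} PictRing;

/* Writer side: store at windex, advance with wraparound, grow size.
 * Returns -1 when full; the real code waits on pictq_cond instead. */
static int ring_write(PictRing *r, int value) {
    if (r->size >= PICTQ_SIZE) return -1;
    r->pict[r->windex] = value;
    if (++r->windex == PICTQ_SIZE) r->windex = 0;  /* wrap */
    r->size++;
    return 0;
}

/* Reader side: read at rindex, advance with wraparound, shrink size.
 * Returns -1 when empty; the real code reschedules a refresh instead. */
static int ring_read(PictRing *r, int *value) {
    if (r->size == 0) return -1;
    *value = r->pict[r->rindex];
    if (++r->rindex == PICTQ_SIZE) r->rindex = 0;  /* wrap */
    r->size--;
    return 0;
}
```

Note that only `size` is shared between reader and writer in a way that needs the mutex; each index is touched by exactly one side.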

Displaying the Video

That's it for our video thread! Now we've wrapped up all the loose threads except for one - remember that we called the schedule_refresh() function way back? Let's see what that actually did:

/* schedule a video refresh in 'delay' ms */
static void schedule_refresh(VideoState *is, int delay) {
  SDL_AddTimer(delay, sdl_refresh_timer_cb, is);
}
SDL_AddTimer() is an SDL function that simply makes a callback to the user-specified function after a certain number of milliseconds (optionally carrying some user data). We're going to use this function to schedule video updates - every time we call this function, it will set the timer, which will trigger an event, which will have our main() function in turn call a function that pulls a frame from our picture queue and displays it! Phew!

But first things first. Let's trigger that event. That sends us over to:

static Uint32 sdl_refresh_timer_cb(Uint32 interval, void *opaque) {
  SDL_Event event;
  event.type = FF_REFRESH_EVENT;
  event.user.data1 = opaque;
  SDL_PushEvent(&event);
  return 0; /* 0 means stop timer */
}
Here is the now-familiar event push. FF_REFRESH_EVENT is defined here as SDL_USEREVENT + 1. One thing to notice is that when we return 0, SDL stops the timer so the callback is not made again.

Now that we've pushed an FF_REFRESH_EVENT, we need to handle it in our event loop:

for(;;) {

  SDL_WaitEvent(&event);
  switch(event.type) {
  /* ... */
  case FF_REFRESH_EVENT:
    video_refresh_timer(event.user.data1);
    break;
and that sends us to this function, which will actually pull the data from our picture queue:
void video_refresh_timer(void *userdata) {

  VideoState *is = (VideoState *)userdata;
  VideoPicture *vp;
  
  if(is->video_st) {
    if(is->pictq_size == 0) {
      schedule_refresh(is, 1);
    } else {
      vp = &is->pictq[is->pictq_rindex];
      /* Timing code goes here */

      schedule_refresh(is, 80);
      
      /* show the picture! */
      video_display(is);
      
      /* update queue for next picture! */
      if(++is->pictq_rindex == VIDEO_PICTURE_QUEUE_SIZE) {
	is->pictq_rindex = 0;
      }
      SDL_LockMutex(is->pictq_mutex);
      is->pictq_size--;
      SDL_CondSignal(is->pictq_cond);
      SDL_UnlockMutex(is->pictq_mutex);
    }
  } else {
    schedule_refresh(is, 100);
  }
}
For now, this is a pretty simple function: it pulls from the queue when we have something, sets our timer for when the next video frame should be shown, calls video_display to actually show the video on the screen, then advances the read index on the queue and decreases its size. You may notice that we don't actually do anything with vp in this function, and here's why: we will. Later. We're going to use it to access timing information when we start syncing the video to the audio. See where it says "timing code here"? In that section, we're going to figure out how soon we should show the next video frame, and then input that value into the schedule_refresh() function. For now we're just putting in a dummy value of 80. Technically, you could guess and check this value, and recompile it for every movie you watch, but 1) it would drift after a while and 2) it's quite silly. We'll come back to it later, though.

We're almost done; we just have one last thing to do: display the video! Here's that video_display function:

void video_display(VideoState *is) {

  SDL_Rect rect;
  VideoPicture *vp;
  float aspect_ratio;
  int w, h, x, y;
  int i;

  vp = &is->pictq[is->pictq_rindex];
  if(vp->bmp) {
    if(is->video_st->codec->sample_aspect_ratio.num == 0) {
      aspect_ratio = 0;
    } else {
      aspect_ratio = av_q2d(is->video_st->codec->sample_aspect_ratio) *
	is->video_st->codec->width / is->video_st->codec->height;
    }
    if(aspect_ratio <= 0.0) {
      aspect_ratio = (float)is->video_st->codec->width /
	(float)is->video_st->codec->height;
    }
    h = screen->h;
    w = ((int)rint(h * aspect_ratio)) & -3;
    if(w > screen->w) {
      w = screen->w;
      h = ((int)rint(w / aspect_ratio)) & -3;
    }
    x = (screen->w - w) / 2;
    y = (screen->h - h) / 2;
    
    rect.x = x;
    rect.y = y;
    rect.w = w;
    rect.h = h;
    SDL_LockMutex(screen_mutex);
    SDL_DisplayYUVOverlay(vp->bmp, &rect);
    SDL_UnlockMutex(screen_mutex);
  }
}
Since our screen can be of any size (we set ours to 640x480, and there are ways to set it so it is resizable by the user), we need to dynamically figure out how big we want our movie rectangle to be. So first we need to figure out our movie's aspect ratio, which is just the width divided by the height. Some codecs will have an odd sample aspect ratio, which is simply the width/height ratio of a single pixel, or sample. Since the height and width values in our codec context are measured in pixels, the actual aspect ratio is equal to the width/height ratio times the sample aspect ratio. Some codecs will show an aspect ratio of 0, and this indicates that each pixel is simply of size 1x1. Then we scale the movie to fit as large on our screen as we can. The & -3 bit-twiddling in there clears bit 1 of the value, keeping it near a multiple of 4 (a & -4 mask would round down to an exact multiple of 4). Then we center the movie, and call SDL_DisplayYUVOverlay(), making sure we use the screen mutex to access it.

So is that it? Are we done? Well, we still have to rewrite the audio code to use the new VideoState struct, but those are trivial changes, and you can look at those in the sample code. The last thing we have to do is to change our callback for ffmpeg's internal "quit" callback function:

VideoState *global_video_state;

int decode_interrupt_cb(void) {
  return (global_video_state && global_video_state->quit);
}
We set global_video_state to the big struct in main().

So that's it! Go ahead and compile it:

gcc -o tutorial04 tutorial04.c -lavutil -lavformat -lavcodec -lswscale -lz -lm \
`sdl-config --cflags --libs`
and enjoy your unsynced movie! Next time we'll finally build a video player that actually works!

http://dranger.com/ffmpeg/tutorial04.html


http://blog.youkuaiyun.com/ashqal/article/details/17722935


Environment

FFmpeg: github master as of September 9, 2013

SDL: SDL2

OS: macOS 10.8, 64-bit

ffmpeg configure flags:

./configure --cc=clang --disable-everything --enable-libfdk_aac --enable-libmp3lame --enable-protocol=file --enable-decoder=aac --enable-decoder=mp3 --enable-encoder=libmp3lame --enable-encoder=libfdk_aac --enable-demuxer=aac --enable-demuxer=mp3 --enable-muxer=adts --enable-muxer=mp3 --enable-parser=aac --enable-decoder=flv --enable-decoder=h264 --enable-decoder=mpeg4 --enable-demuxer=avi --enable-demuxer=flv --enable-demuxer=h264 --enable-muxer=mp4 --enable-muxer=flv


The code is based on

http://dranger.com/ffmpeg/tutorial04.html



Modified version:

// tutorial04.c
// A pedagogical video player that will stream through every video frame as fast as it can,
// and play audio (out of sync).
//
// This tutorial was written by Stephen Dranger (dranger@gmail.com).
//
// Code based on FFplay, Copyright (c) 2003 Fabrice Bellard,
// and a tutorial by Martin Bohme (boehme@inb.uni-luebeckREMOVETHIS.de)
// Tested on Gentoo, CVS version 5/01/07 compiled with GCC 4.1.1
//
// Use the Makefile to build all the samples.
//
// Run using
// tutorial04 myvideofile.mpg
//
// to play the video stream on your screen.


#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavformat/avio.h>
#include <libswscale/swscale.h>
#include <libavutil/avstring.h>

#include <SDL2/SDL.h>
#include <SDL2/SDL_thread.h>

#ifdef __MINGW32__
#undef main /* Prevents SDL from overriding main() */
#endif

#include <stdio.h>
#include <math.h>

#define SDL_AUDIO_BUFFER_SIZE 1024
#define MAX_AUDIO_FRAME_SIZE 192000

#define MAX_AUDIOQ_SIZE (5 * 16 * 1024)
#define MAX_VIDEOQ_SIZE (5 * 256 * 1024)

#define FF_ALLOC_EVENT   (SDL_USEREVENT)
#define FF_REFRESH_EVENT (SDL_USEREVENT + 1)
#define FF_QUIT_EVENT (SDL_USEREVENT + 2)

#define VIDEO_PICTURE_QUEUE_SIZE 1

typedef struct PacketQueue {
    AVPacketList *first_pkt, *last_pkt;
    int nb_packets;
    int size;
    SDL_mutex *mutex;
    SDL_cond *cond;
} PacketQueue;


typedef struct VideoPicture {
    //SDL_Texture *texture;
    AVFrame* rawdata;
    //SDL_Overlay *bmp;
    int width, height; /* source height & width */
    int allocated;
} VideoPicture;

typedef struct VideoState {

    AVFormatContext *pFormatCtx;
    int             videoStream, audioStream;
    AVStream        *audio_st;
    PacketQueue     audioq;
    uint8_t         audio_buf[(MAX_AUDIO_FRAME_SIZE * 3) / 2];
    unsigned int    audio_buf_size;
    unsigned int    audio_buf_index;
    AVFrame         audio_frame;
    AVPacket        audio_pkt;
    uint8_t         *audio_pkt_data;
    int             audio_pkt_size;
    AVStream        *video_st;
    PacketQueue     videoq;

    VideoPicture    pictq[VIDEO_PICTURE_QUEUE_SIZE];
    int             pictq_size, pictq_rindex, pictq_windex;
    SDL_mutex       *pictq_mutex;
    SDL_cond        *pictq_cond;

    SDL_Thread      *parse_tid;
    SDL_Thread      *video_tid;

    char            filename[1024];
    int             quit;

    AVIOContext     *io_context;
    struct SwsContext *sws_ctx;
} VideoState;


/* Since we only have one decoding thread, the Big Struct
 can be global in case we need it. */
VideoState *global_video_state;


void packet_queue_init(PacketQueue *q) {
    memset(q, 0, sizeof(PacketQueue));
    q->mutex = SDL_CreateMutex();
    q->cond = SDL_CreateCond();
}
int packet_queue_put(PacketQueue *q, AVPacket *pkt) {

    AVPacketList *pkt1;
    if(av_dup_packet(pkt) < 0) {
        return -1;
    }
    pkt1 = av_malloc(sizeof(AVPacketList));
    if (!pkt1)
        return -1;
    pkt1->pkt = *pkt;
    pkt1->next = NULL;

    SDL_LockMutex(q->mutex);

    if (!q->last_pkt)
        q->first_pkt = pkt1;
    else
        q->last_pkt->next = pkt1;
    q->last_pkt = pkt1;
    q->nb_packets++;
    q->size += pkt1->pkt.size;
    SDL_CondSignal(q->cond);

    SDL_UnlockMutex(q->mutex);
    return 0;
}
static int packet_queue_get(PacketQueue *q, AVPacket *pkt, int block)
{
    AVPacketList *pkt1;
    int ret;

    SDL_LockMutex(q->mutex);

    for(;;) {

        if(global_video_state->quit) {
            ret = -1;
            break;
        }

        pkt1 = q->first_pkt;
        if (pkt1) {
            q->first_pkt = pkt1->next;
            if (!q->first_pkt)
                q->last_pkt = NULL;
            q->nb_packets--;
            q->size -= pkt1->pkt.size;
            *pkt = pkt1->pkt;
            av_free(pkt1);
            ret = 1;
            break;
        } else if (!block) {
            ret = 0;
            break;
        } else {
            SDL_CondWait(q->cond, q->mutex);
        }
    }
    SDL_UnlockMutex(q->mutex);
    return ret;
}

int audio_decode_frame(VideoState *is) {
    int len1, data_size = 0;
    AVPacket *pkt = &is->audio_pkt;

    for(;;) {
        while(is->audio_pkt_size > 0) {
            int got_frame = 0;
            len1 = avcodec_decode_audio4(is->audio_st->codec, &is->audio_frame, &got_frame, pkt);
            if(len1 < 0) {
                /* if error, skip frame */
                is->audio_pkt_size = 0;
                break;
            }
            if (got_frame)
            {
                data_size =
                av_samples_get_buffer_size
                (
                 NULL,
                 is->audio_st->codec->channels,
                 is->audio_frame.nb_samples,
                 is->audio_st->codec->sample_fmt,
                 1
                 );
                memcpy(is->audio_buf, is->audio_frame.data[0], data_size);
            }
            is->audio_pkt_data += len1;
            is->audio_pkt_size -= len1;
            if(data_size <= 0) {
                /* No data yet, get more frames */
                continue;
            }
            /* We have data, return it and come back for more later */
            return data_size;
        }
        if(pkt->data)
            av_free_packet(pkt);

        if(is->quit) {
            return -1;
        }
        /* next packet */
        if(packet_queue_get(&is->audioq, pkt, 1) < 0) {
            return -1;
        }
        is->audio_pkt_data = pkt->data;
        is->audio_pkt_size = pkt->size;
    }
}

void audio_callback(void *userdata, Uint8 *stream, int len) {

    VideoState *is = (VideoState *)userdata;
    int len1, audio_size;

    while(len > 0) {
        if(is->audio_buf_index >= is->audio_buf_size) {
            /* We have already sent all our data; get more */
            audio_size = audio_decode_frame(is);
            if(audio_size < 0) {
                /* If error, output silence */
                is->audio_buf_size = 1024;
                memset(is->audio_buf, 0, is->audio_buf_size);
            } else {
                is->audio_buf_size = audio_size;
            }
            is->audio_buf_index = 0;
        }
        len1 = is->audio_buf_size - is->audio_buf_index;
        if(len1 > len)
            len1 = len;
        memcpy(stream, (uint8_t *)is->audio_buf + is->audio_buf_index, len1);
        len -= len1;
        stream += len1;
        is->audio_buf_index += len1;
    }
}

static Uint32 sdl_refresh_timer_cb(Uint32 interval, void *opaque) {
  250.     SDL_Event event;  
  251.     event.type = FF_REFRESH_EVENT;  
  252.     event.user.data1 = opaque;  
  253.     SDL_PushEvent(&event);  
  254.     return 0; /* 0 means stop timer */  
  255. }  
  256.   
  257. /* schedule a video refresh in 'delay' ms */  
  258. static void schedule_refresh(VideoState *is, int delay) {  
  259.     SDL_AddTimer(delay, sdl_refresh_timer_cb, is);  
  260. }  
  261.   
  262. void video_display(VideoState *is,SDL_Renderer* renderer,SDL_Texture* texture) {  
  263.       
  264.     SDL_Rect rect;  
  265.     VideoPicture *vp;  
  266.     //AVPicture pict;  
  267.     float aspect_ratio;  
  268.     //int w=0, h=0, x=0, y=0;  
  269.     //screen->w;  
  270.     //int i;  
  271.       
  272.     //renderer->w;  
  273.       
  274.     vp = &is->pictq[is->pictq_rindex];  
  275.     if(vp->rawdata) {  
  276.         if(is->video_st->codec->sample_aspect_ratio.num == 0) {  
  277.             aspect_ratio = 0;  
  278.         } else {  
  279.             aspect_ratio = av_q2d(is->video_st->codec->sample_aspect_ratio) *  
  280.             is->video_st->codec->width / is->video_st->codec->height;  
  281.         }  
  282.         if(aspect_ratio <= 0.0) {  
  283.             aspect_ratio = (float)is->video_st->codec->width /  
  284.             (float)is->video_st->codec->height;  
  285.         }  
  286.         //screen->h;  
  287. //        h = screen->h;  
  288. //        w = ((int)rint(h * aspect_ratio)) & -3;  
  289. //        if(w > screen->w) {  
  290. //            w = screen->w;  
  291. //            h = ((int)rint(w / aspect_ratio)) & -3;  
  292. //        }  
  293. //        x = (screen->w - w) / 2;  
  294. //        y = (screen->h - h) / 2;  
  295.           
  296.         rect.x = 0;  
  297.         rect.y = 0;  
  298.         rect.w = vp->width;  
  299.         rect.h = vp->height;  
  300.           
  301.         //SDL_SetRenderDrawColor(renderer, 120, 0, 0, 255);  
  302.         //SDL_RenderFillRect(renderer, &rect);  
  303.           
  304.         SDL_UpdateTexture( texture, &rect, vp->rawdata->data[0], vp->rawdata->linesize[0] );  
  305.         SDL_RenderClear( renderer );  
  306.         SDL_RenderCopy( renderer,texture , &rect, &rect );  
  307.         SDL_RenderPresent( renderer );  
  308.           
  309.         //SDL_DisplayYUVOverlay(vp->bmp, &rect);  
  310.     }  
  311. }  
  312.   
  313. void video_refresh_timer(void *userdata,SDL_Renderer* renderer,SDL_Texture* texture) {  
  314.       
  315.     VideoState *is = (VideoState *)userdata;  
  316.     // vp is used in later tutorials for synchronization  
  317.     //VideoPicture *vp;  
  318.       
  319.     if(is->video_st) {  
  320.         if(is->pictq_size == 0) {  
  321.             schedule_refresh(is, 1);  
  322.         } else {  
  323.             //vp = &is->pictq[is->pictq_rindex];  
  324.             /* Now, normally here goes a ton of code 
  325.              about timing, etc. we're just going to 
  326.              guess at a delay for now. You can 
  327.              increase and decrease this value and hard code 
  328.              the timing - but I don't suggest that ;) 
  329.              We'll learn how to do it for real later. 
  330.              */  
  331.             schedule_refresh(is, 30);  
  332.               
  333.             /* show the picture! */  
  334.             video_display(is,renderer,texture);  
  335.               
  336.             /* update queue for next picture! */  
  337.             if(++is->pictq_rindex == VIDEO_PICTURE_QUEUE_SIZE) {  
  338.                 is->pictq_rindex = 0;  
  339.             }  
  340.             SDL_LockMutex(is->pictq_mutex);  
  341.             is->pictq_size--;  
  342.             SDL_CondSignal(is->pictq_cond);  
  343.             SDL_UnlockMutex(is->pictq_mutex);  
  344.         }  
  345.     } else {  
  346.         schedule_refresh(is, 100);  
  347.     }  
  348. }  
  349.   
  350. void alloc_picture(void *userdata,SDL_Renderer* renderer) {  
  351.       
  352.     VideoState *is = (VideoState *)userdata;  
  353.     VideoPicture *vp;  
  354.       
  355.     vp = &is->pictq[is->pictq_windex];  
  356.     if(vp->rawdata) {  
  357.         // we already have one make another, bigger/smaller  
  358.         //SDL_DestroyTexture(vp->texture);  
  359.         //vp->texture = NULL;  
  360.         av_free(vp->rawdata);  
  361.         //SDL_FreeYUVOverlay(vp->bmp);  
  362.     }  
  363.     // Allocate a place to put our YUV image on that screen  
  364. //    vp->texture = SDL_CreateYUVOverlay(is->video_st->codec->width,  
  365. //                                   is->video_st->codec->height,  
  366. //                                   SDL_YV12_OVERLAY,  
  367. //                                   screen);  
  368.       
  369.       
  370.     vp->width = is->video_st->codec->width;  
  371.     vp->height = is->video_st->codec->height;  
  372.       
  373.       
  374.     AVCodecContext *pCodecCtx = NULL;  
  375.     pCodecCtx = is->video_st->codec;  
  376.       
  377.     AVFrame* pFrameYUV = avcodec_alloc_frame();  
  378.     if( pFrameYUV == NULL )  
  379.         return;  
  380.     int numBytes = avpicture_get_size(PIX_FMT_YUV420P, pCodecCtx->width,  
  381.                                       pCodecCtx->height);  
  382.     uint8_t* buffer = (uint8_t *)av_malloc(numBytes*sizeof(uint8_t));  
  383.       
  384.     avpicture_fill((AVPicture *)pFrameYUV, buffer, PIX_FMT_YUV420P,  
  385.                    pCodecCtx->width, pCodecCtx->height);  
  386.       
  387.       
  388.     vp->rawdata = pFrameYUV;  
  389.       
  390.     //SDL_CreateTexture(renderer, SDL_PIXELFORMAT_IYUV, SDL_TEXTUREACCESS_STREAMING, vp->width, vp->height);  
  391.       
  392.     SDL_LockMutex(is->pictq_mutex);  
  393.     vp->allocated = 1;  
  394.     SDL_CondSignal(is->pictq_cond);  
  395.     SDL_UnlockMutex(is->pictq_mutex);  
  396.       
  397. }  
  398.   
  399. int queue_picture(VideoState *is, AVFrame *pFrame) {  
  400.       
  401.     VideoPicture *vp;  
  402.     //AVCodecContext *pCodecCtx;  
  403.       
  404.       
  405.       
  406.       
  407.     /* wait until we have space for a new pic */  
  408.     SDL_LockMutex(is->pictq_mutex);  
  409.     while(is->pictq_size >= VIDEO_PICTURE_QUEUE_SIZE &&  
  410.           !is->quit) {  
  411.         SDL_CondWait(is->pictq_cond, is->pictq_mutex);  
  412.     }  
  413.     SDL_UnlockMutex(is->pictq_mutex);  
  414.       
  415.     if(is->quit)  
  416.         return -1;  
  417.       
  418.     // windex is set to 0 initially  
  419.     vp = &is->pictq[is->pictq_windex];  
  420.       
  421.     /* allocate or resize the buffer! */  
  422.     if(!vp->rawdata ||  
  423.        vp->width != is->video_st->codec->width ||  
  424.        vp->height != is->video_st->codec->height) {  
  425.         SDL_Event event;  
  426.           
  427.         vp->allocated = 0;  
  428.         /* we have to do it in the main thread */  
  429.         event.type = FF_ALLOC_EVENT;  
  430.         event.user.data1 = is;  
  431.         SDL_PushEvent(&event);  
  432.           
  433.         /* wait until we have a picture allocated */  
  434.         SDL_LockMutex(is->pictq_mutex);  
  435.         while(!vp->allocated && !is->quit) {  
  436.             SDL_CondWait(is->pictq_cond, is->pictq_mutex);  
  437.         }  
  438.         SDL_UnlockMutex(is->pictq_mutex);  
  439.         if(is->quit) {  
  440.             return -1;  
  441.         }  
  442.     }  
  443.       
  444.       
  445.       
  446.       
  447.     /* We have a place to put our picture on the queue */  
  448.       
  449.     if(vp->rawdata) {  
  450.           
  451.         //SDL_LockYUVOverlay(vp->bmp);  
  452.           
  453.         /* point pict at the queue */  
  454.           
  455. //        pict.data[0] = vp->bmp->pixels[0];  
  456. //        pict.data[1] = vp->bmp->pixels[2];  
  457. //        pict.data[2] = vp->bmp->pixels[1];  
  458. //          
  459. //        pict.linesize[0] = vp->bmp->pitches[0];  
  460. //        pict.linesize[1] = vp->bmp->pitches[2];  
  461. //        pict.linesize[2] = vp->bmp->pitches[1];  
  462.           
  463.         // Convert the image into YUV format that SDL uses  
  464.         sws_scale  
  465.         (  
  466.          is->sws_ctx,  
  467.          (uint8_t const * const *)pFrame->data,  
  468.          pFrame->linesize,  
  469.          0,  
  470.          is->video_st->codec->height,  
  471.          vp->rawdata->data,  
  472.          vp->rawdata->linesize  
  473.          );  
  474. //        SDL_Rect rect;  
  475. //        rect.x = 0;  
  476. //        rect.y = 0;  
  477. //        rect.w = vp->width;  
  478. //        rect.h = vp->height;  
  479.         //printf("%d,%d\n",rect.w,rect.h);  
  480.           
  481.         //SDL_UpdateTexture( vp->texture, &rect, pFrameYUV->data[0], pFrameYUV->linesize[0] );  
  482.           
  483.         //av_free(pFrameYUV);  
  484.           
  485.         //SDL_UnlockYUVOverlay(vp->bmp);  
  486.         /* now we inform our display thread that we have a pic ready */  
  487.         if(++is->pictq_windex == VIDEO_PICTURE_QUEUE_SIZE) {  
  488.             is->pictq_windex = 0;  
  489.         }  
  490.         SDL_LockMutex(is->pictq_mutex);  
  491.         is->pictq_size++;  
  492.         SDL_UnlockMutex(is->pictq_mutex);  
  493.     }  
  494.     return 0;  
  495. }  
  496.   
  497. int video_thread(void *arg) {  
  498.     VideoState *is = (VideoState *)arg;  
  499.     AVPacket pkt1, *packet = &pkt1;  
  500.     int frameFinished;  
  501.     AVFrame *pFrame;  
  502.       
  503.     pFrame = avcodec_alloc_frame();  
  504.       
  505.     for(;;) {  
  506.         if(packet_queue_get(&is->videoq, packet, 1) < 0) {  
  507.             // means we quit getting packets  
  508.             break;  
  509.         }  
  510.         // Decode video frame  
  511.         avcodec_decode_video2(is->video_st->codec, pFrame, &frameFinished,  
  512.                               packet);  
  513.           
  514.         // Did we get a video frame?  
  515.         if(frameFinished) {  
  516.             //printf("video_thread\n");  
  517.             if(queue_picture(is, pFrame) < 0) {  
  518.                 break;  
  519.             }  
  520.         }  
  521.         av_free_packet(packet);  
  522.     }  
  523.     av_free(pFrame);  
  524.     return 0;  
  525. }  
  526.   
  527. int stream_component_open(VideoState *is, int stream_index) {  
  528.       
  529.     AVFormatContext *pFormatCtx = is->pFormatCtx;  
  530.     AVCodecContext *codecCtx = NULL;  
  531.     AVCodec *codec = NULL;  
  532.     AVDictionary *optionsDict = NULL;  
  533.     SDL_AudioSpec wanted_spec, spec;  
  534.       
  535.     if(stream_index < 0 || stream_index >= pFormatCtx->nb_streams) {  
  536.         return -1;  
  537.     }  
  538.       
  539.     // Get a pointer to the codec context for the video stream  
  540.     codecCtx = pFormatCtx->streams[stream_index]->codec;  
  541.       
  542.     if(codecCtx->codec_type == AVMEDIA_TYPE_AUDIO) {  
  543.         // Set audio settings from codec info  
  544.         wanted_spec.freq = codecCtx->sample_rate;  
  545.         wanted_spec.format = AUDIO_S16SYS;  
  546.         wanted_spec.channels = codecCtx->channels;  
  547.         wanted_spec.silence = 0;  
  548.         wanted_spec.samples = SDL_AUDIO_BUFFER_SIZE;  
  549.         wanted_spec.callback = audio_callback;  
  550.         wanted_spec.userdata = is;  
  551.           
  552.         if(SDL_OpenAudio(&wanted_spec, &spec) < 0) {  
  553.             fprintf(stderr, "SDL_OpenAudio: %s\n", SDL_GetError());  
  554.             return -1;  
  555.         }  
  556.     }  
  557.     codec = avcodec_find_decoder(codecCtx->codec_id);  
  558.     if(!codec || (avcodec_open2(codecCtx, codec, &optionsDict) < 0)) {  
  559.         fprintf(stderr, "Unsupported codec!\n");  
  560.         return -1;  
  561.     }  
  562.       
  563.     switch(codecCtx->codec_type) {  
  564.         case AVMEDIA_TYPE_AUDIO:  
  565.             is->audioStream = stream_index;  
  566.             is->audio_st = pFormatCtx->streams[stream_index];  
  567.             is->audio_buf_size = 0;  
  568.             is->audio_buf_index = 0;  
  569.             memset(&is->audio_pkt, 0, sizeof(is->audio_pkt));  
  570.             packet_queue_init(&is->audioq);  
  571.             SDL_PauseAudio(0);  
  572.             break;  
  573.         case AVMEDIA_TYPE_VIDEO:  
  574.             is->videoStream = stream_index;  
  575.             is->video_st = pFormatCtx->streams[stream_index];  
  576.               
  577.             packet_queue_init(&is->videoq);  
  578.             is->video_tid = SDL_CreateThread(video_thread, "video_thread",is);  
  579.             is->sws_ctx =  
  580.             sws_getContext  
  581.             (  
  582.              is->video_st->codec->width,  
  583.              is->video_st->codec->height,  
  584.              is->video_st->codec->pix_fmt,  
  585.              is->video_st->codec->width,  
  586.              is->video_st->codec->height,  
  587.              PIX_FMT_YUV420P,  
  588.              SWS_BILINEAR,  
  589.              NULL,  
  590.              NULL,  
  591.              NULL  
  592.              );  
  593.             break;  
  594.         default:  
  595.             break;  
  596.     }  
  597.     return 0;  
  598. }  
  599.   
  600. int decode_interrupt_cb(void *opaque) {  
  601.     return (global_video_state && global_video_state->quit);  
  602. }  
  603.   
  604. int decode_thread(void *arg) {  
  605.       
  606.     VideoState *is = (VideoState *)arg;  
  607.     AVFormatContext *pFormatCtx = NULL;  
  608.     AVPacket pkt1, *packet = &pkt1;  
  609.       
  610.     int video_index = -1;  
  611.     int audio_index = -1;  
  612.     int i;  
  613.       
  614.     AVDictionary *io_dict = NULL;  
  615.     AVIOInterruptCB callback;  
  616.       
  617.     is->videoStream=-1;  
  618.     is->audioStream=-1;  
  619.       
  620.     global_video_state = is;  
  621.     // will interrupt blocking functions if we quit!  
  622.     callback.callback = decode_interrupt_cb;  
  623.     callback.opaque = is;  
  624.     if (avio_open2(&is->io_context, is->filename, 0, &callback, &io_dict))  
  625.     {  
  626.         fprintf(stderr, "Unable to open I/O for %s\n", is->filename);  
  627.         return -1;  
  628.     }  
  629.       
  630.     // Open video file  
  631.     if(avformat_open_input(&pFormatCtx, is->filename, NULL, NULL)!=0)  
  632.         return -1; // Couldn't open file  
  633.       
  634.     is->pFormatCtx = pFormatCtx;  
  635.       
  636.     // Retrieve stream information  
  637.     if(avformat_find_stream_info(pFormatCtx, NULL)<0)  
  638.         return -1; // Couldn't find stream information  
  639.       
  640.     // Dump information about file onto standard error  
  641.     av_dump_format(pFormatCtx, 0, is->filename, 0);  
  642.       
  643.     // Find the first video stream  
  644.       
  645.     for(i=0; i<pFormatCtx->nb_streams; i++) {  
  646.         if(pFormatCtx->streams[i]->codec->codec_type==AVMEDIA_TYPE_VIDEO &&  
  647.            video_index < 0) {  
  648.             video_index=i;  
  649.         }  
  650.         if(pFormatCtx->streams[i]->codec->codec_type==AVMEDIA_TYPE_AUDIO &&  
  651.            audio_index < 0) {  
  652.             audio_index=i;  
  653.         }  
  654.     }  
  655.     if(audio_index >= 0) {  
  656.         stream_component_open(is, audio_index);  
  657.     }  
  658.     if(video_index >= 0) {  
  659.         stream_component_open(is, video_index);  
  660.     }  
  661.       
  662.     if(is->videoStream < 0 || is->audioStream < 0) {  
  663.         fprintf(stderr, "%s: could not open codecs\n", is->filename);  
  664.         goto fail;  
  665.     }  
  666.       
  667.     // main decode loop  
  668.       
  669.     for(;;) {  
  670.         if(is->quit) {  
  671.             break;  
  672.         }  
  673.         // seek stuff goes here  
  674.         if(is->audioq.size > MAX_AUDIOQ_SIZE ||  
  675.            is->videoq.size > MAX_VIDEOQ_SIZE) {  
  676.             SDL_Delay(10);  
  677.             continue;  
  678.         }  
  679.         if(av_read_frame(is->pFormatCtx, packet) < 0) {  
  680.             if(is->pFormatCtx->pb->error == 0) {  
  681.                 SDL_Delay(100); /* no error; wait for user input */  
  682.                 continue;  
  683.             } else {  
  684.                 break;  
  685.             }  
  686.         }  
  687.         // Is this a packet from the video stream?  
  688.         if(packet->stream_index == is->videoStream) {  
  689.             packet_queue_put(&is->videoq, packet);  
  690.         } else if(packet->stream_index == is->audioStream) {  
  691.             packet_queue_put(&is->audioq, packet);  
  692.         } else {  
  693.             av_free_packet(packet);  
  694.         }  
  695.     }  
  696.     /* all done - wait for it */  
  697.     while(!is->quit) {  
  698.         SDL_Delay(100);  
  699.     }  
  700.       
  701. fail:  
  702.     if(1){  
  703.         SDL_Event event;  
  704.         event.type = FF_QUIT_EVENT;  
  705.         event.user.data1 = is;  
  706.         SDL_PushEvent(&event);  
  707.     }  
  708.     return 0;  
  709. }  
  710.   
  711. int main(int argc, char *argv[]) {  
  712.       
  713.     SDL_Event       event;  
  714.       
  715.     VideoState      *is;  
  716.       
  717.     struct SDL_Window     *pScreen;  
  718.     struct SDL_Renderer   *pRenderer;  
  719.       
  720.       
  721.     is = av_mallocz(sizeof(VideoState));  
  722.       
  723.     if(argc < 2) {  
  724.         fprintf(stderr, "Usage: test <file>\n");  
  725.         exit(1);  
  726.     }  
  727.     // Register all formats and codecs  
  728.     av_register_all();  
  729.       
  730.     if(SDL_Init(SDL_INIT_VIDEO | SDL_INIT_AUDIO | SDL_INIT_TIMER)) {  
  731.         fprintf(stderr, "Could not initialize SDL - %s\n", SDL_GetError());  
  732.         exit(1);  
  733.     }  
  734.       
  735.     // Make a screen to put our video  
  736. //#ifndef __DARWIN__  
  737. //    screen = SDL_SetVideoMode(640, 480, 0, 0);  
  738. //#else  
  739. //    screen = SDL_SetVideoMode(640, 480, 24, 0);  
  740. //#endif  
  741.       
  742.     pScreen = SDL_CreateWindow("audio & video", SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED, 672,376, SDL_WINDOW_OPENGL);  
  743.       
  744.     if(!pScreen) {  
  745.         fprintf(stderr, "SDL: could not set video mode - exiting\n");  
  746.         exit(1);  
  747.     }  
  748.     //SDL_Window *windows = pScreen;  
  749.       
  750.     //pScreen->windowed;  
  751.     pRenderer = SDL_CreateRenderer(pScreen, -1, 0);  
  752.       
  753.     SDL_Texture* texture = SDL_CreateTexture(pRenderer, SDL_PIXELFORMAT_IYUV, SDL_TEXTUREACCESS_STREAMING, 672, 376);  
  754.       
  755.       
  756.     av_strlcpy(is->filename, argv[1], 1024);  
  757.       
  758.     is->pictq_mutex = SDL_CreateMutex();  
  759.     is->pictq_cond = SDL_CreateCond();  
  760.       
  761.     schedule_refresh(is, 40);  
  762.       
  763.     is->parse_tid = SDL_CreateThread(decode_thread,"parse_thread", is);  
  764.     if(!is->parse_tid) {  
  765.         av_free(is);  
  766.         return -1;  
  767.     }  
  768.     for(;;) {  
  769.           
  770.         SDL_WaitEvent(&event);  
  771.         switch(event.type) {  
  772.             case FF_QUIT_EVENT:  
  773.             case SDL_QUIT:  
  774.                 is->quit = 1;  
  775.                 /* 
  776.                  * If the video has finished playing, then both the picture and 
  777.                  * audio queues are waiting for more data.  Make them stop 
  778.                  * waiting and terminate normally. 
  779.                  */  
  780.                 SDL_CondSignal(is->audioq.cond);  
  781.                 SDL_CondSignal(is->videoq.cond);  
  782.                 SDL_Quit();  
  783.                 return 0;  
  784.                 break;  
  785.             case FF_ALLOC_EVENT:  
  786.                 alloc_picture(event.user.data1,pRenderer);  
  787.                 break;  
  788.             case FF_REFRESH_EVENT:  
  789.                 video_refresh_timer(event.user.data1,pRenderer,texture);  
  790.                 break;  
  791.             default:  
  792.                 break;  
  793.         }  
  794.     }  
  795.       
  796.     SDL_DestroyTexture(texture);  
  797.       
  798.     return 0;  
  799.       
  800. }  

The overall idea of this code: run a parse thread that demuxes the file, putting audio packets into the audio queue and video packets into the video queue.

Once video packets start arriving, a separate thread decodes them; after each frame is decoded, an event is pushed so that the main thread displays the decoded frame.



Pitfalls

OpenGL-related operations must be performed on the main thread — the SDL_UpdateTexture function, for example.

While modifying int queue_picture(VideoState *is, AVFrame *pFrame), I mistakenly wrote to the SDL_Texture inside that function (which runs on the video thread).

As a result, the program crashed right after SDL_UpdateTexture( vp->texture, &rect, pFrameYUV->data[0], pFrameYUV->linesize[0] );

at renderdata->glEnable(data->type); with a bad_access error. At first I assumed pFrameYUV was a NULL pointer, but after checking everywhere I couldn't find the cause.

Then it occurred to me: don't OpenGL operations have to happen on the main thread?

I moved the SDL_UpdateTexture call into the video_display function (which runs in the main thread's event loop), and the problem was solved.


Run result

(screenshot not preserved)

The libraries linked are as follows:

(list not preserved)