RTSP Client Using OpenRTSP (live555) with H.264/MJPEG

This article describes how to implement an RTSP client on top of the live555 library: adding a custom callback to obtain each frame, disabling file output, enlarging the receive buffer for high-definition streams, and fixing several issues that appear when the client is started and stopped repeatedly.


Reposted from: http://blog.xuite.net/antony0604/blog/130505326

Some years ago, about five years back, I implemented an RTSP client program.

Now, for work, I need to implement an RTSP client again: one that receives RTSP/RTP audio/video streams, decodes them, and plays them back.

So I dug out the source code I had studied before. A few things are done differently this time, so I am writing this post as a record.


1. First, build the client on top of openRTSP under live555/testprogs/: add openRTSP.cpp, playCommon.cpp and playCommon.hh to your own project. As distributed, this program writes the received video streams to output files.

    Also add the .hh headers under the include directory of each live555 subdirectory to the project, then run genWindowsMakefiles.cmd to generate the .mak files, open the .mak files with VC to build the individual .lib files, and link those libraries into the project.

 

2. Because we want to pull each frame out of the live555 library as soon as it arrives and decode it ourselves, we need to modify FileSink.cpp/.hh to provide a callback function that signals when a frame has been produced, plus a function to fetch that frame. The changes are as follows:

//FileSink.hh

// CALLBACK (and the BYTE type used below) are Windows types, so <windows.h> is assumed to be included
typedef void (CALLBACK* CALLBACK_FUN) (int, int, int); //add by antony 20130930

class FileSink: public MediaSink {
public:
  static FileSink* createNew(UsageEnvironment& env, char const* fileName,
        unsigned bufferSize = 20000,
        Boolean oneFilePerFrame = False);
  // "bufferSize" should be at least as large as the largest expected
  //   input frame.
  // "oneFilePerFrame" - if True - specifies that each input frame will
  //   be written to a separate file (using the presentation time as a
  //   file name suffix).  The default behavior ("oneFilePerFrame" == False)
  //   is to output all incoming data into a single file.

  void addData(unsigned char const* data, unsigned dataSize,
        struct timeval presentationTime);
  // (Available in case a client wants to add extra data to the output file)
  
  //add by antony 20130930 , begin
  CALLBACK_FUN m_CallbackFun;   // should be initialized to NULL in the FileSink constructor
  unsigned int m_iDataSize;     // size of the most recently received frame
  void SetCallback(CALLBACK_FUN chan_callback);
  BYTE *GetData(int *piDatasize);   // returns a pointer to the internal frame buffer (fBuffer)
  //add by antony 20130930 , end

//..............................

};

//FileSink.cpp

void FileSink::afterGettingFrame1(unsigned frameSize,
      struct timeval presentationTime) {
  addData(fBuffer, frameSize, presentationTime);

//add by antony 20130930 , begin
  m_iDataSize = frameSize;
  if (m_CallbackFun!=NULL)
    m_CallbackFun(0,0,0); // callback to notify the UI that a new frame is available
//add by antony 20130930 , end
  
  /* modify by antony 20131004, cause we don't want output file.
  if (fOutFid == NULL || fflush(fOutFid) == EOF) {
    // The output file has closed.  Handle this the same way as if the
    // input source had closed:
    onSourceClosure(this);

    stopPlaying();
    return;
  }
  */

  if (fPerFrameFileNameBuffer != NULL) {
    if (fOutFid != NULL) { fclose(fOutFid); fOutFid = NULL; }
  }

  // Then try getting the next frame:
  continuePlaying();
}

//add by antony 20130930 , begin
// Register a callback so the caller knows when a frame has been produced
void FileSink::SetCallback(CALLBACK_FUN chan_callback){
  m_CallbackFun = (CALLBACK_FUN)chan_callback;
}
// Fetch the data of the most recently received frame
BYTE * FileSink::GetData(int *piDatasize){
  *piDatasize = m_iDataSize;
  return fBuffer;
}
//add by antony 20130930 , end

3. Step 2 above modified FileSink.cpp so that we are notified when a frame is produced and can fetch it. We still need to modify playCommon.cpp so that it actually retrieves the frame through the FileSink: afterGettingFrame1 fires the callback, and the callback calls GetData and copies the frame into a global buffer.

//playCommon.cpp

// Code added to playCommon.cpp:

FileSink* g_fileSink = NULL; // add by antony 20100630

BYTE g_XVidBuffer[0xffffff];     // global buffer that holds the latest frame
int g_XVidBuffer_Len = 0;
typedef void (CALLBACK* CALLBACK_FUN) (int, int, int);
void CALLBACK ChannelCallback(int iParam1, int iParam2, int iEvent)
{
  if (g_fileSink){
    int iDataSize=0;
    BYTE *pData;
    pData = g_fileSink->GetData(&iDataSize); // fetch the frame and its size here
    if (iDataSize>=300){ // simple sanity check on the frame size

      memcpy(g_XVidBuffer,pData,iDataSize); // copy the frame into the global buffer so later code can read it
      g_XVidBuffer_Len = iDataSize;

    } else {
      char sTemp[MAX_PATH];
      sprintf(sTemp, "error   ==> len:%06d, data: %02x %02x %02x %02x %02x %02x %02x %02x \n",iDataSize,
        pData[0],pData[1],pData[2],pData[3],pData[4],pData[5],pData[6],pData[7]);
      OutputDebugString(sTemp);
    }
  }
}

// Code modified in playCommon.cpp:

void setupStreams() {

  //........................

 FileSink* fileSink;

  //............................

 // add by antony 20131002, begin
 g_fileSink = fileSink; // keep a global pointer to the sink so ChannelCallback can reach it
 // SetCallback and GetData are our own additions (implemented in liveMedia/FileSink.cpp in step 2) for pulling out the stream content
 g_fileSink->SetCallback((CALLBACK_FUN)ChannelCallback); // callback invoked whenever data is received
 // add by antony 20131002, end

  //...........................

}

4. As noted in step 1, the openRTSP program produces output files, but our program does not want to write files; it wants to decode and play the stream directly. So we comment out the code that creates the output files in FileSink.cpp and H264VideoFileSink.cpp.

 

FileSink* FileSink::createNew(UsageEnvironment& env, char const* fileName,
         unsigned bufferSize, Boolean oneFilePerFrame) {
  do {
    FILE* fid;
    char const* perFrameFileNamePrefix;
    /* modify by antony 20131004, cause we don't want output file.
    if (oneFilePerFrame) {
      // Create the fid for each frame
      fid = NULL;
      perFrameFileNamePrefix = fileName;
    } else {
      // Normal case: create the fid once
      fid = OpenOutputFile(env, fileName);
      if (fid == NULL) break;
      perFrameFileNamePrefix = NULL;
    }
    */
    fid = NULL;//modify by antony 20131004, cause we don't want output file.
    perFrameFileNamePrefix = NULL;//modify by antony 20131004, cause we don't want output file.
    
    return new FileSink(env, fid, bufferSize, perFrameFileNamePrefix);
  } while (0);

  return NULL;
}

The same change is made in FileSink::afterGettingFrame1 (already shown in step 2 above), where the fflush/onSourceClosure block is commented out.

H264VideoFileSink*
H264VideoFileSink::createNew(UsageEnvironment& env, char const* fileName,
        char const* sPropParameterSetsStr,
        unsigned bufferSize, Boolean oneFilePerFrame) {
  do {
    FILE* fid;
    char const* perFrameFileNamePrefix;
    /* modify by antony 20131004, cause we don't want output file.
    if (oneFilePerFrame) {
      // Create the fid for each frame
      fid = NULL;
      perFrameFileNamePrefix = fileName;
    } else {
      // Normal case: create the fid once
      fid = OpenOutputFile(env, fileName);
      if (fid == NULL) break;
      perFrameFileNamePrefix = NULL;
    }
    */
    fid = NULL;//modify by antony 20131004, cause we don't want output file.
    perFrameFileNamePrefix = NULL;//modify by antony 20131004, cause we don't want output file.

    return new H264VideoFileSink(env, fid, sPropParameterSetsStr, bufferSize, perFrameFileNamePrefix);
  } while (0);

  return NULL;
}

 

5. The H.264 stream data itself does not carry the SPS/PPS, so we have to fetch the SPS/PPS (from the SDP's sprop-parameter-sets) and prepend it to each frame before handing the frame to the decoder. Also, the default space reserved for each frame is only 100000 bytes, which can be too small for an H.264 I-frame from a megapixel camera, so we enlarge it to 3 MB. (A sketch of the application side that consumes these frames follows the code below.)

//playCommon.cpp

//100000 ==> 3*1024*1024 , enlarge the amount of data the file sink can hold per frame
unsigned fileSinkBufferSize = 3*1024*1024; // modify by antony 20131002

//.................................................

BYTE g_ExtraSPS[1024]; // holds the H.264 SPS/PPS data (wrapped in start codes)
int g_ExtraSPS_Len = 0;
typedef void (CALLBACK* CALLBACK_FUN) (int, int, int);
CALLBACK_FUN  g_CallbackFun = NULL;
void CALLBACK ChannelCallback(int iParam1, int iParam2, int iEvent)
{
  if (g_fileSink){
    int iDataSize=0;
    BYTE *pData;
    pData = g_fileSink->GetData(&iDataSize);
    if (iDataSize>=300){ // simple sanity check on the frame size

      // Prepend the SPS/PPS (already wrapped in start codes) to the frame data
      memcpy(g_XVidBuffer,g_ExtraSPS,g_ExtraSPS_Len);
      memcpy(g_XVidBuffer+g_ExtraSPS_Len,pData,iDataSize);
      g_XVidBuffer_Len = g_ExtraSPS_Len+ iDataSize;

      char sTemp[MAX_PATH];
      sprintf(sTemp, "len:%06d, g_ExtraSPS_Len:%d , data: %02x %02x %02x %02x %02x %02x %02x %02x \n",iDataSize,
        g_ExtraSPS_Len,
        pData[0],pData[1],pData[2],pData[3],pData[4],pData[5],pData[6],pData[7]);
      //OutputDebugString(sTemp);
      if (g_CallbackFun!=NULL){
        g_CallbackFun(g_ExtraSPS_Len,iParam2, iEvent); // notify the application that a frame is ready in g_XVidBuffer
      }
    } else {
      char sTemp[MAX_PATH];
      sprintf(sTemp, "error   ==> len:%06d, data: %02x %02x %02x %02x %02x %02x %02x %02x \n",iDataSize,
        pData[0],pData[1],pData[2],pData[3],pData[4],pData[5],pData[6],pData[7]);
      //OutputDebugString(sTemp);
    }
  }
}

 FileSink* fileSink;
 if (strcmp(subsession->mediumName(), "audio") == 0 &&
     (strcmp(subsession->codecName(), "AMR") == 0 ||
      strcmp(subsession->codecName(), "AMR-WB") == 0)) {
   // For AMR audio streams, we use a special sink that inserts AMR frame hdrs:
   fileSink = AMRAudioFileSink::createNew(*env, outFileName,
       fileSinkBufferSize, oneFilePerFrame);
 } else if (strcmp(subsession->mediumName(), "video") == 0 &&
     (strcmp(subsession->codecName(), "H264") == 0)) {
      OutputDebugString("H264VideoFileSink");
   // For H.264 video stream, we use a special sink that insert start_codes:
   fileSink = H264VideoFileSink::createNew(*env, outFileName,
        subsession->fmtp_spropparametersets(),
        fileSinkBufferSize, oneFilePerFrame);
    // add by antony 20131002, begin
    // Parse the Base64 sprop-parameter-sets from the SDP into SPS/PPS NAL units
    unsigned int num=0;
    SPropRecord * sps=parseSPropParameterSets(subsession->fmtp_spropparametersets(),num);
    struct timeval timeNow;
    gettimeofday(&timeNow, NULL);
    g_ExtraSPS_Len = 0;
    unsigned char start_code[4] = {0x00, 0x00, 0x00, 0x01};
    for(unsigned int i=0;i<num;i++){
      // Store each parameter set as [start code][NAL unit]
      memcpy(&g_ExtraSPS[g_ExtraSPS_Len],start_code,4);
      g_ExtraSPS_Len+=4;
      memcpy(&g_ExtraSPS[g_ExtraSPS_Len],sps[i].sPropBytes,sps[i].sPropLength);
      g_ExtraSPS_Len+=sps[i].sPropLength;
    }
    // One more start code so the frame NAL copied right after g_ExtraSPS also gets its start code
    memcpy(&g_ExtraSPS[g_ExtraSPS_Len],start_code,4);
    g_ExtraSPS_Len+=4;
    delete[] sps;
    // add by antony 20131002, end

 } else {
   // Normal case:
   g_ExtraSPS_Len = 0; //add by antony 20131003
   fileSink = FileSink::createNew(*env, outFileName,
      fileSinkBufferSize, oneFilePerFrame);
 }
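
For completeness, here is a minimal application-side sketch (not part of the original code) of how the frames handed over by ChannelCallback might be consumed. It assumes a hypothetical decoder entry point DecodeAndDisplay(); the names OnFrameReady, PollAndDecode, InstallFrameCallback, g_FrameLock and g_LatestFrame are likewise illustrative. One point the original globals do not address is that ChannelCallback runs on the live555 event-loop thread, so the copy is guarded with a critical section before another thread decodes it.

//Application code (sketch only)
#include <windows.h>
#include <vector>

extern BYTE g_XVidBuffer[];                  // filled by ChannelCallback in playCommon.cpp
extern int  g_XVidBuffer_Len;
typedef void (CALLBACK* CALLBACK_FUN) (int, int, int);
extern CALLBACK_FUN g_CallbackFun;           // set below, invoked by ChannelCallback

static CRITICAL_SECTION g_FrameLock;         // guards the copy of the latest frame
static std::vector<BYTE> g_LatestFrame;      // application-owned copy (SPS/PPS already prepended)

void DecodeAndDisplay(const BYTE* data, int len);   // hypothetical decoder entry point

// Runs on the live555 event-loop thread, called from ChannelCallback via g_CallbackFun.
void CALLBACK OnFrameReady(int iSpsLen, int iParam2, int iEvent)
{
  EnterCriticalSection(&g_FrameLock);
  g_LatestFrame.assign(g_XVidBuffer, g_XVidBuffer + g_XVidBuffer_Len);
  LeaveCriticalSection(&g_FrameLock);
}

// Called from the UI/decoder thread, e.g. on a timer.
void PollAndDecode()
{
  std::vector<BYTE> frame;
  EnterCriticalSection(&g_FrameLock);
  frame.swap(g_LatestFrame);                 // take ownership of the latest frame, if any
  LeaveCriticalSection(&g_FrameLock);
  if (!frame.empty())
    DecodeAndDisplay(&frame[0], (int)frame.size());
}

void InstallFrameCallback()
{
  InitializeCriticalSection(&g_FrameLock);
  g_CallbackFun = OnFrameReady;              // picked up by ChannelCallback in playCommon.cpp
}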

 

6. openRTSP is a console program meant to run once per invocation, so when the client is started and stopped repeatedly within the same process, a few places need to be modified. (A sketch of a restartable event loop follows after the code below.)

//playCommon.cpp

void setupStreams() {
  static MediaSubsessionIterator* setupIter = NULL;
  if (setupIter == NULL) setupIter = new MediaSubsessionIterator(*session);
  while ((subsession = setupIter->next()) != NULL) {
    // We have another subsession left to set up:
    if (subsession->clientPortNum() == 0) continue; // port # was not set

 

    setupSubsession(subsession, streamUsingTCP, continueAfterSETUP);
    return;
  }

 

  // We're done setting up subsessions.
  delete setupIter;
  setupIter = NULL; // add by antony to prevent crash when restart rtsp client, 20131002
  if (!madeProgress) shutdown();

//....................

}

Boolean areAlreadyShuttingDown = False;
int shutdownExitCode;
void shutdown(int exitCode) {
  if (areAlreadyShuttingDown) return; // in case we're called after receiving a RTCP "BYE" while in the middle of a "TEARDOWN".
  areAlreadyShuttingDown = True;

  shutdownExitCode = exitCode;
  if (env != NULL) {
    env->taskScheduler().unscheduleDelayedTask(sessionTimerTask);
    env->taskScheduler().unscheduleDelayedTask(arrivalCheckTimerTask);
    env->taskScheduler().unscheduleDelayedTask(interPacketGapCheckTimerTask);
    env->taskScheduler().unscheduleDelayedTask(qosMeasurementTimerTask);
  }

  if (qosMeasurementIntervalMS > 0) {
    printQOSData(exitCode);
  }

  // Teardown, then shutdown, any outstanding RTP/RTCP subsessions
  if (session != NULL) {
    tearDownSession(session, continueAfterTEARDOWN);
  } else {
    continueAfterTEARDOWN(NULL, 0, NULL);
  }
  areAlreadyShuttingDown = False; // reset to the default, otherwise a restarted RTSP client could never shut down again; add by antony 20131003
}


void continueAfterTEARDOWN(RTSPClient*, int /*resultCode*/, char* /*resultString*/) {
  // Now that we've stopped any more incoming data from arriving, close our output files:
  closeMediaSinks();
  Medium::close(session);

  // Finally, shut down our client:
  delete ourAuthenticator;
  Medium::close(ourClient);

  // Adios...
  //exit(shutdownExitCode); //modify by antony 20131002, commented out to prevent a crash at shutdown (exit() would also terminate the whole host application)
}
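
Related to the restart problem above, the original doEventLoop() call in playCommon.cpp never returns. One way to make the whole client start and stop cleanly from a GUI application is to pass doEventLoop() a watch variable (a standard live555 TaskScheduler feature) and run the client on a worker thread. The sketch below is an assumption about how this could be wired up, not code from the original post; RtspClientThread, StopRtspClient and g_eventLoopWatchVariable are illustrative names.

//Application code (sketch only)
#include <windows.h>
#include "BasicUsageEnvironment.hh"

extern UsageEnvironment* env;                // created during openRTSP startup in playCommon.cpp

static char g_eventLoopWatchVariable = 0;    // doEventLoop() returns once this becomes non-zero

// Worker thread: start with CreateThread(NULL, 0, RtspClientThread, NULL, 0, NULL);
DWORD WINAPI RtspClientThread(LPVOID /*param*/)
{
  g_eventLoopWatchVariable = 0;
  // ... openRTSP startup as in playCommon.cpp: create the RTSP client,
  //     send DESCRIBE / SETUP / PLAY, which also installs ChannelCallback ...

  // The stock openRTSP calls doEventLoop() with no argument and never returns;
  // passing a watch variable lets the loop be stopped from outside:
  env->taskScheduler().doEventLoop(&g_eventLoopWatchVariable);

  // The loop has returned: clean up the session here (e.g. via the shutdown() path shown above)
  // on this same thread before the next run.
  return 0;
}

// Called from the UI thread. live555 is single-threaded, so the only cross-thread action
// taken here is flipping the watch variable; all RTSP cleanup stays on the loop thread.
void StopRtspClient()
{
  g_eventLoopWatchVariable = 1;
}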

