Experiments in Streaming Content in Java ME (3)

Implementing RTP Streaming Playback
This article describes an approach to streaming media playback over RTP, including the design and implementation of core classes such as RTPSourceStream and StreamingDataSource, and presents a simple MIDlet that connects to a server and receives media packets.

Back to RTPSourceStream and StreamingDataSource

With the protocol handler in place, let's revisit the RTPSourceStream and StreamingDataSource classes from earlier, which until now contained only placeholder methods. The StreamingDataSource is simple to code:

 

import java.io.IOException;
import javax.microedition.media.Control;
import javax.microedition.media.protocol.DataSource;
import javax.microedition.media.protocol.SourceStream;

public class StreamingDataSource extends DataSource {

  // the full URL like locator to the destination
  private String locator;

  // the internal streams that connect to the source
  // in this case, there is only one
  private SourceStream[] streams;

  // is this connected to its source?
  private boolean connected = false;

  public StreamingDataSource(String locator) {
      super(locator);
      setLocator(locator);
  }

  public void setLocator(String locator) { this.locator = locator; }

  public String getLocator() { return locator; }

  public void connect() throws IOException {

    // if already connected, return
    if (connected) return;

    // if locator is null, then can't actually connect
    if (locator == null)
      throw new IOException("locator is null");

    // now populate the sourcestream array
    streams = new RTPSourceStream[1];

    // with a new RTPSourceStream
    streams[0] = new RTPSourceStream(locator);

    // set flag
    connected = true;

    }

  public void disconnect() {

    // if there are any streams
    if (streams != null) {

      // close the individual stream
        try {
          ((RTPSourceStream)streams[0]).close();
        } catch(IOException ioex) {} // silent
    }

    // and set the flag
    connected = false;
  }

  public void start() throws IOException {

    if(!connected) return;

    // start the underlying stream
    ((RTPSourceStream)streams[0]).start();

  }

  public void stop() throws IOException {

    if(!connected) return;

    // stop the underlying stream by closing it
    ((RTPSourceStream)streams[0]).close();

  }

  public String getContentType() {
    // for the purposes of this article, it is only video/mpeg
    return "video/mpeg";
  }

  public Control[] getControls() { return new Control[0]; }

  public Control getControl(String controlType) { return null; }

  public SourceStream[] getStreams() { return streams; }

}

 

The main work takes place in the connect() method, which creates a new RTPSourceStream with the requested address. Notice that the getContentType() method returns video/mpeg as the default content type; change this to a content type supported by your system. Of course, the value should not be hard-coded at all; it should be derived from the actual support for different media types.
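For instance, a less hard-coded getContentType() could guess the type from the file extension in the locator. This is only a sketch under that assumption; the guessContentType() helper and its extension-to-MIME mapping are mine, not part of the article's classes:

  // A minimal, illustrative extension-based lookup. Base the mapping
  // on the media types your device actually supports.
  private String guessContentType(String locator) {

    // fall back to the article's default when in doubt
    if (locator == null) return "video/mpeg";

    String loc = locator.toLowerCase();
    if (loc.endsWith(".mp4")) return "video/mpeg";
    if (loc.endsWith(".3gp")) return "video/3gpp";
    if (loc.endsWith(".amr")) return "audio/amr";

    return "video/mpeg";
  }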

The next listing shows the complete RTPSourceStream class, which, along with RTSPProtocolHandler, does the bulk of the work in connecting to the server and getting the RTP packets from it:

 

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import javax.microedition.io.Datagram;
import javax.microedition.io.Connector;
import javax.microedition.media.Control;
import javax.microedition.io.SocketConnection;
import javax.microedition.io.DatagramConnection;
import javax.microedition.media.protocol.SourceStream;
import javax.microedition.media.protocol.ContentDescriptor;

public class RTPSourceStream implements SourceStream {

    private RTSPProtocolHandler handler;

    private InputStream is;
    private OutputStream os;

    private DatagramConnection socket;

    public RTPSourceStream(String address) throws IOException {

        // create the protocol handler and set it up so that the
        // application is ready to read data

        // create a socketconnection to the remote host
        // (in this case I have set it up so that its localhost, you can
        // change it to wherever your server resides)
        SocketConnection sc =
          (SocketConnection)Connector.open("socket://localhost:554");

        // open the input and output streams
        is = sc.openInputStream();
        os = sc.openOutputStream();

        // and initialize the handler
        handler = new RTSPProtocolHandler(address, is, os);

        // send the basic signals to get it ready
        handler.doDescribe();
        handler.doSetup();
    }

    public void start() throws IOException {

      // open a local datagram connection on port 8080 to receive data on
      socket = (DatagramConnection)Connector.open("datagram://:8080");

      // and send the PLAY command
      handler.doPlay();
    }

    public void close() throws IOException {

        if(handler != null) handler.doTeardown();

        is.close();
        os.close();
    }

    public int read(byte[] buffer, int offset, int length)
      throws IOException {

      // create a byte array which will be used to read the datagram
      byte[] fullPkt = new byte[length];

      // the new Datagram
      Datagram packet = socket.newDatagram(fullPkt, length);

      // receive it
      socket.receive(packet);

      // extract the actual RTP packet's media data into the supplied buffer
      RTPPacket rtpPacket = getRTPPacket(packet, packet.getData());
      byte[] data = rtpPacket.getData();
      System.arraycopy(data, 0, buffer, offset, data.length);

      // debug
      System.err.println(rtpPacket + " with media length: " + data.length);

      // and return the media data's length
      return data.length;
    }

    // extracts the RTP packet from each datagram packet received
    private RTPPacket getRTPPacket(Datagram packet, byte[] buf) {

        // SSRC
        long SSRC = 0;

        // the payload type
        byte PT = 0;

        // the time stamp
        int timeStamp = 0;

        // the sequence number of this packet
        short seqNo = 0;


        // see http://www.networksorcery.com/enp/protocol/rtp.htm
        // for detailed description of the packet and its data
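        // RTP fixed header (RFC 3550), 12 bytes:
        //   byte 0    : V (2 bits), P (1), X (1), CC (4)
        //   byte 1    : M (1 bit), PT (7 bits)
        //   bytes 2-3 : sequence number
        //   bytes 4-7 : timestamp
        //   bytes 8-11: SSRC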
        PT =
          (byte)((buf[1] & 0xff) & 0x7f);

        seqNo =
          (short)((buf[2] << 8) | ( buf[3] & 0xff));

        timeStamp =
          (((buf[4] & 0xff) << 24) | ((buf[5] & 0xff) << 16) |
            ((buf[6] & 0xff) << 8) | (buf[7] & 0xff)) ;

        SSRC =
          (((buf[8] & 0xff) << 24) | ((buf[9] & 0xff) << 16) |
            ((buf[10] & 0xff) << 8) | (buf[11] & 0xff));


        // create an RTPPacket based on these values
        RTPPacket rtpPkt = new RTPPacket();

        // the sequence number
        rtpPkt.setSequenceNumber(seqNo);

        // the timestamp
        rtpPkt.setTimeStamp(timeStamp);

        // the SSRC
        rtpPkt.setSSRC(SSRC);

        // the payload type
        rtpPkt.setPayloadType(PT);

        // the actual payload (the media data) comes after the 12 byte
        // header, which is constant
        byte[] payload = new byte[packet.getLength() - 12];

        System.arraycopy(buf, 12, payload, 0, payload.length);

        // set the payload on the RTP Packet
        rtpPkt.setData(payload);

        // and return the packet
        return rtpPkt;

    }

    public long seek(long where) throws IOException {
     throw new IOException("cannot seek");
    }

    public long tell() { return -1; }

    public int getSeekType() { return NOT_SEEKABLE; }

    public Control[] getControls() { return null; }

    public Control getControl(String controlType) { return null; }

    public long getContentLength() { return -1; }

    public int getTransferSize() { return -1; }

    public ContentDescriptor getContentDescriptor() {
        return new ContentDescriptor("audio/rtp");
    }
}

 

The constructor for the RTPSourceStream creates a SocketConnection to the remote server (hard-coded to the local server and port here, but you can change this to accept any server or port). It then opens the input and output streams, which it uses to create the RTSPProtocolHandler. Finally, using this handler, it sends the DESCRIBE and SETUP commands to the remote server to get the server ready to send the packets. The actual delivery doesn't start until the start() method is called by the StreamingDataSource, which opens a local datagram connection (hard-coded to port 8080 in this case) for receiving the packets and sends the PLAY command to start the flow. The actual reading of the packets is done in the read() method, which receives the individual datagrams, strips them down to RTPPacket instances (with the getRTPPacket() method), and copies the media data into the buffer supplied by the caller of read().
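One note on the listing: it relies on the RTPPacket holder class developed earlier in this series. If you don't have that code handy, a minimal version consistent with the calls made above would look something like the following sketch (the field names and toString() format are my own; only the setter and getter signatures matter here):

public class RTPPacket {

    // the RTP header fields parsed out of each datagram
    private short sequenceNumber;
    private int timeStamp;
    private long SSRC;
    private byte payloadType;

    // the media payload that follows the 12-byte header
    private byte[] data;

    public void setSequenceNumber(short seqNo) { this.sequenceNumber = seqNo; }
    public void setTimeStamp(int timeStamp) { this.timeStamp = timeStamp; }
    public void setSSRC(long SSRC) { this.SSRC = SSRC; }
    public void setPayloadType(byte payloadType) { this.payloadType = payloadType; }
    public void setData(byte[] data) { this.data = data; }

    public byte[] getData() { return data; }

    public String toString() {
        return "RTPPacket " + sequenceNumber +
               ", timestamp: " + timeStamp +
               ", payload type: " + payloadType;
    }
}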

A MIDlet to see if it works

With all the classes in place, let's write a simple MIDlet that creates a Player instance, which will use the StreamingDataSource to connect to the server and then get media packets from it. The Player interface is defined by MMAPI and lets you control the playback (or recording) of media. Instances of this interface are created using the Manager class from MMAPI's javax.microedition.media package (see the MMAPI tutorial). The following shows this rudimentary MIDlet:

 

import javax.microedition.media.Player;
import javax.microedition.midlet.MIDlet;
import javax.microedition.media.Manager;

public class StreamingMIDlet extends MIDlet {

  public void startApp() {

    try {

      // create Player instance, realize it and then try to start it
      Player player =
        Manager.createPlayer(
          new StreamingDataSource(
            "rtsp://localhost:554/sample_100kbit.mp4"));

      player.realize();

      player.start();

    } catch(Exception e) {
            e.printStackTrace();
    }
  }

  public void pauseApp() {}

  public void destroyApp(boolean unconditional) {}
}

 

So what should happen when you run this MIDlet in the Wireless Toolkit? I have purposely left out any code to display the resulting video on screen. When I run it in the toolkit, I know that I am receiving the packets because I see the debug statements shown in Figure 2.

 

 

Figure 2. Running StreamingMIDlet output

The RTP packets sent by the server are being received. The StreamingDataSource, along with the RTSPProtocolHandler and RTPSourceStream, is doing its job of making the streaming server send these packets. This is confirmed by looking at the streaming server's admin console, as shown in Figure 3.

 

 

Figure 3. Darwin's admin console shows that the file is being streamed.

Unfortunately, the player constructed by the Wireless Toolkit tries to read the entire content in one go. Even if I were to make a StreamingVideoControl, it would not display the video until it had read the whole file, thereby defeating the purpose of the streaming aspect of this whole experiment. So what needs to be done to achieve the full streaming experience?

Ideally, MMAPI should provide the means for developers to register their choice of Player for the playback of certain media. This could easily be achieved by providing a new method in the Manager class for registering (or overriding) MIME types or protocols with developer-made Player instances. For example, say I create a Player implementation that reads streaming data, called StreamingMPEGPlayer. With the Manager class, I should be able to say Manager.registerPlayer("video/mpeg", StreamingMPEGPlayer.class) or Manager.registerPlayer("rtsp", StreamingMPEGPlayer.class). MMAPI should then simply load this developer-made Player and use it as the means to read data from the developer-made DataSource.
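Again, to be clear: no registerPlayer() method exists in the current Manager class. The following sketch is purely hypothetical and only illustrates the kind of registry such a method could maintain:

import java.util.Hashtable;
import javax.microedition.media.Player;

// HYPOTHETICAL: nothing like this exists in MMAPI today. This sketches
// the registry that a Manager.registerPlayer() method could maintain.
public class PlayerRegistry {

    // maps a MIME type or protocol (e.g. "video/mpeg", "rtsp")
    // to a developer-supplied Player implementation class
    private static Hashtable registry = new Hashtable();

    public static void registerPlayer(String typeOrProtocol, Class playerClass) {
        registry.put(typeOrProtocol, playerClass);
    }

    // a Manager.createPlayer() lookalike would consult the registry
    // first, before falling back to the built-in players
    public static Player createRegisteredPlayer(String typeOrProtocol)
      throws Exception {
        Class playerClass = (Class)registry.get(typeOrProtocol);
        if (playerClass == null) return null; // no override registered
        return (Player)playerClass.newInstance();
    }
}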

In a nutshell, you need to be able to create an independent media player and register it as the instance of choice for playing the desired content. Unfortunately, this is not possible with the current MMAPI implementation, and this is the data-consumption conundrum I talked about earlier.

Of course, if you can test this code in a toolkit that does not need to read the complete data before displaying it (or for audio files, playing them), then you have achieved the aim of streaming data using the existing MMAPI implementation.

This experiment should prove that you can stream data with the current MMAPI implementation, but you may not be able to manipulate it in a useful manner until you have better control over the Manager and Player instances. I look forward to your comments and experiments using this code.

 

 
