Apple HTTP Live Streaming

Introduction

If you are interested in any of the following:

·        Streaming audio or video to iPhone, iPod touch, iPad, or Apple TV

·        Streaming live events without special server software

·        Sending video on demand with encryption and authentication

you should learn about HTTP Live Streaming.

HTTP Live Streaming lets you send audio and video over HTTP from an ordinary web server for playback on iOS-based devices—including iPhone, iPad, iPod touch, and Apple TV—and on desktop computers (Mac OS X). HTTP Live Streaming supports both live broadcasts and prerecorded content (video on demand). HTTP Live Streaming supports multiple alternate streams at different bit rates, and the client software can switch streams intelligently as network bandwidth changes. HTTP Live Streaming also provides for media encryption and user authentication over HTTPS, allowing publishers to protect their work.

All devices running iOS 3.0 and later include built-in client software for HTTP Live Streaming. The Safari browser can play HTTP streams within a webpage on iPad and desktop computers, and Safari launches a full-screen media player for HTTP streams on iOS devices with small screens, such as iPhone and iPod touch. Apple TV 2 and later includes an HTTP Live Streaming client.

Important: iPhone and iPad apps that send large amounts of audio or video data over cellular networks are required to use HTTP Live Streaming. See “Requirements for Apps.”

Safari plays HTTP Live streams natively as the source for the <video> tag. Mac OS X developers can use the QTKit and AVFoundation frameworks to create desktop applications that play HTTP Live streams, and iOS developers can use the Media Player and AVFoundation frameworks to create iOS apps.

Important: Always use the <video> tag to embed HTTP Live Streaming. Do not use the <object> or <embed> tags (except to specify fallback content).
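For example, a webpage might embed a stream like this (the playlist URL below is a placeholder, not a real stream):

```html
<!-- Hypothetical playlist URL; Safari plays the HLS stream natively. -->
<video src="http://www.example.com/stream/prog_index.m3u8"
       width="400" height="300"
       controls autoplay>
</video>
```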

Because it uses HTTP, this kind of streaming is automatically supported by nearly all edge servers, media distributors, caching systems, routers, and firewalls.

Note: Many existing streaming services require specialized servers to distribute content to end users. These servers require specialized skills to set up and maintain, and in a large-scale deployment this can be costly. HTTP Live Streaming avoids this by using standard HTTP to deliver the media. Additionally, HTTP Live Streaming is designed to work seamlessly in conjunction with media distribution networks for large-scale operations.

The HTTP Live Streaming specification is an IETF Internet-Draft.

At a Glance

HTTP Live Streaming is a way to send audio and video over HTTP from a web server to client software on the desktop or to iOS-based devices.

You Can Send Audio and Video Without Special Server Software

You can serve HTTP Live Streaming audio and video from an ordinary web server. The client software can be the Safari browser or an app that you’ve written for iOS or Mac OS X.

HTTP Live Streaming sends audio and video as a series of small files, typically about 10 seconds in duration, called media segment files. An index file, or playlist, gives the clients the URLs of the media segment files. The playlist can be periodically refreshed to accommodate live broadcasts, where media segment files are constantly being produced. You can embed a link to the playlist in a webpage or send it to an app that you’ve written.

Relevant Chapter: “HTTP Streaming Architecture”

You Can Send Live Streams or Video on Demand, with Optional Encryption

For video on demand from prerecorded media, Apple provides a free tool to make media segment files and playlists from MPEG-4 video or QuickTime movies with H.264 video compression, or audio files with AAC or MP3 compression. The playlists and media segment files can be used for video on demand or streaming radio, for example.

For live streams, Apple provides a free tool to make media segment files and playlists from live MPEG-2 transport streams carrying H.264 video, AAC audio, or MP3 audio. There are a number of hardware and software encoders that can create MPEG-2 transport streams carrying H.264 video and AAC audio in real time.

Either of these tools can be instructed to encrypt your media and generate decryption keys. You can use a single key for all your streams, a different key for each stream, or a series of randomly generated keys that change at intervals during a stream. Keys are further protected by the requirement for an initialization vector, which can also be set to change periodically.

Relevant Chapter: “Using HTTP Live Streaming”

Prerequisites

You should have a general understanding of common audio and video file formats and be familiar with how web servers and browsers work.

See Also

· iOS Human Interface Guidelines—how to design web content for iOS-based devices.

· HTTP Live Streaming protocol—the IETF Internet-Draft of the HTTP Live Streaming specification.

HTTP Streaming Architecture

HTTP Live Streaming allows you to send live or prerecorded audio and video, with support for encryption and authentication, from an ordinary web server to any device running iOS 3.0 or later (including iPad and Apple TV), or any computer with Safari 4.0 or later installed.

Overview

Conceptually, HTTP Live Streaming consists of three parts: the server component, the distribution component, and the client software.

The server component is responsible for taking input streams of media and encoding them digitally, encapsulating them in a format suitable for delivery, and preparing the encapsulated media for distribution.

The distribution component consists of standard web servers. They are responsible for accepting client requests and delivering prepared media and associated resources to the client. For large-scale distribution, edge networks or other content delivery networks can also be used.

The client software is responsible for determining the appropriate media to request, downloading those resources, and then reassembling them so that the media can be presented to the user in a continuous stream. Client software is included in iOS 3.0 and later and in computers with Safari 4.0 or later installed.

In a typical configuration, a hardware encoder takes audio-video input, encodes it as H.264 video and AAC audio, and outputs it in an MPEG-2 Transport Stream, which is then broken into a series of short media files by a software stream segmenter. These files are placed on a web server. The segmenter also creates and maintains an index file containing a list of the media files. The URL of the index file is published on the web server. Client software reads the index, then requests the listed media files in order and displays them without any pauses or gaps between segments.

An example of a simple HTTP streaming configuration is shown in Figure 1-1, “A basic configuration.”

Figure 1-1  A basic configuration

Input can be live or from a prerecorded source. It is typically encoded as MPEG-4 (H.264 video and AAC audio) and packaged in an MPEG-2 Transport Stream by off-the-shelf hardware. The MPEG-2 transport stream is then broken into segments and saved as a series of one or more .ts media files. This is typically accomplished using a software tool such as the Apple stream segmenter.

Audio-only streams can be a series of MPEG elementary audio files formatted as either AAC with ADTS headers or as MP3.

The segmenter also creates an index file, an .M3U8 playlist that contains a list of the media files along with metadata. The URL of the index file is accessed by clients, which then request the indexed files in sequence.

Server Components

The server requires a media encoder, which can be off-the-shelf hardware, and a way to break the encoded media into segments and save them as files, which can be software such as the media stream segmenter provided by Apple.

Media Encoder

The media encoder takes a real-time signal from an audio-video device, encodes the media, and encapsulates it for transport. Encoding should be set to a format supported by the client device, such as H.264 video and HE-AAC audio. Currently, the supported delivery format is MPEG-2 Transport Streams for audio-video, or MPEG elementary streams for audio-only.

The encoder delivers the encoded media in an MPEG-2 Transport Stream over the local network to the stream segmenter.

Note: MPEG-2 transport streams should not be confused with MPEG-2 video compression. The transport stream is a packaging format that can be used with a number of different compression formats. Only MPEG-2 transport streams with H.264 video and AAC audio are supported at this time for audio-video content. Audio-only content can be either MPEG-2 transport or MPEG elementary audio streams, either in AAC format with ADTS headers or in MP3 format.

Important: The video encoder should not change stream settings—such as video dimensions or codec type—in the midst of encoding a stream.

Stream Segmenter

The stream segmenter is a process—typically software—that reads the Transport Stream from the local network and divides it into a series of small media files of equal duration. Even though each segment is in a separate file, the video files are cut from a continuous stream and can be reconstructed seamlessly.

The segmenter also creates an index file containing references to the individual media files. Each time the segmenter completes a new media file, the index file is updated. The index is used to track the availability and location of the media files. The segmenter may also encrypt each media segment and create a key file as part of the process.

Media segments are saved as .ts files (MPEG-2 transport stream files). Index files are saved as .M3U8 playlists.

File Segmenter

If you already have a media file encoded using supported codecs, you can use a file segmenter to encapsulate it in an MPEG stream and break it into segments of equal length. The file segmenter allows you to use a library of existing audio and video files for sending video on demand via HTTP Live Streaming. The file segmenter performs the same tasks as the stream segmenter, but it takes files as input instead of streams.

Media Segment Files

The media segment files are normally produced by the stream segmenter, based on input from the encoder, and consist of a series of .ts files containing segments of an MPEG-2 Transport Stream carrying H.264 video and AAC audio. For an audio-only broadcast, the segmenter can produce MPEG elementary audio streams containing either AAC audio with ADTS headers or MP3 audio.

Index Files (Playlists)

Index files are normally produced by the stream segmenter or file segmenter, and saved as .M3U8 playlists, an extension of the .m3u format used for MP3 playlists.

Note: Because the index file format is an extension of the .m3u playlist format, and because the system also supports .mp3 audio media files, the client software may also be compatible with typical MP3 playlists used for streaming Internet radio.

Here is a very simple example of an index file, in the form of an .M3U8 playlist, that a segmenter might produce if the entire stream were contained in three unencrypted 10-second media files:
#EXTM3U
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:1
#EXTINF:10,
http://media.example.com/segment0.ts
#EXTINF:10,
http://media.example.com/segment1.ts
#EXTINF:10,
http://media.example.com/segment2.ts
#EXT-X-ENDLIST
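To illustrate how a client interprets such a playlist, here is a minimal sketch that parses the example above into its segment entries. This is not Apple's client software, and its tag handling is deliberately incomplete (it ignores #EXT-X-MEDIA-SEQUENCE, encryption tags, and comments):

```python
# Minimal illustrative parser for an HTTP Live Streaming index file (.M3U8).
# Real clients handle many more tags (keys, alternates, sequence numbers).

def parse_playlist(text):
    """Return (target_duration, ended, segments), where segments is a
    list of (duration_in_seconds, url) tuples in playlist order."""
    target_duration = None
    ended = False
    segments = []
    pending = None  # duration announced by the most recent #EXTINF line
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.startswith("#EXT-X-TARGETDURATION:"):
            target_duration = int(line.split(":", 1)[1])
        elif line.startswith("#EXTINF:"):
            pending = float(line.split(":", 1)[1].rstrip(","))
        elif line == "#EXT-X-ENDLIST":
            ended = True
        elif not line.startswith("#"):
            # A non-tag line is a media segment URL.
            segments.append((pending, line))
            pending = None
    return target_duration, ended, segments

example = """#EXTM3U
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:1
#EXTINF:10,
http://media.example.com/segment0.ts
#EXTINF:10,
http://media.example.com/segment1.ts
#EXTINF:10,
http://media.example.com/segment2.ts
#EXT-X-ENDLIST
"""

target, ended, segs = parse_playlist(example)
```

Because the #EXT-X-ENDLIST tag is present, a client would treat this as a complete (VOD) presentation rather than an ongoing broadcast.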

Note: You can use the file segmenter provided by Apple to generate a variety of example playlists, using an MPEG-4 video or AAC or MP3 audio file as a source. For details, see “Media File Segmenter.”

The index file may also contain URLs for encryption key files and alternate index files for different bandwidths. For details of the index file format, see the IETF Internet-Draft of the HTTP Live Streaming specification.

Index files are normally created by the same segmenter that creates the media segment files. Alternatively, it is possible to create the .M3U8 file and the media segment files independently, provided they conform to the published specification. For audio-only broadcasts, for example, you could create an .M3U8 file using a text editor, listing a series of existing .MP3 files.

Distribution Components

The distribution system is a web server or a web caching system that delivers the media files and index files to the client over HTTP. No custom server modules are required to deliver the content, and typically very little configuration is needed on the web server.

Recommended configuration is typically limited to specifying MIME-type associations for .M3U8 files and .ts files:

File extension     MIME type
.M3U8              application/x-mpegURL or vnd.apple.mpegURL
.ts                video/MP2T

Tuning time-to-live (TTL) values for .M3U8 files may also be necessary to achieve desired caching behavior for downstream web caches, as these files are frequently overwritten during live broadcasts, and the latest version should be downloaded for each request.
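On an Apache web server, for example, the MIME-type associations could be configured with a pair of AddType directives (Apache is used here only as an illustration; consult your web server's documentation for the equivalent setting):

```
# Hypothetical Apache configuration (httpd.conf or .htaccess):
# map the HTTP Live Streaming file extensions to their MIME types.
AddType application/x-mpegURL .m3u8
AddType video/MP2T .ts
```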

Client Component

The client software begins by fetching the index file, based on a URL identifying the stream. The index file in turn specifies the location of the available media files, decryption keys, and any alternate streams available. For the selected stream, the client downloads each available media file in sequence. Each file contains a consecutive segment of the stream. Once it has a sufficient amount of data downloaded, the client begins presenting the reassembled stream to the user.

The client is responsible for fetching any decryption keys, authenticating or presenting a user interface to allow authentication, and decrypting media files as needed.

This process continues until the client encounters the #EXT-X-ENDLIST tag in the index file. If no #EXT-X-ENDLIST tag is present, the index file is part of an ongoing broadcast. During ongoing broadcasts, the client loads a new version of the index file periodically. The client looks for new media files and encryption keys in the updated index and adds these URLs to its queue.

Using HTTP Live Streaming

Download the Tools

There are several tools available that can help you set up an HTTP Live Streaming service. The tools include a media stream segmenter, a media file segmenter, a stream validator, an ID3 tag generator, and a variant playlist generator.

The tools are frequently updated, so you should download the current version of the HTTP Live Streaming Tools from the Apple Developer website. You can access them if you are a member of the iPhone Developer Program. One way to navigate to the tools is to log onto connect.apple.com, then click iPhone under the Downloads heading.

Media Stream Segmenter

The mediastreamsegmenter command-line tool takes an MPEG-2 transport stream as an input and produces a series of equal-length files from it, suitable for use in HTTP Live Streaming. It can also generate index files (also known as playlists), encrypt the media, produce encryption keys, optimize the files by reducing overhead, and create the necessary files for automatically generating multiple stream alternates. For details, type man mediastreamsegmenter from the terminal window.

Usage example: mediastreamsegmenter -s 3 -D -f /Library/WebServer/Documents/stream 239.4.1.5:20103

The usage example captures a live stream from the network at address 239.4.1.5:20103 and creates media segment files and index files from it. The index files contain a list of the current three media segment files (-s 3). The media segment files are deleted after use (-D). The index files and media segment files are stored in the directory /Library/WebServer/Documents/stream.

Media File Segmenter

The mediafilesegmenter command-line tool takes an encoded media file as an input, wraps it in an MPEG-2 transport stream, and produces a series of equal-length files from it, suitable for use in HTTP Live Streaming. The media file segmenter can also produce index files (playlists) and decryption keys. The file segmenter behaves very much like the stream segmenter, but it works on existing files instead of streams coming from an encoder. For details, type man mediafilesegmenter from the terminal window.

Media Stream Validator

The mediastreamvalidator command-line tool examines the index files, stream alternates, and media segment files on a server and tests to determine whether they will work with HTTP Live Streaming clients. For details, type man mediastreamvalidator from the terminal window.

Variant Playlist Creator

The variantplaylistcreator command-line tool creates a master index file, or playlist, listing the index files for alternate streams at different bit rates, using the output of the stream or file segmenter. The segmenter must be invoked with the -generate-variant-playlist argument to produce the required output for the variant playlist creator. For details, type man variantplaylistcreator from the terminal window.

Metadata Tag Generator

The id3taggenerator command-line tool generates ID3 metadata tags. These tags can either be written to a file or inserted into outgoing stream segments. For details, see “Adding Timed Metadata.”

Session Types

The HTTP Live Streaming protocol supports live broadcast sessions and video on demand (VOD) sessions. Live sessions can be presented as a complete record of an event, or as a sliding window with a limited time range the user can seek within.

For live sessions, as new media files are created and made available, the index file is updated. The new index file lists the new media files. Older media files can be removed from the index and discarded, presenting a moving window into a continuous stream—this type of session is suitable for continuous broadcasts. Alternatively, the index can simply add new media files to the existing list—this type of session can be easily converted to VOD after the event completes.
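For example, a sliding-window live playlist might look like this at one moment during a broadcast (URLs and sequence numbers are placeholders). Note that there is no #EXT-X-ENDLIST tag, and the media sequence number advances as old segments are dropped from the window:

```
#EXTM3U
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:7680
#EXTINF:10,
http://media.example.com/segment7680.ts
#EXTINF:10,
http://media.example.com/segment7681.ts
#EXTINF:10,
http://media.example.com/segment7682.ts
```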

For VOD sessions, media files are available representing the entire duration of the presentation. The index file is static and contains a complete list of all files created since the beginning of the presentation. This kind of session allows the client full access to the entire program.

It is possible to create a live broadcast of an event that is instantly available for video on demand. To convert a live broadcast to VOD, do not remove the old media files from the server or delete their URLs from the index file; instead, add an #EXT-X-ENDLIST tag to the index when the event ends. This allows clients to join the broadcast late and still see the entire event. It also allows an event to be archived for rebroadcast with no additional time or effort.

VOD can also be used to deliver “canned” media. HTTP Live Streaming offers advantages over progressive download for VOD, such as support for media encryption and dynamic switching between streams of different data rates in response to changing connection speeds. (QuickTime also supports multiple-data-rate movies using progressive download, but QuickTime movies do not support dynamically switching between data rates in mid-movie.)

Content Protection

Media files containing stream segments may be individually encrypted. When encryption is employed, references to the corresponding key files appear in the index file so that the client can retrieve the keys for decryption.

When a key file is listed in the index file, the key file contains a cipher key that must be used to decrypt subsequent media files listed in the index file. Currently HTTP Live Streaming supports AES-128 encryption using 16-octet keys. The format of the key file is a packed array of these 16 octets in binary format.

The media stream segmenter available from Apple provides encryption and supports three modes for configuring encryption.

The first mode allows you to specify a path to an existing key file on disk. In this mode the segmenter inserts the URL of the existing key file in the index file. It encrypts all media files using this key.

The second mode instructs the segmenter to generate a random key file, save it in a specified location, and reference it in the index file. All media files are encrypted using this randomly generated key.

The third mode instructs the segmenter to generate a new random key file every n media segments, save it in a specified location, and reference it in the index file. This mode is referred to as key rotation. Each group of n files is encrypted using a different key.

Note: All media files may be encrypted using the same key, or new keys may be required at intervals. The theoretical limit is one key per media file, but because each media key adds a file request and transfer to the overhead for presenting the subsequent media segments, changing to a new key periodically is less likely to impact system performance than changing keys for each segment.

You can serve key files using either HTTP or HTTPS. You may also choose to protect the delivery of the key files using your own session-based authentication scheme. For details, see “Serving Key Files Securely Over HTTPS.”

Key files require an initialization vector (IV) to decode encrypted media. The IVs can be changed periodically, just as the keys can.
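In the index file, a key reference takes the form of an #EXT-X-KEY tag, which applies to the media files listed after it until the next #EXT-X-KEY tag. A playlist excerpt might look like this (the key URI and IV value are placeholders for illustration):

```
#EXT-X-KEY:METHOD=AES-128,URI="https://secure.example.com/key42.bin",IV=0x9c7108dc85440da67c75d01b4137ca61
#EXTINF:10,
http://media.example.com/segment42.ts
```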

Caching and Delivery Protocols

HTTPS is commonly used to deliver key files. It may also be used to deliver the media segment files and index files, but this is not recommended when scalability is important, since HTTPS requests often bypass web server caches, causing all content requests to be routed through your server and defeating the purpose of edge network distribution systems.

It is important, however, to make sure that any content delivery network you use understands that the .M3U8 index files must not be cached for longer than one media segment duration during live broadcasts, when the index file is changing dynamically.

Stream Alternates

A master index file may reference alternate streams of content. References can be used to support delivery of multiple streams of the same content with varying quality levels for different bandwidths or devices. HTTP Live Streaming supports switching between streams dynamically if the available bandwidth changes. The client software uses heuristics to determine appropriate times to switch between the alternates. Currently, these heuristics are based on recent trends in measured network throughput.

The master index file points to alternate streams of media by including a specially tagged list of other index files, as illustrated in Figure 2-1.

Figure 2-1  Stream alternates

Both the master index file and the alternate index files are in .M3U8 playlist format. The master index file is downloaded only once, but for live broadcasts the alternate index files are reloaded periodically. The first alternate listed in the master index file is the first stream used—after that, the client chooses among the alternates by available bandwidth.
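A minimal master index file for three bandwidth alternates might look like this (the URLs and bandwidth values are illustrative, not prescribed):

```
#EXTM3U
#EXT-X-STREAM-INF:PROGRAM-ID=1, BANDWIDTH=150000
http://www.example.com/low/prog_index.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1, BANDWIDTH=350000
http://www.example.com/mid/prog_index.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1, BANDWIDTH=550000
http://www.example.com/hi/prog_index.m3u8
```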

Note that the client may choose to change to an alternate stream at any time, such as when a mobile device enters or leaves a WiFi hotspot. All alternates should use identical audio to allow smooth transitions among streams.

You can create a set of stream alternates by using the variantplaylistcreator tool and specifying the -generate-variant-playlist option for either the mediafilesegmenter tool or the mediastreamsegmenter tool (see “Download the Tools” for details).

When using stream alternates,it is important to bear the following considerations in mind:

·        The first entry in the variant playlist is played when a user joins the stream and is used as part of a test to determine which stream is most appropriate. The order of the other entries is irrelevant.

·        Where possible, encode enough variants to provide the best quality stream across a wide range of connection speeds. For example, encode variants at 150 kbps, 350 kbps, 550 kbps, 900 kbps, and 1500 kbps.

·        When possible, use relative path names in variant playlists and in the individual .M3U8 playlist files.

·        The audio track in all streams should be exactly the same, to prevent audio artifacts when the stream changes.

Note: One way to synchronize different VOD streams is to copy the audio track from one media file and paste it into each of the other media files, after compression but before segmenting.

·        The video aspect ratio on alternate streams must be exactly the same, but alternates can have different pixel dimensions, as long as they have the same aspect ratio. For example, two stream alternates with the same 4:3 aspect ratio could have dimensions of 400 x 300 and 800 x 600.

·        If you are an iOS app developer, you can query the user’s device to determine whether the initial connection is cellular or WiFi and choose an appropriate master index file.

To ensure the user has a good experience when the stream is first played, regardless of the initial network connection, you should have more than one master index file consisting of the same alternate index files but with a different first stream.

A 150k stream as the first entry in the cellular variant playlist is recommended.

A 240k or 440k stream as the first entry in the WiFi variant playlist is recommended.

Note: For details on how to query an iOS-based device for its network connection type, see the following sample code: Reachability.

·        When you specify the bitrates for stream variants, it is important that the BANDWIDTH attribute closely match the actual bandwidth required by a given stream. If the actual bandwidth requirement is substantially different from the BANDWIDTH attribute, automatic switching of streams may not operate smoothly or even correctly.

Video Over Cellular Networks

When you send video to a mobile device such as iPhone or iPad, the client’s Internet connection may move to or from a cellular network at any time.

HTTP Live Streaming allows the client to choose among stream alternates dynamically as the network bandwidth changes, providing the best stream as the device moves between cellular and WiFi connections, for example, or between 3G and EDGE connections. This is a significant advantage over progressive download.

It is strongly recommended that you use HTTP Live Streaming to deliver video to all cellular-capable devices, even for video on demand, so that your viewers have the best experience possible under changing conditions.

In addition, you should provide cellular-capable clients an alternate stream at 64 Kbps or less for slower data connections. If you cannot provide video of acceptable quality at 64 Kbps or lower, you should provide an audio-only stream, or audio with a still image.

A good choice for pixel dimensions when targeting cellular network connections is 400 x 224 for 16:9 content and 400 x 300 for 4:3 content (see “Preparing Media for Delivery to iOS-Based Devices”).

Requirements for Apps

Warning: iOS apps submitted for distribution in the App Store must conform to these requirements.

If your app delivers video over cellular networks, and the video exceeds either 10 minutes duration or 5 MB of data in a five-minute period, you are required to use HTTP Live Streaming. (Progressive download may be used for smaller clips.)

If your app uses HTTP Live Streaming over cellular networks, you are required to provide at least one stream at 64 Kbps or lower bandwidth (the low-bandwidth stream may be audio-only or audio with a still image).

These requirements apply to iOS apps submitted for distribution in the App Store for use on Apple products. Non-compliant apps may be rejected or removed, at the discretion of Apple.

Failover Protection

If your playlist contains alternate streams, they can operate not only as bandwidth or device alternates, but also as failure fallbacks. Starting with iOS 3.1, if the client is unable to reload the index file for a stream (due to a 404 error, for example), the client attempts to switch to an alternate stream.

In the event of an index load failure on one stream, the client chooses the highest bandwidth alternate stream that the network connection supports. If there are multiple alternates at the same bandwidth, the client chooses among them in the order listed in the playlist.

You can use this feature to provide redundant streams that will allow media to reach clients even in the event of severe local failures, such as a server crashing or a content distributor node going down.

To implement failover protection, create a stream—or multiple alternate bandwidth streams—and generate a playlist file as you normally would. Then create a parallel stream, or set of streams, on a separate server or content distribution service. Add the list of backup streams to the playlist file, so that the backup stream at each bandwidth is listed after the primary stream. For example, if the primary stream comes from server ALPHA, and the backup stream is on server BETA, your playlist file might look something like this:

#EXTM3U
#EXT-X-STREAM-INF:PROGRAM-ID=1, BANDWIDTH=200000
http://ALPHA.mycompany.com/lo/prog_index.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1, BANDWIDTH=200000
http://BETA.mycompany.com/lo/prog_index.m3u8

#EXT-X-STREAM-INF:PROGRAM-ID=1, BANDWIDTH=500000
http://ALPHA.mycompany.com/md/prog_index.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1, BANDWIDTH=500000
http://BETA.mycompany.com/md/prog_index.m3u8

Note that the backup streams are intermixed with the primary streams in the playlist, with the backup at each bandwidth listed after the primary for that bandwidth.

You are not limited to a single backup stream set. In the example above, ALPHA and BETA could be followed by GAMMA, for instance. Similarly, you need not provide a complete parallel set of streams. You could provide a single low-bandwidth stream on a backup server, for example.

Adding Timed Metadata

You can add various kinds of metadata to media stream segments. For example, you can add the album art, artist’s name, and song title to an audio stream. As another example, you could add the current batter’s name and statistics to video of a baseball game.

If an audio-only stream includes an image as metadata, the Apple client software automatically displays it. Currently, the only metadata that is automatically displayed by the Apple-supplied client software is a still image accompanying an audio-only stream.

If you are writing your own client software, however, using either MPMoviePlayerController or AVPlayerItem, you can access streamed metadata using the timedMetadata property.

You can add timed metadata by specifying a metadata file in the -F command line option to either the stream segmenter or the file segmenter. The specified metadata source can be a file in ID3 format or an image file (JPEG or PNG). Metadata specified this way is automatically inserted into every media segment.

This is called timed metadata because it is inserted into a media stream at a given time offset. Timed metadata can optionally be inserted into all segments after a given time.

To add timed metadata to a live stream, use the id3taggenerator tool, with its output set to the stream segmenter. The tool generates ID3 metadata and passes it to the stream segmenter for inclusion in the outbound stream.

The tag generator can be run from a shell script, for example, to insert metadata at the desired time, or at desired intervals. New timed metadata automatically replaces any existing metadata.

Once metadata has been inserted into a media segment, it is persistent. If a live broadcast is re-purposed as video on demand, for example, it retains any metadata inserted during the original broadcast.

Adding timed metadata to a stream created using the file segmenter is slightly more complicated.

1.    First, generate the metadata samples. You can generate ID3 metadata usingthe id3taggenerator command-line tool, with the output set to file.

2.    Next, create a metadata macro file—a text file in which each line contains the time toinsert the metadata, the type of metadata, and the path and filename of ametadata file.

For example, the following metadata macro file wouldinsert a picture at 1.2 seconds into the stream, then an ID3 tag at 10 seconds:

1.2 picture /meta/images/picture.jpg

10 id3 /meta/id3/title.id3

3.    Finally, specify the metadata macro file by name when you invoke the mediafile segmenter, using the -M command line option.

For additional details, see the man pages for mediastreamsegmenter, mediafilesegmenter, and id3taggenerator.

Adding Closed Captions

HTTP Live Streaming supports adding closed captions to streams.

Note: Closed captions should not be confused with subtitles.

If you are using the stream segmenter, you need to add CEA-608 closed captions to the MPEG-2 transport stream (in the main video elementary stream) as specified in ATSC A/72.

If you are using the file segmenter, you should encapsulate your media in a QuickTime movie file and add a closed caption track ('clcp').

One tool you can use to add closed captions to movie files or MPEG-2 transport streams is Compressor, included with Final Cut Studio. Compressor works in conjunction with Scenarist software to generate closed captions. See the Compressor documentation for details.

If you are writing an app, the AVFoundation framework supports playback of closed captions.

Preparing Media for Delivery to iOS-Based Devices

The recommended encoder settings for streams used with iOS-based devices are shown in the following four tables. For live streams, these settings should be available from your hardware or software encoder. If you are re-encoding from a master file for video on demand, you can use a video editing tool such as Compressor.

File format for the file segmenter can be a QuickTime movie, MPEG-4 video, or MP3 audio, using the specified encoding.

Stream format for the stream segmenter must be MPEG elementary audio and video streams, wrapped in an MPEG-2 transport stream, and using the following encoding.

·        Encode video using H.264 compression.

·        iPhone 3G and later, and iPod touch, 2nd generation and later, support H.264 Baseline 3.1. If your app runs on older versions of iPhone or iPod touch, however, you should use H.264 Baseline 3.0 for compatibility.

·        For iPad, Apple TV 2 and later, and iPhone 4 and later, you can use Baseline profile 3.0, Baseline profile 3.1, or Main profile 3.1.

Note: Baseline profiles are not the same as Main profiles. iPhone and iPod touch use Baseline profiles. iPad and Apple TV can use Baseline or Main profiles. Use Main profiles if your content is for large-screen devices only; use Baseline profiles if your content is meant to be viewed on small screens only, or on both large and small screens.

·        A frame rate of 10 fps is recommended for video streams under 200 kbps. For video streams under 300 kbps, a frame rate of 12 to 15 fps is recommended. For all other streams, a frame rate of 29.97 fps is recommended.

·        Encode audio as either of the following:

·        HE-AAC or AAC-LC, stereo

·        MP3 (MPEG-1 Audio Layer 3), stereo

·        An audio sample rate of 22.05 kHz and an audio bit rate of 40 kbps are recommended in all cases.

Table 2-1  Encoder settings for iPhone, iPod touch, iPad, and Apple TV, 16:9 aspect ratio

Connection   Dimensions   Total bit rate   Video bit rate   Keyframes
Cellular     480 x 320    64 kbps          audio only       none
Cellular     480 x 224    150 kbps         110 kbps         30
Cellular     480 x 224    240 kbps         200 kbps         45
Cellular     480 x 224    440 kbps         400 kbps         90
WiFi         640 x 360    640 kbps         600 kbps         90

Table 2-2  Encoder settings for iPhone, iPod touch, iPad, and Apple TV, 4:3 aspect ratio

Connection   Dimensions   Total bit rate   Video bit rate   Keyframes
Cellular     480 x 320    64 kbps          audio only       none
Cellular     480 x 300    150 kbps         110 kbps         30
Cellular     480 x 300    240 kbps         200 kbps         45
Cellular     480 x 300    440 kbps         400 kbps         90
WiFi         640 x 480    640 kbps         600 kbps         90

 

Table 2-3  Additional encoder settings for iPad and Apple TV only, 16:9 aspect ratio

Connection   Dimensions   Total bit rate   Video bit rate   Keyframes
WiFi         640 x 360    1240 kbps        1200 kbps        90
WiFi         960 x 540    1840 kbps        1800 kbps        90
WiFi         1280 x 720   2540 kbps        2500 kbps        90
WiFi         1280 x 720   4540 kbps        4500 kbps        90

Table 2-4  Additional encoder settings for iPad and Apple TV only, 4:3 aspect ratio

Connection   Dimensions   Total bit rate   Video bit rate   Keyframes
WiFi         640 x 480    1240 kbps        1200 kbps        90
WiFi         960 x 720    1840 kbps        1800 kbps        90
WiFi         960 x 720    2540 kbps        2500 kbps        90
WiFi         1280 x 960   4540 kbps        4500 kbps        90

Sample Streams

A series of HTTP streams is available for testing on Apple's developer site. These examples show proper formatting of HTML to embed streams, .M3U8 files to index the streams, and .ts media segment files. The streams can be accessed at the following URLs:

·        http://devimages.apple.com/iphone/samples/bipbopgear1.html

·        http://devimages.apple.com/iphone/samples/bipbopgear2.html

·        http://devimages.apple.com/iphone/samples/bipbopgear3.html

·        http://devimages.apple.com/iphone/samples/bipbopgear4.html

·        http://devimages.apple.com/iphone/samples/bipbopall.html

The samples show the same NTSC test pattern at four different resolutions and data rates. The last sample streams at multiple data rates. The stream starts with sample 1 and switches to the fastest sample the connection supports.

You must install iOS version 3.0 or later to play these samples on your iPhone or iPod touch. Safari 4.0 or later is required for playback on the desktop.


Deploying HTTP Live Streaming

To actually deploy HTTP Live Streaming, you need to create either an HTML page for browsers or a client app to act as a receiver. You also need the use of a web server and a way to either encode live streams as MPEG-2 transport streams or to create MP3 or MPEG-4 media files with H.264 and AAC encoding from your source material.

You can use the Apple-provided tools to segment the streams or media files, and to produce the index files and variant playlists (see "Download the Tools").

You should use the Apple-provided media stream validator prior to serving your streams, to ensure that they are fully compliant with HTTP Live Streaming.

You may want to encrypt your streams, in which case you probably also want to serve the encryption key files securely over HTTPS, so that only your intended clients can decrypt them.

Creating an HTML Page

The easiest way to distribute HTTP Live Streaming media is to create a webpage that includes the HTML5 <video> tag, using an .M3U8 playlist file as the video source. An example is shown in Listing 3-1.

Listing 3-1  Serving HTTP Live Streaming in a webpage

<html>

<head>

    <title>HTTP Live Streaming Example</title>

</head>

<body>

    <video

        src="http://devimages.apple.com/iphone/samples/bipbop/bipbopall.m3u8"

        height="300" width="400"

    >

    </video>

</body>

</html>

For browsers that don't support the HTML5 video element, or browsers that don't support HTTP Live Streaming, you can include fallback code between the <video> and </video> tags. For example, you could fall back to a progressive download movie or an RTSP stream using the QuickTime plug-in. See Safari HTML5 Audio and Video Guide for examples.
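A minimal sketch of such a page follows, with the stream URL taken from Listing 3-1. The fallback movie name (fallback_movie.mov) is a hypothetical placeholder; per the note in the introduction, <embed> appears here only to specify fallback content.

```shell
# Sketch: generate a page whose <video> element carries fallback content
# for browsers without HTTP Live Streaming support.
cat > hls_fallback.html <<'EOF'
<video
    src="http://devimages.apple.com/iphone/samples/bipbop/bipbopall.m3u8"
    height="300" width="400"
>
    <!-- Fallback for non-supporting browsers: progressive-download movie -->
    <embed src="fallback_movie.mov" height="300" width="400" />
</video>
EOF
```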

Configuring a Web Server

HTTP Live Streaming can be served from an ordinary web server; no special configuration is necessary, apart from associating the MIME types of the files being served with their file extensions.

Configure the following MIME types for HTTP Live Streaming:

File Extension   MIME Type
.M3U8            application/x-mpegURL or vnd.apple.mpegURL
.ts              video/MP2T

If your web server is constrained with respect to MIME types, you can serve files ending in .m3u with MIME type audio/mpegURL for compatibility.

Index files can be long and may be frequently redownloaded, but they are text files and can be compressed very efficiently. You can reduce server overhead by enabling on-the-fly .gzip compression of .M3U8 index files; the HTTP Live Streaming client automatically unzips compressed index files.

Shortening time-to-live (TTL) values for .M3U8 files may also be needed to achieve proper caching behavior for downstream web caches, as these files are frequently overwritten during live broadcasts, and the latest version should be downloaded for each request. Check with your content delivery service provider for specific recommendations. For VOD, the index file is static and downloaded only once, so caching is not a factor.
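As a concrete starting point, the MIME-type, compression, and TTL recommendations above can be sketched as an Apache configuration fragment. This assumes mod_mime, mod_deflate, and mod_headers are enabled; the max-age value and file name are illustrative.

```shell
# Sketch: write an Apache config fragment implementing the MIME, gzip,
# and cache-TTL recommendations above (max-age value is illustrative).
cat > hls_httpd.conf <<'EOF'
AddType application/x-mpegURL .m3u8
AddType video/MP2T .ts
AddOutputFilterByType DEFLATE application/x-mpegURL
<FilesMatch "\.m3u8$">
    Header set Cache-Control "max-age=10"
</FilesMatch>
EOF
```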

Validating Your Streams

The mediastreamvalidator tool is a command-line utility for validating HTTP Live Streaming streams and servers (see "Download the Tools" for details on obtaining the tool).

The media stream validator simulates an HTTP Live Streaming session and verifies that the index file and media segments conform to the HTTP Live Streaming specification. It performs several checks to ensure reliable streaming. If any errors or problems are found, a detailed diagnostic report is displayed.

You should always run the validator prior to serving a new stream or alternate stream set.

The media stream validator shows a listing of the streams you provide, followed by the timing results for each of those streams. (It may take a few minutes to calculate the actual timing.) An example of validator output follows.

Validating http://devimages.apple.com/iphone/samples/bipbop/gear3/prog_index.m3u8 against iPhone OS 3.1.0

 

Average segment duration: 8.77 seconds

Average segment bitrate: 510.05 kbit/s

Average segment structural overhead: 96.37 kbit/s (18.89 %)

 

Video codec: avc1

Video resolution: 480x360 pixels

Video frame rate: 29.97 fps

Average video bitrate: 407.76 kbit/s

H.264 profile: Baseline

H.264 level: 2.1

 

Audio codec: aac

Audio sample rate: 22050 Hz

Average audio bitrate: 5.93 kbit/s

Serving Key Files Securely Over HTTPS

You can protect your media by encrypting it. The file segmenter and stream segmenter both have encryption options, and you can tell them to change the encryption key periodically. Who you share the keys with is up to you.

Key files require an initialization vector (IV) to decode encrypted media. The IVs can be changed periodically, just as the keys can. The current recommendation for encrypting media while minimizing overhead is to change the key every 3-4 hours and change the IV after every 50 Mb of data.

Even with restricted access to keys, however, it is possible for an eavesdropper to obtain copies of the key files if they are sent over HTTP. One solution to this problem is to send the keys securely over HTTPS.

Before you attempt to serve key files over HTTPS, you should do a test serving the keys from an internal web server over HTTP. This allows you to debug your setup before adding HTTPS to the mix. Once you have a known working system, you are ready to make the switch to HTTPS.

There are three conditions you must meet in order to use HTTPS to serve keys for HTTP Live Streaming:

·        You need to install an SSL certificate signed by a trusted authority on your HTTPS server.

·        The authentication domain for the key files must be the same as the authentication domain for the first playlist file. The simplest way to accomplish this is to serve the variant playlist file from the HTTPS server—the variant playlist file is downloaded only once, so this shouldn't cause an excessive burden. Other playlist files can be served using HTTP.

·        You must either initiate your own dialog for the user to authenticate, or you must store the credentials on the client device—HTTP Live Streaming does not provide user dialogs for authentication. If you are writing your own client app, you can store credentials, whether cookie-based or HTTP digest based, and supply the credentials in the didReceiveAuthenticationChallenge callback (see "Using NSURLConnection" and "Authentication Challenges" for details). The credentials you supply are cached and reused by the media player.

Important You must obtain an SSL certificate signed by a trusted authority in order to use an HTTPS server with HTTP Live Streaming.

If your HTTPS server does not have an SSL certificate signed by a trusted authority, you can still test your setup by creating a self-signed SSL Certificate Authority and a leaf certificate for your server. Attach the certificate for the certificate authority to an email, send it to a device you want to use as a Live Streaming client, and tap on the attachment in Mail to make the device trust the server.
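As a sketch, a self-signed certificate authority and a server leaf certificate can be generated with openssl; the subject names, file names, and validity period below are illustrative, not prescribed by this document.

```shell
# Sketch: create a self-signed CA, then a leaf certificate for the
# server signed by that CA; names and validity are illustrative.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=Test HLS CA" \
    -keyout ca.key -out ca.pem -days 365
openssl req -newkey rsa:2048 -nodes -subj "/CN=stream.example.com" \
    -keyout server.key -out server.csr
openssl x509 -req -in server.csr -CA ca.pem -CAkey ca.key \
    -CAcreateserial -out server.pem -days 365
```

The resulting ca.pem is the certificate you would mail to the test device; server.key and server.pem configure the HTTPS server.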


Frequently Asked Questions

1.     What kinds of encoders are supported?

The protocol specification does not limit the encoder selection. However, the current Apple implementation should interoperate with encoders that produce MPEG-2 Transport Streams containing H.264 video and AAC audio (HE-AAC or AAC-LC). Encoders that are capable of broadcasting the output stream over UDP should also be compatible with the current implementation of the Apple-provided segmenter software.

2.     What are the specifics of the video and audio formats supported?

Although the protocol specification does not limit the video and audio formats, the current Apple implementation supports the following formats:

·        Video: H.264 Baseline Level 3.0, Baseline Level 3.1, and Main Level 3.1.

·        Audio:

§  HE-AAC or AAC-LC up to 48 kHz, stereo audio

§  MP3 (MPEG-1 Audio Layer 3) 8 kHz to 48 kHz, stereo audio

Note: iPad, iPhone 3G, and iPod touch (2nd generation and later) support H.264 Baseline 3.1. If your app runs on older versions of iPhone or iPod touch, however, you should use H.264 Baseline 3.0 for compatibility. If your content is intended solely for iPad, Apple TV, iPhone 4 and later, and Mac OS X computers, you should use Main Level 3.1.

3.     What duration should media files be?

The main point to consider is that shorter segments result in more frequent refreshes of the index file, which might create unnecessary network overhead for the client. Longer segments will extend the inherent latency of the broadcast and initial startup time. A duration of 10 seconds of media per file seems to strike a reasonable balance for most broadcast content.

4.     How many files should be listed in the index file during a continuous, ongoing session?

The normal recommendation is 3, but the optimum number may be larger.

The important point to consider when choosing the optimum number is that the number of files available during a live session constrains the client's behavior when doing play/pause and seeking operations. The more files in the list, the longer the client can be paused without losing its place in the broadcast, the further back in the broadcast a new client begins when joining the stream, and the wider the time range within which the client can seek. The trade-off is that a longer index file adds to network overhead—during live broadcasts, the clients are all refreshing the index file regularly, so it does add up, even though the index file is typically small.
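The trade-off above can be quantified with simple arithmetic: with the recommended 10-second target duration, the number of live index entries translates directly into the seekable window.

```shell
# Rough arithmetic: N live index entries of 10-second segments give
# roughly N * 10 seconds of seekable window.
segment_duration=10
for entries in 3 6 12; do
    echo "$entries entries -> $((entries * segment_duration)) s window" >> windows.txt
done
cat windows.txt
```

The recommended minimum of 3 entries thus yields roughly a 30-second window; larger windows cost proportionally more index-file traffic.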

5.     What data rates are supported?

The data rate that a content provider chooses for a stream is most influenced by the target client platform and the expected network topology. The streaming protocol itself places no limitations on the data rates that can be used. The current implementation has been tested using audio-video streams with data rates as low as 64 Kbps and as high as 3 Mbps to iPhone. Audio-only streams at 64 Kbps are recommended as alternates for delivery over slow cellular connections.

For recommended data rates, see "Preparing Media for Delivery to iOS-Based Devices."

Note: If the data rate exceeds the available bandwidth, there is more latency before startup and the client may have to pause to buffer more data periodically. If a broadcast uses an index file that provides a moving window into the content, the client will eventually fall behind in such cases, causing one or more segments to be dropped. In the case of VOD, no segments are lost, but inadequate bandwidth does cause slower startup and periodic stalling while data buffers.

6.     What is a .ts file?

A .ts file contains an MPEG-2 Transport Stream. This is a file format that encapsulates a series of encoded media samples—typically audio and video. The file format supports a variety of compression formats, including MP3 audio, AAC audio, H.264 video, and so on. Not all compression formats are currently supported in the Apple HTTP Live Streaming implementation, however. (For a list of currently supported formats, see "Media Encoder.")

MPEG-2 Transport Streams are containers, and should not be confused with MPEG-2 compression.

7.     What is an .M3U8 file?

An .M3U8 file is an extensible playlist file format. It is an m3u playlist containing UTF-8 encoded text. The m3u file format is a de facto standard playlist format suitable for carrying lists of media file URLs. This is the format used as the index file for HTTP Live Streaming. For details, see the IETF Internet-Draft of the HTTP Live Streaming specification.

8.     How does the client software determine when to switch streams?

The current implementation of the client observes the effective bandwidth while playing a stream. If a higher-quality stream is available and the bandwidth appears sufficient to support it, the client switches to a higher quality. If a lower-quality stream is available and the current bandwidth appears insufficient to support the current stream, the client switches to a lower quality.

Note: For seamless transitions between alternate streams, the audio portion of the stream should be identical in all versions.

9.     Where can I find a copy of the media stream segmenter from Apple?

The media stream segmenter, file stream segmenter, and other tools are frequently updated, so you should download the current version of the HTTP Live Streaming Tools from the Apple Developer website. See "Download the Tools" for details.

10.  What settings are recommended for a typical HTTP stream, with alternates, for use with the media segmenter from Apple?

See "Preparing Media for Delivery to iOS-Based Devices."

These settings are the current recommendations. There are also certain requirements. The current mediastreamsegmenter tool works only with MPEG-2 Transport Streams as defined in ISO/IEC 13818. The transport stream must contain H.264 (MPEG-4, part 10) video and AAC or MPEG audio. If AAC audio is used, it must have ADTS headers. H.264 video access units must use Access Unit Delimiter NALs, and must be in unique PES packets.

The segmenter also has a number of user-configurable settings. You can obtain a list of the command line arguments and their meanings by typing man mediastreamsegmenter from the Terminal application. A target duration (length of the media segments) of 10 seconds is recommended, and is the default if no target duration is specified.

11.  How can I specify what codecs or H.264 profile are required to play back my stream?

Use the CODECS attribute of the EXT-X-STREAM-INF tag. When this attribute is present, it must include all codecs and profiles required to play back the stream. The following values are currently recognized:

AAC-LC                              "mp4a.40.2"
HE-AAC                              "mp4a.40.5"
MP3                                 "mp4a.40.34"
H.264 Baseline Profile level 3.0    "avc1.42001e" or "avc1.66.30"
                                    (use "avc1.66.30" for compatibility with iOS versions 3.0 to 3.1.2)
H.264 Baseline Profile level 3.1    "avc1.42001f"
H.264 Main Profile level 3.0        "avc1.4d001e" or "avc1.77.30"
                                    (use "avc1.77.30" for compatibility with iOS versions 3.0 to 3.1.2)
H.264 Main Profile level 3.1        "avc1.4d001f"

The attribute value must be in quotes. If multiple values are specified, one set of quotes is used to contain all values, and the values are separated by commas. An example follows.

#EXTM3U

#EXT-X-STREAM-INF:PROGRAM-ID=1, BANDWIDTH=500000

mid_video_index.M3U8

#EXT-X-STREAM-INF:PROGRAM-ID=1, BANDWIDTH=800000

wifi_video_index.M3U8

#EXT-X-STREAM-INF:PROGRAM-ID=1, BANDWIDTH=3000000, CODECS="avc1.4d001e,mp4a.40.5"

h264main_heaac_index.M3U8

#EXT-X-STREAM-INF:PROGRAM-ID=1, BANDWIDTH=64000, CODECS="mp4a.40.5"

aacaudio_index.M3U8

12.  How can I create an audio-only stream from audio/video input?

Add the -audio-only argument when invoking the stream or file segmenter.

13.  How can I add a still image to an audio-only stream?

Use the --meta-file argument when invoking the stream or file segmenter, with --meta-type=picture, to add an image to every segment. For example, this would add an image named poster.jpg to every segment of an audio stream created from the file track01.mp3:

mediafilesegmenter -f /Dir/outputFile -a --meta-file=poster.jpg --meta-type=picture track01.mp3

Remember that the image is typically resent every ten seconds, so it's best to keep the file size small.

14.  How can I specify an audio-only alternate to an audio-video stream?

Use the CODECS and BANDWIDTH attributes of the EXT-X-STREAM-INF tag together.

The BANDWIDTH attribute specifies the bandwidth required for each alternate stream. If the available bandwidth is enough for the audio alternate, but not enough for the lowest video alternate, the client switches to the audio stream.

If the CODECS attribute is included, it must list all codecs required to play the stream. If only an audio codec is specified, the stream is identified as audio-only. Currently, it is not required to specify that a stream is audio-only, so use of the CODECS attribute is optional.

The following is an example that specifies video streams at 500 Kbps for fast connections, 150 Kbps for slower connections, and an audio-only stream at 64 Kbps for very slow connections. All the streams should use the same 64 Kbps audio to allow transitions between streams without an audible disturbance.

#EXTM3U

#EXT-X-STREAM-INF:PROGRAM-ID=1, BANDWIDTH=500000

mid_video_index.M3U8

#EXT-X-STREAM-INF:PROGRAM-ID=1, BANDWIDTH=150000

3g_video_index.M3U8

#EXT-X-STREAM-INF:PROGRAM-ID=1, BANDWIDTH=64000, CODECS="mp4a.40.5"

aacaudio_index.M3U8

15.  What are the hardware requirements or recommendations for servers?

See question #1 for encoder hardware recommendations.

The Apple stream segmenter is capable of running on any Intel-based Mac. We recommend using a Mac with two Ethernet network interfaces, such as a Mac Pro or an Xserve. One network interface can be used to obtain the encoded stream from the local network, while the second network interface can provide access to a wider network.

16.  Does the Apple implementation of HTTP Live Streaming support DRM?

No. However, media can be encrypted, and key access can be limited using HTTPS authentication.

17.  What client platforms are supported?

iPhone, iPad, and iPod touch (requires iOS version 3.0 or later), Apple TV (version 2 and later), and Mac OS X computers.

18.  Is the protocol specification available?

Yes. The protocol specification is an IETF Internet-Draft, at http://tools.ietf.org/html/draft-pantos-http-live-streaming.

19.  Does the client cache content?

The index file can contain an instruction to the client that content should not be cached. Otherwise, the client may cache data for performance optimization when seeking within the media.

20.  Is this a real-time delivery system?

No. It has inherent latency corresponding to the size and duration of the media files containing stream segments. At least one segment must fully download before it can be viewed by the client, and two may be required to ensure seamless transitions between segments. In addition, the encoder and segmenter must create a file from the input; the duration of this file is the minimum latency before media is available for download. Typical latency with recommended settings is in the neighborhood of 30 seconds.

21.  What is the latency?

Approximately 30 seconds, with recommended settings. See question #20.

22.  Do I need to use a hardware encoder?

No. Using the protocol specification, it is possible to implement a software encoder.

23.  What advantages does this approach have over RTP/RTSP?

HTTP is less likely to be disallowed by routers, NAT, or firewall settings. No ports need to be opened that are commonly closed by default. Content is therefore more likely to get through to the client in more locations and without special settings. HTTP is also supported by more content-distribution networks, which can affect cost in large distribution models. In general, more available hardware and software works unmodified and as intended with HTTP than with RTP/RTSP. Expertise in customizing HTTP content delivery using tools such as PHP is also more widespread.

Also, HTTP Live Streaming is supported in Safari and the media player framework on iOS. RTSP streaming is not supported.

24.  Why is my stream's overall bit rate higher than the sum of the audio and video bit rates?

MPEG-2 transport streams can include substantial overhead. They utilize fixed packet sizes that are padded when the packet contents are smaller than the default packet size. Encoder and multiplexer implementations vary in their efficiency at packing media data into these fixed packet sizes. The amount of padding can vary with frame rate, sample rate, and resolution.

25.  How can I reduce the overhead and bring the bit rate down?

Using a more efficient encoder can reduce the amount of overhead, as can tuning the encoder settings.

Note: Optimization in the stream segmenter is now on by default.

26.  Do all media files have to be part of the same MPEG-2 Transport Stream?

No. You can mix media files from different transport streams, as long as they are separated by EXT-X-DISCONTINUITY tags. See the protocol specification for more detail. For best results, however, all video media files should have the same height and width dimensions in pixels.
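An index file that splices segments from two different transport streams can be sketched as follows; the tags are from the HTTP Live Streaming protocol, while the segment file names are illustrative.

```shell
# Sketch: an index file marking the boundary between segments from two
# different transport streams; segment names are illustrative.
cat > mixed.m3u8 <<'EOF'
#EXTM3U
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:10,
streamA_seg0.ts
#EXTINF:10,
streamA_seg1.ts
#EXT-X-DISCONTINUITY
#EXTINF:10,
streamB_seg0.ts
#EXT-X-ENDLIST
EOF
```

The EXT-X-DISCONTINUITY tag tells the client to reset its decoders before the segment that follows it.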

27.  Where can I get help or advice on setting up an HTTP audio/video server?

You can visit the Apple Developer Forum at http://devforums.apple.com/.

Also, check out Best Practices for Creating and Deploying HTTP Live Streaming Media for the iPhone and iPad.

Document Revision History

This table describes the changes to HTTP Live Streaming Overview.

Date         Notes
2011-04-01   Added description of closed caption support, best practices for preparing media for iOS-based devices, and description of how to serve key files over HTTPS.
2010-11-15   Added description of timed metadata.
2010-03-25   Updated to include iPad and -optimize option for stream segmenter.
2010-02-05   Defines requirement for apps to use HTTP Live Streaming for video over cellular networks and requirement for 64 Kbps streams.
2010-01-20   Updated links to tools and sample streams. Added documentation of CODECS attribute and video streams with audio-only alternates.
2009-11-17   Fixed typos and corrected URLs for samples.
2009-09-09   Updated to include failover support.
2009-06-04   Added sample code, QuickTime X support. Reorganized document for readability.
2009-05-22   Changed title from iPhone Streaming Media Guide for Web Developers. Added URL for downloading media segmenter.
2009-03-15   New document that describes live streaming of audio and video over HTTP for iPhone.

 


 
