Media Source Extensions
W3C Editor's Draft 8 October 2012
- Latest published version:
- Not yet published
- Latest editor's draft:
- https://dvcs.w3.org/hg/html-media/raw-file/tip/media-source/media-source.html
- Editors:
- Aaron Colwell, Google, Inc.
- Adrian Bateman, Microsoft Corporation
- Mark Watson, Netflix, Inc.
- Bug/Issue lists:
- Bugzilla, Tracker
- Discussion list:
- public-html-media@w3.org
- Test Suite:
- None yet
Copyright © 2012 W3C® (MIT, ERCIM, Keio), All Rights Reserved. W3C liability, trademark and document use rules apply.
Status of this Document
This section describes the status of this document at the time of its publication. Other documents may supersede this document. A list of current W3C publications and the latest revision of this technical report can be found in the W3C technical reports index at https://www.w3.org/TR/.
This document was published by the HTML working group as an Editor's Draft. Please submit comments regarding this document by using the W3C's public bug database with the product set to HTML WG and the component set to Media Source Extensions. If you cannot access the bug database, submit comments to public-html-media@w3.org (subscribe, archives) and arrangements will be made to transpose the comments to the bug database. All feedback is welcome.
Publication as an Editor's Draft does not imply endorsement by the W3C Membership. This is a draft document and may be updated, replaced or obsoleted by other documents at any time. It is inappropriate to cite this document as other than work in progress.
This document was produced by a group operating under the 5 February 2004 W3C Patent Policy. W3C maintains a public list of any patent disclosures made in connection with the deliverables of the group; that page also includes instructions for disclosing a patent. An individual who has actual knowledge of a patent which the individual believes contains Essential Claim(s) must disclose the information in accordance with section 6 of the W3C Patent Policy.
Abstract
This proposal extends HTMLMediaElement to allow JavaScript to generate media streams for playback. Allowing JavaScript to generate streams facilitates a variety of use cases like adaptive streaming and time shifting live streams.
Table of Contents
- 1. Introduction
- 2. Source Buffer Model
- 2.1. Creating Source Buffers
- 2.2. Removing Source Buffers
- 2.3. Basic appending model
- 2.4. Initialization Segment constraints
- 2.5. Media Segment constraints
- 2.6. Appending the first Initialization Segment
- 2.7. Appending a Media Segment to an unbuffered region
- 2.8. Appending a Media Segment over a buffered region
- 2.9. Source Buffer to Track Buffer transfer
- 2.10. Media Segment Eviction
- 2.11. Applying Timestamp Offsets
- 3. MediaSource Object
- 4. SourceBuffer Object
- 5. SourceBufferList Object
- 6. URL Object
- 7. HTMLMediaElement attributes
- 8. Byte Stream Formats
- 9. Examples
- 10. Revision History
1. Introduction
This proposal allows JavaScript to dynamically construct media streams for <audio> and <video>. It defines objects that allow JavaScript to pass media segments to an HTMLMediaElement. A buffering model is also included to describe how the user agent should act when different media segments are appended at different times. Byte stream specifications for WebM & the ISO Base Media File Format are given to specify the expected format of media segments used with these extensions.

1.1. Goals
This proposal was designed with the following goals in mind:
- Allow JavaScript to construct media streams independent of how the media is fetched.
- Define a splicing and buffering model that facilitates use cases like adaptive streaming, ad-insertion, time-shifting, and video editing.
- Minimize the need for media parsing in JavaScript.
- Leverage the browser cache as much as possible.
- Provide byte stream definitions for WebM & the ISO Base Media File Format.
- Not require support for any particular media format or codec.
1.2. Definitions
1.2.1. Initialization Segment
A sequence of bytes that contains all of the initialization information required to decode a sequence of media segments. This includes codec initialization data, Track ID mappings for multiplexed segments, and timestamp offsets (e.g. edit lists).
Container-specific examples of initialization segments:
- ISO Base Media File Format
- A moov box.
- WebM
- The concatenation of the EBML Header, Segment Header, Info element, and Tracks element.
1.2.2. Media Segment
A sequence of bytes that contains packetized & timestamped media data for a portion of the presentation timeline. Media segments are always associated with the most recently appended initialization segment.
Container-specific examples of media segments:
- ISO Base Media File Format
- A moof box followed by one or more mdat boxes.
- WebM
- A Cluster element.
1.2.3. Source Buffer
A hypothetical buffer that contains a distinct sequence of initialization segments & media segments. When media segments are passed to append() they update the state of this buffer. The source buffer only allows a single media segment to cover a specific point in the presentation timeline of each track. If a media segment gets appended that contains media data overlapping (in presentation time) with media data from an existing segment, then the new media data will override the old media data. Since media segments depend on initialization segments, the source buffer is also responsible for maintaining these associations. During playback, the media element pulls segment data out of the source buffers, demultiplexes it if necessary, and enqueues it into track buffers so it will get decoded and displayed. buffered describes the time ranges that are covered by media segments in the source buffer.
1.2.4. Active Source Buffers
The set of source buffers that are providing the selected video track, the enabled audio tracks, and the "showing" or "hidden" text tracks. This is a subset of all the source buffers associated with a specific MediaSource object. See Changes to selected/enabled track state for details.
1.2.5. Track Buffer
A hypothetical buffer that represents initialization and media data for a single AudioTrack, VideoTrack, or TextTrack that has been queued for playback. This buffer may not exist in actual implementations, but it is intended to represent media data that will be decoded no matter what media segments are appended to update the source buffer. This distinction is important when considering appends that happen close to the current playback position. See Source Buffer to Track Buffer transfer for details.
1.2.6. Random Access Point
A position in a media segment where decoding and continuous playback can begin without relying on any previous data in the segment. For video this tends to be the location of I-frames. In the case of audio, most audio frames can be treated as a random access point. Since video tracks tend to have a sparser distribution of random access points, the location of these points is usually considered the random access points for multiplexed streams.
1.2.7. Presentation Start Time
The presentation start time is the earliest time point in the presentation and specifies the initial playback position and earliest possible position. All presentations created using this specification have a presentation start time of 0. Appending media segments with negative timestamps will cause playback to terminate with a MediaError.MEDIA_ERR_DECODE error unless timestampOffset is used to make the timestamps greater than or equal to 0.
1.2.8. MediaSource object URL
A MediaSource object URL is a unique Blob URI created by createObjectURL(). It is used to attach a MediaSource object to an HTMLMediaElement. These URLs are the same as what the File API specification calls a Blob URI, except that anything in the definition of that feature that refers to File and Blob objects is hereby extended to also apply to MediaSource objects.
1.2.9. Track ID
A Track ID is a byte stream format specific identifier that marks sections of the byte stream as being part of a specific track. The Track ID in a track description identifies which sections of a media segment belong to that track.
1.2.10. Track Description
A byte stream format specific structure that provides the Track ID, codec configuration, and other metadata for a single track. Each track description inside a single initialization segment must have a unique Track ID.
2. Source Buffer Model
The subsections below outline the buffering model for this proposal. They describe how to add and remove source buffers from the presentation, along with the various rules and behaviors associated with appending data to an individual source buffer. At the highest level, the web application simply creates source buffers and appends a sequence of initialization segments and media segments to update each buffer's state. The media element pulls media data out of the source buffers, plays it, and fires events just like it would if a normal URL were passed to the src attribute. The web application is expected to monitor media element events to determine when it needs to append more media segments.
2.1. Creating Source Buffers
SourceBuffer objects can be created once a MediaSource object enters the "open" state. The application calls addSourceBuffer() with a type string that indicates the format of the data it intends to append to the new SourceBuffer. If the user agent supports the format and has sufficient resources, a new SourceBuffer object is created, added to sourceBuffers, and returned by the method. If the user agent doesn't support the specified format or can't support another SourceBuffer, then it will throw an appropriate exception to signal why the request couldn't be satisfied.
2.2. Removing Source Buffers
Removing a SourceBuffer with removeSourceBuffer() releases all resources associated with the object. This includes destroying all the segment data, track buffers, and decoders. The media element will also remove the appropriate tracks from audioTracks, videoTracks, & textTracks and fire the necessary change events. Playback may become degraded or stop if the currently selected VideoTrack or the only enabled AudioTracks are removed.
2.3. Basic appending model
Updating the state of a source buffer requires appending at least one initialization segment and one or more media segments via append(). The following list outlines some of the basic rules for appending segments.
- The first segment appended must be an initialization segment.
- All media segments are associated with the most recently appended initialization segment.
- A whole segment must be appended before another segment can be started, unless abort() is called.
- Segments can be appended in pieces. (i.e. A 4096 byte segment can be spread across four 1024 byte calls to append().)
- If a media segment requires different configuration information (e.g. codec parameters, new Track IDs, metadata) from what is in the most recently appended initialization segment, a new initialization segment with the new configuration information must be appended before the media segment requiring this information is appended.
- A new media segment can overlap, in presentation time, a segment that was previously appended. The new segment will override the previous data.
- Media segments can be appended in any order.
  Note: In practice finite buffer space and maintaining uninterrupted playback will bias appending towards time increasing order near the current playback position. Out of order appends facilitate adaptive streaming, ad insertion, and video editing use cases.
- The media element may start copying data from a media segment to the track buffers before the entire segment has been appended. This prevents unnecessary delays for media segments that cover a large time range.
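The basic appending rules above can be sketched as a small model. This is illustrative only: real SourceBuffer objects parse byte streams, whereas this hypothetical SourceBufferModel class uses plain objects standing in for segments.

```javascript
// Illustrative model of the section 2.3 appending rules (not spec API).
// Segments are plain objects: { type: "init" } or { type: "media", start }.
class SourceBufferModel {
  constructor() {
    this.initSegment = null;   // most recently appended initialization segment
    this.mediaSegments = [];   // media segments associated with it
  }
  append(segment) {
    if (segment.type === "init") {
      this.initSegment = segment;   // later media segments bind to this segment
      return;
    }
    // Rule: the first segment appended must be an initialization segment.
    if (this.initSegment === null) {
      throw new Error("INVALID_STATE_ERR: append an initialization segment first");
    }
    // Media segments may arrive in any presentation-time order.
    this.mediaSegments.push({ segment, init: this.initSegment });
  }
}
```

Note how each media segment records the initialization segment that was current when it was appended, which is how configuration changes (new codec parameters, new Track IDs) take effect only for segments appended after the new initialization segment.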
2.4. Initialization Segment constraints
To simplify the implementation and facilitate interoperability, a few constraints are placed on the initialization segments that are appended to a specific SourceBuffer:
- The number and type of tracks must be consistent across all initialization segments.
  For example, if the first initialization segment has 2 audio tracks and 1 video track, then all initialization segments that follow, for this SourceBuffer, must describe 2 audio tracks and 1 video track.
- Track IDs do not need to be the same across initialization segments if the segment describes only one track of each type.
  For example, if an initialization segment describes a single audio track and a single video track, the internal Track IDs do not need to be the same.
- Track IDs must be the same across initialization segments if multiple tracks for a single type are described. (e.g. 2 audio tracks).
- Codec changes are not allowed.
  For example, you can't have an initialization segment that specifies a single AAC track and then follow it with one that contains AMR-WB. Support for multiple codecs is handled with multiple SourceBuffer objects.
- Video frame size changes are allowed and must be supported seamlessly.
  Note: This will cause the <video> display region to change size if you don't use CSS or HTML attributes (width/height) to constrain the element size.
- Audio channel count changes are allowed, but they may not be seamless and could trigger downmixing.
  Note: This is a quality of implementation issue because changing the channel count may require reinitializing the audio device, resamplers, and channel mixers, which tends to be audible.
2.5. Media Segment constraints
To simplify the implementation and facilitate interoperability, a few constraints are placed on the media segments that are appended to a specific SourceBuffer:
- All timestamps must be mapped to the same presentation timeline.
- Segments must start with a random access point to facilitate seamless splicing at the segment boundary.
- Gaps between media segments that are smaller than the audio frame size are allowed and must not cause playback to stall. Such gaps must not be reflected by buffered.
  Note: This is intended to simplify switching between audio streams where the frame boundaries don't always line up across encodings (e.g. Vorbis).
2.6. Appending the first Initialization Segment
Once a new SourceBuffer has been created, it expects an initialization segment to be appended first. This first segment indicates the number and type of streams contained in the media segments that follow. This allows the media element to configure the necessary decoders and output devices. This first segment can also cause a HTMLMediaElement.readyState transition to HAVE_METADATA if this is the first SourceBuffer, or if it contains the first track of a specific type (i.e. first audio, first video, or first text track). If neither of the conditions hold, then the tracks for this new SourceBuffer will just appear as disabled tracks and won't affect the current HTMLMediaElement.readyState until they are selected. The media element will also add the appropriate tracks to the audioTracks, videoTracks, & textTracks collections and fire the necessary change events. The description for append() contains all the details.
2.7. Appending a Media Segment to an unbuffered region
If a media segment is appended to a time range that is not covered by existing segments in the source buffer, then its data is copied directly into the source buffer. Addition of this data may trigger HTMLMediaElement.readyState transitions depending on what other data is buffered and whether the media element has determined if it can start playback. Calls to buffered will always reflect the current TimeRanges buffered in the SourceBuffer.
2.8. Appending a Media Segment over a buffered region
There are several ways that media segments can overlap segments in the source buffer. Behavior for the different overlap situations is described below. If more than one overlap applies, then the start overlap gets resolved first, followed by any complete overlaps, and finally the end overlap. If a segment contains multiple tracks then the overlap is resolved independently for each track.
2.8.1 Complete Overlap

The figure above shows how the source buffer gets updated when a new media segment completely overlaps a segment in the buffer. In this case, the new segment completely replaces the old segment.
2.8.2 Start Overlap

The figure above shows how the source buffer gets updated when the beginning of a new media segment overlaps a segment in the buffer. In this case the new segment replaces all the old media data in the overlapping region. Since media segments are constrained to starting with random access points, this provides a seamless transition between segments.
When an audio frame in the source buffer overlaps with the start of the new media segment, special behavior is required. At a minimum, implementations must support dropping the old audio frame that overlaps the start of the new segment and inserting silence for the small gap that is created. Higher quality implementations may support crossfading or crosslapping between the overlapping audio frames. No matter which strategy is implemented, no gaps are created in the ranges reported by buffered, and playback must never stall at the overlap.
2.8.3 End Overlap

The figure above shows how the source buffer gets updated when the end of a new media segment overlaps a segment in the buffer. In this case, the media element tries to keep as much of the old segment as possible. The amount saved depends on how close the nearest random access point in the old segment is to the end of the new segment. In the case of audio, if the gap is smaller than the size of an audio frame, then the media element should insert silence for this gap and not reflect it in buffered.
An implementation may keep old segment data before the end of the new segment to avoid creating a gap if it wishes. Doing this, though, can significantly increase implementation complexity and could cause delays at the splice point. The key property that must be preserved is that the entirety of the new segment gets added to the source buffer; it is up to the implementation how much of the old segment data is retained. The web application can use buffered to determine how much of the old segment was preserved.
2.8.4 Middle Overlap

The figure above shows how the source buffer gets updated when the new media segment is in the middle of the old segment. This condition is handled by first resolving the start overlap and then resolving the end overlap.
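The overlap behaviors in this section can be sketched as time-range bookkeeping. This is an illustrative model only: real implementations operate on coded frames and random access points, not bare time ranges, and the quality field here is a hypothetical tag used just to show which data survives.

```javascript
// Illustrative model of section 2.8: a new segment overrides overlapping
// old data. Ranges are { start, end, quality } with [start, end) semantics.
function appendRange(ranges, seg) {
  const kept = [];
  for (const r of ranges) {
    if (r.end <= seg.start || r.start >= seg.end) {   // no overlap
      kept.push({ ...r });
      continue;
    }
    // Start/middle overlap: old data before the new segment is kept.
    if (r.start < seg.start) kept.push({ start: r.start, end: seg.start, quality: r.quality });
    // End overlap: old data after the new segment may be retained.
    if (r.end > seg.end) kept.push({ start: seg.end, end: r.end, quality: r.quality });
    // Old data inside [seg.start, seg.end) is replaced (complete overlap).
  }
  kept.push({ ...seg });
  return kept.sort((a, b) => a.start - b.start);
}
```

A middle overlap falls out of the same code: the old range is split into a piece before the new segment and a piece after it, matching the "resolve start overlap, then end overlap" description above.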
2.9. Source Buffer to Track Buffer transfer
The source buffer represents the media that the web application would like the media element to play. The track buffer contains the data that will actually get decoded and rendered. In most cases the track buffer will simply contain a subset of the source buffer near the current playback position. These two buffers start to diverge though when media segments that overlap or are very close to the current playback position are appended. Depending on the contents of the new media segment it may not be possible to switch to the new data immediately because there isn't a random access point close enough to the current playback position. The quality of the implementation determines how much data is considered "in the track buffer". It should transfer data to the track buffer as late as possible whilst maintaining seamless playback. Some implementations may be able to instantiate multiple decoders or decode the new data significantly faster than real-time to achieve a seamless splice immediately. Other implementations may delay until the next random access point before switching to the newly appended data. Notice that this difference in behavior is only observable when appending close to the current playback position. The track buffer represents a media subsegment, like a group of pictures or something with similar decode dependencies, that the media element commits to playing. This commitment may be influenced by a variety of things like limited decoding resources, hardware decode buffers, a jitter buffer, or the desire to limit implementation complexity.
Here is an example to help clarify the role of the track buffer. Say the current playback position has a timestamp of 8 and the media element pulled frames with timestamp 9 & 10 into the track buffer. The web application then appends a higher quality media segment that starts with a random access point at timestamp 9. The source buffer will get updated with the higher quality data, but the media element won't be able to switch to this higher quality data until the next random access point at timestamp 20. This is because a frame for timestamp 9 is already in the track buffer. As you can see, the track buffer represents the "point of no return" for decoding. If a seek occurs the media element may choose to use the higher quality data since a seek might imply flushing the track buffer and the user expects a break in playback.
2.10. Media Segment Eviction
When a new media segment is appended, memory constraints may cause previously appended segments to get evicted from the source buffer. The eviction algorithm is implementation dependent, but segments that aren't likely to be needed soon are the most likely to get evicted. The buffered attribute allows the web application to monitor what time ranges are currently buffered in the source buffer.
2.11. Applying Timestamp Offsets
For some use cases like ad-insertion or seamless playlists, the web application may want to insert a media segment in the presentation timeline at a location that is different from what the internal timestamps indicate. This can be accomplished by using the timestampOffset attribute on the SourceBuffer object. The value of timestampOffset is added to all timestamps inside a media segment before the contents of that segment are added to the source buffer. The timestampOffset applies to an entire media segment. An exception is thrown if the application tries to update the attribute when only part of a media segment has been appended. Both positive and negative offsets can be assigned to timestampOffset. If an offset causes a media segment timestamp to get converted to a time before the presentation start time, playback will terminate with a MediaError.MEDIA_ERR_DECODE error.
Here is a simple example to clarify how timestampOffset can be used. Say I have two sounds I want to play in sequence. The first sound is 5 seconds long and the second one is 10 seconds. Both sound files have timestamps that start at 0. First, append the initialization segment and all media segments for the first sound. Now set timestampOffset to 5 seconds. Finally, append the initialization segment and media segments for the second sound. This will result in a 15 second presentation that plays the two sounds in sequence.
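The offset rule above can be sketched as a small calculation. This is illustrative only; applyTimestampOffset is a hypothetical helper, not part of this specification's API.

```javascript
// Illustrative sketch of section 2.11: timestampOffset is added to every
// timestamp in a media segment before it enters the source buffer, and a
// resulting timestamp before the presentation start time (0) is fatal.
function applyTimestampOffset(timestamps, timestampOffset) {
  const shifted = timestamps.map((t) => t + timestampOffset);
  if (shifted.some((t) => t < 0)) {
    throw new Error("MEDIA_ERR_DECODE: timestamp before presentation start time");
  }
  return shifted;
}
```

In the two-sound example, the second sound's timestamps (starting at 0) would be shifted by an offset of 5 so they begin where the first sound ends.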
3. MediaSource Object
The MediaSource object represents a source of media data for an HTMLMediaElement. It keeps track of the readyState for this source as well as a list of SourceBuffer objects that can be used to add media data to the presentation. MediaSource objects are created by the web application and then attached to an HTMLMediaElement. The application uses the SourceBuffer objects in sourceBuffers to add media data to this source. The HTMLMediaElement fetches this media data from the MediaSource object when it is needed during playback.
[Constructor]
interface MediaSource : EventTarget {
  // All the source buffers created by this object.
  readonly attribute SourceBufferList sourceBuffers;

  // Subset of sourceBuffers that provide data for the selected/enabled tracks.
  readonly attribute SourceBufferList activeSourceBuffers;

  attribute unrestricted double duration;

  SourceBuffer addSourceBuffer(DOMString type);
  void removeSourceBuffer(SourceBuffer sourceBuffer);

  enum State { "closed", "open", "ended" };
  readonly attribute State readyState;

  enum EndOfStreamError { "network", "decode" };
  void endOfStream(optional EndOfStreamError error);
};
3.1. Methods and Attributes
The sourceBuffers attribute contains the list of SourceBuffer objects associated with this MediaSource. When readyState equals "closed" this list will be empty. Once readyState transitions to "open", SourceBuffer objects can be added to this list by using addSourceBuffer().
The activeSourceBuffers attribute contains the subset of sourceBuffers that represents the active source buffers.
The duration attribute allows the web application to set the presentation duration. The duration is initially set to NaN when the MediaSource object is created.
On getting, run the following steps:
- If the readyState attribute is "closed" then return NaN and abort these steps.
- Return the current value of the attribute.
On setting, run the following steps:
- If the value being set is negative or NaN then throw an INVALID_ACCESS_ERR exception and abort these steps.
- If the readyState attribute is not "open" then throw an INVALID_STATE_ERR exception and abort these steps.
- Run the duration change algorithm with new duration set to the value being set.
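The getter and setter steps above can be modeled as follows. This is a sketch: mediaSourceModel is a standalone stand-in object, not the real MediaSource interface, and the duration change algorithm is reduced to a plain assignment.

```javascript
// Illustrative model of the duration attribute's getter/setter steps.
const mediaSourceModel = {
  readyState: "closed",
  _duration: NaN,
  get duration() {
    // Getter step: a closed MediaSource always reads NaN.
    if (this.readyState === "closed") return NaN;
    return this._duration;
  },
  set duration(value) {
    // Setter step 1: reject negative or NaN values.
    if (value < 0 || Number.isNaN(value)) {
      throw new Error("INVALID_ACCESS_ERR");
    }
    // Setter step 2: the source must be open.
    if (this.readyState !== "open") {
      throw new Error("INVALID_STATE_ERR");
    }
    this._duration = value; // stands in for the duration change algorithm
  },
};
```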
Note: append() and endOfStream() can update the duration under certain circumstances.
The addSourceBuffer(type) method must run the following steps:
- If type is null or an empty string then throw an INVALID_ACCESS_ERR exception and abort these steps.
- If type contains a MIME type that is not supported or contains a MIME type that is not supported with the types specified for the other SourceBuffer objects in sourceBuffers, then throw a NOT_SUPPORTED_ERR exception and abort these steps.
- If the user agent can't handle any more SourceBuffer objects then throw a QUOTA_EXCEEDED_ERR exception and abort these steps.
- If the readyState attribute is not in the "open" state then throw an INVALID_STATE_ERR exception and abort these steps.
- Create a new SourceBuffer object and associated resources.
- Add the new object to sourceBuffers and queue a task to fire a simple event named addsourcebuffer at sourceBuffers.
- Return the new object.
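The steps above can be sketched as a plain function. This is illustrative only: isTypeSupported and maxBuffers are hypothetical stand-ins for the user agent's format and resource checks, and event queuing is elided.

```javascript
// Illustrative model of the addSourceBuffer(type) algorithm.
function addSourceBuffer(mediaSource, type, isTypeSupported, maxBuffers = 16) {
  if (type === null || type === "") throw new Error("INVALID_ACCESS_ERR");
  if (!isTypeSupported(type)) throw new Error("NOT_SUPPORTED_ERR");
  if (mediaSource.sourceBuffers.length >= maxBuffers) throw new Error("QUOTA_EXCEEDED_ERR");
  if (mediaSource.readyState !== "open") throw new Error("INVALID_STATE_ERR");
  const sourceBuffer = { type };                 // stands in for a real SourceBuffer
  mediaSource.sourceBuffers.push(sourceBuffer);  // addsourcebuffer would fire here
  return sourceBuffer;
}
```

Note the step order: type validation and support checks run before the quota and readyState checks, so a caller with a bad type string sees INVALID_ACCESS_ERR or NOT_SUPPORTED_ERR regardless of state.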
The removeSourceBuffer(sourceBuffer) method must run the following steps:
- If sourceBuffer is null then throw an INVALID_ACCESS_ERR exception and abort these steps.
- If sourceBuffer specifies an object that is not in sourceBuffers then throw a NOT_FOUND_ERR exception and abort these steps.
- Remove track information from audioTracks, videoTracks, and textTracks for all tracks associated with sourceBuffer and queue a task to fire a simple event named change at the modified lists.
- If sourceBuffer is in activeSourceBuffers, then remove it from activeSourceBuffers and queue a task to fire a simple event named removesourcebuffer at activeSourceBuffers.
- Remove sourceBuffer from sourceBuffers and queue a task to fire a simple event named removesourcebuffer at sourceBuffers.
- Destroy all resources for sourceBuffer.
The readyState attribute indicates the current state of the MediaSource object. It can have the following values:
"closed"
- Indicates the source is not currently attached to a media element.
"open"
- The source has been opened by a media element and is ready for data to be appended to the SourceBuffer objects in sourceBuffers.
"ended"
- The source is still attached to a media element, but endOfStream() has been called. Appending data to SourceBuffer objects in this state is not allowed.
When the MediaSource is created, readyState must be set to "closed".
End of stream error values:
"network"
- Terminates playback and signals that a network error has occurred.
  Note: If the JavaScript code fetching media data encounters a network error, it should use this status code to terminate playback.
"decode"
- Terminates playback and signals that a decoding error has occurred.
  Note: If the JavaScript code fetching media data has problems parsing the data, it should use this status code to terminate playback.
The endOfStream(error) method must run the following steps:
- If the readyState attribute is not in the "open" state then throw an INVALID_STATE_ERR exception and abort these steps.
- Change the readyState attribute value to "ended".
- Queue a task to fire a simple event named sourceended at the MediaSource.
- If error is not set, null, or an empty string:
  - Run the duration change algorithm with new duration set to the highest end timestamp across all SourceBuffer objects in sourceBuffers.
    Note: This allows the duration to properly reflect the end of the appended media segments. For example, if the duration was explicitly set to 10 seconds and only media segments for 0 to 5 seconds were appended before endOfStream() was called, then the duration will get updated to 5 seconds.
  - Notify the media element that it now has all of the media data. Playback should continue until all the media passed in via append() has been played.
- If error is set to "network":
  - If the HTMLMediaElement.readyState attribute equals HAVE_NOTHING:
    Run the "If the media data cannot be fetched at all, due to network errors, causing the user agent to give up trying to fetch the resource" steps of the resource fetch algorithm.
  - If the HTMLMediaElement.readyState attribute is greater than HAVE_NOTHING:
    Run the "If the connection is interrupted after some media data has been received, causing the user agent to give up trying to fetch the resource" steps of the resource fetch algorithm.
- If error is set to "decode":
  - If the HTMLMediaElement.readyState attribute equals HAVE_NOTHING:
    Run the "If the media data can be fetched but is found by inspection to be in an unsupported format, or can otherwise not be rendered at all" steps of the resource fetch algorithm.
  - If the HTMLMediaElement.readyState attribute is greater than HAVE_NOTHING:
    Run the media data is corrupted steps of the resource fetch algorithm.
- Otherwise:
  - Throw an INVALID_ACCESS_ERR exception.
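The branching above can be sketched as follows. This is an illustrative model: event queuing and the resource fetch algorithm are reduced to return values, and highestEndTimestamp is a hypothetical per-buffer field standing in for the real end-timestamp computation.

```javascript
// Illustrative model of the endOfStream(error) algorithm.
function endOfStream(mediaSource, error) {
  if (mediaSource.readyState !== "open") throw new Error("INVALID_STATE_ERR");
  mediaSource.readyState = "ended"; // a sourceended event would be queued here
  if (error === undefined || error === null || error === "") {
    // New duration = highest end timestamp across all source buffers.
    mediaSource.duration = Math.max(
      ...mediaSource.sourceBuffers.map((sb) => sb.highestEndTimestamp));
    return "all data received";
  }
  if (error === "network") return "signal network error";
  if (error === "decode") return "signal decode error";
  throw new Error("INVALID_ACCESS_ERR");
}
```

This mirrors the note above: if the duration was set to 10 but only 5 seconds of media were appended, calling endOfStream() with no error shrinks the duration to 5.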
3.2. Event Summary
Event name | Interface | Dispatched when...
---|---|---
sourceopen | Event | readyState transitions from "closed" to "open" or from "ended" to "open".
sourceended | Event | readyState transitions from "open" to "ended".
sourceclose | Event | readyState transitions from "open" to "closed" or from "ended" to "closed".
3.3. Algorithms
3.3.1 Attaching to a media element
A MediaSource object can be attached to a media element by assigning a MediaSource object URL to the media element src attribute or the src attribute of a <source> inside a media element. A MediaSource object URL is created by passing a MediaSource object to createObjectURL().
If the resource fetch algorithm's absolute URL matches the MediaSource object URL, run the following steps right before the "Perform a potentially CORS-enabled fetch" step in the resource fetch algorithm.
- If readyState is NOT set to "closed":
  Run the "If the media data cannot be fetched at all, due to network errors, causing the user agent to give up trying to fetch the resource" steps of the resource fetch algorithm.
- Otherwise:
  - Set the readyState attribute to "open".
  - Queue a task to fire a simple event named sourceopen at the MediaSource.
  - Allow the resource fetch algorithm to progress based on data passed in via append().
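The attach steps can be sketched as follows. This is an illustrative model: the firedEvents array stands in for queued tasks, and the outcome of the resource fetch algorithm is reduced to a return value.

```javascript
// Illustrative model of section 3.3.1: attaching a MediaSource to a media
// element via a MediaSource object URL.
function attach(mediaSource, firedEvents) {
  if (mediaSource.readyState !== "closed") {
    // Treated as "media cannot be fetched" (a network error) by the element.
    return "media cannot be fetched";
  }
  mediaSource.readyState = "open";
  firedEvents.push("sourceopen"); // stands in for "queue a task to fire sourceopen"
  return "fetch continues from append() data";
}
```

The guard is why a single MediaSource object cannot be attached to two media elements at once: the second attachment finds readyState already "open" and fails.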
3.3.2 Detaching from a media element
The following steps are run in any case where the media element is going to transition to NETWORK_EMPTY and queue a task to fire a simple event named emptied at the media element. These steps should be run right before the transition.

1. Set the readyState attribute to "closed".
2. Set the duration attribute to NaN.
3. Remove all the SourceBuffer objects from activeSourceBuffers.
4. Queue a task to fire a simple event named removesourcebuffer at activeSourceBuffers.
5. Remove all the SourceBuffer objects from sourceBuffers.
6. Queue a task to fire a simple event named removesourcebuffer at sourceBuffers.
7. Queue a task to fire a simple event named sourceclose at the MediaSource.
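The detach steps above can be sketched on a plain-object model. This is a non-normative sketch; fireSimpleEvent is a hypothetical stand-in for the "queue a task to fire a simple event" steps, and the ordering of the three events matters.

```javascript
// Sketch of the detach steps on a plain-object MediaSource model.
// fireSimpleEvent is a hypothetical stand-in for the event-queueing steps.
function detachMediaSource(mediaSource, fireSimpleEvent) {
  mediaSource.readyState = "closed";
  mediaSource.duration = NaN;
  mediaSource.activeSourceBuffers.length = 0;
  fireSimpleEvent("removesourcebuffer", mediaSource.activeSourceBuffers);
  mediaSource.sourceBuffers.length = 0;
  fireSimpleEvent("removesourcebuffer", mediaSource.sourceBuffers);
  // sourceclose fires last, after both lists have been emptied.
  fireSimpleEvent("sourceclose", mediaSource);
}
```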
3.3.3 Seeking
Run the following steps as part of the "Wait until the user agent has established whether or not the media data for the new playback position is available, and, if it is, until it has decoded enough data to play back that position" step of the media element seek algorithm:
1. The media element looks for media segments containing the new playback position in each SourceBuffer object in activeSourceBuffers.
2. Check for missing segments:
   - If one or more of the objects in activeSourceBuffers is missing media segments for the new playback position:
     1. Set the HTMLMediaElement.readyState attribute to HAVE_METADATA.
     2. The media element waits for the necessary media segments to be passed to append(). The web application can use buffered to determine what the media element needs to resume playback.
   - Otherwise:
     Continue.
3. The media element resets all decoders and initializes each one with data from the appropriate initialization segment.
4. The media element feeds data from the media segments into the decoders until the new playback position is reached.
5. Resume the media element seek algorithm at the "Await a stable state" step.
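Step 2's check can be illustrated with range math. In this non-normative sketch, each SourceBuffer's buffered attribute is modeled as an array of [start, end] pairs in seconds; the seek can proceed without waiting only when every active buffer covers the new position.

```javascript
// Sketch: does every active SourceBuffer have a media segment covering the
// new playback position? Buffered ranges are modeled as [start, end] pairs.
function canSeekWithoutWaiting(activeBufferedRanges, newPosition) {
  return activeBufferedRanges.every(function (ranges) {
    return ranges.some(function (r) {
      return newPosition >= r[0] && newPosition < r[1];
    });
  });
}
```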
3.3.4 SourceBuffer Monitoring
The following steps are periodically run during playback to make sure that all of the SourceBuffer
objects in activeSourceBuffers
have enough data to ensure uninterrupted playback. Appending new segments and changes to activeSourceBuffers
also cause these steps to run because they affect the conditions that trigger state transitions. The web application can monitor changes in HTMLMediaElement.readyState
to drive media segment appending.
- If buffered for all objects in activeSourceBuffers do not contain TimeRanges for the current playback position:
  1. Set the HTMLMediaElement.readyState attribute to HAVE_METADATA.
  2. If this is the first transition to HAVE_METADATA, then queue a task to fire a simple event named loadedmetadata at the media element.
  3. Abort these steps.
- If buffered for all objects in activeSourceBuffers contain TimeRanges that include the current playback position and enough data to ensure uninterrupted playback:
  1. Set the HTMLMediaElement.readyState attribute to HAVE_ENOUGH_DATA.
  2. Queue a task to fire a simple event named canplaythrough at the media element.
  3. Playback may resume at this point if it was previously suspended by a transition to HAVE_CURRENT_DATA.
  4. Abort these steps.
- If buffered for at least one object in activeSourceBuffers contains a TimeRange that includes the current playback position but not enough data to ensure uninterrupted playback:
  1. Set the HTMLMediaElement.readyState attribute to HAVE_FUTURE_DATA.
  2. If the previous value of HTMLMediaElement.readyState was less than HAVE_FUTURE_DATA, then queue a task to fire a simple event named canplay at the media element.
  3. Playback may resume at this point if it was previously suspended by a transition to HAVE_CURRENT_DATA.
  4. Abort these steps.
- If buffered for at least one object in activeSourceBuffers contains a TimeRange that ends at the current playback position and does not have a range covering the time immediately after the current position:
  1. Set the HTMLMediaElement.readyState attribute to HAVE_CURRENT_DATA.
  2. If this is the first transition to HAVE_CURRENT_DATA, then queue a task to fire a simple event named loadeddata at the media element.
  3. Playback is suspended at this point since the media element doesn't have enough data to advance the timeline.
  4. Abort these steps.
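The decision tree above can be sketched as a pure function over buffered ranges. This is a non-normative sketch: ranges are [start, end] pairs, and "enough data to ensure uninterrupted playback" is approximated here by a hypothetical minimum lookahead constant; real user agents apply their own heuristics.

```javascript
// Sketch of the readyState decision over buffered ranges.
// MIN_LOOKAHEAD is an assumption standing in for "enough data to
// ensure uninterrupted playback"; user agents define their own test.
var MIN_LOOKAHEAD = 10; // seconds

function computeReadyState(activeBufferedRanges, currentTime) {
  // Seconds of buffered data ahead of currentTime, or -1 if not buffered.
  function lookahead(ranges) {
    for (var i = 0; i < ranges.length; i++) {
      if (currentTime >= ranges[i][0] && currentTime <= ranges[i][1])
        return ranges[i][1] - currentTime;
    }
    return -1;
  }
  var aheads = activeBufferedRanges.map(lookahead);
  if (aheads.every(function (a) { return a < 0; })) return "HAVE_METADATA";
  if (aheads.every(function (a) { return a >= MIN_LOOKAHEAD; })) return "HAVE_ENOUGH_DATA";
  if (aheads.some(function (a) { return a > 0; })) return "HAVE_FUTURE_DATA";
  return "HAVE_CURRENT_DATA"; // a range ends exactly at the current position
}
```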
3.3.5 Changes to selected/enabled track state
During playback activeSourceBuffers needs to be updated if the selected video track, the enabled audio tracks, or a text track mode changes. When one or more of these changes occur, the following steps need to be followed.
- If the selected video track changes:
  1. If the SourceBuffer associated with the previously selected video track is not associated with any other enabled tracks, run the following steps:
     1. Remove the SourceBuffer from activeSourceBuffers.
     2. Queue a task to fire a simple event named removesourcebuffer at activeSourceBuffers.
  2. If the SourceBuffer associated with the newly selected video track is not already in activeSourceBuffers, run the following steps:
     1. Add the SourceBuffer to activeSourceBuffers.
     2. Queue a task to fire a simple event named addsourcebuffer at activeSourceBuffers.
- If an audio track becomes disabled and the SourceBuffer associated with this track is not associated with any other enabled or selected track:
  1. Remove the SourceBuffer associated with the audio track from activeSourceBuffers.
  2. Queue a task to fire a simple event named removesourcebuffer at activeSourceBuffers.
- If an audio track becomes enabled and the SourceBuffer associated with this track is not already in activeSourceBuffers:
  1. Add the SourceBuffer associated with the audio track to activeSourceBuffers.
  2. Queue a task to fire a simple event named addsourcebuffer at activeSourceBuffers.
- If a text track mode becomes "disabled" and the SourceBuffer associated with this track is not associated with any other enabled or selected track:
  1. Remove the SourceBuffer associated with the text track from activeSourceBuffers.
  2. Queue a task to fire a simple event named removesourcebuffer at activeSourceBuffers.
- If a text track mode becomes "showing" or "hidden" and the SourceBuffer associated with this track is not already in activeSourceBuffers:
  1. Add the SourceBuffer associated with the text track to activeSourceBuffers.
  2. Queue a task to fire a simple event named addsourcebuffer at activeSourceBuffers.
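All five cases above reduce to the same add/remove pattern. This non-normative sketch factors it out; fireSimpleEvent is a hypothetical stand-in for the event-queueing steps, and stillNeeded models "associated with another enabled or selected track".

```javascript
// Sketch of keeping activeSourceBuffers in sync when a track is toggled.
// fireSimpleEvent is a hypothetical stand-in for the event-queueing steps.
function setTrackActive(active, sourceBuffer, shouldBeActive, stillNeeded, fireSimpleEvent) {
  var index = active.indexOf(sourceBuffer);
  if (shouldBeActive && index === -1) {
    active.push(sourceBuffer);
    fireSimpleEvent("addsourcebuffer", active);
  } else if (!shouldBeActive && index !== -1 && !stillNeeded) {
    // Only remove the buffer if no other enabled/selected track uses it.
    active.splice(index, 1);
    fireSimpleEvent("removesourcebuffer", active);
  }
}
```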
3.3.6 Duration change
Follow these steps when duration
needs to change to a new duration.
1. If the current value of duration is equal to new duration, then abort these steps.
2. Update duration to new duration.
3. Remove all media data with timestamps that are greater than new duration from all SourceBuffer objects in sourceBuffers.

   Note: This preserves audio frames that start before and end after the duration. The user agent must end playback at duration even if the audio frame extends beyond this time.
4. Update the media controller duration to new duration and run the HTMLMediaElement duration change algorithm.
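The truncation rule in step 3 can be sketched on a plain model. This is a non-normative sketch: coded frames are modeled as {start, end} objects, and a frame that merely extends past the new duration is kept, matching the note above.

```javascript
// Sketch of the duration change algorithm on a plain-object model.
// Frames whose start timestamp is beyond the new duration are removed;
// frames that start before it but end after it are kept (per the note).
function changeDuration(mediaSource, newDuration) {
  if (mediaSource.duration === newDuration) return; // step 1: no-op
  mediaSource.duration = newDuration;
  mediaSource.sourceBuffers.forEach(function (sb) {
    sb.frames = sb.frames.filter(function (f) { return f.start <= newDuration; });
  });
}
```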
4. SourceBuffer Object
```
interface SourceBuffer : EventTarget {
  // Returns the time ranges buffered.
  readonly attribute TimeRanges buffered;

  // Applies an offset to media segment timestamps.
  attribute double timestampOffset;

  // Append segment data.
  void append(Uint8Array data);

  // Abort the current segment append sequence.
  void abort();
};
```
The buffered attribute indicates what TimeRanges are buffered in the SourceBuffer. When the attribute is read the following steps must occur:

1. If this object has been removed from the sourceBuffers attribute of the MediaSource object that created it, then throw an INVALID_STATE_ERR exception and abort these steps.
2. Return a new static normalized TimeRanges object for the media segments buffered.
The timestampOffset attribute controls the offset applied to timestamps inside subsequent media segments that are appended to this SourceBuffer. The timestampOffset is initially set to 0, which indicates that no offset is being applied. On getting, the initial value or the last value that was successfully set is returned. On setting, run the following steps:

1. If this object has been removed from the sourceBuffers attribute of the MediaSource object that created it, then throw an INVALID_STATE_ERR exception and abort these steps.
2. If the readyState attribute of the MediaSource object that created this object is not in the "open" state, then throw an INVALID_STATE_ERR exception and abort these steps.
3. If this object is waiting for the end of a media segment to be appended, then throw an INVALID_STATE_ERR exception and abort these steps.
4. Update the attribute to the new value.
The append(data) method must run the following steps:

1. Let media source be the MediaSource object that created this object.
2. If data is null, then throw an INVALID_ACCESS_ERR exception and abort these steps.
3. If this object has been removed from the sourceBuffers attribute of media source, then throw an INVALID_STATE_ERR exception and abort these steps.
4. If the readyState attribute of media source is in the "closed" state, then throw an INVALID_STATE_ERR exception and abort these steps.
5. If the readyState attribute of media source is in the "ended" state, then run the following steps:
   1. Set the readyState attribute of media source to "open".
   2. Queue a task to fire a simple event named sourceopen at media source.
6. If data.byteLength is 0, abort these steps.
7. If data contains anything that violates the byte stream format specifications, then call endOfStream("decode") and abort these steps.
8. Add data to the source buffer:
   - If data is part of a media segment and timestampOffset is not 0:
     1. Find all timestamps inside data and add timestampOffset to them.
     2. If any of the modified timestamps are earlier than the presentation start time, then call endOfStream("decode") and abort these steps.
     3. Copy the contents of data, with the modified timestamps, into the source buffer.
   - Otherwise:
     Copy the contents of data into the source buffer.
9. Handle end of segment cases:
   - If data completes the first initialization segment appended to the source buffer, run the following steps:
     1. Update the duration attribute if it currently equals NaN:
        - If the initialization segment contains a duration:
          Run the duration change algorithm with new duration set to the duration in the initialization segment.
        - Otherwise:
          Run the duration change algorithm with new duration set to positive Infinity.
     2. Handle state transitions:
        - If the HTMLMediaElement.readyState attribute is HAVE_NOTHING:
          1. Set the HTMLMediaElement.readyState attribute to HAVE_METADATA.
          2. Queue a task to fire a simple event named loadedmetadata at the media element.
        - If the HTMLMediaElement.readyState attribute is greater than HAVE_CURRENT_DATA and the initialization segment contains the first video or first audio track in the presentation:
          Set the HTMLMediaElement.readyState attribute to HAVE_METADATA.
        - Otherwise:
          Continue.
     3. Update audioTracks:
        - If the initialization segment contains the first audio track:
          1. Add an AudioTrack and mark it as enabled.
          2. Add this SourceBuffer to activeSourceBuffers.
        - If the initialization segment contains audio tracks beyond those already in the presentation:
          Add a disabled AudioTrack for each audio track in the initialization segment.
     4. Update videoTracks:
        - If the initialization segment contains the first video track:
          1. Add a VideoTrack and mark it as selected.
          2. Add this SourceBuffer to activeSourceBuffers.
        - If the initialization segment contains video tracks beyond those already in the presentation:
          Add a disabled VideoTrack for each video track in the initialization segment.
     5. Update textTracks:
        1. Add a TextTrack for each text track in the initialization segment.
        2. If the text track mode is "showing" or "hidden", then add this SourceBuffer to activeSourceBuffers.
   - If the HTMLMediaElement.readyState attribute is HAVE_METADATA and data causes all objects in activeSourceBuffers to have media data for the current playback position:
     1. Set the HTMLMediaElement.readyState attribute to HAVE_CURRENT_DATA.
     2. If this is the first transition to HAVE_CURRENT_DATA, then queue a task to fire a simple event named loadeddata at the media element.
   - If the HTMLMediaElement.readyState attribute is HAVE_CURRENT_DATA and data causes all objects in activeSourceBuffers to have media data beyond the current playback position:
     1. Set the HTMLMediaElement.readyState attribute to HAVE_FUTURE_DATA.
     2. Queue a task to fire a simple event named canplay at the media element.
   - If the HTMLMediaElement.readyState attribute is HAVE_FUTURE_DATA and data causes all objects in activeSourceBuffers to have enough data to start playback:
     1. Set the HTMLMediaElement.readyState attribute to HAVE_ENOUGH_DATA.
     2. Queue a task to fire a simple event named canplaythrough at the media element.
   - If the media segment contains data beyond the current duration:
     Run the duration change algorithm with new duration set to the maximum of the current duration and the highest end timestamp reported by HTMLMediaElement.buffered.
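The entry checks of append() (steps 1 through 6) can be sketched as plain logic. This is a non-normative sketch on a plain-object model: the byte-stream parsing and readyState transitions that follow are user-agent internal and omitted, and fireSimpleEvent is a hypothetical stand-in for the event-queueing steps.

```javascript
// Sketch of the entry checks in append(): null data, a detached buffer,
// the "closed" state, and the "ended" state reopening behavior.
function appendChecks(sourceBuffer, mediaSource, data, fireSimpleEvent) {
  if (data === null) throw new Error("INVALID_ACCESS_ERR");
  if (mediaSource.sourceBuffers.indexOf(sourceBuffer) === -1)
    throw new Error("INVALID_STATE_ERR"); // removed from sourceBuffers
  if (mediaSource.readyState === "closed") throw new Error("INVALID_STATE_ERR");
  if (mediaSource.readyState === "ended") {
    // Appending in the "ended" state reopens the MediaSource.
    mediaSource.readyState = "open";
    fireSimpleEvent("sourceopen", mediaSource);
  }
  if (data.length === 0) return false; // zero-length append is a no-op
  return true; // proceed to copy data into the source buffer
}
```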
The abort() method must run the following steps:

1. If this object has been removed from the sourceBuffers attribute of the MediaSource object that created it, then throw an INVALID_STATE_ERR exception and abort these steps.
2. If the readyState attribute of the MediaSource object that created this object is not in the "open" state, then throw an INVALID_STATE_ERR exception and abort these steps.
3. The media element aborts parsing the current segment:
   - If waiting for the start of a new segment:
     Continue.
   - If the current segment is an initialization segment:
     Flush any data associated with this partial segment.
   - If the current segment is a media segment:
     The media element may keep any media data it finds valuable in the partial segment. For example, if the abort happens in the middle of a 10 second media segment, the media element may choose to keep the 5 seconds of media data it has already parsed in the source buffer. buffered will reflect what data, if any, was kept.
4. The media element resets the segment parser so that it can accept a new initialization segment or media segment.
5. SourceBufferList Object
SourceBufferList is a simple container object for SourceBuffer
objects. It provides read-only array access and fires events when the list is modified.
```
interface SourceBufferList : EventTarget {
  readonly attribute unsigned long length;
  getter SourceBuffer (unsigned long index);
};
```
5.1. Methods and Attributes
The length attribute indicates the number of SourceBuffer objects in the list.

The getter SourceBuffer (unsigned long index) method allows the SourceBuffer objects in the list to be accessed with an array operator (i.e. []). This method must run the following steps:

1. If index is greater than or equal to the length attribute, then return undefined and abort these steps.
2. Return the index'th SourceBuffer object in the list.
5.2. Event Summary
Event name | Interface | Dispatched when... |
---|---|---|
addsourcebuffer | Event | When a SourceBuffer is added to the list. |
removesourcebuffer | Event | When a SourceBuffer is removed from the list. |
6. URL Object
```
partial interface URL {
  static DOMString createObjectURL(MediaSource mediaSource);
};
```
6.1. Methods
The createObjectURL(mediaSource)
method must run the following steps.
1. If mediaSource is null, then return null.
2. Return a unique MediaSource object URL that can be used to dereference the mediaSource argument, and run the remaining steps asynchronously.
3. Provide a stable state.
4. Revoke the MediaSource object URL by calling revokeObjectURL() on it.
Note: This algorithm is intended to mirror the behavior of the File API createObjectURL() method with autoRevoke set to true.
7. HTMLMediaElement attributes
This section specifies what existing attributes on the HTMLMediaElement
should return when a MediaSource
is attached to the element.
The HTMLMediaElement.seekable attribute returns a new static normalized TimeRanges object created based on the following steps:

- If duration equals NaN:
  Return an empty TimeRanges object.
- If duration equals positive Infinity:
  Return a single range with a start time of 0 and an end time equal to the highest end time reported by the HTMLMediaElement.buffered attribute.
- Otherwise:
  Return a single range with a start time of 0 and an end time equal to duration.
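The three branches above are simple enough to state as a pure function. In this non-normative sketch, ranges are [start, end] pairs and bufferedEnd stands for the highest end time reported by HTMLMediaElement.buffered.

```javascript
// Sketch of the seekable computation from duration and the buffered ranges.
function computeSeekable(duration, bufferedEnd) {
  if (isNaN(duration)) return [];                 // nothing is seekable yet
  if (duration === Infinity) return [[0, bufferedEnd]]; // live-style stream
  return [[0, duration]];                         // finite presentation
}
```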
The HTMLMediaElement.buffered attribute returns a new static normalized TimeRanges object created based on the following steps:

1. Let active ranges be the ranges returned by buffered for each SourceBuffer object in activeSourceBuffers.
2. Let intersection range be the intersection of the active ranges.
3. If readyState is "ended", then run the following steps:
   1. Let highest end time be the largest end time in the active ranges.
   2. Let highest intersection end time be the highest end time in the intersection range.
   3. If the highest intersection end time is less than the highest end time, then update the intersection range so that the highest intersection end time equals the highest end time.
4. Return the intersection range.
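The intersection and the "ended" extension can be sketched with range math. This is a non-normative sketch assuming each buffer's ranges are sorted, non-overlapping [start, end] pairs.

```javascript
// Intersect two sorted, non-overlapping range lists.
function intersectTwo(a, b) {
  var out = [], i = 0, j = 0;
  while (i < a.length && j < b.length) {
    var start = Math.max(a[i][0], b[j][0]);
    var end = Math.min(a[i][1], b[j][1]);
    if (start < end) out.push([start, end]);
    // Advance whichever range ends first.
    (a[i][1] < b[j][1]) ? i++ : j++;
  }
  return out;
}

// Sketch of the HTMLMediaElement.buffered computation: intersect the ranges
// of every active SourceBuffer, then, if readyState is "ended", extend the
// last intersection range to the highest end time seen in any buffer.
function computeBuffered(activeRanges, readyState) {
  var intersection = activeRanges.reduce(intersectTwo);
  if (readyState === "ended" && intersection.length > 0) {
    var highestEnd = Math.max.apply(null, activeRanges.map(function (r) {
      return r.length ? r[r.length - 1][1] : 0;
    }));
    var last = intersection[intersection.length - 1];
    if (last[1] < highestEnd) last[1] = highestEnd;
  }
  return intersection;
}
```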
8. Byte Stream Formats
The bytes provided through append()
for a SourceBuffer
form a logical byte stream. The format of this byte stream depends on the media container format in use and is defined in a byte stream format specification. Byte stream format specifications based on WebM and the ISO Base Media File Format are provided below. If these formats are supported then the byte stream formats described below must be supported.
This section provides general requirements for all byte stream formats:
- A byte stream format specification may define initialization segments and must define media segments.
- It must be possible to identify segment boundaries and segment type (initialization or media) by examining the byte stream alone.
- The combination of an Initialization Segment and any contiguous sequence of Media Segments associated with it must:
  - Identify the number and type (audio, video, text, etc.) of tracks in the Segments.
  - Identify the decoding capabilities needed to decode each track (i.e. codec and codec parameters).
  - If a track is encrypted, provide any encryption parameters necessary to decrypt the content (except the encryption key itself).
  - For each track, provide all information necessary to decode and render the earliest random access point in the sequence of Media Segments and all subsequent samples in the sequence (in presentation time). This includes, in particular:
    - Information that determines the intrinsic width and height of the video (specifically, this requires either the picture or pixel aspect ratio, together with the encoded resolution).
    - Information necessary to convert the video decoder output to a format suitable for display.
  - Identify the global presentation timestamp of every sample in the sequence of Media Segments.

For example, if I1 is associated with M1, M2, M3 then the above must hold for all the combinations I1+M1, I1+M2, I1+M1+M2, I1+M2+M3, etc.
Byte stream specifications must at a minimum define constraints which ensure that the above requirements hold. Additional constraints may be defined, for example to simplify implementation.
Initialization segments are an optimization. They allow a byte stream format to avoid duplication of information in Media Segments that is the same for many Media Segments. Byte stream format specifications need not specify Initialization Segment formats, however. They may instead require that such information is duplicated in every Media Segment.
8.1 WebM Byte Streams
This section defines segment formats for implementations that choose to support WebM.
8.1.1. Initialization Segments
A WebM initialization segment must contain a subset of the elements at the start of a typical WebM file.
The following rules apply to WebM initialization segments:
- The initialization segment must start with an EBML Header element, followed by a Segment header.
- The size value in the Segment header must signal an "unknown size" or contain a value large enough to include the Segment Information and Tracks elements that follow.
- A Segment Information element and a Tracks element must appear, in that order, after the Segment header and before any further EBML Header or Cluster elements.
- Any elements other than an EBML Header or a Cluster that occur before, in between, or after the Segment Information and Tracks elements are ignored.
8.1.2. Media Segments
A WebM media segment is a single Cluster element.
The following rules apply to WebM media segments:
- The Timecode element in the Cluster contains a presentation timestamp in TimecodeScale units.
- The TimecodeScale in the WebM initialization segment most recently appended applies to all timestamps in the Cluster
- The Cluster header may contain an "unknown" size value. If it does then the end of the cluster is reached when another Cluster header or an element header that indicates the start of a WebM initialization segment is encountered.
- Block & SimpleBlock elements must be in time increasing order consistent with the WebM spec.
- If the most recent WebM initialization segment describes multiple tracks, then blocks from all the tracks must be interleaved in time increasing order. At least one block from all audio and video tracks must be present.
- Cues or Chapters elements may follow a Cluster element. These elements must be accepted and ignored by the user agent.
8.1.3. Random Access Points
A SimpleBlock element with its Keyframe flag set signals the location of a random access point for that track. Media segments containing multiple tracks are only considered a random access point if the first SimpleBlock for each track has its Keyframe flag set. The order of the multiplexed blocks must conform to the WebM Muxer Guidelines.
8.2 ISO Base Media File Format Byte Streams
This section defines segment formats for implementations that choose to support the ISO Base Media File Format ISO/IEC 14496-12 (ISO BMFF).
8.2.1. Initialization Segments
An ISO BMFF initialization segment must contain a single Movie Header Box (moov). The tracks in the Movie Header Box must not contain any samples (i.e. the entry_count in the stts, stsc and stco boxes must be set to zero). A Movie Extends (mvex) box must be contained in the Movie Header Box to indicate that Movie Fragments are to be expected.
The initialization segment may contain Edit Boxes (edts) which provide a mapping of composition times for each track to the global presentation time.
8.2.2. Media Segments
An ISO BMFF media segment must contain a single Movie Fragment Box (moof) followed by one or more Media Data Boxes (mdat).
The following rules apply to ISO BMFF media segments:
- The Movie Fragment Box must contain at least one Track Fragment Box (traf).
- The Movie Fragment Box must use movie-fragment relative addressing and the flag default-base-is-moof must be set; absolute byte-offsets must not be used.
- External data references must not be used.
- If the Movie Fragment contains multiple tracks, the duration by which each track extends should be as close to equal as practical.
- Each Track Fragment Box must contain a Track Fragment Decode Time Box (tfdt)
- The Media Data Boxes must contain all the samples referenced by the Track Run Boxes (trun) of the Movie Fragment Box.
8.2.3. Random Access Points
A random access point as defined in this specification corresponds to a Stream Access Point of type 1 or 2 as defined in Annex I of ISO/IEC 14496-12.
9. Examples
Example use of the Media Source Extensions
```html
<script>
  function onSourceOpen(videoTag, e) {
    var mediaSource = e.target;
    var sourceBuffer = mediaSource.addSourceBuffer('video/webm; codecs="vorbis,vp8"');
    videoTag.addEventListener('seeking', onSeeking.bind(videoTag, mediaSource));
    videoTag.addEventListener('progress', onProgress.bind(videoTag, mediaSource));

    var initSegment = GetInitializationSegment();

    if (initSegment == null) {
      // Error fetching the initialization segment. Signal end of stream with an error.
      mediaSource.endOfStream("network");
      return;
    }

    // Append the initialization segment.
    sourceBuffer.append(initSegment);

    // Append some initial media data.
    appendNextMediaSegment(mediaSource);
  }

  function appendNextMediaSegment(mediaSource) {
    if (mediaSource.readyState == "ended")
      return;

    // If we have run out of stream data, then signal end of stream.
    if (!HaveMoreMediaSegments()) {
      mediaSource.endOfStream();
      return;
    }

    var mediaSegment = GetNextMediaSegment();

    if (!mediaSegment) {
      // Error fetching the next media segment.
      mediaSource.endOfStream("network");
      return;
    }

    mediaSource.sourceBuffers[0].append(mediaSegment);
  }

  function onSeeking(mediaSource, e) {
    var video = e.target;

    // Abort current segment append.
    mediaSource.sourceBuffers[0].abort();

    // Notify the media segment loading code to start fetching data at the
    // new playback position.
    SeekToMediaSegmentAt(video.currentTime);

    // Append media segments from the new playback position.
    appendNextMediaSegment(mediaSource);
    appendNextMediaSegment(mediaSource);
  }

  function onProgress(mediaSource, e) {
    appendNextMediaSegment(mediaSource);
  }
</script>

<video id="v" autoplay></video>

<script>
  var video = document.getElementById('v');
  var mediaSource = new MediaSource();

  mediaSource.addEventListener('sourceopen', onSourceOpen.bind(this, video));
  video.src = window.URL.createObjectURL(mediaSource);
</script>
```
10. Revision History
Version | Comment |
---|---|
8 October 2012 | |
1 October 2012 | Fixed various addsourcebuffer & removesourcebuffer bugs and allow append() in ended state. |
13 September 2012 | Updated endOfStream() behavior to change based on the value of HTMLMediaElement.readyState. |
24 August 2012 | |
22 August 2012 | |
17 August 2012 | Minor editorial fixes. |
09 August 2012 | Change presentation start time to always be 0 instead of using format specific rules about the first media segment appended. |
30 July 2012 | Added SourceBuffer.timestampOffset and MediaSource.duration. |
17 July 2012 | Replaced SourceBufferList.remove() with MediaSource.removeSourceBuffer(). |
02 July 2012 | Converted to the object-oriented API. |
26 June 2012 | Converted to Editor's draft. |
0.5 | Minor updates before proposing to W3C HTML-WG. |
0.4 | Major revision. Adding source IDs, defining buffer model, and clarifying byte stream formats. |
0.3 | Minor text updates. |
0.2 | Updates to reflect initial WebKit implementation. |
0.1 | Initial proposal. |