The low-level mechanisms of an Ogg stream (as described in the Ogg Bitstream Overview) provide means for mixing multiple logical streams and media types into a single linear-chronological stream. This document specifies the high-level arrangement and use of page structure to multiplex multiple streams of mixed media type within a physical Ogg stream.
The only exception to arranging pages in strictly ascending time order by granule position is for pages that do not set the granule position value. This special case arises when an exceptionally large packet spans multiple pages; the specifics of handling it are described later under 'Continuous and Discontinuous Streams'.
Even an optional index would require applications to support multiple seek methods (bisection search for a one-pass stream, index lookup for a two-pass stream), while adding no additional functionality: bisection search already delivers the same capability for both stream types.
Seek operations are by absolute time; a direct bisection search must find the exact time position requested. Information in the Ogg bitstream is arranged such that all information to be presented for playback from the desired seek point will occur at or after the desired seek point. Seek operations are neither 'fuzzy' nor heuristic.
Although keyframe handling in video appears to be an exception to "all needed playback information lies ahead of a given seek", keyframes can still be handled directly within this indexless framework. Seeking to a keyframe in video (as well as seeking in other media types with analogous constraints) is handled as two seeks: first a seek to the desired time, which extracts state information that decodes to the time of the last keyframe, followed by a second seek directly to that keyframe. The location of the previous keyframe is embedded as state information in the granulepos; this mechanism is described in more detail later.
A stream that provides a gapless, time-continuous media type with a fine-grained timebase is considered to be 'Continuous'. A continuous stream should never be starved of data. Clear examples of continuous data types include broadcast audio and video.
A stream that delivers data in a potentially irregular pattern or with widely spaced timing gaps is considered to be 'Discontinuous'. A discontinuous stream may be best thought of as data representing scattered events; although they happen in order, they are typically unconnected pieces of data often located far apart. One possible example of a discontinuous stream type would be captioning. Although it is possible to design captions as a continuous stream type, it is most natural to think of captions as widely spaced pieces of text with little happening in between.
The fundamental design distinction between continuous and discontinuous streams concerns buffering.
Discontinuous stream data may occur on a fairly regular basis, but the timing of, for example, a specific caption is impossible to predict with certainty in most captioning systems. Thus the buffering system should take discontinuous data 'as it comes' rather than working ahead (for a potentially unbounded period) to look for future discontinuous data. As such, discontinuous streams are ignored when managing buffering; their pages simply 'fall out' of the stream when continuous streams are handled properly.
Buffering requirements need not be explicitly declared or managed for the encoded stream; the decoder simply reads as much data as is necessary to keep all continuous stream types gapless (also ensuring discontinuous data arrives in time) and no more, resulting in optimum implicit buffer usage for a given stream. Because all pages of all data types are stamped with absolute timing information within the stream, inter-stream synchronization timing is always explicitly maintained without the need for explicitly declared buffer-ahead hinting.
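The following is a minimal sketch of this implicit buffering rule: keep reading pages only until no continuous stream is starved before the current playback time, delivering discontinuous pages as they happen to appear. The stream_t structure and the read_next_page()/deliver_page() helpers are hypothetical application-side constructs, not libogg calls.

    #include <stdint.h>

    typedef struct {
        int     continuous;   /* declared by the codec after its header is decoded */
        int64_t buffered_to;  /* absolute time this stream is buffered up to */
    } stream_t;

    extern int  read_next_page(stream_t **stream, int64_t *page_time); /* hypothetical */
    extern void deliver_page(stream_t *stream);                        /* hypothetical */

    /* Read only as far ahead as needed to keep every continuous stream gapless
     * through 'playback_time'; discontinuous pages simply fall out on the way. */
    void buffer_until(stream_t **streams, int n, int64_t playback_time)
    {
        for (;;) {
            int starved = 0;
            for (int i = 0; i < n; i++)
                if (streams[i]->continuous && streams[i]->buffered_to < playback_time)
                    starved = 1;
            if (!starved) return;

            stream_t *s;
            int64_t   t;
            if (!read_next_page(&s, &t)) return;    /* end of physical stream */
            if (t > s->buffered_to) s->buffered_to = t;
            deliver_page(s);
        }
    }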
Further details, mechanisms, and reasons for the differing arrangement and behavior of continuous and discontinuous streams are discussed later.
First Example: seeking to a desired time position in a multiplexed (or unmultiplexed) Ogg stream can be accomplished through a bisection search on the time position of all pages in the stream (as encoded in the granule position). More powerful searches (such as a keyframe-aware seek within video) are also possible with additional search logic, but similar computational complexity.
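Below is a minimal sketch of such a bisection search, assuming a hypothetical helper next_page_granulepos() that scans forward from a byte offset to the next page boundary (in libogg this could be built on ogg_sync_pageseek()) and reports that page's granule position and byte offset, with granule positions assumed to map monotonically to absolute time. Pages that carry no granule position (continued packets) would need an extra forward scan, elided here for brevity.

    #include <stdint.h>

    typedef struct {
        int64_t offset;      /* byte offset of the page found */
        int64_t granulepos;  /* its granule position */
    } page_info;

    extern page_info next_page_granulepos(int64_t byte_offset); /* hypothetical */

    /* Return a byte offset from which decoding will reach 'target' (a granule
     * position); lo/hi bound the bitstream section being searched. */
    int64_t bisect_to_granule(int64_t target, int64_t lo, int64_t hi)
    {
        while (hi - lo > 1) {
            int64_t   mid = lo + (hi - lo) / 2;
            page_info pi  = next_page_granulepos(mid);
            if (pi.offset >= hi || pi.granulepos >= target)
                hi = mid;          /* landed at or past the target: search lower */
            else
                lo = pi.offset;    /* still before the target: search higher */
        }
        return lo;
    }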
Second Example: A bitstream section may consist of three multiplexed streams of differing lengths. The result of multiplexing these streams should be thought of as a single mixed stream with a length equal to the longest of the three component streams. Although it is also possible to think of the multiplexed result as three concurrent streams of different lengths, and it is possible to recover the three original streams, it will also become obvious that, once multiplexed, it is not possible to find the internal lengths of the component streams without a linear search of the whole bitstream section. However, it is possible to find the length of the whole bitstream section easily (in near-constant time per section), just as it is for a single-media unmultiplexed stream.
The granule position is governed by the following rules:
A simple granule position could encode a timestamp directly. For example, a granule position that encoded milliseconds from beginning of stream would allow a logical stream length of over 100,000,000,000 days before beginning a new logical stream (to avoid the granule position wrapping).
In the event that audio frames always encode the same number of samples, the granule position could simply be a linear count of frames since the beginning of the stream. This has the advantages of being exact and efficient. Position in time would simply be [granule_position] * [samples_per_frame] / [samples_per_second].
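The following short sketch instantiates these two mappings, converting a granule position back to seconds. The frame size and sample rate used in the example call are illustrative assumptions, not values taken from any particular codec.

    #include <stdint.h>
    #include <stdio.h>

    /* Mapping 1: granule position counts milliseconds from the start of the stream. */
    double ms_granule_to_seconds(int64_t granulepos)
    {
        return (double)granulepos / 1000.0;
    }

    /* Mapping 2: granule position counts fixed-size audio frames. */
    double frame_granule_to_seconds(int64_t granulepos,
                                    int64_t samples_per_frame,
                                    int64_t samples_per_second)
    {
        return (double)(granulepos * samples_per_frame) / (double)samples_per_second;
    }

    int main(void)
    {
        printf("%.3f s\n", ms_granule_to_seconds(90500));                /* 90.500 s */
        printf("%.3f s\n", frame_granule_to_seconds(3446, 1152, 44100)); /* ~90.0 s  */
        return 0;
    }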
The third point appears trickier at first glance, but it too can be handled through the granule position mapping mechanism. Here we arrange the granule position in such a way that the granule positions of keyframes are easy to find. Divide the granule position into two fields: the most-significant bits hold an absolute frame counter, but it is only updated at each keyframe, while the least-significant bits encode the number of frames since the last keyframe. In this way, each granule position encodes both the absolute time of the current frame and the absolute time of the last keyframe.
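A sketch of such a split granule position follows. The field split (KEYFRAME_SHIFT low bits for the frames-since-keyframe count) is an assumed value chosen for illustration; a real codec would declare its own shift in its stream headers.

    #include <stdint.h>

    #define KEYFRAME_SHIFT 6   /* assumption: low 6 bits count frames since keyframe */

    /* Build a granule position from the absolute frame number of the last
     * keyframe and the number of frames elapsed since it. */
    int64_t make_granulepos(int64_t keyframe_number, int64_t frames_since)
    {
        return (keyframe_number << KEYFRAME_SHIFT) | frames_since;
    }

    /* Absolute frame number of the frame this granule position describes. */
    int64_t granulepos_to_frame(int64_t granulepos)
    {
        return (granulepos >> KEYFRAME_SHIFT) +
               (granulepos & (((int64_t)1 << KEYFRAME_SHIFT) - 1));
    }

    /* Absolute frame number of the most recent keyframe. */
    int64_t granulepos_to_keyframe(int64_t granulepos)
    {
        return granulepos >> KEYFRAME_SHIFT;
    }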
Seeking to the most recent preceding keyframe is then accomplished by first seeking to the original desired point, inspecting the granulepos of the resulting video page, extracting from that granulepos the absolute time of the desired keyframe, and then seeking directly to that keyframe's page. Of course, it is still possible for an application to ignore keyframes and use a simpler seeking algorithm (decode would simply be unable to present video until the next keyframe). Surprisingly, many player applications do choose the simpler approach.
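A sketch of that two-seek sequence, reusing the helpers above, might look as follows. seek_to_frame() stands in for the application's bisection search (a hypothetical helper, not a libogg call); it positions the stream at the page covering the requested absolute frame and returns that page's granule position.

    #include <stdint.h>

    extern int64_t seek_to_frame(int64_t target_frame);        /* hypothetical */
    extern int64_t granulepos_to_frame(int64_t granulepos);    /* from the sketch above */
    extern int64_t granulepos_to_keyframe(int64_t granulepos); /* from the sketch above */

    void seek_with_keyframe(int64_t target_frame)
    {
        int64_t gp       = seek_to_frame(target_frame);   /* first seek */
        int64_t keyframe = granulepos_to_keyframe(gp);    /* keyframe state from granulepos */
        if (keyframe < granulepos_to_frame(gp))
            seek_to_frame(keyframe);                       /* second seek, directly to the keyframe */
    }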
Because Ogg functions at the page, not packet, level, this once-per-page time information provides Ogg with the finest-grained time information it can use. Ogg passes this granule positioning data to the codec (along with the packets extracted from a page); it is the responsibility of codecs to track timing information at granularities finer than a single page.
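As one hedged illustration of such finer-grained tracking, the sketch below assumes the page's granule position gives the time (here, in samples) of the last packet completed on the page, and that samples_in_packet() is a hypothetical codec routine reporting each packet's duration; working backwards from the page value assigns an end time to every packet on the page.

    #include <stdint.h>

    extern int64_t samples_in_packet(const void *packet);  /* hypothetical codec call */

    void assign_packet_end_times(const void **packets, int n_packets,
                                 int64_t page_granulepos, int64_t *end_times)
    {
        int64_t t = page_granulepos;
        for (int i = n_packets - 1; i >= 0; i--) {
            end_times[i] = t;
            t -= samples_in_packet(packets[i]);
        }
    }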
An Ogg stream type is declared continuous or discontinuous by its codec. A given codec may support both continuous and discontinuous operation so long as any given logical stream is continuous or discontinuous for its entirety and the codec is able to ascertain (and inform the Ogg layer) which applies after decoding the initial stream header. The majority of codecs will always be continuous (such as Vorbis) or discontinuous (such as Writ).
Start- and end-time encoding do not affect multiplexing sort order; pages are still sorted by the absolute time a given granulepos maps to, regardless of whether that granulepos represents a start or end time.
Implementation of more complex operations does require codec knowledge, however. Unlike other framing systems, Ogg maintains strict separation between framing and the framed bitstream data; Ogg does not replicate codec-specific information in the page/framing data, nor does Ogg blur the line between framing and stream data/metadata. Because Ogg is fully data-agnostic toward the data it frames, operations which require specifics of the bitstream data (such as 'seek to keyframe') also require interaction with the codec layer (because, in this example, the Ogg layer is not aware of the concept of keyframes). This is different from systems that blur the separation between framing and stream data in order to simplify the separation of code. The Ogg system purposely keeps the distinction in data simple so that later codec innovations are not constrained by the framing design.
For this reason, complex seeking operations require interaction with the codecs in order to decode the granule position of a given stream type back to absolute time, or to find 'decodable points' such as keyframes in video.