Diffstat (limited to 'doc/muxers.texi')
-rw-r--r-- | doc/muxers.texi | 399
1 files changed, 349 insertions, 50 deletions
diff --git a/doc/muxers.texi b/doc/muxers.texi
index e368e684c5..965a4bb124 100644
--- a/doc/muxers.texi
+++ b/doc/muxers.texi
@@ -1,10 +1,10 @@
@chapter Muxers
@c man begin MUXERS

-Muxers are configured elements in Libav which allow writing
+Muxers are configured elements in FFmpeg which allow writing
multimedia streams to a particular type of file.

-When you configure your Libav build, all the supported muxers
+When you configure your FFmpeg build, all the supported muxers
are enabled by default. You can list all available muxers using the
configure option @code{--list-muxers}.
@@ -35,20 +35,20 @@ CRC=0x@var{CRC}, where @var{CRC} is a hexadecimal number 0-padded to
For example to compute the CRC of the input, and store it in the file
@file{out.crc}:
@example
-avconv -i INPUT -f crc out.crc
+ffmpeg -i INPUT -f crc out.crc
@end example

You can print the CRC to stdout with the command:
@example
-avconv -i INPUT -f crc -
+ffmpeg -i INPUT -f crc -
@end example

-You can select the output format of each frame with @command{avconv} by
+You can select the output format of each frame with @command{ffmpeg} by
specifying the audio and video codec and format. For example to
compute the CRC of the input audio converted to PCM unsigned 8-bit
and the input video converted to MPEG-2 video, use the command:
@example
-avconv -i INPUT -c:a pcm_u8 -c:v mpeg2video -f crc -
+ffmpeg -i INPUT -c:a pcm_u8 -c:v mpeg2video -f crc -
@end example

See also the @ref{framecrc} muxer.
@@ -56,40 +56,79 @@ See also the @ref{framecrc} muxer.
@anchor{framecrc}
@section framecrc

-Per-frame CRC (Cyclic Redundancy Check) testing format.
+Per-packet CRC (Cyclic Redundancy Check) testing format.

-This muxer computes and prints the Adler-32 CRC for each decoded audio
-and video frame. By default audio frames are converted to signed
+This muxer computes and prints the Adler-32 CRC for each audio
+and video packet. By default audio frames are converted to signed
16-bit raw audio and video frames to raw video before computing the
CRC.

The output of the muxer consists of a line for each audio and video
-frame of the form: @var{stream_index}, @var{frame_dts},
-@var{frame_size}, 0x@var{CRC}, where @var{CRC} is a hexadecimal
-number 0-padded to 8 digits containing the CRC of the decoded frame.
+packet of the form:
+@example
+@var{stream_index}, @var{packet_dts}, @var{packet_pts}, @var{packet_duration}, @var{packet_size}, 0x@var{CRC}
+@end example

-For example to compute the CRC of each decoded frame in the input, and
-store it in the file @file{out.crc}:
+@var{CRC} is a hexadecimal number 0-padded to 8 digits containing the
+CRC of the packet.
+
+For example to compute the CRC of the audio and video frames in
+@file{INPUT}, converted to raw audio and video packets, and store it
+in the file @file{out.crc}:
@example
-avconv -i INPUT -f framecrc out.crc
+ffmpeg -i INPUT -f framecrc out.crc
@end example

-You can print the CRC of each decoded frame to stdout with the command:
+To print the information to stdout, use the command:
@example
-avconv -i INPUT -f framecrc -
+ffmpeg -i INPUT -f framecrc -
@end example

-You can select the output format of each frame with @command{avconv} by
-specifying the audio and video codec and format. For example, to
+With @command{ffmpeg}, you can select the output format to which the
+audio and video frames are encoded before computing the CRC for each
+packet by specifying the audio and video codec.
For example, to compute the CRC of each decoded input audio frame
converted to PCM unsigned 8-bit and of each decoded input video frame
converted to MPEG-2 video, use the command:
@example
-avconv -i INPUT -c:a pcm_u8 -c:v mpeg2video -f framecrc -
+ffmpeg -i INPUT -c:a pcm_u8 -c:v mpeg2video -f framecrc -
@end example

See also the @ref{crc} muxer.

+@anchor{framemd5}
+@section framemd5
+
+Per-packet MD5 testing format.
+
+This muxer computes and prints the MD5 hash for each audio
+and video packet. By default audio frames are converted to signed
+16-bit raw audio and video frames to raw video before computing the
+hash.
+
+The output of the muxer consists of a line for each audio and video
+packet of the form:
+@example
+@var{stream_index}, @var{packet_dts}, @var{packet_pts}, @var{packet_duration}, @var{packet_size}, @var{MD5}
+@end example
+
+@var{MD5} is a hexadecimal number representing the computed MD5 hash
+for the packet.
+
+For example to compute the MD5 of the audio and video frames in
+@file{INPUT}, converted to raw audio and video packets, and store it
+in the file @file{out.md5}:
+@example
+ffmpeg -i INPUT -f framemd5 out.md5
+@end example
+
+To print the information to stdout, use the command:
+@example
+ffmpeg -i INPUT -f framemd5 -
+@end example
+
+See also the @ref{md5} muxer.
+
@anchor{hls}
@section hls
@@ -102,7 +141,7 @@ receive the same basename as the playlist, a sequential number and a
.ts extension.
@example
-avconv -i in.nut out.m3u8
+ffmpeg -i in.nut out.m3u8
@end example

@table @option
@@ -116,6 +155,39 @@ Set the number after which index wraps.
Start the sequence from @var{number}.
@end table

+@anchor{ico}
+@section ico
+
+ICO file muxer.
+
+Microsoft's icon file format (ICO) has some strict limitations that should be noted:
+
+@itemize
+@item
+Size cannot exceed 256 pixels in any dimension
+
+@item
+Only BMP and PNG images can be stored
+
+@item
+If a BMP image is used, it must be one of the following pixel formats:
+@example
+BMP Bit Depth      FFmpeg Pixel Format
+1bit               pal8
+4bit               pal8
+8bit               pal8
+16bit              rgb555le
+24bit              bgr24
+32bit              bgra
+@end example
+
+@item
+If a BMP image is used, it must use the BITMAPINFOHEADER DIB header
+
+@item
+If a PNG image is used, it must use the rgba pixel format
+@end itemize
+
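As a concrete illustration of the constraints listed above (the file
names and the 64x64 size here are only placeholders), a PNG-based icon
could be produced along these lines:

@example
ffmpeg -i logo.png -vf scale=64:64 -c:v png -pix_fmt rgba favicon.ico
@end example

The @code{-c:v png -pix_fmt rgba} pair matches the stated PNG
requirement; for a BMP-based icon, one of the listed BMP pixel formats
would be chosen instead, for example @code{-c:v bmp -pix_fmt bgra}.
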
@anchor{image2}
@section image2
@@ -146,39 +218,78 @@ The pattern "img%%-%d.jpg" will specify a sequence of filenames of the
form @file{img%-1.jpg}, @file{img%-2.jpg}, ..., @file{img%-10.jpg}, etc.

-The following example shows how to use @command{avconv} for creating a
+The following example shows how to use @command{ffmpeg} for creating a
sequence of files @file{img-001.jpeg}, @file{img-002.jpeg}, ...,
taking one image every second from the input video:
@example
-avconv -i in.avi -vsync 1 -r 1 -f image2 'img-%03d.jpeg'
+ffmpeg -i in.avi -vsync 1 -r 1 -f image2 'img-%03d.jpeg'
@end example

-Note that with @command{avconv}, if the format is not specified with the
+Note that with @command{ffmpeg}, if the format is not specified with the
@code{-f} option and the output filename specifies an image file
format, the image2 muxer is automatically selected, so the previous
command can be written as:
@example
-avconv -i in.avi -vsync 1 -r 1 'img-%03d.jpeg'
+ffmpeg -i in.avi -vsync 1 -r 1 'img-%03d.jpeg'
@end example

Note also that the pattern does not necessarily have to contain "%d" or
"%0@var{N}d"; for example, to create a single image file
@file{img.jpeg} from the input video you can employ the command:
@example
-avconv -i in.avi -f image2 -frames:v 1 img.jpeg
+ffmpeg -i in.avi -f image2 -frames:v 1 img.jpeg
@end example

@table @option
-@item -start_number @var{number}
-Start the sequence from @var{number}.
+@item start_number @var{number}
+Start the sequence from @var{number}. Default value is 1. Must be a
+positive number.
+
+@item updatefirst 1|0
+If set to 1, update the first written image file again and
+again. Default value is 0.
@end table
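For instance, the @option{updatefirst} option documented above can be
used to keep a single, continuously refreshed image; in the following
sketch the file names and the one-frame-per-second rate are arbitrary
placeholders:

@example
ffmpeg -i INPUT -r 1 -f image2 -updatefirst 1 thumb.jpg
@end example
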
+The image muxer supports the .Y.U.V image file format. This format is
+special in that each image frame consists of three files, for
+each of the YUV420P components. To read or write this image file format,
+specify the name of the '.Y' file. The muxer will automatically open the
+'.U' and '.V' files as required.
+
+@anchor{md5}
+@section md5
+
+MD5 testing format.
+
+This muxer computes and prints the MD5 hash of all the input audio
+and video frames. By default audio frames are converted to signed
+16-bit raw audio and video frames to raw video before computing the
+hash.
+
+The output of the muxer consists of a single line of the form:
+MD5=@var{MD5}, where @var{MD5} is a hexadecimal number representing
+the computed MD5 hash.
+
+For example to compute the MD5 hash of the input converted to raw
+audio and video, and store it in the file @file{out.md5}:
+@example
+ffmpeg -i INPUT -f md5 out.md5
+@end example
+
+You can print the MD5 to stdout with the command:
+@example
+ffmpeg -i INPUT -f md5 -
+@end example
+
+See also the @ref{framemd5} muxer.
+
@section MOV/MP4/ISMV

The mov/mp4/ismv muxer supports fragmentation. Normally, a MOV/MP4
file has all the metadata about all packets stored in one location
(written at the end of the file, it can be moved to the start for
-better playback using the @command{qt-faststart} tool). A fragmented
+better playback by adding @var{faststart} to the @var{movflags}, or
+using the @command{qt-faststart} tool). A fragmented
file consists of a number of fragments, where packets and metadata
about these packets are stored together. Writing a fragmented
file has the advantage that the file is decodable even if the
@@ -192,6 +303,9 @@ Fragmentation is enabled by setting one of the AVOptions that define
how to cut the file into fragments:

@table @option
+@item -moov_size @var{bytes}
+Reserves space for the moov atom at the beginning of the file instead of placing the
+moov atom at the end. If the space reserved is insufficient, muxing will fail.
@item -movflags frag_keyframe
Start a new fragment at each video keyframe.
@item -frag_duration @var{duration}
@@ -202,7 +316,7 @@ Create fragments that contain up to @var{size} bytes of payload data.
Allow the caller to manually choose when to cut fragments, by
calling @code{av_write_frame(ctx, NULL)} to write a fragment with
the packets written so far. (This is only useful with other
-applications integrating libavformat, not from @command{avconv}.)
+applications integrating libavformat, not from @command{ffmpeg}.)
@item -min_frag_duration @var{duration}
Don't create fragments that are shorter than @var{duration} microseconds long.
@end table
@@ -233,12 +347,18 @@ more efficient), but with this option set, the muxer writes one
moof/mdat pair for each track, making it easier to separate tracks.
This option is implicitly set when writing ismv (Smooth Streaming)
files.
+@item -movflags faststart
+Run a second pass moving the moov atom to the beginning of the file.
+This operation can take a while, and will not work in various
+situations such as fragmented output, thus it is not enabled by
+default.
+@item -movflags rtphint
+Add RTP hinting tracks to the output file.
@end table
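As an illustration of the @code{faststart} flag described above, an
already encoded file can be remuxed without re-encoding so that the
moov atom ends up at the beginning of the output (the file names are
placeholders):

@example
ffmpeg -i input.mp4 -c copy -movflags faststart output.mp4
@end example
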
Smooth Streaming content can be pushed in real time to a publishing
point on IIS with this muxer. Example:
@example
-avconv -re @var{<normal input/transcoding options>} -movflags isml+frag_keyframe -f ismv http://server/publishingpoint.isml/Streams(Encoder1)
+ffmpeg -re @var{<normal input/transcoding options>} -movflags isml+frag_keyframe -f ismv http://server/publishingpoint.isml/Streams(Encoder1)
@end example

@section mpegts
@@ -267,11 +387,11 @@ Set the first PID for data packets (default 0x0100, max 0x0f00).

The recognized metadata settings in mpegts muxer are @code{service_provider}
and @code{service_name}. If they are not set the default for
-@code{service_provider} is "Libav" and the default for
+@code{service_provider} is "FFmpeg" and the default for
@code{service_name} is "Service01".

@example
-avconv -i file.mpg -c copy \
+ffmpeg -i file.mpg -c copy \
     -mpegts_original_network_id 0x1122 \
     -mpegts_transport_stream_id 0x3344 \
     -mpegts_service_id 0x5566 \
@@ -289,19 +409,19 @@ Null muxer.

This muxer does not generate any output file, it is mainly useful for
testing or benchmarking purposes.

-For example to benchmark decoding with @command{avconv} you can use the
+For example to benchmark decoding with @command{ffmpeg} you can use the
command:
@example
-avconv -benchmark -i INPUT -f null out.null
+ffmpeg -benchmark -i INPUT -f null out.null
@end example

Note that the above command does not read or write the @file{out.null}
-file, but specifying the output file is required by the @command{avconv}
+file, but specifying the output file is required by the @command{ffmpeg}
syntax.

Alternatively you can write the command as:
@example
-avconv -benchmark -i INPUT -f null -
+ffmpeg -benchmark -i INPUT -f null -
@end example

@section matroska
@@ -326,7 +446,7 @@ Specifies the language of the track in the Matroska languages form

@table @option
-@item STEREO_MODE=@var{mode}
+@item stereo_mode=@var{mode}
Stereo 3D video layout of two views in a single video track
@table @option
@item mono
@@ -364,10 +484,10 @@ Both eyes laced in one Block, Right-eye view is first

For example a 3D WebM clip can be created using the following command line:
@example
-avconv -i sample_left_right_clip.mpg -an -c:v libvpx -metadata STEREO_MODE=left_right -y stereo_clip.webm
+ffmpeg -i sample_left_right_clip.mpg -an -c:v libvpx -metadata stereo_mode=left_right -y stereo_clip.webm
@end example
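Track language, also mentioned above among the recognized metadata
settings, can be set in a similar way through per-stream metadata; a
possible command (the file names are placeholders, and @code{eng} is an
ISO 639-2 code as expected by the Matroska language form) is:

@example
ffmpeg -i input.mkv -c copy -metadata:s:a:0 language=eng output.mkv
@end example
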
-@section segment
+@section segment, stream_segment, ssegment

Basic stream segmenter.

The segmenter muxer outputs streams to a number of separate files of
nearly fixed duration. Output filename pattern can be set in a fashion
similar to @ref{image2}.

-Every segment starts with a video keyframe, if a video stream is present.
+@code{stream_segment} is a variant of the muxer used to write to
+streaming output formats, i.e. which do not require global headers,
+and is recommended for outputting e.g. to MPEG transport stream segments.
+@code{ssegment} is a shorter alias for @code{stream_segment}.
+
+Every segment starts with a keyframe of the selected reference stream,
+which is set through the @option{reference_stream} option.
+
+Note that if you want accurate splitting for a video file, you need to
+make the input key frames correspond to the exact splitting times
+expected by the segmenter, or the segment muxer will start the new
+segment with the key frame found next after the specified start
+time.
+
The segment muxer works best with a single constant frame rate video.

-Optionally it can generate a flat list of the created segments, one segment
-per line.
+Optionally it can generate a list of the created segments, by setting
+the option @var{segment_list}. The list type is specified by the
+@var{segment_list_type} option.
+
+The segment muxer supports the following options:

@table @option
+@item reference_stream @var{specifier}
+Set the reference stream, as specified by the string @var{specifier}.
+If @var{specifier} is set to @code{auto}, the reference is chosen
+automatically. Otherwise it must be a stream specifier (see the ``Stream
+specifiers'' chapter in the ffmpeg manual) which specifies the
+reference stream. The default value is ``auto''.
+
@item segment_format @var{format}
Override the inner container format, by default it is guessed by the
filename extension.
-@item segment_time @var{t}
-Set segment duration to @var{t} seconds.
+
@item segment_list @var{name}
-Generate also a listfile named @var{name}.
+Generate also a listfile named @var{name}. If not specified, no
+listfile is generated.
+
+@item segment_list_flags @var{flags}
+Set flags affecting the segment list generation.
+
+It currently supports the following flags:
+@table @var
+@item cache
+Allow caching (only affects M3U8 list files).
+
+@item live
+Allow live-friendly file generation.
+@end table
+
+Default value is @code{cache}.
+
@item segment_list_size @var{size}
-Overwrite the listfile once it reaches @var{size} entries.
+Update the list file so that it contains at most the last @var{size}
+segments. If 0 the list file will contain all the segments. Default
+value is 0.
+
+@item segment_list_type @var{type}
+Specify the format for the segment list file.
+
+The following values are recognized:
+@table @option
+@item flat
+Generate a flat list for the created segments, one segment per line.
+
+@item csv, ext
+Generate a list for the created segments, one segment per line,
+each line matching the format (comma-separated values):
+@example
+@var{segment_filename},@var{segment_start_time},@var{segment_end_time}
+@end example
+
+@var{segment_filename} is the name of the output file generated by the
+muxer according to the provided pattern. CSV escaping (according to
+RFC4180) is applied if required.
+
+@var{segment_start_time} and @var{segment_end_time} specify
+the segment start and end time expressed in seconds.
+
+A list file with the suffix @code{".csv"} or @code{".ext"} will
+auto-select this format.
+
+@code{ext} is deprecated in favor of @code{csv}.
+
+@item m3u8
+Generate an extended M3U8 file, version 3, compliant with
+@url{http://tools.ietf.org/id/draft-pantos-http-live-streaming}.
+
+A list file with the suffix @code{".m3u8"} will auto-select this format.
+@end table
+
+If not specified, the type is guessed from the list file name suffix.
+
+@item segment_time @var{time}
+Set segment duration to @var{time}; the value must be a duration
+specification. Default value is "2". See also the
+@option{segment_times} option.
+
+Note that splitting may not be accurate, unless you force the
+reference stream key-frames at the given time. See the introductory
+notice and the examples below.
+
+@item segment_time_delta @var{delta}
+Specify the accuracy time when selecting the start time for a
+segment, expressed as a duration specification. Default value is "0".
+
+When delta is specified, a key-frame will start a new segment if its
+PTS satisfies the relation:
+@example
+PTS >= start_time - time_delta
+@end example
+
+This option is useful when splitting video content, which is always
+split at GOP boundaries, in case a key frame is found just before the
+specified split time.
+
+In particular, it may be used in combination with the @file{ffmpeg} option
+@var{force_key_frames}. The key frame times specified by
+@var{force_key_frames} may not be set accurately because of rounding
+issues, with the consequence that a key frame time may end up set just
+before the specified time. For constant frame rate videos a value of
+1/(2*@var{frame_rate}) should address the worst case mismatch between
+the specified time and the time set by @var{force_key_frames}.
+
+@item segment_times @var{times}
+Specify a list of split points. @var{times} contains a list of comma
+separated duration specifications, in increasing order. See also
+the @option{segment_time} option.
+
+@item segment_frames @var{frames}
+Specify a list of split video frame numbers. @var{frames} contains a
+list of comma separated integer numbers, in increasing order.
+
+A new segment is started whenever a reference stream key frame is
+found whose sequential number (starting from 0) is greater than or
+equal to the next value in the list.
+
@item segment_wrap @var{limit}
Wrap around segment index once it reaches @var{limit}.
+
+@item segment_start_number @var{number}
+Set the sequence number of the first segment. Defaults to @code{0}.
+
+@item reset_timestamps @var{1|0}
+Reset timestamps at the beginning of each segment, so that each segment
+will start with near-zero timestamps. It is meant to ease the playback
+of the generated segments.
+May not work with some combinations of
+muxers/codecs. It is set to @code{0} by default.
@end table

+@section Examples
+
+@itemize
+@item
+To remux the content of file @file{in.mkv} to a list of segments
+@file{out-000.nut}, @file{out-001.nut}, etc., and write the list of
+generated segments to @file{out.list}:
+@example
+ffmpeg -i in.mkv -codec copy -map 0 -f segment -segment_list out.list out%03d.nut
+@end example
+
+@item
+As in the example above, but segment the input file according to the split
+points specified by the @var{segment_times} option:
+@example
+ffmpeg -i in.mkv -codec copy -map 0 -f segment -segment_list out.csv -segment_times 1,2,3,5,8,13,21 out%03d.nut
+@end example
+
+@item
+As in the example above, but use the @code{ffmpeg} @var{force_key_frames}
+option to force key frames in the input at the specified location, together
+with the segment option @var{segment_time_delta} to account for
+possible rounding when setting key frame times.
+@example
+ffmpeg -i in.mkv -force_key_frames 1,2,3,5,8,13,21 -codec:v mpeg4 -codec:a pcm_s16le -map 0 \
+-f segment -segment_list out.csv -segment_times 1,2,3,5,8,13,21 -segment_time_delta 0.05 out%03d.nut
+@end example
+In order to force key frames on the input file, transcoding is
+required.
+
+@item
+Segment the input file by splitting it according to the sequence of
+frame numbers specified with the @var{segment_frames} option:
+@example
+ffmpeg -i in.mkv -codec copy -map 0 -f segment -segment_list out.csv -segment_frames 100,200,300,500,800 out%03d.nut
+@end example
+
+@item
+To convert @file{in.mkv} to TS segments using the @code{libx264}
+and @code{libfaac} encoders:
+@example
+ffmpeg -i in.mkv -map 0 -codec:v libx264 -codec:a libfaac -f ssegment -segment_list out.list out%03d.ts
+@end example
+
+@item
+Segment the input file, and create an M3U8 live playlist (can be used
+as live HLS source):
@example
-avconv -i in.mkv -c copy -map 0 -f segment -list out.list out%03d.nut
+ffmpeg -re -i in.mkv -codec copy -map 0 -f segment -segment_list playlist.m3u8 \
+-segment_list_flags +live -segment_time 10 out%03d.mkv
@end example
+@end itemize

@section mp3
@@ -425,12 +724,12 @@ Examples:

Write an mp3 with an ID3v2.3 header and an ID3v1 footer:
@example
-avconv -i INPUT -id3v2_version 3 -write_id3v1 1 out.mp3
+ffmpeg -i INPUT -id3v2_version 3 -write_id3v1 1 out.mp3
@end example

Attach a picture to an mp3:
@example
-avconv -i input.mp3 -i cover.png -c copy -metadata:s:v title="Album cover"
+ffmpeg -i input.mp3 -i cover.png -c copy -metadata:s:v title="Album cover"
-metadata:s:v comment="Cover (Front)" out.mp3
@end example