.. and other formats where ffmpeg gives us multiple
subframes per input frame.
Since we now support non-interleaved audio, we can't
just concatenate buffers any more. Also, audio metas won't
be combined when buffers are merged, so when we push
out the combined buffer we'd look at the meta describing
only the first subframe and think it covers the whole
frame, leading to stutter/gaps in the output.
We could fix this by copying the output data into a new
buffer when merging buffers, but that's suboptimal, so
let's add API to GstAudioDecoder to push out subframes
and use that instead.
https://gitlab.freedesktop.org/gstreamer/gst-libav/issues/49
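The subframe API mentioned above could be used roughly like this. A minimal sketch, assuming hypothetical helpers `decode_one_subframe()` and `is_last_subframe()` that wrap one ffmpeg output frame in a `GstBuffer` and detect the final subframe; only `gst_audio_decoder_finish_subframe()` (since GStreamer 1.16) and `gst_audio_decoder_finish_frame()` are real API.

```c
#include <gst/audio/gstaudiodecoder.h>

/* Hypothetical helpers for illustration only. */
GstBuffer *decode_one_subframe (GstAudioDecoder * dec);
gboolean   is_last_subframe    (GstAudioDecoder * dec);

static GstFlowReturn
push_subframes (GstAudioDecoder * dec)
{
  GstFlowReturn ret = GST_FLOW_OK;
  GstBuffer *subframe;

  /* Push each decoded subframe with its own audio meta instead of
   * concatenating them into one buffer. Intermediate subframes go out
   * via gst_audio_decoder_finish_subframe(); the last one is pushed
   * with gst_audio_decoder_finish_frame(), which marks one input
   * frame as fully consumed. */
  while ((subframe = decode_one_subframe (dec)) != NULL) {
    if (is_last_subframe (dec))
      ret = gst_audio_decoder_finish_frame (dec, subframe, 1);
    else
      ret = gst_audio_decoder_finish_subframe (dec, subframe);
    if (ret != GST_FLOW_OK)
      break;
  }
  return ret;
}
```

This keeps the 1:1 accounting between input frames and `finish_frame()` calls that the base class expects, while still letting each subframe carry its own (non-interleaved) audio meta.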
https://bugzilla.gnome.org/show_bug.cgi?id=792900
which is required by avcodec_decode_audio4()
|
| |
|
|
|
|
|
|
|
| |
buffers
This may result in fewer memcpies, as the GstMemories of the
buffers are just appended into a single buffer.
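The merge described above could rely on `gst_buffer_append()`, which chains the memories of two buffers rather than copying their data. A minimal sketch; `buf1` and `buf2` stand for any two decoded buffers to be merged:

```c
#include <gst/gst.h>

/* gst_buffer_append() takes ownership of both buffers and appends the
 * GstMemory objects of buf2 to buf1, usually without a memcpy. */
GstBuffer *merged = gst_buffer_append (buf1, buf2);

/* buf1 and buf2 must not be used afterwards; "merged" now holds all
 * of their memories and must eventually be unreffed by the caller. */
```

Note that the copy is only avoided as long as the resulting buffer stays within the per-buffer memory limit; beyond that GStreamer may still merge memories by copying.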
The base audio decoder wants a 1:1 mapping for input and output
buffers, so this decoder must accumulate data in an adapter and push
it all at once after all input has been processed.
https://bugzilla.gnome.org/show_bug.cgi?id=689565
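The accumulate-then-push pattern described above could be sketched with a `GstAdapter`. This is an illustrative fragment, not the decoder's actual code; it assumes the element pushes each decoded chunk into `adapter` as it is produced and calls this once all input for the frame has been processed:

```c
#include <gst/audio/gstaudiodecoder.h>
#include <gst/base/gstadapter.h>

static GstFlowReturn
flush_decoded (GstAudioDecoder * dec, GstAdapter * adapter)
{
  gsize avail = gst_adapter_available (adapter);
  GstBuffer *outbuf;

  if (avail == 0)
    return GST_FLOW_OK;

  /* Take all accumulated data as one buffer and finish it as a single
   * output frame, preserving the 1:1 input/output mapping the base
   * audio decoder class expects. */
  outbuf = gst_adapter_take_buffer (adapter, avail);
  return gst_audio_decoder_finish_frame (dec, outbuf, 1);
}
```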
Fixes bug #666435.