| Commit message | Author | Age | Files | Lines |
|
|
|
|
|
frame is provided but audio is
CID 1358390
|
|
|
|
|
|
|
|
We have no idea which timestamps they are supposed to have, so the only thing
we can do at this point is to drop them. Packets without timestamps happen if
audio was captured but no corresponding video, which shouldn't happen under
normal circumstances.
https://bugzilla.gnome.org/show_bug.cgi?id=747633
|
|
|
|
|
And mark these events as disconts to reset time tracking in the audio source.
https://bugzilla.gnome.org/show_bug.cgi?id=747633
|
|
|
|
|
|
For some reason we seem to sometimes get NULL video_frames in the
::VideoInputFrameArrived() callback, observed on Intensity Pro cards.
https://bugzilla.gnome.org/show_bug.cgi?id=747633
|
|
|
https://bugzilla.gnome.org/show_bug.cgi?id=763081
|
|
|
|
Don't reset the marker that's tracking disconts until
either the discont disappears or we resync.
|
|
|
|
|
Combine mode and format to generate caps, and handle the flags from the
VideoChanged callback to support RGB capture.
https://bugzilla.gnome.org/show_bug.cgi?id=760594
|
|
|
|
|
|
|
|
|
|
|
|
|
When the mode of decklinkvideosink is set to "auto", the sink claims to
support the full set of caps that it can support across all modes. Then, every
time new caps are set, the sink automatically finds the correct mode for
these caps and sets it.
Caveat: We have no way to know whether a specific mode will actually work for
your hardware. If you try sending 4K video to a 1080 screen, it will silently
fail; we have no way to detect that in advance. Manually setting the mode at
least gave the user a way to double-check what they are doing.
https://bugzilla.gnome.org/show_bug.cgi?id=759600
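The mode lookup described above can be sketched roughly as a table scan: advertise the union of all modes, then resolve the concrete mode from whatever caps are eventually set. The mode table and function names below are illustrative stand-ins, not the actual DeckLink mode list or gstdecklink code.

```c
#include <stddef.h>

/* Hypothetical sketch of resolving mode=auto from negotiated caps. */
typedef struct {
  int width, height;
  int fps_n, fps_d;
  int interlaced;
  const char *name;
} DisplayMode;

static const DisplayMode modes[] = {
  { 1280,  720, 60000, 1001, 0, "720p59.94"  },
  { 1920, 1080, 30000, 1001, 1, "1080i59.94" },
  { 1920, 1080, 30000, 1001, 0, "1080p29.97" },
};

static const DisplayMode *
find_mode (int width, int height, int fps_n, int fps_d, int interlaced)
{
  for (size_t i = 0; i < sizeof (modes) / sizeof (modes[0]); i++) {
    const DisplayMode *m = &modes[i];
    if (m->width == width && m->height == height &&
        m->fps_n == fps_n && m->fps_d == fps_d &&
        m->interlaced == interlaced)
      return m;  /* first matching mode wins */
  }
  return NULL;  /* no mode for these caps: negotiation fails */
}
```

Note that, as the caveat says, a successful lookup only means the caps map to *some* mode, not that the hardware on the other end can display it.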
|
|
|
|
|
|
Otherwise we're going to return times starting at 0 again after shutting down
an element for a specific input/output and then using it again later.
https://bugzilla.gnome.org/show_bug.cgi?id=755426
|
|
|
|
|
again from there
https://bugzilla.gnome.org/show_bug.cgi?id=755426
|
|
|
|
|
Use the correct type, GstClockTimeDiff, instead.
CID 1323742
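The point of the type fix: GstClockTime is an unsigned guint64, while GstClockTimeDiff is a signed gint64, so a difference that can go negative must live in the signed type or it wraps to a huge positive value. A minimal illustration, using plain stdint stand-ins so it compiles without GStreamer:

```c
#include <stdint.h>

typedef uint64_t ClockTime;      /* stand-in for GstClockTime     */
typedef int64_t  ClockTimeDiff;  /* stand-in for GstClockTimeDiff */

/* Is `deadline` still in the future relative to `now`? Storing the
   difference in ClockTime instead would make it unable to be < 0. */
static int
is_early (ClockTime deadline, ClockTime now)
{
  ClockTimeDiff diff = (ClockTimeDiff) (deadline - now);
  return diff > 0;
}
```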
|
|
|
|
|
|
|
|
|
|
|
|
around 0 properly
We were converting all times to our internal running time, that is, the time
the sink itself has already spent in PLAYING, but forgot to do that for the
running time calculated from the buffer timestamps. As a result, all buffers
were scheduled much later if the pipeline's running time did not start at 0.
This happens, for example, if a base time is explicitly set on the pipeline.
https://bugzilla.gnome.org/show_bug.cgi?id=754528
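The conversion the commit describes amounts to subtracting the running time at which the sink itself started PLAYING from the buffer's running time, so both are in the same internal domain. A sketch under that assumption (the function and parameter names are made up for illustration):

```c
#include <stdint.h>

/* Map a pipeline running time (ns) into the sink's internal domain,
   i.e. relative to when the sink itself went to PLAYING. Without this,
   frames are scheduled too late whenever the pipeline's running time
   does not begin at 0 (e.g. with an explicit base time). */
static uint64_t
to_internal_running_time (uint64_t buffer_running_time,
    uint64_t sink_start_running_time)
{
  if (buffer_running_time < sink_start_running_time)
    return 0;  /* buffer predates our start; clamp instead of wrapping */
  return buffer_running_time - sink_start_running_time;
}
```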
|
|
|
|
|
|
|
when scheduling frames
Without this, we will schedule all frames too late in live pipelines.
https://bugzilla.gnome.org/show_bug.cgi?id=754666
|
|
|
|
This was fixed by https://bugzilla.gnome.org/show_bug.cgi?id=749258
in basesink, and no longer needs to be duplicated here.
|
|
|
|
|
mode's framerate
We only really care about the timestamps for the sink.
|
|
|
https://bugzilla.gnome.org/show_bug.cgi?id=749218
|
|
|
https://bugzilla.gnome.org/show_bug.cgi?id=749218
|
|
|
https://bugzilla.gnome.org/show_bug.cgi?id=749218
|
|
|
|
|
The autodetection mode was broken because of a race condition in the input
mode setting: the mode could be reverted when the streaming thread replaced
it with the old mode in the middle of the mode-changed callback.
|
|
|
|
The first entry in the modes array is used as the default mode for
autodetection. There's no need to copy it into the caps template.
|
|
|
|
|
time and number of samples
This should prevent any accumulating rounding errors with the duration.
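Why deriving times from running sample counts helps: each per-buffer duration is rounded down by the integer division, and summing those durations accumulates the error, while converting the running total rounds only once. An illustrative sketch (GStreamer itself would use gst_util_uint64_scale() to avoid the intermediate overflow for large counts):

```c
#include <stdint.h>

#define NSEC_PER_SEC 1000000000ULL

/* Duration of `samples` samples at `rate` Hz, in nanoseconds.
   Truncates toward zero, like the real conversion does. */
static uint64_t
samples_to_ns (uint64_t samples, uint64_t rate)
{
  return samples * NSEC_PER_SEC / rate;  /* ok for modest counts */
}
```

For example, 1024 samples at 44100 Hz is 23219954.65 ns; after 1000 buffers the summed per-buffer durations are short by several hundred nanoseconds compared to converting the 1024000-sample total directly.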
|
|
|
|
|
We already have the real capture time, not the time when we received
the end of the packet.
|
|
|
|
Otherwise the old calibration will stick around for the next time we use it,
potentially giving us completely wrong times.
|
|
|
|
https://bugzilla.gnome.org/show_bug.cgi?id=744386
|
|
|
|
|
|
|
all cases
Even if both clocks have the same rate, we need to apply this diff. Only when
it is the same clock do we skip it, as it is our own clock then.
|
|
|
|
|
|
|
|
start to play
Add the diff between the external time when we went to PLAYING and
the external time when the pipeline went to PLAYING. Otherwise we
will always start outputting from 0 instead of the current running
time.
|
|
|
|
|
|
|
starting the device
Otherwise we might stay at 0.0s for too long, because we will take the first
timestamp we ever see as 0.0... which will be after we started the device.
|
|
|
|
|
|
|
|
|
gstdecklink.cpp: In member function 'virtual HRESULT GStreamerDecklinkInputCallback::VideoInputFrameArrived(IDeckLinkVideoInputFrame*, IDeckLinkAudioInputPacket*)':
gstdecklink.cpp:498:22: error: comparison between signed and unsigned integer expressions [-Werror=sign-compare]
if (capture_time > m_input->clock_start_time)
^
gstdecklink.cpp:503:22: error: comparison between signed and unsigned integer expressions [-Werror=sign-compare]
if (capture_time > m_input->clock_offset)
^
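The -Wsign-compare warning quoted above matters here: in `signed > unsigned`, the signed operand is converted to unsigned, so a negative capture_time would compare as a huge positive value and the check would silently pass. One way to fix it is to keep the comparison in the signed domain after a range check; the stand-in types and function below are illustrative, not the actual gstdecklink.cpp change.

```c
#include <stdint.h>

/* Hypothetical fixed comparison: reject negative times up front, then
   compare in the unsigned domain where both values are non-negative. */
static int
after_start (int64_t capture_time, uint64_t start_time)
{
  if (capture_time < 0)
    return 0;  /* a negative time can never be past the start */
  return (uint64_t) capture_time > start_time;
}
```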
|
|
|
not the decklink clock
|
|
|
a videosrc at this point
|
|
|
|
The audio source only works together with the video source, and the video
source is already providing a clock.
|
|
|
|
|
|
|
|
|
|
|
|
The driver has an internal buffer of unspecified and unconfigurable size, and
it will pull data from our ring buffer as fast as it can until that is full.
Unfortunately, that means we pull silence from the ringbuffer unless its
size happens to be larger than the driver's internal ringbuffer.
The good news is that completely filling the buffer is not required for
proper playback. So we now throttle reading from the ringbuffer whenever
the driver has buffered more than half of our ringbuffer size, by waiting
on the clock for the amount of time until it has buffered less than that
again.
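The throttling described above boils down to a watermark check: once the driver has pulled more than half of our ringbuffer size ahead, wait on the clock for the excess to drain before feeding more data. A minimal sketch with made-up names:

```c
#include <stdint.h>

/* How long (ns) to wait before refilling the driver: nothing if it is
   at or below the half-ringbuffer watermark, otherwise the time the
   excess audio takes to drain back down to the watermark. */
static uint64_t
throttle_wait_ns (uint64_t driver_buffered_ns, uint64_t ringbuffer_ns)
{
  uint64_t threshold = ringbuffer_ns / 2;
  if (driver_buffered_ns <= threshold)
    return 0;  /* below the watermark: keep feeding immediately */
  return driver_buffered_ns - threshold;  /* wait for the excess */
}
```

The real code would perform the wait with a clock ID on the pipeline clock rather than sleeping, so the wait stays interruptible on state changes.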
|
|
|
|
|
The ringbuffer's acquire() is too early, and the ringbuffer's start() will
only be called after the clock has advanced a bit... which it won't unless we
start scheduled playback.
|
|
|
|
|
|
Not from the decklink clock. Both will return exactly the same time once the
decklink clock has been slaved to the pipeline clock and has received the
first observation, but until then it will return bogus values. And since both
return exactly the same values, we can just as well use the pipeline clock
directly.
|
|
|
|
|
|
|
|
|
|
are ready and we are in PLAYING
Otherwise we might start the scheduled playback before the audio or video
streams are actually enabled, and then error out later because they are
enabled too late.
We enable the streams when getting the caps, which might be *after* we were
set to the PLAYING state.
|
|
|
|
|
|
|
|
|
and we are in PLAYING
Otherwise we might start the streams before the audio or video streams are
actually enabled, and then error out later because they are enabled too late.
We enable the streams when getting the caps, which might be *after* we were
set to the PLAYING state.
|
|
|
|
|
not jump when going from PAUSED to PLAYING
It basically behaves the same as the audio clocks.
|
|
|
|
|
|
|
enabling the video input/output only when getting the actual caps
This will also make it easier to support caps changes later, and to select
the mode based on the caps if that should ever be implemented.
|
|
|
properly for mode=auto
|
|
|
|
late frames
|
|
|
|
|
|
|
was running already
This fixes handling of flushing seeks, where we will get a PAUSED->PLAYING
state transition after the previous one without actually going to PAUSED
first.