Commit messages:

meson already sets it on warning_level >= 2
The previous patch didn't get triggered
Older gtkdoc versions expect to find a Makefile, so generate
a fake one with the information it wants.
This patch reenables an interesting side effect that existed before
commit 263c0903, when the state of a pair in the triggered check list
was changed to in-progress. Such "triggered" pairs with this state
were selectively pruned from the conncheck list according to their
priority in priv_prune_pending_checks(), meaning that pairs with a high
priority were preserved, and quickly rechecked.
In retrospect, I suspect that this side effect was the initial
motivation for changing the state of a "triggered" pair.
Commit 263c0903 disabled that behaviour for the sake of clarity, but it
seems important to restore it, because these "triggered" pairs are often
retriggered for a good reason, and frequently lead to a nominated pair.
And losing the opportunity to nominate a pair may be critical in the
controlled role when the peer agent is in aggressive nomination mode.
The timeout has a weak ref that should be enough.
This patch ensures that the retransmit flag is more tightly in sync with
the stun transaction list, by now clearing it when the list becomes
empty. It makes the code a bit more readable by dropping some cases. In
a couple of places, the retransmit flag was also used as a way to
compare the priority of a pair and the priority of the selected pair.
When reactivating a high-priority pair, we have to change the component
state back from ready to connected, since there is a new pair to be
tested.
The case of the succeeded pair is also slightly simplified: invoking
conn_check_update_check_list_state_for_ready() to complete the
ready - connected - ready flip-flop transition is no longer required
for the trickle test.
This test is redundant with the previous one.
We prefer not to change the state of the pair when it is added to the
triggered check queue. Previously its state was changed to in-progress,
which was a bit misleading, as it somewhat anticipated a future state.
Since commit fcd6bc86, a pair is not always created when its priority
is lower than the selected pair priority. We have to deal with this
possibility when calling priv_add_new_check_pair(). More precisely,
the component state should only be updated when a new pair has
actually been added.
With this patch, we merge the two variables stun_sent and
keep_timer_going. The three functions that are a possible source of a
new stun request now return a boolean value stating whether a request
has been sent. The semantics of keep_timer_going can then be deduced
from stun_sent and from the result of priv_conn_check_stream_nominate().
The trick that makes this merge possible is to repurpose the return
value of priv_conn_check_tick_stream(): the keep_timer_going value set
in this function when the conncheck list contains in-progress pairs is
redundant with the same check made later in
priv_conn_check_tick_stream_nominate().
With this patch, we try to make the processing order between the
different types of stun requests more explicit, given that only one
request is sent per timer callback tick, ie every 20ms, to respect the
stun pacing of the spec. We implement the following priority:
* triggered checks
* stun retransmissions
* ordinary checks
Concretely, while a stream has stun requests related to triggered
checks to be sent, all other stun transactions are delayed to the next
timer ticks.
The goal of this patch is to make this priority explicit, and more
easily swappable if needed. Triggered checks are more likely to succeed
than stun retransmissions, which is why they are handled first.
Ordinary checks, on the contrary, can be performed on a lower-priority
basis, after all other stun requests.
The problem that can sometimes be observed with a large number of stun
transactions is that stun retransmissions may suffer a delay after
they have reached their deadline. This delay should remain small thanks
to the design of the initial retransmission timer (RTO), which takes
into account the overall number of scheduled stun requests. It allows
all stun requests to be sent and resent at a predefined "pacing"
frequency without much extra delay.
This ordering is not perfect, because stun requests of a given type are
examined per-stream, looking at the first stream before the others,
so it introduces a natural priority for the first stream.
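The scheduling described above can be sketched as follows. This is an
illustrative Python model, not the libnice C code; the three queue
arguments are hypothetical stand-ins for the real per-stream lists.

```python
def pick_next_stun_request(triggered_checks, retransmissions, ordinary_checks):
    """Pick the single stun request to send on this timer tick, or None.

    Priority order: triggered checks, then stun retransmissions, then
    ordinary checks. Each argument is a FIFO list of pending requests.
    """
    # 1. Triggered checks are the most likely to succeed, so they go first.
    if triggered_checks:
        return triggered_checks.pop(0)
    # 2. Retransmissions whose deadline has passed come next.
    if retransmissions:
        return retransmissions.pop(0)
    # 3. Ordinary checks fill the remaining ticks.
    if ordinary_checks:
        return ordinary_checks.pop(0)
    return None
```

Because only one request leaves per tick, a backlog of triggered checks
delays every other transaction to later ticks, exactly as the message
describes.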
Also, update the RFC numbers that are implemented.
In OC2007R2 compatibility mode, we observed the behaviour of a skype
turn server returning a 300 (try-alternate) stun error code on its
tls connections. This value is apparently returned when the turn server
is already overloaded.
We noticed that the current code in priv_handle_turn_alternate_server()
cannot handle a non-udp turn server, because a tcp one would require
creating a new socket.
But even when creating such a new socket stack (tcp-bsd socket +
pseudossl socket), libnice still fails to establish a new connection to
the alternate server on port 443, in a very systematic way. I'm not sure
whether this problem is specific to this skype server infrastructure
(the skype client fails in a similar way). Anyway, this code path works
as expected with a non-microsoft turn server (tested with coturn).
A previous commit broke the logic used to start a discovery request for
tcp turn servers. The ambiguity came from the distinction between the
type of the turn server (turn->type), the compatibility of the
transport of the local base candidate (turn_tcp), and the reliability
of the underlying tcp socket (reliable_tcp).
reliable_tcp indicates whether the turn allocate request should be
"framed" in a tcp packet, according to RFC 4571. This is required in
OC2007R2 only.
This commit also puts the setup of the tcp turn socket in a separate
function, because such setup is also required when handling
try-alternate (code 300) stun errors on these tcp sockets, where we
have to set up a new connection to another tcp turn server.
Relay candidates obtained from a TLS turn server don't have to be
refreshed in OC2007R2 compatibility mode.
This is more friendly with stun pacing.
This patch updates the previous commit "agent: stay in aggressive mode
after conncheck has started" by allowing a switch from aggressive to
regular mode as long as no stun request has been sent. It gives the
agent some extra delay to still accept remote tcp candidates after its
state has already changed from gathering to connecting.
This patch updates the stun timing constants and provides the rationale
for the choice of these new values, in the context of the ice
connection check algorithm.
One important value during the discovery state is the combination of the
initial timeout and the number of retransmissions, because this state
may complete after the last stun discovery binding request has timed
out. With a combination of 500ms and 3 retransmissions, the discovery
state is bounded to 2000ms to discover server-reflexive and relay
candidates.
The retransmission delay doubles at each retransmission, except for the
last one. Generally, this state will complete sooner, when all
discovery requests get a reply before the timeout.
Another mechanism is used during the connection check, where a stun
request is sent with an initial timeout defined by:
RTO = MAX(500ms, Ta * (number of in-progress + waiting pairs))
with Ta = 20ms
The initial timeout is bounded by a minimum value, 500ms, and scales
linearly with the number of pairs about to be emitted. The same number
of retransmissions as in the discovery state is used during the
connection check. The total time to wait for a pair to fail is then
RTO + 2*RTO + RTO = 4*RTO with 3 retransmissions.
On a typical laptop setup, with a wired and a wifi interface with
IPv4/IPv6 dual stack, a link-local and a link-global IPv6 address, a
couple of virtual addresses, a server-reflexive address, and a turn
relay one, we end up with a total of 90 local candidates for 2 streams
with 2 components each. The connection check list includes up to 200
pairs when tcp pairs are discarded, with:
<33 in-progress and waiting pairs in 50% of cases (RTO = 660ms),
<55 in-progress and waiting pairs in 90% of cases (RTO = 1100ms),
and up to 86 in-progress and waiting pairs (RTO = 1720ms)
A retransmission count of 3 seems to be quite robust against sporadic
packet loss, if we consider for example a typical loss of 1% of the
overall transmitted packets.
And a relatively large initial timeout is interesting because it reduces
the overall network overhead caused by the stun requests and replies,
measured at around 3KB/s during a connection check with 4 components.
Finally, the total time to wait until all retransmissions have completed
and timed out (2000ms with an initial timeout of 500ms and 3
retransmissions) gives a bound on the worst network latency we can
accept when no packet is lost on the wire.
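The RTO arithmetic above can be written out directly. A minimal sketch
in Python (the real code is C; the constant and function names here are
illustrative only):

```python
TA_MS = 20          # stun pacing interval, Ta
MIN_RTO_MS = 500    # lower bound on the initial retransmission timeout

def initial_rto_ms(n_in_progress, n_waiting):
    """RTO = MAX(500ms, Ta * (number of in-progress + waiting pairs))."""
    return max(MIN_RTO_MS, TA_MS * (n_in_progress + n_waiting))

def time_to_fail_ms(rto_ms):
    """Total wait before declaring a pair failed, with 3 retransmissions.

    The delay doubles at each retransmission except the last one:
    RTO + 2*RTO + RTO = 4*RTO.
    """
    return rto_ms + 2 * rto_ms + rto_ms
```

Plugging in the figures from the message: 33 scheduled pairs give
20 * 33 = 660ms, 86 pairs give 1720ms, and the 500ms floor yields the
2000ms total used to bound the discovery state.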
The way pairs are unfrozen changed a bit between RFC5245 and RFC8445,
and made the code much simpler. Previously pairs were unfrozen "per
stream"; now they are unfrozen "per foundation". The principle of the
priv_conn_check_unfreeze_next function is now to unfreeze one and only
one frozen pair per foundation, all components and streams included.
The function is now idempotent: calling it when the conncheck list
still contains waiting pairs does nothing.
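The per-foundation unfreezing rule can be sketched like this. A
hypothetical Python model (the actual function operates on libnice's C
pair structures), where pairs are dicts with "foundation" and "state"
keys:

```python
FROZEN, WAITING = "frozen", "waiting"

def unfreeze_next(pairs):
    """Unfreeze at most one frozen pair per foundation.

    `pairs` holds the pairs of all streams and components together.
    Idempotent: if any pair is already waiting, do nothing.
    """
    if any(p["state"] == WAITING for p in pairs):
        return
    unfrozen_foundations = set()
    for p in pairs:
        if p["state"] == FROZEN and p["foundation"] not in unfrozen_foundations:
            p["state"] = WAITING
            unfrozen_foundations.add(p["foundation"])
```

Calling it again while waiting pairs remain is a no-op, which is the
idempotence property the message relies on.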
The new version of the RFC removed the distinction between the reliable
and unreliable maximum values for RTO. We choose to keep the value of
100ms that we used previously, which is lower than the recommended
value, but will be overridden most of the time, when a significant
number of pairs are handled.
We also compute the exact number of in-progress and waiting pairs
across all streams of the agent, instead of relying on the per-stream
value multiplied by the number of active streams.
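The exact counting described here can be sketched as follows; an
illustrative Python model (names and data shapes are assumptions, not
the libnice API), with streams as lists of pair dicts:

```python
TA_MS = 20          # stun pacing interval
RTO_FLOOR_MS = 100  # value kept from before, lower than recommended

def agent_rto_ms(streams):
    """Compute RTO from the exact number of scheduled pairs.

    Counts in-progress and waiting pairs over every stream of the
    agent, rather than multiplying a per-stream count by the number
    of active streams.
    """
    n = sum(1 for stream in streams for p in stream
            if p["state"] in ("in-progress", "waiting"))
    return max(RTO_FLOOR_MS, TA_MS * n)
```

With only a couple of scheduled pairs the 100ms floor applies; beyond
five scheduled pairs the linear Ta-based term takes over.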
An inbound stun request may arrive on a tcp pair whose tcp-active
socket has just been created and connected (the local candidate port is
zero), but has not yet caused the creation of a discovered
peer-reflexive local candidate (with a non-zero port). This inbound
request is stored in an early icheck structure to be replayed later.
When it is processed after the remote credentials have been received,
we have to find which local candidate it belongs to, by matching on
the address only, without the port.
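The matching fallback can be sketched like this. A hypothetical Python
model (the real lookup walks libnice's C candidate lists), with
candidates as dicts carrying "addr" and "port":

```python
def find_local_candidate(candidates, addr, port):
    """Find the local candidate an early inbound check belongs to.

    First try an exact (address, port) match; if that fails, fall back
    to address-only matching, to cover tcp-active sockets whose local
    candidate still has port zero.
    """
    for c in candidates:
        if c["addr"] == addr and c["port"] == port:
            return c
    for c in candidates:
        if c["addr"] == addr:
            return c
    return None
```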
An inbound STUN request on a pair that already has another STUN request
in flight should not generate a new triggered check, no matter the
type of the underlying socket.
An inbound stun request on a newly discovered pair should trigger a
conncheck in the reverse direction, and not promote the pair directly
to the succeeded state. This is particularly required if the agent is
in aggressive controlling mode.
Since we keep a relation between a succeeded pair and its discovered
pair, we can just test the socket associated with a given pair and, if
needed, follow the link to the parent succeeded pair.
Some tcp-active discovered peer-reflexive local candidates can only be
recognised by their local socket, when they have the same address and
the same port. This may happen when a nat generates an identical
mapping from two different base local candidates.
We may have situations where stun_timer_refresh is called with a
significant delay after the current deadline. Currently, this delay is
simply absorbed into the computation of the new deadline of the next
stun retransmission. We think this may lead to unfair situations,
where the next deadline may be too short, just to compensate for a
first deadline that was too long.
For example, if a stun request is scheduled with a delay of
200ms for the 2nd transmission and 400ms for the 3rd transmission,
and stun_timer_remainder() is called 300ms after the start of the
timer, the second delay will last only 300ms, instead of 400ms.
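The two scheduling policies can be contrasted in a few lines. An
illustrative Python sketch (function names are hypothetical, not the
stun timer API); times are in milliseconds:

```python
def next_deadline_old(previous_deadline, delay):
    # Previous behaviour: the next deadline is anchored to the old one,
    # so a late wakeup is absorbed by (and shortens) the next delay.
    return previous_deadline + delay

def next_deadline_new(now, delay):
    # Fairer behaviour: restart the full delay from the actual wakeup
    # time, so each retransmission waits its whole interval.
    return now + delay
```

With the example from the message (2nd transmission due at 200ms, a
400ms delay to follow, wakeup at 300ms): the old scheme yields a
deadline of 600ms, leaving only 300ms; the new scheme yields 700ms,
preserving the full 400ms delay.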
The port number must be different for all local host candidates, not
just within the same component, but across all components and all
streams. An ambiguity between a local host candidate and an identical
server-reflexive candidate has more unwanted consequences when it
concerns two different components, because an inbound stun request may
be associated with the wrong component.
Also adds a unit test
Fixes #67
The refresh list may be modified while being iterated
This makes clang happy
Fixes #100
This other rare situation happens when a role conflict is detected via
a stun reply message, on a component that already has a nominated pair
with a higher priority. In that case, the retransmit flag should be
honored, and the pair with the "role conflict" should not be
retransmitted.
When pruning pending checks (after at least one nominated pair has been
obtained), some additional cases need to be handled, to ensure that
the property "all pairs, and only the pairs, having a higher priority
than the nominated pair should have the stun retransmit flag set"
remains true during the whole conncheck:
- a pair "not to be retransmitted" must be removed from the triggered
  check list (because a triggered check would create a new stun
  request, which would de facto ignore the retransmit flag)
- an in-progress pair "not to be retransmitted", for which no stun
  request has been sent (p->stun_transactions == NULL, a transient
  state), must be removed from the conncheck list, just like a waiting
  pair.
- a failed pair must have its "retransmit" flag updated too, just like
  any other pair, since a failed pair could match an inbound check and
  generate a triggered check based on the retransmit flag value, ie
  only if this pair has a chance to become a better nominated pair.
  See the NICE_CHECK_FAILED case in priv_schedule_triggered_check().
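The three rules above can be sketched together. A hypothetical Python
model of the pruning pass (the real code works on libnice's C check
lists; pairs here are dicts with "priority", "state", "retransmit" and
"stun_transactions" keys):

```python
def prune_pending_checks(conncheck_list, triggered_queue, selected_priority):
    """Prune after a pair has been nominated, keeping the invariant:
    exactly the pairs with priority above the nominated pair keep the
    retransmit flag set."""
    # Update the retransmit flag on every pair, failed pairs included.
    for p in conncheck_list:
        p["retransmit"] = p["priority"] > selected_priority
    for p in list(conncheck_list):
        if p["retransmit"]:
            continue
        # Rule 1: drop it from the triggered check list, since a
        # triggered check would send a new request regardless of the flag.
        if p in triggered_queue:
            triggered_queue.remove(p)
        # Rule 2: a waiting pair, or an in-progress pair with no stun
        # transaction yet (a transient state), leaves the conncheck list.
        if p["state"] == "waiting" or (
                p["state"] == "in-progress" and not p["stun_transactions"]):
            conncheck_list.remove(p)
```

Note that failed pairs stay in the list but get their flag cleared
(rule 3), so a later inbound check on them only triggers a recheck when
they could still beat the nominated pair.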
The function conn_check_update_retransmit_flag(), introduced to
reenable the retransmit flag on pairs with a higher priority than the
nominated one, can be merged into priv_prune_pending_checks(), and its
invocation replaced by conn_check_update_check_list_state_for_ready().
The function priv_prune_pending_checks() can also be tweaked to use
the component selected pair priority, instead of getting it from
the checklist. This function is called when at least one nominated pair
exists, so selected_pair is this nominated pair.