| Commit message | Author | Age | Files | Lines |
|
Add "BindsTo=dbus.service" to NetworkManager.service so that when the
D-Bus service gets restarted, NM is also restarted instead of staying
stopped.
https://bugzilla.redhat.com/show_bug.cgi?id=2161915
https://gitlab.freedesktop.org/NetworkManager/NetworkManager/-/merge_requests/1605
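For illustration, the unit would then contain roughly this fragment
(simplified sketch; per systemd.unit(5), BindsTo= is typically combined
with After= on the same unit):

    [Unit]
    After=dbus.service
    BindsTo=dbus.service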
|
The actual formatting depends on the clang-format version. Print the
version that was used, which is particularly interesting when our
gitlab-ci check (which uses the correct version) reports an error.
|
Fixes: e1648d0665a0 ('core: commit l3cd asynchronously on DHCP bound event')
Co-authored-by: Thomas Haller <thaller@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=2179537
https://gitlab.freedesktop.org/NetworkManager/NetworkManager/-/merge_requests/1609
|
It contains "getenforce" and "setenforce", which are needed by some
NMCI tests.
|
https://gitlab.freedesktop.org/NetworkManager/NetworkManager/-/issues/1272
https://gitlab.freedesktop.org/NetworkManager/NetworkManager/-/merge_requests/1558
https://gitlab.freedesktop.org/NetworkManager/NetworkManager/-/merge_requests/1607
|
Using the ppp code is rather ugly.
Historically, the pppd headers don't follow a good naming convention
and define things that conflict with our headers:
/usr/include/pppd/patchlevel.h:#define VERSION "2.4.9"
/usr/include/pppd/pppd.h:typedef unsigned char bool;
Hence we had to include the pppd headers in a certain order and be
careful.
ppp 2.5 changes the API and cleans that up. But since we also need to
support old versions, it does not immediately simplify anything.
Only include pppd headers in "nm-pppd-compat.c" and expose a wrapper
API from "nm-pppd-compat.h". The purpose is that "nm-pppd-compat.h"
exposes clean names, while all the handling of ppp is in the source
file.
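As a rough sketch of that split (the wrapper name is illustrative; the
real compat layer covers many more pppd symbols and both API versions):

    /* nm-pppd-compat.h: exposes clean names, includes no pppd headers */
    int nm_pppd_compat_get_ifunit (void);

    /* nm-pppd-compat.c: the only file that includes pppd headers */
    #include <pppd/pppd.h>   /* defines "bool", VERSION, ... */

    #include "nm-pppd-compat.h"

    int
    nm_pppd_compat_get_ifunit (void)
    {
        /* pppd 2.4.x global; a ppp 2.5 build would return the value
         * from the corresponding 2.5 accessor behind this same
         * wrapper. */
        return ifunit;
    }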
|
This change does the following:
* Add nm-pppd-compat.h to mask details regarding different
  versions of pppd.
* Fix nm-pppd-plugin.c regarding differences in API between
  2.4.9 (current) and the latest pppd 2.5.0 in the master branch.
* Additional fixes to configure.ac to appropriately set defines used
  for compilation.
|
ppp 2.5 adds a pkg-config file, so we can detect the version.
Use it.
[thaller@redhat.com: split out patch]
|
Fixes: 5d28a0dd899b ('doc: replace all (allow-none) annotations by (optional) and/or (nullable)')
|
The permissions for running CI will be restricted for external
contributors. It will only work for projects that use "detached MR
pipelines" ([1]).
Note that for it to actually work, a member with permission might have
to go to the "pipeline" tab of the merge request and click "run
pipeline". But this snippet is necessary for that.
[1] https://docs.gitlab.com/ee/ci/pipelines/merge_request_pipelines.html
https://gitlab.freedesktop.org/freedesktop/freedesktop/-/issues/540#what-it-means-for-me-a-maintainer-of-a-project-part-of-gitlabfreedesktoporg
|
Obsoletes: https://gitlab.freedesktop.org/NetworkManager/NetworkManager/-/merge_requests/1595
https://gitlab.freedesktop.org/NetworkManager/NetworkManager/-/merge_requests/1601
|
- We need to fetch more entries per page. 100 is the maximum without
  pagination, but that is enough for us.
- Previously, we checked all stages. Now, let's skip the "prep" and
  "tier3" stages.
This change should work with both old and new pipelines.
|
We want the tier2+ tests to run only manually. As those tests
depend on the respective prep step, there are 3 possibilities:
1) Make prep manual and the tier test automatic. That is what we would
   want, because then we can just manually trigger the prep step (one
   click). However, in the past this didn't work.
2) Make the prep automatic and the test manual. That works; the downside
   is that we often run the prep step when it's not needed. This is what
   we used to do to work around 1).
3) Make both prep and the test manual. Then no unnecessary tests are
   run, but triggering a manual test is cumbersome: first click to start
   the prep step, then wait, then click again.
Revisit this. It seems 1) is working now. Yay.
Also rename the prep stages, so that it's clear to which tier they
belong. I guess I could move them instead to prep1, prep2, prep3
stages, but then there would be a lot of columns on the web site.
|
The distro.name is not just a pretty name, it's the name under which we
fetch the container. It is thus a well-known name that we can rely on.
The "base_type" only depends on the distro name, and it makes no sense
to ever choose a different one. Tracking it in the "distributions"
array is thus redundant.
Move the mapping from distro.name to the base type to a separate place.
|
The tag we actually use already contains a hash of the input files and
is generated (by `ci-fairy generate-templates`). There is no need for
this fixed prefix, as is also evident from the date in it, which was
badly maintained and meaningless.
Drop it.
|
The long name looks verbose and takes away space on the web page.
Shorten the name.
|
The benefit is that instead of one long-running job for fedora:37 (the
current tier1 test), we have several smaller ones.
A minor downside is that if the build is broken, usually the very
first test already fails. Previously, that meant that the follow-up
tests were skipped. Now they all run in parallel. However, test
failures should be the exception, so the wasted resources are probably
irrelevant. The upside is that we can see which tests fail, and we run
them much faster (in parallel).
This is only done for the tier1 test, because those tests are started
automatically. Other tiers need to be triggered manually, which already
means a lot of clicking. Making those matrix tests as well would result
in an insane amount of clicking. As those other tests run much less
often, keeping them as one big job is probably fine.
|
We have many test configurations (i.e. distros like fedora:37,
debian:9). Almost all of them are triggered manually, because running
them every time would be wasteful.
Still, even though we trigger those tests only seldom, whenever we
trigger them all together they still consume too many resources of the
freedesktop.org gitlab infrastructure.
One possibility would be to just drop old distros (e.g. fedora:30).
Which tests are set up in gitlab-ci is constantly refined and adjusted,
so dropping some distros is not necessarily wrong and bound to happen
eventually.
However, I also don't find it great to just disable tests that are still
passing. If we want to avoid consuming too many resources, we can just
choose not to run those tests. We don't need to enforce that by deleting
tests. Once deleted, such a configuration cannot be tested anymore, as it
would be too cumbersome to recreate the setup manually.
Instead, introduce stages/tiers to mark more clearly which
configurations we should test even less frequently.
Note that developers are still required not to trigger too many tests
at once, so as not to monopolize the CI resources. The stages make that
clearer to see, but don't solve it. Deleting tests might solve it, but
only if we delete a significant number of those tests, which seems
undesirable.
|
The script now fails if the user passes an invalid "$NM_TEST_SELECT_RUN"
or if the script references an invalid name.
|
https://gitlab.freedesktop.org/NetworkManager/NetworkManager/-/merge_requests/1598
|
pip on Debian 12 semi-forces us to use a venv. That's hard enough on
its own, but even more so when we just want to run meson, which only
relies on the standard library anyway.
Since that flag doesn't exist in earlier versions, try both and hope
one invocation succeeds.
https://gitlab.freedesktop.org/NetworkManager/NetworkManager/-/merge_requests/1595
|
When the unmanaged state is queued, we must ensure that the current
activation doesn't overwrite the queued state with a new one. This can
happen, for example, when a dispatcher script or a firewall call
terminates, or when the next activation stage is dispatched.
Fixes-test: @preserve_master_and_ip_settings
https://bugzilla.redhat.com/show_bug.cgi?id=2178269
https://gitlab.freedesktop.org/NetworkManager/NetworkManager/-/merge_requests/1599
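As a rough illustration of the guard, a standalone miniature (all
types and names here are invented; the real code operates on NMDevice
and its queued-state machinery):

    #include <stdbool.h>

    typedef enum { STATE_NONE, STATE_ACTIVATED, STATE_UNMANAGED } State;

    typedef struct {
        State state;        /* current state */
        State queued_state; /* state queued to be applied next */
    } Device;

    /* called from async completion handlers (dispatcher script,
     * firewall call, next activation stage): bail out instead of
     * overwriting a queued unmanaged state with a new one. */
    static bool
    activation_may_set_new_state (Device *self)
    {
        if (self->queued_state == STATE_UNMANAGED)
            return false; /* keep the queued state */
        return true;
    }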
|
Older versions of iproute2 don't support the "enclimit" argument. Work
around that in the unit tests.
Fixes: 1505ca3626b2 ('platform/tests: ip6gre & ip6gretap test cases (ip6 tunnel flags)')
|
git_ref_exists() memoizes the result. But while it looks up the SHA sum
for "ref", it can also cache the result for the SHA sum itself.
|
https://gitlab.freedesktop.org/NetworkManager/NetworkManager/-/merge_requests/1596
|
Every branch (for example "nm-1-40") has exactly one next branch from
which patches should be backported (in that example, that branch is
"nm-1-42").
While "find-backports" searches all newer branches for patches, it does
not make clear from where the patch should come.
That means, if you run the script `contrib/scripts/find-backports origin/nm-1-40`,
it will check the nm-1-42 and main branches, and it might suggest
backporting patches that are only on main but not on "nm-1-42". That
would be wrong, because patches need to first go into nm-1-42 and then
be backported (from there) further to nm-1-40.
Print a warning to highlight that.
|
- avoid list([...]).
- use some f-strings.
|
If the client was waiting for IPv6 DAD to complete and the lease was
updated or lost, `wait_ipv6_dad` needs to be cleared; otherwise, at
the next platform change the client will try to evaluate the DAD state
with a different lease or no lease at all. In particular, if there is
no lease, the client will try to decline it because there are no valid
addresses, leading to an assertion failure:
../src/core/dhcp/nm-dhcp-client.c:997:_dhcp_client_decline: assertion failed: (l3cd)
Backtrace:
__GI_raise ()
__GI_abort ()
g_assertion_message ()
g_assertion_message_expr ()
_dhcp_client_decline (self=0x1af13b0, l3cd=0x0, error_message=0x8e25e1 "DAD failed", error=0x7ffec2c45cb0) at ../src/core/dhcp/nm-dhcp-client.c:997
l3_cfg_notify_cb (l3cfg=0x1bc47f0, notify_data=0x7ffec2c46c60, self=0x1af13b0) at ../src/core/dhcp/nm-dhcp-client.c:1190
g_closure_invoke ()
g_signal_emit_valist ()
g_signal_emit ()
_nm_l3cfg_emit_signal_notify () at ../src/core/nm-l3cfg.c:629
_nm_l3cfg_notify_platform_change_on_idle () at ../src/core/nm-l3cfg.c:1390
_platform_signal_on_idle_cb () at ../src/core/nm-netns.c:411
g_idle_dispatch ()
Fixes: 393bc628ff69 ('dhcp: wait DAD completion for DHCPv6 addresses')
https://bugzilla.redhat.com/show_bug.cgi?id=2179890
https://gitlab.freedesktop.org/NetworkManager/NetworkManager/-/merge_requests/1594
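The shape of the fix as a standalone miniature (the struct and function
are invented; only the `wait_ipv6_dad` flag is the field named above):

    #include <stdbool.h>
    #include <stddef.h>

    typedef struct {
        void *lease;         /* stands in for the l3cd lease data */
        bool  wait_ipv6_dad; /* waiting for DAD on lease addresses */
    } Client;

    /* on lease update or loss, clear the DAD-wait flag so that a later
     * platform event cannot evaluate the old DAD state against a
     * different (or missing) lease. */
    static void
    client_set_lease (Client *self, void *new_lease)
    {
        self->wait_ipv6_dad = false;
        self->lease = new_lease;
    }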
|
https://gitlab.freedesktop.org/NetworkManager/NetworkManager/-/merge_requests/1593
|
<error> is mostly for "really should not happen" scenarios. It's
closer to an assertion failure, something that NetworkManager should
not let happen.
Of course, things can go wrong, but <warn> is sufficient. When ovsdb
communicates something unexpected, it's just a warning. At least, that's
also what all the similar cases in "nm-ovsdb.c" already do.
|
GSocketConnection/GOutputStream/GInputStream seem rather unnecessary.
Maybe they make sense when you want to write portable code (for
Windows). Otherwise, watching a file descriptor and reading/writing it
directly is simpler (and also more efficient).
For example, we passed no GCancellable to g_input_stream_read_async().
What does that mean w.r.t. destroying the NMOvsdb instance? I suspect
it's wrong, but it's hard to say, because there are so many layers of
code.
Note that we keep state in NMOvsdb anyway, namely the data we want to
send (output_buf) and the data we partially received (input_buf). All
we need are poll notifications when the file descriptor is ready. To
those, we hook up the read/write callbacks. Also, the code was async
before, and there were callbacks when read/write was done. That did
not simplify the code in any way.
- We no longer use separate NMOvsdbPrivate.buf and NMOvsdbPrivate.input
  buffers. There is just an NMOvsdbPrivate.input_buf that we can fill
  directly.
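A minimal sketch of the direct fd watching (assuming an already
connected socket fd; fd_ready_cb and the buffer handling are invented
for illustration):

    #include <glib-unix.h>
    #include <unistd.h>

    static gboolean
    fd_ready_cb (gint fd, GIOCondition condition, gpointer user_data)
    {
        GString *input_buf = user_data;
        char buf[4096];
        gssize n;

        if (condition & G_IO_IN) {
            n = read (fd, buf, sizeof buf);
            if (n > 0)
                g_string_append_len (input_buf, buf, n); /* then parse
                    complete JSON messages out of input_buf */
            else if (n == 0)
                return G_SOURCE_REMOVE; /* peer closed the socket */
        }
        return G_SOURCE_CONTINUE;
    }

    static void
    watch_fd (int fd, GString *input_buf)
    {
        GSource *source = g_unix_fd_source_new (fd, G_IO_IN);

        g_source_set_callback (source, G_SOURCE_FUNC (fd_ready_cb),
                               input_buf, NULL);
        g_source_attach (source, NULL);
        g_source_unref (source);
    }

A G_IO_OUT watch for flushing output_buf would be attached the same
way, only while there is pending data to send.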
|
The "priv->bufp" offset is only used while parsing a message at a time.
It's unnecessary to track it in NMOvsdbPrivate and keep it between
parsing messages. Tracking the state in NMOvsdbPrivate makes it more
complicated to understand, because one needs to reason at which times
the state is used (when it really is not used).
Also, move the parsing to a separate function.
|
We did not initialize "child_stderr". If that were necessary, we would need
to add it too. However, it is clearly not necessary to initialize those fields.
|
G_SPAWN_CLOEXEC_PIPES is supported since glib 2.40, which we already
depend on.
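For illustration, a hedged sketch of how the flag is used (argv and
error handling are illustrative):

    #include <glib.h>

    static void
    spawn_helper (void)
    {
        gchar  *argv[] = { "/bin/true", NULL };
        gint    stdout_fd = -1;
        GPid    pid;
        GError *error = NULL;

        /* G_SPAWN_CLOEXEC_PIPES (glib >= 2.40) creates the pipe fds
         * with O_CLOEXEC, so they cannot leak into other children. */
        if (!g_spawn_async_with_pipes (NULL, argv, NULL,
                                       G_SPAWN_CLOEXEC_PIPES
                                           | G_SPAWN_DO_NOT_REAP_CHILD,
                                       NULL, NULL, &pid,
                                       NULL, &stdout_fd, NULL, &error)) {
            g_warning ("spawn failed: %s", error->message);
            g_clear_error (&error);
        }
    }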
|
It's not used. It's better to use the SOCK_NONBLOCK flag for socket(),
as we do.
Also, an implementation that blindly calls F_SETFL without merging the
existing flags from F_GETFL is just wrong. Drop it altogether.
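A sketch of the preferred pattern (socket domain/type illustrative):

    #include <sys/socket.h>

    static int
    create_socket (void)
    {
        /* request non-blocking and close-on-exec atomically at
         * creation, instead of a fixup fcntl() afterwards. */
        return socket (AF_UNIX,
                       SOCK_STREAM | SOCK_NONBLOCK | SOCK_CLOEXEC, 0);
    }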
|
Fixes: df1d214b2ea7 ('clients: polkit-agent: implement polkit agent without using libpolkit')
|
F_SETFL will reset the flags. That is wrong, as we only want to add the
O_NONBLOCK flag while leaving the other flags alone. Usually, we would
need to call F_GETFL first.
Note that on Linux, F_SETFL can only set certain flags, so the
O_RDWR|O_CLOEXEC flags were unaffected by this. That means most likely
there are no other flags that our use of F_SETFL would wrongly clear.
Still, it's ugly, because it's not obvious whether there might be other
flags.
Avoid that altogether by setting the flag already during open().
Fixes: 67e092abcbde ('core: better handling of rfkill for WiMAX and WiFi (bgo #629589) (rh #599002)')
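Both patterns side by side, as a sketch (path and helper names are
illustrative):

    #include <fcntl.h>

    /* preferred: request O_NONBLOCK at open() time */
    static int
    open_nonblocking (const char *path)
    {
        return open (path, O_RDWR | O_CLOEXEC | O_NONBLOCK);
    }

    /* for an already-open fd: merge with the current flags first */
    static int
    set_nonblocking (int fd)
    {
        int flags = fcntl (fd, F_GETFL, 0);

        if (flags < 0)
            return -1;
        return fcntl (fd, F_SETFL, flags | O_NONBLOCK);
    }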
|
Fixes: d65702803cb0 ('core: print stderr from nm-daemon-helper')
|
Fixes: 6ac21ba916b3 ('core: add infrastructure for spawning a helper process')
|