| Commit message | Author | Age | Files | Lines |
After a trunk VM has been migrated, the trunk status remains
DOWN. Once the parent port is back to ACTIVE, update the trunk
status accordingly.
Depends-On: https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/873351
Closes-Bug: #1988549
Change-Id: Ia0f7a6e8510af2c3545993e0d0d4bb06a9b70b79
(cherry picked from commit 178ee6fd3d76802cd7f577ad3d0d190117e78962)
This patch covers an edge case that can happen when the number
of DHCP agents ("dhcp_agents_per_network") or L3 agents
("max_l3_agents_per_router") has been reduced and more agents are
assigned than the currently configured number. If the user removes an
agent assignment from an L3 router or a DHCP network, the records
with the lowest binding indexes may be removed first.
Now the method ``get_vacant_binding_index`` compares the number of
agents bound against the number required. If a new one is needed, the
method returns the lowest unused binding indexes first.
Closes-Bug: #2006496
Conflicts:
neutron/common/_constants.py
neutron/objects/l3agent.py
Change-Id: I25145c088ffdca47acfcb7add02b1a4a615e4612
(cherry picked from commit 5250598c804a38c55ff78cfb457b73d1b3cd7e07)
(cherry picked from commit 0920f17f476ce8b398deea3e54e9f90b5251cfc9)
(cherry picked from commit 7dcf8be112ed205a6c694c1f3549e08b4234d82d)
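The lowest-unused-index selection described above can be sketched as follows (illustrative signature and parameter names, not Neutron's actual ``get_vacant_binding_index`` code):

```python
def get_vacant_binding_index(num_agents, bindings, lowest_index=1):
    """Return the lowest binding index not yet used, or -1 if full.

    ``bindings`` is the set of indexes currently assigned and
    ``num_agents`` the configured maximum (e.g. dhcp_agents_per_network
    or max_l3_agents_per_router).
    """
    used = set(bindings)
    # If the configured maximum was reduced, there may already be more
    # bindings than allowed; in that case no new index is vacant.
    if len(used) >= num_agents:
        return -1
    for index in range(lowest_index, lowest_index + num_agents):
        if index not in used:
            return index
    return -1
```

With this, removing the binding with index 2 and re-scheduling fills index 2 again instead of appending an index above the configured maximum.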
Ports with port security disabled should not be in the ACL
neutron_pg_drop, but when neutron-ovn-db-sync-util is run, such ports
are still added to the neutron_pg_drop ACL. This is because a port
with security disabled is not a trusted port.
Co-authored-by: archiephan <chungphan7819@gmail.com>
Closes-Bug: #1939723
Change-Id: Iebce0929e3e68ac5be0acaf5cdac4f5833cb9f2f
(cherry picked from commit 4511290b726f605384285228a28ad7b32a4b8c43)
Fix the following deprecation warnings.
PkgResourcesDeprecationWarning:
<MagicMock name='execute().split().__getitem__().__getitem__()'
id='140417024565696'> is an invalid version and will not be
supported in a future release
DeprecationWarning: Creating a LegacyVersion has been
deprecated and will be removed in the next major release
Change-Id: I23540114120f6ea52754116cfaaeac35e09543b4
Closes-Bug: 1986428
Signed-off-by: Takashi Natsume <takanattie@gmail.com>
(cherry picked from commit 76cf6b4a9e9a8c8f7b01f44d787b66b9b894331b)
(cherry picked from commit aaafcbef3367a8b937a0b3283170031384d5504f)
notify() is called from python-ovs code, which is not built to
recover from an exception in this user-overridden code. If there
is an exception (e.g. the DB server is down when we process
the hash ring), it can cause an unrecoverable error
in processing OVSDB messages, rendering the neutron worker useless.
Change-Id: I5f703d82175d71a222c76df37a82b5ccad890d14
(cherry picked from commit 67e616b2380d6549308a15077b2043721dbea5d0)
(cherry picked from commit 848787785eb1140ee7d0eac72f3967b39345e625)
Conflicts: neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovsdb_monitor.py
(cherry picked from commit 3566cc065eb7e811822a472bd40e37ecf7668971)
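The fix boils down to catching everything inside the overridden hook. A minimal sketch of the pattern (class and method names are illustrative stand-ins, not the actual ovsdb_monitor code):

```python
import logging

LOG = logging.getLogger(__name__)


class RowEventHandler:
    """Stand-in for the OVSDB notify hook described above.

    python-ovs invokes notify() from its connection loop and does not
    expect user code to raise, so any exception (e.g. the DB server
    being down while consulting the hash ring) must be handled here
    rather than propagate and break OVSDB message processing.
    """

    def notify(self, event, row, updates=None):
        try:
            self.handle_event(event, row, updates)
        except Exception:
            # Log and keep going: the worker must stay usable.
            LOG.exception('Unexpected exception in notify()')

    def handle_event(self, event, row, updates):
        raise NotImplementedError
```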
In the fullstack security group test, after the fake VMs are created,
there is a check that connectivity to some custom port works with
port security disabled.
After that, the "block_until_ping" method is called for each VM.
This patch changes the order: first wait until the VMs can be pinged,
and only then run the netcat tests.
Even if that does not solve the failures of this test, we may learn
more about whether the issue is caused by netcat or there is simply
no ICMP connectivity between the VMs at all.
Change-Id: Ie9e2170c761c9a10f3daa991c3fb77f304bb07e2
Related-Bug: #1742401
(cherry picked from commit 1e9a3bffd2cf565568ac479710104cd3a4cdae53)
This patch limits the tox version to <4 in stable/ussuri.
Related-Bug: #1999558
Change-Id: I9c62d429bb819336da05055fecd08e3816986bf8
In the Linuxbridge and OVS PortFixture, when a port is created, the
correct MAC address needs to be configured in the fake VM's namespace.
It seems that for some reason it is sometimes not properly configured,
which may cause failures of e.g. the DHCP tests.
So this patch adds retries, for up to 10 seconds, to ensure that the
MAC address is configured to the expected value.
Closes-bug: #2000150
Change-Id: I8c6d226e626812c3ccf0a2681be68a5b080b3463
(cherry picked from commit 370d8bcea3ae728c1aacba7b36800ecd759f3f8e)
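The retry loop can be sketched roughly like this (the callables and parameter names are illustrative, not the fixture's real API):

```python
import time


def wait_until_mac_configured(get_mac, expected_mac, set_mac,
                              timeout=10, interval=1):
    """Keep (re)applying the expected MAC until the device reports it.

    Returns True once the MAC matches, False if the timeout expires.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if get_mac() == expected_mac:
            return True
        set_mac(expected_mac)   # try to (re)configure the address
        time.sleep(interval)
    return get_mac() == expected_mac
```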
During the OVN DB inconsistency check, an OVN LSP may not be present
in the Neutron DB. In this case, continue processing the other LSPs
and let the other ``DBInconsistenciesPeriodics`` methods resolve the
issue.
Closes-Bug: #1999517
Conflicts:
neutron/tests/unit/plugins/ml2/drivers/ovn/mech_driver/ovsdb/test_maintenance.py
Change-Id: Ifb8bdccf6819f7f8af1abd3b82ccb1cd2e4c2fb8
(cherry picked from commit dfe69472a82d7eee96299122c8a9520aeed73ddd)
(cherry picked from commit 951e2c74ae0baff84e98b5580071d09db95f00d5)
As we are hitting memory limits (again) in the functional tests
job, let's configure 8GB of swap in all functional and fullstack jobs.
Before this patch, 4GB of swap was configured only in the FIPS jobs;
now it will be set to 8GB in all functional and fullstack jobs.
Closes-Bug: #1966394
Conflicts:
zuul.d/base.yaml
Change-Id: I6b097d8409318a5dfe17e9673adb6c1431a59b0b
(cherry picked from commit b8dcb0b7afe06f92090d5d2c366b9aae3a532ebc)
(cherry picked from commit 8f76285bbb4c8343c784cdf593f0a36eb5b37863)
(cherry picked from commit df3ea8c765648093343f9eaab005d57dfc84cd66)
Nothing much else, what the title says...
Change-Id: Ib1d41a6e4c869e108f31c1eb604f22c794d66467
Closes-Bug: #1996759
Signed-off-by: Lucas Alvares Gomes <lucasagomes@gmail.com>
(cherry picked from commit bf44e70db6219e7f3a45bd61b7dd14a31ae33bb0)
| |
While the reverse order may work, it's considered invalid by OVN and not
guaranteed to work properly since OVN may not necessarily know which of
two ports is the one to configure.
This configuration also triggered a bug in OVN where tearing down a port
after deploying a new one resulted in removing flows that serve the
port.
There is a patch up for review for OVN [1] to better handle multiple
assignment of the same port, but it doesn't make the setup any more
valid.
[1] http://patchwork.ozlabs.org/project/ovn/patch/20221114092437.2807815-1-xsimonar@redhat.com/
Conflicts:
neutron/agent/ovn/metadata/agent.py
Closes-Bug: #1997092
Change-Id: Ic7dbc4e8b00423e58f69646a9e3cedc6f72d6c63
(cherry picked from commit 3093aaab13dd6ba04ef0e686eb4c6cc386c58941)
Do not fail during the creation of a security group when trying to
make a quota reservation for the security group rules. This feature
was added in [1] to prevent exceeding the rule quota during
security group creation.
However, as reported in LP#1992161, this method can be called from
the RPC worker. If this RPC worker is spawned alone (not with the API
workers), the extensions are not loaded and the security group rule
quota resources are not created. That means the quota engine does not
have the security group rules as managed resources (in this worker).
When a new network (and the first subnet) is created, the DHCP agent
(or agents) handling this network will try to create the DHCP port.
If, as commented in the LP bug, the default security group is not
created, the RPC worker will try to create it. In this case this
patch skips the quota check.
This patch is for stable releases only. Since Xena, this check is
done using a new method called "quota_limit_check" [2]. This method
does not fail in the related case.
[1]https://review.opendev.org/q/I0a9b91b09d6260ff96fdba2f0a455de53bbc1f00
[2]https://review.opendev.org/q/Id73368576a948f78a043d7cf0be16661a65626a9
Conflicts:
neutron/db/securitygroups_db.py
Closes-Bug: #1992161
Related-Bug: #1858680
Change-Id: I0f20b17c1b13c3cf56de70588fca4a6956d276df
(cherry picked from commit 02bdd0470246dd768227affa2d6a8dd8328d3463)
(cherry picked from commit 90865c06afe9780ac3116be9e527da9a75944c96)
Change-Id: Ie8dd684a7b79b0a322b1f2d17fffb4d58cfe94fc
(cherry picked from commit 562e9704f8ed292889f5786d373f9f09cf87ae8a)
This will allow subnets from shared networks to be added to routers using:
$ openstack router add subnet router_id subnet_id
Without this, the Neutron user must use a multi-router solution, which is
not convenient at all.
Conflicts:
neutron/db/l3_db.py
Closes-Bug: #1975603
Related-Bug: #1757482
Signed-off-by: Arnaud Morin <arnaud.morin@ovhcloud.com>
Change-Id: I50f07d41428e57e6bed9be16980a6c605b7d130e
(cherry picked from commit 8619c104b886517266f5b7ae7d19816aa5764dc0)
(cherry picked from commit 05569382481fadb05cc69449b19364647a8c4cdb)
This reverts commit 28f3017a90ecec6208aef9696cd7947972ec17d8.
Reason for revert: it breaks Neutron server startup if the OVN
database is large. Also, the Neutron server shouldn't
be configuring other services; that should rather
be done by an installer.
Change-Id: Ia1dc8072ecec1c019dd02039dadd78d544dbd843
(cherry picked from commit 30d1a40c508386905b12c43be1891ce503a8b634)
The https://docs.openstack.org/nova/latest/admin/aggregates.html link was
failing: from the OpenStack U version on, this page is no longer in the user
directory. Currently, only the link to the latest version has been changed;
it is advisable to change all such links.
Change-Id: Ic3b5a0ac7d832b162848b363396264ed0bfc4a25
(cherry picked from commit 210f5297f5ac5c180677d3f6419f436f77e56f1d)
Disabling in-band management for a bridge effectively disables it for
all controllers which are, or will be, set for that bridge. This
avoids the short window between configuring a controller and
setting its connection_mode to "out-of-band", during which a
controller running in the default "in-band" connection mode adds
hidden flows to the bridge.
Conflicts:
neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/ovs_bridge.py
neutron/tests/functional/agent/test_ovs_lib.py
neutron/tests/unit/plugins/ml2/drivers/openvswitch/agent/openflow/native/ovs_bridge_test_base.py
Closes-Bug: #1992953
Change-Id: Ibca81eb59fbfad71f223832228f408fb248c5dfa
(cherry picked from commit 8fcf00a36dfcec80bba73b63896d806f48835faf)
(cherry picked from commit 9d826bc77aab36dd194cc471397af68bd4ad39e5)
(cherry picked from commit 32a8b2d3388b0e1ed4b19a5f3b68a99762f9f70b)
In the case when an HA router isn't active on any L3 agent, the
_ensure_host_set_on_port method shouldn't try to update the port's host
to the host from which the RPC message was sent, as this can be a host
on which the router is in "standby" mode.
The method should only update the port's host to the router's "active_host"
if such an active_host has already been found.
Depends-On: https://review.opendev.org/c/openstack/requirements/+/841489
Closes-Bug: #1973162
Closes-Bug: #1942190
Change-Id: Ib3945d294601b35f9b268c25841cd284b52c4ca3
(cherry picked from commit cd8bf18150c8b0a4bc64979d800726483d9cdb6e)
This patch avoids a clash between the hash ring cleaning operation and the
API workers by ensuring that the cleaning happens before the nodes for
that host are added to the ring and the connections to the OVSDBs are
made (meaning no events, and therefore no SELECTs on the hash ring table
for that hostname).
It does this by re-using the same hash ring lock that starts
the probing thread. Now, the first worker that acquires the lock is
responsible for cleaning the hash ring for its own host as well as
starting the probing thread. Subsequent workers only need to register
themselves in the hash ring.
Conflicts:
neutron/plugins/ml2/drivers/ovn/mech_driver/mech_driver.py
Change-Id: Iba73f7944592a003232eb397ba1d4da3dcba5c3a
Closes-Bug: #1990174
Signed-off-by: Lucas Alvares Gomes <lucasagomes@gmail.com>
(cherry picked from commit b7b8f7c571440577a40aacf9d8d93abc3a5a48b3)
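The lock-based ordering above can be sketched with plain threading primitives (all names and callables here are illustrative, not the mech_driver code):

```python
import threading

_hash_ring_lock = threading.Lock()
_probing_started = False


def register_worker(hostname, ring, start_probing, clean_host):
    """First worker under the lock cleans and starts probing.

    Later workers only add themselves, so cleaning can never race with
    nodes that other workers on this host have just added.
    """
    global _probing_started
    with _hash_ring_lock:
        if not _probing_started:
            clean_host(hostname)     # purge stale ring entries first
            start_probing()          # then start the probing thread
            _probing_started = True
        ring.append(hostname)        # every worker registers itself
```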
When a port with an IP address from an auto-allocation subnet (SLAAC or
dhcp_stateless) is created or updated with a fixed IP address specified,
Neutron fails, because for such subnets the IP address is assigned
automatically based on the subnet's prefix and the port's MAC address.
But when the given IP address is in fact the correct EUI64-generated IP
address (the same one Neutron would generate for that port), there is no
need to raise an exception and fail the request.
Additionally, this patch fixes the imports section of the
ipam_pluggable_backend module.
Closes-bug: #1991398
Change-Id: Iaee5608c9581228a83f7ad75dbf2cc31dafaa9ea
(cherry picked from commit d7b44f7218ff665045adb923b1aa92c35b371af9)
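The comparison described above amounts to re-deriving the EUI-64 address and checking equality. A from-scratch sketch (Neutron uses its own ipv6_utils helper; these function names are illustrative):

```python
import ipaddress


def eui64_address(prefix, mac):
    """Derive the SLAAC (EUI-64) address for ``mac`` in ``prefix``.

    Flip the universal/local bit of the first MAC octet, insert ff:fe
    in the middle, and OR the 64-bit interface identifier into the
    /64 prefix.
    """
    octets = [int(b, 16) for b in mac.split(':')]
    octets[0] ^= 0x02                      # toggle the U/L bit
    eui = octets[:3] + [0xff, 0xfe] + octets[3:]
    iid = int.from_bytes(bytes(eui), 'big')
    net = ipaddress.ip_network(prefix)
    return ipaddress.ip_address(int(net.network_address) | iid)


def is_valid_eui64_fixed_ip(prefix, mac, requested_ip):
    # The request is acceptable when the user-supplied fixed IP equals
    # the address Neutron would have auto-generated anyway.
    return ipaddress.ip_address(requested_ip) == eui64_address(prefix, mac)
```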
This module couldn't be run on its own due to the missing registration of
the "ipam_driver" config option from the core Neutron configuration.
To solve that problem, a test plugin is configured for those tests,
and it registers the necessary option.
Change-Id: Iffdca967340c01d4e8c063c516d0d5441750c653
(cherry picked from commit a3e68e8f76e2792c30d1527bb20ee5961c6a8a53)
This patch splits the Hash Ring probing out of the maintenance task
into its own thread. The idea is to speed up the start of probing by
doing it right after adding a node to the Hash Ring.
By doing that, we avoid the problem of delayed probing in case the
connection with OVSDB takes longer than expected, with the hash
ring nodes being considered dead because they weren't probed in time.
The patch re-uses the same classes as before to start this new thread
(instead of reusing the maintenance task thread). It adds a layer of
synchronization with a lock to make sure that only one new Hash Ring
probing thread is started.
(cherry picked from commit 240f2c6aebb5a958e3cdea9b9188e7f605238494)
Closes-Bug: #1991655
Change-Id: Ic04493f20eb9aecda563942c51f343dc4202523a
Signed-off-by: Lucas Alvares Gomes <lucasagomes@gmail.com>
In I87489596e2ff224431f7e83f43a1725172ee0953, the intent was to
make sure all event rows were converted to "frozen" rows to avoid
race conditions. Since OvnDbNotifyHandler.notify() does not call
super(), it is necessary to do this conversion ourselves.
Related-Bug: #1896816
Change-Id: Ic281dec74a46c95024a6df1db3b1f9ee7d7d1227
(cherry picked from commit 27255fce30f570b549b026346fb10b7d0dca9039)
This adds support for deleting OVN controller/metadata agents.
Behavior is undefined if the agents are still actually up as per
the Agent API docs.
As part of this, it is necessary to be able to tell all workers
that the agent is gone. This can't be done by deleting the
Chassis, because ovn-controller deletes the Chassis if it is
stopped gracefully and we need to still display those agents as
down until ovn-controller is restarted. This also means we can't
write a value to the Chassis marking the agent as 'deleted'
because the Chassis may not be there. And of course you can't
use the cache because then other workers won't see that the
agent is deleted.
Due to the hash ring implementation, we also cannot naively just
send some pre-defined event that all workers can listen for to
update their status of the agent. Only one worker would process
the event. So we need some kind of GLOBAL event type that is
processed by all workers.
When the hash ring implementation was done, the agent API
implementation was redesigned to work around moving from having
a single OVN Worker to having distributed events. That
implementation relied on marking the agents 'alive' in the
OVSDB. With large numbers of Chassis entries, this induces
significant load, with 2 DB writes per Chassis per
cfg.CONF.agent_down_time / 2 seconds (37 by default).
This patch reverts that change and goes back to using events
to store agent information in the cache, but adds support for
"GLOBAL" events that are run on each worker that uses a particular
connection.
Change-Id: I4581848ad3e176fa576f80a752f2f062c974c2d1
(cherry picked from commit da3ce7319866e8dc874d405e91f9af160e2c3d31)
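The routing difference between ordinary hash-ring events and the new "GLOBAL" events can be sketched like this (a toy model under assumed names; the real implementation routes through the hash ring and OVSDB connection, not a worker list):

```python
import hashlib

# Hypothetical event type treated as GLOBAL, e.g. an agent deletion
GLOBAL_EVENTS = frozenset({'agent-deleted'})


def workers_for_event(event_type, key, workers):
    """Return the workers that should process an event.

    Ordinary events go to the single worker the hash selects for the
    key; GLOBAL events are delivered to every worker sharing the
    connection, so all agent caches see e.g. a deletion.
    """
    if event_type in GLOBAL_EVENTS:
        return list(workers)
    # Plain events: pick one worker deterministically from the key.
    digest = int(hashlib.sha256(key.encode()).hexdigest(), 16)
    return [workers[digest % len(workers)]]
```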
This is a small enhancement to https://review.opendev.org/#/c/737106/
that breaks apart the ControllerAgent and ControllerGatewayAgent.
Change-Id: I34c9e9621e84112dc950e1d274e867093f3d12e4
(cherry picked from commit 827c5878d3cca5807e72d90f3fb8b4a92d513426)
Sometimes Neutron fails to send a notification to Nova
due to a timeout, refused connection or another HTTP error.
Retry sending in those cases.
Conflicts:
neutron/notifiers/nova.py
neutron/tests/unit/notifiers/test_nova.py
Closes-Bug: #1987780
Change-Id: Iaaccec770484234b704f70f3c144efac4d8ffba0
(cherry picked from commit cd475f9af898b81d98b3e0d3f55b94ea653c193c)
(cherry picked from commit b5e9148cc7311080ba1b2a410145949c1adaa0ca)
(cherry picked from commit 49f49bc2bf3b477df00a92e25a8de66221ec8ee6)
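The retry-on-transient-failure idea can be sketched as follows (``send`` stands in for the actual notifier call; the retry count, backoff, and exception set are illustrative, not Neutron's configuration):

```python
import time

# Failures considered transient and worth retrying (assumed set)
RETRIABLE = (TimeoutError, ConnectionRefusedError)


def notify_nova_with_retries(send, max_retries=3, backoff=0):
    """Call ``send`` and retry it on transient failures."""
    for attempt in range(1, max_retries + 1):
        try:
            return send()
        except RETRIABLE:
            if attempt == max_retries:
                raise          # give up after the last attempt
            time.sleep(backoff)
```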
To ensure the "standardattributes" record is present during the
creation of the "provisioningblocks" record, the method
"add_provisioning_component" is wrapped in a database writer
context. That will ensure the standard attribute ID, used as a
foreign key in the "provisioningblocks" table, is present during
the transaction.
Closes-Bug: #1991222
Change-Id: If57d822ff617c2f9b5e5cade7d4ca74065376e55
(cherry picked from commit 9db730764c3b6bc394c4262e68bfcec7cea65b50)
The url of Floodlight[1] is unavailable.
Update it to an accessible project url[2].
[1] http://www.projectfloodlight.org/floodlight/
[2] https://github.com/floodlight/floodlight
Change-Id: I8de0cb952e0fa3b9a12fc27ab2764b1552524128
In order to avoid multiple LogicalSwitchPortUpdateUpEvent and
LogicalSwitchPortUpdateDownEvent events, it is mandatory to specify
the router port type when updating the port.
If the type is not specified when updating the port, the
transaction will trigger a modification in the OVN NB DB
that sets the port status to down[0], triggering an unnecessary
DownEvent followed by another UpEvent. Those unnecessary events
will most likely cause a revision conflict.
[0] - https://github.com/ovn-org/ovn/blob/
4f93381d7d38aa21f56fb3ff4ec00490fca12614/northd/northd.c#L15604
Conflicts:
neutron/common/ovn/constants.py
Closes-Bug: #1955578
Change-Id: I296003a936db16dd3a7d184ec44908fb3f261876
(cherry picked from commit 8c482b83f2cf6f5495f4df2e5698595db704798d)
This patch adds a condition prior to logging the "Disallow caching" log
message from the hash ring.
Prior to this patch, this message was logged when the number of nodes
connected to the hash ring was different from the number of API workers
at Neutron's startup. This is because the hash ring waits until all API
workers are connected to build the hash ring cache.
With this patch, we will only log the message once (per worker) until
the number of connected nodes changes. Once the nodes connect and the
cache is built, _wait_startup_before_caching() is no longer used until
the service is restarted.
Change-Id: I4f62b723083215483a2277cfcb798506671e1a2d
Closes-Bug: 1989480
Signed-off-by: Lucas Alvares Gomes <lucasagomes@gmail.com>
(cherry picked from commit 9655466763282f47a06862a47f7f31b48130277e)
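The log-once-per-node-count condition can be sketched like this (class and method names are illustrative, not the hash ring manager's actual API):

```python
import logging

LOG = logging.getLogger(__name__)


class HashRingCacheLogger:
    """Emit the "Disallow caching" warning once per node-count change.

    During startup the ring waits until all API workers are connected
    before building its cache; the warning is logged only when the
    connected-node count differs from the last value logged for,
    instead of on every lookup.
    """

    def __init__(self):
        self._last_logged_count = None

    def maybe_log(self, connected_nodes, expected_workers):
        if connected_nodes >= expected_workers:
            return False            # cache can be built; stay quiet
        if connected_nodes == self._last_logged_count:
            return False            # already logged for this count
        LOG.warning('Disallow caching, nodes %s < workers %s',
                    connected_nodes, expected_workers)
        self._last_logged_count = connected_nodes
        return True
```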
An attempt to list security groups for a project, or any
random string, can create a default SG for it. Only allow this
if the caller's privileges permit it.
Closes-bug: #1988026
Change-Id: Ieef7011f48cd2188d4254ff16d90a6465bbabfe3
(cherry picked from commit 01fc2b9195f999df4d810df4ee63f77ecbc81f7e)
When a port is created, the dns-assignment (dns-domain part)
was always taken from the Neutron config option dns_domain, which is
not always correct, since it could be the Neutron network's dns_domain
or the dns_domain sent when creating the port.
Change-Id: I7f4366ff5a26f73013433bfbfb299fd06294f359
Closes-Bug:1873091
(cherry picked from commit ea13f2e83f8c2de3def69b6c883a5c161c3a6180)
In some specific use cases, the cloud operator expects the source port
of a packet to stay the same across all masquerading layers up to the
destination host. With the implementation of the random-fully code,
this behavior changed, as the source port is always rewritten no matter
which type of architecture / network CIDRs is used in the backend.
This setting allows a user to fall back to the original behavior of the
masquerading process, which is to keep the source port consistent across
all layers. The initial random-fully fix prevents packet drops when
duplicate tuples are generated from two different namespaces and the same
source_ip:source_port goes toward the same destination, so enabling this
setting would allow that issue to show up again. Perhaps the right
approach here would be to fix this racy situation in the kernel,
e.g. by using the MAC address as a seed for the tuple.
Change-Id: Idfe5e51007b9a3eaa48779cd01edbca2f586eee5
Closes-bug: #1987396
(cherry picked from commit bbefe5285e7ab799422fab81488f57c9c22769b6)
(cherry picked from commit fa77abbc153dcf040a95f6a001d6661e07c25096)
"Description" attribute belongs to the StandardAttribute class from
which many other classes inherits (like e.g. Network, Port or Subnet).
In case when only description of object is updated, revision number of
the object should be bumped but it wasn't the case for all of the
objects. For example updated description of the Network or Router didn't
bumped its revision_number. It was like that because StandardAttribute
object was the only one which was dirty in the session, and as it is not
member of the HasStandardAttibutes class, it was filtered out.
Now, to fix that problem revision_plugin looks in the session.dirty list
for objects which inherits from HasStandardAttibutes class (as it was
before) but also for StandardAttribute objects to bump revision numbers.
Closes-Bug: #1981817
Closes-Bug: #1865173
Change-Id: I79b40a8ae5d594ed6fc875572663469c8b701202
(cherry picked from commit 4c9cb83d6b46a6425e603194649a61f51a07a307)
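The fixed scan can be sketched with plain objects (a toy model, not the real SQLAlchemy session or Neutron's revision_plugin):

```python
class StandardAttribute:
    """Toy stand-in for the standardattributes row."""

    def __init__(self, description=''):
        self.description = description
        self.revision_number = 0

    def bump_revision(self):
        self.revision_number += 1


class HasStandardAttributes:
    """Toy stand-in for resources carrying a standard_attr relation."""

    def __init__(self):
        self.standard_attr = StandardAttribute()


def bump_revisions(dirty_objects):
    # Besides resources inheriting from HasStandardAttributes, bare
    # StandardAttribute rows that are dirty on their own (e.g. a
    # description-only update) also get their revision bumped.
    for obj in dirty_objects:
        if isinstance(obj, HasStandardAttributes):
            obj.standard_attr.bump_revision()
        elif isinstance(obj, StandardAttribute):
            obj.bump_revision()
```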