author    Sebastian Andrzej Siewior <bigeasy@linutronix.de>  2021-10-21 14:55:04 +0200
committer Sebastian Andrzej Siewior <bigeasy@linutronix.de>  2021-10-21 14:55:04 +0200
commit    67a598b143dd00bd304592f8f4a564eb97598fed
tree      0e6ae259f9c1cabea7b4e245f4ca6c68397af822
parent    a93d17136d05e5b987819fbc25c222156fdd4325
download  linux-rt-67a598b143dd00bd304592f8f4a564eb97598fed.tar.gz
[ANNOUNCE] v5.15-rc6-rt13
Dear RT folks!
I'm pleased to announce the v5.15-rc6-rt13 patch set.
Changes since v5.15-rc6-rt12:
- The net series, which removes the seqcount_t from Qdisc, has been
  updated to the version that was applied upstream, plus a few fixes.
Known issues
- netconsole triggers WARN.
- The "Memory controller" (CONFIG_MEMCG) has been disabled.
- Valentin Schneider reported a few splats on ARM64, see
https://lkml.kernel.org/r/20210810134127.1394269-1-valentin.schneider@arm.com
The delta patch against v5.15-rc6-rt12 is appended below and can be found here:
https://cdn.kernel.org/pub/linux/kernel/projects/rt/5.15/incr/patch-5.15-rc6-rt12-rt13.patch.xz
You can get this release via the git tree at:
git://git.kernel.org/pub/scm/linux/kernel/git/rt/linux-rt-devel.git v5.15-rc6-rt13
The RT patch against v5.15-rc6 can be found here:
https://cdn.kernel.org/pub/linux/kernel/projects/rt/5.15/older/patch-5.15-rc6-rt13.patch.xz
The split quilt queue is available at:
https://cdn.kernel.org/pub/linux/kernel/projects/rt/5.15/older/patches-5.15-rc6-rt13.tar.xz
Sebastian
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
16 files changed, 367 insertions, 60 deletions
diff --git a/patches/0001_gen_stats_add_instead_set_the_value_in___gnet_stats_copy_basic.patch b/patches/0001-gen_stats-Add-instead-Set-the-value-in-__gnet_stats_.patch
index d9efdbe8ad18..245fb1322564 100644
--- a/patches/0001_gen_stats_add_instead_set_the_value_in___gnet_stats_copy_basic.patch
+++ b/patches/0001-gen_stats-Add-instead-Set-the-value-in-__gnet_stats_.patch
@@ -1,6 +1,7 @@
 From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
-Subject: gen_stats: Add instead Set the value in __gnet_stats_copy_basic().
 Date: Sat, 16 Oct 2021 10:49:02 +0200
+Subject: [PATCH 1/9] gen_stats: Add instead Set the value in
+ __gnet_stats_copy_basic().

 __gnet_stats_copy_basic() always assigns the value to the bstats
 argument overwriting the previous value. The later added per-CPU version
@@ -19,7 +20,7 @@ Add the values in __gnet_stats_copy_basic() instead overwriting. Rename
 the function to gnet_stats_add_basic() to make it more obvious.

 Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
-Link: https://lore.kernel.org/r/20211016084910.4029084-2-bigeasy@linutronix.de
+Signed-off-by: David S. Miller <davem@davemloft.net>
 ---
 include/net/gen_stats.h | 8 ++++----
 net/core/gen_estimator.c | 2 +-
diff --git a/patches/0002_gen_stats_add_gnet_stats_add_queue.patch b/patches/0002-gen_stats-Add-gnet_stats_add_queue.patch
index 99de73f9a76e..7dec1acbbdd8 100644
--- a/patches/0002_gen_stats_add_gnet_stats_add_queue.patch
+++ b/patches/0002-gen_stats-Add-gnet_stats_add_queue.patch
@@ -1,13 +1,13 @@
 From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
-Subject: gen_stats: Add gnet_stats_add_queue().
 Date: Sat, 16 Oct 2021 10:49:03 +0200
+Subject: [PATCH 2/9] gen_stats: Add gnet_stats_add_queue().

 This function will replace __gnet_stats_copy_queue(). It reads all
 arguments and adds them into the passed gnet_stats_queue argument. In
 contrast to __gnet_stats_copy_queue() it also copies the qlen member.

 Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
-Link: https://lore.kernel.org/r/20211016084910.4029084-3-bigeasy@linutronix.de
+Signed-off-by: David S. Miller <davem@davemloft.net>
 ---
 include/net/gen_stats.h | 3 +++
 net/core/gen_stats.c | 32 ++++++++++++++++++++++++++++++++
diff --git a/patches/0003_mq_mqprio_use_gnet_stats_add_queue.patch b/patches/0003-mq-mqprio-Use-gnet_stats_add_queue.patch
index 0ead455ea276..417c843407e3 100644
--- a/patches/0003_mq_mqprio_use_gnet_stats_add_queue.patch
+++ b/patches/0003-mq-mqprio-Use-gnet_stats_add_queue.patch
@@ -1,6 +1,6 @@
 From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
-Subject: mq, mqprio: Use gnet_stats_add_queue().
 Date: Sat, 16 Oct 2021 10:49:04 +0200
+Subject: [PATCH 3/9] mq, mqprio: Use gnet_stats_add_queue().

 gnet_stats_add_basic() and gnet_stats_add_queue() add up the statistics
 so they can be used directly for both the per-CPU and global case.
@@ -15,7 +15,7 @@ the per-CPU gnet_stats_queue::qlen was assigned to sch->q.qlen and
 sch->qstats.qlen. Now both fields are copied individually.

 Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
-Link: https://lore.kernel.org/r/20211016084910.4029084-4-bigeasy@linutronix.de
+Signed-off-by: David S. Miller <davem@davemloft.net>
 ---
 net/sched/sch_mq.c | 24 +++++-------------------
 net/sched/sch_mqprio.c | 49 ++++++++++++-------------------------------------
diff --git a/patches/0004_gen_stats_move_remaining_users_to_gnet_stats_add_queue.patch b/patches/0004-gen_stats-Move-remaining-users-to-gnet_stats_add_que.patch
index b212ffc7877b..f682d8dc2acb 100644
--- a/patches/0004_gen_stats_move_remaining_users_to_gnet_stats_add_queue.patch
+++ b/patches/0004-gen_stats-Move-remaining-users-to-gnet_stats_add_que.patch
@@ -1,6 +1,7 @@
 From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
-Subject: gen_stats: Move remaining users to gnet_stats_add_queue().
 Date: Sat, 16 Oct 2021 10:49:05 +0200
+Subject: [PATCH 4/9] gen_stats: Move remaining users to
+ gnet_stats_add_queue().

 The gnet_stats_queue::qlen member is only used in the SMP-case.

@@ -15,7 +16,7 @@ Let both functions use gnet_stats_add_queue() and remove unused
 __gnet_stats_copy_queue().

 Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
-Link: https://lore.kernel.org/r/20211016084910.4029084-5-bigeasy@linutronix.de
+Signed-off-by: David S. Miller <davem@davemloft.net>
 ---
 include/net/gen_stats.h | 3 ---
 include/net/sch_generic.h | 5 ++---
diff --git a/patches/0005_u64_stats_introduce_u64_stats_set.patch b/patches/0005-u64_stats-Introduce-u64_stats_set.patch
index e5030d202c82..c28d5b32c431 100644
--- a/patches/0005_u64_stats_introduce_u64_stats_set.patch
+++ b/patches/0005-u64_stats-Introduce-u64_stats_set.patch
@@ -1,6 +1,6 @@
-From: Ahmed S. Darwish <a.darwish@linutronix.de>
-Subject: u64_stats: Introduce u64_stats_set()
+From: "Ahmed S. Darwish" <a.darwish@linutronix.de>
 Date: Sat, 16 Oct 2021 10:49:06 +0200
+Subject: [PATCH 5/9] u64_stats: Introduce u64_stats_set()

 Allow to directly set a u64_stats_t value which is used to provide an init
 function which sets it directly to zero intead of memset() the value.
@@ -11,7 +11,7 @@ Add u64_stats_set() to the u64_stats API.

 Signed-off-by: Ahmed S. Darwish <a.darwish@linutronix.de>
 Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
-Link: https://lore.kernel.org/r/20211016084910.4029084-6-bigeasy@linutronix.de
+Signed-off-by: David S. Miller <davem@davemloft.net>
 ---
 include/linux/u64_stats_sync.h | 10 ++++++++++
 1 file changed, 10 insertions(+)
diff --git a/patches/0006_net_sched_protect_qdisc_bstats_with_u64_stats.patch b/patches/0006-net-sched-Protect-Qdisc-bstats-with-u64_stats.patch
index 4e5d233758c2..567756c9c728 100644
--- a/patches/0006_net_sched_protect_qdisc_bstats_with_u64_stats.patch
+++ b/patches/0006-net-sched-Protect-Qdisc-bstats-with-u64_stats.patch
@@ -1,6 +1,6 @@
-From: Ahmed S. Darwish <a.darwish@linutronix.de>
-Subject: net: sched: Protect Qdisc::bstats with u64_stats
+From: "Ahmed S. Darwish" <a.darwish@linutronix.de>
 Date: Sat, 16 Oct 2021 10:49:07 +0200
+Subject: [PATCH 6/9] net: sched: Protect Qdisc::bstats with u64_stats

 The not-per-CPU variant of qdisc tc (traffic control) statistics,
 Qdisc::gnet_stats_basic_packed bstats, is protected with Qdisc::running
@@ -34,7 +34,7 @@ still be valid.

 Signed-off-by: Ahmed S. Darwish <a.darwish@linutronix.de>
 Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
-Link: https://lore.kernel.org/r/20211016084910.4029084-7-bigeasy@linutronix.de
+Signed-off-by: David S. Miller <davem@davemloft.net>
 ---
 include/net/gen_stats.h | 2 ++
 include/net/sch_generic.h | 2 ++
@@ -45,7 +45,7 @@ Link: https://lore.kernel.org/r/20211016084910.4029084-7-bigeasy@linutronix.de
 net/sched/sch_atm.c | 1 +
 net/sched/sch_cbq.c | 1 +
 net/sched/sch_drr.c | 1 +
- net/sched/sch_ets.c | 1 +
+ net/sched/sch_ets.c | 2 +-
 net/sched/sch_generic.c | 1 +
 net/sched/sch_gred.c | 4 +++-
 net/sched/sch_hfsc.c | 1 +
@@ -53,7 +53,7 @@ Link: https://lore.kernel.org/r/20211016084910.4029084-7-bigeasy@linutronix.de
 net/sched/sch_mq.c | 2 +-
 net/sched/sch_mqprio.c | 5 +++--
 net/sched/sch_qfq.c | 1 +
- 17 files changed, 39 insertions(+), 9 deletions(-)
+ 17 files changed, 39 insertions(+), 10 deletions(-)

 --- a/include/net/gen_stats.h
 +++ b/include/net/gen_stats.h
@@ -188,14 +188,15 @@ Link: https://lore.kernel.org/r/20211016084910.4029084-7-bigeasy@linutronix.de
 cl->qdisc = qdisc_create_dflt(sch->dev_queue,
 --- a/net/sched/sch_ets.c
 +++ b/net/sched/sch_ets.c
-@@ -662,6 +662,7 @@ static int ets_qdisc_change(struct Qdisc
- q->nbands = nbands;
- for (i = nstrict; i < q->nstrict; i++) {
- INIT_LIST_HEAD(&q->classes[i].alist);
+@@ -689,7 +689,7 @@ static int ets_qdisc_change(struct Qdisc
+ q->classes[i].qdisc = NULL;
+ q->classes[i].quantum = 0;
+ q->classes[i].deficit = 0;
+- memset(&q->classes[i].bstats, 0, sizeof(q->classes[i].bstats));
 + gnet_stats_basic_packed_init(&q->classes[i].bstats);
- if (q->classes[i].qdisc->q.qlen) {
- list_add_tail(&q->classes[i].alist, &q->active);
- q->classes[i].deficit = quanta[i];
+ memset(&q->classes[i].qstats, 0, sizeof(q->classes[i].qstats));
+ }
+ return 0;
 --- a/net/sched/sch_generic.c
 +++ b/net/sched/sch_generic.c
 @@ -892,6 +892,7 @@ struct Qdisc *qdisc_alloc(struct netdev_
diff --git a/patches/0007_net_sched_use__bstats_update_set_instead_of_raw_writes.patch b/patches/0007-net-sched-Use-_bstats_update-set-instead-of-raw-writ.patch
index 3fa7bdcf7558..e849692053d9 100644
--- a/patches/0007_net_sched_use__bstats_update_set_instead_of_raw_writes.patch
+++ b/patches/0007-net-sched-Use-_bstats_update-set-instead-of-raw-writ.patch
@@ -1,6 +1,7 @@
-From: Ahmed S. Darwish <a.darwish@linutronix.de>
-Subject: net: sched: Use _bstats_update/set() instead of raw writes
+From: "Ahmed S. Darwish" <a.darwish@linutronix.de>
 Date: Sat, 16 Oct 2021 10:49:08 +0200
+Subject: [PATCH 7/9] net: sched: Use _bstats_update/set() instead of raw
+ writes

 The Qdisc::running sequence counter, used to protect Qdisc::bstats reads
 from parallel writes, is in the process of being removed. Qdisc::bstats
@@ -13,7 +14,7 @@ appropriate.

 Signed-off-by: Ahmed S. Darwish <a.darwish@linutronix.de>
 Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
-Link: https://lore.kernel.org/r/20211016084910.4029084-8-bigeasy@linutronix.de
+Signed-off-by: David S. Miller <davem@davemloft.net>
 ---
 net/core/gen_stats.c | 9 +++++----
 net/sched/sch_cbq.c | 3 +--
diff --git a/patches/0008_net_sched_merge_qdisc_bstats_and_qdisc_cpu_bstats_data_types.patch b/patches/0008-net-sched-Merge-Qdisc-bstats-and-Qdisc-cpu_bstats-da.patch
index cacf560fbf9a..73aca29667c8 100644
--- a/patches/0008_net_sched_merge_qdisc_bstats_and_qdisc_cpu_bstats_data_types.patch
+++ b/patches/0008-net-sched-Merge-Qdisc-bstats-and-Qdisc-cpu_bstats-da.patch
@@ -1,6 +1,7 @@
-From: Ahmed S. Darwish <a.darwish@linutronix.de>
-Subject: net: sched: Merge Qdisc::bstats and Qdisc::cpu_bstats data types
+From: "Ahmed S. Darwish" <a.darwish@linutronix.de>
 Date: Sat, 16 Oct 2021 10:49:09 +0200
+Subject: [PATCH 8/9] net: sched: Merge Qdisc::bstats and Qdisc::cpu_bstats
+ data types

 The only factor differentiating per-CPU bstats data type (struct
 gnet_stats_basic_cpu) from the packed non-per-CPU one (struct
@@ -17,7 +18,7 @@ protection.

 Signed-off-by: Ahmed S. Darwish <a.darwish@linutronix.de>
 Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
-Link: https://lore.kernel.org/r/20211016084910.4029084-9-bigeasy@linutronix.de
+Signed-off-by: David S. Miller <davem@davemloft.net>
 ---
 drivers/net/ethernet/netronome/nfp/abm/qdisc.c | 2
 include/net/act_api.h | 10 ++--
@@ -776,15 +777,15 @@ Link: https://lore.kernel.org/r/20211016084910.4029084-9-bigeasy@linutronix.de
 struct gnet_stats_queue qstats;
 };
-@@ -662,7 +662,7 @@ static int ets_qdisc_change(struct Qdisc
- q->nbands = nbands;
- for (i = nstrict; i < q->nstrict; i++) {
- INIT_LIST_HEAD(&q->classes[i].alist);
+@@ -689,7 +689,7 @@ static int ets_qdisc_change(struct Qdisc
+ q->classes[i].qdisc = NULL;
+ q->classes[i].quantum = 0;
+ q->classes[i].deficit = 0;
- gnet_stats_basic_packed_init(&q->classes[i].bstats);
+ gnet_stats_basic_sync_init(&q->classes[i].bstats);
- if (q->classes[i].qdisc->q.qlen) {
- list_add_tail(&q->classes[i].alist, &q->active);
- q->classes[i].deficit = quanta[i];
+ memset(&q->classes[i].qstats, 0, sizeof(q->classes[i].qstats));
+ }
+ return 0;
 --- a/net/sched/sch_generic.c
 +++ b/net/sched/sch_generic.c
 @@ -892,12 +892,12 @@ struct Qdisc *qdisc_alloc(struct netdev_
diff --git a/patches/0009_net_sched_remove_qdisc_running_sequence_counter.patch b/patches/0009-net-sched-Remove-Qdisc-running-sequence-counter.patch
index 99df0b33dbb9..ee6cb3f0e3bc 100644
--- a/patches/0009_net_sched_remove_qdisc_running_sequence_counter.patch
+++ b/patches/0009-net-sched-Remove-Qdisc-running-sequence-counter.patch
@@ -1,6 +1,6 @@
-From: Ahmed S. Darwish <a.darwish@linutronix.de>
-Subject: net: sched: Remove Qdisc::running sequence counter
+From: "Ahmed S. Darwish" <a.darwish@linutronix.de>
 Date: Sat, 16 Oct 2021 10:49:10 +0200
+Subject: [PATCH 9/9] net: sched: Remove Qdisc::running sequence counter

 The Qdisc::running sequence counter has two uses:

@@ -37,7 +37,7 @@ values will still be valid.

 Signed-off-by: Ahmed S. Darwish <a.darwish@linutronix.de>
 Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
-Link: https://lore.kernel.org/r/20211016084910.4029084-10-bigeasy@linutronix.de
+Signed-off-by: David S. Miller <davem@davemloft.net>
 ---
 include/linux/netdevice.h | 4 ---
 include/net/gen_stats.h | 19 +++++++----------
diff --git a/patches/Add_localversion_for_-RT_release.patch b/patches/Add_localversion_for_-RT_release.patch
index 7bc90935ffb7..7b0058411028 100644
--- a/patches/Add_localversion_for_-RT_release.patch
+++ b/patches/Add_localversion_for_-RT_release.patch
@@ -15,4 +15,4 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
 --- /dev/null
 +++ b/localversion-rt
 @@ -0,0 +1 @@
-+-rt12
++-rt13
diff --git a/patches/net-sched-Allow-statistics-reads-from-softirq.patch b/patches/net-sched-Allow-statistics-reads-from-softirq.patch
new file mode 100644
index 000000000000..d4a8504c3988
--- /dev/null
+++ b/patches/net-sched-Allow-statistics-reads-from-softirq.patch
@@ -0,0 +1,33 @@
+From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+Date: Tue, 19 Oct 2021 12:12:04 +0200
+Subject: [PATCH] net: sched: Allow statistics reads from softirq.
+
+Eric reported that the rate estimator reads statics from the softirq
+which in turn triggers a warning introduced in the statistics rework.
+
+The warning is too cautious. The updates happen in the softirq context
+so reads from softirq are fine since the writes can not be preempted.
+The updates/writes happen during qdisc_run() which ensures one writer
+and the softirq context.
+The remaining bad context for reading statistics remains in hard-IRQ
+because it may preempt a writer.
+
+Fixes: 29cbcd8582837 ("net: sched: Remove Qdisc::running sequence counter")
+Reported-by: Eric Dumazet <eric.dumazet@gmail.com>
+Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+Signed-off-by: David S. Miller <davem@davemloft.net>
+---
+ net/core/gen_stats.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/net/core/gen_stats.c
++++ b/net/core/gen_stats.c
+@@ -154,7 +154,7 @@ void gnet_stats_add_basic(struct gnet_st
+ u64 bytes = 0;
+ u64 packets = 0;
+
+- WARN_ON_ONCE((cpu || running) && !in_task());
++ WARN_ON_ONCE((cpu || running) && in_hardirq());
+
+ if (cpu) {
+ gnet_stats_add_basic_cpu(bstats, cpu);
diff --git a/patches/net-sched-fix-logic-error-in-qdisc_run_begin.patch b/patches/net-sched-fix-logic-error-in-qdisc_run_begin.patch
new file mode 100644
index 000000000000..700606934ffb
--- /dev/null
+++ b/patches/net-sched-fix-logic-error-in-qdisc_run_begin.patch
@@ -0,0 +1,35 @@
+From: Eric Dumazet <edumazet@google.com>
+Date: Mon, 18 Oct 2021 17:34:01 -0700
+Subject: [PATCH] net: sched: fix logic error in qdisc_run_begin()
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+For non TCQ_F_NOLOCK qdisc, qdisc_run_begin() tries to set
+__QDISC_STATE_RUNNING and should return true if the bit was not set.
+
+test_and_set_bit() returns old bit value, therefore we need to invert.
+
+Fixes: 29cbcd858283 ("net: sched: Remove Qdisc::running sequence counter")
+Signed-off-by: Eric Dumazet <edumazet@google.com>
+Cc: Ahmed S. Darwish <a.darwish@linutronix.de>
+Tested-by: Ido Schimmel <idosch@nvidia.com>
+Acked-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+Tested-by: Toke Høiland-Jørgensen <toke@redhat.com>
+Signed-off-by: Jakub Kicinski <kuba@kernel.org>
+Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+---
+ include/net/sch_generic.h | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/include/net/sch_generic.h
++++ b/include/net/sch_generic.h
+@@ -217,7 +217,7 @@ static inline bool qdisc_run_begin(struc
+ */
+ return spin_trylock(&qdisc->seqlock);
+ }
+- return test_and_set_bit(__QDISC_STATE_RUNNING, &qdisc->state);
++ return !test_and_set_bit(__QDISC_STATE_RUNNING, &qdisc->state);
+ }
+
+ static inline void qdisc_run_end(struct Qdisc *qdisc)
diff --git a/patches/net-sched-remove-one-pair-of-atomic-operations.patch b/patches/net-sched-remove-one-pair-of-atomic-operations.patch
new file mode 100644
index 000000000000..ba8f0172e9f8
--- /dev/null
+++ b/patches/net-sched-remove-one-pair-of-atomic-operations.patch
@@ -0,0 +1,75 @@
+From: Eric Dumazet <edumazet@google.com>
+Date: Mon, 18 Oct 2021 17:34:02 -0700
+Subject: [PATCH] net: sched: remove one pair of atomic operations
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+__QDISC_STATE_RUNNING is only set/cleared from contexts owning qdisc lock.
+
+Thus we can use less expensive bit operations, as we were doing
+before commit f9eb8aea2a1e ("net_sched: transform qdisc running bit into a seqcount")
+
+Fixes: 29cbcd858283 ("net: sched: Remove Qdisc::running sequence counter")
+Signed-off-by: Eric Dumazet <edumazet@google.com>
+Cc: Ahmed S. Darwish <a.darwish@linutronix.de>
+Acked-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+Tested-by: Toke Høiland-Jørgensen <toke@redhat.com>
+Signed-off-by: Jakub Kicinski <kuba@kernel.org>
+Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+---
+ include/net/sch_generic.h | 12 ++++++++----
+ 1 file changed, 8 insertions(+), 4 deletions(-)
+
+--- a/include/net/sch_generic.h
++++ b/include/net/sch_generic.h
+@@ -38,10 +38,13 @@ enum qdisc_state_t {
+ __QDISC_STATE_DEACTIVATED,
+ __QDISC_STATE_MISSED,
+ __QDISC_STATE_DRAINING,
++};
++
++enum qdisc_state2_t {
+ /* Only for !TCQ_F_NOLOCK qdisc. Never access it directly.
+ * Use qdisc_run_begin/end() or qdisc_is_running() instead.
+ */
+- __QDISC_STATE_RUNNING,
++ __QDISC_STATE2_RUNNING,
+ };
+
+ #define QDISC_STATE_MISSED BIT(__QDISC_STATE_MISSED)
+@@ -114,6 +117,7 @@ struct Qdisc {
+ struct gnet_stats_basic_sync bstats;
+ struct gnet_stats_queue qstats;
+ unsigned long state;
++ unsigned long state2; /* must be written under qdisc spinlock */
+ struct Qdisc *next_sched;
+ struct sk_buff_head skb_bad_txq;
+
+@@ -154,7 +158,7 @@ static inline bool qdisc_is_running(stru
+ {
+ if (qdisc->flags & TCQ_F_NOLOCK)
+ return spin_is_locked(&qdisc->seqlock);
+- return test_bit(__QDISC_STATE_RUNNING, &qdisc->state);
++ return test_bit(__QDISC_STATE2_RUNNING, &qdisc->state2);
+ }
+
+ static inline bool nolock_qdisc_is_empty(const struct Qdisc *qdisc)
+@@ -217,7 +221,7 @@ static inline bool qdisc_run_begin(struc
+ */
+ return spin_trylock(&qdisc->seqlock);
+ }
+- return !test_and_set_bit(__QDISC_STATE_RUNNING, &qdisc->state);
++ return !__test_and_set_bit(__QDISC_STATE2_RUNNING, &qdisc->state2);
+ }
+
+ static inline void qdisc_run_end(struct Qdisc *qdisc)
+@@ -229,7 +233,7 @@ static inline void qdisc_run_end(struct
+ &qdisc->state)))
+ __netif_schedule(qdisc);
+ } else {
+- clear_bit(__QDISC_STATE_RUNNING, &qdisc->state);
++ __clear_bit(__QDISC_STATE2_RUNNING, &qdisc->state2);
+ }
+ }
+
diff --git a/patches/net-sched-sch_ets-properly-init-all-active-DRR-list-.patch b/patches/net-sched-sch_ets-properly-init-all-active-DRR-list-.patch
new file mode 100644
index 000000000000..16c8aa4c3037
--- /dev/null
+++ b/patches/net-sched-sch_ets-properly-init-all-active-DRR-list-.patch
@@ -0,0 +1,65 @@
+From: Davide Caratti <dcaratti@redhat.com>
+Date: Thu, 7 Oct 2021 15:05:02 +0200
+Subject: [PATCH] net/sched: sch_ets: properly init all active DRR list handles
+
+leaf classes of ETS qdiscs are served in strict priority or deficit round
+robin (DRR), depending on the value of 'nstrict'. Since this value can be
+changed while traffic is running, we need to be sure that the active list
+of DRR classes can be updated at any time, so:
+
+1) call INIT_LIST_HEAD(&alist) on all leaf classes in .init(), before the
+ first packet hits any of them.
+2) ensure that 'alist' is not overwritten with zeros when a leaf class is
+ no more strict priority nor DRR (i.e. array elements beyond 'nbands').
+
+Link: https://lore.kernel.org/netdev/YS%2FoZ+f0Nr8eQkzH@dcaratti.users.ipa.redhat.com
+Suggested-by: Cong Wang <cong.wang@bytedance.com>
+Signed-off-by: Davide Caratti <dcaratti@redhat.com>
+Signed-off-by: David S. Miller <davem@davemloft.net>
+Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+---
+ net/sched/sch_ets.c | 12 +++++++++---
+ 1 file changed, 9 insertions(+), 3 deletions(-)
+
+--- a/net/sched/sch_ets.c
++++ b/net/sched/sch_ets.c
+@@ -661,7 +661,6 @@ static int ets_qdisc_change(struct Qdisc
+
+ q->nbands = nbands;
+ for (i = nstrict; i < q->nstrict; i++) {
+- INIT_LIST_HEAD(&q->classes[i].alist);
+ if (q->classes[i].qdisc->q.qlen) {
+ list_add_tail(&q->classes[i].alist, &q->active);
+ q->classes[i].deficit = quanta[i];
+@@ -687,7 +686,11 @@ static int ets_qdisc_change(struct Qdisc
+ ets_offload_change(sch);
+ for (i = q->nbands; i < oldbands; i++) {
+ qdisc_put(q->classes[i].qdisc);
+- memset(&q->classes[i], 0, sizeof(q->classes[i]));
++ q->classes[i].qdisc = NULL;
++ q->classes[i].quantum = 0;
++ q->classes[i].deficit = 0;
++ memset(&q->classes[i].bstats, 0, sizeof(q->classes[i].bstats));
++ memset(&q->classes[i].qstats, 0, sizeof(q->classes[i].qstats));
+ }
+ return 0;
+ }
+@@ -696,7 +699,7 @@ static int ets_qdisc_init(struct Qdisc *
+ struct netlink_ext_ack *extack)
+ {
+ struct ets_sched *q = qdisc_priv(sch);
+- int err;
++ int err, i;
+
+ if (!opt)
+ return -EINVAL;
+@@ -706,6 +709,9 @@ static int ets_qdisc_init(struct Qdisc *
+ return err;
+
+ INIT_LIST_HEAD(&q->active);
++ for (i = 0; i < TCQ_ETS_MAX_BANDS; i++)
++ INIT_LIST_HEAD(&q->classes[i].alist);
++
+ return ets_qdisc_change(sch, opt, extack);
+ }
+
diff --git a/patches/net-stats-Read-the-statistics-in-___gnet_stats_copy_.patch b/patches/net-stats-Read-the-statistics-in-___gnet_stats_copy_.patch
new file mode 100644
index 000000000000..84d7e034c9f6
--- /dev/null
+++ b/patches/net-stats-Read-the-statistics-in-___gnet_stats_copy_.patch
@@ -0,0 +1,89 @@
+From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+Date: Thu, 21 Oct 2021 11:59:19 +0200
+Subject: [PATCH] net: stats: Read the statistics in ___gnet_stats_copy_basic()
+ instead of adding.
+
+Since the rework, the statistics code always adds up the byte and packet
+value(s). On 32bit architectures a seqcount_t is used in
+gnet_stats_basic_sync to ensure that the 64bit values are not modified
+during the read since two 32bit loads are required. The usage of a
+seqcount_t requires a lock to ensure that only one writer is active at a
+time. This lock leads to disabled preemption during the update.
+
+The lack of disabling preemption is now creating a warning as reported
+by Naresh since the query done by gnet_stats_copy_basic() is in
+preemptible context.
+
+For ___gnet_stats_copy_basic() there is no need to disable preemption
+since the update is performed on stack and can't be modified by another
+writer. Instead of disabling preemption, to avoid the warning,
+simply create a read function to just read the values and return as u64.
+
+Reported-by: Naresh Kamboju <naresh.kamboju@linaro.org>
+Fixes: 67c9e6270f301 ("net: sched: Protect Qdisc::bstats with u64_stats")
+Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+Link: https://lore.kernel.org/r/20211021095919.bi3szpt3c2kcoiso@linutronix.de
+---
+ net/core/gen_stats.c | 43 +++++++++++++++++++++++++++++++++++++------
+ 1 file changed, 37 insertions(+), 6 deletions(-)
+
+--- a/net/core/gen_stats.c
++++ b/net/core/gen_stats.c
+@@ -171,20 +171,51 @@ void gnet_stats_add_basic(struct gnet_st
+ }
+ EXPORT_SYMBOL(gnet_stats_add_basic);
+
++static void gnet_stats_read_basic(u64 *ret_bytes, u64 *ret_packets,
++ struct gnet_stats_basic_sync __percpu *cpu,
++ struct gnet_stats_basic_sync *b, bool running)
++{
++ unsigned int start;
++
++ if (cpu) {
++ u64 t_bytes = 0, t_packets = 0;
++ int i;
++
++ for_each_possible_cpu(i) {
++ struct gnet_stats_basic_sync *bcpu = per_cpu_ptr(cpu, i);
++ unsigned int start;
++ u64 bytes, packets;
++
++ do {
++ start = u64_stats_fetch_begin_irq(&bcpu->syncp);
++ bytes = u64_stats_read(&bcpu->bytes);
++ packets = u64_stats_read(&bcpu->packets);
++ } while (u64_stats_fetch_retry_irq(&bcpu->syncp, start));
++
++ t_bytes += bytes;
++ t_packets += packets;
++ }
++ *ret_bytes = t_bytes;
++ *ret_packets = t_packets;
++ return;
++ }
++ do {
++ if (running)
++ start = u64_stats_fetch_begin_irq(&b->syncp);
++ *ret_bytes = u64_stats_read(&b->bytes);
++ *ret_packets = u64_stats_read(&b->packets);
++ } while (running && u64_stats_fetch_retry_irq(&b->syncp, start));
++}
++
+ static int
+ ___gnet_stats_copy_basic(struct gnet_dump *d,
+ struct gnet_stats_basic_sync __percpu *cpu,
+ struct gnet_stats_basic_sync *b,
+ int type, bool running)
+ {
+- struct gnet_stats_basic_sync bstats;
+ u64 bstats_bytes, bstats_packets;
+
+- gnet_stats_basic_sync_init(&bstats);
+- gnet_stats_add_basic(&bstats, cpu, b, running);
+-
+- bstats_bytes = u64_stats_read(&bstats.bytes);
+- bstats_packets = u64_stats_read(&bstats.packets);
++ gnet_stats_read_basic(&bstats_bytes, &bstats_packets, cpu, b, running);
+
+ if (d->compat_tc_stats && type == TCA_STATS_BASIC) {
+ d->tc_stats.bytes = bstats_bytes;
diff --git a/patches/series b/patches/series
index 92fdb8989de3..e1d4d3cc3d63 100644
--- a/patches/series
+++ b/patches/series
@@ -40,6 +40,9 @@ mm-Disable-zsmalloc-on-PREEMPT_RT.patch
 net-core-disable-NET_RX_BUSY_POLL-on-PREEMPT_RT.patch
 samples_kfifo__Rename_read_lock_write_lock.patch
 crypto_testmgr_only_disable_migration_in_crypto_disable_simd_for_test.patch
+mm_allow_only_slub_on_preempt_rt.patch
+mm_page_alloc_use_migrate_disable_in_drain_local_pages_wq.patch
+mm_scatterlist_replace_the_preemptible_warning_in_sg_miter_stop.patch

 # KCOV (akpm)
 0001_documentation_kcov_include_types_h_in_the_example.patch
 0004_kcov_avoid_enable_disable_interrupts_if_in_task.patch
 0005_kcov_replace_local_irq_save_with_a_local_lock_t.patch

+# net-next, Qdics's seqcount removal.
+net-sched-sch_ets-properly-init-all-active-DRR-list-.patch
+0001-gen_stats-Add-instead-Set-the-value-in-__gnet_stats_.patch
+0002-gen_stats-Add-gnet_stats_add_queue.patch
+0003-mq-mqprio-Use-gnet_stats_add_queue.patch
+0004-gen_stats-Move-remaining-users-to-gnet_stats_add_que.patch
+0005-u64_stats-Introduce-u64_stats_set.patch
+0006-net-sched-Protect-Qdisc-bstats-with-u64_stats.patch
+0007-net-sched-Use-_bstats_update-set-instead-of-raw-writ.patch
+0008-net-sched-Merge-Qdisc-bstats-and-Qdisc-cpu_bstats-da.patch
+0009-net-sched-Remove-Qdisc-running-sequence-counter.patch
+net-sched-Allow-statistics-reads-from-softirq.patch
+net-sched-fix-logic-error-in-qdisc_run_begin.patch
+net-sched-remove-one-pair-of-atomic-operations.patch
+net-stats-Read-the-statistics-in-___gnet_stats_copy_.patch
+
+# tip, irqwork
+0001_sched_rt_annotate_the_rt_balancing_logic_irqwork_as_irq_work_hard_irq.patch
+0002_irq_work_allow_irq_work_sync_to_sleep_if_irq_work_no_irq_support.patch
+0003_irq_work_handle_some_irq_work_in_a_per_cpu_thread_on_preempt_rt.patch
+0004_irq_work_also_rcuwait_for_irq_work_hard_irq_on_preempt_rt.patch
+
 ###########################################################################
 # Posted
 ###########################################################################
 irq_poll-Use-raise_softirq_irqoff-in-cpu_dead-notifi.patch
 smp_wake_ksoftirqd_on_preempt_rt_instead_do_softirq.patch
 x86-softirq-Disable-softirq-stacks-on-PREEMPT_RT.patch
-mm_allow_only_slub_on_preempt_rt.patch
-mm_page_alloc_use_migrate_disable_in_drain_local_pages_wq.patch
-mm_scatterlist_replace_the_preemptible_warning_in_sg_miter_stop.patch

 # sched
 0001_sched_clean_up_the_might_sleep_underscore_zoo.patch
 0004_sched_delay_task_stack_freeing_on_rt.patch
 0005_sched_move_mmdrop_to_rcu_on_rt.patch

-# irqwork: Needs upstream consolidation
-0001_sched_rt_annotate_the_rt_balancing_logic_irqwork_as_irq_work_hard_irq.patch
-0002_irq_work_allow_irq_work_sync_to_sleep_if_irq_work_no_irq_support.patch
-0003_irq_work_handle_some_irq_work_in_a_per_cpu_thread_on_preempt_rt.patch
-0004_irq_work_also_rcuwait_for_irq_work_hard_irq_on_preempt_rt.patch
-
-# Qdics's seqcount removal.
-0001_gen_stats_add_instead_set_the_value_in___gnet_stats_copy_basic.patch
-0002_gen_stats_add_gnet_stats_add_queue.patch
-0003_mq_mqprio_use_gnet_stats_add_queue.patch
-0004_gen_stats_move_remaining_users_to_gnet_stats_add_queue.patch
-0005_u64_stats_introduce_u64_stats_set.patch
-0006_net_sched_protect_qdisc_bstats_with_u64_stats.patch
-0007_net_sched_use__bstats_update_set_instead_of_raw_writes.patch
-0008_net_sched_merge_qdisc_bstats_and_qdisc_cpu_bstats_data_types.patch
-0009_net_sched_remove_qdisc_running_sequence_counter.patch
-
 ###########################################################################
 # Post
 ###########################################################################