commit 1f84a2d5b57b9db62abfb7cfe79385f5768005bb (patch)
author:    Kevin Traynor <ktraynor@redhat.com>    2018-05-23 14:41:30 +0100
committer: Ian Stokes <ian.stokes@intel.com>      2018-05-25 09:09:50 +0100
tree:      4f7e8eaa2dbc1a6799af17689fffd212c212bb04 /lib
parent:    55b259471719ceca1f0083bdd6a5f8c3e7690bae (diff)
netdev-dpdk: Remove use of rte_mempool_ops_get_count.
rte_mempool_ops_get_count is not exported by DPDK, which means it
cannot be used by OVS when DPDK is built as a shared library.

Remove the use of rte_mempool_ops_get_count, but keep using
rte_mempool_full and document its behavior.
Fixes: 91fccdad72a2 ("netdev-dpdk: Free mempool only when no in-use mbufs.")
Reported-by: Timothy Redaelli <tredaelli@redhat.com>
Reported-by: Markos Chandras <mchandras@suse.de>
Signed-off-by: Kevin Traynor <ktraynor@redhat.com>
Signed-off-by: Ian Stokes <ian.stokes@intel.com>
Diffstat (limited to 'lib')
-rw-r--r--   lib/netdev-dpdk.c | 25
1 file changed, 13 insertions(+), 12 deletions(-)
```diff
diff --git a/lib/netdev-dpdk.c b/lib/netdev-dpdk.c
index 46f32a91e..0400bddc9 100644
--- a/lib/netdev-dpdk.c
+++ b/lib/netdev-dpdk.c
@@ -529,19 +529,20 @@ ovs_rte_pktmbuf_init(struct rte_mempool *mp OVS_UNUSED,
 static int
 dpdk_mp_full(const struct rte_mempool *mp) OVS_REQUIRES(dpdk_mp_mutex)
 {
-    unsigned ring_count;
-    /* This logic is needed because rte_mempool_full() is not guaranteed to
-     * be atomic and mbufs could be moved from mempool cache --> mempool ring
-     * during the call. However, as no mbufs will be taken from the mempool
-     * at this time, we can work around it by also checking the ring entries
-     * separately and ensuring that they have not changed.
+    /* At this point we want to know if all the mbufs are back
+     * in the mempool. rte_mempool_full() is not atomic but it's
+     * the best available and as we are no longer requesting mbufs
+     * from the mempool, it means mbufs will not move from
+     * 'mempool ring' --> 'mempool cache'. In rte_mempool_full()
+     * the ring is counted before caches, so we won't get false
+     * positives in this use case and we handle false negatives.
+     *
+     * If future implementations of rte_mempool_full() were to change
+     * it could be possible for a false positive. Even that would
+     * likely be ok, as there are additional checks during mempool
+     * freeing but it would make things racey.
      */
-    ring_count = rte_mempool_ops_get_count(mp);
-    if (rte_mempool_full(mp) && rte_mempool_ops_get_count(mp) == ring_count) {
-        return 1;
-    }
-
-    return 0;
+    return rte_mempool_full(mp);
 }
 
 /* Free unused mempools. */
```