author    Kevin Traynor <ktraynor@redhat.com>    2018-05-23 14:41:30 +0100
committer Ian Stokes <ian.stokes@intel.com>      2018-05-23 16:37:53 +0100
commit    d620091dd371386b11e6f6b3a6b7b250860e267d
tree      cc6d92eb19c93875372fe04fcfceb9e5819facef
parent    1c6f23af5ab95dce109e099b32fa5693caa14058
netdev-dpdk: Remove use of rte_mempool_ops_get_count.
rte_mempool_ops_get_count is not exported by DPDK, so it cannot be
used by OVS when DPDK is used as a shared library.

Remove the use of rte_mempool_ops_get_count, continue to use
rte_mempool_full, and document its behavior.
Fixes: 91fccdad72a2 ("netdev-dpdk: Free mempool only when no in-use mbufs.")
Reported-by: Timothy Redaelli <tredaelli@redhat.com>
Reported-by: Markos Chandras <mchandras@suse.de>
Signed-off-by: Kevin Traynor <ktraynor@redhat.com>
Signed-off-by: Ian Stokes <ian.stokes@intel.com>
 lib/netdev-dpdk.c | 25 +++++++++++++-------------
 1 file changed, 13 insertions(+), 12 deletions(-)
diff --git a/lib/netdev-dpdk.c b/lib/netdev-dpdk.c
index 84744a3ab..46b81d942 100644
--- a/lib/netdev-dpdk.c
+++ b/lib/netdev-dpdk.c
@@ -471,19 +471,20 @@ ovs_rte_pktmbuf_init(struct rte_mempool *mp,
 static int
 dpdk_mp_full(const struct rte_mempool *mp) OVS_REQUIRES(dpdk_mutex)
 {
-    unsigned ring_count;
-    /* This logic is needed because rte_mempool_full() is not guaranteed to
-     * be atomic and mbufs could be moved from mempool cache --> mempool ring
-     * during the call. However, as no mbufs will be taken from the mempool
-     * at this time, we can work around it by also checking the ring entries
-     * separately and ensuring that they have not changed.
+    /* At this point we want to know if all the mbufs are back
+     * in the mempool. rte_mempool_full() is not atomic but it's
+     * the best available and as we are no longer requesting mbufs
+     * from the mempool, it means mbufs will not move from
+     * 'mempool ring' --> 'mempool cache'. In rte_mempool_full()
+     * the ring is counted before caches, so we won't get false
+     * positives in this use case and we handle false negatives.
+     *
+     * If future implementations of rte_mempool_full() were to change
+     * it could be possible for a false positive. Even that would
+     * likely be ok, as there are additional checks during mempool
+     * freeing but it would make things racey.
      */
-    ring_count = rte_mempool_ops_get_count(mp);
-    if (rte_mempool_full(mp) && rte_mempool_ops_get_count(mp) == ring_count) {
-        return 1;
-    }
-
-    return 0;
+    return rte_mempool_full(mp);
 }
 
 /* Free unused mempools. */
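The reasoning behind the change can be illustrated with a simplified model. This is a hedged sketch, not the real DPDK implementation: `mock_mempool`, `mock_avail_count`, and `mock_mempool_full` are hypothetical stand-ins for `rte_mempool` and `rte_mempool_full()`. The key property the commit relies on is that the available count sums the common ring before the per-lcore caches, so an mbuf migrating from a cache to the ring mid-count can only be missed (a false negative, which is retried), never double-counted (a false positive).

```c
/* Hypothetical, simplified model of how a mempool fullness check works.
 * Not the actual DPDK structures or API. */
struct mock_mempool {
    unsigned size;        /* total mbufs the pool was created with */
    unsigned ring_count;  /* mbufs sitting in the common ring */
    unsigned cache_count; /* mbufs sitting in per-lcore caches */
};

static unsigned
mock_avail_count(const struct mock_mempool *mp)
{
    /* Ring is summed before the caches, mirroring the ordering the
     * commit message depends on: while no mbufs are being requested,
     * mbufs can only move cache --> ring, so counting in this order
     * can undercount (false negative) but never overcount. */
    return mp->ring_count + mp->cache_count;
}

static int
mock_mempool_full(const struct mock_mempool *mp)
{
    /* Full when every mbuf is back in the ring or a cache. */
    return mock_avail_count(mp) == mp->size;
}
```

As the commit message notes, a false negative here is harmless (the caller simply retries the free later), whereas a false positive could free a mempool with mbufs still in flight, which is why the count ordering matters.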