path: root/lib/cmap.c
* cmap: Fix hashing in cmap_find_protected().  (Zang MingJie, 2018-12-24; 1 file, -1/+1)

    cmap_find_protected() calculated the wrong h2 hash, which caused
    entries with duplicate ids to be inserted into the cmap.

    Reported-at: https://mail.openvswitch.org/pipermail/ovs-discuss/2018-December/047945.html
    Signed-off-by: Zang MingJie <zealot0630@gmail.com>
    Signed-off-by: Ben Pfaff <blp@ovn.org>
* dpif-netdev: Add SMC cache after EMC cache.  (Yipeng Wang, 2018-07-24; 1 file, -0/+74)

    This patch adds a signature match cache (SMC) after the exact match
    cache (EMC).  The difference is that SMC stores only a signature of
    a flow, so it is much more memory efficient: in the same memory
    space, EMC can store 8k flows while SMC can store 1M flows.  It is
    generally beneficial to turn on SMC and turn off EMC when the
    traffic flow count is much larger than the EMC size.

    SMC maps a signature to a dp_netdev_flow index in the flow_table.
    Thus, we add two new APIs in cmap for looking up a key by index and
    an index by key.

    For now, SMC is an experimental feature and is turned off by
    default.  One can turn it on using ovsdb options.

    Signed-off-by: Yipeng Wang <yipeng1.wang@intel.com>
    Co-authored-by: Jan Scheurich <jan.scheurich@ericsson.com>
    Signed-off-by: Jan Scheurich <jan.scheurich@ericsson.com>
    Acked-by: Billy O'Mahony <billy.o.mahony@intel.com>
    Signed-off-by: Ian Stokes <ian.stokes@intel.com>
* util: Document and rely on ovs_assert() always evaluating its argument.  (Ben Pfaff, 2018-02-01; 1 file, -4/+2)

    The ovs_assert() macro always evaluates its argument, even when
    NDEBUG is defined so that failure is ignored.  This behavior wasn't
    documented, and thus a lot of code didn't rely on it.  This commit
    documents the behavior and simplifies bits of code that heretofore
    didn't rely on it.

    Signed-off-by: Ben Pfaff <blp@ovn.org>
    Reviewed-by: Yifeng Sun <pkusunyifeng@gmail.com>
* cmap: Use PADDED_MEMBERS macro for cmap_bucket padding.  (Ilya Maximets, 2017-11-30; 1 file, -24/+19)

    The current implementation of manual padding inside struct
    cmap_bucket doesn't work for some cacheline sizes.  For example, if
    CACHE_LINE_SIZE equals 128, the compiler adds an additional 8
    bytes: 4 bytes between 'hashes' and 'nodes' and 4 bytes after the
    manual 'pad'.  This leads to a build-time assertion, because
    sizeof(struct cmap_bucket) == 136.

    Fix that by using the PADDED_MEMBERS macro, which handles any
    unexpected compiler padding.  This is safe because we still have a
    build-time assert for the structure size.  Another possible
    solution would be to pack the structure, but the padding macro
    looks better and matches the other code.

    Signed-off-by: Ilya Maximets <i.maximets@samsung.com>
    Signed-off-by: Ben Pfaff <blp@ovn.org>
* cmap: Use PADDED_MEMBERS_CACHELINE_MARKER in cmap_impl.  (Bhanuprakash Bodireddy, 2017-11-03; 1 file, -7/+12)

    Instead of explicitly adding pad bytes to force the structure to an
    exact multiple of the cacheline size, let the macro do the job.
    This way the pad bytes are adjusted automatically when new members
    are introduced into the structure.

    Signed-off-by: Bhanuprakash Bodireddy <bhanuprakash.bodireddy@intel.com>
    Signed-off-by: Ben Pfaff <blp@ovn.org>
* cmap: New macro CMAP_INITIALIZER, for initializing an empty cmap.  (Ben Pfaff, 2016-05-09; 1 file, -12/+18)

    Sometimes code is much simpler if we can statically initialize data
    structures.  Until now, this has not been possible for cmap-based
    data structures, so this commit introduces a CMAP_INITIALIZER
    macro.

    This works by adding a singleton empty cmap_impl that simply forces
    the first insertion into any cmap that points to it to allocate a
    real cmap_impl.  There could be some risk that rogue code modifies
    the singleton, so for safety it is also marked 'const' to allow the
    linker to put it into a read-only page.

    This adds a new OVS_ALIGNED_VAR macro with GCC and MSVC
    implementations.  The latter is based on Microsoft webpages, so
    developers who know Windows might want to scrutinize it.

    As examples of the kind of simplification this can make possible,
    this commit removes an initialization function from
    ofproto-dpif-rid.c and a call to cmap_init() from
    tnl-neigh-cache.c.  An upcoming commit will add another user.

    CC: Jarno Rajahalme <jarno@ovn.org>
    CC: Gurucharan Shetty <guru@ovn.org>
    Signed-off-by: Ben Pfaff <blp@ovn.org>
    Acked-by: Ryan Moats <rmoats@us.ibm.com>
* bitmap: Convert single bitmap functions to 64-bit.  (Jesse Gross, 2015-06-25; 1 file, -5/+5)

    Currently the functions to set, clear, and iterate over bitmaps
    only operate on 32-bit values.  If we convert them to handle 64-bit
    bitmaps, they can be used in more places.

    Suggested-by: Ben Pfaff <blp@nicira.com>
    Signed-off-by: Jesse Gross <jesse@nicira.com>
    Acked-by: Ben Pfaff <blp@nicira.com>
* cmap: Shrink cmap when load factor is below 20%.  (Alex Wang, 2014-11-21; 1 file, -1/+27)

    This commit adds a check in cmap_remove() and shrinks the cmap by
    half if the load factor is below 20%.  This reduces the memory
    utilization of the cmap and avoids having the allocated cmap memory
    sit at the top of the heap, preventing the heap from being trimmed.

    Signed-off-by: Alex Wang <alexw@nicira.com>
    Acked-by: Ben Pfaff <blp@nicira.com>
* lib/cmap: cmap_find_batch().  (Jarno Rajahalme, 2014-10-06; 1 file, -5/+94)

    Batching the cmap find improves the memory behavior with large
    cmaps and can make searches twice as fast:

    $ tests/ovstest test-cmap benchmark 2000000 8 0.1 16
    Benchmarking with n=2000000, 8 threads, 0.10% mutations, batch size 16:
    cmap insert:    533 ms
    cmap iterate:    57 ms
    batch search:   146 ms
    cmap destroy:   233 ms

    cmap insert:    552 ms
    cmap iterate:    56 ms
    cmap search:    299 ms
    cmap destroy:   229 ms

    hmap insert:    222 ms
    hmap iterate:   198 ms
    hmap search:   2061 ms
    hmap destroy:   209 ms

    Batch size 1 has a small performance penalty, but all other batch
    sizes are faster than non-batched cmap_find().  The batch size 16
    was experimentally found to be better than 8 or 32, so now
    classifier_lookup_miniflow_batch() performs the cmap find
    operations in batches of 16.

    Signed-off-by: Jarno Rajahalme <jrajahalme@nicira.com>
    Acked-by: Ben Pfaff <blp@nicira.com>
* lib/cmap: Split up cmap_find().  (Jarno Rajahalme, 2014-10-06; 1 file, -56/+60)

    This makes the following patch easier and cleans up the code.

    Explicit "inline" keywords seem necessary to prevent a performance
    regression on cmap_find() with GCC 4.7 -O2.

    Signed-off-by: Jarno Rajahalme <jrajahalme@nicira.com>
    Acked-by: Ben Pfaff <blp@nicira.com>
* lib/cmap: Use non-atomic access to hash.  (Jarno Rajahalme, 2014-10-06; 1 file, -34/+22)

    We use the 'counter' as a "lock" providing acquire-release
    semantics.  Therefore we can use normal, non-atomic memory accesses
    between the atomic accesses to 'counter'.  The cmap_node.next needs
    to be RCU, so that cannot be changed.

    For the writer this is straightforward: we first acquire-read the
    counter, and after all the changes we release-store the counter.
    For the reader this is a bit more complex, as we need to make sure
    that the last counter read is not reordered with the preceding read
    operations on the bucket contents.

    Also rearrange code to benefit from the fact that hash values are
    unique in any bucket.

    This patch seems to make cmap_insert() a bit faster.

    Signed-off-by: Jarno Rajahalme <jrajahalme@nicira.com>
    Acked-by: Ben Pfaff <blp@nicira.com>
* lib/cmap: More efficient cmap_find().  (Jarno Rajahalme, 2014-10-06; 1 file, -17/+27)

    This makes cmap_find() 10% faster on GCC 4.7 (-O2 -g).

    Signed-off-by: Jarno Rajahalme <jrajahalme@nicira.com>
    Acked-by: Ben Pfaff <blp@nicira.com>
* lib/cmap: Return number of nodes from functions modifying the cmap.  (Jarno Rajahalme, 2014-10-06; 1 file, -19/+15)

    We already update the count field as the last step of these
    functions, so returning the current count is very cheap.  Callers
    that care about the count become a bit more efficient, as they
    avoid an extra non-inlinable function call.

    Signed-off-by: Jarno Rajahalme <jrajahalme@nicira.com>
    Acked-by: Ben Pfaff <blp@nicira.com>
* cmap: Use ovsrcu to postpone the cmap destroy.  (Alex Wang, 2014-10-01; 1 file, -1/+1)

    Currently, cmap_destroy() directly frees the cmap memory.  Some
    callers of cmap_destroy() (e.g. destroy_subtable()) still allow
    other threads (e.g. pmd threads) to access the cmap at the same
    time (e.g. via classifier_lookup_miniflow_batch()), which could
    cause a segfault.

    To fix this issue, this commit uses ovsrcu to postpone freeing the
    cmap memory.

    Reported-by: Ethan Jackson <ethan@nicira.com>
    Signed-off-by: Alex Wang <alexw@nicira.com>
    Acked-by: Ben Pfaff <blp@nicira.com>
* cmap, classifier: Avoid unsafe aliasing in iterators.  (Ben Pfaff, 2014-07-21; 1 file, -25/+20)

    CMAP_FOR_EACH and CLS_FOR_EACH and their variants tried to use
    'void **' as a "pointer to any kind of pointer".  That is a
    violation of the aliasing rules in ISO C, which technically yields
    undefined behavior.  With GCC 4.1, it causes both warnings and
    actual misbehavior.

    One option would be to add -fno-strict-aliasing to the compiler
    flags, but that would only help with GCC; who knows whether this
    can be worked around with other compilers.  Instead, this commit
    rewrites the iterators to avoid disallowed pointer aliasing.

    VMware-BZ: #1287651
    Signed-off-by: Ben Pfaff <blp@nicira.com>
    Acked-by: Jarno Rajahalme <jrajahalme@nicira.com>
* cmap: Fix cmap_next_position().  (Daniele Di Proietto, 2014-07-16; 1 file, -1/+1)

    cmap_next_position() didn't update the node pointer while iterating
    through a list of nodes with the same hash.  This commit fixes the
    bug and improves test-cmap to detect it.

    Signed-off-by: Daniele Di Proietto <ddiproietto@vmware.com>
    Signed-off-by: Ben Pfaff <blp@nicira.com>
* lib/hash: Abstract hash interface.  (Jarno Rajahalme, 2014-07-04; 1 file, -1/+1)

    Use the generic names hash_add() and hash_finish() instead of the
    mhash_* equivalents.  This makes future changes to hash
    implementations more localized.

    Signed-off-by: Jarno Rajahalme <jrajahalme@nicira.com>
    Acked-by: Ben Pfaff <blp@nicira.com>
* cmap: Rename an enum constant.  (Gurucharan Shetty, 2014-06-10; 1 file, -5/+5)

    The constant MAX_PATH is already defined on Windows.

    Signed-off-by: Gurucharan Shetty <gshetty@nicira.com>
* lib/ovs-rcu: Rename ovsrcu_init() as ovsrcu_set_hidden().  (Jarno Rajahalme, 2014-06-03; 1 file, -3/+3)

    ovsrcu_init() was named after atomic_init(), but the semantics are
    different enough to warrant a different name.  Basically, C11
    atomic_init() is defined in a way that allows the implementation to
    assign the value without any synchronization, so in theory stores
    via atomic_init() could be seen by others after stores via
    atomic_set(), even if the atomic_init() appeared earlier in the
    code.

    ovsrcu_set_hidden() can be used to set an RCU-protected variable
    when it is not yet accessible by any active reader, but will be
    made visible later via an ovsrcu_set() call on some other pointer.

    This patch also adds a new ovsrcu_init() that can be used to
    initialize RCU-protected variables when the readers are not yet
    executing.  The new ovsrcu_init() is implemented with
    atomic_init(), so it does not provide any kind of synchronization.

    Signed-off-by: Jarno Rajahalme <jrajahalme@nicira.com>
    Acked-by: YAMAMOTO Takashi <yamamoto@valinux.co.jp>
* lib/cmap: Use atomics for all bucket data.  (Jarno Rajahalme, 2014-05-28; 1 file, -52/+76)

    The documentation of the memory order argument of atomic operations
    states that memory loads after an atomic load with
    memory_order_acquire cannot be moved before the atomic operation.
    This still leaves open the possibility that non-atomic loads before
    the atomic load could be moved after it, hence breaking down the
    synchronization used for cmap_find().

    This patch fixes this by using atomics (with appropriate
    memory_order) also for the struct cmap_node pointer and the hash
    values.  struct cmap_node itself is used for the pointer, since it
    already contains an RCU pointer (which is atomic) for a struct
    cmap_node.  This also helps simplify some of the code that
    previously dealt separately with the cmap_node pointer in the
    bucket vs. a cmap_node.

    Hash values are also accessed using atomics.  Otherwise it might be
    legal for a compiler to read the hash values once and keep using
    the same values through all the retries.

    These changes should have no effect on performance.

    Signed-off-by: Jarno Rajahalme <jrajahalme@nicira.com>
    Acked-by: Ben Pfaff <blp@nicira.com>
* lib/cmap: Add more hmap-like functionality.  (Jarno Rajahalme, 2014-05-28; 1 file, -7/+30)

    Add cmap_replace() and cmap_first(), as well as CMAP_FOR_EACH_SAFE
    and CMAP_FOR_EACH_CONTINUE, to make porting existing hmap-using
    code a bit easier.

    CMAP_FOR_EACH_SAFE is useful in RCU postponed destructors, when it
    is known that additional postponing is not needed.

    Signed-off-by: Jarno Rajahalme <jrajahalme@nicira.com>
    Acked-by: Ben Pfaff <blp@nicira.com>
* cmap: Fix memory ordering for counter_changed().  (Ben Pfaff, 2014-05-21; 1 file, -4/+4)

    Release memory ordering only affects the visibility of stores and
    is not allowed on a memory read.  Some compilers enforce this,
    making this code fail to compile.

    Reported-by: Alex Wang <alexw@nicira.com>
    Reported-by: Kmindg G <kmindg@gmail.com>
    Signed-off-by: Ben Pfaff <blp@nicira.com>
    Acked-by: Jarno Rajahalme <jrajahalme@nicira.com>
* cmap: New module for cuckoo hash table.  (Ben Pfaff, 2014-05-20; 1 file, -0/+835)

    This implements an "optimistic concurrent cuckoo hash", a
    single-writer, multiple-reader hash table data structure.

    The point of this data structure is performance, so this commit
    message focuses on performance.

    I tested the performance of cmap with the test-cmap utility
    included in this commit.  It takes three parameters for
    benchmarking:

        - n, the number of elements to insert.

        - n_threads, the number of threads to use for searching and
          mutating the hash table.

        - mutations, the percentage of operations that should modify
          the hash table, from 0% to 100%.

    e.g. "test-cmap 1000000 16 1" inserts one million elements, uses 16
    threads, and 1% of the operations modify the hash table.

    Any given run does the following for both hmap and cmap
    implementations:

        - Inserts n elements into a hash table.

        - Iterates over all of the elements.

        - Spawns n_threads threads, each of which searches for each of
          the elements in the hash table, once, and removes the
          specified percentage of them.

        - Removes each of the (remaining) elements and destroys the
          hash table.

    and reports the time taken by each step.

    The tables below report results for various parameters with a draft
    version of this library.  The tests were not formally rerun for the
    final version, but the intermediate changes should only have
    improved performance, and this seemed to be the case in some
    informal testing.  n_threads=16 was used each time, on a 16-core
    x86-64 machine.  The compiler used was Clang 3.5.  (GCC yields
    different numbers but similar relative results.)

    The results show:

        - Insertion is generally 3x to 5x faster in an hmap.

        - Iteration is generally about 3x faster in a cmap.

        - Search and mutation is 4x faster with .1% mutations, and the
          advantage grows as the fraction of mutations grows.  This is
          because a cmap does not require locking for read operations,
          even in the presence of a writer.

          With no mutations, however, no locking is required in the
          hmap case, and the hmap is somewhat faster.  This is because
          raw hmap search is somewhat simpler and faster than raw cmap
          search.

        - Destruction is faster, usually by less than 2x, in an hmap.

    n=10,000,000:

              .1% mutations   1% mutations   10% mutations  no mutations
               cmap   hmap     cmap   hmap    cmap   hmap    cmap   hmap
    insert:    6132   2182     6136   2178    6111   2174    6124   2180
    iterate:    370   1203      378   1201     370   1200     371   1202
    search:    1375   8692     2393  28197   18402  80379    1281   1097
    destroy:   1382   1187     1197   1034     324    241    1405   1205

    n=1,000,000:

              .1% mutations   1% mutations   10% mutations  no mutations
               cmap   hmap     cmap   hmap    cmap   hmap    cmap   hmap
    insert:     311     25      310     60     311     59     310     60
    iterate:     25     62       25     64      25     57      25     60
    search:     115    637      197   2266    1803   7284     101     67
    destroy:    103     64       90     59      25     13     104     66

    n=100,000:

              .1% mutations   1% mutations   10% mutations  no mutations
               cmap   hmap     cmap   hmap    cmap   hmap    cmap   hmap
    insert:      25      6       26      5      25      5      25      5
    iterate:      1      3        1      3       1      3       2      3
    search:      12     57       27    219     164    735      10      5
    destroy:      5      3        6      3       2      1       6      4

    Signed-off-by: Ben Pfaff <blp@nicira.com>
    Acked-by: Jarno Rajahalme <jrajahalme@nicira.com>