path: root/src/basic/hashmap.c
Commit message — Author — Age — Files — Lines
* meson: merge our two valgrind configuration conditions into one — Zbigniew Jędrzejewski-Szmek, 2023-02-22 (1 file, -2/+6)

  Most of the support for valgrind was under HAVE_VALGRIND_VALGRIND_H, i.e. we would enable it if the valgrind headers were found. The operations would then be conditionalized on RUNNING_UNDER_VALGRIND. But in a few places we had code which was conditionalized on VALGRIND, i.e. the config option.

  I noticed because I compiled with -Dvalgrind=true on a machine that didn't have valgrind.h, and the build failed because RUNNING_UNDER_VALGRIND was not defined. My first idea was to add a check that the header is present if the option is set, but it seems better to just remove the option. The code to support valgrind is trivial, and if we're !RUNNING_UNDER_VALGRIND, it has negligible cost. And the case of running under valgrind is always some special testing/debugging mode, so we should just do those extra steps to make valgrind output cleaner. Removing the option makes things simpler and we don't have to think about whether something should be covered by the one or the other configuration bit.

  I had a vague recollection that in some places we used -Dvalgrind=true not for valgrind support, but to enable additional cleanup under other sanitizers. But that code would fail to build without the valgrind headers anyway, so I'm not sure if that was still used. If there are uses like that, we can extend the condition for cleanup_pools().
* hashmap: fix build with valgrind — Yu Watanabe, 2023-02-17 (1 file, -1/+1)

  Follow-up for a2b052b29f8bc141e94a4af95d1653a38a57eaeb.

* mempool: rework mempool_cleanup() to only release freed tiles — Lennart Poettering, 2023-02-17 (1 file, -3/+3)

  This substantially reworks mempool_cleanup() so that it releases only pools whose tiles have all been freed, but keeps around all pools with still-allocated tiles. This is more correct, as the previous implementation just released all pools regardless of whether anything was still used or not. That would make valgrind shut up, but would just hide memory leaks altogether. Moreover, if called during the regular runtime of a program, it would result in bad memory accesses all over. Hence, let's add a proper implementation and only trim pools we really know are empty. This way we can safely call these functions later, when under memory pressure, at any time.
* hashmap: expose helper for releasing memory pools independently of valgrind — Lennart Poettering, 2023-02-17 (1 file, -14/+18)

  Let's clean this up and export this always, so that we can call it later when we are under memory pressure.

* process-util: add helper get_process_threads() — Lennart Poettering, 2023-02-17 (1 file, -3/+1)

  Let's add a proper helper for querying the number of threads in a process.
* basic: Fix incompatible type for arguments errors in C2X — Cristian Rodríguez, 2023-01-03 (1 file, -1/+1)

  GCC 13 with -std=gnu2x FTBFS with: error: incompatible type for argument 3 of ‘_hashmap_free’
* basic/hashmap: add comment — Zbigniew Jędrzejewski-Szmek, 2022-12-19 (1 file, -1/+1)

  Coverity complains that the check is suspicious. Add a comment to help the reader.
* all: avoid various "-Wcast-align=strict" warnings — Thomas Haller, 2022-12-09 (1 file, -2/+3)
* basic: rename util.h to logarithm.h — Zbigniew Jędrzejewski-Szmek, 2022-11-08 (1 file, -0/+1)

  util.h is now about logarithms only, so we can rename it. Many files included util.h for no apparent reason… Those includes are dropped.
* tree-wide: use ASSERT_PTR more — David Tardon, 2022-09-13 (1 file, -2/+1)
* hashmap: add comment explaining that set_fnmatch() handles fnmatch() errors as non-matches — Lennart Poettering, 2022-08-31 (1 file, -0/+2)
* hashmap: use assert_se() to make clang happy — Frantisek Sumsal, 2022-08-20 (1 file, -1/+1)

  Otherwise it complains about a set but unused variable:

  ```
  ../src/basic/hashmap.c:1070:48: error: variable 'n_rehashed' set but not used [-Werror,-Wunused-but-set-variable]
          unsigned old_n_buckets, new_n_buckets, n_rehashed, new_n_entries;
                                                 ^
  1 error generated.
  ```
* Turn mempool_enabled() into a weak symbol — Zbigniew Jędrzejewski-Szmek, 2022-06-29 (1 file, -4/+3)

  Before, we had the following scheme: mempool_enabled() would check mempool_use_allowed, and libsystemd-shared would be linked with a .c file that provides mempool_use_allowed=true, while other things would be linked with a different .c file with mempool_use_allowed=false.

  In the new scheme, mempool_enabled() itself is a weak symbol. If it's not found, we assume false. So it only needs to be provided for libsystemd-shared, where it can return false or true.

  test-set-disable-mempool is libshared, so it gets the symbol. But then we actually disable the mempool via an envvar. mempool_enabled() is called to check its return value directly.
* set: introduce set_put_strndup() — Yu Watanabe, 2022-06-17 (1 file, -5/+8)

  Note that if `n != SIZE_MAX`, we cannot check for the existence of the specified string in the set without duplicating the string. And set_consume() also checks for the existence of the string. Hence, it is not necessary to call set_contains() if `n != SIZE_MAX`.
* set: introduce set_fnmatch() — Yu Watanabe, 2022-04-27 (1 file, -0/+25)
* tree-wide: add a space after if, switch, for, and while — Yu Watanabe, 2022-04-01 (1 file, -1/+1)
* strv: make iterator in STRV_FOREACH() declared in the loop — Yu Watanabe, 2022-03-19 (1 file, -1/+0)

  This also avoids multiple evaluations in STRV_FOREACH_BACKWARDS().
* Drop the text argument from assert_not_reached() — Zbigniew Jędrzejewski-Szmek, 2021-08-03 (1 file, -3/+3)

  In general we almost never hit those asserts in production code, so users see them very rarely, if ever. But either way, we just need something that users can pass to the developers. We have quite a few of those asserts, and some have fairly nice messages, but many are like "WTF?" or "???" or "unexpected something".

  The error that is printed includes the file location and function name. In almost all functions there's at most one assert, so the function name alone is enough to identify the failure for a developer. So we don't get much extra from the message, and we might just as well drop them. Dropping them makes our code a tiny bit smaller, and most importantly, improves the development experience by making it easy to insert such an assert in the code without thinking about how to phrase the argument.
* hashmap: make sure hashmap_get_strv()+set_get_strv() work with a NULL object — Lennart Poettering, 2021-07-02 (1 file, -0/+3)

  Before we invoke n_entries() we need to check for non-NULL here, like in all other calls to the helper function. Otherwise we'll crash when invoked with a NULL object, which we usually consider equivalent to an empty one.
* tree-wide: "a" -> "an" — Yu Watanabe, 2021-06-30 (1 file, -1/+1)
* alloc-util: simplify GREEDY_REALLOC() logic by relying on malloc_usable_size() — Lennart Poettering, 2021-05-19 (1 file, -6/+6)

  We recently started making more use of malloc_usable_size() and rely on it (see the string_erase() story). Given that we don't really support systems where malloc_usable_size() cannot be trusted beyond statistics anyway, let's go all in and rework GREEDY_REALLOC() on top of it: instead of passing around and maintaining the currently allocated size everywhere, let's just derive it automatically from malloc_usable_size().

  I am mostly after this for the simplicity it brings. It also brings minor efficiency improvements, I guess, but things become so much nicer to look at if we can avoid these allocation size variables everywhere.

  Note that the malloc_usable_size() man page says relying on it wasn't "good programming practice", but I think it says so for reasons that don't apply here: the greedy realloc logic specifically doesn't rely on the returned extra size, beyond the fact that it is equal to or larger than what was requested.

  (This commit was supposed to be a quick patch btw, but apparently we use the greedy realloc stuff quite a bit across the codebase, so this ends up touching *a lot* of code.)
* basic: add set_equal() helper — Lennart Poettering, 2021-02-18 (1 file, -0/+32)

* basic: introduce hashmap_ensure_put — Susant Sahani, 2021-01-15 (1 file, -0/+10)
* sd-device: make TAGS= property prefixed and suffixed with ":" — Yu Watanabe, 2020-12-14 (1 file, -5/+20)

  The commit 6f3ac0d51766b0b9101676cefe5c4ba81feba436 drops the prefix and suffix in the TAGS= property. But there exist several rules that match like `TAGS=="*:tag:*"`. So, the property must always be prefixed and suffixed with ":".

  Fixes #17930.
* set: introduce set_strjoin() — Yu Watanabe, 2020-12-08 (1 file, -0/+35)

* license: LGPL-2.1+ -> LGPL-2.1-or-later — Yu Watanabe, 2020-11-09 (1 file, -1/+1)

* hashmap: introduce {hashmap,set}_put_strdup_full() — Yu Watanabe, 2020-10-13 (1 file, -6/+6)

  They can take hash_ops.
* hashmap: make sure to initialize shared hash key atomically — Lennart Poettering, 2020-09-12 (1 file, -6/+7)

  If we allocate a bunch of hash tables all at the same time, with none earlier than the others, there's a good chance we'll initialize the shared hash key multiple times, so that some threads will see a different shared hash key than others. Let's fix that, and make sure really everyone sees the same hash key.

  Fixes: #17007
* basic/hashmap,set: move pointer symbol adjacent to the returned value — Zbigniew Jędrzejewski-Szmek, 2020-09-01 (1 file, -22/+22)

  I think this is nicer in general, and here in particular we have a lot of code like:

  ```
  static inline IteratedCache* hashmap_iterated_cache_new(Hashmap *h) {
          return (IteratedCache*) _hashmap_iterated_cache_new(HASHMAP_BASE(h));
  }
  ```

  and it's visually appealing to use the same whitespace in the function signature and in the cast in the body of the function.
* basic/hashmap,set: inline trivial set_iterate() wrapper — Zbigniew Jędrzejewski-Szmek, 2020-09-01 (1 file, -4/+0)

  The compiler would do this too, esp. with LTO, but we can short-circuit the whole process and make everything a bit simpler by avoiding the separate definition.

  (It would be nice to do the same for _set_new(), _set_ensure_allocated() and other similar functions which are one-line trivial wrappers too. Unfortunately that would require enum HashmapType to be made public, which we don't want to do.)
* basic: Introduce ordered_hashmap_ensure_put — Susant Sahani, 2020-09-01 (1 file, -0/+10)
* basic/hashmap,set: propagate allocation location info in _copy() — Zbigniew Jędrzejewski-Szmek, 2020-06-24 (1 file, -15/+13)

  Also use a double space before the tracking args at the end. Without the comma this looks ugly, but it's a bit better with the double space. At least it doesn't look like a variable with a type.
* basic/set,hashmap: pass through allocation info in more cases — Zbigniew Jędrzejewski-Szmek, 2020-06-24 (1 file, -6/+6)
* basic/set: add set_ensure_consume() — Zbigniew Jędrzejewski-Szmek, 2020-06-24 (1 file, -0/+14)

  This combines set_ensure_allocated() with set_consume(). The cool thing is that because we know the hash ops, we can correctly free the item if appropriate. Similarly to set_consume(), the goal is to simplify handling of the case where the item needs to be freed on error and if already present in the set.
* basic/set: add set_ensure_put() — Zbigniew Jędrzejewski-Szmek, 2020-06-22 (1 file, -0/+10)

  It's such a common operation to allocate the set and put an item in it that it deserves a helper. set_ensure_put() has the same return values as set_put().

  Comes with tests!
* Merge pull request #15940 from keszybz/names-set-optimization — Lennart Poettering, 2020-06-10 (1 file, -1/+1)

  Try to optimize away Unit.names set

  * basic/hashmap: make _ensure_allocated return 1 on actual allocations — Zbigniew Jędrzejewski-Szmek, 2020-05-27 (1 file, -1/+1)

    Also, make test_hashmap_ensure_allocated() actually test hashmap_ensure_allocated().
* basic/hashmap,set: change "internal_" to "_" as the prefix — Zbigniew Jędrzejewski-Szmek, 2020-05-30 (1 file, -28/+28)

  "internal" is a lot of characters. Let's take a leaf out of Python's book and simply use _ to mean private. Much less verbose, but the meaning is just as clear, or even more so.

* basic/hashmap: drop unneeded macro — Zbigniew Jędrzejewski-Szmek, 2020-05-30 (1 file, -7/+5)

* hashmap: don't allow hashmap_type_info table to be optimized away — Zbigniew Jędrzejewski-Szmek, 2020-05-30 (1 file, -1/+1)

  This makes debugging hashmaps harder, because we can't query the size. Make sure that table is always present.
* basic/hashmap: allow NULL values in strdup hashmaps and add test — Zbigniew Jędrzejewski-Szmek, 2020-05-06 (1 file, -6/+14)

* basic/set: let set_put_strdup() create the set with string hash ops — Zbigniew Jędrzejewski-Szmek, 2020-05-06 (1 file, -4/+9)

  If we're using a set with _put_strdup(), most of the time we want to use string hash ops on the set, and free the strings when done. This defines an appropriate new string_hash_ops_free structure to automatically free the keys when removing the set, and makes set_put_strdup() and set_put_strdupv() instantiate the set with those hash ops. hashmap_put_strdup() was already doing something similar.

  (It is OK to instantiate the set earlier, possibly with a different hash ops structure. set_put_strdup() will then use the existing set. It is also OK to call set_free_free() instead of set_free() on a set with string_hash_ops_free; the effect is the same, we're just overriding the override of the cleanup function.)

  No functional change intended.
* tree-wide: drop string.h when string-util.h or friends are included — Yu Watanabe, 2019-11-04 (1 file, -1/+0)

* tree-wide: drop missing.h — Yu Watanabe, 2019-10-31 (1 file, -1/+1)

* basic/set: constify operations which don't modify Set — Zbigniew Jędrzejewski-Szmek, 2019-07-19 (1 file, -2/+2)

  No functional change, but it's nicer to the reader.
* basic/hashmap: add hashops variant that does strdup/freeing on its own — Zbigniew Jędrzejewski-Szmek, 2019-07-19 (1 file, -0/+26)

  So far, we'd use hashmap_free_free to free both keys and values along with the hashmap. I think it's better to make this more encapsulated: in this variant the way contents are freed can be decided when the hashmap is created, and users of the hashmap can always use hashmap_free.
* hashmap: avoid using TLS in a destructor — Frantisek Sumsal, 2019-06-18 (1 file, -1/+6)

  Using C11 thread-local storage in destructors causes an uninitialized read. Let's avoid that by using a direct comparison instead of the cached values. As this code path is taken only when compiled with -DVALGRIND=1, the performance cost shouldn't matter too much.

  Fixes #12814
* util: split out memcmp()/memset() related calls into memory-util.[ch] — Lennart Poettering, 2019-03-13 (1 file, -1/+1)

  Just some source rearranging.

* shared/hashmap: trivial style updates — Zbigniew Jędrzejewski-Szmek, 2019-02-21 (1 file, -4/+1)
* hashmap: always set key output argument of internal_hashmap_first_key_and_value() — Thomas Haller, 2019-02-04 (1 file, -1/+4)

  internal_hashmap_first_key_and_value() returns the first value, or %NULL if the hashmap is empty. However, hashmaps may contain %NULL values. That means a caller getting %NULL doesn't know whether the hashmap is empty or whether the first value is %NULL.

  For example, a caller may be tempted to do something like:

  ```
  if ((val = hashmap_steal_first_key_and_value(h, (void **) key))) {
          // process first entry.
  }
  ```

  But this is only correct if the caller made sure that the hash is either not empty or contains no %NULL values. Anyway, since a %NULL return value can signal an empty hash or a %NULL value, it seems error-prone to leave the key output argument uninitialized in situations that the caller cannot clearly distinguish (without making additional assumptions).