path: root/src/shared/json.c
Commit message | Author | Age | Files | Lines
* tree-wide: fix typo and comment style updateYu Watanabe2023-02-151-1/+1
* shared/json: avoid use of fake flex arrayZbigniew Jędrzejewski-Szmek2023-02-061-11/+6
* json: add helper for adding variant to array suppressing duplicatesLennart Poettering2022-12-151-1/+21
* bootctl: use output mode where "[]" is written instead for empty outputZbigniew Jędrzejewski-Szmek2022-12-011-2/+6
    It's easier for the caller if output is always a list, even if there are no entries.
* shared/json: optimize appending objects to arraysZbigniew Jędrzejewski-Szmek2022-12-011-33/+79
    When repeatedly appending an object to a growing array, we would create a new array larger by one slot, insert all the old entries and the new element with ref count bumps into the new array, and then unref the old array. This would cause problems when building an array with more than a few thousand elements.

    If userdbctl is modified to construct an array, 'userdbctl --json=pretty group >/dev/null' with 31k groups:
      0.74s (existing code)
      102.17s (returning an array)
      0.79s (with this patch)

    We append arrays in various places, so it seems nice to make this generally fast.
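The fix described above amounts to classic geometric growth: keep spare capacity so that each append is amortized O(1), instead of rebuilding the array per element, which is O(n²) overall. A minimal sketch of the idea (the IntArray type and int_array_append() are invented for illustration, not systemd API):

```c
#include <assert.h>
#include <stdlib.h>

/* Toy growable array: doubles capacity when full, so n appends cost
 * O(n) amortized instead of O(n^2) from per-element reallocation. */
typedef struct {
        int *items;
        size_t n, capacity;
} IntArray;

static int int_array_append(IntArray *a, int v) {
        if (a->n >= a->capacity) {
                size_t nc = a->capacity > 0 ? a->capacity * 2 : 4;
                int *p = realloc(a->items, nc * sizeof(int));
                if (!p)
                        return -1;
                a->items = p;
                a->capacity = nc;
        }
        a->items[a->n++] = v;
        return 0;
}
```

The systemd patch applies the same principle to JsonVariant arrays rather than this toy type.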
* shared/json: make it possible to specify source name for strings too, add testsZbigniew Jędrzejewski-Szmek2022-12-011-14/+44
    The source would be set implicitly when parsing from a named file. But it's useful to specify the source also for cases where we're parsing a ready string. I noticed the lack of this API when trying to write tests, but it seems generally useful to be able to specify a source name when parsing things.
* json: add build helpers to insert id128 in uuid formatting into json objectLennart Poettering2022-11-101-2/+9
* shared/json: use different return code for empty inputZbigniew Jędrzejewski-Szmek2022-10-191-2/+4
    It is useful to distinguish if json_parse_file() got no input or invalid input. Use different return codes for the two cases.
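The caller-visible distinction can be sketched like this (the concrete error codes and the toy validity check are assumptions for illustration, not taken from the commit):

```c
#include <errno.h>
#include <string.h>

/* Hypothetical parser entry point: empty input yields a distinct
 * "no data" error, while malformed input yields "invalid argument",
 * so callers can tell the two apart. */
static int parse_check(const char *s) {
        if (!s || strlen(s) == 0)
                return -ENODATA;                 /* nothing to parse */
        if (s[0] != '{' && s[0] != '[')          /* toy validity check */
                return -EINVAL;
        return 0;
}
```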
* shared/json: allow json_variant_dump() to return an errorZbigniew Jędrzejewski-Szmek2022-10-181-3/+4
* json: explicitly support offsets relative to NULL when dispatchingLennart Poettering2022-09-301-1/+14
    Let's trick out UndefinedBehaviourSanitizer: https://github.com/systemd/systemd/pull/24853#issuecomment-1263380745
* json: add helper for json builder for octescape/base32hexLennart Poettering2022-09-301-32/+35
    These encodings for binary data are mandated by DNS RFCs, so let's make them nice and easy to use with the json builder logic.
* json: add dispatchers for 16bit integersLennart Poettering2022-09-301-0/+30
* tree-wide: use ASSERT_PTR moreDavid Tardon2022-09-131-18/+9
* json: introduce json_append()Yu Watanabe2022-09-031-0/+24
* tree-wide: Fix format specifier warnings for %xJan Janssen2022-08-301-1/+1
    Unfortunately, hex output can only be produced with unsigned types. Some cases can be fixed by producing the correct type, but a few simply have to be cast. At least casting makes it explicit.
* tree-wide: Use correct format specifiersJan Janssen2022-08-301-4/+4
    gcc will complain about all these with -Wformat-signedness.
* json: use fpclassify() or its helper functionsYu Watanabe2022-07-211-27/+15
* json: actually use numeric C locale we just allocatedLennart Poettering2022-07-051-1/+3
    This fixes formatting of JSON real values, and uses the C locale for them. It's kinda interesting that this wasn't noticed before: the C locale object we allocated was not used, hence doing the dance had zero effect.

    This makes "test-varlink" pass again on systems with a non-C locale.

    (My guess: no one noticed this because "long double" was used before by the JSON code and that had no locale-supporting printer or so?)
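The underlying technique is to actually switch to the allocated C locale while formatting, so JSON reals always get '.' as the decimal separator regardless of the process locale. A minimal sketch using the POSIX newlocale()/uselocale() API (the helper name format_real is invented; this is not the systemd implementation):

```c
#define _GNU_SOURCE
#include <locale.h>
#include <stdio.h>

/* Format a double using the "C" numeric locale, independent of the
 * thread's current locale. */
static int format_real(char *buf, size_t n, double d) {
        locale_t c_loc = newlocale(LC_NUMERIC_MASK, "C", (locale_t) 0);
        if (c_loc == (locale_t) 0)
                return -1;
        locale_t old = uselocale(c_loc);   /* switch this thread's locale */
        int r = snprintf(buf, n, "%.6g", d);
        uselocale(old);                    /* restore the previous locale */
        freelocale(c_loc);
        return r;
}
```

The bug being fixed was precisely that the allocated locale object was never passed to anything, so the formatting still used the ambient locale.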
* shared/json: fix memleak in sortZbigniew Jędrzejewski-Szmek2022-05-101-2/+2
* shared/json: fix another memleak in normalizationZbigniew Jędrzejewski-Szmek2022-05-101-2/+2
* shared/json: add helper to ref first, unref secondZbigniew Jędrzejewski-Szmek2022-05-101-26/+10
    This normally wouldn't happen, but if some of those places were called with lhs and rhs being the same object, we could unref the last ref first, and then try to take the ref again. It's easier to be safe, and with the helper we save some lines too.
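The hazard and the helper's fix can be sketched with a toy reference-counted type (all names are invented, not systemd API): if destination and source are the same object and we unref first, the last ref may be dropped and the object freed before we re-ref it. Taking the new ref first avoids that:

```c
#include <stdlib.h>

/* Toy refcounted object. */
typedef struct Obj {
        unsigned n_ref;
} Obj;

static Obj *obj_ref(Obj *o) {
        if (o)
                o->n_ref++;
        return o;
}

static Obj *obj_unref(Obj *o) {
        if (o && --o->n_ref == 0)
                free(o);
        return NULL;
}

/* Safe replacement helper: ref first, unref second, so that
 * obj_replace(&x, x) cannot free the object out from under us. */
static void obj_replace(Obj **dst, Obj *src) {
        Obj *old = *dst;
        *dst = obj_ref(src);   /* take the new ref first */
        obj_unref(old);        /* only then drop the old one */
}
```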
* shared/json: fix memory leak on failed normalizationZbigniew Jędrzejewski-Szmek2022-05-101-2/+3
    We need to increase the counter immediately after taking the ref, otherwise we may not unref it properly if we fail before incrementing.
* shared/json: wrap long commentsZbigniew Jędrzejewski-Szmek2022-05-101-18/+17
* shared/json: reduce scope of variablesZbigniew Jędrzejewski-Szmek2022-05-101-79/+54
* json: align tableZbigniew Jędrzejewski-Szmek2022-05-101-10/+8
* json: use unsigned for reference counterYu Watanabe2022-04-191-2/+2
    In other places, we use unsigned for reference counters.
* shared: avoid x86_64-specific size assertion on x32Mike Gilbert2021-12-101-1/+1
    Fixes: https://github.com/systemd/systemd/issues/21713
* json: make JSON_BUILD_PAIR_IN_ADDR_NON_NULL or friends handle NULL gracefullyYu Watanabe2021-11-301-5/+5
    Fixes #21567.
* json: introduce several macros for building json objectYu Watanabe2021-11-251-4/+314
* json: don't assert() if we add a NULL element via json_variant_set_field()Lennart Poettering2021-11-251-1/+0
    The rest of our JSON code tries hard to magically convert NULL inputs into "null" JSON objects, let's make sure this also works with json_variant_set_field().
* shared/json: use int64_t instead of intmax_tZbigniew Jędrzejewski-Szmek2021-11-181-41/+41
    We were already asserting that the intmax_t and uintmax_t types are the same as int64_t and uint64_t. Pretty much everywhere in the code base we use the latter types.

    In principle intmax_t could be something different on some new architecture, and then the code would fail to compile or behave differently. We actually do not want the code to behave differently on those architectures, because that'd break interoperability. So let's just use int64_t/uint64_t since that's what we intend to use.
* shared/json: stop using long doubleZbigniew Jędrzejewski-Szmek2021-11-181-29/+26
    It seems that the implementation of long double on ppc64el doesn't really work: long double cast to integer and back compares as unequal to itself. Strangely, this effect happens without optimization and both with gcc and clang, so it seems to be an effect of how long double is implemented by the architecture.

    Dumping the values shows the following pattern:
      00 00 00 00 00 00 24 40 00 00 00 00 00 00 00 00 # long double v = 10;
      00 00 00 00 00 00 24 40 00 00 00 00 00 00 80 39 # (long double)(intmax_t) v

    Instead of trying to make this work, I think it's most reasonable to switch to normal doubles. Notably, we had no tests for floating point behaviour. The first test we added (for values not even in the range outside of double) showed failures.

    Common implementations of JSON (in particular JavaScript) use 64-bit double. If we stick to this, users are likely to be happy when they exchange data with those tools. Exporting values that cannot be represented in other tools would just cause interop problems. I don't think the extra precision would be much used.

    Long double seems to make most sense as a transient format used in calculations to get extra precision in operations, and not a storage or exchange format. So I expect low-level numerical routines that have to know about hardware to make use of it, but it shouldn't be used by our (higher-level) system library. In particular, we would have to add tests for implementations conforming to IEEE 754, and those that don't conform, and account for various implementation differences. It just doesn't seem worth the effort.

    https://en.wikipedia.org/wiki/Long_double#Implementations shows that the situation is "complicated":

    > On the x86 architecture, most C compilers implement long double as the 80-bit extended precision type supported by x86 hardware. An exception is Microsoft Visual C++ for x86, which makes long double a synonym for double. The Intel C++ compiler on Microsoft Windows supports extended precision, but requires the /Qlong-double switch for long double to correspond to the hardware's extended precision format.

    > Compilers may also use long double for the IEEE 754 quadruple-precision binary floating-point format (binary128). This is the case on HP-UX, Solaris/SPARC, MIPS with the 64-bit or n32 ABI, 64-bit ARM (AArch64) (on operating systems using the standard AAPCS calling conventions, such as Linux), and z/OS with FLOAT(IEEE). Most implementations are in software, but some processors have hardware support.

    > On some PowerPC and SPARCv9 machines, long double is implemented as a double-double arithmetic, where a long double value is regarded as the exact sum of two double-precision values, giving at least a 106-bit precision; with such a format, the long double type does not conform to the IEEE floating-point standard. Otherwise, long double is simply a synonym for double (double precision), e.g. on 32-bit ARM, 64-bit ARM (AArch64) (on Windows and macOS) and on 32-bit MIPS (old ABI, a.k.a. o32).

    > With the GNU C Compiler, long double is 80-bit extended precision on x86 processors regardless of the physical storage used for the type (which can be either 96 or 128 bits). On some other architectures, long double can be double-double (e.g. on PowerPC) or 128-bit quadruple precision (e.g. on SPARC). As of gcc 4.3, a quadruple precision is also supported on x86, but as the nonstandard type __float128 rather than long double.

    > Although the x86 architecture, and specifically the x87 floating-point instructions on x86, supports 80-bit extended-precision operations, it is possible to configure the processor to automatically round operations to double (or even single) precision. Conversely, in extended-precision mode, extended precision may be used for intermediate compiler-generated calculations even when the final results are stored at a lower precision (i.e. FLT_EVAL_METHOD == 2). With gcc on Linux, 80-bit extended precision is the default; on several BSD operating systems (FreeBSD and OpenBSD), double-precision mode is the default, and long double operations are effectively reduced to double precision. (NetBSD 7.0 and later, however, defaults to 80-bit extended precision). However, it is possible to override this within an individual program via the FLDCW "floating-point load control-word" instruction. On x86_64, the BSDs default to 80-bit extended precision. Microsoft Windows with Visual C++ also sets the processor in double-precision mode by default, but this can again be overridden within an individual program (e.g. by the _controlfp_s function in Visual C++). The Intel C++ Compiler for x86, on the other hand, enables extended-precision mode by default. On IA-32 OS X, long double is 80-bit extended precision.

    So, in short, the only thing that can be said is that nothing can be said. In common scenarios we are getting only a bit of extra precision (80 bits instead of 64), but use space for padding. In other scenarios we are getting no extra precision. And the variance in implementations is a big issue: we can expect strange differences in behaviour between architectures, systems, compiler versions, compilation options, and even the other things that the program is doing.

    Fixes #21390.
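The practical limit of the switch to 64-bit double is its 53-bit mantissa: integers up to 2^53 round-trip exactly through a double, while larger ones may not. A small demonstration:

```c
#include <stdint.h>

/* Cast an int64_t through double and back, as a JSON implementation
 * storing numbers as double effectively does. */
static int64_t roundtrip(int64_t v) {
        return (int64_t) (double) v;
}
```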
* json: do something remotely reasonable when we see NaN/infinityLennart Poettering2021-10-261-0/+6
    JSON doesn't have NaN/infinity/-infinity concepts in the spec. Implementations vary in what they do with it. JSON5 + Python simply generate the special words "NaN" and "Infinity" from it. Others generate "null" for it.

    At this point we never actually want to output this, so let's be conservative and generate RFC-compliant JSON, i.e. convert to null. One day, should JSON5 actually become a thing, we can revisit this, but in that case we should implement things via a flag, and only optionally process nan/infinity/-infinity.

    This patch is extremely simple: whenever accepting a nan/infinity/-infinity from outside, it converts it to null. I.e. we convert on input, not output.
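The mapping can be sketched as follows (format_json_real is an invented helper, and note the actual patch applies the conversion on input rather than at formatting time):

```c
#include <math.h>
#include <stdio.h>

/* Map non-finite reals to the JSON literal "null", since RFC-compliant
 * JSON has no representation for NaN or infinities; format finite
 * values normally. */
static int format_json_real(char *buf, size_t n, double d) {
        if (!isfinite(d))                      /* NaN, +inf, -inf */
                return snprintf(buf, n, "null");
        return snprintf(buf, n, "%.6g", d);
}
```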
* json: rework JSON_BUILD_XYZ() macros to use compound literals instead of compound statementsLennart Poettering2021-08-231-3/+3
    Compound statements are this stuff: ({ … })
    Compound literals are this stuff: (type) { … }

    We use compound statements a lot in macro definitions. They have one drawback though: they define a code block of their own, hence if macro invocations are nested within them that use compound literals, their lifetime is limited to the code block, which might be unexpected.

    Thankfully, we can rework things from compound statements to compound literals in the case of json.h: they don't open a new code block, and hence do not suffer from the problem explained above.

    The interesting thing about compound literals is that they also work for simple types, not just for structs/unions/arrays. We can use this here for a typechecked implicit conversion: we want to superficially typecheck arguments to the json_build() varargs function, and we do that by assigning the specified arguments to our compound literals, which does the minimal amount of typechecks and ensures that types are propagated on correctly.

    We need one special tweak for this: sd_id128_t is not a simple type but a union. Using compound literals for initializing that would mean specifying the components of the union, not a complete sd_id128_t. Our hack around that: instead of passing the object directly via the stack we now take a pointer (and thus a simple type) instead.

    Nice side-effect of all this: compound literals are C99, while compound statements are a GCC extension, hence we move closer to standard C.

    Fixes: #20501
    Replaces: #20512
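The scalar compound-literal trick described above can be shown in isolation (AS_INT is an invented illustration, not the json.h macro): initializing a typed compound literal from the macro argument gives a superficial compile-time type check, and, unlike a compound statement, opens no block of its own:

```c
/* Standard C99 compound literal for a scalar type: (int){ x } creates
 * an unnamed int object initialized from x. Passing a wildly wrong
 * type (e.g. a struct) through this would fail to compile, which is
 * the "minimal typecheck" the commit relies on. */
#define AS_INT(x) ((int) { x })

static int twice(int v) {
        return 2 * v;
}
```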
* tree-wide: port everything over to new sd-id128 compound literal blissLennart Poettering2021-08-201-3/+1
* Drop the text argument from assert_not_reached()Zbigniew Jędrzejewski-Szmek2021-08-031-6/+6
    In general we almost never hit those asserts in production code, so users see them very rarely, if ever. But either way, we just need something that users can pass to the developers.

    We have quite a few of those asserts, and some have fairly nice messages, but many are like "WTF?" or "???" or "unexpected something".

    The error that is printed includes the file location, and function name. In almost all functions there's at most one assert, so the function name alone is enough to identify the failure for a developer. So we don't get much extra from the message, and we might just as well drop them.

    Dropping them makes our code a tiny bit smaller, and most importantly, improves development experience by making it easy to insert such an assert in the code without thinking how to phrase the argument.
* tree-wide: "a" -> "an"Yu Watanabe2021-06-301-1/+1
* alloc-util: simplify GREEDY_REALLOC() logic by relying on malloc_usable_size()Lennart Poettering2021-05-191-15/+15
    We recently started making more use of malloc_usable_size() and rely on it (see the string_erase() story). Given that we don't really support systems where malloc_usable_size() cannot be trusted beyond statistics anyway, let's go fully in and rework GREEDY_REALLOC() on top of it: instead of passing around and maintaining the currently allocated size everywhere, let's just derive it automatically from malloc_usable_size().

    I am mostly after this for the simplicity this brings. It also brings minor efficiency improvements I guess, but things become so much nicer to look at if we can avoid these allocation size variables everywhere.

    Note that the malloc_usable_size() man page says relying on it wasn't "good programming practice", but I think it does this for reasons that don't apply here: the greedy realloc logic specifically doesn't rely on the returned extra size, beyond the fact that it is equal or larger than what was requested.

    (This commit was supposed to be a quick patch btw, but apparently we use the greedy realloc stuff quite a bit across the codebase, so this ends up touching *a lot* of code.)
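The core idea, deriving capacity from the allocator instead of tracking it in a separate variable, can be sketched as follows. This is a simplified, glibc-specific illustration, not the real GREEDY_REALLOC():

```c
#include <malloc.h>   /* malloc_usable_size() is glibc-specific */
#include <stdlib.h>

/* Grow *p to hold at least 'need' elements of 'size' bytes. Instead of
 * carrying an allocated-size variable around, ask the allocator how big
 * the current block really is and skip the realloc when it already
 * suffices. (A real implementation would also check need * size for
 * multiplication overflow.) */
static void *greedy_realloc(void **p, size_t need, size_t size) {
        size_t usable = *p ? malloc_usable_size(*p) : 0;

        if (need * size <= usable)
                return *p;   /* current allocation is already big enough */

        void *q = realloc(*p, need * size);
        if (!q)
                return NULL;

        return *p = q;
}
```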
* tree-wide: use UINT64_MAX or friendsYu Watanabe2021-03-051-12/+12
* json: rename json_dispatch_{integer,unsigned} -> json_dispatch_{intmax,uintmax}Anita Zhang2021-02-261-2/+2
    Prompted by https://bugzilla.redhat.com/show_bug.cgi?id=1930875 in which I had previously used json_dispatch_unsigned and passed a return variable of type unsigned when json_dispatch_unsigned writes a uintmax_t.
* Move and rename parse_json_argument() functionZbigniew Jędrzejewski-Szmek2021-02-151-21/+0
    json.[ch] is a very generic implementation, and cmdline argument parsing doesn't fit there.
* shared/json: make JsonVariant.type field widerZbigniew Jędrzejewski-Szmek2021-02-101-4/+4
    pahole shows that this doesn't make a difference, but we can fit -EINVAL into .type without warnings.
* log: drop unused LogRealmYu Watanabe2021-01-251-3/+3
    No binary is built with the LOG_REALM= argument anymore. Hence, we can safely drop LogRealm now.
* json: add generic cmdline parser for --json= switchLennart Poettering2021-01-091-0/+21
* json: add new json format flag for disabling JSON outputLennart Poettering2021-01-091-0/+3
    This adds a new flag JSON_FORMAT_OFF that is a marker for "no JSON output please!".

    Of course, this flag sounds pointless in a JSON implementation, however this is useful in code that can generate JSON output, but also more human-friendly output (for example our table formatters).

    With this in place various tools that so far maintained one boolean field "arg_json" that controlled whether JSON output was requested at all, and another field "arg_json_format_flags" for selecting the precise JSON output flags, may merge them into one, simplifying code a bit.
* json: add APIs for quickly inserting hex blobs as JSON stringsLennart Poettering2020-12-171-0/+51
    This is similar to the base64 support, but fixed-size hash values are typically preferably presented as a series of hex values, hence store them here like that too.
* Merge pull request #17702 from rnhmjoj/masterLennart Poettering2020-12-161-4/+4
    Extend $SYSTEMD_COLORS to switch colors mode
| * tree-wide: avoid direct use of color macrosrnhmjoj2020-12-151-4/+4
* | json: log location also when there is no fileZbigniew Jędrzejewski-Szmek2020-12-101-0/+10
    E.g. in nss-resolve it is still useful to print the location of the error:

      src/test/test-nss.c:231: dlsym(0x0x1dc6fb0, _nss_resolve_gethostbyname2_r) → 0x0x7fdbfc53f626
      (string):1:40: JSON field ifindex is out of bounds for an interface index.

    I opted to use a partially duplicated if condition to avoid nesting. It's nice to have the log calls vertically aligned. The compiler will optimize this nicely.
* fileio: teach read_full_file_full() to read from offset/with maximum sizeLennart Poettering2020-12-011-1/+1