path: root/src/test/test-json.c
Commit message    Author    Age    Files    Lines
* json: add helper for adding variant to array suppressing duplicatesLennart Poettering2022-12-151-0/+25
|
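
As an illustration of the behaviour described in the commit subject, a minimal sketch; the helper name json_variant_append_array_nodup() and its signature are assumptions inferred from the subject, not verified against the tree:

    /* Sketch: assumes systemd's private json.h and a helper named
     * json_variant_append_array_nodup() (name assumed from the commit subject). */
    #include "json.h"

    static int append_twice(JsonVariant **array) {
            _cleanup_(json_variant_unrefp) JsonVariant *s = NULL;
            int r;

            r = json_variant_new_string(&s, "foo");
            if (r < 0)
                    return r;

            /* Appending the same value twice should leave a single element. */
            r = json_variant_append_array_nodup(array, s);
            if (r < 0)
                    return r;
            return json_variant_append_array_nodup(array, s);
    }
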
* shared/json: make it possible to specify source name for strings too, add testsZbigniew Jędrzejewski-Szmek2022-12-011-0/+63
| | | | | | | | The source would be set implicitly when parsing from a named file. But it's also useful to specify the source for cases where we're parsing a string that is already in memory. I noticed the lack of this API when trying to write tests, but it seems generally useful to be able to specify a source name when parsing things.
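
A hedged sketch of what this looks like for a string that is already in memory; the function name json_parse_with_source() and its parameter order are assumptions based on this description:

    #include "json.h"

    static int parse_inline(JsonVariant **ret) {
            unsigned line = 0, column = 0;

            /* "<test-data>" is the source name that would show up in error
             * messages instead of a file name (API shape assumed). */
            return json_parse_with_source("{\"a\": 1}", "<test-data>", 0,
                                          ret, &line, &column);
    }
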
* basic: rename util.h to logarithm.hZbigniew Jędrzejewski-Szmek2022-11-081-1/+0
| | | | | util.h is now about logarithms only, so we can rename it. Many files included util.h for no apparent reason… Those includes are dropped.
* shared/json: use different return code for empty inputZbigniew Jędrzejewski-Szmek2022-10-191-0/+18
| | | | | It is useful to distinguish whether json_parse_file() got no input or invalid input. Use different return codes for the two cases.
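
A minimal sketch of a caller telling the two cases apart; the concrete error code used for empty input (-ENODATA here) is an assumption, check the actual code:

    #include <errno.h>
    #include <stdio.h>
    #include "json.h"

    static int load_optional_config(FILE *f, JsonVariant **ret) {
            int r;

            r = json_parse_file(f, "config.json", 0, ret, NULL, NULL);
            if (r == -ENODATA)   /* assumed code for "input was empty" */
                    return 0;    /* treat as "no configuration" */
            if (r < 0)
                    return r;    /* invalid JSON or other error */
            return 1;
    }
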
* json: introduce json_append()Yu Watanabe2022-09-031-0/+15
|
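
A sketch of the intended call-site shape, assuming json_append() merges pairs built with the JSON_BUILD_* macros into an existing object variant (signature assumed from the subject):

    #include "json.h"

    static int add_hostname(JsonVariant **v) {
            /* Assumed shape: a variant pointer plus a JSON_BUILD_OBJECT()
             * expansion whose pairs get merged into *v. */
            return json_append(v, JSON_BUILD_OBJECT(
                            JSON_BUILD_PAIR("hostname", JSON_BUILD_STRING("foo"))));
    }
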
* json: use fpclassify() or its helper functionsYu Watanabe2022-07-211-6/+12
|
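
For reference, fpclassify() is plain ISO C and classifies a value without relying on comparisons that misbehave for NaN; a self-contained example:

    #include <math.h>
    #include <stdio.h>

    int main(void) {
            double values[] = { 0.0, 1.5, INFINITY, NAN };

            for (size_t i = 0; i < sizeof(values) / sizeof(values[0]); i++)
                    switch (fpclassify(values[i])) {
                    case FP_NAN:      puts("nan");      break;
                    case FP_INFINITE: puts("infinite"); break;
                    case FP_ZERO:     puts("zero");     break;
                    default:          puts("finite");
                    }
            return 0;
    }
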
* test: use fabs() as the argument is doubleYu Watanabe2022-07-211-6/+6
| | | | This also drops an unnecessary cast.
* test: JSON_BUILD_REAL nowadays expects 'double', not 'long double'Lennart Poettering2022-05-091-1/+1
| | | | | Follow-up for 337712e777bff389f53e26d5b378d2ceba7d98a8, aka "the great un-long-double-ification of 2021".
* test: Use TEST macroJan Janssen2021-11-251-79/+56
| This converts to the TEST macro where it is trivial. Some additional notable changes:
| - simplify HAVE_LIBIDN #ifdef in test-dns-domain.c
| - use saved_argc/saved_argv in test-copy.c, test-path-util.c, test-tmpfiles.c and test-unit-file.c
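
A sketch of the shape such a conversion ends up with, assuming systemd's TEST() and DEFINE_TEST_MAIN() helpers from tests.h (names taken from the test framework as described here, details unverified):

    #include "json.h"
    #include "tests.h"

    /* Previously a plain function that main() had to call explicitly;
     * TEST() registers the case and DEFINE_TEST_MAIN() generates main(). */
    TEST(build_string) {
            _cleanup_(json_variant_unrefp) JsonVariant *v = NULL;

            assert_se(json_build(&v, JSON_BUILD_STRING("hello")) >= 0);
    }

    DEFINE_TEST_MAIN(LOG_INFO);
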
* json: add new JSON_BUILD_CONST_STRING() macroLennart Poettering2021-11-251-7/+7
| This macro is like JSON_BUILD_STRING() but uses our json library's ability to use literal strings directly as JsonVariant objects. This changes all of our codebase to use the new macro whenever we build JSON objects from literal strings. (I tried to make this automatic, i.e. to detect in JSON_BUILD_STRING() whether something is a literal string nicely and thus do this stuff automatically, but I couldn't find a way.) This should reduce memory usage of our JSON code a bit: constant strings we use very often will now be shared and mapped directly from the ELF image.
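
A sketch of the kind of call-site change this implies (hedged; the exact expansion of the macro is not shown in this log):

    #include "json.h"

    static int build_type(JsonVariant **ret) {
            /* JSON_BUILD_STRING("user") would allocate a variant holding a copy;
             * JSON_BUILD_CONST_STRING("user") references the literal directly,
             * so the string payload can live in the ELF image. */
            return json_build(ret, JSON_BUILD_OBJECT(
                            JSON_BUILD_PAIR("type", JSON_BUILD_CONST_STRING("user"))));
    }
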
* json: don't assert() if we add a NULL element via json_variant_set_field()Lennart Poettering2021-11-251-0/+24
| | | | | | The rest of our JSON code tries hard to magically convert NULL inputs into "null" JSON objects; let's make sure this also works with json_variant_set_field().
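
A sketch of the behaviour being tested, assuming the usual (variant, field name, value) argument order of json_variant_set_field():

    #include "json.h"

    static int set_optional_field(JsonVariant **v) {
            /* A NULL value should become a JSON "null" member rather than
             * triggering an assert(), which is the point of this commit. */
            return json_variant_set_field(v, "maybe", NULL);
    }
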
* shared/json: use int64_t instead of intmax_tZbigniew Jędrzejewski-Szmek2021-11-181-12/+12
| | | | | | | | | | | We were already asserting that the intmax_t and uintmax_t types are the same as int64_t and uint64_t. Pretty much everywhere in the code base we use the latter types. In principle intmax_t could be something different on some new architecture, and then the code would fail to compile or behave differently. We actually do not want the code to behave differently on those architectures, because that'd break interoperability. So let's just use int64_t/uint64_t, since that's what we intend to use.
* shared/json: stop using long doubleZbigniew Jędrzejewski-Szmek2021-11-181-17/+12
| It seems that the implementation of long double on ppc64el doesn't really work: long double cast to integer and back compares as unequal to itself. Strangely, this effect happens without optimization and both with gcc and clang, so it seems to be an effect of how long double is implemented by the architecture. Dumping the values shows the following pattern:
|
|   00 00 00 00 00 00 24 40 00 00 00 00 00 00 00 00   # long double v = 10;
|   00 00 00 00 00 00 24 40 00 00 00 00 00 00 80 39   # (long double)(intmax_t) v
|
| Instead of trying to make this work, I think it's most reasonable to switch to normal doubles. Notably, we had no tests for floating point behaviour. The first test we added (for values not even outside the range of double) showed failures.
|
| Common implementations of JSON (in particular JavaScript) use 64 bit double. If we stick to this, users are likely to be happy when they exchange data with those tools. Exporting values that cannot be represented in other tools would just cause interop problems.
|
| I don't think the extra precision would be much used. Long double seems to make most sense as a transient format used in calculations to get extra precision in operations, and not as a storage or exchange format. So I expect low-level numerical routines that have to know about hardware to make use of it, but it shouldn't be used by our (higher-level) system library. In particular, we would have to add tests for implementations conforming to IEEE 754 and those that don't conform, and account for various implementation differences. It just doesn't seem worth the effort.
|
| https://en.wikipedia.org/wiki/Long_double#Implementations shows that the situation is "complicated":
|
| > On the x86 architecture, most C compilers implement long double as the 80-bit extended precision type supported by x86 hardware. An exception is Microsoft Visual C++ for x86, which makes long double a synonym for double. The Intel C++ compiler on Microsoft Windows supports extended precision, but requires the /Qlong-double switch for long double to correspond to the hardware's extended precision format.
| > Compilers may also use long double for the IEEE 754 quadruple-precision binary floating-point format (binary128). This is the case on HP-UX, Solaris/SPARC, MIPS with the 64-bit or n32 ABI, 64-bit ARM (AArch64) (on operating systems using the standard AAPCS calling conventions, such as Linux), and z/OS with FLOAT(IEEE). Most implementations are in software, but some processors have hardware support.
| > On some PowerPC and SPARCv9 machines, long double is implemented as a double-double arithmetic, where a long double value is regarded as the exact sum of two double-precision values, giving at least a 106-bit precision; with such a format, the long double type does not conform to the IEEE floating-point standard. Otherwise, long double is simply a synonym for double (double precision), e.g. on 32-bit ARM, 64-bit ARM (AArch64) (on Windows and macOS) and on 32-bit MIPS (old ABI, a.k.a. o32).
| > With the GNU C Compiler, long double is 80-bit extended precision on x86 processors regardless of the physical storage used for the type (which can be either 96 or 128 bits). On some other architectures, long double can be double-double (e.g. on PowerPC) or 128-bit quadruple precision (e.g. on SPARC). As of gcc 4.3, a quadruple precision is also supported on x86, but as the nonstandard type __float128 rather than long double.
| > Although the x86 architecture, and specifically the x87 floating-point instructions on x86, supports 80-bit extended-precision operations, it is possible to configure the processor to automatically round operations to double (or even single) precision. Conversely, in extended-precision mode, extended precision may be used for intermediate compiler-generated calculations even when the final results are stored at a lower precision (i.e. FLT_EVAL_METHOD == 2). With gcc on Linux, 80-bit extended precision is the default; on several BSD operating systems (FreeBSD and OpenBSD), double-precision mode is the default, and long double operations are effectively reduced to double precision. (NetBSD 7.0 and later, however, defaults to 80-bit extended precision). However, it is possible to override this within an individual program via the FLDCW "floating-point load control-word" instruction. On x86_64, the BSDs default to 80-bit extended precision. Microsoft Windows with Visual C++ also sets the processor in double-precision mode by default, but this can again be overridden within an individual program (e.g. by the _controlfp_s function in Visual C++). The Intel C++ Compiler for x86, on the other hand, enables extended-precision mode by default. On IA-32 OS X, long double is 80-bit extended precision.
|
| So, in short, the only thing that can be said is that nothing can be said. In common scenarios, we are getting only a bit of extra precision (80 bits instead of 64), but use space for padding. In other scenarios we are getting no extra precision. And the variance in implementations is a big issue: we can expect strange differences in behaviour between architectures, systems, compiler versions, compilation options, and even the other things that the program is doing.
|
| Fixes #21390.
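
A minimal reproduction sketch of the round-trip comparison described above; on most architectures the assertion holds, while on the double-double long double setup it reportedly does not:

    #include <assert.h>
    #include <stdint.h>

    int main(void) {
            long double v = 10;

            /* Cast to an integer type and back; with IBM double-double
             * long double this was observed to compare unequal. */
            assert(v == (long double) (intmax_t) v);
            return 0;
    }
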
* test-json: add test that makes sure floats are somewhat reasonably implementedLennart Poettering2021-11-151-0/+54
| | | | | Test that we don't lose accuracy without bounds for extreme values, and validate that nan/inf/-inf actually get converted to null properly.
* license: LGPL-2.1+ -> LGPL-2.1-or-laterYu Watanabe2020-11-091-1/+1
|
* test-json: add function headersZbigniew Jędrzejewski-Szmek2020-09-011-8/+28
|
* shared/json: reject non-utf-8 stringsZbigniew Jędrzejewski-Szmek2020-09-011-1/+1
| | | | | | | | | | | | | JSON strings must be utf-8-clean. We also verify this in json_parse_string(), so we would reject a message with invalid utf-8 anyway. It would probably be slightly cheaper to detect non-conforming strings in serialization, but then we'd have to fail serialization. By doing this early, we give the caller a chance to handle the error nicely. The test is adjusted to contain a valid utf-8 string after decoding of the utf-32 encoding in json ("विवेकख्यातिरविप्लवा हानोपायः।", something about the cessation of ignorance).
* json: use our regular way to turn off compiler warningsLennart Poettering2020-05-251-3/+2
|
* json: add concept of normalizationLennart Poettering2019-12-021-2/+90
| | | | | | | | | | | | | | | | | | | Let's add a concept of normalization: as preparation for signing json records let's add a mechanism to bring JSON records into a well-defined order so that we can safely validate JSON records. This adds two booleans to each JsonVariant object: "sorted" and "normalized". The latter indicates whether a variant is fully sorted (i.e. all keys of objects listed in alphabetical order) recursively down the tree. The former is a weaker property: it only checks whether the keys of the object itself are sorted. All variants which are "normalized" are also "sorted", but not vice versa. The knowledge of the "sorted" property is then used to optimize searching for keys in the variant by using bisection. Both properties are determined at the moment the variants are allocated. Since our objects are immutable this is safe.
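
A sketch of how the property might be used before signing; json_variant_normalize() (and a weaker json_variant_sort()) are the names I would expect this commit to introduce, but treat them as assumptions:

    #include "json.h"

    static int prepare_for_signing(JsonVariant **v) {
            /* Recursively sort all object keys so serialization is
             * well-defined ("normalized"); function name assumed. */
            int r = json_variant_normalize(v);
            if (r < 0)
                    return r;

            /* Key lookups on sorted objects can then use bisection internally. */
            return 0;
    }
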
* json: add flags parameter to json_parse_file(), for parsing "sensitive" dataLennart Poettering2019-12-021-6/+6
| | | | | | | This will call json_variant_sensitive() internally while parsing for each allocated sub-variant. This is better than calling it a posteriori at the end, because partially parsed variants will always be properly erased from memory this way.
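
A sketch of passing the new flags parameter for secret material; the flag name JSON_PARSE_SENSITIVE is an assumption based on this description:

    #include <stdio.h>
    #include "json.h"

    static int load_secret(FILE *f, JsonVariant **ret) {
            /* With the (assumed) JSON_PARSE_SENSITIVE flag each sub-variant is
             * marked sensitive during parsing, so even partially parsed data
             * gets erased from memory on failure. */
            return json_parse_file(f, "/run/secret.json", JSON_PARSE_SENSITIVE,
                                   ret, NULL, NULL);
    }
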
* shared/varlink: add missing terminator in json stringsZbigniew Jędrzejewski-Szmek2019-05-301-0/+4
| | | | | | | | | Should finally fix oss-fuzz-14688. 8688c29b5aece49805a244676cba5bba0196f509 wasn't enough. The buffer retrieved from the memstream has exactly the size of the written data. When we do write(f, s, strlen(s)), no terminating NUL is written, and the buffer is not (necessarily) a proper C string.
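
A plain-C illustration of the pitfall and one way to avoid it: when the buffer is filled with raw byte writes, the terminating NUL has to be written explicitly before the buffer is handed to string functions. Sketch only, not the actual fix:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void) {
            char *buf = NULL;
            size_t size = 0;
            FILE *f = open_memstream(&buf, &size);
            if (!f)
                    return 1;

            const char *s = "{\"hello\":\"world\"}";
            fwrite(s, 1, strlen(s), f);  /* writes the bytes, but no NUL */
            fputc('\0', f);              /* terminate before using buf as a C string */
            fclose(f);

            printf("%s\n", buf);
            free(buf);
            return 0;
    }
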
* Add fmemopen_unlocked() and use unlocked ops in fuzzers and some other testsZbigniew Jędrzejewski-Szmek2019-04-121-1/+2
| | | | This might make things marginally faster. I didn't benchmark though.
* test-json: use standard test introZbigniew Jędrzejewski-Szmek2019-02-251-4/+2
|
* test-json: avoid deep stack recursion under msanZbigniew Jędrzejewski-Szmek2019-02-251-0/+7
|
* test-json: do not pass ephemeral array as intializer to JSON_BUILD_STRVZbigniew Jędrzejewski-Szmek2019-02-111-2/+4
| Fixes #11600. The code was effectively doing:
|
|   json_build(..., ({ char **_x = ((char**) ((const char*[]) {"one", "two", "three", "four", NULL })); _x; }));
|
| but there was no guarantee that the storage for the array that _x points to survives past the end of the block. Essentially, STRV_MAKE cannot be used inline inside of a block like this.
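
One way to avoid the lifetime problem, as a sketch (identifiers illustrative, not necessarily the actual fix): give the array storage that spans the whole json_build() call instead of creating it inside the builder's statement expression.

    #include "json.h"
    #include "strv.h"

    static int build_list(JsonVariant **ret) {
            /* The compound literal behind STRV_MAKE() now lives for the whole
             * function body, so it is still valid while json_build() runs. */
            char **l = STRV_MAKE("one", "two", "three", "four");

            return json_build(ret, JSON_BUILD_OBJECT(
                            JSON_BUILD_PAIR("list", JSON_BUILD_STRV(l))));
    }
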
* Delete duplicate linesTopi Miettinen2019-01-121-1/+0
| Found by inspecting results of running this small program:
|
|   int main(int argc, const char **argv) {
|           for (int i = 1; i < argc; i++) {
|                   FILE *f;
|                   char line[1024], prev[1024], *r;
|                   int lineno;
|
|                   prev[0] = '\0';
|                   lineno = 1;
|                   f = fopen(argv[i], "r");
|                   if (!f)
|                           exit(1);
|                   do {
|                           r = fgets(line, sizeof(line), f);
|                           if (!r)
|                                   break;
|                           if (strcmp(line, prev) == 0)
|                                   printf("%s:%d: error: dup %s", argv[i], lineno, line);
|                           lineno++;
|                           strcpy(prev, line);
|                   } while (!feof(f));
|                   fclose(f);
|           }
|   }
* test-json: check absolute and relative difference in floating point testZbigniew Jędrzejewski-Szmek2019-01-031-9/+7
| | | | | | | | | | | The test fails under valgrind, so there was an exception for valgrind. Unfortunately that check only works when valgrind-devel headers are available during build. But it is possible to have just valgrind installed, or simply install it after the build, and then "valgrind test-json" would fail. It also seems that even without valgrind, this fails on some arm32 CPUs. Let's do the usual-style test for absolute and relative differences.
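
The usual shape of such a check in plain C: accept either a small absolute difference (relevant near zero) or a small relative difference (relevant for large magnitudes). The tolerances below are placeholders, not the values used in test-json.c:

    #include <math.h>
    #include <stdbool.h>

    static bool close_enough(double a, double b) {
            double diff = fabs(a - b);

            return diff <= 1e-10 ||                          /* absolute difference */
                   diff <= 1e-10 * fmax(fabs(a), fabs(b));   /* relative difference */
    }
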
* fileio: when reading a full file into memory, refuse inner NUL bytesLennart Poettering2018-12-171-1/+1
| | | | Just some extra care to avoid any ambiguities in what we read.
* json: teach json builder "conditional" object fieldsLennart Poettering2018-11-281-0/+16
| | | | | | | | | | | | Quite often when we generate objects, some fields should only be generated under some conditions. Let's add high-level support for that. Matching the existing JSON_BUILD_PAIR(), this adds JSON_BUILD_PAIR_CONDITIONAL(), which is very similar but takes an additional parameter: a boolean condition. If "true" this acts like JSON_BUILD_PAIR(), but if false then the whole pair is suppressed. This sounds simple, but requires a tiny bit of complexity: when complex sub-variants are used in fields, then we also need to suppress them.
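
A sketch of the resulting call-site shape, assuming the condition is passed as the first argument of JSON_BUILD_PAIR_CONDITIONAL() followed by the usual pair arguments:

    #include "json.h"

    static int build_user(JsonVariant **ret, const char *shell) {
            /* The "shell" pair is emitted only when a shell is set;
             * the exact argument order of the macro is assumed here. */
            return json_build(ret, JSON_BUILD_OBJECT(
                            JSON_BUILD_PAIR("name", JSON_BUILD_STRING("lennart")),
                            JSON_BUILD_PAIR_CONDITIONAL(!!shell, "shell", JSON_BUILD_STRING(shell))));
    }
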
* json: add support for using static const strings directly as JsonVariant objectsLennart Poettering2018-10-181-4/+5
| | | | | | This is a nice little optimization when using static const strings: we can now use them directly as JsonVariant objects, without any additional allocation.
* json: enforce a maximum nesting depth for json variantsLennart Poettering2018-10-181-0/+34
| | | | | | | | | | Simply a safety precaution, so that json objects we read are not arbitrarily deep and code that processes json objects recursively can't be easily exploited (by hitting stack limits). Follow-up for oss-fuzz#10908. (What's nice is that we can accommodate this counter without increasing the size of the JsonVariant object.)
* test: use fabsl instead of fabs as json_variant_real() returns 'long double'Yu Watanabe2018-10-141-1/+1
|
* json: add testLennart Poettering2018-10-101-0/+411
|
* util-lib: drop json parserLennart Poettering2016-02-131-202/+0
| | | | | | | | | This was used by the dkr logic, which is gone now, hence remove this too. Should we need it one day again, the git history never forgets... Note that this only covers the JSON parser. The JSON generator used by "journalctl -o json" remains, as it's much, much simpler and requires no infrastructure except printf() and the most basic escaping.
* tree-wide: remove Emacs lines from all filesDaniel Mack2016-02-101-2/+0
| | | | | This should be handled fine now by .dir-locals.el, so there's no need to carry that stuff in every file.
* util-lib: split out allocation calls into alloc-util.[ch]Lennart Poettering2015-10-271-0/+1
|
* util-lib: split our string related calls from util.[ch] into its own file string-util.[ch]Lennart Poettering2015-10-241-1/+2
| | | | | | | | | | | | | | There are more than enough calls doing string manipulations to deserve their own file, hence do something about it. This patch also sorts the #include blocks of all files that needed to be updated, according to the sorting suggestions from CODING_STYLE. Since pretty much every file needs our string manipulation functions, this effectively means that most files have sorted #include blocks now. Also touches a few unrelated include files.
* json: minor style fixesv220Lennart Poettering2015-05-211-2/+3
|
* test.json: fix build on x86-32 where int and intmax_t differLennart Poettering2015-05-211-1/+1
|
* json: fix a mem leakThomas Hindoe Paaboel Andersen2015-05-191-3/+1
|
* test/test-json: Tests for the tokenizer bugfix and the DOM parserPavel Odvody2015-05-191-0/+97
| | | | The DOM parser tests are accompanied by structure and element analysis.
* remove unused includesThomas Hindoe Paaboel Andersen2015-02-231-1/+0
| | | | | | This patch removes includes that are not used. The removals were found with include-what-you-use, which checks whether any of the symbols from a header are in use.
* shared: json - support escaping utf16 surrogate pairsTom Gundersen2014-12-221-0/+3
| | | | | | We originally only supported escaping ucs2 encoded characters (as \uxxxx). This only covers the BMP. Support escaping also utf16 surrogate pairs (in the form \uxxxx\uyyyy) to cover all of unicode.
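
For reference, the arithmetic behind a \uxxxx\uyyyy escape in plain C: codepoints above the BMP are biased by 0x10000 and split into a high and a low surrogate, so U+1F600 becomes \uD83D\uDE00:

    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
            uint32_t cp = 0x1F600;                 /* outside the BMP */
            uint32_t v = cp - 0x10000;
            uint16_t high = 0xD800 + (v >> 10);    /* top 10 bits */
            uint16_t low = 0xDC00 + (v & 0x3FF);   /* bottom 10 bits */

            printf("\\u%04X\\u%04X\n", (unsigned) high, (unsigned) low);
            assert(high == 0xD83D && low == 0xDE00);
            return 0;
    }
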
* shared: utf8 - support ucs4 -> utf8Tom Gundersen2014-12-221-0/+1
| | | | | Originally we only supported ucs2, so move the ucs4 version from libsystemd-terminal to shared and use that everywhere.
* test-json: use fabsThomas Hindoe Paaboel Andersen2014-12-161-1/+3
|
* shared: add minimal JSON tokenizerLennart Poettering2014-12-151-0/+101