Commit message | Author | Age | Files | Lines
...
* Handle the "und" locale in ICU versions 54 and older.  (Jeff Davis, 2023-03-23, 4 files changed, -1/+44)

    The "und" locale is an alternative spelling of the root locale, but it was not recognized until ICU 55. To maintain common behavior across all supported ICU versions, check for "und" and replace with "root" before opening.

    Previously, the lack of support for "und" was dangerous, because versions 54 and older fall back to the environment when a locale is not found. If the user specified "und" for the language (which is expected and documented), it could not only resolve to the wrong collator, but it could unexpectedly change (which could lead to corrupt indexes).

    This effectively reverts commit d72900bded, which worked around the problem for the built-in "unicode" collation, and is no longer necessary.

    Discussion: https://postgr.es/m/60da0cecfb512a78b8666b31631a636215d8ce73.camel@j-davis.com
    Discussion: https://postgr.es/m/0c6fa66f2753217d2a40480a96bd2ccf023536a1.camel@j-davis.com
    Reviewed-by: Peter Eisentraut

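    A minimal sketch of the check described above, assuming a plain locale string (the actual code also has to cope with attribute suffixes and language tags); the function name is illustrative, not PostgreSQL's API:

        #include <string.h>
        #include <unicode/ucol.h>

        static UCollator *
        open_collator(const char *loc, UErrorCode *status)
        {
            /* ICU 54 and older only know the root locale as "root", not "und" */
            if (loc != NULL && strcmp(loc, "und") == 0)
                loc = "root";
            return ucol_open(loc, status);
        }
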
* amcheck: Fix a few bugs in new update chain validation.  (Robert Haas, 2023-03-23, 1 file changed, -3/+7)

    We shouldn't set successor[whatever] to an offset number that is less than FirstOffsetNumber or more than maxoff. We already avoided that for redirects, but not for CTID links. Allowing bad offset numbers into the successor[] array causes core dumps.

    We shouldn't use HeapTupleHeaderIsHotUpdated() because it checks stuff other than the status of the infomask2 bit HEAP_HOT_UPDATED. We only care about the status of that bit, not the other stuff that HeapTupleHeaderIsHotUpdated() checks. This mistake can cause verify_heapam() to report corruption when none is present.

    The first hunk of this patch was written by me. The other two were written by Andres Freund. This could probably do with more review before commit, but I'd like to try to get the buildfarm green again sooner rather than later.

    Discussion: http://postgr.es/m/20230322204552.s6cv3ybqkklhhybb@awork3.anarazel.de

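    A sketch of the two checks described above, with illustrative variable names (the surrounding verify_heapam() structure is omitted):

        /* Only record a successor offset that could be a valid line pointer. */
        if (nextoffnum >= FirstOffsetNumber && nextoffnum <= maxoff)
            successor[curoffnum] = nextoffnum;

        /*
         * Test only the HEAP_HOT_UPDATED bit of t_infomask2, rather than
         * HeapTupleHeaderIsHotUpdated(), which folds in other conditions.
         */
        if (tuphdr->t_infomask2 & HEAP_HOT_UPDATED)
        {
            /* ... the successor must be a heap-only tuple ... */
        }
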
* Add missing "-I." flag when building pg_bsd_indent.  (Tom Lane, 2023-03-23, 1 file changed, -0/+2)

    This is evidently not required by most compilers, but buildfarm member fairywren is unhappy without it. It looks like the meson infrastructure has this right already.

* Minor comment improvements for compress_lz4  (Tomas Vondra, 2023-03-23, 1 file changed, -5/+20)

    Author: Tomas Vondra
    Reviewed-by: Georgios Kokolatos, Justin Pryzby
    Discussion: https://postgr.es/m/33496f7c-3449-1426-d568-63f6bca2ac1f@gmail.com

* Unify buffer sizes in pg_dump compression API  (Tomas Vondra, 2023-03-23, 5 files changed, -25/+21)

    Prior to the introduction of the compression API in e9960732a9, pg_dump would use the ZLIB_IN_SIZE/ZLIB_OUT_SIZE constants to size input/output buffers. Commit 0da243fed0 introduced similar constants for LZ4, but while gzip defined both buffers to be 4kB, LZ4 used 4kB and 16kB without any clear reasoning why that's desirable. Furthermore, parts of the code unaware of which compression is used (e.g. pg_backup_directory.c) continued to use ZLIB_OUT_SIZE directly.

    Simplify by replacing the various constants with DEFAULT_IO_BUFFER_SIZE, set to 4kB. The compression implementations still have an option to use a custom value, but considering 4kB was fine for 20+ years, I find that unlikely (and we'd probably just increase the default buffer size).

    Author: Georgios Kokolatos
    Reviewed-by: Tomas Vondra, Justin Pryzby
    Discussion: https://postgr.es/m/33496f7c-3449-1426-d568-63f6bca2ac1f@gmail.com

* Improve type handling in pg_dump's compress file API  (Tomas Vondra, 2023-03-23, 7 files changed, -115/+148)

    After 0da243fed0 got committed, we've received a report about a compiler warning, related to the new LZ4File_gets() function:

        compress_lz4.c: In function 'LZ4File_gets':
        compress_lz4.c:492:19: warning: comparison of unsigned expression in '< 0' is always false [-Wtype-limits]
          492 |         if (dsize < 0)

    The reason is very simple - dsize is declared as size_t, which is an unsigned integer, and thus the check is pointless and we might fail to notice an error in some cases (or fail in a strange way a bit later).

    The warning could have been silenced by simply changing the type, but we realized the API mostly assumes all the libraries use the same types and report errors the same way (e.g. by returning 0 and/or negative value). But we can't make this assumption - the gzip/lz4 libraries already disagree on some of this, and even if they did, a library added in the future might not.

    The right solution is to define what the API does, and translate the library-specific behavior in a consistent way (so that the internal errors are not exposed to users of our compression API). So this adjusts the data types in a couple places, so that we don't miss library errors, and simplifies and unifies the error reporting to simply return true/false (instead of e.g. size_t).

    While at it, make sure LZ4File_open_write() does not clobber errno in case open_func() fails.

    Author: Georgios Kokolatos
    Reported-by: Alexander Lakhin
    Reviewed-by: Tomas Vondra, Justin Pryzby
    Discussion: https://postgr.es/m/33496f7c-3449-1426-d568-63f6bca2ac1f@gmail.com

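    A generic sketch of the convention described above, not the actual pg_dump API: keep the library's signed return value long enough to test it, then report success or failure as a bool (lib_read() is an assumed placeholder for a compression-library call):

        #include <stdbool.h>
        #include <stddef.h>
        #include <sys/types.h>

        static bool
        compressed_read(void *handle, void *buf, size_t count, size_t *nread)
        {
            ssize_t     ret = lib_read(handle, buf, count); /* assumed library call */

            if (ret < 0)
                return false;       /* let the caller fetch the error message */
            *nread = (size_t) ret;
            return true;
        }
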
* Wrap ICU ucol_open().  (Jeff Davis, 2023-03-23, 1 file changed, -27/+43)

    Hide details of supporting older ICU versions in a wrapper function. The current code only needs to handle icu_set_collation_attributes(), but a subsequent commit will add additional version-specific code.

    Discussion: https://postgr.es/m/7ee414ad-deb5-1144-8a0e-b34ae3b71cd5@enterprisedb.com
    Reviewed-by: Peter Eisentraut

* Ignore generated columns during apply of update/delete.  (Amit Kapila, 2023-03-23, 2 files changed, -5/+16)

    We failed to apply updates and deletes when REPLICA IDENTITY FULL was used for a table having generated columns, because generated columns were not ignored when comparing tuples from the publisher and subscriber during apply of updates and deletes.

    Author: Onder Kalaci
    Reviewed-by: Shi yu, Amit Kapila
    Backpatch-through: 12
    Discussion: https://postgr.es/m/CACawEhVQC9WoofunvXg12aXtbqKnEgWxoRx3+v8q32AWYsdpGg@mail.gmail.com

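    An illustrative sketch of the comparison loop, assuming a standard TupleDesc walk (it also skips dropped columns, which a companion commit in this log addresses); not the exact code:

        for (int attnum = 0; attnum < desc->natts; attnum++)
        {
            Form_pg_attribute att = TupleDescAttr(desc, attnum);

            /* Generated and dropped columns cannot be compared meaningfully. */
            if (att->attgenerated || att->attisdropped)
                continue;

            /* ... compare column attnum of the publisher and subscriber tuples ... */
        }
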
* Allow logical replication to copy tables in binary format.  (Amit Kapila, 2023-03-23, 5 files changed, -18/+216)

    This patch allows copying tables in the binary format during table synchronization when the binary option for a subscription is enabled. Previously, tables were copied in text format even if the subscription was created with the binary option enabled. Copying tables in binary format may reduce the time spent, depending on column types.

    A binary copy for initial table synchronization is supported only when both publisher and subscriber are v16 or later.

    Author: Melih Mutlu
    Reviewed-by: Peter Smith, Shi yu, Euler Taveira, Vignesh C, Kuroda Hayato, Osumi Takamichi, Bharath Rupireddy, Hou Zhijie
    Discussion: https://postgr.es/m/CAGPVpCQvAziCLknEnygY0v1-KBtg%2BOm-9JHJYZOnNPKFJPompw%40mail.gmail.com

* Improve a bit the tests of pg_walinspect  (Michael Paquier, 2023-03-23, 4 files changed, -45/+47)

    This commit improves the tests of pg_walinspect in a few ways:
    - Remove aggregates for queries that should fail. If the code is reworked in such a way that the behavior of these queries changes, we would get more information from them, written this way.
    - Expect at least one record reported in the valid queries doing scans across ranges, rather than zero records.
    - Adjust a few comments, for consistency.

    Author: Bharath Rupireddy
    Discussion: https://postgr.es/m/CALj2ACVaoXW3nJD9zq8E66BEf-phgJfFcKRVJq9GXkuX0b3ULQ@mail.gmail.com

* Improve the naming of Parallel Hash Join phases.  (Thomas Munro, 2023-03-23, 6 files changed, -118/+119)

    * Commit 3048898e dropped -ING from PHJ wait event names. Update the corresponding barrier phase names to match.
    * Rename the "DONE" phases to "FREE". That's symmetrical with "ALLOCATE", and names the activity that actually happens in that phase (as we do for the other phases) rather than a state. The bug fixed by commit 8d578b9b might have been more obvious with this name.
    * Rename the batch/bucket growth barriers' "ALLOCATE" phases to "REALLOCATE", a better description of what they do.
    * Update the high level comments about phases to highlight, with an asterisk, phases that are executed by a single process (mostly memory management phases).

    No behavior change, as this is just improving internal identifiers. The only user-visible sign of this is that a couple of wait events' display names change from "...Allocate" to "...Reallocate" in pg_stat_activity, to stay in sync with the internal names.

    Reviewed-by: Melanie Plageman <melanieplageman@gmail.com>
    Discussion: https://postgr.es/m/CA%2BhUKG%2BMDpwF2Eo2LAvzd%3DpOh81wUTsrwU1uAwR-v6OGBB6%2B7g%40mail.gmail.com

* Allow locking updated tuples in tuple_update() and tuple_delete()  (Alexander Korotkov, 2023-03-23, 6 files changed, -186/+285)

    Currently, in read committed transaction isolation mode (the default), we have the following sequence of actions when tuple_update()/tuple_delete() finds the tuple updated by a concurrent transaction.

    1. Attempt to update/delete the tuple with tuple_update()/tuple_delete(), which returns TM_Updated.
    2. Lock the tuple with tuple_lock().
    3. Re-evaluate the plan qual (recheck if we still need to update/delete and calculate the new tuple for update).
    4. Second attempt to update/delete the tuple with tuple_update()/tuple_delete(). This attempt should be successful, since the tuple was previously locked.

    This patch eliminates step 2 by taking the lock during the first tuple_update()/tuple_delete() call. The heap table access method saves some effort by checking the updated tuple once instead of twice. Future undo-based table access methods, which will start from the latest row version, can immediately place a lock there.

    The code in nodeModifyTable.c is simplified by removing the nested switch/case.

    Discussion: https://postgr.es/m/CAPpHfdua-YFw3XTprfutzGp28xXLigFtzNbuFY8yPhqeq6X5kg%40mail.gmail.com
    Reviewed-by: Aleksander Alekseev, Pavel Borisov, Vignesh C, Mason Sharp
    Reviewed-by: Andres Freund, Chris Travers

* Evade extra table_tuple_fetch_row_version() in ExecUpdate()/ExecDelete()  (Alexander Korotkov, 2023-03-23, 1 file changed, -13/+35)

    When we lock a tuple using table_tuple_lock(), we fetch the locked tuple into the slot at the same time. In this case we can skip the extra table_tuple_fetch_row_version(), since we have already fetched the 'old' tuple and nobody can change it concurrently while it is locked.

    Discussion: https://postgr.es/m/CAPpHfdua-YFw3XTprfutzGp28xXLigFtzNbuFY8yPhqeq6X5kg%40mail.gmail.com
    Reviewed-by: Aleksander Alekseev, Pavel Borisov, Vignesh C, Mason Sharp
    Reviewed-by: Andres Freund, Chris Travers

* Fix new test case to work on (some?) big-endian architectures.  (Tom Lane, 2023-03-22, 1 file changed, -2/+2)

    Use of pack("L") gets around the basic endian problem, but it doesn't deal with the fact that the order of the bitfields within the struct may differ. This patch fixes it to work with gcc on NetBSD/macppc, but I wonder whether that will be enough --- in principle, there could be four different combinations of bitpatterns needed here.

    Discussion: https://postgr.es/m/1650745.1679513221@sss.pgh.pa.us

* Fix initdb's handling of min_wal_size and max_wal_size.  (Tom Lane, 2023-03-22, 1 file changed, -6/+7)

    In commit 3e51b278d, I misinterpreted the coding in setup_config() as setting min_wal_size and max_wal_size to compile-time-constant values. But it's not: there's a hidden dependency on --wal-segsize. Therefore leaving these variables commented out is the wrong thing.

    Per report from Andres Freund.

    Discussion: https://postgr.es/m/20230322200751.jvfvsuuhd3hgm6vv@awork3.anarazel.de

* Reduce memory leakage in initdb.  (Tom Lane, 2023-03-22, 1 file changed, -53/+34)

    While testing commit 3e51b278d, I noted that initdb leaks about a megabyte worth of data due to the sloppy bookkeeping in its string-manipulating code. That's not a huge amount on modern machines, but it's still kind of annoying, and it's easy to fix by recognizing that we might as well treat these arrays of strings as modifiable-in-place. There's no caller that cares about preserving the old state of the array after replace_token or replace_guc_value.

    With this fix, valgrind sees only a few hundred bytes leaked during an initdb run.

    Discussion: https://postgr.es/m/2844176.1674681919@sss.pgh.pa.us

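    A sketch of the modify-in-place approach described above; replace_string() is a hypothetical helper, not initdb's actual API:

        for (int i = 0; lines[i] != NULL; i++)
        {
            if (strstr(lines[i], token) != NULL)
            {
                char   *repl = replace_string(lines[i], token, replacement);

                free(lines[i]);     /* no caller needs the previous contents */
                lines[i] = repl;
            }
        }
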
* Add "-c name=value" switch to initdb.Tom Lane2023-03-223-117/+385
| | | | | | | | | | | | | This option, or its long form --set, sets the GUC "name" to "value". The setting applies in the bootstrap and standalone servers run by initdb, and is also written into the generated postgresql.conf. This can save an extra editing step when creating a new cluster, but the real use-case is for coping with situations where the bootstrap server fails to start due to environmental issues; for example, if it's necessary to force huge_pages to off. Discussion: https://postgr.es/m/2844176.1674681919@sss.pgh.pa.us
* Fix memory leak and inefficiency in CREATE DATABASE ... STRATEGY WAL_LOG  (Andres Freund, 2023-03-22, 1 file changed, -3/+4)

    RelationCopyStorageUsingBuffer() did not free the strategies used to access the source / target relation. The memory was released at the end of the transaction, but when using a template database with a lot of relations, the temporary leak can become prohibitively big.

    RelationCopyStorageUsingBuffer() acquired the buffer for the target relation with RBM_NORMAL, therefore requiring a read of a block guaranteed to be zero. Use RBM_ZERO_AND_LOCK instead.

    Reviewed-by: Robert Haas <robertmhaas@gmail.com>
    Discussion: https://postgr.es/m/20230321070113.o2vqqxogjykwgfrr@awork3.anarazel.de
    Backpatch: 15-, where STRATEGY WAL_LOG was introduced

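    A sketch of the shape of both fixes, under the assumption of one access strategy per relation (variable names illustrative, not the committed code):

        /* Extend the target with RBM_ZERO_AND_LOCK; the block is known to be new. */
        dstbuf = ReadBufferWithoutRelcache(dstlocator, forkNum, blkno,
                                           RBM_ZERO_AND_LOCK, bstrategy_dst,
                                           permanent);
        /* ... copy the source page into dstbuf, WAL-log it, release ... */

        /* Release the strategies instead of leaving them until end of transaction. */
        FreeAccessStrategy(bstrategy_src);
        FreeAccessStrategy(bstrategy_dst);
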
* Teach verify_heapam() to validate update chains within a page.  (Robert Haas, 2023-03-22, 2 files changed, -17/+524)

    Prior to this commit, we only consider each tuple or line pointer on the page in isolation, but now we can do some validation of a line pointer against its successor. For example, a redirect line pointer shouldn't point to another redirect line pointer, and if a tuple is HOT-updated, the result should be a heap-only tuple.

    Himanshu Upadhyaya and Robert Haas, reviewed by Aleksander Alekseev, Andres Freund, and Peter Geoghegan.

* doc: Add description of some missing monitoring functions  (Michael Paquier, 2023-03-22, 1 file changed, -0/+28)

    This commit adds some documentation about two monitoring functions:
    - pg_stat_get_xact_blocks_fetched()
    - pg_stat_get_xact_blocks_hit()

    The descriptions of these functions were removed in ddfc2d9, then later simplified by 5f2b089, under the assumption that all the functions whose descriptions were removed are used in system views. Unfortunately, some of them are not used in any system view, so they lacked documentation. This gap has existed in the docs for a long time, so backpatch all the way down.

    Reported-by: Michael Paquier
    Author: Bertrand Drouvot
    Reviewed-by: Kyotaro Horiguchi
    Discussion: https://postgr.es/m/ZBeeH5UoNkTPrwHO@paquier.xyz
    Backpatch-through: 11

* Fix a couple of typos  (Michael Paquier, 2023-03-22, 3 files changed, -5/+5)

    PL/pgSQL was misspelled in a few places, so fix these.

    Author: Zhang Mingli
    Reviewed-by: Richard Guo
    Discussion: https://postgr.es/m/1bd41572-9cd9-465e-9f59-ee45385e51b4@Spark

* Support language tags in older ICU versions (53 and earlier).  (Jeff Davis, 2023-03-21, 4 files changed, -11/+50)

    By calling uloc_canonicalize() before parsing the attributes, the existing locale attribute parsing logic works on language tags as well.

    Fix a small memory leak, too.

    Discussion: http://postgr.es/m/60da0cecfb512a78b8666b31631a636215d8ce73.camel@j-davis.com
    Reviewed-by: Peter Eisentraut

* Fix make maintainer-clean with queryjumblefuncs.*.c files in src/backend/nodes/  (Michael Paquier, 2023-03-22, 2 files changed, -5/+6)

    The files generated by gen_node_support.pl for query jumbling (queryjumblefuncs.funcs.c and queryjumblefuncs.switch.c) were not being removed on make maintainer-clean (they need to remain around after a simple "clean"). This commit makes the operation consistent with the copy, equal, out and read files.

    While at it, update a comment in the nodes' README where a reference to queryjumblefuncs.funcs.c was missing.

    Reported-by: Nathan Bossart
    Reviewed-by: Richard Guo, Daniel Gustafsson
    Discussion: https://postgr.es/m/ZBgAfTHcL6W7zGdW@paquier.xyz

* Fix incorrect comment in preptlist.c  (David Rowley, 2023-03-22, 1 file changed, -1/+1)

    Author: Etsuro Fujita
    Reviewed-by: Richard Guo, Tom Lane
    Discussion: https://postgr.es/m/CAPmGK15V8dcVxL9vcgVWPHV6pw1qzM42LzoUkQDB7-e+1onnJw@mail.gmail.com

* Correct Memoize's estimated cache hit ratio calculation  (David Rowley, 2023-03-22, 1 file changed, -4/+3)

    As demonstrated by David Johnston, the Memoize cache hit ratio calculation wasn't quite correct. This change only affects the estimated hit ratio when the number of entries to cache is estimated not to fit inside the cache.

    For example, if we expect 2000 distinct cache key values and only expect to be able to cache 1000 of those at once due to memory constraints, with an estimate of 10000 calls: if we could store all entries then the hit ratio should be 80%, to account for the first 2000 of the 10000 calls being cache misses because the value is not cached yet. If we can only store entries for 1000 of the 2000 distinct possible values at once, then the 80% should be reduced by half to make the final estimate 40%. Previously, the calculation would have produced an estimated hit ratio of 30%, which wasn't correct.

    Apply to master only so as not to destabilize plans in the back branches.

    Reported-by: David G. Johnston
    Discussion: https://postgr.es/m/CAKFQuwZEmcNk3YQo2Xj4EDUOdY6qakad31rOD1Vc4q1_s68-Ew@mail.gmail.com
    Discussion: https://postgr.es/m/CAApHDvrV44LwiF4W_qf_RpbGYWSgp1kF=cZr+kTRRaALUfmXqw@mail.gmail.com

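    The corrected arithmetic from the example above, written out as a small sketch (the values are the ones used in the example, not planner internals):

        double  calls = 10000,
                ndistinct = 2000,
                est_cache_entries = 1000;

        /* Every distinct key misses once, so 8000 of 10000 calls can hit. */
        double  hit_ratio = (calls - ndistinct) / calls;            /* 0.8 */

        /* Only half of the distinct keys fit at once, so halve the estimate. */
        if (est_cache_entries < ndistinct)
            hit_ratio *= est_cache_entries / ndistinct;             /* 0.8 * 0.5 = 0.4 */
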
* Add SHELL_ERROR and SHELL_EXIT_CODE magic variables to psql.  (Tom Lane, 2023-03-21, 6 files changed, -3/+88)

    These are set after a \! command or a backtick substitution. SHELL_ERROR is just "true" for error (nonzero exit status) or "false" for success, while SHELL_EXIT_CODE records the actual exit status following standard shell/system(3) conventions.

    Corey Huinker, reviewed by Maxim Orlov and myself

    Discussion: https://postgr.es/m/CADkLM=cWao2x2f+UDw15W1JkVFr_bsxfstw=NGea7r9m4j-7rQ@mail.gmail.com

* docs: use consistent markup for PostgreSQL  (Daniel Gustafsson, 2023-03-21, 1 file changed, -1/+1)

    "PostgreSQL" should use <productname> markup consistently, so that if we do apply styling to it, it will be consistently applied. Fix by renaming the one exception to the rule.

    Discussion: https://postgr.es/m/F2EF5217-27A3-4962-9AE5-2E6C2CB3D0FF@yesql.se

* Avoid using atooid for numerical comparisons which aren't Oids  (Daniel Gustafsson, 2023-03-21, 1 file changed, -1/+1)

    The check for the number of roles in the target cluster for an upgrade selects the existing roles and performs a COUNT(*) over the result. A value of one is the expected query result, indicating that only the install user is present in the new cluster. The result was converted with the function for converting a string containing an Oid into a numeric, which avoids potential overflow but makes the code less readable since it's not actually an Oid at all.

    Discussion: https://postgr.es/m/41AB5F1F-4389-4B25-9668-5C430375836C@yesql.se

* pg_waldump: Allow hexadecimal values for -t/--timeline option  (Peter Eisentraut, 2023-03-21, 2 files changed, -5/+35)

    This makes it easier to specify values taken directly from WAL file names.

    The option parsing is arranged in the style of option_parse_int() (but we need to parse unsigned int), to allow future refactoring in the same manner.

    Reviewed-by: Sébastien Lardière <sebastien@lardiere.net>
    Discussion: https://www.postgresql.org/message-id/flat/8fef346e-2541-76c3-d768-6536ae052993@lardiere.net

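    A standalone sketch of accepting a timeline ID in either decimal or hexadecimal, in the spirit of option_parse_int(); the helper name is illustrative and the committed option parsing may differ in detail:

        #include <errno.h>
        #include <stdint.h>
        #include <stdio.h>
        #include <stdlib.h>

        static int
        parse_timeline(const char *arg, uint32_t *tli)
        {
            char           *endptr;
            unsigned long   val;

            errno = 0;
            val = strtoul(arg, &endptr, 0);     /* base 0 accepts "2" and "0x2" */
            if (errno != 0 || endptr == arg || *endptr != '\0' ||
                val < 1 || val > UINT32_MAX)
            {
                fprintf(stderr, "invalid timeline specification: \"%s\"\n", arg);
                return 0;
            }
            *tli = (uint32_t) val;
            return 1;
        }
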
* Ignore dropped columns during apply of update/delete.  (Amit Kapila, 2023-03-21, 2 files changed, -2/+63)

    We failed to apply updates and deletes when REPLICA IDENTITY FULL was used for a table having dropped columns, because dropped columns were not ignored when comparing tuples from the publisher and subscriber during apply of updates and deletes.

    Author: Onder Kalaci, Shi yu
    Reviewed-by: Amit Kapila
    Discussion: https://postgr.es/m/CACawEhVQC9WoofunvXg12aXtbqKnEgWxoRx3+v8q32AWYsdpGg@mail.gmail.com

* Fix race in parallel hash join batch cleanup, take II.  (Thomas Munro, 2023-03-21, 3 files changed, -33/+74)

    With unlucky timing and parallel_leader_participation=off (not the default), PHJ could attempt to access per-batch shared state just as it was being freed. There was code intended to prevent that by checking for a cleared pointer, but it was racy. Fix, by introducing an extra barrier phase. The new phase PHJ_BUILD_RUNNING means that it's safe to access the per-batch state to find a batch to help with, and PHJ_BUILD_DONE means that it is too late. The last to detach will free the array of per-batch state as before, but now it will also atomically advance the phase, so that late attachers can avoid the hazard. This mirrors the way per-batch hash tables are freed (see phases PHJ_BATCH_PROBING and PHJ_BATCH_DONE).

    An earlier attempt to fix this (commit 3b8981b6, later reverted) missed one special case. When the inner side is empty (the "empty inner optimization"), the build barrier would only make it to PHJ_BUILD_HASHING_INNER phase before workers attempted to detach from the hashtable. In that case, fast-forward the build barrier to PHJ_BUILD_RUNNING before proceeding, so that our later assertions hold and we can still negotiate who is cleaning up.

    Revealed by build farm failures, where BarrierAttach() failed a sanity check assertion, because the memory had been clobbered by dsa_free(). In non-assert builds, the result could be a segmentation fault.

    Back-patch to all supported releases.

    Author: Thomas Munro <thomas.munro@gmail.com>
    Author: Melanie Plageman <melanieplageman@gmail.com>
    Reported-by: Michael Paquier <michael@paquier.xyz>
    Reported-by: David Geier <geidav.pg@gmail.com>
    Tested-by: David Geier <geidav.pg@gmail.com>
    Discussion: https://postgr.es/m/20200929061142.GA29096%40paquier.xyz

* Stabilize pg_stat_io writes test  (Andres Freund, 2023-03-20, 2 files changed, -9/+8)

    Counting writes only for io_context = 'normal' is unreliable, as backends using a buffer access strategy could flush all of the dirty buffers out from under the other backends and checkpointer. Change the test to count writes in any context. This achieves roughly the same coverage anyway.

    Reported-by: Justin Pryzby <pryzby@telsasoft.com>
    Author: Melanie Plageman <melanieplageman@gmail.com>
    Discussion: https://www.postgresql.org/message-id/ZAnWU8WbXEDjrfUE%40telsasoft.com

* meson: rename html_help target to htmlhelp  (Andres Freund, 2023-03-20, 1 file changed, -2/+2)

    Reported-by: Peter Eisentraut <peter.eisentraut@enterprisedb.com>
    Discussion: https://postgr.es/m/3fc3bb9b-f7f8-d442-35c1-ec82280c564a@enterprisedb.com

* Add @extschema:name@ and no_relocate options to extensions.  (Tom Lane, 2023-03-20, 12 files changed, -8/+272)

    @extschema:name@ extends the existing @extschema@ feature so that we can also insert the schema name of some required extension, thus making cross-extension references robust even if they are in different schemas.

    However, this has the same hazard as @extschema@: if the schema name is embedded literally in an installed object, rather than being looked up once during extension script execution, then it's no longer safe to relocate the other extension to another schema. To deal with that without restricting things unnecessarily, add a "no_relocate" option to extension control files. This allows an extension to specify that it cannot handle relocation of some of its required extensions, even if in themselves those extensions are relocatable. We detect "no_relocate" requests of dependent extensions during ALTER EXTENSION SET SCHEMA.

    Regina Obe, reviewed by Sandro Santilli and myself

    Discussion: https://postgr.es/m/003001d8f4ae$402282c0$c0678840$@pcorp.us

* doc/PDF: Add page breaks for <sect1> in contrib appendix  (Alvaro Herrera, 2023-03-20, 1 file changed, -0/+6)

    This better separates the content for each extension/module.

    Author: Karl Pinc <kop@karlpinc.com>
    Discussion: https://postgr.es/m/20230120142225.3d3be8a3@slate.karlpinc.com

* Ignore BRIN indexes when checking for HOT updates  (Tomas Vondra, 2023-03-20, 30 files changed, -76/+448)

    When determining whether an index update may be skipped by using HOT, we can ignore attributes indexed by block summarizing indexes without references to individual tuples that need to be cleaned up.

    A new type TU_UpdateIndexes provides a signal to the executor to determine which indexes to update - no indexes, all indexes, or only the summarizing indexes.

    This also removes the rd_indexattr list and replaces it with an rd_attrsvalid flag. The list was not used anywhere, and a simple flag is sufficient.

    This was originally committed as 5753d4ee32, but then got reverted by e3fcca0d0d because of correctness issues.

    Original patch by Josef Simanek, various fixes and improvements by Tomas Vondra and me.

    Authors: Matthias van de Meent, Josef Simanek, Tomas Vondra
    Reviewed-by: Tomas Vondra, Alvaro Herrera
    Discussion: https://postgr.es/m/05ebcb44-f383-86e3-4f31-0a97a55634cf@enterprisedb.com
    Discussion: https://postgr.es/m/CAFp7QwpMRGcDAQumN7onN9HjrJ3u4X3ZRXdGFT0K5G2JWvnbWg%40mail.gmail.com

* Fix netmask handling in inet_minmax_multi_ops  (Tomas Vondra, 2023-03-20, 3 files changed, -2/+15)

    When calculating distance in brin_minmax_multi_distance_inet(), the netmask was applied incorrectly. This results in (seemingly) incorrect ordering of values, triggering an assert.

    For builds without asserts this is mostly harmless - we may merge other ranges, possibly resulting in a slightly less efficient index. But it's still correct and the greedy algorithm doesn't guarantee optimality anyway.

    Backpatch to 14, where minmax-multi indexes were introduced.

    Reported by Dmitry Dolgov, investigation and fix by me.

    Reported-by: Dmitry Dolgov
    Backpatch-through: 14
    Discussion: https://postgr.es/m/17774-c6f3e36dd4471e67@postgresql.org

* doc: Additional information about timeline ID hexadecimal format  (Peter Eisentraut, 2023-03-20, 2 files changed, -1/+18)

    Timeline IDs are sometimes presented to the user in hexadecimal format (for example in WAL file names). Add a few bits of information to clarify this.

    Author: Sébastien Lardière <sebastien@lardiere.net>
    Discussion: https://www.postgresql.org/message-id/flat/8fef346e-2541-76c3-d768-6536ae052993@lardiere.net

* Have the planner account for the Memoize cache key memory  (David Rowley, 2023-03-20, 1 file changed, -46/+58)

    The Memoize executor node stores the cache key values along with the tuple(s) found in the outer node that match each key value. However, when the planner tried to estimate how many entries could be stored in the cache, it didn't take into account that the cache key must also be stored. In many cases this won't make a large difference, as the key is likely small in comparison to the tuple(s) being stored; however, it's not impossible to craft cases where the key could take more memory than the tuple(s) stored for it.

    Here we adjust the planner so it takes into account the estimated amount of memory to store the cache key. Effectively, this change will reduce the estimated cache hit ratio when it's thought that not all items will fit in the cache, thus Memoize will become more expensive in such cases.

    The executor already takes into account the memory consumed by the cache key, so here we only need to adjust the planner.

    Discussion: https://postgr.es/m/CAApHDvqGErGuyBfQvBQrTCHDbzLTqoiW=_G9sOzeFxWEc_7auA@mail.gmail.com

* Fix memory leak in Memoize cache key evaluation  (David Rowley, 2023-03-20, 1 file changed, -1/+8)

    When probing the Memoize cache to check if the current cache key values exist in the cache, we perform an evaluation of the expressions making up the cache key before probing the hash table for those values. This operation could leak memory as it is possible that the cache key is an expression which requires allocation of memory, as was the case in bug 17844.

    Here we fix this by correctly switching to the per tuple context before evaluating the cache expressions so that the memory is freed next time the per tuple context is reset.

    Bug: 17844
    Reported-by: Alexey Ermakov
    Discussion: https://postgr.es/m/17844-d2f6f9e75a622bed@postgresql.org
    Backpatch-through: 14, where Memoize was introduced

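    A sketch of the pattern described above: switch to the per-tuple memory context before evaluating the cache key expressions so the allocations go away at the next context reset (the surrounding executor state and variable names are illustrative):

        MemoryContext oldcontext;

        oldcontext = MemoryContextSwitchTo(econtext->ecxt_per_tuple_memory);
        keyval = ExecEvalExpr(keyexpr, econtext, &isnull);
        MemoryContextSwitchTo(oldcontext);
        /* memory allocated by the expression is freed at the next ResetExprContext() */
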
* Avoid copying undefined data in _readA_Const().  (Tom Lane, 2023-03-19, 2 files changed, -6/+27)

    nodeRead() will have created a Node struct that's only allocated big enough for the specific node type, so copying sizeof(union ValUnion) can be copying too much. This provokes valgrind complaints, and with very bad luck could perhaps result in SIGSEGV.

    While at it, tidy up _equalA_Const to avoid duplicate checks of isnull.

    Per report from Alexander Lakhin. This code is new as of a6bc33019, so no need to back-patch.

    Discussion: https://postgr.es/m/4995256b-cc65-170e-0b22-60ad2cd535f1@gmail.com

* Doc: fix documentation example for bytea hex output format.  (Tom Lane, 2023-03-18, 1 file changed, -1/+6)

    Per report from rsindlin

    Discussion: https://postgr.es/m/167907221210.1803488.5939223864945604536@wrigleys.postgresql.org

* Add functions to do timestamptz arithmetic in a non-default timezone.  (Tom Lane, 2023-03-18, 6 files changed, -44/+291)

    Add versions of timestamptz + interval, timestamptz - interval, and generate_series(timestamptz, ...) in which a timezone can be specified explicitly instead of defaulting to the TimeZone GUC setting.

    The new functions for the first two are named date_add and date_subtract. This might seem too generic, but we could use overloading to add additional variants if that seems useful.

    Along the way, improve the docs' pretty inadequate explanation of how timestamptz +- interval works.

    Przemysław Sztoch and Gurjeet Singh; cosmetic changes and most of the docs work by me

    Discussion: https://postgr.es/m/01a84551-48dd-1359-bf7e-f6b0203a6bd0@sztoch.pl

* Add files related to query jumbling in src/include/nodes/ for meson  (Michael Paquier, 2023-03-18, 1 file changed, -0/+2)

    This caused ninja clean to not remove the two files generated by gen_node_support.pl for query jumbling, namely queryjumblefuncs.funcs.c and queryjumblefuncs.switch.c.

    Reported-by: Pavel Stehule
    Discussion: https://postgr.es/m/CAFj8pRBFiWVRyGYSPziyFuXJbHirNmfWwzbfTyCf8YOdiwK74w@mail.gmail.com

* Refactor datetime functions' timezone lookup code to reduce duplication.  (Tom Lane, 2023-03-17, 4 files changed, -180/+142)

    We already had five copies of essentially the same logic, and an upcoming patch introduces yet another use-case. That's past my threshold of pain, so introduce a common subroutine. There's not that much net code savings, but the chance of typos should go down.

    Inspired by a patch from Przemysław Sztoch, but different in detail.

    Discussion: https://postgr.es/m/01a84551-48dd-1359-bf7e-f6b0203a6bd0@sztoch.pl

* Fix typo  (Peter Eisentraut, 2023-03-17, 1 file changed, -1/+1)

    Introduced in de4d456b40.

    Reported-by: Erik Rijkers <er@xs4all.nl>

* Fix t_isspace(), etc., when datlocprovider=i and datctype=C.  (Jeff Davis, 2023-03-17, 8 files changed, -42/+16)

    Check whether the datctype is C to determine whether t_isspace() and related functions use isspace() or iswspace().

    Previously, t_isspace() checked whether the database default collation was C, which is incorrect when the default collation uses the ICU provider.

    Discussion: https://postgr.es/m/79e4354d9eccfdb00483146a6b9f6295202e7890.camel@j-davis.com
    Reviewed-by: Peter Eisentraut
    Backpatch-through: 15

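    An illustrative sketch of the rule, assuming a flag that records whether datctype is C; the function and flag names are not the actual ones:

        #include <ctype.h>
        #include <stdbool.h>
        #include <wctype.h>

        static bool
        char_is_space(unsigned char c, wint_t wc, bool database_ctype_is_c)
        {
            /*
             * With a C/POSIX ctype, use the single-byte classifier regardless
             * of which provider backs the database's default collation.
             */
            if (database_ctype_is_c)
                return isspace(c) != 0;
            return iswspace(wc) != 0;
        }
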
* Simplify and speed up pg_dump's creation of parent-table links.  (Tom Lane, 2023-03-17, 3 files changed, -80/+56)

    Instead of trying to optimize this by skipping creation of the links for tables we don't plan to dump, just create them all in bulk with a single scan over the pg_inherits data. The previous approach was more or less O(N^2) in the number of pg_inherits entries, not to mention being way too complicated.

    Also, don't create useless TableAttachInfo objects. It's silly to create a TableAttachInfo object that we're not going to dump, when we know perfectly well at creation time that it won't be dumped.

    Patch by me; thanks to Julien Rouhaud for review.

    Discussion: https://postgr.es/m/1376149.1675268279@sss.pgh.pa.us

* Fix pg_dump for hash partitioning on enum columns.  (Tom Lane, 2023-03-17, 8 files changed, -52/+288)

    Hash partitioning on an enum is problematic because the hash codes are derived from the OIDs assigned to the enum values, which will almost certainly be different after a dump-and-reload than they were before. This means that some rows probably end up in different partitions than before, causing restore to fail because of partition constraint violations. (pg_upgrade dodges this problem by using hacks to force the enum values to keep the same OIDs, but that's not possible nor desirable for pg_dump.)

    Users can work around that by specifying --load-via-partition-root, but since that's a dump-time not restore-time decision, one might find out the need for it far too late. Instead, teach pg_dump to apply that option automatically when dealing with a partitioned table that has hash-on-enum partitioning.

    Also deal with a pre-existing issue for --load-via-partition-root mode: in a parallel restore, we try to TRUNCATE target tables just before loading them, in order to enable some backend optimizations. This is bad when using --load-via-partition-root because (a) we're likely to suffer deadlocks from restore jobs trying to restore rows into other partitions than they came from, and (b) if we miss getting a deadlock we might still lose data due to a TRUNCATE removing rows from some already-completed restore job.

    The fix for this is conceptually simple: just don't TRUNCATE if we're dealing with a --load-via-partition-root case. The tricky bit is for pg_restore to identify those cases. In dumps using COPY commands we can inspect each COPY command to see if it targets the nominal target table or some ancestor. However, in dumps using INSERT commands it's pretty impractical to examine the INSERTs in advance. To provide a solution for that going forward, modify pg_dump to mark TABLE DATA items that are using --load-via-partition-root with a comment. (This change also responds to a complaint from Robert Haas that the dump output for --load-via-partition-root is pretty confusing.) pg_restore checks for the special comment as well as checking the COPY command if present.

    This will fail to identify the combination of --load-via-partition-root and --inserts in pre-existing dump files, but that should be a pretty rare case in the field. If it does happen you will probably get a deadlock failure that you can work around by not using parallel restore, which is the same as before this bug fix.

    Having done this, there seems no remaining reason for the alarmism in the pg_dump man page about combining --load-via-partition-root with parallel restore, so remove that warning.

    Patch by me; thanks to Julien Rouhaud for review. Back-patch to v11 where hash partitioning was introduced.

    Discussion: https://postgr.es/m/1376149.1675268279@sss.pgh.pa.us

* Improve several permission-related error messages.  (Peter Eisentraut, 2023-03-17, 18 files changed, -109/+282)

    Mainly move some detail from errmsg to errdetail, remove explicit mention of superuser where appropriate, since that is implied in most permission checks, and make messages more uniform.

    Author: Nathan Bossart <nathandbossart@gmail.com>
    Discussion: https://www.postgresql.org/message-id/20230316234701.GA903298@nathanxps13