path: root/src/tools/pgindent
* Add back SQLValueFunction for SQL keywords (Michael Paquier, 2023-05-17; 1 file changed, +2/-0)

This is equivalent to a revert of f193883 and fb32748, with the addition that the declaration of the SQLValueFunction node needs to gain a couple of node_attr for query jumbling.

The performance impact of removing the function call inlining is proving to be too large for some workloads where these are used. A worst-case test involving only simple SELECT queries with a SQL keyword leads to a reduction of 10% in TPS via pgbench and prepared queries on a high-end machine. None of the tests I ran back for this set of changes saw such a huge gap, but Alexander Lakhin and Andres Freund have found that this can be noticeable.

Keeping the older performance would mean doing more inlining in the executor when using COERCE_SQL_SYNTAX for a function expression, similarly to what SQLValueFunction does. This requires more redesign work and there is little time until 16beta1 is released, so for now reverting the change is the best way forward, bringing back the previous performance.

Bump catalog version.

Reported-by: Alexander Lakhin
Discussion: https://postgr.es/m/b32bed1b-0746-9b20-1472-4bdc9ca66d52@gmail.com
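As a rough illustration of the worst-case benchmark described above — prepared execution of a keyword-only SELECT — something along these lines (the script name and duration are placeholders):

    # kw.sql contains a single SQL-keyword query, e.g.:
    #   SELECT current_date;
    pgbench -n -M prepared -f kw.sql -T 30 postgres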
* Doc: update pgindent/README some more. (Tom Lane, 2023-04-22; 1 file changed, +2/-2)

In commit 2e6ba1315, I missed that there was a redundant second reference to pg_bsd_indent's old repo.
* Improve indentation of multiline initialization expressions. (Tom Lane, 2023-04-08; 1 file changed, +1/-1)

If a variable has an initialization expression that wraps onto the next line(s), pg_bsd_indent will now indent the continuation lines one stop, instead of aligning them flush with the variable declaration.

We've been holding off applying this until the last v16 CF finished, but now it's time.

Thomas Munro and Tom Lane
Discussion: https://postgr.es/m/20230120013137.7ky7nl4e4zjorrfa@awork3.anarazel.de
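A minimal sketch of the rule change, with made-up identifiers:

    /* before: continuation flush with the declaration */
    int64		total_count = first_term + second_term +
    third_term;

    /* after: continuation indented one stop */
    int64		total_count = first_term + second_term +
    	third_term;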
* Replace replication slot's invalidated_at LSN with an enum (Andres Freund, 2023-04-07; 1 file changed, +1/-0)

This is mainly useful because the upcoming logical-decoding-on-standby feature adds further reasons for invalidating slots, and we don't want to end up with multiple invalidated_* fields, or check different attributes.

Eventually we should consider not resetting restart_lsn when invalidating a slot due to max_slot_wal_keep_size. But that's a user visible change, so left for later.

Increases SLOT_VERSION, due to the changed field (with a different alignment, no less).

Reviewed-by: "Drouvot, Bertrand" <bertranddrouvot.pg@gmail.com>
Reviewed-by: Alvaro Herrera <alvherre@alvh.no-ip.org>
Reviewed-by: Melanie Plageman <melanieplageman@gmail.com>
Discussion: https://postgr.es/m/20230407075009.igg7be27ha2htkbt@awork3.anarazel.de
* Introduce PG_IO_ALIGN_SIZE and align all I/O buffers. (Thomas Munro, 2023-04-08; 1 file changed, +1/-0)

In order to have the option to use O_DIRECT/FILE_FLAG_NO_BUFFERING in a later commit, we need the addresses of user space buffers to be well aligned. The exact requirements vary by OS and file system (typically sectors and/or memory pages). The address alignment size is set to 4096, which is enough for currently known systems: it matches modern sectors and common memory page size. There is no standard governing O_DIRECT's requirements so we might eventually have to reconsider this with more information from the field or future systems. Aligning I/O buffers on memory pages is also known to improve regular buffered I/O performance.

Three classes of I/O buffers for regular data pages are adjusted: (1) Heap buffers are now allocated with the new palloc_aligned() or MemoryContextAllocAligned() functions introduced by commit 439f6175. (2) Stack buffers now use a new struct PGIOAlignedBlock to respect PG_IO_ALIGN_SIZE, if possible with this compiler. (3) The buffer pool is also aligned in shared memory.

WAL buffers were already aligned on XLOG_BLCKSZ. It's possible for XLOG_BLCKSZ to be configured smaller than PG_IO_ALIGN_SIZE and thus for O_DIRECT WAL writes to fail to be well aligned, but that's a pre-existing condition and will be addressed by a later commit.

BufFiles are not yet addressed (there's no current plan to use O_DIRECT for those, but they could potentially get some incidental speedup even in plain buffered I/O operations through better alignment).

If we can't align stack objects suitably using the compiler extensions we know about, we disable the use of O_DIRECT by setting PG_O_DIRECT to 0. This avoids the need to consider systems that have O_DIRECT but can't align stack objects the way we want; such systems could in theory be supported with more work but we don't currently know of any such machines, so it's easier to pretend there is no O_DIRECT support instead. That's an existing and tested class of system.

Add assertions that all buffers passed into smgrread(), smgrwrite() and smgrextend() are correctly aligned, unless PG_O_DIRECT is 0 (= stack alignment tricks may be unavailable) or the block size has been set too small to allow arrays of buffers to be all aligned.

Author: Thomas Munro <thomas.munro@gmail.com>
Author: Andres Freund <andres@anarazel.de>
Reviewed-by: Justin Pryzby <pryzby@telsasoft.com>
Discussion: https://postgr.es/m/CA+hUKGK1X532hYqJ_MzFWt0n1zt8trz980D79WbjwnT-yYLZpg@mail.gmail.com
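A hedged sketch of buffer classes (1) and (2) above; the variable names are illustrative:

    /* (1) heap buffer, aligned for direct I/O */
    char	   *buf = palloc_aligned(BLCKSZ, PG_IO_ALIGN_SIZE, 0);

    /* (2) stack buffer: the PGIOAlignedBlock type carries the alignment */
    PGIOAlignedBlock stackbuf;

    memset(stackbuf.data, 0, BLCKSZ);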
* Track IO times in pg_stat_io (Andres Freund, 2023-04-07; 1 file changed, +1/-0)

a9c70b46dbe and 8aaa04b32 added counting of IO operations to a new view, pg_stat_io. Now, add IO timing for reads, writes, extends, and fsyncs to pg_stat_io as well.

This combines the tracking for pgBufferUsage with the tracking for pg_stat_io into a new function pgstat_count_io_op_time(). This should make it a bit easier to avoid the somewhat costly instr_time conversion done for pgBufferUsage.

Author: Melanie Plageman <melanieplageman@gmail.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Bertrand Drouvot <bertranddrouvot.pg@gmail.com>
Discussion: https://postgr.es/m/flat/CAAKRu_ay5iKmnbXZ3DsauViF3eMxu4m1oNnJXqV_HyqYeg55Ww%40mail.gmail.com
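A hedged example of reading the new columns (the times are cumulative and, I believe, require track_io_timing to be enabled):

    SELECT backend_type, object, context,
           reads, read_time, writes, write_time
    FROM pg_stat_io
    WHERE writes > 0
    ORDER BY write_time DESC;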
* bufmgr: Introduce infrastructure for faster relation extension (Andres Freund, 2023-04-05; 1 file changed, +2/-0)

The primary bottlenecks for relation extension are:

1) The extension lock is held while acquiring a victim buffer for the new page. Acquiring a victim buffer can require writing out the old page contents including possibly needing to flush WAL.

2) When extending via ReadBuffer() et al, we write a zero page during the extension, and then later write out the actual page contents. This can nearly double the write rate.

3) The existing bulk relation extension infrastructure in hio.c just amortized the cost of acquiring the relation extension lock, but none of the other costs.

Unfortunately 1) cannot currently be addressed in a central manner as the callers to ReadBuffer() need to acquire the extension lock. To address that, this commit moves the responsibility for acquiring the extension lock into bufmgr.c functions. That allows the relation extension lock to be held for just the required time. This will also allow us to improve relation extension further, without changing callers.

The reason we write all-zeroes pages during relation extension is that we hope to get ENOSPC errors earlier that way (largely works, except for CoW filesystems). It is easier to handle out-of-space errors gracefully if the page doesn't yet contain actual tuples. This commit addresses 2), by using the recently introduced smgrzeroextend(), which extends the relation, without dirtying the kernel page cache for all the extended pages.

To address 3), this commit introduces a function to extend a relation by multiple blocks at a time. There are three new exposed functions: ExtendBufferedRel() for extending the relation by a single block, ExtendBufferedRelBy() to extend a relation by multiple blocks at once, and ExtendBufferedRelTo() for extending a relation up to a certain size.

To avoid duplicating code between ReadBuffer(P_NEW) and the new functions, ReadBuffer(P_NEW) now implements relation extension with ExtendBufferedRel(), using a flag to tell ExtendBufferedRel() that the relation lock is already held.

Note that this commit does not yet lead to a meaningful performance or scalability improvement - for that, uses of ReadBuffer(P_NEW) will need to be converted to ExtendBuffered*(), which will be done in subsequent commits.

Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Reviewed-by: Melanie Plageman <melanieplageman@gmail.com>
Discussion: https://postgr.es/m/20221029025420.eplyow6k7tgu6he3@awork3.anarazel.de
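A hedged sketch of the new API's shape; the flag choice and extension count are illustrative:

    /* extend by one block; EB_LOCK_FIRST returns the new buffer locked */
    Buffer		buf = ExtendBufferedRel(BMR_REL(rel), MAIN_FORKNUM,
    									NULL, EB_LOCK_FIRST);

    /* bulk-extend by up to 8 blocks in one call */
    Buffer		bufs[8];
    uint32		extended_by = 0;
    BlockNumber first_block = ExtendBufferedRelBy(BMR_REL(rel), MAIN_FORKNUM,
    											  NULL, 0, 8, bufs,
    											  &extended_by);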
* pg_dump: Add support for zstd compression (Tomas Vondra, 2023-04-05; 1 file changed, +1/-0)

Allow pg_dump to use zstd compression, in addition to gzip/lz4. The bulk of the new compression method is implemented in compress_zstd.{c,h}, covering the pg_dump compression APIs. The rest of the patch adds tests and makes various places aware of the new compression method.

The zstd library (which this patch relies on) supports multithreaded compression since version 1.5. We however disallow that feature for now, as it might interfere with parallel backups on platforms that rely on threads (e.g. Windows). This can be improved / relaxed in the future.

This also fixes a minor issue in InitDiscoverCompressFileHandle(), which was not updated to check if the file already has the .lz4 extension.

Adding zstd compression was originally proposed in 2020 (see the second thread), but then was reworked to use the new compression API introduced in e9960732a9. I've considered both threads when compiling the list of reviewers.

Author: Justin Pryzby
Reviewed-by: Tomas Vondra, Jacob Champion, Andreas Karlsson
Discussion: https://postgr.es/m/20230224191840.GD1653@telsasoft.com
Discussion: https://postgr.es/m/20201221194924.GI30237@telsasoft.com
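A hedged usage example (database and file names are placeholders):

    # custom-format dump compressed with zstd at level 9
    pg_dump -Fc -Z zstd:9 -f mydb.dump mydb

    # directory format accepts the same compression specification
    pg_dump -Fd -Z zstd -f mydb.dir mydb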
* Revert 11470f544e (Alexander Korotkov, 2023-04-03; 1 file changed, +0/-2)

Discussion: https://postgr.es/m/20230323003003.plgaxjqahjgkuxrk%40awork3.anarazel.de
* Doc: update pgindent/README. (Tom Lane, 2023-04-02; 1 file changed, +3/-3)

I missed updating this when we pulled pg_bsd_indent into the tree.
* pg_stat_wal: Accumulate time as instr_time instead of microseconds (Andres Freund, 2023-03-30; 1 file changed, +1/-0)

In instr_time.h it is stated that:

    * When summing multiple measurements, it's recommended to leave the
    * running sum in instr_time form (ie, use INSTR_TIME_ADD or
    * INSTR_TIME_ACCUM_DIFF) and convert to a result format only at the end.

The reason for that is that converting to microseconds is not cheap, and can lose precision. Therefore this commit changes 'PendingWalStats' to use 'instr_time' instead of 'PgStat_Counter' while accumulating 'wal_write_time' and 'wal_sync_time'.

Author: Nazir Bilal Yavuz <byavuz81@gmail.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Reviewed-by: Melanie Plageman <melanieplageman@gmail.com>
Discussion: https://postgr.es/m/1feedb83-7aa9-cb4b-5086-598349d3f555@gmail.com
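A hedged sketch of the recommended pattern:

    instr_time	start, end, total;

    INSTR_TIME_SET_ZERO(total);

    INSTR_TIME_SET_CURRENT(start);
    /* ... the timed operation, e.g. a WAL write ... */
    INSTR_TIME_SET_CURRENT(end);

    /* total += (end - start), staying in instr_time form */
    INSTR_TIME_ACCUM_DIFF(total, end, start);

    /* convert only once, when reporting */
    elog(DEBUG1, "spent %lld us",
    	 (long long) INSTR_TIME_GET_MICROSEC(total));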
* Support connection load balancing in libpq (Daniel Gustafsson, 2023-03-29; 1 file changed, +1/-0)

This adds support for load balancing connections with libpq using a connection parameter: load_balance_hosts=<string>. When setting the param to random, hosts and addresses will be connected to in random order. This then results in load balancing across these addresses and hosts when multiple clients or frequent connection setups are used.

The randomization employed performs two levels of shuffling:
1. The given hosts are randomly shuffled, before resolving them one-by-one.
2. Once a host's addresses get resolved, the returned addresses are shuffled, before trying to connect to them one-by-one.

Author: Jelte Fennema <postgres@jeltef.nl>
Reviewed-by: Aleksander Alekseev <aleksander@timescale.com>
Reviewed-by: Michael Banck <mbanck@gmx.net>
Reviewed-by: Andrey Borodin <amborodin86@gmail.com>
Discussion: https://postgr.es/m/PR3PR83MB04768E2FF04818EEB2179949F7A69@PR3PR83MB0476.EURPRD83.prod.outlook.com
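A hedged example; the host names are placeholders:

    psql "host=pg1.example.com,pg2.example.com,pg3.example.com port=5432 dbname=app load_balance_hosts=random"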
* Copy and store addrinfo in libpq-owned private memory (Daniel Gustafsson, 2023-03-29; 1 file changed, +1/-0)

This refactors libpq to copy addrinfos returned by getaddrinfo to memory owned by libpq such that future improvements can alter for example the order of entries. As a nice side effect of this refactor the mechanism for iteration over addresses in PQconnectPoll is now identical to its iteration over hosts.

Author: Jelte Fennema <postgres@jeltef.nl>
Reviewed-by: Aleksander Alekseev <aleksander@timescale.com>
Reviewed-by: Michael Banck <mbanck@gmx.net>
Reviewed-by: Andrey Borodin <amborodin86@gmail.com>
Discussion: https://postgr.es/m/PR3PR83MB04768E2FF04818EEB2179949F7A69@PR3PR83MB0476.EURPRD83.prod.outlook.com
* SQL/JSON: add standard JSON constructor functions (Alvaro Herrera, 2023-03-29; 1 file changed, +1/-0)

This commit introduces the SQL/JSON standard-conforming constructors for JSON types:

    JSON_ARRAY()
    JSON_ARRAYAGG()
    JSON_OBJECT()
    JSON_OBJECTAGG()

Most of the functionality was already present in PostgreSQL-specific functions, but these include some new functionality such as the ability to skip or include NULL values, and to allow duplicate keys or throw an error when they are found, as well as the standard-specified syntax to specify output type and format.

Author: Nikita Glukhov <n.gluhov@postgrespro.ru>
Author: Teodor Sigaev <teodor@sigaev.ru>
Author: Oleg Bartunov <obartunov@gmail.com>
Author: Alexander Korotkov <aekorotkov@gmail.com>
Author: Amit Langote <amitlangote09@gmail.com>

Reviewers have included (in no particular order) Andres Freund, Alexander Korotkov, Pavel Stehule, Andrew Alsup, Erik Rijkers, Zihong Yu, Himanshu Upadhyaya, Daniel Gustafsson, Justin Pryzby.

Discussion: https://postgr.es/m/CAF4Au4w2x-5LTnN_bxky-mq4=WOqsGsxSpENCzHRAzSnEd8+WQ@mail.gmail.com
Discussion: https://postgr.es/m/cd0bb935-0158-78a7-08b5-904886deac4b@postgrespro.ru
Discussion: https://postgr.es/m/20220616233130.rparivafipt6doj3@alap3.anarazel.de
Discussion: https://postgr.es/m/abd9b83b-aa66-f230-3d6d-734817f0995d%40postgresql.org
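A hedged sample of the new syntax, showing NULL handling and the RETURNING clause:

    SELECT JSON_OBJECT('a': 1, 'b': NULL ABSENT ON NULL RETURNING jsonb);
    SELECT JSON_ARRAY(1, 2, 3 RETURNING jsonb);
    SELECT JSON_OBJECTAGG(k: v) FROM (VALUES ('x', 10), ('y', 20)) t(k, v);
    SELECT JSON_ARRAYAGG(n ORDER BY n) FROM generate_series(1, 3) n;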
* Avoid syncing data twice for the 'publish_via_partition_root' option. (Amit Kapila, 2023-03-29; 1 file changed, +1/-0)

When there are multiple publications for a subscription and one of those publishes via the parent table by using publish_via_partition_root and the other one directly publishes the child table, we end up copying the same data twice during initial synchronization. The reason for this was that we get both the parent and child tables from the publisher and try to copy the data for both of them.

This patch extends the function pg_get_publication_tables() to take a publication list as its input parameter. This allows us to exclude a partition table whose ancestor is published by the same publication list.

This problem does exist in back-branches but we decided to fix it there in a separate commit, if required. The fix for back-branches requires quite complicated changes to fetch the required table information from the publisher as we can't update the function pg_get_publication_tables() in back-branches. We are not sure whether we want to deviate and complicate the code in back-branches for this problem as there are no field reports yet.

Author: Wang wei
Reviewed-by: Peter Smith, Jacob Champion, Kuroda Hayato, Vignesh C, Osumi Takamichi, Amit Kapila
Discussion: https://postgr.es/m/OS0PR01MB57167F45D481F78CDC5986F794B99@OS0PR01MB5716.jpnprd01.prod.outlook.com
* Allow locking updated tuples in tuple_update() and tuple_delete() (Alexander Korotkov, 2023-03-23; 1 file changed, +2/-0)

Currently, in read committed transaction isolation mode (default), we have the following sequence of actions when tuple_update()/tuple_delete() finds the tuple updated by a concurrent transaction.

1. Attempt to update/delete tuple with tuple_update()/tuple_delete(), which returns TM_Updated.
2. Lock tuple with tuple_lock().
3. Re-evaluate plan qual (recheck if we still need to update/delete and calculate the new tuple for update).
4. Second attempt to update/delete tuple with tuple_update()/tuple_delete(). This attempt should be successful, since the tuple was previously locked.

This patch eliminates step 2 by taking the lock during the first tuple_update()/tuple_delete() call. The heap table access method saves some effort by checking the updated tuple once instead of twice. Future undo-based table access methods, which will start from the latest row version, can immediately place a lock there.

The code in nodeModifyTable.c is simplified by removing the nested switch/case.

Discussion: https://postgr.es/m/CAPpHfdua-YFw3XTprfutzGp28xXLigFtzNbuFY8yPhqeq6X5kg%40mail.gmail.com
Reviewed-by: Aleksander Alekseev, Pavel Borisov, Vignesh C, Mason Sharp
Reviewed-by: Andres Freund, Chris Travers
* Remove PgStat_BackendFunctionEntry (Michael Paquier, 2023-03-16; 1 file changed, +0/-1)

This structure included only PgStat_FunctionCounts, and removing it facilitates some upcoming refactoring for pgstatfuncs.c to use more macros rather than mostly-duplicated functions.

Author: Bertrand Drouvot
Reviewed-by: Nathan Bossart
Discussion: https://postgr.es/m/11d531fe-52fc-c6ea-7e8e-62f1b6ec626e@gmail.com
* Add LZ4 compression to pg_dump (Tomas Vondra, 2023-02-23; 1 file changed, +2/-0)

Expand pg_dump's compression streaming and file APIs to support the lz4 algorithm. The newly added compress_lz4.{c,h} files cover all the functionality of the aforementioned APIs. Minor changes were necessary in various pg_backup_* files, where code for the 'lz4' file suffix has been added, as well as pg_dump's compression option parsing.

Author: Georgios Kokolatos
Reviewed-by: Michael Paquier, Rachel Heaton, Justin Pryzby, Shi Yu, Tomas Vondra
Discussion: https://postgr.es/m/faUNEOpts9vunEaLnmxmG-DldLSg_ql137OC3JYDmgrOMHm1RvvWY2IdBkv_CRxm5spCCb_OmKNk2T03TMm0fBEWveFF9wA1WizPuAgB7Ss%3D%40protonmail.com
* Introduce a generic pg_dump compression API (Tomas Vondra, 2023-02-23; 1 file changed, +2/-0)

Switch pg_dump to use the Compression API, implemented by bf9aa490db.

The CompressFileHandle replaces the cfp* family of functions with a struct of callbacks for accessing (compressed) files. This allows adding new compression methods simply by introducing a new struct instance with appropriate implementation of the callbacks.

Archives compressed using custom compression methods store an identifier of the compression algorithm in their header instead of the compression level. The header version is bumped.

Author: Georgios Kokolatos
Reviewed-by: Michael Paquier, Rachel Heaton, Justin Pryzby, Tomas Vondra
Discussion: https://postgr.es/m/faUNEOpts9vunEaLnmxmG-DldLSg_ql137OC3JYDmgrOMHm1RvvWY2IdBkv_CRxm5spCCb_OmKNk2T03TMm0fBEWveFF9wA1WizPuAgB7Ss%3D%40protonmail.com
* Redesign archive modules (Michael Paquier, 2023-02-17; 1 file changed, +1/-0)

A new callback named startup_cb, called shortly after a module is loaded, is added. This makes possible the initialization of any additional state data required by a module. This initial state data can be saved in an ArchiveModuleState, which is now passed down to all the callbacks that can be defined in a module. With this design, it is possible to have a per-module state, aimed at opening the door to the support of more than one archive module.

The initialization of the callbacks is changed so that _PG_archive_module_init() no longer takes as input an ArchiveModuleCallbacks that a module has to fill in with callback definitions. Instead, a module now needs to return a const ArchiveModuleCallbacks.

All the structure and callback definitions of archive modules are moved into their own header, named archive_module.h, from pgarch.h. Command-based archiving follows the same line, with a new set of files named shell_archive.{c,h}.

There are a few more items that are under discussion to improve the design of archive modules, like the fact that basic_archive calls sigsetjmp() by itself to define its own error handling flow. These will be adjusted later; the changes done here already cover a good portion of what has been discussed.

Any modules created for v15 will need to be adjusted to this new design.

Author: Nathan Bossart
Reviewed-by: Andres Freund
Discussion: https://postgr.es/m/20230130194810.6fztfgbn32e7qarj@awork3.anarazel.de
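A minimal sketch of the new module shape, assuming the callback signatures from archive_module.h; everything except _PG_archive_module_init() is illustrative:

    #include "postgres.h"
    #include "archive/archive_module.h"

    static void
    my_startup(ArchiveModuleState *state)
    {
    	/* allocate per-module state shortly after load */
    	state->private_data = palloc0(sizeof(int));
    }

    static bool
    my_archive_file(ArchiveModuleState *state, const char *file, const char *path)
    {
    	/* copy "path" somewhere durable; return true on success */
    	return true;
    }

    static const ArchiveModuleCallbacks my_callbacks = {
    	.startup_cb = my_startup,
    	.archive_file_cb = my_archive_file,
    };

    const ArchiveModuleCallbacks *
    _PG_archive_module_init(void)
    {
    	/* the module now returns a const callback struct */
    	return &my_callbacks;
    }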
* pgindent: mention directory arguments in help text (Andrew Dunstan, 2023-02-16; 1 file changed, +1/-1)

Shi Yu
* Remove obsolete pgindent options --code-base and --build (Andrew Dunstan, 2023-02-13; 2 files changed, +14/-99)

Now that we have the sources for pg_bsd_indent in our code base these are redundant. It is now required to provide a list of files or directories to pgindent, either by using --commit or on the command line. The equivalent of previously running pgindent with no parameters is now `pgindent .`

Some extra checks are also added. Duplicate files in the file list are skipped, and there is a warning if no files are specified. If the --commit option is used, the script now chdir's to the source root, as git always reports files relative to that. (Fixes a gripe from Justin Pryzby)

Reviewed by Tom Lane
Discussion: https://postgr.es/m/842819.1676219054@sss.pgh.pa.us
* Integrate pg_bsd_indent into our build/test infrastructure. (Tom Lane, 2023-02-12; 1 file changed, +4/-0)

Update the Makefile and build directions for in-tree build, and add Meson build infrastructure. Also convert the ad-hoc test target into a TAP test.

Currently, the Make build system will not build pg_bsd_indent by default, while the Meson system will. Both will test it during "make check-world" or "ninja test". Neither will install it automatically. (We might change some of these decisions later.)

Also fix a few portability nits noted during early testing.

Also, exclude pg_bsd_indent from pgindent's purview; at least for now, we'll leave it formatted similarly to the FreeBSD original.

Tom Lane and Andres Freund
Discussion: https://postgr.es/m/3935719.1675967430@sss.pgh.pa.us
Discussion: https://postgr.es/m/20200812223409.6di3y2qsnvynao7a@alap3.anarazel.de
* pgindent: filter files for the --commit option (Andrew Dunstan, 2023-02-12; 1 file changed, +13/-1)

Per gripe from Shi Yu, solution from Jelte Fennema.

Also add a check that the file exists, and issue a warning if it doesn't. As an efficiency measure, avoid processing any file more than once.

Discussion: https://postgr.es/m/TYAPR01MB6315B86619944D4A6B56842DFDDE9@TYAPR01MB6315.jpnprd01.prod.outlook.com
* Add pg_stat_io view, providing more detailed IO statistics (Andres Freund, 2023-02-11; 1 file changed, +1/-0)

Builds on 28e626bde00 and f30d62c2fc6. See the former for motivation.

Rows of the view show IO operations for a particular backend type, IO target object, IO context combination (e.g. a client backend's operations on permanent relations in shared buffers) and each column in the view is the total number of IO operations done (e.g. writes). So a cell in the view would be, for example, the number of blocks of relation data written from shared buffers by client backends since the last stats reset.

In anticipation of tracking WAL IO and non-block-oriented IO (such as temporary file IO), the "op_bytes" column specifies the unit of the "reads", "writes", and "extends" columns for a given row.

Rows for combinations of IO operation, backend type, target object and context that never occur, are omitted entirely. For example, checkpointer will never operate on temporary relations. Similarly, if an IO operation never occurs for such a combination, the IO operation's cell will be null, to distinguish from 0 observed IO operations. For example, bgwriter should not perform reads.

Note that some of the cells in the view are redundant with fields in pg_stat_bgwriter (e.g. buffers_backend). For now, these have been kept for backwards compatibility.

Bumps catversion.

Author: Melanie Plageman <melanieplageman@gmail.com>
Author: Samay Sharma <smilingsamay@gmail.com>
Reviewed-by: Maciek Sakrejda <m.sakrejda@gmail.com>
Reviewed-by: Lukas Fittl <lukas@fittl.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Justin Pryzby <pryzby@telsasoft.com>
Discussion: https://postgr.es/m/20200124195226.lth52iydq2n2uilq@alap3.anarazel.de
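A hedged example of reading one such cell — blocks of relation data written back by client backends since the last stats reset:

    SELECT writes, extends, evictions, op_bytes
    FROM pg_stat_io
    WHERE backend_type = 'client backend'
      AND object = 'relation'
      AND context = 'normal';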
* Fix help text spacing in pgindent (Andrew Dunstan, 2023-02-09; 1 file changed, +1/-1)

Author: Noriyoshi Shinoda
* pgstat: Infrastructure for more detailed IO statistics (Andres Freund, 2023-02-08; 1 file changed, +6/-0)

This commit adds the infrastructure for more detailed IO statistics. The calls to actually count IOs, a system view to access the new statistics, documentation and tests will be added in subsequent commits, to make review easier.

While we already had some IO statistics, e.g. in pg_stat_bgwriter and pg_stat_database, they did not provide sufficient detail to understand what the main sources of IO are, or whether configuration changes could avoid IO. E.g., pg_stat_bgwriter.buffers_backend does contain the number of buffers written out by a backend, but as that includes extending relations (always done by backends) and writes triggered by the use of buffer access strategies, it cannot easily be used to tune background writer or checkpointer. Similarly, pg_stat_database.blks_read cannot easily be used to tune shared_buffers / compute a cache hit ratio, as the use of buffer access strategies will often prevent a large fraction of the read blocks from ending up in shared_buffers.

The new IO statistics count IO operations (evict, extend, fsync, read, reuse, and write), and are aggregated for each combination of backend type (backend, autovacuum worker, bgwriter, etc), target object of the IO (relations, temp relations) and context of the IO (normal, vacuum, bulkread, bulkwrite).

What is tracked in this series of patches is sufficient to perform the aforementioned analyses. Further details, e.g. tracking the number of buffer hits, would make that even easier, but were left out for now, to keep the scope of the already large patchset manageable.

Bumps PGSTAT_FILE_FORMAT_ID.

Author: Melanie Plageman <melanieplageman@gmail.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Justin Pryzby <pryzby@telsasoft.com>
Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Discussion: https://postgr.es/m/20200124195226.lth52iydq2n2uilq@alap3.anarazel.de
* pgindent: more ways to find files to indent (Andrew Dunstan, 2023-02-08; 1 file changed, +38/-22)

A new --commit option will add all the files in a commit to the file list. The option can be specified more than once. Also, if a directory is given on the command line, all the files in that directory tree will be added to the file list.

Per suggestions from Robert Haas
Reviewed by Jelte Fennema
Discussion: https://postgr.es/m/CA+TgmoY59Ksso81RNLArNxj0a7xaqV_F_u7gSMHbgdc2kG5Vpw@mail.gmail.com
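Hedged usage examples of the options described above (paths and commits are placeholders):

    # indent every file under a directory tree
    src/tools/pgindent/pgindent src/backend/commands

    # indent only the files touched by specific commits (repeatable)
    src/tools/pgindent/pgindent --commit HEAD --commit HEAD~1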
* Fix the logical replication timeout during large DDLs. (Amit Kapila, 2023-02-08; 1 file changed, +1/-0)

DDLs like Refresh Materialized View that generate lots of temporary data due to rewrite rules may not be processed by output plugins (for example pgoutput). So, we won't send keep-alive messages for a long time while processing such commands and that can lead the subscriber side to time out. We have previously fixed a similar case for large transactions in commit f95d53eded where the output plugin filters all or most of the changes, but missed handling the DDLs.

We decided not to backpatch this as this adds a new callback in the existing exposed structure and moreover, users can increase the wal_sender_timeout and wal_receiver_timeout to avoid this problem.

Author: Wang wei, Hou Zhijie
Reviewed-by: Peter Smith, Ashutosh Bapat, Shi yu, Amit Kapila
Discussion: https://postgr.es/m/OS3PR01MB6275478E5D29E4A563302D3D9E2B9@OS3PR01MB6275.jpnprd01.prod.outlook.com
Discussion: https://postgr.es/m/CAA5-nLARN7-3SLU_QUxfy510pmrYK6JJb=bk3hcgemAM_pAv+w@mail.gmail.com
* Document installing perltidy with cpanm (Andrew Dunstan, 2023-02-02; 1 file changed, +2/-0)

Installing with plain cpan failed for me recently, as the archive it searched has been purged of old releases. However, you can give cpanm a complete URL to the exact version you want to install, so document using that.
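A hedged example of the cpanm invocation described; pgindent has pinned perltidy 20170521, but the exact CPAN URL here is an assumption:

    cpanm https://cpan.metacpan.org/authors/id/S/SH/SHANCOCK/Perl-Tidy-20170521.tar.gz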
* Rename GUC logical_decoding_mode to logical_replication_mode. (Amit Kapila, 2023-01-30; 1 file changed, +1/-1)

Rename the developer option 'logical_decoding_mode' to the more flexible name 'logical_replication_mode' because doing so will make it easier to extend this option in the future to help test other areas of logical replication. Currently, it is used on the publisher side to allow streaming or serializing each change in logical decoding. In the upcoming patch, we are planning to use it on the subscriber. On the subscriber, it will allow serializing the changes to file and notifying the parallel apply workers to read and apply them at the end of the transaction.

We discussed exposing this parameter as a subscription option but it did not seem advisable since it is primarily used for testing/debugging and there is no other such parameter. We also discussed having separate GUCs for publisher and subscriber but for current testing/debugging requirements, one GUC is sufficient.

Author: Hou Zhijie
Reviewed-by: Peter Smith, Kuroda Hayato, Sawada Masahiko, Amit Kapila
Discussion: https://postgr.es/m/CAD21AoAy2c=Mx=FTCs+EwUsf2kQL5MmU3N18X84k0EmCXntK4g@mail.gmail.com
Discussion: https://postgr.es/m/CAA4eK1+wyN6zpaHUkCLorEWNx75MG0xhMwcFhvjqm2KURZEAGw@mail.gmail.com
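A hedged example of exercising the renamed developer GUC (per the original commit further down, 'buffered' is the default and 'immediate' streams or serializes each change):

    -- force immediate streaming/serialization while testing
    SET logical_replication_mode = immediate;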
* Allow multiple --excludes options in pgindent (Andrew Dunstan, 2023-01-27; 2 files changed, +45/-42)

This includes a unification of the logic used to find the excludes file and the typedefs file. Also, remove the dangerous and deprecated feature where the first non-option argument was taken as a typedefs file if it wasn't a .c or .h file, remove some extraneous blank lines, and improve the documentation somewhat.
* Improve exclude pattern file processing in pgindent (Andrew Dunstan, 2023-01-24; 1 file changed, +7/-2)

This makes two small changes that will improve pgindent's usefulness in a git hook. First, it looks for the exclude file relative to the current directory. And second, it applies the filters to filenames given on the command line as well as those found in a directory sweep.

It might prove necessary to make further efforts to find the exclude file, and even to allow multiple exclude files, but for now this should be enough for most purposes.

Reviewed by Jelte Fennema
* Fix pgindent --show-diff option. (Tom Lane, 2023-01-23; 1 file changed, +2/-1)

At least on my machine, the initial coding of this didn't actually work, because interpolation of "$post_fh->filename" doesn't act as intended. I threw in some double quotes too, just in case anybody tries to run this in a path containing spaces.
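A hedged illustration of the Perl pitfall being described (the variable names are made up):

    my $path = "$post_fh->filename";	# wrong: interpolates $post_fh, then
    									# appends the literal "->filename"
    my $path = $post_fh->filename;		# right: call the method outside the string

    # and double-quote the result in case the path contains spaces
    my $cmd = qq{diff -u "$orig_file" "$path"};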
* Add non-destructive modes to pgindent (Andrew Dunstan, 2023-01-23; 2 files changed, +79/-26)

This adds two modes of running pgindent, neither of which results in any changes being made to the source code. The --show-diff option shows what changes would have been made, and the --silent-diff option just exits with a status of 2 if any changes would be made. The second of these is intended for scripting use in places such as git hooks.

Along the way some code cleanup is done, and a --help option is also added.

Reviewed by Tom Lane
Discussion: https://postgr.es/m/c9c9fa6d-6de6-48c2-4f8b-0fbeef026439@dunslane.net
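A hedged sketch of the git-hook use case (paths are illustrative):

    # preview what pgindent would change
    src/tools/pgindent/pgindent --show-diff src/backend/commands

    # in a hook: refuse to proceed if anything would be reindented
    if ! src/tools/pgindent/pgindent --silent-diff src/backend/commands; then
    	echo "pgindent would make changes; run it first" >&2
    	exit 1
    fi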
* Run pgindent on heapam.c (David Rowley, 2023-01-23; 1 file changed, +3/-0)

An upcoming patch by Melanie Plageman does some refactoring work in this area. Run pgindent on that file now before making any changes so that it's easier to maintain/evolve each of the individual patches doing the refactor work. Additionally, add a few new required typedefs to the list to make it easier to do future pgindent runs on this file during the refactor work.

Discussion: https://postgr.es/m/CAAKRu_YSOnhKsDyFcqJsKtBSrd32DP-jjXmv7hL0BPD-z0TGXQ@mail.gmail.com
* Remove SHM_QUEUE (Andres Freund, 2023-01-19; 1 file changed, +0/-1)

Prior patches got rid of all the uses of SHM_QUEUE. ilist.h style lists are more widely used and have an easier to use interface. As there are no users left, remove SHM_QUEUE.

Reviewed-by: Thomas Munro <thomas.munro@gmail.com> (in an older version)
Discussion: https://postgr.es/m/20221120055930.t6kl3tyivzhlrzu2@awork3.anarazel.de
Discussion: https://postgr.es/m/20200211042229.msv23badgqljrdg2@alap3.anarazel.de
* Use dlist/dclist instead of PROC_QUEUE / SHM_QUEUE for heavyweight locks (Andres Freund, 2023-01-18; 1 file changed, +0/-1)

Part of a series to remove SHM_QUEUE. ilist.h style lists are more widely used and have an easier to use interface. As PROC_QUEUE is now unused, remove it.

Reviewed-by: Thomas Munro <thomas.munro@gmail.com> (in an older version)
Discussion: https://postgr.es/m/20221120055930.t6kl3tyivzhlrzu2@awork3.anarazel.de
Discussion: https://postgr.es/m/20200211042229.msv23badgqljrdg2@alap3.anarazel.de
* Perform apply of large transactions by parallel workers. (Amit Kapila, 2023-01-09; 1 file changed, +7/-0)

Currently, for large transactions, the publisher sends the data in multiple streams (changes divided into chunks depending upon logical_decoding_work_mem), and then on the subscriber-side, the apply worker writes the changes into temporary files and once it receives the commit, it reads from those files and applies the entire transaction. To improve the performance of such transactions, we can instead allow them to be applied via parallel workers.

In this approach, we assign a new parallel apply worker (if available) as soon as the xact's first stream is received and the leader apply worker will send changes to this new worker via shared memory. The parallel apply worker will directly apply the change instead of writing it to temporary files. However, if the leader apply worker times out while attempting to send a message to the parallel apply worker, it will switch to "partial serialize" mode - in this mode, the leader serializes all remaining changes to a file and notifies the parallel apply workers to read and apply them at the end of the transaction. We use a non-blocking way to send the messages from the leader apply worker to the parallel apply worker to avoid deadlocks. We keep this parallel apply worker assigned till the transaction commit is received and also wait for the worker to finish at commit. This preserves commit ordering and avoids writing to and reading from files in most cases. We still need to spill if there is no worker available.

This patch also extends the SUBSCRIPTION 'streaming' parameter so that the user can control whether to apply the streaming transaction in a parallel apply worker or spill the change to disk. The user can set the streaming parameter to 'on/off', or 'parallel'. The parameter value 'parallel' means the streaming will be applied via a parallel apply worker, if available. The parameter value 'on' means the streaming transaction will be spilled to disk. The default value is 'off' (same as current behaviour).

In addition, the patch extends the logical replication STREAM_ABORT message so that abort_lsn and abort_time can also be sent which can be used to update the replication origin in the parallel apply worker when the streaming transaction is aborted. Because this message extension is needed to support parallel streaming, parallel streaming is not supported for publications on servers < PG16.

Author: Hou Zhijie, Wang wei, Amit Kapila with design inputs from Sawada Masahiko
Reviewed-by: Sawada Masahiko, Peter Smith, Dilip Kumar, Shi yu, Kuroda Hayato, Shveta Mallik
Discussion: https://postgr.es/m/CAA4eK1+wyN6zpaHUkCLorEWNx75MG0xhMwcFhvjqm2KURZEAGw@mail.gmail.com
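A hedged example of the extended subscription parameter (names and connection string are placeholders):

    CREATE SUBSCRIPTION mysub
    	CONNECTION 'host=publisher dbname=app'
    	PUBLICATION mypub
    	WITH (streaming = parallel);

    -- or switch an existing subscription
    ALTER SUBSCRIPTION mysub SET (streaming = parallel);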
* Update copyright for 2023 (Bruce Momjian, 2023-01-02; 1 file changed, +1/-1)

Backpatch-through: 11
* Add 'logical_decoding_mode' GUC. (Amit Kapila, 2022-12-26; 1 file changed, +1/-0)

This enables streaming or serializing changes immediately in logical decoding. This parameter is intended to be used to test logical decoding and replication of large transactions for which otherwise we need to generate the changes till logical_decoding_work_mem is reached. This helps in reducing the timing of existing tests related to logical replication of in-progress transactions and will help in writing tests for the upcoming feature for applying large in-progress transactions in parallel.

Author: Shi yu
Reviewed-by: Sawada Masahiko, Shveta Mallik, Amit Kapila, Dilip Kumar, Kuroda Hayato, Kyotaro Horiguchi
Discussion: https://postgr.es/m/OSZPR01MB63104E7449DBE41932DB19F1FD1B9@OSZPR01MB6310.jpnprd01.prod.outlook.com
* Create infrastructure for "soft" error reporting. (Tom Lane, 2022-12-09; 1 file changed, +1/-0)

Postgres' standard mechanism for reporting errors (ereport() or elog()) is used for all sorts of error conditions. This means that throwing an exception via ereport(ERROR) requires an expensive transaction or subtransaction abort and cleanup, since the exception catcher dare not make many assumptions about what has gone wrong. There are situations where we would rather have a lighter-weight mechanism for dealing with errors that are known to be safe to recover from without a full transaction cleanup. This commit creates infrastructure to let us adapt existing error-reporting code for that purpose. See the included documentation changes for details. Follow-on commits will provide test code and usage examples.

The near-term plan is to convert most if not all datatype input functions to report invalid input "softly". This will enable implementing some SQL/JSON features cleanly and without the cost of subtransactions, and it will also allow creating COPY options to deal with bad input without cancelling the whole COPY.

This patch is mostly by me, but it owes very substantial debt to earlier work by Nikita Glukhov, Andrew Dunstan, and Amul Sul. Thanks also to Andres Freund for review.

Discussion: https://postgr.es/m/3bbbb0df-7382-bf87-9737-340ba096e034@postgrespro.ru
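A hedged sketch of the soft-error pattern this enables (the validation test and type name are made up, and caller/callee are shown together for brevity):

    #include "nodes/miscnodes.h"

    /* caller: capture errors instead of throwing */
    ErrorSaveContext escontext = {T_ErrorSaveContext};

    /* inside an input function: report softly, returning a dummy value */
    if (!looks_valid(str))
    	ereturn((Node *) &escontext, (Datum) 0,
    			(errcode(ERRCODE_INVALID_TEXT_REPRESENTATION),
    			 errmsg("invalid input syntax for type %s: \"%s\"",
    					"mytype", str)));

    /* caller, afterwards: recover without a (sub)transaction abort */
    if (escontext.error_occurred)
    	handle_bad_input();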
* Rework query relation permission checking (Alvaro Herrera, 2022-12-06; 1 file changed, +2/-0)

Currently, information about the permissions to be checked on relations mentioned in a query is stored in their range table entries. So the executor must scan the entire range table looking for relations that need to have permissions checked. This can make the permission checking part of the executor initialization needlessly expensive when many inheritance children are present in the range table. While the permissions need not be checked on the individual child relations, the executor still must visit every range table entry to filter them out.

This commit moves the permission checking information out of the range table entries into a new plan node called RTEPermissionInfo. Every top-level (inheritance "root") RTE_RELATION entry in the range table gets one and a list of those is maintained alongside the range table. This new list is initialized by the parser when initializing the range table. The rewriter can add more entries to it as rules/views are expanded. Finally, the planner combines the lists of the individual subqueries into one flat list that is passed to the executor for checking.

To make it quick to find the RTEPermissionInfo entry belonging to a given relation, RangeTblEntry gets a new Index field 'perminfoindex' that stores the corresponding RTEPermissionInfo's index in the query's list of the latter.

ExecutorCheckPerms_hook has gained another List * argument; the signature is now:

    typedef bool (*ExecutorCheckPerms_hook_type) (List *rangeTable,
                                                  List *rtePermInfos,
                                                  bool ereport_on_violation);

The first argument is no longer used by any in-core uses of the hook, but we leave it in place because there may be other implementations that do. Implementations should likely scan the rtePermInfos list to determine which operations to allow or deny.

Author: Amit Langote <amitlangote09@gmail.com>
Discussion: https://postgr.es/m/CA+HiwqGjJDmUhDSfv-U2qhKJjt9ST7Xh9JXC_irsAQ1TAUsJYg@mail.gmail.com
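A hedged sketch of a hook implementation following that advice; restricted_relid and the policy itself are made up:

    static bool
    my_check_perms(List *rangeTable, List *rtePermInfos,
    			   bool ereport_on_violation)
    {
    	ListCell   *lc;

    	foreach(lc, rtePermInfos)
    	{
    		RTEPermissionInfo *perminfo = lfirst_node(RTEPermissionInfo, lc);

    		/* example policy: deny non-SELECT access to one relation */
    		if (perminfo->relid == restricted_relid &&
    			(perminfo->requiredPerms & ~ACL_SELECT) != 0)
    		{
    			if (ereport_on_violation)
    				aclcheck_error(ACLCHECK_NO_PRIV, OBJECT_TABLE,
    							   get_rel_name(perminfo->relid));
    			return false;
    		}
    	}
    	return true;
    }

    /* at module load: ExecutorCheckPerms_hook = my_check_perms; */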
* Switch pg_dump to use compression specifications (Michael Paquier, 2022-12-02; 1 file changed, +0/-1)

Compression specifications are currently used by pg_basebackup and pg_receivewal, and are able to let the user control in an extended way the method and level of compression used. As an effect of this commit, pg_dump's -Z/--compress is now able to use more than just an integer, as of the grammar "method[:detail]".

The method can be either "none" or "gzip", and can optionally take a detail string. If the detail string is only an integer, it defines the compression level. A comma-separated list of keywords can also be used; this allows for more options, and the only keyword supported now is "level". The change is backward-compatible, hence specifying only an integer leads to no compression for a level of 0 and gzip compression when the level is greater than 0.

Most of the code changes are straightforward, as pg_dump was relying on an integer tracking the compression level to check for gzip or no compression. These are changed to use a compression specification and the algorithm stored in it.

As of this change, note that the dump format is not bumped because there is no need yet to track the compression algorithm in the TOC entries. Hence, we still rely on the compression level to make the difference when reading them. This will be mandatory once a new compression method is added, though.

In order to keep the code simpler when parsing the compression specification, the code is changed so that pg_dump now fails hard when using gzip on -Z/--compress without its support compiled in, rather than enforcing no compression without the user knowing about it except through a warning. Like before this commit, archive and custom formats are compressed by default when the code is compiled with gzip, and left uncompressed without gzip.

Author: Georgios Kokolatos
Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/O4mutIrCES8ZhlXJiMvzsivT7ztAMja2lkdL1LJx6O5f22I2W8PBIeLKz7mDLwxHoibcnRAYJXm1pH4tyUNC4a8eDzLn22a6Pb1S74Niexg=@pm.me
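Hedged examples of the accepted spellings (as of this commit, only "none" and "gzip" are valid methods):

    pg_dump -Fc -Z gzip:6 -f mydb.dump mydb        # method:level
    pg_dump -Fc -Z gzip:level=6 -f mydb.dump mydb  # keyword form
    pg_dump -Fc -Z 6 -f mydb.dump mydb             # backward-compatible integer
    pg_dump -Fc -Z none -f mydb.dump mydb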
* Replace SQLValueFunction by COERCE_SQL_SYNTAX (Michael Paquier, 2022-11-21; 1 file changed, +0/-2)

This switch impacts 9 patterns related to a SQL-mandated special syntax for function calls:

    LOCALTIME [ ( typmod ) ]
    LOCALTIMESTAMP [ ( typmod ) ]
    CURRENT_TIME [ ( typmod ) ]
    CURRENT_TIMESTAMP [ ( typmod ) ]
    CURRENT_DATE

Five new entries are added to pg_proc to compensate for the removal of SQLValueFunction, to provide backward-compatibility and make this change transparent for the end-user (for example for the attribute generated when a keyword is specified in a SELECT or in a FROM clause without an alias, or when specifying something else than an Iconst to the parser).

The parser included a set of checks coming from the files in charge of holding the C functions used for the SQLValueFunction calls (as of transformSQLValueFunction()), which are now moved within each function's execution path, so this reduces the dependencies between the execution and the parsing steps. As of this change, all the SQL keywords use the same paths for their work, relying only on COERCE_SQL_SYNTAX. Like fb32748, no performance difference has been noticed, while the perf profiles get reduced with ExecEvalSQLValueFunction() gone.

Bump catalog version.

Reviewed-by: Corey Huinker, Ted Yu
Discussion: https://postgr.es/m/YzaG3MoryCguUOym@paquier.xyz
* Add error context callback when tokenizing authentication files (Michael Paquier, 2022-11-14; 1 file changed, +1/-0)

The parsing of the authentication files for HBA and ident entries happens in two phases:

- Tokenization of the files, creating a list of TokenizedAuthLines.
- Validation of the HBA and ident entries, building a set of HbaLines or IdentLines.

The second phase doing the validation already provides some error context about the configuration file and the line where a problem happens, but there is no such information in the first phase when tokenizing the files. This commit adds an ErrorContextCallback in tokenize_auth_file(), with a context made of the line number and the configuration file name involved in a problem. This is useful for files included in an HBA file for user and database lists, and it will become much more handy to track problems for files included via a potential @include[_dir,_if_exists].

The error context is registered so that the full chain of events is reported when using cascaded inclusions, for example when tokenize_auth_file() recurses over itself on new files, displaying one context line for each file gone through when tokenizing things.

Author: Michael Paquier
Reviewed-by: Julien Rouhaud
Discussion: https://postgr.es/m/Y2xUBJ+S+Z0zbxRW@paquier.xyz
* Suppress useless wakeups in walreceiver. (Thomas Munro, 2022-11-08; 1 file changed, +1/-0)

Instead of waking up 10 times per second to check for various timeout conditions, keep track of when we next have periodic work to do.

Author: Thomas Munro <thomas.munro@gmail.com>
Author: Nathan Bossart <nathandbossart@gmail.com>
Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Reviewed-by: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>
Reviewed-by: Alvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://postgr.es/m/CA%2BhUKGJGhX4r2LPUE3Oy9BX71Eum6PBcS8L3sJpScR9oKaTVaA%40mail.gmail.com
* Add doubly linked count list implementation (David Rowley, 2022-11-02; 1 file changed, +2/-1)

We have various requirements when using a dlist_head to keep track of the number of items in the list. This, traditionally, has been done by maintaining a counter variable in the calling code. Here we tidy this up by adding "dclist", which is very similar to dlist but also keeps track of the number of items stored in the list. Callers may use the new dclist_count() function when they need to know how many items are stored. Obtaining the count is an O(1) operation.

For simplicity reasons, dclist and dlist both use dlist_node as their node type and dlist_iter/dlist_mutable_iter as their iterator type. dclists have all of the same functionality as dlists except there is no function named dclist_delete(). To remove an item from a list dclist_delete_from() must be used. This requires knowing which dclist the given item is stored in.

Additionally, here we also convert some dlists where additional code exists to keep track of the number of items stored and make these use dclists instead.

Author: David Rowley
Reviewed-by: Bharath Rupireddy, Aleksander Alekseev
Discussion: https://postgr.es/m/CAApHDvrtVxr+FXEX0VbViCFKDGxA3tWDgw9oFewNXCJMmwLjLg@mail.gmail.com
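A hedged sketch of the dclist API described above (the element type is made up):

    typedef struct MyItem
    {
    	dlist_node	node;		/* dclist reuses dlist_node */
    	int			value;
    } MyItem;

    dclist_head items;

    dclist_init(&items);
    dclist_push_tail(&items, &item->node);

    /* the count is maintained internally: O(1), no hand-rolled counter */
    Assert(dclist_count(&items) == 1);

    /* removal must name the list, so the counter can be updated */
    dclist_delete_from(&items, &item->node);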
* Remove pgpid_t type, use pid_t instead (Peter Eisentraut, 2022-10-22; 1 file changed, +0/-1)

It's unclear why a separate type would be needed here. We use plain pid_t (or int) everywhere else.

(The only relevant platform where pid_t is not int is 64-bit MinGW, where it is long long int. So defining pid_t as long (which is 32-bit on Windows), as was done here, doesn't even accommodate that one.)

Reverts 66fa6eba5a61be740a6c07de92c42221fae79e9c.

Discussion: https://www.postgresql.org/message-id/289c2e45-c7d9-5ce4-7eff-a9e2a33e1580@enterprisedb.com
* Add support for COPY TO callback functions (Michael Paquier, 2022-10-11; 1 file changed, +1/-0)

This is useful as a way for extensions to process COPY TO rows in the way they see fit (say auditing, analytics, backend, etc.) without the need to invoke an external process running as the OS user running the backend through PROGRAM that requires superuser rights. COPY FROM already provides a similar callback for logical replication.

For COPY TO, the callback is triggered when we are ready to send a row in CopySendEndOfRow(), which is the same code path as when sending a row to a frontend or a pipe/file.

A small test module, test_copy_callbacks, is added to provide some coverage for this facility.

Author: Bilva Sanaba, Nathan Bossart
Discussion: https://postgr.es/m/253C21D1-FCEB-41D9-A2AF-E6517015B7D7@amazon.com
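A hedged sketch of how an extension might hook in; the callback shape follows the commit's description, but the BeginCopyTo() argument order and the sink function are assumptions:

    /* called once per row with the raw bytes COPY TO would have sent */
    static void
    my_copy_dest_cb(void *data, int len)
    {
    	my_sink_write(data, len);	/* hypothetical sink: audit log, queue, ... */
    }

    /* when starting the COPY: pass the callback instead of a file/program */
    cstate = BeginCopyTo(pstate, rel, NULL /* no query */, InvalidOid,
    					 NULL /* no file */, false /* not a program */,
    					 my_copy_dest_cb, attnamelist, options);
    (void) DoCopyTo(cstate);
    EndCopyTo(cstate);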