path: root/contrib
Commit message | Author | Age | Files | Lines
* Fix more portability issues in new amcheck code. | Tom Lane | 2020-10-23 | 2 | -35/+73
  verify_heapam() wasn't being careful to sanity-check tuple line pointers before using them, resulting in SIGBUS on alignment-picky architectures. Fix that, add some more test coverage.
  Mark Dilger, some tweaking by me
  Discussion: https://postgr.es/m/30B8E99A-2D9C-48D4-A55C-741C9D5F1563@enterprisedb.com
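  A minimal sketch of the kind of line-pointer sanity check described above, using the standard page/item macros; the page, offnum, ctx variables and the report_corruption() helper are placeholders, not the committed verify_heapam() code:

      ItemId      itemid = PageGetItemId(page, offnum);
      HeapTupleHeader tuphdr;

      /* only normal line pointers carry a tuple worth examining */
      if (!ItemIdIsUsed(itemid) || ItemIdIsRedirected(itemid) || ItemIdIsDead(itemid))
          continue;

      /* reject unaligned or out-of-bounds offsets before touching the tuple */
      if (ItemIdGetOffset(itemid) != MAXALIGN(ItemIdGetOffset(itemid)) ||
          ItemIdGetOffset(itemid) + ItemIdGetLength(itemid) > BLCKSZ)
      {
          report_corruption(ctx, "line pointer is unaligned or overruns the page");
          continue;
      }

      tuphdr = (HeapTupleHeader) PageGetItem(page, itemid);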
* Fix portability issues in new amcheck test. | Tom Lane | 2020-10-23 | 1 | -102/+46
  The tests added by commit 866e24d47 failed on big-endian machines due to lack of attention to endianness considerations. Fix that.
  While here, improve a few small cosmetic things, such as running it through perltidy.
  Mark Dilger
  Discussion: https://postgr.es/m/30B8E99A-2D9C-48D4-A55C-741C9D5F1563@enterprisedb.com
* Try to avoid a compiler warning about using fxid uninitialized. | Robert Haas | 2020-10-22 | 1 | -36/+31
  Mark Dilger, with a couple of stray semicolons removed by me.
  Discussion: http://postgr.es/m/2A7DA1A8-C4AA-43DF-A985-3CA52F4DC775@enterprisedb.com
* Extend amcheck to check heap pages. | Robert Haas | 2020-10-22 | 7 | -3/+2035
  Mark Dilger, reviewed by Peter Geoghegan, Andres Freund, Álvaro Herrera, Michael Paquier, Amul Sul, and by me. Some last-minute cosmetic revisions by me.
  Discussion: http://postgr.es/m/12ED3DA8-25F0-4B68-937D-D907CFBF08E7@enterprisedb.com
* Change the attribute name in pg_stat_replication_slots view. | Amit Kapila | 2020-10-20 | 2 | -11/+11
  Change the attribute 'name' to 'slot_name' in the pg_stat_replication_slots view to make it clearer and consistent with other places such as the pg_stat_wal_receiver view, where we display the same attribute. In passing, fix a typo in one of the macros in the related code.
  Bump the catversion as we have modified the name in the catalog as well.
  Reported-by: Noriyoshi Shinoda
  Author: Noriyoshi Shinoda
  Reviewed-by: Sawada Masahiko and Amit Kapila
  Discussion: https://postgr.es/m/CA+fd4k5_pPAYRTDrO2PbtTOe0eHQpBvuqmCr8ic39uTNmR49Eg@mail.gmail.com
* Fix potential memory leak in pgcrypto | Michael Paquier | 2020-10-19 | 1 | -0/+1
  When allocating an EVP context, it would have been possible to leak some memory allocated directly by OpenSSL, which PostgreSQL lost track of if initialization of the allocated context failed. The cleanup can be done with EVP_MD_CTX_destroy().
  Note that the EVP APIs exist since OpenSSL 0.9.7 and we have in the tree equivalent implementations for older versions since ce9b75d (code removed with 9b7cd59a as of 10~). However, in 9.5 and 9.6, the existing code makes use of EVP_MD_CTX_destroy() and EVP_MD_CTX_create() without an equivalent implementation when building the tree with OpenSSL 0.9.6 or older, meaning that this code is in reality broken with such versions since it got introduced in e2838c5. As we have heard no complaints about that, it does not seem worth bothering with in 9.5 and 9.6, so I have left that out for simplicity.
  Author: Michael Paquier
  Discussion: https://postgr.es/m/20201015072212.GC2305@paquier.xyz
  Backpatch-through: 9.5
* Add missing error check in pgcrypto/crypt-md5.c. | Tom Lane | 2020-10-16 | 1 | -1/+7
  In theory, the second px_find_digest call in px_crypt_md5 could fail even though the first one succeeded, since resource allocation is required. Don't skip testing for a failure. (If one did happen, the likely result would be a crash rather than clean recovery from an OOM failure.)
  The code's been like this all along, so back-patch to all supported branches.
  Daniel Gustafsson
  Discussion: https://postgr.es/m/AA8D6FE9-4AB2-41B4-98CB-AE64BA668C03@yesql.se
* postgres_fdw: Restructure connection retry logic. | Fujii Masao | 2020-10-16 | 1 | -46/+90
  Commit 32a9c0bdf introduced connection retry logic into postgres_fdw. Previously it used a goto statement for the retry. This commit gets rid of that goto to make the logic simpler and easier to read.
  When getting out of PG_CATCH() for the retry, the error state should be cleaned up and the memory context should be reset. But commit 32a9c0bdf forgot to do that. This commit also fixes this bug.
  Previously only PQstatus()==CONNECTION_BAD was checked to detect connection failure. But this could cause false detection in the case where an error other than connection failure (e.g., out-of-memory) was thrown after a broken connection was detected in libpq and CONNECTION_BAD was set. To fix this issue, this commit changes the logic so that it also checks that the error's sqlstate is ERRCODE_CONNECTION_FAILURE.
  Author: Fujii Masao
  Reviewed-by: Tom Lane
  Discussion: https://postgr.es/m/2943611.1602375376@sss.pgh.pa.us
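  A rough sketch of the broadened failure test described above; the entry and ccxt variables are placeholders rather than the actual postgres_fdw code:

      PG_CATCH();
      {
          MemoryContext   ecxt = MemoryContextSwitchTo(ccxt);   /* ccxt: caller's context */
          ErrorData      *errdata = CopyErrorData();
          bool            retry;

          /* retry only for genuine connection failures on a broken connection */
          retry = (errdata->sqlerrcode == ERRCODE_CONNECTION_FAILURE &&
                   PQstatus(entry->conn) == CONNECTION_BAD);
          FreeErrorData(errdata);

          if (!retry)
          {
              MemoryContextSwitchTo(ecxt);
              PG_RE_THROW();
          }

          /* clean up the error state and memory before re-establishing */
          FlushErrorState();
          MemoryContextReset(ccxt);
      }
      PG_END_TRY();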
* Fixup some appendStringInfo and appendPQExpBuffer calls | David Rowley | 2020-10-15 | 2 | -8/+8
  A number of places were using appendStringInfo() when they could have been using appendStringInfoString() instead. While there's no functionality change there, it's just more efficient to use appendStringInfoString() when no formatting is required. Likewise for some appendStringInfoString() calls which were just appending a single char. We can just use appendStringInfoChar() for that.
  Additionally, many places were using appendPQExpBuffer() when they could have used appendPQExpBufferStr(). Change those too.
  Patch by Zhijie Hou, but further searching by me found significantly more places that deserved the same treatment.
  Author: Zhijie Hou, David Rowley
  Discussion: https://postgr.es/m/cb172cf4361e4c7ba7167429070979d4@G08CNEXMBPEKD05.g08.fujitsu.local
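  The three kinds of substitutions described above look roughly like this (illustrative C only, with placeholder buffer and string names, not lines taken from the patch):

      /* no format specifiers, so skip the printf-style machinery */
      appendStringInfo(buf, "%s", relname);       /* before */
      appendStringInfoString(buf, relname);       /* after  */

      /* a single character doesn't even need the string variant */
      appendStringInfoString(buf, ";");           /* before */
      appendStringInfoChar(buf, ';');             /* after  */

      /* same idea for the frontend PQExpBuffer API */
      appendPQExpBuffer(query, "%s", relname);    /* before */
      appendPQExpBufferStr(query, relname);       /* after  */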
* Include result relation info in direct modify ForeignScan nodes. | Heikki Linnakangas | 2020-10-14 | 1 | -7/+9
  FDWs that can perform an UPDATE/DELETE remotely using the "direct modify" set of APIs need to access the ResultRelInfo of the target table. That's currently available in EState.es_result_relation_info, but the next commit will remove that field.
  This commit adds a new resultRelation field in ForeignScan, to store the target relation's RT index, and the corresponding ResultRelInfo in ForeignScanState. The FDW's PlanDirectModify callback is expected to set 'resultRelation' along with 'operation'. The core code doesn't need them for anything; they are for the convenience of the FDW's Begin- and IterateDirectModify callbacks.
  Authors: Amit Langote, Etsuro Fujita
  Discussion: https://www.postgresql.org/message-id/CA%2BHiwqGEmiib8FLiHMhKB%2BCH5dRgHSLc5N5wnvc4kym%2BZYpQEQ%40mail.gmail.com
* Fix the unstable output of tests added by commit 8fccf75834. | Amit Kapila | 2020-10-13 | 2 | -14/+18
  The test cases added by that commit were trying to test the exact number of times a particular transaction has spilled. However, that number can vary if any background transaction (say by autovacuum) happens in parallel to the main transaction. So let's not try to verify the exact count.
  Author: Amit Kapila
  Reviewed-by: Sawada Masahiko
  Discussion: https://postgr.es/m/CA+fd4k5_pPAYRTDrO2PbtTOe0eHQpBvuqmCr8ic39uTNmR49Eg@mail.gmail.com
* Add tests for logical replication spilled stats. | Amit Kapila | 2020-10-13 | 3 | -1/+172
  Commit 9868167500 added a mechanism to track statistics corresponding to the spilling of changes from ReorderBuffer but didn't add any tests.
  Author: Amit Kapila and Sawada Masahiko
  Discussion: https://postgr.es/m/CA+fd4k5_pPAYRTDrO2PbtTOe0eHQpBvuqmCr8ic39uTNmR49Eg@mail.gmail.com
* Band-aid new postgres_fdw test case to remove error text dependency. | Tom Lane | 2020-10-10 | 2 | -4/+7
  Buildfarm member lorikeet is still failing the test from commit 32a9c0bdf, but now it's down to the should-have-foreseen-it problem that the error message isn't what the expected-output file expects. Let's see if we can get stable results by printing just the SQLSTATE. I believe we'll reliably see ERRCODE_CONNECTION_FAILURE, since pgfdw_report_error() will report that for any libpq-originated error.
  There may be a better way to do this, but I'd like to get the buildfarm back to green before we discuss further improvements.
  Discussion: https://postgr.es/m/E1kPc9v-0005L4-2l@gemulon.postgresql.org
  Discussion: https://postgr.es/m/2621622.1602184554@sss.pgh.pa.us
* postgres_fdw: reestablish new connection if cached one is detected as broken. | Fujii Masao | 2020-10-06 | 3 | -12/+131
  In postgres_fdw, once remote connections are established, they are cached and re-used for subsequent queries and transactions. There can be some cases where those cached connections are unavailable, for example, after a restart of the remote server. In these cases, previously an error was reported and the query accessing the remote server failed if a new remote transaction failed to start because the cached connection was broken.
  This commit improves postgres_fdw so that a new connection is remade if a broken connection is detected when starting a new remote transaction. This is useful to avoid unnecessary failure of queries when the connection is broken but can be reestablished.
  Author: Bharath Rupireddy, tweaked a bit by Fujii Masao
  Reviewed-by: Ashutosh Bapat, Tatsuhito Kasahara, Fujii Masao
  Discussion: https://postgr.es/m/CALj2ACUAi23vf1WiHNar_LksM9EDOWXcbHCo-fD4Mbr1d=78YQ@mail.gmail.com
* Remove custom memory allocation layer in pgcrypto | Michael Paquier | 2020-09-25 | 17 | -143/+106
  PX_OWN_ALLOC was intended as a way to disable the use of palloc(), but over time new palloc() or equivalent calls have been added, like in 32984d8, making this extra layer lose its original purpose. On the way, this simplifies some code paths to use palloc0() rather than palloc() followed by memset(0).
  Author: Daniel Gustafsson
  Discussion: https://postgr.es/m/A5BFAA1A-B2E8-4CBC-895E-7B1B9475A527@yesql.se
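  The palloc0() simplification mentioned above amounts to the following (illustrative only; buf and BUFSZ are placeholders):

      /* before: allocate, then clear the whole block */
      buf = palloc(BUFSZ);
      memset(buf, 0, BUFSZ);

      /* after: one call allocates and zeroes */
      buf = palloc0(BUFSZ);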
* Add new 'old_snapshot' contrib module. | Robert Haas | 2020-09-24 | 5 | -0/+201
  You can use this to view the contents of the time to XID mapping which the server maintains when old_snapshot_threshold != -1. Being able to view that information may be interesting for users, and it's definitely useful for figuring out whether the mapping is being maintained correctly. It isn't, so that will need to be fixed in a subsequent commit.
  Patch by me, reviewed by Thomas Munro, Dilip Kumar, Hamid Akhtar.
  Discussion: http://postgr.es/m/CA+TgmoY=aqf0zjTD+3dUWYkgMiNDegDLFjo+6ze=Wtpik+3XqA@mail.gmail.com
* pg_surgery: Try to stabilize regression tests. | Robert Haas | 2020-09-18 | 2 | -13/+8
  According to buildfarm member sungazer, the behavior of VACUUM can be unstable in these tests even if we prevent autovacuum from running on the tables in question, apparently because even a manual vacuum can behave differently depending on whether anything else is running that holds back the global xmin. So use a temporary table instead, which as of commit a7212be8b9e0885ee769e8c55f99ef742cda487b enables vacuuming using a more aggressive cutoff.
  This approach can't be used for the regression test that involves a materialized view, but that test doesn't run vacuum, so it shouldn't be prone to this particular failure mode.
  Analysis by Tom Lane. Patch by Ashutosh Sharma and me.
  Discussion: http://postgr.es/m/665524.1599948007@sss.pgh.pa.us
* Remove support for postfix (right-unary) operators. | Tom Lane | 2020-09-17 | 1 | -13/+6
  This feature has been a thorn in our sides for a long time, causing many grammatical ambiguity problems. It doesn't seem worth the pain to continue to support it, so remove it.
  There are some follow-on improvements we can make in the grammar, but this commit only removes the bare minimum number of productions, plus assorted backend support code.
  Note that pg_dump and psql continue to have full support, since they may be used against older servers. However, pg_dump warns about postfix operators. There is also a check in pg_upgrade.
  Documentation-wise, I (tgl) largely removed the "left unary" terminology in favor of saying "prefix operator", which is a more standard and IMO less confusing term.
  I included a catversion bump, although no initial catalog data changes here, to mark the boundary at which oprkind = 'r' stopped being valid in pg_operator.
  Mark Dilger, based on work by myself and Robert Haas; review by John Naylor
  Discussion: https://postgr.es/m/38ca86db-42ab-9b48-2902-337a0d6b8311@2ndquadrant.com
* Remove factorial operators, leaving only the factorial() function. | Tom Lane | 2020-09-17 | 2 | -9/+0
  The "!" operator is our only built-in postfix operator. Remove it, on the way to removal of grammar support for postfix operators.
  There is also a "!!" prefix operator, but since it's been marked deprecated for most of its existence, we might as well remove it too. Also zap the SQL alias function numeric_fac(), which seems to have equally little reason to live.
  Mark Dilger, based on work by myself and Robert Haas; review by John Naylor
  Discussion: https://postgr.es/m/38ca86db-42ab-9b48-2902-337a0d6b8311@2ndquadrant.com
* Fix amcheck child check pg_upgrade bug. | Peter Geoghegan | 2020-09-16 | 1 | -6/+28
  Commit d114cc53 overlooked the fact that pg_upgrade'd B-Tree indexes have leaf page high keys whose offset numbers do not match the one from the copy of the tuple one level up (the copy stored with a downlink for the leaf page's right sibling page). This led to false positive reports of corruption from bt_index_parent_check() when it was called to verify a pg_upgrade'd index.
  To fix, skip comparing the offset number on pg_upgrade'd B-Tree indexes.
  Author: Anastasia Lubennikova <a.lubennikova@postgrespro.ru>
  Author: Peter Geoghegan <pg@bowt.ie>
  Reported-By: Andrew Bille <andrewbille@gmail.com>
  Diagnosed-By: Anastasia Lubennikova <a.lubennikova@postgrespro.ru>
  Bug: #16619
  Discussion: https://postgr.es/m/16619-aaba10f83fdc1c3c@postgresql.org
  Backpatch: 13-, where child check was enhanced.
* Skip empty transaction stream in test_decoding. | Amit Kapila | 2020-09-11 | 5 | -23/+95
  We were decoding empty transactions via the streaming APIs added in commit 45fdc9738b even when the user used the option 'skip-empty-xacts'. The APIs make no effort to skip empty xacts under the assumption that we will never try to stream such transactions. However, that is not true, because we can pick to stream a transaction that has change messages for REORDER_BUFFER_CHANGE_INTERNAL_SNAPSHOT; we don't send such messages downstream, they are just there to update the internal state. So, we need to skip such xacts when the plugin uses the option 'skip-empty-xacts'.
  Diagnosed-By: Amit Kapila
  Author: Dilip Kumar
  Reviewed-by: Amit Kapila
  Discussion: https://postgr.es/m/CAA4eK1+OqgFNZkf7=ETe_y5ntjgDk3T0wcdkd4Sot_u1hySGfw@mail.gmail.com
* New contrib module, pg_surgery, with heap surgery functions. | Robert Haas | 2020-09-10 | 8 | -0/+750
  Sometimes it happens that the visibility information for a tuple becomes corrupted, either due to bugs in the database software or external factors. Provide a function heap_force_kill() that can be used to truncate such dead tuples to dead line pointers, and a function heap_force_freeze() that can be used to overwrite the visibility information in such a way that the tuple becomes all-visible.
  These functions are unsafe, in that you can easily use them to corrupt a database that was not previously corrupted, and you can use them to further corrupt an already-corrupted database or to destroy data. The documentation accordingly cautions against casual use. However, in some cases they permit recovery of data that would otherwise be very difficult to recover, or allow a system to continue to function when it would otherwise be difficult to do so.
  Because we may want to add other functions for performing other kinds of surgery in the future, the new contrib module is called pg_surgery rather than something specific to these functions.
  I proposed back-patching this so that it could be more easily used by people running existing releases who are facing these kinds of problems, but that proposal did not attract enough support, so no back-patch for now.
  Ashutosh Sharma, reviewed and tested by Andrey M. Borodin, M. Beena Emerson, Masahiko Sawada, Rajkumar Raghuwanshi, Asim Praveen, and Mark Dilger, and somewhat revised by me.
  Discussion: http://postgr.es/m/CA+TgmoZW1fsU-QUNCRUQMGUygBDPVeOTLCqRdQZch=EYZnctSA@mail.gmail.com
* Expose internal function for converting int64 to numeric | Peter Eisentraut | 2020-09-09 | 2 | -4/+2
  Existing callers had to take complicated detours via DirectFunctionCall1(). This simplifies a lot of code.
  Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
  Discussion: https://www.postgresql.org/message-id/flat/42b73d2d-da12-ba9f-570a-420e0cce19d9@phystech.edu
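  Assuming the exposed routine is the int64_to_numeric() declared in utils/numeric.h, the simplification looks roughly like this (a sketch with a placeholder variable, not lines from the commit):

      /* before: detour through the SQL-callable wrapper */
      num = DatumGetNumeric(DirectFunctionCall1(int8_numeric,
                                                Int64GetDatum(total)));

      /* after: call the internal conversion function directly */
      num = int64_to_numeric(total);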
* Remove unused parameter | Peter Eisentraut | 2020-09-08 | 1 | -4/+3
  unused since f0d6f20278b7c5c412ce40a9b86c6b31dc2fbfdd
  Discussion: https://www.postgresql.org/message-id/flat/511bb100-f829-ba21-2f10-9f952ec06ead%402ndquadrant.com
* Add support for partitioned tables and indexes in REINDEX | Michael Paquier | 2020-09-08 | 2 | -0/+41
  Until now, REINDEX was not able to work with partitioned tables and indexes, forcing users to reindex partitions one by one. This extends REINDEX INDEX and REINDEX TABLE so that they can accept a partitioned index and table as input, respectively, to reindex all the partitions assigned to them with physical storage (foreign tables, partitioned tables and indexes are then discarded).
  This shares some logic with schema and database REINDEX as each partition gets processed in its own transaction after building a list of relations to work on. This choice has the advantage of minimizing the number of invalid indexes to one partition with REINDEX CONCURRENTLY in the event of a cancellation or in-flight failure, as the only indexes handled at once in a single REINDEX CONCURRENTLY loop are the ones from the partition being worked on.
  Isolation tests are added to emulate some cases I bumped into while developing this feature, particularly the concurrent drop of a leaf partition being reindexed. However, this is rather limited as LOCK would cause REINDEX to block in the first transaction building the list of partitions.
  Per its multi-transaction nature, this new flavor cannot run in a transaction block, similarly to REINDEX SCHEMA, SYSTEM and DATABASE.
  Author: Justin Pryzby, Michael Paquier
  Reviewed-by: Anastasia Lubennikova
  Discussion: https://postgr.es/m/db12e897-73ff-467e-94cb-4af03705435f.adger.lj@alibaba-inc.com
* Remove unused parameter | Peter Eisentraut | 2020-09-06 | 1 | -3/+0
  unused since 84d723b6cefcf25b8c800f8aa6cf3c9538a546b4
  Discussion: https://www.postgresql.org/message-id/flat/511bb100-f829-ba21-2f10-9f952ec06ead%402ndquadrant.com
* Remove unused parameter | Peter Eisentraut | 2020-09-04 | 1 | -3/+3
  unused since 93ee38eade1b2b4964354b95b01b09e17d6f098d
  Discussion: https://www.postgresql.org/message-id/flat/511bb100-f829-ba21-2f10-9f952ec06ead%402ndquadrant.com
* Remove arbitrary restrictions on password length. | Tom Lane | 2020-09-03 | 2 | -17/+12
  This patch started out with the goal of harmonizing various arbitrary limits on password length, but after a while a better idea emerged: let's just get rid of those fixed limits.
  recv_password_packet() has an arbitrary limit on the packet size, which we don't really need, so just drop it. (Note that this doesn't really affect anything for MD5 or SCRAM password verification, since those will hash the user's password to something shorter anyway. It does matter for auth methods that require a cleartext password.)
  Likewise remove the arbitrary error condition in pg_saslprep().
  The remaining limits are mostly in client-side code that prompts for passwords. To improve those, refactor simple_prompt() so that it allocates its own result buffer that can be made as big as necessary. Actually, it proves best to make a separate routine pg_get_line() that has essentially the semantics of fgets(), except that it allocates a suitable result buffer and hence will never return a truncated line. (pg_get_line has a lot of potential applications to replace randomly-sized fgets buffers elsewhere, but I'll leave that for another patch.)
  I built pg_get_line() atop stringinfo.c, which requires moving that code to src/common/; but that seems fine since it was a poor fit for src/port/ anyway.
  This patch is mostly mine, but it owes a good deal to Nathan Bossart who pressed for a solution to the password length problem and created a predecessor patch. Also thanks to Peter Eisentraut and Stephen Frost for ideas and discussion.
  Discussion: https://postgr.es/m/09512C4F-8CB9-4021-B455-EF4C4F0D55A0@amazon.com
* Redefine pg_class.reltuples to be -1 before the first VACUUM or ANALYZE. | Tom Lane | 2020-08-30 | 3 | -5/+7
  Historically, we've considered the state with relpages and reltuples both zero as indicating that we do not know the table's tuple density. This is problematic because it's impossible to distinguish "never yet vacuumed" from "vacuumed and seen to be empty". In particular, a user cannot use VACUUM or ANALYZE to override the planner's normal heuristic that an empty table should not be believed to be empty because it is probably about to get populated. That heuristic is a good safety measure, so I don't care to abandon it, but there should be a way to override it if the table is indeed intended to stay empty.
  Hence, represent the initial state of ignorance by setting reltuples to -1 (relpages is still set to zero), and apply the minimum-ten-pages heuristic only when reltuples is still -1. If the table is empty, VACUUM or ANALYZE (but not CREATE INDEX) will override that to reltuples = relpages = 0, and then we'll plan on that basis.
  This requires a bunch of fiddly little changes, but we can get rid of some ugly kluges that were formerly needed to maintain the old definition.
  One notable point is that FDWs' GetForeignRelSize methods will see baserel->tuples = -1 when no ANALYZE has been done on the foreign table. That seems like a net improvement, since those methods were formerly also in the dark about what baserel->tuples = 0 really meant. Still, it is an API change.
  I bumped catversion because code predating this change would get confused by seeing reltuples = -1.
  Discussion: https://postgr.es/m/F02298E0-6EF4-49A1-BCB6-C484794D9ACC@thebuild.com
* passwordcheck: Log cracklib diagnostics | Peter Eisentraut | 2020-08-28 | 1 | -2/+6
  When calling cracklib to check the password, the diagnostic from cracklib was thrown away. This would hide essential information such as no dictionary being installed. Change this to show the cracklib error message using errdetail_log().
  Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
  Reviewed-by: Laurenz Albe <laurenz.albe@cybertec.at>
  Discussion: https://www.postgresql.org/message-id/flat/f7266133-618a-0adc-52ef-f43c78806b0e%402ndquadrant.com
* Add regression tests for REPLICA IDENTITY with dropped indexes | Michael Paquier | 2020-08-26 | 2 | -1/+101
  REPLICA IDENTITY USING INDEX behaves the same way as NOTHING if the associated index is dropped, even if there is a primary key that could be used as a fallback for the changes generated. There have never been any tests to cover such scenarios, so this commit closes the gap.
  Author: Michael Paquier
  Reviewed-by: Masahiko Sawada, Rahila Syed, Euler Taveira
  Discussion: https://postgr.es/m/20200522035028.GO2355@paquier.xyz
* Make xact.h usable in frontend. | Heikki Linnakangas | 2020-08-17 | 2 | -0/+2
  xact.h included utils/datetime.h, which cannot be used in the frontend (it includes fmgr.h, which needs Datum). But xact.h only needs the definition of TimestampTz from it, which is available directly in datatype/timestamp.h. Change xact.h to include that instead of utils/datetime.h, so that it can be used in client programs.
* Remove no-longer-usable hstore--1.0--1.1.sql update script. | Tom Lane | 2020-08-15 | 2 | -8/+1
  Since commit 865f14a2d made "=>" unusable as an operator name, it's been impossible either to install hstore 1.0 or to execute this update script. There's not much point in continuing to ship it.
  Discussion: https://postgr.es/m/653936.1597431032@sss.pgh.pa.us
* Fix compilation warnings with libselinux 3.1 in contrib/sepgsql/ | Michael Paquier | 2020-08-14 | 3 | -12/+12
  Upstream SELinux has recently marked security_context_t as officially deprecated, causing warnings with -Wdeprecated-declarations. It has been considered legacy code by upstream for some time now, as security_context_t got removed from most of the code tree during the development of 2.3 back in 2014.
  This removes all the references to security_context_t in sepgsql/ to be consistent with SELinux, fixing the warnings. Note that this does not impact the minimum version of libselinux supported.
  Reviewed-by: Tom Lane
  Discussion: https://postgr.es/m/20200813012735.GC11663@paquier.xyz
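  The deprecated typedef is just a char pointer, so the change is essentially the following (an illustrative sketch with getpeercon_raw() as a representative libselinux call, not the actual sepgsql diff):

      /* before: deprecated typedef triggers -Wdeprecated-declarations */
      security_context_t context;

      /* after: spell out what the typedef has always resolved to */
      char       *context;

      if (getpeercon_raw(sock, &context) < 0)
          elog(ERROR, "SELinux: could not get peer context");
      /* ... use context ... */
      freecon(context);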
* snapshot scalability: Don't compute global horizons while building snapshots. | Andres Freund | 2020-08-12 | 3 | -15/+13
  To make GetSnapshotData() more scalable, it cannot look at each proc's xmin: while snapshot contents do not need to change whenever a read-only transaction commits or a snapshot is released, a proc's xmin is modified in those cases. The frequency of xmin modifications leads to, particularly on higher core count systems, many cache misses inside GetSnapshotData(), despite the data underlying a snapshot not changing. That is the most significant source of GetSnapshotData() scaling poorly on larger systems.
  Without accessing xmins, GetSnapshotData() cannot calculate accurate horizons / thresholds as it has so far. But we don't really have to: the horizons don't actually change that much between GetSnapshotData() calls. Nor are the horizons actually used every time a snapshot is built.
  The trick this commit introduces is to delay computation of accurate horizons until their use, and to use horizon boundaries to determine whether accurate horizons need to be computed.
  The use of RecentGlobal[Data]Xmin to decide whether a row version could be removed has been replaced with new GlobalVisTest* functions. These use two thresholds to determine whether a row can be pruned:
  1) definitely_needed, indicating that rows deleted by XIDs >= definitely_needed are definitely still visible.
  2) maybe_needed, indicating that rows deleted by XIDs < maybe_needed can definitely be removed.
  GetSnapshotData() updates definitely_needed to be the xmin of the computed snapshot. When testing whether a row can be removed (with GlobalVisTestIsRemovableXid()) and the tested XID falls in between the two (i.e. XID >= maybe_needed && XID < definitely_needed), the boundaries can be recomputed to be more accurate. As it is not cheap to compute accurate boundaries, we limit the number of times that happens in short succession. As the boundaries used by GlobalVisTestIsRemovableXid() are never reset (with maybe_needed updated by GetSnapshotData()), it is likely that further tests can benefit from an earlier computation of accurate horizons.
  To avoid regressing performance when old_snapshot_threshold is set (as that requires an accurate horizon to be computed), heap_page_prune_opt() doesn't unconditionally call TransactionIdLimitedForOldSnapshots() anymore. Both the computation of the limited horizon, and the triggering of errors (with SetOldSnapshotThresholdTimestamp()) are now only done when necessary to remove tuples.
  This commit just removes the accesses to PGXACT->xmin from GetSnapshotData(), but other members of PGXACT residing in the same cache line are accessed. Therefore this in itself does not result in a significant improvement. Subsequent commits will take advantage of the fact that GetSnapshotData() now does not need to access xmins anymore.
  Note: This contains a workaround in heap_page_prune_opt() to keep the snapshot_too_old tests working. While that workaround is ugly, the tests currently are not meaningful, and it seems best to address them separately.
  Author: Andres Freund <andres@anarazel.de>
  Reviewed-By: Robert Haas <robertmhaas@gmail.com>
  Reviewed-By: Thomas Munro <thomas.munro@gmail.com>
  Reviewed-By: David Rowley <dgrowleyml@gmail.com>
  Discussion: https://postgr.es/m/20200301083601.ews6hz5dduc3w2se@alap3.anarazel.de
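  The two-boundary test described above boils down to roughly the following. This is a simplified sketch with maybe_needed/definitely_needed modelled as plain TransactionIds and a hypothetical recompute_accurate_horizons() helper; it is not the actual GlobalVisTestIsRemovableXid() code:

      static TransactionId maybe_needed;          /* conservative lower bound */
      static TransactionId definitely_needed;     /* conservative upper bound */

      extern void recompute_accurate_horizons(void);  /* hypothetical, rate-limited */

      static bool
      xid_is_removable(TransactionId xid)
      {
          if (TransactionIdPrecedes(xid, maybe_needed))
              return true;            /* below the conservative bound: prunable */
          if (TransactionIdFollowsOrEquals(xid, definitely_needed))
              return false;           /* definitely still visible to someone */

          /* in-between: recompute accurate horizons, then retest */
          recompute_accurate_horizons();
          return TransactionIdPrecedes(xid, maybe_needed);
      }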
* Replace remaining StrNCpy() by strlcpy() | Peter Eisentraut | 2020-08-10 | 1 | -1/+1
  They are equivalent, except that StrNCpy() zero-fills the entire destination buffer instead of providing just one trailing zero. For all but a tiny number of callers, that's just overhead rather than being desirable.
  Remove StrNCpy() as it is now unused.
  In some cases, namestrcpy() is the more appropriate function to use. While we're here, simplify the API of namestrcpy(): Remove the return value, don't check for NULL input. Nothing was using that anyway. Also, remove a few unused name-related functions.
  Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
  Discussion: https://www.postgresql.org/message-id/flat/44f5e198-36f6-6cdb-7fa9-60e34784daae%402ndquadrant.com
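  The mechanical part of the replacement looks like this (illustrative only; buf, src and name are placeholders):

      char    buf[NAMEDATALEN];

      /* before: zero-fills all of buf beyond the copied string */
      StrNCpy(buf, src, sizeof(buf));

      /* after: copies and guarantees a single terminating NUL */
      strlcpy(buf, src, sizeof(buf));

      /* for NameData fields, namestrcpy() is the better fit */
      namestrcpy(&name, src);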
* Move connect.h from fe_utils to src/include/common. | Noah Misch | 2020-08-10 | 2 | -2/+2
  Any libpq client can use the header. Clients include backend components postgres_fdw, dblink, and logical replication apply worker.
  Back-patch to v10, because another fix needs this. In released branches, just copy the header and keep the original.
* Make contrib modules' installation scripts more secure. | Tom Lane | 2020-08-10 | 16 | -87/+262
  Hostile objects located within the installation-time search_path could capture references in an extension's installation or upgrade script. If the extension is being installed with superuser privileges, this opens the door to privilege escalation. While such hazards have existed all along, their urgency increases with the v13 "trusted extensions" feature, because that lets a non-superuser control the installation path for a superuser-privileged script. Therefore, make a number of changes to make such situations more secure:
  * Tweak the construction of the installation-time search_path to ensure that references to objects in pg_catalog can't be subverted; and explicitly add pg_temp to the end of the path to prevent attacks using temporary objects.
  * Disable check_function_bodies within installation/upgrade scripts, so that any security gaps in SQL-language or PL-language function bodies cannot create a risk of unwanted installation-time code execution.
  * Adjust lookup of type input/receive functions and join estimator functions to complain if there are multiple candidate functions. This prevents capture of references to functions whose signature is not the first one checked; and it's arguably more user-friendly anyway.
  * Modify various contrib upgrade scripts to ensure that catalog modification queries are executed with secure search paths. (These are in-place modifications with no extension version changes, since it is the update process itself that is at issue, not the end result.)
  Extensions that depend on other extensions cannot be made fully secure by these methods alone; therefore, revert the "trusted" marking that commit eb67623c9 applied to earthdistance and hstore_plperl, pending some better solution to that set of issues.
  Also add documentation around these issues, to help extension authors write secure installation scripts.
  Patch by me, following an observation by Andres Freund; thanks to Noah Misch for review.
  Security: CVE-2020-14350
* Remove <@ from contrib/intarray's GiST operator classes. | Tom Lane | 2020-08-08 | 5 | -2/+36
  Since commit efc77cf5f, an indexed query using <@ has required a full-index scan, so that it actually performs worse than a plain seqscan would do. As I noted at the time, we'd be better off to not treat <@ as being indexable by such indexes at all; and that's what this patch does.
  It would have been difficult to remove these opclass members without dropping the whole opclass before commit 9f9682783 fixed GiST opclass member dependency rules, but now it's quite simple, so let's do it.
  I left the existing support code in place for the time being, with comments noting it's now unreachable. At some point, perhaps we should remove that code in favor of throwing an error telling people to upgrade the extension version.
  Discussion: https://postgr.es/m/2176979.1596389859@sss.pgh.pa.us
  Discussion: https://postgr.es/m/458.1565114141@sss.pgh.pa.us
* Teach amcheck to verify sibling links in all cases. | Peter Geoghegan | 2020-08-08 | 1 | -23/+150
  Teach contrib/amcheck's bt_index_check() function to check agreement between sibling links. The left sibling's right link should point to a right sibling page whose left link points back to the same original left sibling. This extends a check that bt_index_parent_check() always performed to bt_index_check().
  This is the first time amcheck has been taught to perform buffer lock coupling, which we have explicitly avoided up until now. The sibling link check tends to catch a lot of real world index corruption with little overhead, so it seems worth accepting the complexity. Note that the new lock coupling logic would not work correctly on replica servers without the changes made by commits 0a7d771f and 9a9db08a (there could be false positives without those changes).
  Author: Andrey Borodin, Peter Geoghegan
  Discussion: https://postgr.es/m/0EB0CFA8-CBD8-4296-8049-A2C0F28FAE8C@yandex-team.ru
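  The invariant being verified can be sketched like this; read_btree_page(), targetpage and targetblock are hypothetical placeholders standing in for amcheck's own page-reading code, not the committed implementation:

      /* follow the target's right link, then check that the right sibling's
       * left link points back at the target */
      BTPageOpaque    opaque = (BTPageOpaque) PageGetSpecialPointer(targetpage);
      Page            rightpage = read_btree_page(rel, opaque->btpo_next);
      BTPageOpaque    ropaque = (BTPageOpaque) PageGetSpecialPointer(rightpage);

      if (ropaque->btpo_prev != targetblock)
          ereport(ERROR,
                  (errcode(ERRCODE_INDEX_CORRUPTED),
                   errmsg("left link/right link pair in index \"%s\" not in agreement",
                          RelationGetRelationName(rel))));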
* Fix the logical streaming test. | Amit Kapila | 2020-08-08 | 2 | -4/+4
  Commit 7259736a6e added the capability to stream changes in ReorderBuffer, which has some tests to test the streaming mode. It is quite possible that while this test is running a parallel transaction could be logged by autovacuum. Such a transaction won't perform any insert/update/delete to non-catalog tables so will be shown as an empty transaction. Fix it by skipping the empty transactions during this test.
  Per report by buildfarm.
* Implement streaming mode in ReorderBuffer. | Amit Kapila | 2020-08-08 | 6 | -1/+145
  Instead of serializing the transaction to disk after reaching the logical_decoding_work_mem limit in memory, we consume the changes we have in memory and invoke the stream API methods added by commit 45fdc9738b. However, sometimes if we have an incomplete toast or speculative insert we spill to disk because we can't generate the complete tuple and stream it. And, as soon as we get the complete tuple we stream the transaction including the serialized changes.
  We can do this incremental processing thanks to having assignments (associating subxacts with toplevel xacts) in WAL right away, and thanks to logging the invalidation messages at each command end. These features are added by commits 0bead9af48 and c55040ccd0 respectively.
  Now that we can stream in-progress transactions, concurrent aborts may cause failures when the output plugin consults catalogs (both system and user-defined). We handle such failures by returning the ERRCODE_TRANSACTION_ROLLBACK sqlerrcode from system table scan APIs to the backend or WALSender decoding a specific uncommitted transaction. The decoding logic, on receipt of such a sqlerrcode, aborts the decoding of the current transaction and continues with the decoding of other transactions.
  We have a ReorderBufferTXN pointer in each ReorderBufferChange by which we know which xact it belongs to. The output plugin can use this to decide which changes to discard in case of stream_abort_cb (e.g. when a subxact gets discarded).
  We also provide a new option via SQL APIs to fetch the changes being streamed.
  Author: Dilip Kumar, Tomas Vondra, Amit Kapila, Nikhil Sontakke
  Reviewed-by: Amit Kapila, Kuntal Ghosh, Ajin Cherian
  Tested-by: Neha Sharma, Mahendra Singh Thalor and Ajin Cherian
  Discussion: https://postgr.es/m/688b0b7f-2f6c-d827-c27b-216a8e3ea700@2ndquadrant.com
* Remove obsolete amcheck comment. | Peter Geoghegan | 2020-08-06 | 1 | -1/+0
  Oversight in commit d114cc53871.
* amcheck: Sanitize metapage's allequalimage field. | Peter Geoghegan | 2020-08-06 | 1 | -1/+13
  This will be helpful if it ever proves necessary to revoke an opclass's support for deduplication.
  Backpatch: 13-, where nbtree deduplication was introduced.
* Remove btree page items after page unlink | Alexander Korotkov | 2020-08-05 | 1 | -5/+2
  Currently, page unlink leaves the remaining items "as is", but replay of the corresponding WAL record re-initializes the page, leaving it with no items. For the sake of consistency, this commit makes the primary delete all the items during page unlink as well.
  Thanks to this change, we now don't mask the contents of deleted btree pages for WAL consistency checking.
  Discussion: https://postgr.es/m/CAPpHfdt_OTyQpXaPJcWzV2N-LNeNJseNB-K_A66qG%3DL518VTFw%40mail.gmail.com
  Author: Alexander Korotkov
  Reviewed-by: Peter Geoghegan
* Invent "amadjustmembers" AM method for validating opclass members.Tom Lane2020-08-011-0/+1
| | | | | | | | | | | | | | | | | | | | | | | | | This allows AM-specific knowledge to be applied during creation of pg_amop and pg_amproc entries. Specifically, the AM knows better than core code which entries to consider as required or optional. Giving the latter entries the appropriate sort of dependency allows them to be dropped without taking out the whole opclass or opfamily; which is something we'd like to have to correct obsolescent entries in extensions. This callback also opens the door to performing AM-specific validity checks during opclass creation, rather than hoping than an opclass developer will remember to test with "amvalidate". For the most part I've not actually added any such checks yet; that can happen in a follow-on patch. (Note that we shouldn't remove any tests from "amvalidate", as those are still needed to cross-check manually constructed entries in the initdb data. So adding tests to "amadjustmembers" will be somewhat duplicative, but it seems like a good idea anyway.) Patch by me, reviewed by Alexander Korotkov, Hamid Akhtar, and Anastasia Lubennikova. Discussion: https://postgr.es/m/4578.1565195302@sss.pgh.pa.us
* Restore lost amcheck TOAST test coverage. | Peter Geoghegan | 2020-07-31 | 2 | -8/+22
  Commit eba77534 fixed an amcheck false positive bug involving inconsistencies in TOAST input state between table and index. A test case was added that verified that such an inconsistency didn't result in a spurious corruption related error.
  Test coverage from the test was accidentally lost by commit 501e41dd, which propagated ALTER TABLE ... SET STORAGE attstorage state to indexes. This broke the test because the test specifically relied on attstorage not being propagated. This artificially forced there to be index tuples whose datums were equivalent to the datums in the heap without the datums actually being bitwise equal.
  Fix this by updating pg_attribute directly instead. Commit 501e41dd made similar changes to a test_decoding TOAST-related test case which made the same assumption, but overlooked the amcheck test case.
  Backpatch: 11-, just like commit eba77534 (and commit 501e41dd).
* Cache smgrnblocks() results in recovery. | Thomas Munro | 2020-07-31 | 1 | -1/+1
  Avoid repeatedly calling lseek(SEEK_END) during recovery by caching the size of each fork. For now, we can't use the same technique in other processes, because we lack a shared invalidation mechanism.
  Do this by generalizing the pre-existing caching used by FSM and VM to support all forks.
  Discussion: https://postgr.es/m/CAEepm%3D3SSw-Ty1DFcK%3D1rU-K6GSzYzfdD4d%2BZwapdN7dTa6%3DnQ%40mail.gmail.com
* pg_stat_statements: track number of rows processed by some utility commands. | Fujii Masao | 2020-07-29 | 3 | -1/+102
  This commit makes pg_stat_statements track the total number of rows retrieved or affected by CREATE TABLE AS, SELECT INTO, CREATE MATERIALIZED VIEW and FETCH commands.
  Suggested-by: Pascal Legrand
  Author: Fujii Masao
  Reviewed-by: Asif Rehman
  Discussion: https://postgr.es/m/1584293755198-0.post@n3.nabble.com
* Extend the logical decoding output plugin API with stream methods. | Amit Kapila | 2020-07-28 | 1 | -0/+176
  This adds seven methods to the output plugin API, adding support for streaming changes of large in-progress transactions:
  * stream_start
  * stream_stop
  * stream_abort
  * stream_commit
  * stream_change
  * stream_message
  * stream_truncate
  Most of this is a simple extension of the existing methods, with the semantic difference that the transaction (or subtransaction) is incomplete and may be aborted later (which is something the regular API does not really need to deal with).
  This also extends the 'test_decoding' plugin, implementing these new stream methods. The stream_start/stream_stop callbacks are used to demarcate a chunk of changes streamed for a particular toplevel transaction.
  This commit simply adds these new APIs; the upcoming patch to "allow the streaming mode in ReorderBuffer" will use these APIs.
  Author: Tomas Vondra, Dilip Kumar, Amit Kapila
  Reviewed-by: Amit Kapila
  Tested-by: Neha Sharma and Mahendra Singh Thalor
  Discussion: https://postgr.es/m/688b0b7f-2f6c-d827-c27b-216a8e3ea700@2ndquadrant.com
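  In a plugin's _PG_output_plugin_init(), registering the new callbacks looks roughly like this; the pg_decode_stream_* handler names are placeholders for the plugin's own functions, and the existing non-streaming callbacks are elided:

      #include "replication/output_plugin.h"

      void
      _PG_output_plugin_init(OutputPluginCallbacks *cb)
      {
          /* ... existing begin_cb / change_cb / commit_cb assignments ... */
          cb->stream_start_cb = pg_decode_stream_start;
          cb->stream_stop_cb = pg_decode_stream_stop;
          cb->stream_abort_cb = pg_decode_stream_abort;
          cb->stream_commit_cb = pg_decode_stream_commit;
          cb->stream_change_cb = pg_decode_stream_change;
          cb->stream_message_cb = pg_decode_stream_message;
          cb->stream_truncate_cb = pg_decode_stream_truncate;
      }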