Commit message log
|
In commit 40c24bfef, I forgot to use get_rule_expr_paren() for the
arguments of AT TIME ZONE, resulting in possibly not printing parens
for expressions that need it. But get_rule_expr_paren() wouldn't have
gotten it right anyway, because isSimpleNode() hadn't been taught that
COERCE_SQL_SYNTAX parent nodes don't guarantee sufficient parentheses.
Improve all that. Also use this methodology for F_IS_NORMALIZED, so
that we don't print useless parens for that.
In passing, remove a comment that was obsoleted later.
Per report from Duncan Sands. Back-patch to v14 where this code
came in. (Before that, we didn't try to print AT TIME ZONE that way,
so there was no bug, just ugliness.)
Discussion: https://postgr.es/m/f41566aa-a057-6628-4b7c-b48770ecb84a@deepbluecap.com
|
A future commit will move the checkAsUser field from RangeTblEntry
to a new node that, unlike RTEs, will only be created for tables
mentioned in the query but not for the inheritance child relations
added to the query by the planner. So, the checkAsUser value for a
given child relation will have to be obtained by referring to the
value for its ancestor mentioned in the query.
In preparation, it seems better to expand the use of RelOptInfo.userid
during planning in place of rte->checkAsUser so that there will be
fewer places to adjust for the above change.
Given that the child-to-ancestor mapping is not available during the
execution of a given "child" ForeignScan node, add a checkAsUser
field to ForeignScan to carry the child relation's RelOptInfo.userid.
Author: Amit Langote <amitlangote09@gmail.com>
Discussion: https://postgr.es/m/CA+HiwqGFCs2uq7VRKi7g+FFKbP6Ea_2_HkgZb2HPhUfaAKT3ng@mail.gmail.com
|
Previously, an idle startup (recovery) process would wake up every 5
seconds to have a chance to poll for promote_trigger_file, even if that
GUC was not configured. That promotion triggering mechanism was
effectively superseded by pg_ctl promote and pg_promote() a long time
ago. There probably aren't many users left and it's very easy to change
to the modern mechanisms, so we agreed to remove the feature.
This is part of a campaign to reduce wakeups on idle systems.
Author: Simon Riggs <simon.riggs@enterprisedb.com>
Reviewed-by: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Reviewed-by: Thomas Munro <thomas.munro@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Ian Lawrence Barwick <barwick@gmail.com>
Discussion: https://postgr.es/m/CANbhV-FsjnzVOQGBpQ589%3DnWuL1Ex0Ykn74Nh1hEjp2usZSR5g%40mail.gmail.com
|
Currently a table can only be vacuumed or analyzed by its owner or
a superuser. This can now be extended to any user by means of an
appropriate GRANT.
Nathan Bossart
Reviewed by: Bharath Rupireddy, Kyotaro Horiguchi, Stephen Frost, Robert
Haas, Mark Dilger, Tom Lane, Corey Huinker, David G. Johnston, Michael
Paquier.
Discussion: https://postgr.es/m/20220722203735.GB3996698@nathanxps13
|
The code has already been assuming in a few places that the initial
recursion nesting depth is 0, and the recent changes in hba.c (mainly
783e8c6) rely on this assumption in more places.
recursion nesting level is assumed to be 10 for hba.c and GUCs.
Author: Julien Rouhaud
Discussion: https://postgr.es/m/20221124090724.n7amf5kpdhx6vb76@jrouhaud
|
7b378237a widened AclMode to 64 bits, which resulted in 3 additional
warnings on MSVC. Here we make use of UINT64CONST to reassure the
compiler that we do intend the bit shift expression to yield a 64-bit
result.
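For illustration only (the macro below is hypothetical, not one of the
real ACL_* bits), the warning-free pattern looks like this:

    /* AclMode is now a 64-bit integer type */
    typedef uint64 AclMode;

    /* a plain "1 << n" is a 32-bit shift; spell the constant as 64-bit */
    #define ACL_SOME_PRIVILEGE   (UINT64CONST(1) << 40)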
Discussion: https://postgr.es/m/CAApHDvo=pn01Y_3zASZZqn+cotF1c4QFCwWgk6MiF0VscaE5ug@mail.gmail.com
|
pg_hba.conf and pg_ident.conf gain support for three record keywords:
- "include", to include a file.
- "include_if_exists", to include a file, ignoring it if missing.
- "include_dir", to include a directory of files. These are classified
by name (C locale, mostly) and need to be prefixed by ".conf", hence
following the same rules as GUCs.
This commit relies on the refactoring pieces done in efc9816, ad6c528,
783e8c6 and 1b73d0b, adding a small wrapper to build a list of
TokenizedAuthLines (tokenize_include_file), and the code is shaped to
offer some symmetry with what is done for GUCs with the same options.
pg_hba_file_rules and pg_ident_file_mappings gain a new field called
file_name, to track which file a record comes from, taking
advantage of the addition of rule_number in c591300 to offer an
organized view of the HBA or ident records loaded.
Bump catalog version.
Author: Julien Rouhaud
Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/20220223045959.35ipdsvbxcstrhya@jrouhaud
|
The list of TokenizedAuthLines generated at parsing for the HBA and
ident files is now stored in a static context called tokenize_context,
where all the parsed tokens are stored. This context is created
when opening the first authentication file of a HBA/ident set (hba_file
or ident_file), and is cleaned up once we are done with all the work
around it through a new routine called free_auth_file(). One call of
open_auth_file() should have one matching call of free_auth_file();
the creation and deletion of the tokenization context are controlled
by the recursion depth of the tokenization.
Rather than having tokenize_auth_file() return a memory context that
includes all the records, the tokenization logic now creates and deletes
one memory context each time this function is called. This will
simplify recursive calls to this routine for the upcoming inclusion
record logic.
While on it, rename tokenize_inc_file() to tokenize_expand_file() as
this would conflict with the upcoming patch that will add inclusion
records for HBA/ident files. An '@' file has its tokens added to an
existing list.
Reloading the HBA/ident configuration in a tight loop shows no leaks
in the tests done (with and without -DEXEC_BACKEND).
Author: Michael Paquier
Reviewed-by: Julien Rouhaud
Discussion: https://postgr.es/m/Y324HvGKiWxW2yxe@paquier.xyz
|
We've made multiple attempts at preventing get_actual_variable_range
from taking an unreasonable amount of time (3ca930fc3, fccebe421).
But there's still an issue for the very first planning attempt after
deletion of a large number of extremal-valued tuples. While that
planning attempt will set "killed" bits on the tuples it visits and
thereby reduce effort for next time, there's still a lot of work it
has to do to visit the heap and then set those bits. It's (usually?)
not worth it to do that much work at plan time to have a slightly
better estimate, especially in a context like this where the table
contents are known to be mutating rapidly.
Therefore, let's bound the amount of work to be done by giving up
after we've visited 100 heap pages. Giving up just means we'll
fall back on the extremal value recorded in pg_statistic, so it
shouldn't mean that planner estimates suddenly become worthless.
Note that this means we'll still gradually whittle down the problem
by setting a few more index "killed" bits in each planning attempt;
so eventually we'll reach a good state (barring further deletions),
even in the absence of VACUUM.
Simon Riggs, per a complaint from Jakub Wartak (with cosmetic
adjustments by me). Back-patch to all supported branches.
Discussion: https://postgr.es/m/CAKZiRmznOwi0oaV=4PHOCM4ygcH4MgSvt8=5cu_vNCfc8FSUug@mail.gmail.com
|
The lwlock wait queue scalability issue fixed in a4adc31f690 was quite hard to
find because of the exponential backoff and because we adjust spins_per_delay
over time within a backend.
To make it easier to find similar issues in the future, add a wait event for
the pg_usleep() in perform_spin_delay(). Showing a wait event while spinning
without sleeping would increase the overhead of spinlocks, which we do not
want.
We may at some later point want to have more granular wait events, but that'd
be a substantial amount of work. This provides at least some insights into
something currently hard to observe.
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Reviewed-by: Alexander Korotkov <aekorotkov@gmail.com>
Discussion: https://postgr.es/m/20221120204310.xywrhyxyytsajuuq@awork3.anarazel.de
|
I just spent an annoying amount of time reverse-engineering the
100%-undocumented API between ts_headline and the text search
parser's prsheadline function. Add some commentary about that
while it's fresh in mind. Also remove some unused macros in
wparser_def.c.
While at it, I noticed that when commit 78e73e875 added a
CHECK_FOR_INTERRUPTS call in TS_execute_recurse, it missed
doing so in the parallel function TS_phrase_execute, which
surely needs one just as much.
Back-patch because of the missing CHECK_FOR_INTERRUPTS.
Might as well back-patch the rest of this too.
|
The postmaster normally sends SIGQUIT to force-terminate its
child processes after a child crash or immediate-stop request.
If that doesn't result in child exit within a few seconds,
we follow it up with SIGKILL. This patch provides GUC flags
that allow either of these signals to be replaced with SIGABRT.
On typically-configured Unix systems, that will result in a
core dump being produced for each such child. This can be
useful for debugging problems, although it's not something you'd
want to have on in production due to the risk of disk space
bloat from lots of core files.
The old postmaster -T switch, which sent SIGSTOP in place of
SIGQUIT, is changed to be the same as send_abort_for_crash.
As far as I can tell from the code comments, the intent of
that switch was just to block things for long enough to force
core dumps manually, which seems like an unnecessary extra step.
(Maybe at the time, there was no way to get most kernels to
produce core files with per-PID names, requiring manual core
file renaming after each one. But now it's surely the hard way.)
I also took the opportunity to remove the old postmaster -n
(skip shmem reinit) switch, which hasn't actually done anything
in decades, though the documentation still claimed it did.
Discussion: https://postgr.es/m/2251016.1668797294@sss.pgh.pa.us
|
This switch impacts 9 patterns related to a SQL-mandated special syntax
for function calls:
- LOCALTIME [ ( typmod ) ]
- LOCALTIMESTAMP [ ( typmod ) ]
- CURRENT_TIME [ ( typmod ) ]
- CURRENT_TIMESTAMP [ ( typmod ) ]
- CURRENT_DATE
Five new entries are added to pg_proc to compensate for the removal
of SQLValueFunction, providing backward compatibility and making this
change transparent for the end user (for example for the attribute
generated when a keyword is specified in a SELECT or in a FROM clause
without an alias, or when specifying something other than an Iconst to
the parser).
The parser included a set of checks coming from the files in charge of
holding the C functions used for the SQLValueFunction calls (as of
transformSQLValueFunction()), which are now moved within each function's
execution path, so this reduces the dependencies between the execution
and the parsing steps. As of this change, all the SQL keywords use the
same paths for their work, relying only on COERCE_SQL_SYNTAX. Like
fb32748, no performance difference has been noticed, while the perf
profiles get reduced with ExecEvalSQLValueFunction() gone.
Bump catalog version.
Reviewed-by: Corey Huinker, Ted Yu
Discussion: https://postgr.es/m/YzaG3MoryCguUOym@paquier.xyz
|
As the list of shared relations is fixed, we can just dispatch based on
IsSharedRelation(), instead of first trying to look up stats for a non-shared
rel and falling back to shared stats.
Author: "Drouvot, Bertrand" <bertranddrouvot.pg@gmail.com>
Reviewed-by: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/8c1851a2-a98e-e1bc-7729-37b0b95f66ec@gmail.com
|
This commit changes six SQL keywords to use COERCE_SQL_SYNTAX rather
than relying on SQLValueFunction:
- CURRENT_ROLE
- CURRENT_USER
- USER
- SESSION_USER
- CURRENT_CATALOG
- CURRENT_SCHEMA
Among the six, "user", "current_role" and "current_catalog" require
specific SQL functions so that ruleutils.c can map them back to the
SQL keywords they correspond to when using COERCE_SQL_SYNTAX. Having
pg_proc.proname match with the keyword ensures that the compatibility
remains the same when projecting any of these keywords in a FROM clause
to an attribute name when an alias is not specified. This is covered by
the tests added in 2e0d80c, making sure that a correct mapping happens
with each SQL keyword. The three others (current_schema, session_user
and current_user) already have pg_proc entries for this job, so this
brings more consistency between the way such keywords are treated in the
parser, the executor and ruleutils.c.
SQLValueFunction is reduced to half its contents after this change,
simplifying its logic a bit as there is no need to enforce a C collation
anymore for the entries returning a name as a result. I have made a few
performance tests, with a million-ish calls to these keywords without
seeing a difference in run-time or in perf profiles
(ExecEvalSQLValueFunction() is removed from the profiles). The
remaining SQLValueFunctions are now related to timestamps and dates.
Bump catalog version.
Reviewed-by: Corey Huinker
Discussion: https://postgr.es/m/YzaG3MoryCguUOym@paquier.xyz
|
Similar to how the INHERIT option controls whether or not the
permissions of the granted role are automatically available to the
grantee, the new SET permission controls whether or not the grantee
may use the SET ROLE command to assume the privileges of the granted
role.
In addition, the new SET permission controls whether or not it
is possible to transfer ownership of objects to the target role
or to create new objects owned by the target role using commands
such as CREATE DATABASE .. OWNER. We could alternatively have made
this controlled by the INHERIT option, or allow it when either
option is given. An advantage of this approach is that if you
are granted a predefined role with INHERIT TRUE, SET FALSE, you
can't go and create objects owned by that role.
The underlying theory here is that the ability to create objects
as a target role is not a privilege per se, and thus does not
depend on whether you inherit the target role's privileges. However,
it's surely something you could do anyway if you could SET ROLE
to the target role, and thus making it contingent on whether you
have that ability is reasonable.
Design review by Nathan Bossart, Wolfgang Walther, Jeff Davis,
Peter Eisentraut, and Stephen Frost.
Discussion: http://postgr.es/m/CA+Tgmob+zDSRS6JXYrgq0NWdzCXuTNzT5eK54Dn2hhgt17nm8A@mail.gmail.com
|
eqjoinsel() currently makes use of MCV stats only when we have such
stats for both sides of the clause. As coded, though, it would
fetch those stats even when they're present for just one side.
This can be a bit expensive with high statistics targets, leading
to wasted effort in common cases such as joining a unique column
to a non-unique column. So it seems worth the trouble to do a quick
pre-check to confirm that both sides have MCVs before fetching either.
Also, tweak the API spec for get_attstatsslot() to document the
method we're using here.
David Geier, Tomas Vondra, Tom Lane
Discussion: https://postgr.es/m/b9846ca0-5f1c-9b26-5881-aad3f42b07f0@gmail.com
|
Commit 1cc29fe7c, which taught EXPLAIN to print PARAM_EXEC Params as
the referenced expressions, included some checks to prevent matching
Params found in SubPlans or InitPlans to NestLoopParams of upper query
levels. At the time, this seemed possibly necessary to avoid false
matches because of the planner's habit of re-using the same PARAM_EXEC
slot in multiple places in a plan. Furthermore, in the absence of
LATERAL no such reference could be valid anyway. But it's possible
now that we have LATERAL, and in the wake of 46c508fbc and 1db5667ba
I believe the false-match hazard is gone. Hence, remove the
in_same_plan_level checks. As shown in the regression test changes,
this provides a useful improvement in readability for EXPLAIN of
LATERAL-using subplans.
Richard Guo, reviewed by Greg Stark and myself
Discussion: https://postgr.es/m/CAMbWs4-YSOcQXAagJetP95cAeZPqzOy5kM5yijG0PVW5ztRb4w@mail.gmail.com
|
Author: Melanie Plageman <melanieplageman@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/CAAKRu_YSOnhKsDyFcqJsKtBSrd32DP-jjXmv7hL0BPD-z0TGXQ@mail.gmail.com
|
It was referred to as subxact array in a few places and subxip array in
others. By changing it to subxip array, we make it consistent with similar
references to xip array.
Author: Japin Li
Reviewed by: Julien Rouhaud, Richard Guo
Discussion: https://postgr.es/m/MEYP282MB1669DCE7AC193A947CED2A95B6009@MEYP282MB1669.AUSP282.PROD.OUTLOOK.COM
|
This adds a check on the recursion depth when including authentication
configuration files, something that has never been done when processing
'@' files for database and user name lists in pg_hba.conf. On HEAD,
this was leading to a rather confusing error, as of:
FATAL: exceeded maxAllocatedDescs (NN) while trying to open file "/path/blah.conf"
This refactors the code so that the error reported is now the following,
which is the same as for GUCs:
FATAL: could not open file "/path/blah.conf": maximum nesting depth exceeded
This slightly reduces the verbosity of the error message used for files
included in user and database lists, reporting only the file name of
what's failing to load, without mentioning the relative or absolute path
specified after '@' in a HBA file. The absolute path is built upon what
'@' defines anyway, so there is no actual loss of information. This
makes the future inclusion logic much simpler. A follow-up patch will
add an error context to be able to track on which line of which file the
inclusion is failing, to close the loop, providing all the information
needed to know the full chain of events.
This logic has been extracted from a larger patch written by Julien,
rewritten by me to have a unique code path calling AllocateFile() on
authentication files, and is useful on its own. This new interface
will be used later for authentication files included with
@include[_dir,_if_exists], in a follow-up patch.
Author: Michael Paquier, Julien Rouhaud
Discussion: https://www.postgresql.org/message-id/Y2xUBJ+S+Z0zbxRW@paquier.xyz
|
Instead of dozens of mostly-duplicate pg_foo_aclcheck() functions,
write one common function object_aclcheck() that can handle almost all
of them. We already have all the information we need, such as which
system catalog corresponds to which catalog table and which column is
the ACL column.
There are a few pg_foo_aclcheck() that don't work via the generic
function and have special APIs, so those stay as is.
I also changed most pg_foo_aclmask() functions to static functions,
since they are not used outside of aclchk.c.
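As an illustrative sketch of a converted call site (the schema OID
variable "nspid" is assumed), a check that used to go through
pg_namespace_aclcheck() now looks roughly like:

    AclResult   aclresult;

    aclresult = object_aclcheck(NamespaceRelationId, nspid,
                                GetUserId(), ACL_USAGE);
    if (aclresult != ACLCHECK_OK)
        aclcheck_error(aclresult, OBJECT_SCHEMA, get_namespace_name(nspid));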
Reviewed-by: Corey Huinker <corey.huinker@gmail.com>
Reviewed-by: Antonin Houska <ah@cybertec.at>
Discussion: https://www.postgresql.org/message-id/flat/95c30f96-4060-2f48-98b5-a4392d3b6066@enterprisedb.com
|
Instead of dozens of mostly-duplicate pg_foo_ownercheck() functions,
write one common function object_ownercheck() that can handle almost
all of them. We already have all the information we need, such as
which system catalog corresponds to which catalog table and which
column is the owner column.
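A hedged sketch of the resulting call pattern (the relation OID
variable "relid" is assumed for illustration):

    if (!object_ownercheck(RelationRelationId, relid, GetUserId()))
        aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_TABLE,
                       get_rel_name(relid));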
Reviewed-by: Corey Huinker <corey.huinker@gmail.com>
Reviewed-by: Antonin Houska <ah@cybertec.at>
Discussion: https://www.postgresql.org/message-id/flat/95c30f96-4060-2f48-98b5-a4392d3b6066@enterprisedb.com
|
These zero out the space added by repalloc. This is a common pattern
that is quite hairy to code by hand.
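A minimal sketch of the pattern this replaces (the array, its element
type and the counts are assumed names for illustration):

    /* grow "items" from old_n to new_n entries; the new tail reads as zeroes */
    items = repalloc0_array(items, MyEntry, old_n, new_n);

    /* equivalent spelling without the array helper */
    items = repalloc0(items, old_n * sizeof(MyEntry), new_n * sizeof(MyEntry));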
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://www.postgresql.org/message-id/b66dfc89-9365-cb57-4e1f-b7d31813eeec@enterprisedb.com
|
The code building an absolute path to a file included with a '@'
prefix in authentication files, for user and database lists, uses the
same logic as for GUCs, except that it has no need to know about
DataDir as there is always a calling file to rely on when building the
base directory path. The refactoring done in a1a7bb8 makes this move
straightforward, and
unifies the code used for GUCs and authentication files, and the
intention is to rely also on that for the upcoming patch to be able to
include full files from HBA or ident files.
Note that this gets rid of an inconsistency introduced in 370f909, that
copied the logic coming from GUCs but applied it for files included in
authentication files, where the result buffer given to
join_path_components() must have a size of MAXPGPATH. Based on a
double-check of the existing code, all the other callers of
join_path_components() already do that, except the code path changed
here.
Discussion: https://postgr.es/m/Y2igk7q8OMpg+Yta@paquier.xyz
|
A NULL result should be reported when a stats timestamp is set to 0, but
c037471 missed that, leading to a confusing timestamp value after,
for example, a DML on a freshly-created relation with no scans done on it
yet.
This impacted the following attributes for two system views:
- pg_stat_all_tables.last_idx_scan
- pg_stat_all_tables.last_seq_scan
- pg_stat_all_indexes.last_idx_scan
Reported-by: Robert Treat
Analyzed-by: Peter Eisentraut
Author: Dave Page
Discussion: https://postgr.es/m/CABV9wwPzMfSaz3EfKXXDxKmMprbxwF5r6WPuxqA=5mzRUqfTGg@mail.gmail.com
|
The code in charge of listing and classifying a set of configuration
files in a directory was located in guc-file.l, being used currently for
GUCs under "include_dir". This code is planned to be used for an
upcoming feature able to include configuration files for ident and HBA
files from a directory, similarly to GUCs. In both cases, the file
names, suffixed by ".conf", have to be ordered alphabetically. This
logic is moved to a new file, called conffiles.c, so that it is easier to
share this facility between GUCs and the HBA/ident parsing logic.
Author: Julien Rouhaud, Michael Paquier
Discussion: https://postgr.es/m/Y2IgaH5YzIq2b+iR@paquier.xyz
|
In the wake of bfb9dfd93, there are no longer any stat() calls in
guc-file.l, but the work leading to dac048f71 did not get the memo.
Noted by Michael Paquier
Discussion: https://www.postgresql.org/message-id/Y2OosGi1Xh9x/lEn%40paquier.xyz
|
This has little practical value, but there's no reason to let the
partition strategy names travel through DDL as strings.
Reviewed-by: Japin Li <japinli@hotmail.com>
Discussion: https://postgr.es/m/20221021093216.ffupd7epy2mytkux@alvherre.pgsql
|
Oversight in dac048f71eb
Michael Paquier
Reviewed by Julien Rouhaud
Discussion: https://www.postgresql.org/message-id/Y2IATvRGo347Lvd1%40paquier.xyz
|
We have various places that need to keep track of the number of items
stored in a dlist. Traditionally, this has been done by maintaining a
counter variable in the calling code. Here we tidy this up
by adding "dclist", which is very similar to dlist but also keeps track of
the number of items stored in the list.
Callers may use the new dclist_count() function when they need to know how
many items are stored. Obtaining the count is an O(1) operation.
For simplicity, dclist and dlist both use dlist_node as their node
type and dlist_iter/dlist_mutable_iter as their iterator type. dclists
have all of the same functionality as dlists except there is no function
named dclist_delete(). To remove an item from a list dclist_delete_from()
must be used. This requires knowing which dclist the given item is stored
in.
Additionally, here we also convert some dlists where additional code
exists to keep track of the number of items stored and to make these use
dclists instead.
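A small usage sketch (the struct and the "item" pointer below are
hypothetical, purely for illustration):

    typedef struct MyItem
    {
        dlist_node  node;       /* embedded link, same node type as dlist */
        int         value;
    } MyItem;

    dclist_head mylist;

    dclist_init(&mylist);
    dclist_push_tail(&mylist, &item->node);    /* "item" is a MyItem * */

    /* O(1); no hand-maintained counter required */
    elog(DEBUG1, "%u item(s) in list", dclist_count(&mylist));

    /* removal must name the list so the counter can be maintained */
    dclist_delete_from(&mylist, &item->node);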
Author: David Rowley
Reviewed-by: Bharath Rupireddy, Aleksander Alekseev
Discussion: https://postgr.es/m/CAApHDvrtVxr+FXEX0VbViCFKDGxA3tWDgw9oFewNXCJMmwLjLg@mail.gmail.com
|
Some cases would result in "cache lookup failed for statistics object",
due to trying to fetch inherited statistics when only non-inherited
ones are available or vice versa.
Richard Guo and Justin Pryzby
Discussion: https://postgr.es/m/20221030170520.GM16921@telsasoft.com
|
This commit adds a function to perform a cross-check between the initial
value of the C declaration associated to a GUC and its actual boot
value in assert-enabled builds. The purpose of this is to prevent
anybody reading these C declarations from being fooled by mismatched
values before they are loaded at program startup.
The following rules apply depending on the GUC type:
* bool - can be false, or same as boot_val.
* int - can be 0, or same as the boot_val.
* real - can be 0.0, or same as the boot_val.
* string - can be NULL, or strcmp'd equal to the boot_val.
* enum - equal to the boot_val.
This is done for the system GUCs as well as custom GUCs loaded by
external modules, which may require extension developers to adapt the
C declaration of the variables used by these GUCs (testing this change
with some of my own modules has allowed me to catch some stupid typos,
FWIW). This may end up being a bad experiment depending on the
feedback received, but let's see how it goes.
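For a custom integer GUC, for example, the following declaration
pattern satisfies the cross-check (all names here are hypothetical;
this is a sketch, not code from the commit):

    /* C initializer must be 0 or match the boot value given below */
    static int  my_module_fanout = 32;

    /* typically called from the module's _PG_init() */
    DefineCustomIntVariable("my_module.fanout",
                            "Illustrative custom GUC.",
                            NULL,
                            &my_module_fanout,
                            32,         /* boot value, matching the initializer */
                            1, 1024,
                            PGC_SUSET,
                            0,
                            NULL, NULL, NULL);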
Author: Peter Smith
Reviewed-by: Nathan Bossart, Tom Lane, Michael Paquier, Justin Pryzby
Discussion: https://postgr.es/m/CAHut+PtHE0XSfjjRQ6D4v7+dqzCw=d+1a64ujra4EX8aoc_Z+w@mail.gmail.com
|
This is similar to 7d25958, and this commit takes care of all the
remaining inconsistencies between the initial value used in the C
variable associated to a GUC and its default value stored in the GUC
tables (as of pg_settings.boot_val).
Some of the initial values of the GUCs updated rely on a compile-time
default. These are refactored so that the GUC table and its C
declaration use the same values. This makes everything consistent
with other places; backend_flush_after, bgwriter_flush_after, port and
checkpoint_flush_after already do so, for example.
Extracted from a larger patch by Peter Smith. The spots updated in the
modules are from me.
Author: Peter Smith, Michael Paquier
Reviewed-by: Nathan Bossart, Tom Lane, Justin Pryzby
Discussion: https://postgr.es/m/CAHut+PtHE0XSfjjRQ6D4v7+dqzCw=d+1a64ujra4EX8aoc_Z+w@mail.gmail.com
|
These don't offer anything over plain Assert, and their usage had
already been declared obsolescent.
Author: Nathan Bossart <nathandbossart@gmail.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://www.postgresql.org/message-id/20221009210148.GA900071@nathanxps13
|
Here we add a new 'copy' parameter to tuplesort_getdatum so that we can
instruct the function not to datumCopy() byref Datums before returning.
Similar to 91e9e89dc, this can provide significant performance
improvements in nodeSort when sorting by a single byref column and the
sort's targetlist contains only that column.
This allows us to re-enable Datum sorts for byref types which was disabled
in 3a5817695 due to a reported memory leak.
Additionally, here we slightly optimize DISTINCT aggregates so that we no
longer perform any datumCopy() when we find the current value not to be
distinct from the previous value. Previously the code would always take a
copy of the most recent Datum and pfree the previous value, even when the
values were the same. Testing shows a small but noticeable performance
increase when aggregate transitions are skipped due to the current
transition value being the same as the prior one.
Author: David Rowley
Discussion: https://postgr.es/m/CAApHDvqS6wC5U==k9Hd26E4EQXH3QR67-T4=Q1rQ36NGvjfVSg@mail.gmail.com
Discussion: https://postgr.es/m/CAApHDvqHonfe9G1cVaKeHbDx70R_zCrM3qP2AGXpGrieSKGnhA@mail.gmail.com
|
These numbers are strictly-monotone identifiers assigned to each rule
of pg_hba_file_rules and each map of pg_ident_file_mappings when loading
the HBA and ident configuration files, indicating the order in which
they are checked at authentication time, until a match is found.
With only one file loaded currently, this is equivalent to the line
numbers assigned to the entries loaded if one wants to know their order,
but this becomes mandatory once the inclusion of external files is
added to the HBA and ident files to be able to know in which order the
rules and/or maps are applied at authentication. Note that NULL is used
when a HBA or ident entry cannot be parsed or validated, i.e. when an
error exists, unlike the line number.
Bump catalog version.
Author: Julien Rouhaud
Discussion: https://postgr.es/m/20220223045959.35ipdsvbxcstrhya@jrouhaud
|
This makes the choice of result scale of numeric power() for integer
exponents consistent with the choice for non-integer exponents, and
with the result scale of other numeric functions. Specifically, the
result scale will be at least as large as the scale of either input,
and sufficient to ensure that the result has at least 16 significant
digits.
Formerly, the result scale was based only on the scale of the first
input, without taking into account the weight of the result. For
results with negative weight, that could lead to results with very few
or even no non-zero significant digits (e.g., 10.0 ^ (-18) produced
0.0000000000000000).
Fix this by moving responsibility for the choice of result scale into
power_var_int(), which already has code to estimate the result weight.
Per report by Adrian Klaver and suggested fix by Tom Lane.
No back-patch -- arguably this is a bug fix, but one which is easy to
work around, so it doesn't seem worth the risk of changing query
results in stable branches.
Discussion: https://postgr.es/m/12a40226-70ac-3a3b-3d3a-fdaf9e32d312%40aklaver.com
|
AuthToken gains a regular expression, and IdentLine is changed so that
it uses an AuthToken rather than separately tracking the ident user string
used for the regex compilation and its generated regex_t. In the case
of pg_ident.conf, a set of AuthTokens is built in the pre-parsing phase
of the file, and an extra regular expression is compiled when building
the list of IdentLines, after checking the sanity of the fields in a
pre-parsed entry.
The logic in charge of computing and executing regular expressions is
now done in a new set of routines called respectively
regcomp_auth_token() and regexec_auth_token() that are wrappers around
pg_regcomp() and pg_regexec(), working on AuthTokens. While on it, this
patch adds a routine able to free an AuthToken, free_auth_token(), to
simplify a bit the logic around the requirement of using a specific free
routine for computed regular expressions. Note that there are no
functional or behavior changes introduced by this commit.
The goal of this patch is to ease the use of regular expressions with
more items of pg_hba.conf (user list, database list, potentially
hostnames) where AuthTokens are used extensively. This will be tackled
later in a separate patch.
Author: Bertrand Drouvot, Michael Paquier
Discussion: https://postgr.es/m/fff0d7c1-8ad4-76a1-9db3-0ab6ec338bf7@amazon.com
|
These routines have been renamed in a19e5ce. There is no need to keep
the compatibility declarations on HEAD, as once an extension moves to
the new routine name when compiling with v16~ the code would work the
same way when recompiled on v15. No backpatch to v15 for this one,
because ABI compatibility has to be maintained there.
Discussion: https://postgr.es/m/20221013194820.ciktb2sbbpw7cljm@awork3.anarazel.de
|
Per discussion, the existing routine name able to initialize a SRF
function with materialize mode is unpopular, so rename it. Equally, the
flags of this function are renamed, as of:
- SRF_SINGLE_USE_EXPECTED -> MAT_SRF_USE_EXPECTED_DESC
- SRF_SINGLE_BLESS -> MAT_SRF_BLESS
The previous function and flags introduced in 9e98583 are kept around
for compatibility purposes, so as any extension code already compiled
with v15 continues to work as-is. The declarations introduced here for
compatibility will be removed from HEAD in a follow-up commit.
The new names have been suggested by Andres Freund and Melanie
Plageman.
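With the renamed routine (InitMaterializedSRF(), previously
SetSingleFuncCall()), a materialize-mode SRF reads roughly like this
sketch, assuming a function declared as RETURNS SETOF integer:

    PG_FUNCTION_INFO_V1(my_srf);

    Datum
    my_srf(PG_FUNCTION_ARGS)
    {
        ReturnSetInfo *rsinfo = (ReturnSetInfo *) fcinfo->resultinfo;
        Datum       values[1] = {Int32GetDatum(42)};
        bool        nulls[1] = {false};

        /* pass MAT_SRF_USE_EXPECTED_DESC and/or MAT_SRF_BLESS if needed */
        InitMaterializedSRF(fcinfo, 0);

        tuplestore_putvalues(rsinfo->setResult, rsinfo->setDesc, values, nulls);
        return (Datum) 0;
    }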
Discussion: https://postgr.es/m/20221013194820.ciktb2sbbpw7cljm@awork3.anarazel.de
Backpatch-through: 15
|
If the non-recursive term of a SEARCH BREADTH FIRST recursive
query has only constants in its target list, the planner will
fold the starting RowExpr added by rewrite into a simple Const
of type RECORD. The executor doesn't have any problem with
that --- but EXPLAIN VERBOSE will encounter the Const as the
ultimate source of truth about what the field names of the
SET column are, and it didn't know what to do with that.
Fortunately, we can pull the identifying typmod out of the
Const, in much the same way that record_out would.
For reasons that remain a bit obscure to me, this only fails
with SEARCH BREADTH FIRST, not SEARCH DEPTH FIRST or CYCLE.
But I added regression test cases for both of those options
too, just to make sure we don't break it in future.
Per bug #17644 from Matthijs van der Vleuten. Back-patch
to v14 where these constructs were added.
Discussion: https://postgr.es/m/17644-3bd1f3036d6d7a16@postgresql.org
|
It can be useful to know when a relation has last been used, e.g., when
evaluating whether an index is still required. It was already possible to
infer the time of the last usage by tracking, e.g.,
pg_stat_all_indexes.idx_scan over time. But far from everybody does so.
To make it easier to detect the last time a relation has been scanned, track
that time in each relation's pgstat entry. To minimize overhead a) the
timestamp is updated only when the backend pending stats entry is flushed to
shared stats b) the last transaction's stop timestamp is used as the
timestamp.
Bumps catalog and stats format versions.
Author: Dave Page <dpage@pgadmin.org>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Bruce Momjian <bruce@momjian.us>
Reviewed-by: Vik Fearing <vik@postgresfriends.org>
Discussion: https://postgr.es/m/CA+OCxozrVHNFVEPkweUHMZje+t1tfY816d9MZYc6eZwOOusOaQ@mail.gmail.com
|
The previous patch made addition of new GUCs cheap, but other GUC
operations aren't improved and indeed get a bit slower, because
hash_seq_search() is slower than just scanning a pointer array.
However, most performance-critical GUC operations only need
to touch a relatively small fraction of the GUCs; especially
so for AtEOXact_GUC(). We can improve matters at the cost
of a bit more space by adding dlist or slist links to the
GUC data structures. This patch invents lists that track
(1) all GUCs with non-default "source";
(2) all GUCs with nonempty state stack (implying they've
been changed in the current transaction);
(3) all GUCs due for reporting to the client.
All of guc.c's performance-critical cases can make use of one or
another of these lists to avoid searching the whole hash table.
In particular, the stack list means that transaction end
doesn't take time proportional to the number of GUCs, but
only to the number changed in the current transaction.
Discussion: https://postgr.es/m/2982579.1662416866@sss.pgh.pa.us
|
This gets rid of bsearch() in favor of hashed lookup. The main
advantage is that it becomes far cheaper to add new GUCs, since
we needn't re-sort the pointer array. Adding N new GUCs had
been O(N^2 log N), but now it's closer to O(N). We need to
sort only in SHOW ALL and equivalent functions, which are
hopefully not performance-critical to anybody.
Also, merge GetNumConfigOptions() into get_guc_variables(),
because in a world where the set of GUCs isn't fairly static
you really want to consider those two results as tied together
not independent.
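A sketch of what a caller wanting the full array now looks like
(assuming the count is returned through an output parameter, per the
merge described above):

    int         num_guc;
    struct config_generic **guc_vars = get_guc_variables(&num_guc);

    for (int i = 0; i < num_guc; i++)
        elog(DEBUG1, "GUC: %s", guc_vars[i]->name);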
Discussion: https://postgr.es/m/2982579.1662416866@sss.pgh.pa.us
|
The only real argument for using malloc directly was that we needed
the ability to not throw error on OOM; but mcxt.c grew that feature
awhile ago.
Keeping the data in a memory context improves accountability and
debuggability --- for example, without this it's almost impossible
to detect memory leaks in the GUC code with anything less costly
than valgrind. Moreover, the next patch in this series will add a
hash table for GUC lookup, and it'd be pretty silly to be using
palloc-dependent hash facilities alongside malloc'd storage of the
underlying data.
This is a bit invasive though, in particular causing an API break
for GUC check hooks that want to modify the GUC's value or use an
"extra" data structure. They must now use guc_malloc() and
guc_free() instead of malloc() and free(). Failure to change
affected code will result in assertion failures or worse; but
thanks to recent effort in the mcxt infrastructure, it shouldn't
be too hard to diagnose such oversights (at least in assert-enabled
builds).
One note is that this changes ParseLongOption() to return short-lived
palloc'd not malloc'd data. There wasn't any caller for which the
previous definition was better.
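A minimal sketch of an affected check hook for a custom string GUC
(the hook name and the computed value are made up for illustration):

    static bool
    check_my_setting(char **newval, void **extra, GucSource source)
    {
        int        *myextra;

        /* anything stored in *extra must now come from guc_malloc() */
        myextra = (int *) guc_malloc(LOG, sizeof(int));
        if (myextra == NULL)
            return false;       /* out of memory */

        *myextra = 42;          /* some precomputed form of *newval */
        *extra = myextra;       /* guc.c will eventually guc_free() this */
        return true;
    }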
Discussion: https://postgr.es/m/2982579.1662416866@sss.pgh.pa.us
|
We lack a version of repalloc() that supports MCXT_ALLOC_NO_OOM
semantics, so invent repalloc_extended() with the usual set of
flags. repalloc_huge() becomes a legacy wrapper for that.
Also, fix dynahash.c so that it can support HASH_ENTER_NULL
requests when using the default palloc-based allocator.
The only reason it didn't do that already was the lack of the
MCXT_ALLOC_NO_OOM option when that code was written, ages ago.
While here, simplify a few overcomplicated tests in mcxt.c.
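A short sketch of the no-OOM usage (buffer and size variables are
assumed):

    char   *newbuf;

    newbuf = (char *) repalloc_extended(buf, newsize, MCXT_ALLOC_NO_OOM);
    if (newbuf == NULL)
    {
        /* allocation failed: keep using the old, smaller buffer */
    }
    else
        buf = newbuf;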
Discussion: https://postgr.es/m/2982579.1662416866@sss.pgh.pa.us
|
Before commit c6e0fe1f2, functions such as AllocSetFree could pretty
safely presume that they were given a valid chunk pointer for their
own type of context, because the indirect call through a memory
context object and method struct would be very unlikely to work
otherwise. But now, if pfree() is mistakenly invoked on a pointer
to garbage, we have three chances in eight of ending up at one of
these functions. That means we need to take extra measures to
verify that we are looking at what we're supposed to be looking at,
especially in debug builds.
Hence, add code to verify that the chunk's back-link to a block header
leads to a memory context object that satisfies the right sort of
IsA() check. This is still a bit weaker than what we did before,
but for the moment assume that an IsA() check is sufficient.
As a compromise between speed and safety, implement these checks
as Asserts when dealing with small chunks but plain test-and-elogs
when dealing with large (external) chunks. The latter case should
not be too performance-critical, but the former case probably is.
In slab.c, all chunks are small; but nonetheless use a plain test
in SlabRealloc, because that is certainly not performance-critical,
indeed we should be suspicious that it's being called in error.
In aset.c, additionally add some assertions that the "value" field
of the chunk header is within the small range allowed for freelist
indexes. Without that, we might find ourselves trying to wipe
most of memory when CLOBBER_FREED_MEMORY is enabled, or scribbling
on a "freelist header" that's far away from the context object.
Eventually, field experience might show us that it's smarter for
these tests to be active always, but for now we'll try to get
away with just having them as assertions.
While at it, also be more uniform about asserting that context
objects passed as parameters are of the type we expect. Some
places missed that altogether, and slab.c was for no very good
reason doing it differently from the other allocators.
Discussion: https://postgr.es/m/3578387.1665244345@sss.pgh.pa.us
|
Remove the Trap and TrapMacro macros, which were nearly unused
and confusingly had the opposite condition polarity from the
otherwise-functionally-equivalent Assert macros.
Having done that, it's very hard to justify carrying the errorType
argument of ExceptionalCondition, so drop that too, and just
let it assume everything's an Assert. This saves about 64K
of code space as of current HEAD.
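An illustrative before/after of the polarity difference mentioned
above:

    /* before: Trap fired when its condition was TRUE */
    Trap(ptr == NULL, "BadArgument");

    /* after: Assert fires when its condition is FALSE */
    Assert(ptr != NULL);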
Discussion: https://postgr.es/m/3928703.1665345117@sss.pgh.pa.us