path: root/src/backend/utils/adt/selfuncs.c
...
* Modify prefix_selectivity() so that it will never estimate the selectivity
  of the generated range condition var >= 'foo' AND var < 'fop' as being less than what eqsel() would estimate for var = 'foo'. This is intuitively reasonable and it gets rid of the need for some entirely ad-hoc coding we formerly used to reject bogus estimates. The basic problem here is that if the prefix is more than a few characters long, the two boundary values are too close together to be distinguishable by comparison to the column histogram, resulting in a selectivity estimate of zero, which is often not very sane. Change motivated by an example from Peter Eisentraut. Arguably this is a bug fix, but I'll refrain from back-patching it for the moment.
  [Tom Lane, 2008-03-08, 1 file changed, -150/+215]
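  A minimal sketch of the clamp described above, with a hypothetical helper name
  (this is not the actual selfuncs.c code): the estimate for the derived range
  condition is never allowed to fall below the equality estimate for the prefix.

      static double
      clamp_prefix_selectivity(double range_sel, double eq_sel)
      {
          /*
           * If the histogram cannot tell 'foo' and 'fop' apart, range_sel can
           * come out near zero; fall back to the equality estimate instead.
           */
          return (range_sel < eq_sel) ? eq_sel : range_sel;
      }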
* Update copyrights in source tree to 2008.
  [Bruce Momjian, 2008-01-01, 1 file changed, -2/+2]
* Fix mergejoin cost estimation so that we consider the statistical ranges of
  the two join variables at both ends: not only trailing rows that need not be scanned because there cannot be a match on the other side, but initial rows that will be scanned without possibly having a match. This allows a more realistic estimate of startup cost to be made, per recent pgsql-performance discussion. In passing, fix a couple of bugs that had crept into mergejoinscansel: it was not quite up to speed for the task of estimating descending-order scans, which is a new requirement in 8.3.
  [Tom Lane, 2007-12-08, 1 file changed, -111/+259]
* Re-run pgindent with updated list of typedefs. (Updated README should
  avoid this problem in the future.)
  [Bruce Momjian, 2007-11-15, 1 file changed, -3/+2]
* pgindent run for 8.3.
  [Bruce Momjian, 2007-11-15, 1 file changed, -46/+49]
* Second pass at improving LIKE/regex estimation in non-C locales. It turns
  out that it's actually quite likely that a string that is an extension of the given prefix will sort as larger than the "greater" string our previous code created. To provide some defense against that, do the comparisons against a modified string instead of just the bare prefix. We tack on "Z", "z", "y", or "9", whichever is seen as largest in the current locale. Testing suggests that this is sufficient at least for cases involving ASCII data.
  [Tom Lane, 2007-11-09, 1 file changed, -8/+55]
* Improve the performance of LIKE/regex estimation in non-C locales, by making
  make_greater_string() try harder to generate a string that's actually greater than its input string. Before we just assumed that making a string that was memcmp-greater was enough, but it is easy to generate examples where this is not so when the locale is not C. Instead, loop until the relevant comparison function agrees that the generated string is greater than the input. Unfortunately this is probably not enough to guarantee that the generated string is greater than all extensions of the input, so we cannot relax the restriction to C locale for the LIKE/regex index optimization. But it should at least improve the odds of getting a useful selectivity estimate in prefix_selectivity(). Per example from Guillaume Smet. Backpatch to 8.1, mainly because that's what the complainant is using...
  [Tom Lane, 2007-11-07, 1 file changed, -19/+33]
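  A sketch of the "loop until the comparison agrees" idea described above. The
  real make_greater_string() works on text datums with a locale-aware comparator;
  here a caller-supplied cmp() stands in for that, and the helper name is
  illustrative.

      #include <stdlib.h>
      #include <string.h>

      static char *
      make_greater_string_sketch(const char *prefix,
                                 int (*cmp)(const char *, const char *))
      {
          char   *work = strdup(prefix);
          size_t  len;

          if (work == NULL)
              return NULL;
          for (len = strlen(work); len > 0; work[--len] = '\0')
          {
              /* bump the last byte until cmp() says the result sorts higher */
              while ((unsigned char) work[len - 1] < 0xFF)
              {
                  work[len - 1] = (char) ((unsigned char) work[len - 1] + 1);
                  if (cmp(work, prefix) > 0)
                      return work;
              }
              /* no luck at this position: truncate one byte and retry */
          }
          free(work);
          return NULL;    /* could not produce a greater string */
      }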
* Fix patternsel() and callers to do the right thing for NOT LIKE and the other
  negated-match operators. patternsel had been using the supplied operator as though it were a positive-match operator, and thus obtaining a wrong result, which was even more wrong after the caller subtracted it from 1. Seems cleanest to give patternsel an explicit "negate" argument so that it knows what's going on. Also install the same factorization scheme for pattern join selectivity estimators; even though they are just stubs at the moment, this may keep someone from making the same type of mistake when they get filled out. Per report from Greg Mullane. Backpatch to 8.2 --- previous releases do not show the problem because patternsel() doesn't actually use the operator directly.
  [Tom Lane, 2007-11-07, 1 file changed, -57/+52]
* Apply a band-aid fix for the problem that 8.2 and up completely misestimate
  the number of rows likely to be produced by a query such as
    SELECT * FROM t1 LEFT JOIN t2 USING (key) WHERE t2.key IS NULL;
  What this is doing is selecting for t1 rows with no match in t2, and thus it may produce a significant number of rows even if the t2.key table column contains no nulls at all. 8.2 thinks the table column's null fraction is relevant and thus may estimate no rows out, which results in terrible plans if there are more joins above this one. A proper fix for this will involve passing much more information about the context of a clause to the selectivity estimator functions than we ever have. There's no time left to write such a patch for 8.3, and it wouldn't be back-patchable into 8.2 anyway. Instead, put in an ad-hoc test to defeat the normal table-stats-based estimation when an IS NULL test is evaluated at an outer join, and just use a constant estimate instead --- I went with 0.5 for lack of a better idea. This won't catch every case but it will catch the typical ways of writing such queries, and it seems unlikely to make things worse for other queries.
  [Tom Lane, 2007-08-31, 1 file changed, -2/+15]
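  A sketch of the band-aid described above, with illustrative names (not the
  actual implementation): when the IS NULL test is applied above an outer join,
  the column's stored null fraction is ignored in favor of the constant guess.

      #include <stdbool.h>

      static double
      nulltest_isnull_sel_sketch(bool above_outer_join, double stats_nullfrac)
      {
          if (above_outer_join)
              return 0.5;            /* constant estimate; stats would mislead */
          return stats_nullfrac;     /* normal case: trust the table statistics */
      }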
* Tsearch2 functionality migrates to core. The bulk of this work is by
  Oleg Bartunov and Teodor Sigaev, but I did a lot of editorializing, so anything that's broken is probably my fault. Documentation is nonexistent as yet, but let's land the patch so we can get some portability testing done.
  [Tom Lane, 2007-08-21, 1 file changed, -1/+5]
* Check return code from strxfrm on Windows since it has a
  non-standard way of indicating errors, so we don't try to allocate INT_MAX bytes to store a result in.
  [Magnus Hagander, 2007-05-05, 1 file changed, -1/+14]
* Some further performance tweaks for planning large inheritance trees that
  are mostly excluded by constraints: do the CE test a bit earlier to save some adjust_appendrel_attrs() work on excluded children, and arrange to use array indexing rather than rt_fetch() to fetch RTEs in the main body of the planner. The latter is something I'd wanted to do for awhile anyway, but seeing list_nth_cell() as 35% of the runtime gets one's attention.
  [Tom Lane, 2007-04-21, 1 file changed, -2/+2]
* Make 'col IS NULL' clauses be indexable conditions.
  Teodor Sigaev, with some kibitzing from Tom Lane.
  [Tom Lane, 2007-04-06, 1 file changed, -7/+32]
* Fix array coercion expressions to ensure that the correct volatility is
  seen by code inspecting the expression. The best way to do this seems to be to drop the original representation as a function invocation, and instead make a special expression node type that represents applying the element-type coercion function to each array element. In this way the element function is exposed and will be checked for volatility. Per report from Guillaume Smet.
  [Tom Lane, 2007-03-27, 1 file changed, -39/+11]
* Fix some problems with selectivity estimation for partial indexes.
  First, genericcostestimate() was being way too liberal about including partial-index conditions in its selectivity estimate, resulting in substantial underestimates for situations such as an indexqual "x = 42" used with an index on x "WHERE x >= 40 AND x < 50". While the code is intentionally set up to favor selecting partial indexes when available, this was too much... Second, choose_bitmap_and() was likewise easily fooled by cases of this type, since it would similarly think that the partial index had selectivity independent of the indexqual. Fixed by using predicate_implied_by() rather than simple equality checks to determine redundancy. This is a good deal more expensive but I don't see much alternative. At least the extra cost is only paid when there's actually a partial index under consideration. Per report from Jeff Davis. I'm not going to risk back-patching this, though.
  [Tom Lane, 2007-03-21, 1 file changed, -24/+27]
* Fix up the remaining places where the expression node structure would lose
  available information about the typmod of an expression; namely, Const, ArrayRef, ArrayExpr, and EXPR and ARRAY SubLinks. In the ArrayExpr and SubLink cases it wasn't really the data structure's fault, but exprTypmod() being lazy. This seems like a good idea in view of the expected increase in typmod usage from Teodor's work to allow user-defined types to have typmods. In particular this responds to the concerns we had about eliminating the special-purpose hack that exprTypmod() used to have for BPCHAR Consts. We can now tell whether or not such a Const has been cast to a specific length, and report or display properly if so. initdb forced due to changes in stored rules.
  [Tom Lane, 2007-03-17, 1 file changed, -3/+5]
* Replace direct assignments to VARATT_SIZEP(x) with SET_VARSIZE(x, len).
  Get rid of VARATT_SIZE and VARATT_DATA, which were simply redundant with VARSIZE and VARDATA, and as a consequence almost no code was using the longer names. Rename the length fields of struct varlena and various derived structures to catch anyplace that was accessing them directly; and clean up various places so caught. In itself this patch doesn't change any behavior at all, but it is necessary infrastructure if we hope to play any games with the representation of varlena headers. Greg Stark and Tom Lane
  [Tom Lane, 2007-02-27, 1 file changed, -2/+2]
* Turn the rangetable used by the executor into a flat list, and avoid storing
  useless substructure for its RangeTblEntry nodes. (I chose to keep using the same struct node type and just zero out the link fields for unneeded info, rather than making a separate ExecRangeTblEntry type --- it seemed too fragile to have two different rangetable representations.) Along the way, put subplans into a list in the toplevel PlannedStmt node, and have SubPlan nodes refer to them by list index instead of direct pointers. Vadim wanted to do that years ago, but I never understood what he was on about until now. It makes things a *whole* lot more robust, because we can stop worrying about duplicate processing of subplans during expression tree traversals. That's been a constant source of bugs, and it's finally gone. There are some consequent simplifications yet to be made, like not using a separate EState for subplans in the executor, but I'll tackle that later.
  [Tom Lane, 2007-02-22, 1 file changed, -2/+2]
* Get rid of some old and crufty global variables in the planner. When
  this code was last gone over, there wasn't really any alternative to globals because we didn't have the PlannerInfo struct being passed all through the planner code. Now that we do, we can restructure things to avoid non-reentrancy. I'm fooling with this because otherwise I'd have had to add another global variable for the planned compact range table list.
  [Tom Lane, 2007-02-19, 1 file changed, -3/+3]
* Revert gincostestimate changes.
  [Teodor Sigaev, 2007-01-31, 1 file changed, -115/+1]
* Allow GIN's extractQuery method to signal that nothing can satisfy the query.
  In this case extractQuery should return -1 as nentries. This changes the prototype of the extractQuery method to use int32* instead of uint32* for the nentries argument. Based on that, gincostestimate can recognize two corner cases: nothing will be found, or a seqscan should be used. Per proposal at http://archives.postgresql.org/pgsql-hackers/2007-01/msg01581.php
  PS: the tsearch_core patch should be slightly modified to support these changes, but I'm waiting for a verdict on the tsearch_core patch review.
  [Teodor Sigaev, 2007-01-31, 1 file changed, -1/+115]
* Dept of second thoughts: the IQ of estimate_array_length() needs to be
  kept on par with that of scalararraysel(), else estimates that should track might not. Hence teach it about binary-compatible cases, too.
  [Tom Lane, 2007-01-28, 1 file changed, -1/+4]
* Fix scalararraysel() to cope with binary-compatible cases, such as text[]
  versus varchar[]. This oversight probably explains Ryan Holmes' recent complaint --- he was getting a generic selectivity estimate instead of anything intelligent.
  [Tom Lane, 2007-01-28, 1 file changed, -8/+83]
* Put back planner's ability to cache the results of mergejoinscansel(),
  which I had removed in the first cut of the EquivalenceClass rewrite to simplify that patch a little. But it's still important --- in a four-way join problem mergejoinscansel() was eating about 40% of the planning time according to gprof. Also, improve the EquivalenceClass code to re-use join RestrictInfos rather than generating fresh ones for each join considered. This saves some memory space but more importantly improves the effectiveness of caching planning info in RestrictInfos.
  [Tom Lane, 2007-01-22, 1 file changed, -5/+26]
* Refactor planner's pathkeys data structure to create a separate, explicit
  representation of equivalence classes of variables. This is an extensive rewrite, but it brings a number of benefits:
  * planner no longer fails in the presence of "incomplete" operator families that don't offer operators for every possible combination of datatypes.
  * avoid generating and then discarding redundant equality clauses.
  * remove bogus assumption that derived equalities always use operators named "=".
  * mergejoins can work with a variety of sort orders (e.g., descending) now, instead of tying each mergejoinable operator to exactly one sort order.
  * better recognition of redundant sort columns.
  * can make use of equalities appearing underneath an outer join.
  [Tom Lane, 2007-01-20, 1 file changed, -6/+5]
* Support ORDER BY ... NULLS FIRST/LAST, and add ASC/DESC/NULLS FIRST/NULLS LAST
  per-column options for btree indexes. The planner's support for this is still pretty rudimentary; it does not yet know how to plan mergejoins with nondefault ordering options. The documentation is pretty rudimentary, too. I'll work on improving that stuff later. Note incompatible change from prior behavior: ORDER BY ... USING will now be rejected if the operator is not a less-than or greater-than member of some btree opclass. This prevents less-than-sane behavior if an operator that doesn't actually define a proper sort ordering is selected.
  [Tom Lane, 2007-01-09, 1 file changed, -2/+19]
* Update CVS HEAD for 2007 copyright. Back branches are typically not
  back-stamped for this.
  [Bruce Momjian, 2007-01-05, 1 file changed, -2/+2]
* Fix regex_fixed_prefix() to cope reasonably well with regex patterns of the
  form '^(foo)$'. Before, these could never be optimized into indexscans. The recent changes to make psql and pg_dump generate such patterns (for \d commands and -t and related switches, respectively) therefore represented a big performance hit for people with large pg_class catalogs, as seen in recent gripe from Erik Jones. While at it, be more paranoid about case-sensitivity checking in multibyte encodings, and fix some other corner cases in which a regex might be interpreted too liberally.
  [Tom Lane, 2007-01-03, 1 file changed, -44/+93]
* Restructure operator classes to allow improved handling of cross-data-type
  cases. Operator classes now exist within "operator families". While most families are equivalent to a single class, related classes can be grouped into one family to represent the fact that they are semantically compatible. Cross-type operators are now naturally adjunct parts of a family, without having to wedge them into a particular opclass as we had done originally. This commit restructures the catalogs and cleans up enough of the fallout so that everything still works at least as well as before, but most of the work needed to actually improve the planner's behavior will come later. Also, there are not yet CREATE/DROP/ALTER OPERATOR FAMILY commands; the only way to create a new family right now is to allow CREATE OPERATOR CLASS to make one by default. I owe some more documentation work, too. But that can all be done in smaller pieces once this infrastructure is in place.
  [Tom Lane, 2006-12-23, 1 file changed, -65/+83]
* Fix some planner bugs exposed by reports from Arjen van der Meijden. These
  are all in new-in-8.2 logic associated with indexability of ScalarArrayOpExpr (IN-clauses) or amortization of indexscan costs across repeated indexscans on the inside of a nestloop. In particular:
  Fix some logic errors in the estimation for multiple scans induced by a ScalarArrayOpExpr indexqual.
  Include a small cost component in bitmap index scans to reflect the costs of manipulating the bitmap itself; this is mainly to prevent a bitmap scan from appearing to have the same cost as a plain indexscan for fetching a single tuple. Also add a per-index-scan-startup CPU cost component; while prior releases were clearly too pessimistic about the cost of repeated indexscans, the original 8.2 coding allowed the cost of an indexscan to effectively go to zero if repeated often enough, which is overly optimistic.
  Pay some attention to index correlation when estimating costs for a nestloop inner indexscan: this is significant when the plan fetches multiple heap tuples per iteration, since high correlation means those tuples are probably on the same or adjacent heap pages.
  [Tom Lane, 2006-12-15, 1 file changed, -9/+47]
* pgindent run for 8.2.
  [Bruce Momjian, 2006-10-04, 1 file changed, -94/+97]
* Change patternsel (LIKE/regex selectivity estimation) so that if there
  is a large enough histogram, it will use the number of matches in the histogram to derive a selectivity estimate, rather than the admittedly pretty bogus heuristics involving examining the pattern contents. I set 'large enough' at 100, but perhaps we should change that later. Also apply the same technique in contrib/ltree's <@ and @> estimator. Per discussion with Stefan Kaltenbrunner and Matteo Beccati.
  [Tom Lane, 2006-09-20, 1 file changed, -108/+202]
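  A sketch of the histogram-driven estimate described above, with illustrative
  names; the 100-entry threshold follows the commit text, everything else is an
  assumption.

      #include <stdbool.h>

      static double
      histogram_selectivity_sketch(const char *const *hist_values, int nvalues,
                                   bool (*pattern_matches)(const char *))
      {
          int     i;
          int     nmatch = 0;

          if (nvalues < 100)
              return -1.0;    /* too few entries: caller keeps the old heuristics */

          for (i = 0; i < nvalues; i++)
          {
              if (pattern_matches(hist_values[i]))
                  nmatch++;
          }
          return (double) nmatch / nvalues;
      }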
* Improve usage of effective_cache_size parameter by assuming that all the
  tables in the query compete for cache space, not just the one we are currently costing an indexscan for. This seems more realistic, and it definitely will help in examples recently exhibited by Stefan Kaltenbrunner. To get the total size of all the tables involved, we must tweak the handling of 'append relations' a bit --- formerly we looked up information about the child tables on-the-fly during set_append_rel_pathlist, but it needs to be done before we start doing any cost estimation, so push it into the add_base_rels_to_query scan.
  [Tom Lane, 2006-09-19, 1 file changed, -2/+3]
* Work around bug in strxfrm() in MS VS2005.
  William ZHANG
  [Bruce Momjian, 2006-07-26, 1 file changed, -2/+10]
* Add a fudge factor to genericcostestimate() to prevent the planner from
  thinking that indexes of different sizes are equally attractive. Per gripe from Jim Nasby. (I remain unconvinced that there's such a problem in existing releases, but CVS HEAD definitely has got a problem because of its new count-only-leaf-pages approach to indexscan costing.)
  [Tom Lane, 2006-07-24, 1 file changed, -1/+19]
* Remove 576 references to include files that were not needed.
  [Bruce Momjian, 2006-07-14, 1 file changed, -12/+1]
* Fix oversight in planning for multiple indexscans driven by
  ScalarArrayOpExpr index quals: we were estimating the right total number of rows returned, but treating the index-access part of the cost as if a single scan were fetching that many consecutive index tuples. Actually we should treat it as a multiple indexscan, and if there are enough of 'em the Mackert-Lohman discount should kick in.
  [Tom Lane, 2006-07-01, 1 file changed, -8/+81]
* Make the planner estimate costs for nestloop inner indexscans on the basis
  that the Mackert-Lohman formula applies across all the repetitions of the nestloop, not just each scan independently. We use the M-L formula to estimate the number of pages fetched from the index as well as from the table; that isn't what it was designed for, but it seems reasonably applicable anyway. This makes large numbers of repetitions look much cheaper than before, which accords with many reports we've received of overestimation of the cost of a nestloop. Also, change the index access cost model to charge random_page_cost per index leaf page touched, while explicitly not counting anything for access to metapage or upper tree pages. This may all need tweaking after we get some field experience, but in simple tests it seems to be giving saner results than before. The main thing is to get the infrastructure in place to let cost_index() and amcostestimate functions take repeated scans into account at all. Per my recent proposal. Note: this patch changes pg_proc.h, but I did not force initdb because the changes are basically cosmetic --- the system does not look into pg_proc to decide how to call an index amcostestimate function, and there's no way to call such a function from SQL at all.
  [Tom Lane, 2006-06-06, 1 file changed, -34/+74]
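  For reference, a hedged restatement of the Mackert-Lohman page-fetch
  approximation used for this kind of repeated-scan costing (the authoritative
  version lives in the planner's cost code, not here): T is the number of pages
  in the table, tuples_fetched the total tuples fetched across all repetitions,
  and b the assumed cache size in pages.

      static double
      mackert_lohman_pages_sketch(double T, double tuples_fetched, double b)
      {
          double  pages;

          if (T <= b)
          {
              /* table fits in cache: estimate approaches T asymptotically */
              pages = (2.0 * T * tuples_fetched) / (2.0 * T + tuples_fetched);
              if (pages > T)
                  pages = T;
          }
          else
          {
              /* table bigger than cache: switch formulas once the cache saturates */
              double  lim = (2.0 * T * b) / (2.0 * T - b);

              if (tuples_fetched <= lim)
                  pages = (2.0 * T * tuples_fetched) / (2.0 * T + tuples_fetched);
              else
                  pages = b + (tuples_fetched - lim) * (T - b) / T;
          }
          return pages;
      }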
* Add a GUC parameter seq_page_cost, and use that everywhere we formerly
  assumed that a sequential page fetch has cost 1.0. This patch doesn't in itself change the system's behavior at all, but it opens the door to people adopting other units of measurement for EXPLAIN costs. Also, if we ever decide it's worth inventing per-tablespace access cost settings, this change provides a workable intellectual framework for that.
  [Tom Lane, 2006-06-05, 1 file changed, -3/+3]
* GIN: Generalized Inverted iNdex.
  text[], int4[], Tsearch2 support for GIN.
  [Teodor Sigaev, 2006-05-02, 1 file changed, -1/+19]
* Avoid assuming that statistics for a parent relation reflect the properties of
  the union of its child relations as well. This might have been a good idea when it was originally coded, but it's a fatally bad idea when inheritance is being used for partitioning. It's better to have no stats at all than completely misleading stats. Per report from Mark Liberman. The bug arguably exists all the way back, but I've only patched HEAD and 8.1 because we weren't particularly trying to support partitioning before 8.1. Eventually we ought to look at deriving union statistics instead of just punting, but for now the drop kick looks good.
  [Tom Lane, 2006-05-02, 1 file changed, -5/+13]
* Generalize mcv_selectivity() to support both VAR OP CONST and CONST OP VAR
  cases. This was not needed in the existing uses within selfuncs.c, but if we're gonna export it for general use, the extra generality seems helpful. Motivated by looking at ltree example.
  [Tom Lane, 2006-04-27, 1 file changed, -14/+14]
* If we're going to expose VariableStatData for contrib modules to use,
  then we should export a reasonable set of the supporting routines too.
  [Tom Lane, 2006-04-27, 1 file changed, -13/+5]
* Move ltree parentsel() selectivity function into /contrib/ltree.
  [Bruce Momjian, 2006-04-26, 1 file changed, -204/+2]
* Enhanced containment selectivity function for /contrib/ltree
  Matteo Beccati
  [Bruce Momjian, 2006-04-26, 1 file changed, -1/+180]
* Eliminate some no-longer-needed workarounds for palloc's old behavior
  of rejecting palloc(0). Also, tweak like_selectivity() to avoid assuming the presented pattern is nonempty; although that assumption is valid, it doesn't really help much, and the new coding is more correct anyway since it properly handles redundant wildcards. In combination these changes should eliminate a Coverity warning noted by Martijn.
  [Tom Lane, 2006-04-20, 1 file changed, -37/+21]
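  A loose sketch of the kind of pattern-contents heuristic being touched here,
  with an invented helper name and illustrative constants: runs of '%' count as
  a single wildcard, so redundant wildcards no longer change the estimate.

      static double
      like_selectivity_sketch(const char *pattern)
      {
          const double fixed_char_sel = 0.20;   /* illustrative values only */
          const double any_char_sel = 0.90;
          const double wildcard_sel = 5.0;
          double      sel = 1.0;
          int         prev_wildcard = 0;

          for (; *pattern; pattern++)
          {
              if (*pattern == '%')
              {
                  if (!prev_wildcard)
                      sel *= wildcard_sel;      /* count a run of '%' only once */
                  prev_wildcard = 1;
              }
              else
              {
                  prev_wildcard = 0;
                  sel *= (*pattern == '_') ? any_char_sel : fixed_char_sel;
              }
          }
          return (sel > 1.0) ? 1.0 : sel;
      }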
* Update copyright for 2006. Update scripts.
  [Bruce Momjian, 2006-03-05, 1 file changed, -2/+2]
* Allow row comparisons to be used as indexscan qualifications.
  This completes the project to upgrade our handling of row comparisons.
  [Tom Lane, 2006-01-25, 1 file changed, -1/+12]
* Add selectivity-calculation code for RowCompareExpr nodes. Simplistic,
  but a lot better than nothing at all ...
  [Tom Lane, 2006-01-14, 1 file changed, -1/+60]
* Improve patternsel() by applying the operator itself to each value
  listed in the column's most-common-values statistics entry. This gives us an exact selectivity result for the portion of the column population represented by the MCV list, which can be a big leg up in accuracy if that's a large fraction of the population. The heuristics involving pattern contents and prefix are applied only to the part of the population not included in the MCV list.
  [Tom Lane, 2006-01-10, 1 file changed, -92/+180]
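  A sketch of the MCV-driven portion of the estimate described above, with
  illustrative names: the operator is applied to each most-common value and the
  matching frequencies are summed, so the heuristics only have to cover the
  remainder of the column population.

      #include <stdbool.h>

      static double
      mcv_pattern_selectivity_sketch(const char *const *mcv_values,
                                     const double *mcv_freqs, int nmcv,
                                     bool (*pattern_matches)(const char *))
      {
          int     i;
          double  sel = 0.0;

          for (i = 0; i < nmcv; i++)
          {
              if (pattern_matches(mcv_values[i]))
                  sel += mcv_freqs[i];
          }
          return sel;
      }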