author     Tom Lane <tgl@sss.pgh.pa.us>  2007-05-30 20:12:03 +0000
committer  Tom Lane <tgl@sss.pgh.pa.us>  2007-05-30 20:12:03 +0000
commit     d526575f893c1a4e05ebd307e80203536b213a6d
tree       529be7e5571f622bad1daab0d02de0c6669e9b81  /src/backend/commands/analyze.c
parent     0a6f2ee84de589e14941da640fb686c7eda7be01
download   postgresql-d526575f893c1a4e05ebd307e80203536b213a6d.tar.gz
Make large sequential scans and VACUUMs work in a limited-size "ring" of
buffers, rather than blowing out the whole shared-buffer arena. Aside from
avoiding cache spoliation, this fixes the problem that VACUUM formerly tended
to cause a WAL flush for every page it modified, because we had it hacked to
use only a single buffer. Those flushes will now occur only once per
ring-ful. The exact ring size, and the threshold for seqscans to switch into
the ring usage pattern, remain under debate; but the infrastructure seems
done. The key bit of infrastructure is a new optional BufferAccessStrategy
object that can be passed to ReadBuffer operations; this replaces the former
StrategyHintVacuum API.
This patch also changes the buffer usage-count methodology a bit: we now
advance usage_count when first pinning a buffer, rather than when last
unpinning it. To preserve the behavior that a buffer's lifetime starts to
decrease when it's released, the clock sweep code is modified to not decrement
usage_count of pinned buffers.
Work not done in this commit: teach GiST and GIN indexes to use the vacuum
BufferAccessStrategy for vacuum-driven fetches.
Original patch by Simon, reworked by Heikki and again by Tom.
Diffstat (limited to 'src/backend/commands/analyze.c')
-rw-r--r--  src/backend/commands/analyze.c  12
1 file changed, 9 insertions(+), 3 deletions(-)
diff --git a/src/backend/commands/analyze.c b/src/backend/commands/analyze.c
index 2754a6db6a..d77aec2dd7 100644
--- a/src/backend/commands/analyze.c
+++ b/src/backend/commands/analyze.c
@@ -8,7 +8,7 @@
  *
  *
  * IDENTIFICATION
- *	  $PostgreSQL: pgsql/src/backend/commands/analyze.c,v 1.107 2007/04/30 03:23:48 tgl Exp $
+ *	  $PostgreSQL: pgsql/src/backend/commands/analyze.c,v 1.108 2007/05/30 20:11:56 tgl Exp $
  *
  *-------------------------------------------------------------------------
  */
@@ -63,10 +63,13 @@ typedef struct AnlIndexData
 
 /* Default statistics target (GUC parameter) */
 int			default_statistics_target = 10;
 
+/* A few variables that don't seem worth passing around as parameters */
 static int	elevel = -1;
 
 static MemoryContext anl_context = NULL;
 
+static BufferAccessStrategy vac_strategy;
+
 
 static void BlockSampler_Init(BlockSampler bs, BlockNumber nblocks,
 				  int samplesize);
@@ -94,7 +97,8 @@ static bool std_typanalyze(VacAttrStats *stats);
  *	analyze_rel() -- analyze one relation
  */
 void
-analyze_rel(Oid relid, VacuumStmt *vacstmt)
+analyze_rel(Oid relid, VacuumStmt *vacstmt,
+			BufferAccessStrategy bstrategy)
 {
 	Relation	onerel;
 	int			attr_cnt,
@@ -120,6 +124,8 @@ analyze_rel(Oid relid, VacuumStmt *vacstmt)
 	else
 		elevel = DEBUG2;
 
+	vac_strategy = bstrategy;
+
 	/*
 	 * Use the current context for storing analysis info.  vacuum.c ensures
 	 * that this context will be cleared when I return, thus releasing the
@@ -845,7 +851,7 @@ acquire_sample_rows(Relation onerel, HeapTuple *rows, int targrows,
 	 * looking at it.  We don't maintain a lock on the page, so tuples
 	 * could get added to it, but we ignore such tuples.
 	 */
-	targbuffer = ReadBuffer(onerel, targblock);
+	targbuffer = ReadBufferWithStrategy(onerel, targblock, vac_strategy);
 	LockBuffer(targbuffer, BUFFER_LOCK_SHARE);
 	targpage = BufferGetPage(targbuffer);
 	maxoffset = PageGetMaxOffsetNumber(targpage);