path: root/rts/Sparks.c
author    Duncan Coutts <duncan@well-typed.com>  2011-06-01 19:48:15 +0100
committer Duncan Coutts <duncan@well-typed.com>  2011-07-18 16:31:14 +0100
commit    fa8d20e6d85212290b633159b6ef2d77fb1c4021 (patch)
tree      0dd7f7926d2c3a482e451a691f794fde2cae9a38 /rts/Sparks.c
parent    556557ebee2758acade603e25a8a16266dea791d (diff)
download  haskell-fa8d20e6d85212290b633159b6ef2d77fb1c4021.tar.gz
Classify overflowed sparks separately
When you use `par` to make a spark, if the spark pool on the current capability is full then the spark is discarded. This represents a loss of potential parallelism, and it also means there are simply a lot of sparks around. Both are things a programmer might want to know about when tuning a parallel program that uses `par`.

The "+RTS -s" stats output now reports overflowed sparks, e.g.

    SPARKS: 100001 (15521 converted, 84480 overflowed, 0 dud, 0 GC'd, 0 fizzled)
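As a minimal sketch of the kind of program this affects (assuming `par` and `pseq` from the `parallel` package's `Control.Parallel`; `sumSquares` is a hypothetical example, not code from this patch): every list element sparks one computation, so a long list can easily exceed the capability's spark pool, and those extra sparks are silently discarded. After this patch, running with `+RTS -N2 -s` reports them in the "overflowed" count.

```haskell
import Control.Parallel (par, pseq)

-- Hypothetical example: one spark per list element. With a long input
-- list, many sparks cannot fit in the spark pool and overflow.
sumSquares :: [Int] -> Int
sumSquares []     = 0
sumSquares (x:xs) = y `par` (rest `pseq` (y + rest))
  where
    y    = x * x
    rest = sumSquares xs

main :: IO ()
main = print (sumSquares [1 .. 100000])
```

Compile with `ghc -threaded` and run with `+RTS -N2 -s` to see the SPARKS line; overflowed sparks cost the allocation of the spark but yield no parallelism.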
Diffstat (limited to 'rts/Sparks.c')
-rw-r--r--  rts/Sparks.c | 8 ++++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/rts/Sparks.c b/rts/Sparks.c
index d358ae6660..26b8199035 100644
--- a/rts/Sparks.c
+++ b/rts/Sparks.c
@@ -64,8 +64,12 @@ newSpark (StgRegTable *reg, StgClosure *p)
     SparkPool *pool = cap->sparks;
     if (!fizzledSpark(p)) {
-        pushWSDeque(pool,p);
-        cap->spark_stats.created++;
+        if (pushWSDeque(pool,p)) {
+            cap->spark_stats.created++;
+        } else {
+            /* overflowing the spark pool */
+            cap->spark_stats.overflowed++;
+        }
     } else {
         cap->spark_stats.dud++;
     }