| Commit message | Author | Age | Files | Lines |
Signed-off-by: Giampaolo Rodola <g.rodola@gmail.com>
* use str() if the exception derives from OSError / EnvironmentError; this
  way we also print the file name (when there is one)
* use repr() for any other exception
* add tests for the debug() function
* backport contextlib.redirect_stderr
Signed-off-by: Giampaolo Rodola <g.rodola@gmail.com>
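A minimal sketch of the rule described above; the helper name `format_exc` is hypothetical, not psutil's actual API. `str()` is used for `OSError` subclasses so that the file name attached to the exception is shown, and `repr()` for everything else.

```python
def format_exc(exc):
    # On Python 3, EnvironmentError is an alias of OSError, so a single
    # isinstance check covers both.
    if isinstance(exc, OSError):
        return str(exc)   # e.g. "[Errno 2] No such file ...: '/some/file'"
    return repr(exc)      # e.g. "ValueError('boom')"

try:
    open("/nonexistent-path")
except OSError as err:
    msg = format_exc(err)  # message includes the offending file name
```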
The function was exiting after one second due to an unhandled
subprocess.TimeoutExpired.
Fixes #1913
Signed-off-by: guille <guille@users.noreply.github.com>
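For reference, the stdlib exception involved is `subprocess.TimeoutExpired` (raised by `subprocess.run()` when `timeout` elapses); left unhandled, it aborts the caller, which matches the one-second exit described above. A minimal reproduction:

```python
import subprocess
import sys

# Spawn a child that sleeps far longer than the 1-second timeout.
# run() kills the child and raises TimeoutExpired when the timeout hits.
handled = False
try:
    subprocess.run(
        [sys.executable, "-c", "import time; time.sleep(10)"],
        timeout=1,
    )
except subprocess.TimeoutExpired:
    handled = True  # without this handler the caller would exit here
```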
Preamble
=======
We have a [memory leak test suite](https://github.com/giampaolo/psutil/blob/e1ea2bccf8aea404dca0f79398f36f37217c45f6/psutil/tests/__init__.py#L897), which calls a function many times and fails if the process memory increased. We do this to detect missing `free()` or `Py_DECREF` calls in the C modules: when one is missing, we have a memory leak.
The problem
==========
A problem we've been having for probably over 10 years is false positives, because memory fluctuates: it may increase (or even decrease!) due to how the OS handles memory, Python's garbage collector, the fact that RSS is an approximation, and who knows what else. Thus far we tried to compensate for that with the following logic:
- warmup (call fun 10 times)
- call the function many times (1000)
- if memory increased before/after calling function 1000 times, then keep calling it for another 3 secs
- if it still increased at all (> 0) then fail
This logic didn't really solve the problem, as we still had occasional false positives, especially lately on FreeBSD.
The solution
=========
This PR changes the internal algorithm so that, in case of failure (mem > 0 after calling fun() N times), we retry the test up to 5 times, increasing N (the number of repetitions) each time, and consider it a failure only if the memory **keeps increasing** between runs. For instance, here's a legitimate failure:
```
psutil.tests.test_memory_leaks.TestModuleFunctionsLeaks.test_disk_partitions ...
Run #1: extra-mem=696.0K, per-call=3.5K, calls=200
Run #2: extra-mem=1.4M, per-call=3.5K, calls=400
Run #3: extra-mem=2.1M, per-call=3.5K, calls=600
Run #4: extra-mem=2.7M, per-call=3.5K, calls=800
Run #5: extra-mem=3.4M, per-call=3.5K, calls=1000
FAIL
```
If, on the other hand, memory increased on one run (say, 200 calls) but decreased on the next (say, 400 calls), then it's clearly a false positive: memory consumption may still be > 0 on the second run, but if it's lower than on the previous run, which used fewer repetitions, it cannot possibly represent a leak, just a fluctuation:
```
psutil.tests.test_memory_leaks.TestModuleFunctionsLeaks.test_net_connections ...
Run #1: extra-mem=568.0K, per-call=2.8K, calls=200
Run #2: extra-mem=24.0K, per-call=61.4B, calls=400
OK
```
Note about mallinfo()
================
Aka #1275. `mallinfo()` on Linux is supposed to provide memory metrics about how many bytes get allocated on the heap by `malloc()`, so it should be far more precise than RSS and also [USS](http://grodola.blogspot.com/2016/02/psutil-4-real-process-memory-and-environ.html). In another branch where I exposed it, I verified that fluctuations still occur even when using `mallinfo()`, though less often. So even `mallinfo()` would not guarantee 100% stability.
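The retry strategy above can be sketched as follows. This is a simplified model, not psutil's actual test code: `get_mem` stands in for whatever memory metric is sampled (RSS, USS, ...), and the run/call counts mirror the 200..1000 progression shown in the failure output.

```python
# Simplified sketch of the retry strategy described above.
# get_mem() is an assumed callable returning the current memory metric.
def leaks(fun, get_mem, runs=5, base_calls=200):
    prev_extra = 0
    for run in range(1, runs + 1):
        calls = base_calls * run          # 200, 400, ..., 1000
        before = get_mem()
        for _ in range(calls):
            fun()
        extra = get_mem() - before
        if extra <= prev_extra:
            return False                  # stopped growing: a fluctuation
        prev_extra = extra
    return True                           # grew on every run: a real leak
```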
Support for Python 3.3 was dropped in version 5.4.1. Support for Python
3.2 was dropped earlier. Remove all references to these unsupported
versions including documentation, scripts, workarounds, etc. Eases
maintenance as fewer workarounds are used for unsupported environments.
* #1058: have Makefile use PYTHONWARNINGS=all by default for (almost) all commands
* #1058: fix Linux test warnings
* #1058: try not to use the imp module
* #1058: get rid of the imp module completely
* #1058: ignore unicode warnings
* #1058: ignore stderr from procsmem.py
* #1058: fix resource warning from Popen
* #1058: get rid of contextlib.nested (deprecated)
http://bugs.python.org/issue30204
replacing '\x00' for the whole string
OpenBSD
Modules aren't scripts.
They shouldn't have shebang lines.
results in a considerable speedup (from ~ 30% to 50%)
UnicodeEncodeError and then retry; also add unit tests for unicode path names passed to disk_usage()
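A sketch of the catch-and-retry pattern this message describes; the helper is hypothetical (psutil's real code differs), and the stat function is injectable here purely so the sketch is self-contained.

```python
import sys

# Hypothetical helper: if the unicode path cannot be handled as-is,
# encode it to the filesystem encoding and retry the call.
def disk_usage_compat(path, stat_fn):
    try:
        return stat_fn(path)
    except UnicodeEncodeError:
        return stat_fn(path.encode(sys.getfilesystemencoding()))
```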
propset functionality, which no longer applies now that we're using Mercurial.
broken in 0.6.0 release (patch by Riccardo Murri)
and 3.x avoiding the 2to3 run at installation time
defaultdict to construct a table where 'values' are all the processes having 'keys' as their parent and then recursively iterate over it. This is an order of magnitude faster
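The approach described above can be sketched like this (simplified, with hypothetical helper names): a single pass builds a ppid -> children table, and lookups then walk that table instead of rescanning the whole process list for every parent.

```python
from collections import defaultdict

def build_tree(procs):
    # procs: iterable of (pid, ppid) pairs; one pass over the process list.
    table = defaultdict(list)
    for pid, ppid in procs:
        table[ppid].append(pid)
    return table

def descendants(table, pid):
    # Walk the table iteratively, collecting all (grand)children of pid.
    out = []
    stack = [pid]
    while stack:
        p = stack.pop()
        for child in table.get(p, []):
            out.append(child)
            stack.append(child)
    return out
```

Building the table is O(n) and each recursive lookup only touches actual descendants, versus re-checking every process's ppid per parent.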
cleaner and more efficient version