author    | Frank Harrison <frank@doublethefish.com> | 2020-04-26 21:07:00 +0100
committer | GitHub <noreply@github.com>              | 2020-04-26 22:07:00 +0200
commit    | 9a11ae2cc9b20d6f570f5a3e410354902ef818b2 (patch)
tree      | a3d6fecf65ce4b7774a465e1eaa504901b24a4a8 /tox.ini
parent    | be5a61b13e48a129613e0c659bfd28bf9824f53c (diff)
download  | pylint-git-9a11ae2cc9b20d6f570f5a3e410354902ef818b2.tar.gz
benchmark| Potential solution for performance regressions (#3473)

* benchmark| Add a benchmarking option to tox
* benchmark| Add basic performance benchmark baselines for pylint

Here we establish baseline benchmarks for the system when used in a
minimal way. We confirm that -j1 vs. -jN gives some performance boost in
simple situations, establishing a baseline for other benchmarks.

Co-authored-by: Pierre Sassoulas <pierre.sassoulas@gmail.com>
Diffstat (limited to 'tox.ini')

-rw-r--r--  tox.ini | 32 ++++++++++++++++++++++++++++++--

1 file changed, 30 insertions(+), 2 deletions(-)
@@ -1,5 +1,5 @@
 [tox]
-envlist = py35, py36, py37, py38, pypy, pylint
+envlist = py35, py36, py37, py38, pypy, pylint, benchmark
 skip_missing_interpreters = true

 [testenv:pylint]
@@ -53,13 +53,15 @@ deps =
     mccabe
     pytest
     pytest-xdist
+    pytest-benchmark
     pytest-profiling

 setenv =
     COVERAGE_FILE = {toxinidir}/.coverage.{envname}

 commands =
-    python -Wignore -m coverage run -m pytest {toxinidir}/tests/ {posargs:}
+    ; Run tests, ensuring all benchmark tests do not run
+    python -Wignore -m coverage run -m pytest --benchmark-disable {toxinidir}/tests/ {posargs:}

     ; Transform absolute path to relative path
     ; for compatibility with coveralls.io and fix 'source not available' error.
@@ -132,3 +134,29 @@ commands =
     rm -f extensions.rst
     python ./exts/pylint_extensions.py
     sphinx-build -W -b html -d _build/doctrees . _build/html
+
+[testenv:benchmark]
+deps =
+    https://github.com/PyCQA/astroid/tarball/master#egg=astroid-master-2.0
+    coverage<5.0
+    isort
+    mccabe
+    pytest
+    pytest-xdist
+    pygal
+    pytest-benchmark
+
+commands =
+    ; Run the only the benchmark tests, grouping output and forcing .json output so we
+    ; can compare benchmark runs
+    python -Wi -m pytest --exitfirst \
+        --failed-first \
+        --benchmark-only \
+        --benchmark-save=batch_files \
+        --benchmark-save-data \
+        --benchmark-autosave \
+        {toxinidir}/tests \
+        --benchmark-group-by="group" \
+        {posargs:}
+
+changedir = {toxworkdir}
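With the new env in place, the typical workflow would be driven through tox. A hedged sketch (the exact saved-run names depend on your checkout; `compare` is a subcommand of pytest-benchmark's own CLI, assumed to be on the path):

```shell
# Run the regular test suite; benchmark tests are skipped there
# via --benchmark-disable in the default testenv commands.
tox

# Run only the benchmark tests. Because of --benchmark-autosave and
# --benchmark-save, timing results are written as .json files under
# a .benchmarks/ directory, keyed by interpreter and run number.
tox -e benchmark

# Compare saved runs, e.g. before and after a change, grouped the same
# way the tox env groups them.
pytest-benchmark compare --group-by=group
```

This is the mechanism the commit message refers to: saved `.json` runs make it possible to detect performance regressions by diffing benchmark output across commits.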