
Commit 93b945b

Rename performance project to: pyperformance
1 parent 202d921 commit 93b945b

119 files changed: +141, -139 lines

Note: diffs for only some of the 119 changed files are shown below; the rest are hidden in this view.

.gitignore

+1 -1

@@ -10,7 +10,7 @@
 # Created by setup.py sdist
 build/
 dist/
-performance.egg-info/
+pyperformance.egg-info/

 # Created by the pyperformance script
 venv/

.travis.yml

+2 -1

@@ -7,7 +7,8 @@ python:
 - pypy
 install:
 - pip install -U 'setuptools>=18.5' 'pip>=6.0'
-# Need to install performance, performance/tests/test_compare.py imports it
+# Need to install pyperformance,
+# pyperformance/tests/test_compare.py imports it
 - pip install -e .
 script:
 - python runtests.py

README.rst

+11 -11

@@ -2,22 +2,22 @@
 The Python Benchmark Suite
 ##########################

-.. image:: https://img.shields.io/pypi/v/performance.svg
-   :alt: Latest performance release on the Python Cheeseshop (PyPI)
-   :target: https://pypi.python.org/pypi/performance
+.. image:: https://img.shields.io/pypi/v/pyperformance.svg
+   :alt: Latest pyperformance release on the Python Cheeseshop (PyPI)
+   :target: https://pypi.python.org/pypi/pyperformance

-.. image:: https://travis-ci.org/python/performance.svg?branch=master
-   :alt: Build status of performance on Travis CI
-   :target: https://travis-ci.org/python/performance
+.. image:: https://travis-ci.org/python/pyperformance.svg?branch=master
+   :alt: Build status of pyperformance on Travis CI
+   :target: https://travis-ci.org/python/pyperformance

-The ``performance`` project is intended to be an authoritative source of
+The ``pyperformance`` project is intended to be an authoritative source of
 benchmarks for all Python implementations. The focus is on real-world
 benchmarks, rather than synthetic benchmarks, using whole applications when
 possible.

-* `performance documentation <http://pyperformance.readthedocs.io/>`_
-* `performance GitHub project <https://github.com/python/performance>`_
+* `pyperformance documentation <http://pyperformance.readthedocs.io/>`_
+* `pyperformance GitHub project <https://github.com/python/pyperformance>`_
  (source code, issues)
-* `Download performance on PyPI <https://pypi.python.org/pypi/performance>`_
+* `Download pyperformance on PyPI <https://pypi.python.org/pypi/pyperformance>`_

-performance is distributed under the MIT license.
+pyperformance is distributed under the MIT license.

TODO.rst

+9 -9

@@ -55,7 +55,7 @@ Port PyPy benchmarks

 Repository: https://bitbucket.org/pypy/benchmarks/

-Different from performance?
+Different from pyperformance?

 * json_bench

@@ -95,13 +95,13 @@ Deliberate choice to not add it:

 Done:

-* ai (called bm_nqueens in performance)
+* ai (called bm_nqueens in pyperformance)
 * bm_chameleon
 * bm_mako
 * chaos
 * crypto_pyaes
 * deltablue
-* django (called django_template in performance)
+* django (called django_template in pyperformance)
 * dulwich_log
 * fannkuch
 * float
@@ -112,11 +112,11 @@ Done:
 * html5lib
 * mdp
 * meteor-contest
-* nbody_modified (called nbody in performance)
+* nbody_modified (called nbody in pyperformance)
 * nqueens
 * pidigits
-* pyflate-fast (called pyflate in performance)
-* raytrace-simple (called raytrace in performance)
+* pyflate-fast (called pyflate in pyperformance)
+* raytrace-simple (called raytrace in pyperformance)
 * richards
 * scimark_fft
 * scimark_lu
@@ -140,7 +140,7 @@ pyston benchmarks

 Add benchmarks from the Pyston benchmark suite:
 https://github.com/dropbox/pyston-perf
-and convince Pyston to use performance :-)
+and convince Pyston to use pyperformance :-)

 TODO:

@@ -150,7 +150,7 @@ TODO:
 - django_template3_10x
 - django_template3
 - django_template
-- fasta (it's different than performance "regex_dna")
+- fasta (it's different than pyperformance "regex_dna")
 - interp2
 - pyxl_bench_10x
 - pyxl_bench2_10x
@@ -172,4 +172,4 @@ Done:
 - richards
 - sqlalchemy_imperative, sqlalchemy_imperative2, sqlalchemy_imperative2_10x:
   use --rows cmdline option to control the number of SQL rows
-- sre_compile_ubench: performance has a much more complete benchmark on regex
+- sre_compile_ubench: pyperformance has a much more complete benchmark on regex

doc/benchmark.conf.sample

+3 -3

@@ -28,7 +28,7 @@ git_remote = remotes/origin
 # Create files into bench_dir:
 # - bench_dir/bench-xxx.log
 # - bench_dir/prefix/: where Python is installed
-# - bench_dir/venv/: Virtual environment used by performance
+# - bench_dir/venv/: Virtual environment used by pyperformance
 bench_dir = ~/bench_tmpdir

 # Link Time Optimization (LTO)?
@@ -64,10 +64,10 @@ install = True
 # Run "sudo python3 -m pyperf system tune" before running benchmarks?
 system_tune = True

-# --benchmarks option for 'performance run'
+# --benchmarks option for 'pyperformance run'
 benchmarks =

-# --affinity option for 'pyperf system tune' and 'performance run'
+# --affinity option for 'pyperf system tune' and 'pyperformance run'
 affinity =

 # Upload generated JSON file?
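
Aside: benchmark.conf.sample is a plain INI-style file, so a copy of it can be inspected with Python's standard configparser before handing it to the compile command. A minimal sketch assuming only the option names visible in the hunks above; the "config" section name and the read_dict() payload are placeholders, not the real sample's layout::

    # Sketch: load INI-style options like the ones shown above.
    # In practice you would call cfg.read("benchmark.conf"); read_dict()
    # keeps this example self-contained.
    import configparser

    cfg = configparser.ConfigParser()
    cfg.read_dict({
        "config": {                      # placeholder section name
            "bench_dir": "~/bench_tmpdir",
            "system_tune": "True",
            "benchmarks": "",
            "affinity": "",
        }
    })

    section = cfg["config"]
    print(section.get("bench_dir"))           # ~/bench_tmpdir
    print(section.getboolean("system_tune"))  # True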

doc/benchmarks.rst

+10 -10

@@ -7,7 +7,7 @@ Available Groups
 ================

 Like individual benchmarks (see "Available benchmarks" below), benchmarks group
-are allowed after the `-b` option. Use ``python3 -m performance list_groups``
+are allowed after the `-b` option. Use ``python3 -m pyperformance list_groups``
 to list groups and their benchmarks.

 Available benchmark groups:
@@ -23,13 +23,13 @@ Available benchmark groups:
   start-up time.
 * ``template``: Templating libraries

-Use the ``python3 -m performance list_groups`` command to list groups and their
+Use the ``python3 -m pyperformance list_groups`` command to list groups and their
 benchmarks.

 Available Benchmarks
 ====================

-In performance 0.5.5, the following microbenchmarks have been removed because
+In pyperformance 0.5.5, the following microbenchmarks have been removed because
 they are too short, not representative of real applications and are too
 unstable.

@@ -42,7 +42,7 @@ unstable.
 2to3
 ----

-Run the 2to3 tool on the ``performance/benchmarks/data/2to3/`` directory: copy
+Run the 2to3 tool on the ``pyperformance/benchmarks/data/2to3/`` directory: copy
 of the ``django/core/*.py`` files of Django 1.1.4, 9 files.

 Run the ``python -m lib2to3 -f all <files>`` command where ``python`` is
@@ -89,7 +89,7 @@ Copyright (C) 2005 Carl Friedrich Bolz

 Image generated by bm_chaos (took 3 sec on CPython 3.5) with the command::

-    python3 performance/benchmarks/bm_chaos.py --worker -l1 -w0 -n1 --filename chaos.ppm --width=512 --height=512 --iterations 50000
+    python3 pyperformance/benchmarks/bm_chaos.py --worker -l1 -w0 -n1 --filename chaos.ppm --width=512 --height=512 --iterations 50000


 crypto_pyaes
@@ -139,7 +139,7 @@ dulwich_log
 -----------

 Iterate on commits of the asyncio Git repository using the Dulwich module.
-Use ``performance/benchmarks/data/asyncio.git/`` repository.
+Use ``pyperformance/benchmarks/data/asyncio.git/`` repository.

 Pseudo-code of the benchmark::

@@ -215,7 +215,7 @@ See the `Mercurial project <https://www.mercurial-scm.org/>`_.
 html5lib
 --------

-Parse the ``performance/benchmarks/data/w3_tr_html5.html`` HTML file (132 KB)
+Parse the ``pyperformance/benchmarks/data/w3_tr_html5.html`` HTML file (132 KB)
 using ``html5lib``. The file is the HTML 5 specification, but truncated to
 parse the file in less than 1 second (around 250 ms).

@@ -385,7 +385,7 @@ pyflate
 -------

 Benchmark of a pure-Python bzip2 decompressor: decompress the
-``performance/benchmarks/data/interpreter.tar.bz2`` file in memory.
+``pyperformance/benchmarks/data/interpreter.tar.bz2`` file in memory.

 Copyright 2006--2007-01-21 Paul Sladen:
 http://www.paul.sladen.org/projects/compression/
@@ -427,7 +427,7 @@ From http://www.lshift.net/blog/2008/10/29/toy-raytracer-in-python

 Image generated by the command (took 68.4 sec on CPython 3.5)::

-    python3 performance/benchmarks/bm_raytrace.py --worker --filename=raytrace.ppm -l1 -w0 -n1 -v --width=800 --height=600
+    python3 pyperformance/benchmarks/bm_raytrace.py --worker --filename=raytrace.ppm -l1 -w0 -n1 -v --width=800 --height=600


 regex_compile
@@ -525,7 +525,7 @@ spambayes

 Run a canned mailbox through a SpamBayes ham/spam classifier.

-Data files from ``performance/benchmarks/data`` directory:
+Data files from ``pyperformance/benchmarks/data`` directory:

 * ``spambayes_mailbox``: Mailbox file which contains 64 emails
 * ``spambayes_hammie.pkl``: Ham data (serialized by pickle)

doc/changelog.rst

+1

@@ -4,6 +4,7 @@ Changelog
 Version 0.8.1
 -------------

+* Project renamed from "performance" to "pyperformance"
 * Upgrade pyperf from version 1.6.0 to 1.6.1. The project has been renamed from
   "perf" to "pyperf". Update imports.
 * Issue #54: Update Genshi to 0.7.3. It is now compatible with Python 3.8.

doc/index.rst

+7 -7

@@ -2,17 +2,17 @@
 The Python Performance Benchmark Suite
 ######################################

-The ``performance`` project is intended to be an authoritative source of
+The ``pyperformance`` project is intended to be an authoritative source of
 benchmarks for all Python implementations. The focus is on real-world
 benchmarks, rather than synthetic benchmarks, using whole applications when
 possible.

-* `performance documentation <http://pyperformance.readthedocs.io/>`_
-* `performance GitHub project <https://github.com/python/performance>`_
+* `pyperformance documentation <http://pyperformance.readthedocs.io/>`_
+* `pyperformance GitHub project <https://github.com/python/pyperformance>`_
  (source code, issues)
-* `Download performance on PyPI <https://pypi.python.org/pypi/performance>`_
+* `Download pyperformance on PyPI <https://pypi.python.org/pypi/pyperformance>`_

-performance is distributed under the MIT license.
+pyperformance is distributed under the MIT license.

 Documenation:

@@ -27,7 +27,7 @@ Documenation:
 Other Python Benchmarks:

 * CPython: `speed.python.org <https://speed.python.org/>`_ uses pyperf,
-  performance and `Codespeed <https://github.com/tobami/codespeed/>`_ (Django
+  pyperformance and `Codespeed <https://github.com/tobami/codespeed/>`_ (Django
   web application)
 * PyPy: `speed.pypy.org <http://speed.pypy.org/>`_
   uses `PyPy benchmarks <https://bitbucket.org/pypy/benchmarks>`_
@@ -41,7 +41,7 @@ Other Python Benchmarks:

 See also the `Python speed mailing list
 <https://mail.python.org/mailman/listinfo/speed>`_ and the `Python pyperf module
-<http://pyperf.readthedocs.io/>`_ (used by performance).
+<http://pyperf.readthedocs.io/>`_ (used by pyperformance).

 Image generated by bm_raytrace (pure Python raytrace):


doc/usage.rst

+17 -17

@@ -5,15 +5,15 @@ Usage
 Installation
 ============

-Command to install performance::
+Command to install pyperformance::

-    python3 -m pip install performance
+    python3 -m pip install pyperformance

 The command installs a new ``pyperformance`` program.

 If needed, ``pyperf`` and ``six`` dependencies are installed automatically.

-performance works on Python 2.7, 3.4 and newer.
+pyperformance works on Python 2.7, 3.4 and newer.

 On Python 2, the ``virtualenv`` program (or the Python module) is required
 to create virtual environments. On Python 3, the ``venv`` module of the
@@ -27,7 +27,7 @@ extension. Commands on Fedora to install dependencies:
 * Python 3: ``sudo dnf install python3-devel``
 * PyPy: ``sudo dnf install pypy-devel``

-In some cases, performance fails to create a virtual environment. In this case,
+In some cases, pyperformance fails to create a virtual environment. In this case,
 upgrading virtualenv on the system can fix the issue. Example::

     sudo python2 -m pip install -U virtualenv
@@ -42,8 +42,8 @@ Commands to compare Python 2 and Python 3 performances::
     pyperformance run --python=python3 -o py3.json
     pyperformance compare py2.json py3.json

-Note: ``python3 -m performance ...`` syntax works as well (ex: ``python3 -m
-performance run -o py3.json``), but requires to install performance on each
+Note: ``python3 -m pyperformance ...`` syntax works as well (ex: ``python3 -m
+pyperformance run -o py3.json``), but requires to install pyperformance on each
 tested Python version.

 JSON files are produced by the pyperf module and so can be analyzed using pyperf
@@ -141,7 +141,7 @@ Options of the ``list`` command::
                         except the negative arguments. Otherwise we run only
                         the positive arguments.

-Use ``python3 -m performance list -b all`` to list all benchmarks.
+Use ``python3 -m pyperformance list -b all`` to list all benchmarks.


 venv
@@ -247,31 +247,31 @@ How to get stable benchmarks

 * Run ``python3 -m pyperf system tune`` command
 * Compile Python using LTO (Link Time Optimization) and PGO (profile guided
-  optimizations): use the :ref:`performance compile <cmd-compile>` command with
+  optimizations): use the :ref:`pyperformance compile <cmd-compile>` command with
   uses LTO and PGO by default
 * See advices of the pyperf documentation:
   `How to get reproductible benchmark results
   <http://pyperf.readthedocs.io/en/latest/run_benchmark.html#how-to-get-reproductible-benchmark-results>`_.


-performance virtual environment
-===============================
+pyperformance virtual environment
+=================================

-To run benchmarks, performance first creates a virtual environment. It installs
+To run benchmarks, pyperformance first creates a virtual environment. It installs
 requirements with fixed versions to get a reproductible environment. The system
 Python has unknown module installed with unknown versions, and can have
 ``.pth`` files run at Python startup which can modify Python behaviour or at
 least slow down Python startup.


-What is the goal of performance
-===============================
+What is the goal of pyperformance
+=================================

 A benchmark is always written for a specific purpose. Depending how the
 benchmark is written and how the benchmark is run, the result can be different
 and so have a different meaning.

-The performance benchmark suite has multiple goals:
+The pyperformance benchmark suite has multiple goals:

 * Help to detect performance regression in a Python implementation
 * Validate that an optimization change makes Python faster and don't
@@ -284,7 +284,7 @@ The performance benchmark suite has multiple goals:
 Don't disable GC nor ASLR
 -------------------------

-The pyperf module and performance benchmarks are designed to produce
+The pyperf module and pyperformance benchmarks are designed to produce
 reproductible results, but not at the price of running benchmarks in a special
 mode which would not be used to run applications in production. For these
 reasons, the Python garbage collector, Python randomized hash function and
@@ -313,11 +313,11 @@ Warmups and steady state

 A borderline issue are the benchmarks "warmups". The first values of each
 worker process are always slower: 10% slower in the best case, it can be 1000%
-slower or more on PyPy. Right now (2017-04-14), performance ignore first values
+slower or more on PyPy. Right now (2017-04-14), pyperformance ignore first values
 considered as warmup until a benchmark reachs its "steady state". The "steady
 state" can include temporary spikes every 5 values (ex: caused by the garbage
 collector), and it can still imply further JIT compiler optimizations but with
-a "low" impact on the average performance.
+a "low" impact on the average pyperformance.

 To be clear "warmup" and "steady state" are a work-in-progress and a very
 complex topic, especially on PyPy and its JIT compiler.
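
One aside on the workflow documented above: the JSON files written by ``pyperformance run`` are ordinary pyperf files, so besides ``pyperformance compare`` they can also be inspected programmatically. A rough sketch, assuming pyperf is installed and a ``py3.json`` file was produced as shown in the usage docs (check the pyperf documentation for the exact API)::

    # Sketch: load a result file written by "pyperformance run -o py3.json"
    # and print the mean of each benchmark.
    import pyperf

    suite = pyperf.BenchmarkSuite.load("py3.json")
    for bench in suite.get_benchmarks():
        # Values of timing benchmarks are in seconds.
        print("%s: %.3f ms" % (bench.get_name(), bench.mean() * 1e3))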

performance/__main__.py

-2
This file was deleted; its two lines reappear, updated for the rename, as pyperformance/__main__.py below.

File renamed without changes.

pyperformance/__main__.py

+2

@@ -0,0 +1,2 @@
+import pyperformance.cli
+pyperformance.cli.main()
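
For context, this ``__main__.py`` is what makes ``python3 -m pyperformance`` work: running the package with ``-m`` executes this module, which simply delegates to the package's command-line entry point. Below is a generic, single-file sketch of the same delegation pattern; the ``mytool`` name and the argparse-based ``main()`` are illustrative, not pyperformance's actual cli module::

    # Generic illustration of the "__main__.py delegates to cli.main()" pattern.
    import argparse
    import sys


    def main(argv=None):
        # Plays the role of the package's cli.main(): parse arguments and
        # dispatch to the requested command.
        parser = argparse.ArgumentParser(prog="mytool")
        parser.add_argument("command", choices=["run", "list"],
                            help="action to perform")
        args = parser.parse_args(argv)
        print("would execute the %r command" % args.command)
        return 0


    if __name__ == "__main__":
        # In a real package this body lives in mypkg/__main__.py as just:
        #     import mypkg.cli
        #     mypkg.cli.main()
        # so that "python3 -m mypkg" starts the CLI.
        sys.exit(main())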
