perftester: Lightweight performance testing of Python functions
Installation
Install using pip:
pip install perftester
The package has three external dependencies: memory_profiler (repo), easycheck (repo), and rounder (repo).
perftester is still under heavy testing. If you find anything that does not work as intended, please let me know via nyggus<at>gmail.com.
Pre-introduction: TL;DR
At the most basic level, using perftester is simple. It offers you two functions for benchmarking (one for execution time and one for memory), and two functions for performance testing (likewise). Below you will find a very short introduction to them. If you want to learn more, however, do not stop there, but read on.
Benchmarking
You have the time_benchmark() and memory_usage_benchmark() functions:
import perftester as pt
def foo(x, n): return [x] * n
pt.time_benchmark(foo, x=129, n=100)
and this will print the results of the time benchmark. The raw results are similar to those that timeit.repeat() returns, but unlike it, pt.time_benchmark() reports the mean raw time per function run, not the overall time; in addition, you will see some summaries of the results.
The above call actually ran the timeit.repeat() function, with the default configuration of Number=100_000 and Repeat=5. If you want to change either of these, you can use the Number and Repeat arguments, respectively:
pt.time_benchmark(foo, x=129, n=100, Number=1000)
pt.time_benchmark(foo, x=129, n=100, Repeat=2)
pt.time_benchmark(foo, x=129, n=100, Number=1000, Repeat=2)
These calls do not change the default settings; you just set the arguments' values on the fly. Later you will learn how to change the default settings, and the settings for a particular function.
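To make the per-run reporting concrete, here is plain timeit (not perftester) with the same default settings; the final division is, roughly, what perftester does for you:

```python
import timeit

def foo(x, n):
    return [x] * n

number, repeat = 100_000, 5  # perftester's defaults for Number and Repeat
# timeit.repeat() returns the *total* time of `number` calls, once per repeat...
totals = timeit.repeat(lambda: foo(129, 100), number=number, repeat=repeat)
# ...whereas perftester reports per-call times: each total divided by `number`.
per_run = [t / number for t in totals]
```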
Some of you may wonder why the Number and Repeat arguments violate what we can call the Pythonic style, by using a capital first letter for function arguments. The reason is simple: I wanted to minimize the risk of conflicts when benchmarking (or testing) a function that itself has a Number or a Repeat argument (or both). The chance that a Python function will have a Number or a Repeat argument is rather small. If that happens, however, you can use functools.partial() to overcome the problem:
from functools import partial
def bar(Number, Repeat): return [Number] * Repeat
# Bind bar's own Number and Repeat, so those names are free for perftester:
bar_p = partial(bar, Number=129, Repeat=100)
pt.time_benchmark(bar_p, Number=100, Repeat=2)
Benchmarking RAM usage is similar:
pt.memory_usage_benchmark(foo, x=129, n=100)
It uses the memory_profiler.memory_usage() function, which runs the function just once to measure its memory use. Almost always, there is no need to repeat the measurement, unless the function's memory usage is highly random. If you have good reasons to change this behavior (e.g., in the case of such randomness), you can request several calls of the function, using the Repeat argument:
pt.memory_usage_benchmark(foo, x=129, n=100, Repeat=100)
You can learn more in the detailed description of the package below.
Testing
The API of the perftester testing functions is similar to that of the benchmarking functions, the only difference being the limits you need to provide. You can determine those limits using the benchmark functions shown above. Here are examples of several performance tests using perftester:
>>> import perftester as pt
>>> def foo(x, n): return [x] * n
# A raw test
>>> pt.time_test(foo, raw_limit=9.e-07, x=129, n=100)
# A relative test
>>> pt.time_test(foo, relative_limit=7, x=129, n=100)
# A raw test
>>> pt.memory_usage_test(foo, raw_limit=25, x=129, n=100)
# A relative test
>>> pt.memory_usage_test(foo, relative_limit=1.2, x=129, n=100)
You can, certainly, use Repeat and Number:
>>> pt.time_test(foo, relative_limit=7, x=129, n=100, Repeat=3, Number=1000)
Raw tests work with raw execution time. Relative tests work with time relative to a call of an empty function; that way, the test should be more or less independent of the machine it runs on: a quick machine should give more or less the same relative results as a slow one.
Relative results, however, can differ between different operating systems.
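The relative mechanism can be sketched like this (a simplified illustration of the idea, not perftester's actual implementation):

```python
import timeit

def empty():
    pass

def foo(x, n):
    return [x] * n

# Time both the tested function and an empty function, per call.
number, repeat = 10_000, 3
t_foo = min(timeit.repeat(lambda: foo(129, 100), number=number, repeat=repeat)) / number
t_empty = min(timeit.repeat(empty, number=number, repeat=repeat)) / number

# The relative result: how many times slower foo() is than an empty call.
# A test like pt.time_test(foo, relative_limit=7, ...) passes when this
# ratio does not exceed the limit.
relative = t_foo / t_empty
```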
You can use these testing functions in pytest, or in dedicated doctest files. You can, however, also use perftester as a separate performance testing framework. Read on to learn more about that. What's more, perftester offers more functionalities, and a config object that gives you much more control over testing.
That's all in this short introduction. If you're interested in more advanced use of perftester, read on for a far more detailed introduction. In addition, files in the docs folder explain in detail the particular functionalities that perftester offers.
Introduction
perftester is a lightweight package for simple performance testing in Python. Here, performance refers to execution time and memory usage, so performance testing means testing if a function performs quickly enough and does not use too much RAM. In addition, the module offers you simple functions for straightforward benchmarking, in terms of both execution time and memory.
Under the hood, perftester is a wrapper around two functions from other modules:
- perftester.time_benchmark() and perftester.time_test() use timeit.repeat();
- perftester.memory_usage_benchmark() and perftester.memory_usage_test() use memory_profiler.memory_usage().
What perftester offers is a testing framework with syntax as simple as possible.
You can use perftester in three main ways:
- in an interactive session, for simple benchmarking of functions;
- as part of another testing framework, like doctest or pytest; and
- as an independent testing framework.
The first way is a different type of use from the other two. I use it to learn the behavior of the functions (in terms of execution time and memory use) I am currently working on, so not for actual testing.
When it comes to actual testing, it's difficult to say which of the last two ways is better or more convenient: it may depend on how many performance tests you have, and how much time they take. If the tests do not take more than a couple of seconds, then you can combine them with unit tests. But if they take a lot of time, you should likely make them independent of unit tests, and run them from time to time.
Using perftester
Use it as a separate testing framework
To use perftester that way:
- Collect tests in Python modules whose names start with "perftester_"; for instance, "perftester_module1.py", "perftester_module2.py", and the like.
- Inside these modules, collect testing functions whose names start with "perftester_"; for instance, def perftester_func_1(), def perftester_func_2(), and the like (note how similar this approach is to the one pytest uses).
- You can create a config_perftester.py file, in which you can change any configuration you want, using the perftester.config object. The file should be located in the folder from which you will run the CLI command perftester. If this file is not there, perftester will use its default configuration. Note that config_perftester.py is a Python module, so perftester configuration is done in actual Python code.
- Now you can run performance tests using the perftester command in your shell. You can do it in three ways:
  - $ perftester recursively collects all perftester modules from the directory in which the command was run, and from all its subdirectories; then it runs all the collected perftester tests;
  - $ perftester path_to_dir recursively collects all perftester modules from path_to_dir/ and runs all perftester tests located in them;
  - $ perftester path_to_file.py runs all perftester tests from the module given in the path.
Read more about using perftester that way here.
It does make a difference how you do that. When you run the perftester command for each testing file independently, each file will be tested in a separate session, that is, with a new instance of the pt.config object. When you run the command for a directory, all the functions from that directory will be tested in one session. And when you run a bare perftester command, all your tests will be run in one session.
There is no best approach, but remember to choose one that suits your needs.
Use perftester inside pytest
This is a very simple approach, perhaps the simplest one: when you use pytest, you can simply add perftester testing functions to pytest testing functions; that way, the two frameworks are combined, or rather the pytest framework runs the perftester tests. The amount of additional work is minimal.
For instance, you can write the following test function:
import perftester as pt
from my_module import f1 # assume that f1 takes two arguments, a string (x) and a float (y)
def test_performance_of_f1():
    pt.time_test(
        f1,
        raw_limit=10, relative_limit=None,
        x="whatever string", y=10.002)
This will use either the settings for this particular function (if you set them in pt.config) or the default settings (also from pt.config). However, you can also use Number and Repeat arguments, in order to overwrite these settings (passed to timeit.repeat() as number and repeat, respectively) for this particular function call:
import perftester as pt
from my_module import f1 # assume that f1 takes two arguments, a string (x) and a float (y)
def test_performance_of_f1():
    pt.time_test(
        f1,
        raw_limit=10, relative_limit=None,
        x="whatever string", y=10.002,
        Number=1000, Repeat=3)