# ApprovalTests.Python

ApprovalTests for Python
<!-- toc -->
## Contents

- What can I use ApprovalTests for?
- Getting Started
- Overview
- Reporters
- Support and Documentation
- For developers
  - Weekly Ensemble
  - Pull Requests
<!-- endToc -->
**Capturing Human Intelligence** - ApprovalTests is an open source assertion/verification library to aid testing.

`approvaltests` is the ApprovalTests port for Python.

For more information see: www.approvaltests.com.
## What can I use ApprovalTests for?
You can use ApprovalTests to verify objects that require more than a simple assert including long strings, large arrays,
and complex hash structures and objects. ApprovalTests really shines when you need a more granular look at the test
failure. Sometimes, trying to find a small difference in a long string printed to STDOUT is just too hard!
ApprovalTests solves this problem by providing reporters which let you view the test results in one of many popular diff
utilities.
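To make the "find the small difference" problem concrete, here is a minimal sketch using only Python's standard `difflib` (not part of ApprovalTests) to show the kind of focused view a diff reporter gives you:

```python
import difflib

# Two long multi-line outputs that differ in exactly one place.
approved = "line 1\nline 2\nline 3\nline 4\n"
received = "line 1\nline 2\nline X\nline 4\n"

# A plain assert only tells you the strings differ; a diff pinpoints where.
diff = list(
    difflib.unified_diff(
        approved.splitlines(keepends=True),
        received.splitlines(keepends=True),
        fromfile="approved",
        tofile="received",
    )
)
print("".join(diff))
```

A diff utility launched by a reporter gives you this same pinpointed view side by side, instead of a wall of text on STDOUT.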
## Getting Started
### What Are Approvals

If you are new to this concept or want a deeper understanding, start here.
### New Projects

If you are starting a new project, we suggest you use the Starter Project: clone it and go. It's great for exercises, katas, and greenfield projects.
### Minimal Example Tutorial

If this is your first time approval testing in Python, consider starting with the Minimal Example Tutorial.
### Adding to Existing Projects

From PyPI:

```shell
pip install approvaltests
```
## Overview
Approvals work by comparing the current test results to a golden master. If no golden master exists, you can capture a snapshot of the current results and use that as the golden master. The reporter helps you manage the golden master. Whenever your current results differ from the golden master, Approvals launches an external application for you to examine the differences. Either you update the master because the changes were expected and are good, or you go back to your code and update or roll back your changes to bring your results back in line with the golden master.
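The golden-master cycle above can be sketched in plain Python. This is an illustrative model, not ApprovalTests' actual implementation; the file naming mirrors the `approved.txt`/`received.txt` convention used by the CLI described below:

```python
from pathlib import Path


def verify_against_golden_master(received: str, test_id: str, directory: Path) -> None:
    """Compare `received` to the golden master; on mismatch, write a
    .received.txt snapshot for a human to inspect and (maybe) approve."""
    approved_file = directory / f"{test_id}.approved.txt"
    received_file = directory / f"{test_id}.received.txt"

    approved = approved_file.read_text() if approved_file.exists() else None
    if approved == received:
        received_file.unlink(missing_ok=True)  # clean up a stale received file
        return

    received_file.write_text(received)  # snapshot for the reporter to show
    raise AssertionError(f"{received_file} does not match {approved_file}")
```

Approving a result then amounts to renaming the `.received.txt` file to `.approved.txt`.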
### Example using pytest
<!-- snippet: getting_started_with_pytest.py --><a id='snippet-getting_started_with_pytest.py'></a>
```python
from approvaltests.approvals import verify


def test_simple() -> None:
    result = "Hello ApprovalTests"
    verify(result)
```
<sup><a href='/tests/examples/getting_started_with_pytest.py#L1-L6' title='Snippet source file'>snippet source</a> | <a href='#snippet-getting_started_with_pytest.py' title='Start of snippet'>anchor</a></sup>
<!-- endSnippet -->

Install the plugin pytest-approvaltests and use it to select a reporter:

```shell
pip install pytest-approvaltests
pytest --approvaltests-use-reporter='PythonNative'
```
The reporter is used both to alert you to changes in your test output and to provide a tool to update the golden master. In this snippet, we chose the 'PythonNative' reporter when we ran the tests. For more information about selecting reporters, see the documentation.
### Example using unittest
<!-- snippet: getting_started_with_unittest.py --><a id='snippet-getting_started_with_unittest.py'></a>
```python
import unittest

from approvaltests.approvals import verify


class GettingStartedTest(unittest.TestCase):
    def test_simple(self) -> None:
        verify("Hello ApprovalTests")


if __name__ == "__main__":
    unittest.main()
```
<sup><a href='/tests/examples/getting_started_with_unittest.py#L1-L12' title='Snippet source file'>snippet source</a> | <a href='#snippet-getting_started_with_unittest.py' title='Start of snippet'>anchor</a></sup>
<!-- endSnippet -->

This example has the same behaviour as the pytest version, but uses the built-in test framework unittest instead.
### Example using CLI
You can invoke a verify() call from the command line. This allows invoking python approvals from any other stack via subprocesses.
#### Usage

```shell
python -m approvaltests --test-id hello --received "hello world!"
```

or

```shell
python -m approvaltests -t hello -r "hello world!"
```

or

```shell
echo "hello world!" | python -m approvaltests -t hello
```
#### Argument Definitions

- `--test-id` or `-t`: Test identifier, used to name the `approved.txt` and `received.txt` files for the test.
- `--received` or `-r`: The output of the program under test (a string) that is passed to the verify method.
- `stdin`: Instead of providing a `received` argument, you may pipe the output via `stdin`.
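If you are driving this CLI from another process, the command line can be assembled programmatically. A small sketch (the wrapper function below is our own, not part of approvaltests):

```python
from typing import List, Optional


def build_approvals_command(test_id: str, received: Optional[str] = None) -> List[str]:
    """Build the argv for `python -m approvaltests`; omit `received`
    when the output will arrive on stdin instead."""
    cmd = ["python", "-m", "approvaltests", "--test-id", test_id]
    if received is not None:
        cmd += ["--received", received]
    return cmd


# A caller would then run it, e.g.:
# subprocess.run(build_approvals_command("hello"), input="hello world!", text=True)
```

This keeps quoting and argument order out of hand-built shell strings, which matters when `received` contains spaces or shell metacharacters.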
## Reporters

### Selecting a Reporter
All `verify...()` functions take an optional `options` parameter that can configure reporters (as well as many other aspects). ApprovalTests.Python comes with a few reporters configured, supporting Linux, macOS, and Windows. In the example shown below, we pass in an `Options` object with a reporter we select directly:
<!-- snippet: select_reporter_from_class --><a id='snippet-select_reporter_from_class'></a>
```python
class TestSelectReporterFromClass(unittest.TestCase):
    def test_simple(self):
        verify("Hello", options=Options().with_reporter(ReportWithBeyondCompare()))
```
<sup><a href='/tests/samples/test_getting_started.py#L26-L32' title='Snippet source file'>snippet source</a> | <a href='#snippet-select_reporter_from_class' title='Start of snippet'>anchor</a></sup>
<!-- endSnippet -->

You can also use the GenericDiffReporterFactory to find and select the first diff utility that exists on your system. An advantage of this method is that you can modify the reporters.json file directly to handle your unique system.
<!-- snippet: select_reporter_from_factory --><a id='snippet-select_reporter_from_factory'></a>
```python
class TestSelectReporter(unittest.TestCase):
    @override
    def setUp(self):
        self.factory = GenericDiffReporterFactory()

    def test_simple(self):
        verify(
            "Hello", options=Options().with_reporter(self.factory.get("BeyondCompare"))
        )
```
<sup><a href='/tests/samples/test_getting_started.py#L11-L23' title='Snippet source file'>snippet source</a> | <a href='#snippet-select_reporter_from_factory' title='Start of snippet'>anchor</a></sup>
<!-- endSnippet -->

Or you can build your own GenericDiffReporter on the fly:
<!-- snippet: custom_generic_diff_reporter --><a id='snippet-custom_generic_diff_reporter'></a>
```python
class GettingStartedTest(unittest.TestCase):
    def test_simple(self):
        verify(
            "Hello",
            options=Options().with_reporter(
                GenericDiffReporter.create(r"C:\my\favorite\diff\utility.exe")
            ),
        )
```
<sup><a href='/tests/samples/test_getting_started.py#L35-L46' title='Snippet source file'>snippet source</a> | <a href='#snippet-custom_generic_diff_reporter' title='Start of snippet'>anchor</a></sup>
<!-- endSnippet -->

As long as C:/my/favorite/diff/utility.exe can be invoked from the command line in the format `utility.exe file1 file2`, it will be compatible with GenericDiffReporter. Otherwise you will have to derive your own reporter, which we won't cover here.
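The `utility.exe file1 file2` contract is simple enough to sketch: a generic reporter only needs to launch the tool with the received and approved file paths. This is an illustration of the contract, not the library's own code:

```python
import shutil
import subprocess


def launch_diff_tool(tool_path: str, received: str, approved: str) -> None:
    """Launch any diff utility that follows the `tool file1 file2` convention."""
    if shutil.which(tool_path) is None:
        raise FileNotFoundError(f"diff utility not found: {tool_path}")
    # GenericDiffReporter-style invocation: the tool, then the two files to compare.
    subprocess.Popen([tool_path, received, approved])
```

Any tool that cannot be driven this way (for example, one that needs extra flags between the file arguments) is where a custom reporter becomes necessary.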
### JSON file for collection of reporters

To wrap things up, note that you can completely replace the collection of reporters known to the reporter factory by writing your own JSON file and loading it. For example, if you had C:/myreporters.json:
```json
[
    ["BeyondCompare4", "C:/Program Files (x86)/Beyond Compare 4/BCompare.exe"],
    ["WinMerge", "C:/Program Files (x86)/WinMerge/WinMergeU.exe"],
    ["Tortoise", "C:/Program Files (x86)/TortoiseSVN/bin/tortoisemerge.exe"]
]
```
You could then use that file by loading it into the factory:
```python
import unittest

from approvaltests.approvals import verify
from approvaltests.reporters.generic_diff_reporter_factory import GenericDiffReporterFactory
```
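To illustrate what loading a custom reporters file amounts to, here is a stdlib-only sketch that reads a JSON file in the format above and picks the first entry whose executable actually exists on the system. The function name is illustrative, not the approvaltests API:

```python
import json
import shutil
from pathlib import Path


def first_working_reporter(json_path: Path):
    """Return (name, path) for the first reporter whose executable exists,
    or None if no entry in the JSON file points at an installed tool."""
    reporters = json.loads(json_path.read_text())  # [["Name", "path"], ...]
    for name, exe in reporters:
        if shutil.which(exe) is not None:
            return name, exe
    return None
```

The real factory does the equivalent lookup for you, so your tests can stay free of machine-specific tool paths.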