Pylero
Python wrapper for the Polarion WSDL API
Welcome to Pylero, the Python wrapper for the Polarion WSDL API. Pylero provides native Python access to Polarion objects and functionality through an object-oriented interface, letting developers work with Polarion naturally without worrying about its implementation details.
All Pylero objects inherit from BasePolarion. The objects in the library are generated from the SOAP factory class using the suds library, and each class's attributes are generated dynamically as properties, based on a mapping dict between the Pylero naming convention and the Polarion attribute names.
The use of properties allows the pylero object attributes to be virtual with no need for syncing between them and the Polarion objects they are based on.
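The idea behind these mapping-driven properties can be sketched in plain Python. This is an illustration only, not pylero's actual implementation; the names `_cls_suds_map` and `_suds_object` here merely mimic the concept:

```python
# Illustration only: mapping-dict-driven properties, in the spirit of pylero.
class BasePolarion:
    _cls_suds_map = {}  # pylero attribute name -> Polarion attribute name

    def __init__(self):
        self._suds_object = {}  # stands in for the underlying suds SOAP object

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        # Generate a property per mapped attribute; reads and writes go
        # straight through to the SOAP object, so nothing needs syncing.
        for py_name, polarion_name in cls._cls_suds_map.items():
            def getter(self, _p=polarion_name):
                return self._suds_object.get(_p)
            def setter(self, value, _p=polarion_name):
                self._suds_object[_p] = value
            setattr(cls, py_name, property(getter, setter))

class TestCase(BasePolarion):
    _cls_suds_map = {"title": "title", "work_item_id": "id"}

tc = TestCase()
tc.title = "My case"             # writes straight through to the SOAP object
print(tc._suds_object["title"])  # -> My case
```

Because the properties are virtual views onto the underlying SOAP object, there is no separate copy of the data to keep consistent.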
The Polarion WSDL API does not validate or verify the data passed to it, so the Pylero library handles this itself: all enums are validated before being sent to the server, and an invalid value raises an error. A number of workflow conveniences are also included; for example, creating a Document automatically creates its Heading work item.
Polarion work items are configured per installation. To provide native work item objects (such as TestCase), the library connects to the Polarion server, downloads the list of configured work item types, and creates a class for each.
Important Notice
Polarion Web Service (SOAP/WSDL API) is in maintenance mode since Polarion 2410.
Siemens has announced that the Polarion Web Service API used by Pylero is now in maintenance mode. For new features and forward-looking development, users are advised to check the Polarion REST API.
Pylero will continue to work with the existing WSDL API, but no new features will be added to the WSDL API by Polarion.
Installation
Install from PyPI
The Pylero package is published on PyPI:
https://pypi.org/project/pylero/
Install the Pylero PyPI package with:
$ pip install pylero
By default, the latest package and its dependencies will be installed.
Install from repo
Pylero is located in a git repository and can be cloned from:
$ git clone https://github.com/RedHatQE/pylero.git
From the root of the project, run:
$ pip install .
Build pip package
After cloning the repo, run the following from its root directory:
$ python -m build
Both wheel and sdist formats will be built, and the packages can be found under the dist directory. Either file can then be used to install the package locally with pip install.
Pylero must be configured (see next section) before it can be used.
Configuration
A configuration file must be filled out. It must be located either in the current dir (the dir the script is executed from), named .pylero, or in the user's home dir, ~/.pylero.
Default settings are stored in LIBDIR/pylero.cfg. This file should not be modified, as it will be overwritten with any future updates. Certificates should be verified automatically, but if they aren't, you can add the path to your CA to the cert_path config option. These are the configurable values:
[webservice]
url=https://{your polarion web URL}/polarion
svn_repo=https://{your polarion web URL}/repo
user={your username}
password={your password}
token={your personal access token}
default_project={your default project}
#cert_path=/dir/with/certs
#disable_manual_auth=False
Login precedence is as follows: if a token is given, it is used by default. Otherwise, if both user and password are given, the password is used for login. If a user is provided but the password is blank, you will be prompted for a password. If none of these are provided, you will be prompted for a token before accessing any of the Pylero objects.
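The precedence can be sketched as a small helper. This function is illustrative only and is not part of pylero:

```python
def resolve_login(token=None, user=None, password=None):
    """Sketch of the login precedence described above (not pylero's code)."""
    if token:
        return ("token", token)
    if user and password:
        return ("password", (user, password))
    if user:
        return ("prompt_for_password", user)
    return ("prompt_for_token", None)

print(resolve_login(token="abc123", user="me"))  # -> ('token', 'abc123')
print(resolve_login(user="me", password="pw"))   # -> ('password', ('me', 'pw'))
print(resolve_login(user="me"))                  # -> ('prompt_for_password', 'me')
print(resolve_login())                           # -> ('prompt_for_token', None)
```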
These can also be overridden with the following environment variables:
POLARION_URL
POLARION_REPO
POLARION_USERNAME
POLARION_PASSWORD
POLARION_TOKEN
POLARION_TIMEOUT
POLARION_PROJECT
POLARION_CERT_PATH
POLARION_DISABLE_MANUAL_AUTH
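For example, to override the configured server and project for one session (the URLs and values below are hypothetical placeholders):

```shell
# Hypothetical values; each variable overrides the matching .pylero setting
export POLARION_URL="https://polarion.example.com/polarion"
export POLARION_REPO="https://polarion.example.com/repo"
export POLARION_TOKEN="my-personal-access-token"
export POLARION_PROJECT="MYPROJ"
```

Running pylero (or your script) in the same shell will pick up these overrides.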
Requirements
The install_requires attribute in setup.py installs the following requirements:
suds; python_version < '3.0'
suds-py3; python_version >= '3.0'
click
readline; python_version <= '3.6'
Usage
There is a pylero script installed that opens a python shell with all the objects in the library already loaded:
$ pylero
>>> tr = TestRun("example", project_id="project_name")
Alternatively, you can open a python shell and import the objects that you want to use:
$ python
Python 2.6.6 (r266:84292, Nov 21 2013, 10:50:32)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-4)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from pylero.test_run import TestRun
>>> tr = TestRun("example", project_id="project_name")
Examples
The work item types used in these examples may not exactly match the types your Polarion server is configured with. In those cases, importing TestCase, Requirement, etc. will fail. To find out which work item types are configured, either ask your Polarion admin or list them with the following code:
>>> from pylero.work_item import *
>>> globals()['workitems'].values()
dict_values(['TestCase', 'TestSuite', 'BusinessCase', 'Requirement', 'ChangeRequest', 'IncidentReport', 'Defect', 'Task', 'Risk'])
Note: This is only required once when you start using Pylero, as the work item types don't usually change. The values listed above are importable, e.g. from pylero.work_item import TestCase; the exact names may differ based on the values returned for your server.
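Since the configured types vary per server, a defensive import helper can avoid hard failures. This is a sketch; import_work_item is not a pylero API:

```python
import importlib

def import_work_item(name, module="pylero.work_item"):
    """Return the named work item class if your server configures it, else None.

    Sketch only: import_work_item is not part of pylero.
    """
    try:
        return getattr(importlib.import_module(module), name)
    except (ImportError, AttributeError):
        return None

# Usage (against your Polarion-configured pylero install):
# TestCase = import_work_item("TestCase")
# if TestCase is None:
#     raise RuntimeError("TestCase is not configured on this server")
```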
import datetime
from pylero.test_run import TestRun
from pylero.test_record import TestRecord
from pylero.work_item import TestCase, Requirement, _WorkItem
from pylero.document import Document
# TestSteps and TestStep are used further below; module paths may vary by version
from pylero.test_steps import TestSteps
from pylero.test_step import TestStep
# Creating a Test Run Template:
tr = TestRun.create_template("myproj",
"Static Query Test",
parent_template_id="Empty",
select_test_cases_by="staticQueryResult",
query="type:testcase AND status:approved")
# Creating a Test Run:
tr = TestRun.create("myproj", "My Test Run", "Static Query Test")
# changing status
tr.status = "inprogress"
# getting and changing a custom attribute in TestRun
arch = tr.get_custom_field("arch")
arch = "i386"
tr.set_custom_field("arch", arch)
# saving the data to the server
tr.update()
# Adding a test record
tr.add_test_record_by_fields(test_case_id="MYPROJ-1813",
test_result="passed",
test_comment="went smoothly",
executed_by="user1",
executed=datetime.datetime.now(),
duration=10.50,
defect_work_item_id="MYPROJ-1824")
# Getting specific WorkItems
tc = TestCase(project_id="myproj", work_item_id="MYPROJ-2015")
req = Requirement(project_id="myproj", work_item_id="MYPROJ-2019")
# Getting required custom fields for specific Work Items
reqs = TestCase.custom_fields("myproj")[1]
# returns [u'caseimportance', u'caselevel', u'caseautomation', u'caseposneg']
reqs = Requirement.custom_fields("myproj")[1]
# returns [u'reqtype']
# Getting the valid values for the custom enumerations
tc.get_valid_field_values("caseimportance")
# returns [critical, high, medium, low]
# Creating a specific Work Item
tc = TestCase.create("myproj",
"Title",
"Description",
caseimportance="high",
caselevel="component",
caseautomation="notautomated",
caseposneg="positive")
# Note if the custom required fields are not specified, an exception will be raised
# Custom field for work items are accessed like regular attributes
tc.caseimportance = "critical"
# to save changes
tc.update()
# Creating a document
doc = Document.create("myproj", "Testing", "API doc", "The API Document",
["testcase"], document_type="testspecification")
# Adding a Functional Test Case work item to the document
wi = TestCase()
wi.tcmscaseid = "12345"
wi.title = "[GUI] Host Network QoS-'named'"
wi.author = "user1"
wi.tcmscategory = "Functional"
wi.caseimportance = "critical"
wi.status = "proposed"
wi.setup = "DC/Cluster/Host"
wi.teardown = """
Proceed with the VM Network QoS paradigm, that is, creating Network QoS
entities that can be shared between different networks - let's refer to this
as "named" QoS. These QoS entities are created via DC > QoS > Host Network.
"""
steps = TestSteps()
steps.keys = ["step", "expectedResult"]
step1 = TestStep()
step1.values = ["This is step 1", "Step 1 expected result"]
step2 = TestStep()
step2.values = ["This is step 2", "Step 2 expected result"]
arr_step = [step1, step2]
steps.steps = arr_step
wi.test_steps = steps
wi.caseautomation = "notautomated"
wi.caseposneg = "positive"
wi.caselevel = "component"
new_wi = doc.create_work_item(None, wi)
# Getting a list of documents in a space.
docs = Document.get_documents(proj="myproj", space="Testing")
# Create template from document
TestRun.create_template("myproj",
"tpl_tp_12071",
select_test_cases_by="staticLiveDoc",
doc_with_space="Testing/tp_12071")
# create a test run based on the template
tr = TestRun.create("myproj", "tp_12071_1", "tpl_tp_12071")
# process a record
rec = tr.records[0]
rec.duration = "10.0"
rec.executed_by = "user1"
rec.executed = datetime.datetime.now()
rec.result = "passed"
wi = _WorkItem(uri=rec.test_case_id)