Pairwise Difference Learning (PDL) is a meta-learning framework that leverages pairwise differences to transform multiclass problems into binary tasks. This repository includes the original PDL Classifier implementation, along with extended versions for regression and weighted learning scenarios.

Pairwise difference learning library (pdll)


The Pairwise Difference Learning (PDL) library is a Python module. It contains a scikit-learn-compatible implementation of the PDL Classifier, as described in Belaid et al. (2024).

The PDL Classifier (PDC) is a meta-learner that reduces a multiclass classification problem to a binary one: predicting whether two points are similar (same class) or different.

<img src="https://github.com/user-attachments/assets/e15057cf-fef8-4061-8bb9-611adde0128b" width="70%">

Installation

To install the package, run the following command:

pip install -U pdll

Usage

from pdll import PairwiseDifferenceClassifier

from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_blobs

# Generate random data with 2 features, 10 points, and 3 classes
X, y = make_blobs(n_samples=10, n_features=2, centers=3, random_state=0)

pdc = PairwiseDifferenceClassifier(estimator=RandomForestClassifier())
pdc.fit(X, y)
print('score:', pdc.score(X, y))

y_pred = pdc.predict(X)
proba_pred = pdc.predict_proba(X)

Please consult the examples/ directory for more examples.

How does it work?

The PDL algorithm transforms the multiclass classification problem into a binary classification problem: a base learner is trained on pairs of training points to predict whether the two points share the same class, and at inference time a query point is compared against anchor points whose classes are known. The examples below illustrate the idea.
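The pairwise transformation can be sketched as follows. This is a simplified illustration of the idea, not the pdll internals; the helper name `make_pair_data` and the choice of pair features (both points plus their difference) are illustrative assumptions.

```python
import numpy as np

def make_pair_data(X, y):
    """Build a pairwise training set: for every ordered pair (i, j) of
    training points, the features are both points side by side plus
    their difference, and the binary label is 1 when the two points
    share the same class."""
    n = len(X)
    i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    i, j = i.ravel(), j.ravel()
    X_pair = np.hstack([X[i], X[j], X[i] - X[j]])
    y_pair = (y[i] == y[j]).astype(int)  # 1 = similar, 0 = different
    return X_pair, y_pair

# Toy data: 4 points, 2 features, 2 classes -> 16 ordered pairs
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
y = np.array([0, 0, 1, 1])
X_pair, y_pair = make_pair_data(X, y)
print(X_pair.shape)  # (16, 6)
print(y_pair.sum())  # 8 pairs share a class
```

Any binary classifier can then be fit on (X_pair, y_pair); its similarity predictions are what the Iris example below denotes g_sym.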

Example 1: Graphical abstract

<img src="./results/abstract.png" width="800"/>

Example 2: PDC trained on the Iris dataset

<details> <summary>Click to show</summary> We provide a minimalist classification example using the Iris dataset. The dataset is balanced, so the prior probabilities of the 3 classes are equal: p(Setosa) = p(Versicolour) = p(Virginica) = 1/3

Three Anchor Points

  • Flower 1: y1 = Setosa
  • Flower 2: y2 = Versicolour
  • Flower 3: y3 = Virginica

One Query Point

  • Flower Q: yq (unknown target)

Pairwise Predictions The model predicts the likelihood that both points have a similar class:

  • g_sym(Flower Q, Flower 1) = 0.6
  • g_sym(Flower Q, Flower 2) = 0.3
  • g_sym(Flower Q, Flower 3) = 0.0

Given the above data, the first step is to update the priors: for an anchor with class y_i and similarity prediction g_i, the anchor's class receives posterior probability g_i, and each other class c receives p(c) · (1 − g_i) / (1 − p(y_i)).

Posterior using Flower 1:

  • p_post,1(Setosa) = 0.6
  • p_post,1(Versicolour) = (1/3 * (1 - 0.6)) / (1 - 1/3) = 0.2
  • p_post,1(Virginica) = (1/3 * (1 - 0.6)) / (1 - 1/3) = 0.2

Similarly, we calculate for anchors 2 and 3:

  • p_post,2(Setosa) = 0.35
  • p_post,2(Versicolour) = 0.30
  • p_post,2(Virginica) = 0.35

  • p_post,3(Setosa) = 0.5
  • p_post,3(Versicolour) = 0.5
  • p_post,3(Virginica) = 0.0

Averaging over the three predictions:

  • p_post(Setosa) = (0.6 + 0.35 + 0.5) / 3 ≈ 0.48
  • p_post(Versicolour) = (0.2 + 0.3 + 0.5) / 3 ≈ 0.33
  • p_post(Virginica) = (0.2 + 0.35 + 0.0) / 3 ≈ 0.18

Finally, the predicted class is the most likely prediction:

ŷ_q = arg max_{y ∈ Y} p_post(y) = Setosa

</details>
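The posterior update in the Iris walkthrough above can be checked numerically. This is a standalone sketch of the update rule using the example's numbers, not a call into the pdll API; the function name `posterior` is illustrative.

```python
# Uniform priors, as in the balanced Iris example
prior = {"Setosa": 1/3, "Versicolour": 1/3, "Virginica": 1/3}

def posterior(anchor_class, g_sym, prior):
    """Bayesian update against one anchor: the anchor's class gets
    probability g_sym; the remaining mass 1 - g_sym is split among the
    other classes in proportion to their priors."""
    post = {}
    for c, p in prior.items():
        if c == anchor_class:
            post[c] = g_sym
        else:
            post[c] = p * (1 - g_sym) / (1 - prior[anchor_class])
    return post

# The three anchors and their similarity predictions from the example
anchors = [("Setosa", 0.6), ("Versicolour", 0.3), ("Virginica", 0.0)]
posts = [posterior(c, g, prior) for c, g in anchors]

# Average the three posteriors and pick the most likely class
avg = {c: sum(p[c] for p in posts) / len(posts) for c in prior}
print(max(avg, key=avg.get))  # Setosa
```

Running this reproduces the per-anchor posteriors listed above (0.6/0.2/0.2, 0.35/0.30/0.35, 0.5/0.5/0.0) and the final prediction Setosa.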

Evaluation

To reproduce the experiments of the paper, run run_benchmark.py with a base learner and a dataset number between 0 and 99. Example:

python run_benchmark.py --model DecisionTreeClassifier --data 0

Scores will be stored in the ./results/tmp/ directory.

Experiment

We use 99 datasets from the OpenML repository and compare the performance of the PDC algorithm with 7 base learners, using the macro F1 score as the metric. The search space is inspired by TPOT, a state-of-the-art library for optimizing scikit-learn pipelines.

<details> <summary>Description of the search space per estimator</summary>

| Estimator | # parameters | # combinations |
|------------------------|--------------|----------------|
| DecisionTree | 4 | 350 |
| RandomForest | 7 | 1000 |
| ExtraTree | 6 | 648 |
| HistGradientBoosting | 6 | 486 |
| Bagging | 6 | 96 |
| ExtraTrees | 7 | 1000 |
| GradientBoosting | 5 | 900 |

</details> <details> <summary>Search space per estimator</summary>

| Estimator | Parameter | Values |
|--------------------------------|------------------------|------------------------------------------------------------|
| DecisionTreeClassifier | criterion | gini, entropy |
| | max depth | None, 1, 2, 4, 6, 8, 11 |
| | min samples split | 2, 4, 8, 16, 21 |
| | min samples leaf | 1, 2, 4, 10, 21 |
| RandomForestClassifier | criterion | gini, entropy |
| | min samples split | 2, 4, 8, 16, 21 |
| | max features | sqrt, 0.05, 0.17, 0.29, 0.41, 0.52, 0.64, 0.76, 0.88, 1.0 |
| | min samples leaf | 1, 2, 4, 10, 21 |
| | bootstrap | True, False |
| ExtraTreeClassifier | criterion | gini, entropy |
| | min samples split | 2, 5, 10 |
| | min samples leaf | 1, 2, 4 |
| | max features | sqrt, log2, None |
| | max leaf nodes | None, 2, 12, 56 |
| | min impurity decrease | 0.0, 0.1, 0.5 |
| HistGradientBoostingClassifier | max iter | 100, 10 |
| | learning rate | 0.1, 0.01, 1 |
| | max leaf nodes | 31, 3, 256 |
| | min samples leaf | 20, 4, 64 |
| | l2 regularization | 0, 0.01, 0.1 |
| | max bins | 255, 2, 64 |
| BaggingClassifier | n estimators | 10, 5, 100, 256 |
| | max samples | 1.0, 0.5 |
| | max features | 0.5, 0.9, 1.0 |
| | bootstrap | True, False |
| | bootstrap features | False, True |
| ExtraTreesClassifier | criterion | gini, entropy |
| | max features | sqrt, 0.05, 0.17, 0.29, 0.41, 0.52, 0.64, 0.76, 0.88, 1.0 |
| | min samples split | 2, 4, 8, 16, 21 |
| | min samples leaf | 1, 2, 4, 10, 21 |
| | bootstrap | False, True |
| GradientBoostingClassifier | learning rate | 0.1, 0.01, 1 |
| | min samples split | 2, 4, 8, 16, 21 |
| | min samples leaf | 1, 2, 4, 10, 21 |
| | subsample | 1.0, 0.05, 0.37, 0.68 |
| | max features | None, 0.15, 0.68 |

</details> <details> <summary>OpenML benchmark datasets</summary>

| data_id | NumberOfClasses | NumberOfInstances | NumberOfFeatures | NumberOfSymbolicFeatures | NumberOfFeatures_post_processing | MajorityClassSize | MinorityClassSize |
|--------:|----------------:|------------------:|-----------------:|-------------------------:|---------------------------------:|------------------:|------------------:|
| 43 | 2 | 306 | 4 | 2 | 3 | 225 | 81 |
| 48 | 3 | 151 | 6 | 3 | 5 | 52 | 49 |
| 59 | 2 | 351 | 35 | 1 | 34 | 225 | 126 |
| 61 | 3 | 150 | 5 | 1 | 4 | 50 | 50 |
| 164 | 2 | 106 | 58 | 58 | 57 | 53 | 53 |
| 333 | 2 | 556 | | | | | |
