CEILS
Counterfactual Explanations as Interventions in Latent Space (CEILS) is a methodology for generating counterfactual explanations that capture, by design, the underlying causal relations in the data, while at the same time providing feasible recommendations to reach the proposed profile.
Authors & contributors:
Riccardo Crupi, Alessandro Castelnovo, Daniele Regoli, Beatriz San Miguel Gonzalez
You can cite this work as:
@article{crupi2022counterfactual,
title={Counterfactual explanations as interventions in latent space},
author={Crupi, Riccardo and Castelnovo, Alessandro and Regoli, Daniele and San Miguel Gonzalez, Beatriz},
journal={Data Mining and Knowledge Discovery},
pages={1--37},
year={2022},
publisher={Springer}
}
Documentation
To know more about this research work, please refer to our full paper (ArXiv).
Currently, CEILS has been published and/or presented in:
- 8th Causal Inference Workshop at UAI (causalUAI2021) (Video) by Riccardo Crupi
- Workshop on Explainable AI in Finance @ICAIF 2021 by Beatriz San Miguel
- ICAART - 14th International Conference on Agents and Artificial Intelligence @ICAART 2022 by Beatriz San Miguel
- Data Mining and Knowledge Discovery Springer 2022
Installation
Create a new environment based on Python 3.9 or 3.6 and install the requirements.
Python 3.9:
pip install -r requirements.txt
Python 3.6:
pip install -r requirements_py36.txt
CEILS Workflow
CEILS workflow consists of the following steps:
<p align="center"> <img src="https://user-images.githubusercontent.com/92302358/140288321-2ca4caf8-2e32-421c-916c-b466d6006663.png" alt="drawing" class="center" width="300" height="300"/> </p>

Inputs
Two main inputs are needed:
- Data. Prepare your dataset as a pandas.DataFrame for the features (X) and a pandas.Series for the target variable (Y)
- Causal graph. Define your causal relations in a causal graph (G) using networkx.DiGraph.
Moreover, you need to define the feature constraints (immutable, higher, lower) as a Python dictionary, e.g. constraints_features = {"immutable": ["native-country"], "higher": ["age"]}
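For instance, the inputs could be prepared as follows (a minimal sketch: the feature names and values are toy data assumed for illustration, not a real dataset):

```python
import pandas as pd
import networkx as nx

# Toy dataset: features X and binary target Y (illustrative values only)
X = pd.DataFrame({
    "age": [39, 50, 38, 53],
    "education-num": [13, 13, 9, 7],
    "native-country": [0, 0, 1, 0],
})
Y = pd.Series([0, 1, 0, 1], name="income")

# Causal graph G over the feature names: edges point cause -> effect
G = nx.DiGraph()
G.add_edges_from([
    ("age", "education-num"),
    ("native-country", "education-num"),
])

# Feature constraints: immutable features cannot change in a
# counterfactual, "higher" features may only increase
constraints_features = {"immutable": ["native-country"], "higher": ["age"]}
```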
Generation of structural equations and the model in the latent space
In the method create_structural_eqs(X, Y, G) from core.build_struct_eq the following steps are carried out:
- generation of structural equations (F) mapping U to X (F: U->X)
- computation of residuals (U)
- generation of original ML model to predict the target variable Y using the features dataset (C: X->Y)
- composition of the model in the latent space, integrating the previous components (C_causal(U) = C(F(U)))
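The relation between the latent residuals U, the structural equations F, and the composed classifier C_causal can be illustrated on a toy linear SCM (a self-contained sketch with synthetic data; the repository's actual implementation fits the structural equations differently):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy SCM with two features: x1 is a root node, x2 depends on x1
n = 1000
u1 = rng.normal(size=n)
u2 = rng.normal(size=n)
x1 = u1                      # root node: structural equation is the identity
x2 = 2.0 * x1 + u2           # child node: linear structural equation

X = np.column_stack([x1, x2])

# Fit the structural equation of x2 from data (least squares on its parent x1)
w = np.linalg.lstsq(X[:, [0]], X[:, 1], rcond=None)[0][0]

# Residuals U: the part of each feature not explained by its parents
U = np.column_stack([x1, x2 - w * x1])

# F maps latent U back to feature space X, i.e. F(U) = X
def F(U):
    x1 = U[:, 0]
    x2 = w * x1 + U[:, 1]
    return np.column_stack([x1, x2])

# A toy classifier C on features, and its composition in latent space
def C(X):
    return (X[:, 0] + X[:, 1] > 0).astype(int)

def C_causal(U):
    return C(F(U))

# Reconstruction check: F(U) recovers X up to numerical error
assert np.allclose(F(U), X)
```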
Summary of the main variables and functions involved:
<p align="center"> <img src="https://user-images.githubusercontent.com/92302358/140289908-c827961d-f4b7-457d-9bd8-4e8f226fbf4f.png" alt="drawing" class="center" width="300" height="300"/> </p>

Generation of counterfactual explanations
In the method create_counterfactuals(X, Y, G, F, C_causal, constraints_features, numCF=20) from core.counter_causal_generator, two sets of counterfactual explanations will be generated based on:
- CEILS approach: uses the model in the latent space and a general counterfactual generator (Alibi in our current implementation)
- Baseline approach: uses the original model and the library Alibi
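The core idea of the CEILS approach can be sketched on the toy SCM from the previous section: search for a counterfactual in latent space, then map it back through F so that downstream causal effects propagate automatically. The brute-force search below is only a stand-in for a real generator such as Alibi:

```python
import numpy as np

# Toy linear SCM: x2 = 2*x1 + u2; classifier C(x) = 1 if x1 + x2 > 0
w = 2.0

def F(u):
    x1 = u[0]
    x2 = w * x1 + u[1]
    return np.array([x1, x2])

def C(x):
    return int(x[0] + x[1] > 0)

def C_causal(u):
    return C(F(u))

# Factual instance in latent space, currently classified as 0
u = np.array([-1.0, 0.5])
assert C_causal(u) == 0

# Naive latent-space search (stand-in for a generic counterfactual
# generator): increase u1 until the composed prediction flips
u_cf = u.copy()
while C_causal(u_cf) == 0:
    u_cf[0] += 0.01

# Counterfactual in feature space: intervening on u1 also moves x2,
# because the change propagates through the structural equation
x_cf = F(u_cf)
```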
Evaluation
In the method calculate_metrics(X, Y, G, categ_features, constraints_features) from core.metrics, a set of metrics will be computed to compare the two sets of counterfactual explanations.
The metrics will be printed.
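As an illustration, proximity and sparsity are two metrics commonly used to compare counterfactual sets; the actual metric set computed by calculate_metrics may differ, and the arrays below are made-up toy data:

```python
import numpy as np

# Toy factual instances and two candidate counterfactual sets
x_factual  = np.array([[1.0, 2.0], [0.0, 1.0]])
x_cf_ceils = np.array([[1.2, 2.4], [0.3, 1.6]])
x_cf_base  = np.array([[2.0, 2.0], [1.5, 1.0]])

def proximity(x, x_cf):
    # Mean L1 distance between factual and counterfactual instances
    return np.abs(x - x_cf).sum(axis=1).mean()

def sparsity(x, x_cf, tol=1e-9):
    # Mean number of features changed per counterfactual
    return (np.abs(x - x_cf) > tol).sum(axis=1).mean()

print(proximity(x_factual, x_cf_ceils))
print(sparsity(x_factual, x_cf_base))
```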
Experiments
Currently we have included 3 experiments based on public datasets and 2 experiments with synthetic data.
Each experiment is under a dedicated folder in:
\experiments_run
We recommend checking the run_experiment.py file to learn the details and understand the whole CEILS workflow.
The synthetic dataset experiments are the best way to get a first understanding of our solution.