| |activity| |doc| |version| | |py-versions| |downloads| |stars| |forks| | |license| |packages| |zenodo|
aggregate: working with actuarial compound distributions
========================================================

Purpose
-------
aggregate builds approximations to compound (aggregate) probability distributions quickly and accurately.
It can be used to solve insurance, risk management, and actuarial problems using realistic models that reflect
underlying frequency and severity. It delivers the speed and accuracy of parametric distributions to situations
that usually require simulation, making it as easy to work with an aggregate (compound) probability distribution
as the lognormal. aggregate includes an expressive language called DecL to describe aggregate distributions
and is implemented in Python under an open source BSD license.
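For orientation, here is a minimal numpy sketch (not the library's API) of the FFT-based compound distribution calculation that aggregate automates, assuming a severity distribution already discretized onto a bucket grid:

```python
import numpy as np

# Minimal sketch (not the aggregate API): approximate a compound Poisson
# aggregate distribution using FFTs, the core technique the package automates.
# Assumption: the severity is already discretized on a fixed bucket grid.

n = 1 << 10                      # number of buckets (power of two)
en = 3.0                         # expected claim count (Poisson frequency)

# toy discrete severity: mass on buckets 1, 2, 3
sev = np.zeros(n)
sev[1], sev[2], sev[3] = 0.5, 0.3, 0.2

# aggregate distribution: invert exp(en * (FFT(severity) - 1)),
# the Poisson pgf evaluated at the severity characteristic vector
agg = np.fft.ifft(np.exp(en * (np.fft.fft(sev) - 1))).real

# sanity checks: probabilities sum to 1 and the mean is en * E[severity]
mean = (np.arange(n) * agg).sum()
```

With `n` large enough that the aggregate tail is negligible, this matches the exact compound distribution to floating-point accuracy; choosing the bucket size and `n` well is a large part of what the library does for you.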
Aggregate White Paper
---------------------
`Aggregate: fast, accurate, and flexible approximation of compound probability distributions <https://www.cambridge.org/core/journals/annals-of-actuarial-science/article/aggregate-fast-accurate-and-flexible-approximation-of-compound-probability-distributions/1BF9A534D944D983B1D780C60885F065>`_ describes the ``Aggregate`` class within aggregate. This paper was published in the Actuarial Software series of the peer-reviewed journal `Annals of Actuarial Science <https://www.cambridge.org/core/journals/annals-of-actuarial-science>`_.
The paper describes the purpose, implementation, and use of ``Aggregate``, showing how it can be used to create and manipulate compound frequency-severity distributions.
Version History
---------------
.. Conda Forge: https://github.com/conda-forge/aggregate-feedstock https://anaconda.org/conda-forge/aggregate/files
0.30.0
~~~~~~~~~~
- Added ``comonotonic_allocations`` to ``Portfolio`` to implement the method of Denuit, Michel, et al. "Comonotonicity and Pareto optimality, with application to collaborative insurance." Insurance: Mathematics and Economics 120 (2025): 1-16. This uses numba if available. Warning: it can be very slow without numba!
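As background, comonotonicity can be illustrated in a few lines of numpy (this is only the textbook construction, not the Denuit et al. algorithm): coupling risks through a common uniform driver, i.e. sorting both samples, makes quantiles of the sum additive.

```python
import numpy as np

# Illustration only (not comonotonic_allocations): a comonotonic version of
# two risks couples them through a common driver by sorting both samples.
# For comonotonic risks, quantiles (VaR) of the sum are additive.

rng = np.random.default_rng(42)
x = rng.lognormal(0.0, 1.0, size=100_000)
y = rng.gamma(2.0, 1.0, size=100_000)

xs, ys = np.sort(x), np.sort(y)      # comonotonic coupling
total = xs + ys                      # comonotonic sum (already sorted)

p = 0.99
q_sum = np.quantile(total, p)                     # quantile of the sum
q_add = np.quantile(xs, p) + np.quantile(ys, p)   # sum of quantiles: equal
```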
0.29.0
~~~~~~~~~~
- Added ``Portfolio.analyze_distortions2`` to iron out annoyances with the current function while retaining it for backwards compatibility.
- Added ``Portfolio.calibrate_distortions2`` for the same reasons; arguments ``coc`` and ``reg_p``.
- Added ``tvar_info_df`` and ``plot_affine`` to ``spectral`` for working with weighted TVaR distortions.
- Changed the behavior of ``Distortion.random_distortion`` so that the input number of knots includes the mass and mean if present.
- Added ``random_distortion_ex(n=1, random_state=None)`` to the ``Distortion`` class to simulate across types, extending ``random_distortion``, which only produces a ``wtdtvar``.
0.28.1
~~~~~~~~~~
- Updated ``applymap`` to ``map`` for the Pandas API change.
0.28.0
~~~~~~~~~~
- Added ``standard_shape`` to ``Distortion`` and added it to the ``distortion_df`` created by ``Portfolio.calibrate_distortions``.
- Updated dependencies and imports for doc build.
- Added ``spectral.consistent_distortions`` to create a consistent family of representative distortions.
0.27.1
~~~~~~~~~~
- Fixed a bug with recommend unit in a portfolio with all fixed components.
- Adjusted line styles in the twelve plot and clarified its use in the docstring.
- Corrected the ROE calculation of the natural allocation premium when g(s) = 1.
0.27.0
~~~~~~~~~~
* Removed control over logging and just use ``logger = logging.getLogger(__name__)`` in all modules. Removed ``log_test`` function and ``LoggerManager`` class.
* Removed ``numba`` as a requirement - huge library, hardly used. Only occurs in spectral module.
* Replaced the ``build_docs`` batch file with ``doc-test``, which mirrors the readthedocs process more closely.
0.26.0
~~~~~~~~~~
- ``extensions`` no longer sets ``pd.float_format`` to Engineering.
- Added ``tweedie.Tweedie`` class to ``extensions`` to compute the Tweedie class distributions for all valid :math:`p`. (Dangling jax dependence.)
0.25.0
~~~~~~~~~~
* Tweak ``extensions.ft.FourierTools``: added ``invert_simpson`` method using Simpson's rule,
better for stable distributions. This is the method used by ``scipy.stats``.
* Bumped to 0.25, which should have been done in 0.24.2 because that release added new functionality.
* Tidied docs
* ``knobble_fonts`` uses a serif font by default in matplotlib and sets up color mode by default.
0.24.2
~~~~~~~~~~
* Added ``Distortion.make_q`` to return the risk adjusted probabilities used
in pricing. Same logic as ``price_ex``. Makes it easy to compute the natural
allocation from a distortion.
* Added ``extensions.ft.FourierTools`` class, which performs direct inversion of a (continuous) Fourier transform (characteristic function)
using FFTs. This is particularly useful for stable distributions, where the Fourier transform is known but the density is not. See examples in Section 5 of the documentation.
* Added ``make_levy_chf`` to ``extensions`` to compute the characteristic function of a Levy stable distribution.
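The idea behind direct characteristic-function inversion can be sketched in pure numpy (illustration only, not the ``FourierTools`` implementation), recovering the standard normal density, the stable distribution with alpha = 2, from its characteristic function:

```python
import numpy as np

# Sketch of characteristic-function inversion (the idea behind
# extensions.ft.FourierTools; not the library's implementation). Recover a
# density from its characteristic function phi via
#     f(x) = (1 / 2 pi) * integral exp(-i t x) phi(t) dt,
# here for the standard normal, phi(t) = exp(-t**2 / 2).

t = np.linspace(-40, 40, 4001)        # truncated frequency grid
dt = t[1] - t[0]
phi = np.exp(-t**2 / 2)               # normal chf = stable chf with alpha = 2

def invert(x):
    """Riemann-sum approximation of the inversion integral at point x."""
    return ((np.exp(-1j * t * x) * phi).sum() * dt / (2 * np.pi)).real

f0 = invert(0.0)                      # density at 0: 1 / sqrt(2 pi)
```

This point-by-point sum is O(n) per evaluation point; the FFT-based approach inverts onto a whole grid of x values at once, which is what makes it practical for stable distributions where no closed-form density exists.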
0.24.1
~~~~~~~~~~
* Added script to build the documentation from a local clone of the repository.
* Added ``Aggregate.unwrap`` to adjust aggregates computed with too few buckets
but enough space. It unwraps the computed aggregate by adjusting the index. This
reverses the "wagon-wheel" effect, whereby FFTs wrap-around the end of the array.
* Vectorized ``utilities.estimate_agg_percentile`` for use in ``Aggregate.unwrap``.
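The wrap-around effect can be demonstrated directly (illustration only, not the ``unwrap`` implementation): an aggregate computed with too few buckets equals the true distribution folded modulo the FFT length.

```python
import numpy as np

# Demonstration of FFT wrap-around (the "wagon-wheel" effect that
# Aggregate.unwrap corrects; this is not the library code). Mass that
# belongs beyond the last bucket wraps to index mod n.

def agg_fft(sev, en, n):
    """Compound Poisson aggregate via an n-point FFT (sketch)."""
    s = np.zeros(n)
    s[:len(sev)] = sev
    return np.fft.ifft(np.exp(en * (np.fft.fft(s) - 1))).real

sev = np.array([0.0, 0.5, 0.5])       # severity mass on buckets 1 and 2
small = agg_fft(sev, 4.0, 8)          # too few buckets: the tail wraps
big = agg_fft(sev, 4.0, 1024)         # plenty of buckets: effectively exact

# the wrapped result equals the true distribution folded modulo 8
folded = big.reshape(-1, 8).sum(axis=0)
```

``unwrap`` works in the opposite direction: given enough total space, it re-assigns wrapped mass back to the correct indices.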
0.24.0
~~~~~~~~~~
* Added state to Distortions so they can be pickled. Involved separating part of ``Distortion.__init__``
into a new method, ``Distortion._complete_init``. This is called from ``__init__`` and ``__setstate__``.
Ensured ``_complete_init`` refers to arguments as ``self.argname``, not ``argname``, and set instance variables in the class ``__init__`` method.
* Fixed mixture g functions to handle input multidimensional arrays.
* Simplified ``Distortion.__repr__`` and ``Distortion.__str__``.
* Added ``Distortion.id`` to generate a unique ID depending on ``__dict__`` argument elements.
* Corrected ``g_prime`` for minimum distortion.
* Fixed biTVaR distortion to handle p1==1 by including the mass explicitly.
* Added ``Distortion.price_ex`` to combine best of price and price2 methods and improve flexibility. It sorts and summarizes if needed. Optional return formats.
* Added four numba compiled functions to ``Distortion`` for fast computation of ``g.g(1 - ps.cumsum())`` and ``g.price(kind='ask')``. These are ``tvar_gS``, ``bitvar_gS``, ``tvar_ra`` (for risk adjusted expected value) and ``bitvar_ra``. In each case the values are computed without any copies of the original data, making them far more memory efficient for very large input arrays. At the extreme, ``bitvar_ra`` results in a speed up of the order of 2000x in realistic situations, even with small (100s) input vectors. The functions are static members of ``Distortion`` (a numba requirement). They are not parallelized because of the cumulative computation of S. See the file PyWork/Distortion-price-tester.ipynb for tests (TODO: integrate into the documentation). This addition makes numba a required package.
* Removed dependency on ``titlecase`` package.
* Removed ``Distortion.calibrate`` method, which was not used and never tested. It lives with ``Portfolio``.
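The risk-adjusted probability construction behind ``make_q`` and ``price_ex`` can be sketched as follows (illustration only, not the library code); for a TVaR distortion the price reduces to the average of the largest outcomes:

```python
import numpy as np

# Sketch of distortion pricing with risk-adjusted probabilities (the idea
# behind Distortion.make_q and price_ex; not the library's implementation).
# For a distortion g, the risk-adjusted probability of the i-th smallest
# outcome is g(S_{i-1}) - g(S_i), where S is the survival function.

def distortion_price(x, g):
    """Risk-adjusted expected value of an equally likely sample x."""
    xs = np.sort(x)
    m = len(xs)
    S = 1 - np.arange(1, m + 1) / m          # S_i = P(X > x_i)
    gS = np.concatenate(([1.0], g(S)))       # prepend g(S_0) = g(1) = 1
    q = -np.diff(gS)                         # risk-adjusted probabilities
    return (xs * q).sum()

# TVaR_0.9 distortion: g(s) = min(s / 0.1, 1)
g = lambda s: np.minimum(s / 0.1, 1.0)
x = np.arange(1.0, 101.0)                    # sample 1, 2, ..., 100
price = distortion_price(x, g)               # mean of the largest 10 values
```

The distorted probabilities ``q`` sum to 1 and load the tail; this is the "natural allocation from a distortion" mentioned above, computed here by brute force.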
0.23.0
~~~~~~~~~~
* Added ``sample_df`` dataframe to ``Portfolio`` when created from a sample
to store the sample. Original sample is needed in various applications.
* Added ``swap_density_df(self, new_df, padding=1)`` to ``Portfolio``.
* Fixed errors in Case Studies caused by changes in Pandas.
* Added ability to create Markdown case output, rather than HTML.
* Added the beta distortion (generalizes the PH and dual).
* Updated ``np.alltrue`` to ``np.all``; updated ``NoConverge`` in ``scipy.optimize``.
* Added ``Distortion.calibrate`` to calibrate to a pricing target from input ``density_df`` (TODO: needs testing).
* Added ``wtdtvar`` to ``Distortion`` to compute the weighted TVaR from p values and weights, with masses and mean components.
* Added ``minimum`` to ``Distortion`` to create a new ``Distortion`` as the minimum of a list of input Distortions. The list is passed as shape.
* Added ``random_distortion`` to ``Distortion`` to compute a random distortion, useful for testing!
* Fixed the ``tvar`` distortion to allow p=1 (max).
* Simplified ``Distortion.__repr__`` and ``Distortion.__str__``.
* Added ``Distortion.ph``, ``.wang``, ..., methods for common distortions, with better hints for parameters. All are static methods that delegate to the constructor.
* Fixed documentation build errors.
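The weighted TVaR construction can be illustrated in a few lines (a sketch, not the library's ``wtdtvar`` implementation; masses and the p = 1 term are ignored here):

```python
import numpy as np

# Sketch of a weighted-TVaR distortion (the concept behind wtdtvar; not the
# library's implementation). A mixture of TVaR_p distortions
#     g_p(s) = min(s / (1 - p), 1)
# with non-negative weights summing to 1 is again a concave distortion.
# Restricted to p < 1; the p = 1 case (a mass at s = 0) is ignored here.

def wtd_tvar_g(s, ps, weights):
    """Evaluate g(s) = sum_j w_j * min(s / (1 - p_j), 1) elementwise."""
    s = np.asarray(s, dtype=float)
    g = np.zeros_like(s)
    for p, w in zip(ps, weights):
        g += w * np.minimum(s / (1 - p), 1.0)
    return g

# 50/50 blend of the expectation (p = 0) and TVaR_0.9
ps, ws = [0.0, 0.9], [0.5, 0.5]
```

Because every distortion is a weighted mixture of TVaRs (plus possible masses), this family is dense, which is what makes it useful for calibration and for generating random test distortions.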
0.22.0
~~~~~~~~~~
* Created version 0.22.0, "convolation", for the AAS submission.
0.21.4
~~~~~~~~
* Updated requirements using ``pipreqs`` recommendations.
* Color graphics in documentation
* Added ``expected_shift_reduce = 16 # Set this to the number of expected shift/reduce conflicts`` to ``parser.py``
to avoid warnings. The conflicts are resolved in the correct way for the grammar to work.
* Issues: there is a difference between ``dfreq[1]`` and ``1 claim ... fixed``, e.g., when using spliced severities. These differences should not occur.
0.21.3
~~~~~~~~
* Risk progression now defaults to linear allocation.
* Added ``g_insurance_statistics`` to ``extensions`` to plot insurance statistics from a distortion ``g``.
* Added ``g_risk_appetite`` to ``extensions`` to plot risk appetite from a distortion ``g`` (value, loss ratio,
return on capital, VaR and TVaR weights).
* Corrected Wang distortion derivative.
* Vectorized the ``Distortion.g_prime`` calculation for the proportional hazard distortion.
* Added ``tvar_weights`` function to ``spectral`` to compute the TVaR weights of a distortion. (Work in progress)
* Updated dependencies in pyproject.toml file.
0.21.2
~~~~~~~~
* Misc documentation updates.
* Experimental magic functions, allowing, e.g., ``%agg [spec]`` to create an aggregate object (one-liner).
* 0.21.1 yanked from pypi due to error in pyproject.toml.
0.21.0
~~~~~~~~~
* Moved ``sly`` into the project for better control. ``sly`` is a Python implementation of lex and yacc parsing tools.
It is written by Dave Beazley. Per
