xtensor
C++ tensors with broadcasting and lazy computing
Multi-dimensional arrays with broadcasting and lazy computing.
Introduction
xtensor is a C++ library meant for numerical analysis with multi-dimensional
array expressions.
xtensor provides
- an extensible expression system enabling lazy broadcasting.
- an API following the idioms of the C++ standard library.
- tools to manipulate array expressions and build upon
xtensor.
Containers of xtensor are inspired by NumPy, the
Python array programming library. Adaptors for existing data structures to
be plugged into our expression system can easily be written.
In fact, xtensor can be used to process NumPy data structures inplace
using Python's buffer protocol.
Similarly, we can operate on Julia and R arrays. For more details on the NumPy,
Julia and R bindings, check out the xtensor-python,
xtensor-julia and
xtensor-r projects respectively.
Up to version 0.26.0, xtensor requires a C++ compiler supporting C++14.
xtensor 0.26.x requires a C++ compiler supporting C++17.
xtensor 0.27.x requires a C++ compiler supporting C++20.
Installation
Package managers
We provide a package for the mamba (or conda) package manager:
```bash
mamba install -c conda-forge xtensor
```
Install from sources
xtensor is a header-only library.
You can directly install it from the sources:
```bash
cmake -DCMAKE_INSTALL_PREFIX=your_install_prefix
make install
```
Installing xtensor using vcpkg
You can download and install xtensor using the vcpkg dependency manager:
```bash
git clone https://github.com/Microsoft/vcpkg.git
cd vcpkg
./bootstrap-vcpkg.sh
./vcpkg integrate install
./vcpkg install xtensor
```
The xtensor port in vcpkg is kept up to date by Microsoft team members and community contributors. If the version is out of date, please create an issue or pull request on the vcpkg repository.
Trying it online
You can play with xtensor interactively in a Jupyter notebook right now, via the project's Binder environment.
The C++ support in Jupyter is powered by the xeus-cling C++ kernel. Together with xeus-cling, xtensor enables a similar workflow to that of NumPy with the IPython Jupyter kernel.

Documentation
For more information on using xtensor, check out the reference documentation
http://xtensor.readthedocs.io/
Dependencies
xtensor depends on the xtl library and
has an optional dependency on the xsimd
library:
| xtensor | xtl | xsimd (optional) |
|-----------|---------|-------------------|
| master | ^0.8.0 | ^13.2.0 |
| 0.27.1 | ^0.8.0 | ^13.2.0 |
| 0.27.0 | ^0.8.0 | ^13.2.0 |
| 0.26.0 | ^0.8.0 | ^13.2.0 |
| 0.25.0 | ^0.7.5 | ^11.0.0 |
| 0.24.7 | ^0.7.0 | ^10.0.0 |
| 0.24.6 | ^0.7.0 | ^10.0.0 |
| 0.24.5 | ^0.7.0 | ^10.0.0 |
| 0.24.4 | ^0.7.0 | ^10.0.0 |
| 0.24.3 | ^0.7.0 | ^8.0.3 |
| 0.24.2 | ^0.7.0 | ^8.0.3 |
| 0.24.1 | ^0.7.0 | ^8.0.3 |
| 0.24.0 | ^0.7.0 | ^8.0.3 |
| 0.23.x | ^0.7.0 | ^7.4.8 |
| 0.22.0 | ^0.6.23 | ^7.4.8 |
The dependency on xsimd is required if you want to enable SIMD acceleration
in xtensor. This can be done by defining the macro `XTENSOR_USE_XSIMD`
before including any header of xtensor.
Usage
Basic usage
Initialize a 2-D array and compute the sum of one of its rows and a 1-D array.
```cpp
#include <iostream>
#include "xtensor/xarray.hpp"
#include "xtensor/xio.hpp"
#include "xtensor/xview.hpp"

xt::xarray<double> arr1
  {{1.0, 2.0, 3.0},
   {2.0, 5.0, 7.0},
   {2.0, 5.0, 7.0}};

xt::xarray<double> arr2
  {5.0, 6.0, 7.0};

xt::xarray<double> res = xt::view(arr1, 1) + arr2;

std::cout << res;
```
Outputs:
```
{7, 11, 14}
```
Initialize a 1-D array and reshape it inplace.
```cpp
#include <iostream>
#include "xtensor/xarray.hpp"
#include "xtensor/xio.hpp"

xt::xarray<int> arr
  {1, 2, 3, 4, 5, 6, 7, 8, 9};

arr.reshape({3, 3});

std::cout << arr;
```
Outputs:
```
{{1, 2, 3},
 {4, 5, 6},
 {7, 8, 9}}
```
Index Access
```cpp
#include <iostream>
#include "xtensor/xarray.hpp"
#include "xtensor/xio.hpp"

xt::xarray<double> arr1
  {{1.0, 2.0, 3.0},
   {2.0, 5.0, 7.0},
   {2.0, 5.0, 7.0}};

std::cout << arr1(0, 0) << std::endl;

xt::xarray<int> arr2
  {1, 2, 3, 4, 5, 6, 7, 8, 9};

std::cout << arr2(0);
```
Outputs:
```
1.0
1
```
The NumPy to xtensor cheat sheet
If you are familiar with NumPy APIs, and you are interested in xtensor, you can check out the NumPy to xtensor cheat sheet provided in the documentation.
Lazy broadcasting with xtensor
xtensor can operate on arrays of different shapes and dimensions in an element-wise fashion. The broadcasting rules of xtensor are similar to those of NumPy and libdynd.
Broadcasting rules
In an operation involving two arrays of different dimensions, the array with the fewer dimensions is broadcast across the leading dimensions of the other.
For example, if A has shape (2, 3), and B has shape (4, 2, 3), the
result of a broadcasted operation with A and B has shape (4, 2, 3).
```
   (2, 3) # A
(4, 2, 3) # B
---------
(4, 2, 3) # Result
```
The same rule holds for scalars, which are handled as 0-D expressions. If A
is a scalar, the equation becomes:
```
       () # A
(4, 2, 3) # B
---------
(4, 2, 3) # Result
```
If the matched-up dimensions of two input arrays are different, and one of
them has size 1, it is broadcast to match the size of the other. Let's say
B has the shape (4, 2, 1) in the previous example; then broadcasting happens
as follows:
```
   (2, 3) # A
(4, 2, 1) # B
---------
(4, 2, 3) # Result
```
Universal functions, laziness and vectorization
With xtensor, if x, y and z are arrays of broadcastable shapes, the
return type of an expression such as x + y * sin(z) is not an array. It
is an xexpression object offering the same interface as an N-dimensional
array, which does not hold the result. Values are only computed upon access
or when the expression is assigned to an xarray object. This makes it
possible to operate symbolically on very large arrays and to compute the
result only for the indices of interest.
We provide utilities to vectorize any scalar function (taking multiple
scalar arguments) into a function that will perform on xexpressions, applying
the lazy broadcasting rules which we just described. These functions are called
xfunctions. They are xtensor's counterpart to NumPy's universal functions.
In xtensor, arithmetic operations (+, -, *, /) and all special
functions are xfunctions.
Iterating over xexpressions and broadcasting iterators
All xexpressions offer two sets of functions to retrieve iterator pairs (and
their const counterparts):
- `begin()` and `end()` provide instances of `xiterator`s which can be used to iterate over all the elements of the expression. The order in which elements are listed is row-major, in that the index of the last dimension is incremented first.
- `begin(shape)` and `end(shape)` are similar but take a broadcasting shape as an argument. Elements are iterated upon in a row-major way, but certain dimensions are repeated to match the provided shape, as per the rules described above. For an expression `e`, `e.begin(e.shape())` and `e.begin()` are equivalent.
Runtime vs compile-time dimensionality
Two container classes implementing multi-dimensional arrays are provided:
xarray and xtensor.
- `xarray` can be reshaped dynamically to any number of dimensions. It is the container that is the most similar to NumPy arrays.
- `xtensor` has a dimension set at compilation time, which enables many optimizations. For example, shapes and strides of `xtensor` instances are allocated on the stack instead of the heap.
`xarray` and `xtensor` containers are both xexpressions and can be involved
and mixed in universal functions, assigned to each other, etc.
Besides, two access operators are provided:
- the variadic template `operator()`, which can take multiple integral arguments or none;
- `operator[]`, which takes a single multi-index argument, whose size may be determined at runtime.