Tensor
C++ template-metaprogramming-driven tensor math library
Install / Use
/learn @thenumbernine/TensorREADME
Differential Geometry Tensor Library
After using fixed-dimension vectors and tensors for a few decades, having a few goes at designing a C++ math library around them, and then getting into differential geometry and relativity and trying to design a C++ math library around that too, this is the latest result.
I know I've put the word "Tensor" in the title. Ever since the deep-learning revolution in AI, computer scientists have come to believe that a "tensor" is an arbitrary-dimensioned array of numbers, preferably with large dimensions and few indexes. This library is instead centered around "tensor" in the original differential-geometry sense: a geometric object that lives in the tangent space at some point on a manifold and is invariant under coordinate transforms. This means the library is designed around compile-time-sized small arrays with larger ranks/degrees/grades (whatever the term is for the number of indexes).
The old, ugly, pre-C++11 version had extra math indicators like Upper<> and Lower<> for tracking index valence, but I've done away with that. There was no programmatically functional reason to track it (unless I wanted to verify Einstein-index-summation correctness, which I never got to). What that means is you'll have to keep track of upper/lower tensor-basis valence all by yourself, and do your own metric multiplication all by yourself.
Luckily the default tensor * operator is an outer product plus contraction, aka a matrix multiply in the rank-2 case, which means any contraction of tensor a's last index with tensor b's first index under metric g can just be written as a * g * b.
This version has added a lot of C++20 tricks. So I guess overall this library is midway between a mathematician's and a programmer's and a physicist's implementation.
- All headers and templates. No source files to compile. Don't get anxious over seeing those custom build scripts: this library is strictly headers, so you can just copy and paste the files wherever you want.
- Familiar vector and matrix types and functions for 2D, 3D, and 4D, with support for arbitrary dimension and arbitrary rank.
- Lots of compile time and template driven stuff.
- C++20
- Some sort of GLSL half-compatibility, though with creative freedom where I want it.
- Compressed storage for identity, symmetric, and antisymmetric tensor indexes. For example, a 3x3 symmetric matrix takes 6 floats. A 3x3 antisymmetric matrix takes 3 floats. The 2x2, 3x3x3, 4x4x4x4 etc rank-N dim-N Levi-Civita permutation tensors take up 1 float.
Examples:
Example of Einstein index summation notation / Ricci calculus. $a_{ij} := \frac{1}{2} (a_{ij} + a_{ji}), c_i := {b_{ij}}^j, d_i := a_{ij} {b^{jk}}_k$
```cpp
Index<'i'> i;
Index<'j'> j;
Index<'k'> k;
auto a = float3x3({{1,2,3},{4,5,6},{7,8,9}});
// lazy evaluation of swizzles, addition, subtraction
a(i,j) = .5 * (a(i,j) + a(j,i));
auto b = float3s3s3([](int i, int j, int k) -> float { return i+j+k; });
// mid-evaluation caching of traces and tensor multiplies
auto c = b(i,j,j).assignI();
auto d = (a(i,j) * b(j,k,k)).assignI();
```
Example of using a totally-antisymmetric tensor for implementing the cross-product. $(a \times b)_i := \epsilon_{ijk} a^j b^k$
```cpp
float3 cross(float3 a, float3 b) {
	// create the Levi-Civita totally-antisymmetric permutation tensor for dim 3, rank 3 ... using a single float of memory:
	constexpr auto LC = float3a3a3(1);
	static_assert(sizeof(LC) == sizeof(float));
	// cross(a,b)_i = ε_ijk * a^j * b^k
	return (LC * b) * a;
}
```
Same thing but using exterior algebra. $a \times b = \star (a \wedge b)$
```cpp
float3 cross(float3 a, float3 b) {
	// create a 3x3 antisymmetric matrix ...
	auto c = a.wedge(b);
	// ... storing only 3 floats. Mind you, c[i][j] for 0 <= i < 3, 0 <= j < 3 are all still accessible.
	static_assert(sizeof(c) == 3 * sizeof(float));
	// return the dual, mapping those 3 antisymmetric rank-2 floats onto 3 rank-1 floats:
	return hodgeDual(c);
}
```
Example of using a totally-antisymmetric tensor to construct a basis perpendicular to a vector:
```cpp
float2x3 basis(float3 n) {
	// returns a 3x3 antisymmetric matrix, only storing 3 values:
	// the equivalent of initializing a matrix with the cross products of 'n' and each basis vector
	auto d = hodgeDual(n);
	static_assert(sizeof(d) == 3 * sizeof(float));
	// We now have 3 vectors perpendicular to 'n' stored as the 3 rows/cols of 'd'.
	// But maybe 'n' is close to one of the basis vectors, in which case that cross product tends to zero, so best to pick the largest:
	auto sq = float3([&d](int i) -> float { return d(i).lenSq(); });
	auto n2 = (
		(sq.x > sq.y)
		? ((sq.x > sq.z) ? d.x : d.z)
		: ((sq.y > sq.z) ? d.y : d.z)
	).normalize();
	return float2x3({n2, n.cross(n2)});
}
```
Example of using the Hodge dual to compute the exterior-algebra inner-product of a and b. In the case of rank > 1, a and b are antisymmetrized. $\langle a, b \rangle = \star (a \wedge \star b)$
```cpp
template<typename A, typename B>
// here 'isSquare' means all dimension sizes match
requires (std::is_same_v<typename A::dimseq, typename B::dimseq> && A::isSquare && B::isSquare)
auto innerExt(A const & a, B const & b) {
	// totally-antisymmetric tensors are space-optimized,
	// so the storage of bstar will always be <= the storage of b
	auto bstar = b.dual();
	// at this point c should be an n-form for dimension n,
	// which takes up a single numeric value (while representing n distinct tensor indexes)
	auto c = a.wedge(bstar);
	// return c's dual, which is a 0-form scalar, times the input rank factorial
	return c.dual() * factorial(a.rank);
}
```
Example: Implementing the Levi-Civita totally-antisymmetric tensor in an orthonormal metric. $\epsilon_{i_1 ... i_n} = 1$
```cpp
// the 'NaR' suffix stands for N-dimensional, totally-antisymmetric, R-rank, and expects <N,R> as template arguments
auto LC = floatNaR<dim,dim>(1);
// uses 1 whole float of storage
```
Example: Implementing the covariant valence Levi-Civita totally-antisymmetric tensor for an arbitrary metric $g_{ij}$: $\epsilon_{i_1 ... i_n} = \sqrt{|det(g_{ij})|}, \epsilon^{i_1 ... i_n} = \frac{1}{\sqrt{|det(g_{ij})|}}$
```cpp
auto g = floatNsN<dim>( /* provide your metric here */ );
auto detg = g.determinant();
auto LC_lower = floatNaR<dim, dim>(sqrt(fabs(detg))); // uses 1 whole float of storage
auto LC_upper = LC_lower / detg;
```
Mind you, `LC_lower[0][0]...[0]` and `LC_lower[dim-1][dim-1]...[dim-1]` and every possible index access in between are all valid C++ expressions. But yeah, just 1 float of storage.
Example: ... and using it to compute the generalized Kronecker delta tensor. $\delta^{i_1 ... i_n}_{j_1 ... j_n} = \epsilon_{j_1 ... j_n} \epsilon^{i_1 ... i_n}$
```cpp
auto KD = LC_lower.outer(LC_upper);
```
`KD` is now a rank-2·dim tensor of dimension dim. Once again it is represented by just a single float. Take note of the order of your outer product, and therefore the order of your result's indexes: in this case the generalized-Kronecker-delta lower indexes come first.
Example: Calculating a parallelotope/simplex measure (length, area, volume, etc) of any simplex in any dimension (i.e. lines in 1D 2D 3D 4D ..., triangles in 2D 3D 4D ..., tetrahedrons in 3D 4D ... etc).
```cpp
// v holds the row-vectors of the parallelotope basis vectors.
// v must be degree-2 but does not have to be square.
// v should have its height m <= its width n.
// The dimension of the parallelotope is m; the dimension in which the points of the simplex live is n.
auto measure(auto const & v) {
	return v.wedgeAll().normExt();
}
auto measureSimplex(auto const & v) {
	return measure(v) / factorial(v.template dim<0>);
}
```
API Reference:
Tensors:
`tensor` is not a typename, but a term I will use interchangeably for the various tensor storage types. These currently include: `vec`, `zero`, `ident`, `sym`, `asym`, `symR`, `asymR`.
Vectors:
`vec<type, dim>` = vectors:
- rank-1
- `.s` = `std::array<type>` element access. Tempted to change it back to `type[]` for ease of use as pointer access ... but I do like the ease of iterator use with `std::array` ... hmm ...
- for 1D through 4D: `.x .y .z .w`, `.s0 .s1 .s2 .s3` storage.
- `.subset<size>(index)`, `.subset<size,index>()` = return a vector reference to a subset of this vector.
Matrices:
`mat<type, dim1, dim2>` = `vec<vec<type,dim2>,dim1>` = matrices:
- rank-2
- Right now indexing is row-major, so matrices appear as they do in C, and matrix indexing `A.i.j` matches math indexing $A_{ij}$. This disagrees with GL compatibility, so you'll have to upload your matrices to GL transposed.
Tensor/tensor operator result storage optimizations:
`mat + T = mat`
Zero Vector:
`zero<type, dim>` = vectors of zeroes. Requires only the inner type's worth of storage. Use nesting for arbitrary-rank, arbitrary-dimension zero tensors, all for no extra storage.
- rank-1. Always returns zero. This is a shorthand for simplifications like symmetrizing antisymmetric indexes or antisymmetrizing symmetric indexes. Currently the internal mechanics expect (and provide) tensors such that, if any internal storage is zero, then all of it will be zero.
Tensor/tensor operator result storage optimizations:
`zero + T = T`
Identity Matrix:
`ident<type, dim>` = identity matrix.
- rank-2
This will only take up a single value of storage. Specifying the internal storage type is still required, which enables `ident` to be used in conjunction with outer products to produce optimized-storage tensors, i.e. $a_{ij} \otimes \delta_{kl}$ = `outer( mat<float,3,3>(...), ident<float,3>(1) )` will produce a rank-4 tensor $c_{ijkl} = a_{ij} \cdot \delta_{kl}$ which will only require 9 floats of storage, not the 81 floats a naive outer product would require. Notice that `ident` is rank-2.
