
TensorType


TL;DR: NumPy, but with types, first-class axes, and tensors over structured data

TensorType is a framework for pure functional tensor processing, implemented in Idris 2. It

  • is type-safe: tensor shapes, indexing and contractions are checked at compile time
  • handles structured data natively: supports tensors over trees, interaction protocols and arbitrary containers, instead of just rectangular arrays
  • supports named axes: axes are first-class objects carrying names, checked for consistency at call sites
  • provides a familiar interface: it aims to offer the standard NumPy/PyTorch API in a purely functional, dependently typed language

At the moment its main purpose is enabling rapid prototyping of structured neural network architectures. For instance, it is expressive enough to implement generalised cross-attention (as described in the Generalised Transformers blog post).

Note: TensorType is in early stages of development; expect breaking changes. Named axes are not yet fully integrated, and TensorType is not yet performant. Down the line, the goal is to obtain performance systematically: not at the expense of types, but because of them.

Examples

(taken from examples/BasicExamples.idr)

Import Data.Tensor at the top of your file.

import Data.Tensor

Now you can construct tensors directly:

t0 : Tensor ["j" ~~> 3, "k" ~~> 4] Double
t0 = ># [ [0, 1, 2, 3]
        , [4, 5, 6, 7]
        , [8, 9, 10, 11]]

This declares the type of a 3 x 4 matrix with axes named "j" and "k", and uses ># to populate it with values. ># behaves like a constructor: it takes a concrete value and turns it into a tensor of the appropriate shape (read it visually as a 'map' (>) into a 'tensor' (#)).

You can use functions analogous to NumPy's, such as np.arange and np.reshape:

t1 : Tensor ["i" ~~> 6] Double
t1 = arange

t2 : Tensor ["i" ~~> 2, "j" ~~> 3] Double
t2 = reshape t1

where the difference from NumPy is that these operations are typechecked: they fail at compile time if you supply an array of the wrong shape.

failing
  failConcrete : Tensor ["j" ~~> 3, "k" ~~> 4] Double
  failConcrete = ># [ [0, 1, 2, 3, 999]
                    , [4, 5, 6, 7]
                    , [8, 9, 10, 11]]

failing
  failReshape : Tensor ["i" ~~> 7, "j" ~~> 2] Double
  failReshape = arange {n=7}

They will also fail if you inconsistently bind axis names, for instance if you bind the same name to two different sizes:

failing
  failBinding : Tensor ["j" ~~> 3, "j" ~~> 4] Double
  failBinding = ># [ [0, 1, 2, 3]
                   , [4, 5, 6, 7]
                   , [8, 9, 10, 11]]

You can perform all sorts of familiar numeric operations:

exampleSum : Tensor ["j" ~~> 3, "k" ~~> 4] Double
exampleSum = t0 + t0

exampleOp : Tensor ["j" ~~> 3, "k" ~~> 4] Double
exampleOp = abs (- (t0 * t0) <&> (+7))

including standard linear algebra:

dotExample : Tensor [] Double
dotExample = dot t1 (t1 <&> (+5))

matMulExample : Tensor ["i" ~~> 2, "k" ~~> 4] Double
matMulExample = matMul t2 t0

transposeExample : Tensor ["k" ~~> 4, "j" ~~> 3] Double
transposeExample = transposeMatrix t0

which all have their types checked at compile-time. For instance, you can't add tensors of different shapes, perform matrix multiplication if the dimensions of matrices don't match, or do any of these if you mislabel an axis.

failing
  sumFail : Tensor ["j" ~~> 3, "k" ~~> 4] Double
  sumFail = t0 + t1
  
failing
  matMulFail : Tensor ["i" ~~> 7] Double
  matMulFail = matMul t0 t1

Like in NumPy, you can safely index into tensors, set values of tensors, and perform slicing:

||| Retrieves the value of t0 at location [1, 2]
indexExample : Double
indexExample = t0 @@ [1, 2]

||| Sets the value of t0 at location [1, 3] to 99 
setExample : Tensor ["j" ~~> 3, "k" ~~> 4] Double
setExample = set t0 [1, 3] 99

||| Takes the first two rows and the first column of t0
sliceExample : Tensor ["j" ~~> 2, "k" ~~> 1] Double
sliceExample = take [2, 1] t0

which will all fail if you go out of bounds:

failing
  indexExampleFail : Double
  indexExampleFail = t1 @@ [7, 2]

failing
  sliceFail : Tensor ["j" ~~> 10, "k" ~~> 2] Double
  sliceFail = take [10, 2] t0

All of the above also works with non-rectangular tensors: tensors whose shape is a tree, an arbitrary inductive type, or even a continuation, rather than a rectangular grid. That is, instead of binding an axis name to a number, we bind it to something called a "container", using ~> instead of ~~>. In fact, ~~> desugars to ~> behind the scenes, so we have been using containers all along. Let's see t0 in this new form:

t0Again : Tensor ["j" ~> Vect 3, "k" ~> Vect 4] Double
t0Again = t0

Here Vect does not refer to Vect from Data.Vect, but to the Vect container implemented here.

Everything we've seen above can be recast with this new type explicitly:

t1again : Tensor ["i" ~> Vect 6] Double
t1again = arange

The real power of container tensors comes from using other containers in place of Vect. Here is the BinTree container of binary trees, used to build a tree-tensor:

{-
       60
      /  \
     7    2 
    / \
(-42)  46 
-}
treeExample1 : Tensor ["myTree" ~> BinTree] Double
treeExample1 = ># Node 60 (Node 7 (Leaf (-42)) (Leaf 46)) (Leaf 2)

Unlike Vect, this container has a branching shape rather than a linear one. Here is another tree-tensor with a different shape:

{-
   5
  / \
100  4
-}
treeExample2 : Tensor ["myTree" ~> BinTree] Double
treeExample2 = ># Node 5 (Leaf 100) (Leaf 4)

Perhaps surprisingly, all linear algebra operations follow smoothly. The example below is the dot product of trees. The fact that these trees don't have the same number of elements is irrelevant; what matters is that the container defining them (BinTree) is the same.

dotProductTree : Tensor [] Double
dotProductTree = dot treeExample1 treeExample2

We can do much more. Here's a tree-tensor with values only on its leaves:

{-
        *
      /   \
     *     2 
    / \
(-42)  46 
-}
treeLeafExample : Tensor ["myTree" ~> BinTreeLeaf] Double
treeLeafExample = ># Node' (Node' (Leaf (-42)) (Leaf 46)) (Leaf 2)

and here's a tree-tensor with values only on its nodes:

{-
       60
      /  \
     7    *
    / \
   *   * 
-}
treeNodeExample : Tensor ["myTree" ~> BinTreeNode] Double
treeNodeExample = ># Node 60 (Node 7 Leaf' Leaf')  Leaf'

This can get complex and nested, as treeExample3 and treeExample4 show, but it remains fully type-checked and works as you'd expect.

treeExample3 : Tensor ["myTree" ~> BinTreeNode, "j" ~> Vect 2] Double
treeExample3 = ># Node [4,1] (Node [17, 4] Leaf' Leaf') Leaf'

treeExample4 : Tensor ["myTree"     ~> BinTreeNode,
                       "myTreeLeaf" ~> BinTreeLeaf,
                       "k"          ~> Vect 3] Double
treeExample4 = >#
  Node (Node'
          (Leaf [1,2,3])
          (Leaf [4,5,6]))
    Leaf'
    (Node (Leaf [178, -43, 63]) Leaf' Leaf')

For instance, we can index into treeExample1:

{- 
We can index into any of these structures
       60
      /  \
     7    2  <---- indexing here is okay
    / \
(-42)  46 
-}
indexTreeExample1 : Double
indexTreeExample1 = treeExample1 @@ [GoRight AtLeaf]

This will fail at compile-time if you try to index outside of the tree structure:

failing
  {- 
         60
        /  \
       7    2  
      / \    \
  (-42)  46   X   <---- indexing here throws an error
  -}
  indexTreeExample1Fail : Double
  indexTreeExample1Fail = treeExample1 @@ [GoRight (GoRight AtLeaf)]

Likewise, you can perform reshapes, views, reversals, sorting and traversals of non-cubical tensors. Here is the in-order traversal of treeExample1 from above.

{-
       60
      /  \
     7    2 
    / \
(-42)  46 
-}
traversalExample : Tensor ["myList" ~> List] Double
traversalExample = restructure (wrapIntoVector inorder) treeExample1

All of these can be used to define novel network architectures; see src/Architectures for examples.

Installation instructions

It is recommended to manage the installation of this package (and of Idris 2 generally) with the pack package manager. The instructions below assume you have both Idris 2 and pack installed.

If you just want to try it out in the REPL:

  1. Clone the repository, and cd into it.
  2. Run pack repl examples/BasicExamples.idr.
  3. That's it!

To use TensorType in your project:

  1. Run pack query tensortype on the command line to check whether your pack database is synced. If you don't see tensortype in the output, you may need to run pack update-db first.
  2. Add tensortype to the depends argument in your project's .ipkg file. (See examples/tensortype-examples.ipkg for an example)
  3. Include import Data.Tensor at the top of your source files.
  4. That's it!
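For step 2, a minimal .ipkg file might look like the sketch below. The package name my-project and the module name Main are placeholders, not part of TensorType; the depends field is the part that matters (see examples/tensortype-examples.ipkg for a real instance):

```
-- my-project.ipkg: a minimal sketch; package and module names are placeholders
package my-project

depends = tensortype

modules = Main
```

With this in place, pack build my-project.ipkg should resolve tensortype from the pack database and compile your modules against it.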

Aim of TensorType

TensorType attempts to bring deep learning to statically typed languages.
