NeuraLogic
Integrating deep and relational learning through differentiable logic programming.
- This is the official implementation of the concept of Deep Learning with Relational Logic Representations
- instantiated through the framework of Lifted Relational Neural Networks
- This is the (Java) backend; you can find the Python frontend at PyNeuraLogic
About
At the core of the framework, there is a custom language you can use to write differentiable programs encoding your learning scenarios, similarly to classic Deep Learning (DL) frameworks (e.g. TensorFlow). However, the language follows a logic programming paradigm and is declarative in nature (it's similar to Datalog). This means that instead of directly encoding the computation graph, you just declare:
- the inputs (and their numeric values, if any)
  - i.e. the observed facts/data = objects, structures, knowledge graphs, relational databases, ...
  - e.g.
    atom(oxygen_1)
    0.3 stable(oxygen)
    8 protons(oxygen)
    1.7 energy(oxygen,level2)
    [1.2,0,-1] features(oxygen)
    [[0,2.1],[1.7,-1]] bond(oxygen_1,hydrogen_2,covalent_1)
- the outputs (and their expected values - for supervised learning)
  - i.e. the queries = classification labels, regression targets, ...
  - e.g.
    1 class
    4.7 target(molecule_3)
    0 relation(carbon,xenon,fluor)
- a set of rules applicable in your domain (and their learnable parameters W)
  - i.e. the generic prior/knowledge/bias which you want to use
  - these rules will be used to (automatically) infer (link) the outputs from the inputs
  - e.g. explicit/interpretable rules such as
    0.99 covalent(B) <= oxygen(X), hydrogen(Y), bond(X,Y,B).
  - or more implicit/flexible concepts such as
    embed(X) <= W_1 embed(Y), bond(X,Y,_).
  - with these you can easily encode a lot of diverse structured deep learning models!
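Putting the three ingredients together, a minimal complete learning scenario might look as follows (a purely illustrative sketch: the predicate names and weights W_* here are made up, and the actual file layout follows the examples shipped in the Resources/datasets directory):

```
W_h h(X) <= W_a atom(X), W_b bond(X,Y,_).
W_q class <= W_h2 h(X).

atom(oxygen_1)
atom(hydrogen_2)
bond(oxygen_1,hydrogen_2,covalent_1)

1 class
```

The first two lines form the template (rules with learnable parameters), the middle block is one input sample (observed facts), and the last line is a supervised query with its target value.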
Example
Consider a simple program for learning with molecular data<sup>1</sup>, encoding a generic idea that some hidden representation (predicate h(.)) of any chemical atom (variable X) is somewhat dependent on the other atoms (a(Y)) bound to it (b(X,Y)), with a parameterized rule as:
W_h1 h(X) <= W_a a(Y), W_b b(X,Y).
Additionally, let's assume that the representation of a molecule (q) follows from the representations of all the contained atoms (h(X)), i.e.:
W_q q <= W_h2 h(X).
These two rules, parameterized with the tensors W_*, then form a learning program which can be used to classify molecules. In fact, this directly encodes the popular idea known as Graph Neural Networks.
Execution of this program ("template") for 2 input molecule samples will generate 2 parameterized computational graphs as follows:

Each computation node in the graphs is associated with some (differentiable) activation function defined by a user (or settings).
The parameters W_* in the program are then automatically optimized to reflect the expected output values (A_q) through gradient descent.
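To make the generated structure concrete, consider an illustrative sketch: a toy molecule with atoms o1, h1, h2 (names made up here), where o1 is bound to each hydrogen. Grounding the two rules above against these facts yields ground instances such as:

```
h(o1) <= a(h1), b(o1,h1).
h(o1) <= a(h2), b(o1,h2).
h(h1) <= a(o1), b(h1,o1).
h(h2) <= a(o1), b(h2,o1).
q <= h(o1).
q <= h(h1).
q <= h(h2).
```

Each ground rule becomes a node in the computation graph, with the template weights W_a, W_b, W_h1, W_h2, W_q attached to the corresponding edges; sharing these weights across all the groundings is what yields the GNN-style convolution.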
For detailed syntax and semantics, please check out the concept of "Lifted Relational Neural Networks". For a deep dive into the principles in full scientific context, please see my dissertation thesis or the book on Deep learning with relational logic representations.
<a name="myfootnote1">1</a>: Note that NeuraLogic is by no means designed or limited to learning with chemical data/models/knowledge, but we use it as an example domain here for consistency.
Use Cases
While the framework can be used to encode anything from MLPs, CNNs, RNNs, etc., it is not well suited for classic deep learning with regular data based on large homogeneous tensor operations. The framework is rather meant for efficient encoding of deep relational learning scenarios<sup>2</sup>, i.e. using dynamic (weight-sharing) neural networks to learn from data with irregular structure(s). That is why we exploit the declarative language with first-order expressiveness, as it allows for compact encoding of complex relational scenarios (similarly to how Prolog can be elegant for relational problems, while not so much for classic programming).
The framework is mostly optimized for quick, high-level prototyping of learning scenarios with sparse, irregular, relational data and complex, dynamically structured models. Particularly, you may find it beneficial for encoding various:
- Graph neural networks
  - where you can use it to go well beyond the existing models
- Knowledge base completions
  - with complex models requiring e.g. chained inference
- learning with Relational background knowledge/bias
  - which can be as expressive as the models themselves
- approaches from Neural-symbolic integration
  - for combining (fuzzy) logic inference with neural networks
- and other more crazy ideas, such as learning with
  - hypergraphs and ontologies
  - recursion
    - e.g. for latent type hierarchies
  - and generic latent logic programs
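For instance, the recursion mentioned above can be written directly as a parameterized recursive rule (an illustrative sketch with made-up predicate names), here encoding chained inference over a base relation:

```
W_1 related(X,Y) <= link(X,Y).
W_2 related(X,Y) <= link(X,Z), related(Z,Y).
```

The second rule refers to its own head predicate, so the unfolded computation graph grows with the length of the inference chains present in the data.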
<a name="myfootnote2">2</a>: if you come from deep learning background, you may be familiar with related terms such as "Geometric deep learning" or "Graph representation learning" (but this framework is not limited to graphs only).
Getting started
Prerequisite
Java ≥ 1.8
- continuously tested with JDK 1.8 on the latest Ubuntu, Windows and macOS<sup>3</sup>
Running examples
- download a release into some directory DIR
  - or build from source with Maven or IntelliJ IDEA
- clone this repository (or just download the Resources/datasets directory) within DIR
    git clone https://github.com/GustikS/NeuraLogic
- try some trivial examples from terminal in DIR
  - a simple familiar XOR problem
    - scalar:
      java -jar NeuraLogic.jar -sd ./NeuraLogic/Resources/datasets/neural/xor/naive
    - vectorized:
      java -jar NeuraLogic.jar -sd ./NeuraLogic/Resources/datasets/neural/xor/vectorized
  - a simple relational problem (Example 2 from this paper)
    java -jar NeuraLogic.jar -sd ./NeuraLogic/Resources/datasets/simple/family
  - a molecule classification problem (mutagenesis)
    java -jar NeuraLogic.jar -sd ./NeuraLogic/Resources/datasets/relational/molecules/mutagenesis -ts 100
  - a knowledge-base completion problem (countries)
    java -jar NeuraLogic.jar -sd ./NeuraLogic/Resources/datasets/relational/kbs/nations
- you can check various exported settings and results in DIR/target
- if you have Graphviz installed (which dot), you can observe the internal computation structures in debug mode:
    java -jar NeuraLogic.jar -sd ./NeuraLogic/Resources/datasets/neural/xor/naive -iso -1 -prune -1 -debug all
  - this should show a graph of the 1) workflow, 2) template, 3) inference graphs, 4) neural networks + weight updates, and 5) final learned template
    java -jar NeuraLogic.jar -sd ./NeuraLogic/Resources/datasets/simple/family -iso -1 -prune -1 -debug all
  - note we turn off network pruning and compression here so that you can observe a direct correspondence to the original example
<a name="myfootnote3">3</a>: if you don't have Java in your system already, get it either from Oracle or OpenJDK
- for simple usage, it is enough to get the runtime en
