🎯 What is Tensor Logic?
The Big Idea (30 seconds)
Tensor Logic is a new programming language that unifies:
- 🧠 Neural Networks (deep learning)
- 🤖 Symbolic AI (logical reasoning)
- 📊 Statistical AI (probabilistic models)
Key Insight: Logical rules and tensor operations (Einstein summation) are the same thing!
Why Should You Care as an RL Engineer?
❌ Current RL Problems:
- Sample inefficiency
- Can't inject prior knowledge
- Black box policies
- Hallucinations in model-based RL
✅ Tensor Logic Solutions:
- Inject logical rules (physics, safety)
- Sample-efficient learning
- Explainable decisions
- Reliable predictions (at temperature T = 0)
Your Learning Checklist
- Read the 30-second summary
- Understand why RL engineers should care
- Watch the interactive demos
- Try the exercises
- Complete the quiz
🧩 Core Concepts
Einstein Summation (Beginner)
Einstein notation automatically sums over repeated indices:
// Matrix multiply: C[i,k] = A[i,j] B[j,k]
// The index 'j' appears twice → sum over it!
C_ik = Σ_j (A_ij × B_jk)
import torch
A = torch.rand(3, 4) # 3×4 matrix
B = torch.rand(4, 5) # 4×5 matrix
# Traditional way
C = A @ B
# Einstein way (same result!)
C = torch.einsum('ij,jk->ik', A, B)
Key Rules:
- Repeated index = sum over it
- Output has indices NOT summed
- Operand order doesn't matter (the index labels, not their position, determine the result)
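A few more einsum patterns as a quick reference; the shapes below are arbitrary choices for illustration:

import torch

A = torch.rand(3, 4)
v = torch.rand(4)
M = torch.rand(4, 4)

torch.einsum('ij->ji', A)      # transpose: nothing summed, indices reordered
torch.einsum('ij,j->i', A, v)  # matrix-vector product: sum over j
torch.einsum('ii->', M)        # trace: repeated index within one operand
torch.einsum('i,j->ij', v, v)  # outer product: no repeated index, no sum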
Logic Programming (Beginner)
Datalog represents knowledge as facts and rules:
// Facts
Parent(Alice, Bob)
Parent(Bob, Charlie)
// Rules (if-then statements)
Ancestor(x, y) ← Parent(x, y)
Ancestor(x, z) ← Ancestor(x, y), Parent(y, z)
This means:
- Parents are ancestors
- If X is ancestor of Y, and Y is parent of Z, then X is ancestor of Z
// This Datalog rule:
Aunt(x, z) ← Sister(x, y), Parent(y, z)
// Is the SAME as this tensor equation:
A[x,z] = H(Σ_y S[x,y] × P[y,z])
// Where H() = step function (converts >0 to 1)
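Here is a minimal PyTorch sketch of that correspondence; the three-entity toy domain and the index assignments are made up for illustration:

import torch

# Toy domain with entities 0, 1, 2
S = torch.zeros(3, 3)   # Sister[x, y] = 1 means x is a sister of y
S[0, 1] = 1.0
P = torch.zeros(3, 3)   # Parent[y, z] = 1 means y is a parent of z
P[1, 2] = 1.0

# Aunt(x, z) ← Sister(x, y), Parent(y, z)
# Join = multiply, project out y = sum over y, then threshold.
A = (torch.einsum('xy,yz->xz', S, P) > 0).float()
print(A[0, 2])  # tensor(1.) → entity 0 is an aunt of entity 2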
The Unification (Intermediate)
Logic Rules ↔ Einstein Summation
Evaluating a Datalog rule means:
- Join relations (multiply tensors)
- Project out variables (sum over indices)
- Apply step function (threshold)
In database terms: JOIN + PROJECT
In tensor terms: EINSUM + STEP
This enables:
- Writing neural networks as logic programs
- Writing logic programs as tensor equations
- Automatic differentiation of logic!
- GPU acceleration of reasoning
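To see why differentiation works: relax the step function (here to a sigmoid, one common choice rather than the paper's prescribed recipe) and the whole rule becomes a differentiable tensor expression:

import torch

S = torch.rand(3, 3, requires_grad=True)  # soft Sister relation
P = torch.rand(3, 3)                      # soft Parent relation

# Same join-project as before, with sigmoid in place of step
A = torch.sigmoid(torch.einsum('xy,yz->xz', S, P))

# Gradients now flow back into the relations themselves
loss = (A - torch.eye(3)).pow(2).mean()
loss.backward()
print(S.grad.shape)  # torch.Size([3, 3])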
Embedding Space Reasoning (Advanced)
Instead of Boolean (0/1) tensors, use continuous embeddings:
// Boolean: Parent[Alice, Bob] = 1 or 0
// Embedded: Parent = Emb[Alice] ⊗ Emb[Bob]
With a temperature parameter T:
- T = 0: Pure deductive logic (no errors)
- T > 0: Analogical reasoning (generalization)
Similar entities borrow inferences from each other!
Example: If Alice and Amy have similar embeddings, and Alice is parent of Bob, then the system might infer (with some probability) that Amy could be parent of someone similar to Bob.
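A rough sketch of how temperature could control this trade-off; the softmax-over-similarities form below is an illustrative assumption, not the paper's exact mechanism:

import torch
import torch.nn.functional as F

emb = F.normalize(torch.rand(4, 8), dim=-1)  # 4 entities, random embeddings

def entity_match(query, T):
    # T = 0: hard nearest match (deductive); T > 0: soft match (analogical)
    sims = emb @ query
    if T == 0:
        return F.one_hot(sims.argmax(), num_classes=4).float()
    return torch.softmax(sims / T, dim=-1)

q = emb[0]
print(entity_match(q, T=0.0))  # exactly one entity matches
print(entity_match(q, T=0.5))  # similar entities get partial credit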
⚙️ Interactive Demos
Demo 1: Matrix Multiplication Visualizer
Demo 2: Temperature Slider (see how temperature affects reasoning)
Demo 3: Einsum Playground (type an einsum equation and see the result)
🎮 Applications to RL
1. Safe RL with Constraints
// Inject safety rules into your policy:
Unsafe(s) = InCollision(s, obstacle)
Q[s,a] = Q_neural[s,a] * (1 - Unsafe(s))
// At T=0: Hard constraint (never violate)
// At T>0: Soft constraint (penalize violations)
Use Case: Robot navigation that NEVER hits walls
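A minimal sketch of this masking on a discrete action space; the shapes and the unsafe mask are illustrative:

import torch

def safe_q_values(q_neural, unsafe, temperature=0.0):
    # q_neural: [n_actions] Q-values; unsafe: [n_actions] mask in {0, 1}
    if temperature == 0.0:
        # Hard constraint: unsafe actions can never be selected
        return q_neural.masked_fill(unsafe.bool(), float('-inf'))
    # Soft constraint: a penalty that weakens as temperature grows
    return q_neural - unsafe / temperature

q = torch.tensor([1.0, 2.0, 3.0])
unsafe = torch.tensor([0.0, 0.0, 1.0])
print(safe_q_values(q, unsafe).argmax())  # tensor(1): best *safe* action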
2. Sample-Efficient RL
// Encode physics as tensor equations:
S_next[pos] = S[pos] + S[vel] * dt
S_next[vel] = S[vel] + Action * dt
// Learn only the residuals (errors in physics)
S_next = Physics(S, A) + NeuralCorrection(S, A)
Use Case: Learn from far fewer samples, since the network only fits what the physics model misses
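One way to wire up the residual pattern; the (position, velocity) state layout, network size, and dt are all assumptions for this sketch:

import torch
import torch.nn as nn

class ResidualDynamics(nn.Module):
    """Analytic physics prior plus a small learned correction."""
    def __init__(self, state_dim=2, action_dim=1, dt=0.05):
        super().__init__()
        self.dt = dt
        self.correction = nn.Sequential(
            nn.Linear(state_dim + action_dim, 32), nn.Tanh(),
            nn.Linear(32, state_dim),
        )

    def forward(self, state, action):
        pos, vel = state[..., 0:1], state[..., 1:2]
        physics = torch.cat([pos + vel * self.dt, vel + action * self.dt], dim=-1)
        # The net only has to learn what the physics model gets wrong
        return physics + self.correction(torch.cat([state, action], dim=-1))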
3. Explainable Policies
// Extract reasoning chains:
Action[s] = argmax_a(Q[s,a] * Valid[s,a])
// Query: Why did you choose action A?
// Answer: Rules R1, R2, R3 fired with scores [0.8, 0.6, 0.9]
Use Case: Debug RL agent decisions
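One way such a query could be answered, assuming each rule contributes a per-action validity score (the rule names and numbers here are hypothetical):

import torch

q_neural = torch.tensor([0.2, 0.9, 0.4])
rule_masks = {  # hypothetical per-rule scores for each action
    'R1_stay_on_road': torch.tensor([1.0, 0.8, 0.0]),
    'R2_obey_speed':   torch.tensor([0.6, 1.0, 1.0]),
}

valid = torch.stack(list(rule_masks.values())).prod(dim=0)
action = int((q_neural * valid).argmax())

# "Why action 1?" → report each rule's score for the chosen action
for name, mask in rule_masks.items():
    print(f'{name}: {mask[action]:.2f}')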
4. Hierarchical RL
// High-level: Symbolic planning
Plan(s, goal) ← Action(s, a, s'), Plan(s', goal)
// Low-level: Neural control
π(a | s, subgoal) = NeuralPolicy(s, subgoal)
// Combine with temperature:
// T=0 for planning (reliable)
// T>0 for control (robust)
Use Case: Robot task planning + motor control
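A compressed sketch of the two levels; the graph-search planner and the room names are illustrative scaffolding, not tensor logic proper:

def plan(start, goal, edges):
    # T=0 symbolic level: exhaustive search over a discrete transition relation
    frontier, seen = [[start]], {start}
    while frontier:
        path = frontier.pop(0)
        if path[-1] == goal:
            return path
        for s, s2 in edges:
            if s == path[-1] and s2 not in seen:
                seen.add(s2)
                frontier.append(path + [s2])

subgoals = plan('kitchen', 'garage', [('kitchen', 'hall'), ('hall', 'garage')])
print(subgoals)  # ['kitchen', 'hall', 'garage']
# T>0 neural level: pi(a | s, subgoal) would be an ordinary policy network
# conditioned on each subgoal in turn.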
💡 Your Challenge:
Think about your current RL project. What constraints would you want to inject? What prior knowledge could help?
🎨 Hands-On Exercises
Exercise 1: Einstein Notation (Beginner)
Implement these using torch.einsum():
import torch
A = torch.rand(3, 4)
B = torch.rand(4, 5)
X = torch.rand(2, 3, 4)
# 1. Matrix multiply C = A @ B
# Answer: torch.einsum('ij,jk->ik', A, B)
# 2. Batch matrix multiply
# Answer: torch.einsum('bij,jk->bik', X, B)
# 3. Dot product of each row of A with itself (squared row norms)
# Answer: torch.einsum('ij,ij->i', A, A)
- Completed exercise 1.1
- Completed exercise 1.2
- Completed exercise 1.3
Exercise 2: Logic to Tensors (Intermediate)
Convert this Datalog program to tensor equations:
// Facts:
Parent(Alice, Bob)
Parent(Bob, Charlie)
// Rules:
Grandparent(x, z) ← Parent(x, y), Parent(y, z)
// Your task: Implement as Boolean tensors
// Hint: GP[i,k] = step(P[i,j] * P[j,k])
- Created Parent tensor
- Implemented Grandparent rule
- Verified results
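A possible solution sketch; the index encoding Alice=0, Bob=1, Charlie=2 is an arbitrary choice:

import torch

P = torch.zeros(3, 3)
P[0, 1] = 1.0  # Parent(Alice, Bob)
P[1, 2] = 1.0  # Parent(Bob, Charlie)

# Grandparent(x, z) ← Parent(x, y), Parent(y, z)
GP = (torch.einsum('xy,yz->xz', P, P) > 0).float()
print(GP[0, 2])  # tensor(1.) → Grandparent(Alice, Charlie)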
Exercise 3: Safe RL Policy (Advanced)
Implement a policy with safety constraints:
import torch.nn as nn
import torch.nn.functional as F

class SafePolicy(nn.Module):
    def __init__(self, state_dim, action_dim):
        super().__init__()
        self.policy_net = MLP(state_dim, action_dim)  # any small feedforward net
        # TODO: Add constraint checker

    def forward(self, state, temperature=0.0):
        logits = self.policy_net(state)
        # TODO: Apply constraints with temperature
        #       (hint: hard-mask unsafe actions at T=0, penalize them at T>0)
        return F.softmax(logits, dim=-1)
- Implemented constraint checker
- Added temperature control
- Tested on CartPole
📝 Knowledge Check Quiz
🗺️ Learning Roadmap
📅 12-Week Learning Plan
Follow this structured path to master Tensor Logic:
Phase 1: Foundations (Weeks 1-3)
Week 1: Tensor Operations
- Learn Einstein summation notation
- Practice with torch.einsum()
- Implement matrix ops in einsum
Week 2: Logic Programming
- Learn basic Datalog
- Try online Datalog interpreter
- Write recursive rules
Week 3: Embeddings
- Understand embedding spaces
- Train Word2Vec model
- Visualize with t-SNE
Phase 2: Core Tensor Logic (Weeks 4-7)
- Read paper sections 2-3 carefully
- Implement basic tensor logic interpreter
- Convert neural networks to tensor logic
- Implement reasoning in embedding space
Phase 3: Applications (Weeks 8-12)
- Implement GNN in tensor logic
- Build safe RL system with constraints
- Apply to your own RL project
- Write blog post about experience
📚 Resources
- Paper: "Tensor Logic: The Language of AI" by Pedro Domingos (2025)
- Blog: "Einsum is All You Need" by Tim Rocktäschel
- Course: Stanford CS224W (Graph Neural Networks)
- Tool: PyTorch with einsum support
- Community: r/MachineLearning, tensor-logic.org
🎯 Next Steps
Today:
- ✅ Read this interactive guide
- ✅ Try Exercise 1 (einsum practice)
This Week:
- Complete Phase 1 Week 1
- Run tensor_logic_starter.py
This Month:
- Complete Phase 1 (foundations)
- Start implementing your own examples