🧠 Tensor Logic Interactive Guide

For Entry-Level RL Engineers

Based on Pedro Domingos' Paper (2025)

🎯 What is Tensor Logic?

The Big Idea (30 seconds)

Tensor Logic is a new programming language that unifies:

  • 🧠 Neural Networks (deep learning)
  • 🤖 Symbolic AI (logical reasoning)
  • 📊 Statistical AI (probabilistic models)

Key Insight: Logical rules and tensor operations (Einstein summation) are the same thing!

Why Should You Care as an RL Engineer?

โŒ Current RL Problems:

  • Sample inefficiency
  • Can't inject prior knowledge
  • Black box policies
  • Hallucinations in model-based RL

✅ Tensor Logic Solutions:

  • Inject logical rules (physics, safety)
  • Sample-efficient learning
  • Explainable decisions
  • Reliable predictions (T=0)

Your Learning Checklist

  • Read the 30-second summary
  • Understand why RL engineers should care
  • Watch the interactive demos
  • Try the exercises
  • Complete the quiz

🧩 Core Concepts

  • Einstein Summation
  • Logic Programming
  • Unification
  • Embeddings

Einstein Summation (Beginner)

Einstein notation automatically sums over repeated indices:

// Matrix multiply: C[i,k] = A[i,j] B[j,k]
// The index 'j' appears twice → sum over it!

C[i,k] = Σ_j A[i,j] × B[j,k]
PyTorch Example:
import torch

A = torch.rand(3, 4)  # 3×4 matrix
B = torch.rand(4, 5)  # 4×5 matrix

# Traditional way
C = A @ B

# Einstein way (same result!)
C = torch.einsum('ij,jk->ik', A, B)

Key Rules:

  • Repeated index = sum over it
  • Output has indices NOT summed
  • Operand order doesn't matter, because the named indices (not position) determine the contraction
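
A few more patterns that follow directly from these rules (a short illustrative snippet, reusing the A and B shapes above):

import torch

A = torch.rand(3, 4)
B = torch.rand(4, 5)
v = torch.rand(4)

# Repeated index 'j' is summed; free indices 'i' and 'k' survive in the output
C = torch.einsum('ij,jk->ik', A, B)   # matrix multiply, shape (3, 5)

# Every index repeated and none in the output → full contraction (a scalar)
dot = torch.einsum('j,j->', v, v)     # dot product of v with itself

# Operand order doesn't matter, because indices are named rather than positional
C2 = torch.einsum('jk,ij->ik', B, A)
assert torch.allclose(C, C2)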

Logic Programming (Beginner)

Datalog represents knowledge as facts and rules:

// Facts
Parent(Alice, Bob)
Parent(Bob, Charlie)

// Rules (if-then statements)
Ancestor(x, y) ← Parent(x, y)
Ancestor(x, z) ← Ancestor(x, y), Parent(y, z)

This means:

  • Parents are ancestors
  • If X is ancestor of Y, and Y is parent of Z, then X is ancestor of Z
Mind-Blowing Connection:
// This Datalog rule:
Aunt(x, z) ← Sister(x, y), Parent(y, z)

// Is the SAME as this tensor equation:
A[x,z] = H(S[x,y] × P[y,z])

// Where H() = step function (converts >0 to 1)
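
Here is that correspondence as runnable PyTorch, on a toy three-person universe (the names and indices below are illustrative assumptions):

import torch

# Entities: 0 = Alice, 1 = Amy, 2 = Bob
n = 3
Sister = torch.zeros(n, n)
Parent = torch.zeros(n, n)
Sister[0, 1] = 1.0   # Sister(Alice, Amy)
Parent[1, 2] = 1.0   # Parent(Amy, Bob)

# Aunt(x, z) ← Sister(x, y), Parent(y, z)
# Join = multiply, project out y = sum over y (einsum), then apply the step function
Aunt = (torch.einsum('xy,yz->xz', Sister, Parent) > 0).float()

print(Aunt[0, 2])   # 1.0 → Aunt(Alice, Bob) is derived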

The Unification (Intermediate)

Logic Rules ≈ Einstein Summation

A Datalog rule is:

  1. Join relations (multiply tensors)
  2. Project out variables (sum over indices)
  3. Apply step function (threshold)

In database terms: JOIN + PROJECT

In tensor terms: EINSUM + STEP

This enables:

  • Writing neural networks as logic programs
  • Writing logic programs as tensor equations
  • Automatic differentiation of logic!
  • GPU acceleration of reasoning
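
A minimal sketch of the last two points: replacing the hard step with a sigmoid makes the same rule differentiable, so gradients flow through a logical inference, and the whole computation runs on GPU like any other tensor op. The relation sizes and the particular relaxation below are illustrative assumptions, not the paper's exact construction:

import torch

# Soft (learnable) relations over 4 entities
S = torch.rand(4, 4, requires_grad=True)   # soft Sister(x, y)
P = torch.rand(4, 4, requires_grad=True)   # soft Parent(y, z)

# Aunt(x, z) ← Sister(x, y), Parent(y, z): join + project via einsum,
# then a smooth threshold instead of the hard step function
Aunt = torch.sigmoid(torch.einsum('xy,yz->xz', S, P) - 0.5)

# Supervise one derived fact and backpropagate through the rule
loss = (Aunt[0, 2] - 1.0) ** 2
loss.backward()
print(S.grad.shape, P.grad.shape)   # gradients reach both input relations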

Embedding Space Reasoning (Advanced)

Instead of Boolean (0/1) tensors, use continuous embeddings:

// Boolean: Parent[Alice, Bob] = 1 or 0
// Embedded: Parent = Emb[Alice] ⊗ Emb[Bob]

The Magic:

With a temperature parameter T:

  • T = 0: Pure deductive logic (no errors)
  • T > 0: Analogical reasoning (generalization)

Similar entities borrow inferences from each other!

Example: If Alice and Amy have similar embeddings, and Alice is parent of Bob, then the system might infer (with some probability) that Amy could be parent of someone similar to Bob.
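
A toy sketch of that idea (the 2-D embeddings and the Gaussian similarity kernel are illustrative assumptions, not the paper's exact formulation):

import torch

emb = {"Alice": torch.tensor([1.0, 0.0]),
       "Amy":   torch.tensor([0.9, 0.1]),
       "Carol": torch.tensor([0.0, 1.0])}

def match(a, b, T):
    """At T = 0 only exact matches count; at T > 0 similar entities
    borrow each other's facts, with strength decaying with distance."""
    if T == 0.0:
        return float(torch.equal(emb[a], emb[b]))
    return torch.exp(-torch.dist(emb[a], emb[b]) ** 2 / T).item()

# Known fact: Parent(Alice, Bob). Query: Parent(Amy, Bob)?
for T in (0.0, 0.1, 1.0):
    print(T, match("Amy", "Alice", T))   # 0.0 at T=0, rising toward 1 as T grows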

⚙️ Interactive Demos

Demo 1: Matrix Multiplication Visualizer (3×3)

Demo 2: Temperature Slider

See how temperature (starting at T = 0.0) affects reasoning.

Demo 3: Einsum Playground

Type an einsum equation (or pick a provided example) and see the result.

🎮 Applications to RL

1. Safe RL with Constraints

// Inject safety rules into your policy:
Unsafe(s) = InCollision(s, obstacle)
Q[s,a] = Q_neural[s,a] * (1 - Unsafe(s))

// At T=0: Hard constraint (never violate)
// At T>0: Soft constraint (penalize violations)

Use Case: Robot navigation that NEVER hits walls
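
A minimal PyTorch sketch of this pattern, assuming a discrete action space; the function name and the 1/T penalty form are illustrative choices, not the paper's formulation:

import torch

def safe_q_values(q_neural, unsafe, temperature=0.0):
    # q_neural: [num_actions] Q-values from the learned network
    # unsafe:   [num_actions] 1.0 where the action violates a rule, else 0.0
    if temperature == 0.0:
        # Hard constraint: unsafe actions can never be selected
        return q_neural.masked_fill(unsafe.bool(), float("-inf"))
    # Soft constraint: the penalty grows as temperature shrinks
    return q_neural - unsafe / temperature

q = torch.tensor([1.2, 0.3, 2.0])
unsafe = torch.tensor([0.0, 0.0, 1.0])        # action 2 would hit a wall
print(safe_q_values(q, unsafe).argmax())      # never picks the unsafe action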

2. Sample-Efficient RL

// Encode physics as tensor equations:
S_next[pos] = S[pos] + S[vel] * dt
S_next[vel] = S[vel] + Action * dt

// Learn only the residuals (errors in physics)
S_next = Physics(S, A) + NeuralCorrection(S, A)

Use Case: Learn with 10x fewer samples
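
One way this might look in PyTorch, assuming a 2-D [position, velocity] state and a scalar acceleration action; the class name and layer sizes are illustrative assumptions:

import torch
import torch.nn as nn

class ResidualDynamics(nn.Module):
    def __init__(self, state_dim=2, action_dim=1, dt=0.05):
        super().__init__()
        self.dt = dt
        # Learned correction for whatever the fixed physics model gets wrong
        self.correction = nn.Sequential(
            nn.Linear(state_dim + action_dim, 64), nn.ReLU(),
            nn.Linear(64, state_dim),
        )

    def physics(self, state, action):
        # state = [position, velocity]; action is treated as an acceleration
        pos, vel = state[..., 0:1], state[..., 1:2]
        return torch.cat([pos + vel * self.dt, vel + action * self.dt], dim=-1)

    def forward(self, state, action):
        return self.physics(state, action) + self.correction(
            torch.cat([state, action], dim=-1))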

3. Explainable Policies

// Extract reasoning chains:
Action[s] = argmax(Q[s,a] * Valid[s,a])

// Query: Why did you choose action A?
// Answer: Rules R1, R2, R3 fired with scores [0.8, 0.6, 0.9]

Use Case: Debug RL agent decisions
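
A small sketch of what such a query could return, assuming hand-named rules and a discrete action space (the shapes and names are illustrative):

import torch

def choose_action(q, rule_scores, rule_names):
    # q:           [num_actions] learned Q-values
    # rule_scores: [num_rules, num_actions] in [0, 1]; 0 means "rule forbids action"
    valid = (rule_scores > 0).all(dim=0)           # allowed by every rule
    best = int(q.masked_fill(~valid, float("-inf")).argmax())
    explanation = {name: float(score[best]) for name, score in zip(rule_names, rule_scores)}
    return best, explanation

q = torch.tensor([0.4, 0.9, 0.7])
rules = torch.tensor([[0.8, 0.6, 0.9],             # "keep_distance"
                      [1.0, 0.0, 1.0]])            # "no_crossing" forbids action 1
print(choose_action(q, rules, ["keep_distance", "no_crossing"]))
# picks action 2 and reports each rule's score for it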

4. Hierarchical RL

// High-level: Symbolic planning
Plan(s, goal) ← Action(s, a, s'), Plan(s', goal)

// Low-level: Neural control
π(a | s, subgoal) = NeuralPolicy(s, subgoal)

// Combine with temperature:
// T=0 for planning (reliable)
// T>0 for control (robust)

Use Case: Robot task planning + motor control
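
A toy sketch of the symbolic layer: Boolean reachability over a 4-state transition relation computes Plan(s, goal), and a neural policy would then execute each step. The transition graph below is an illustrative assumption:

import torch

num_states = 4
step = torch.zeros(num_states, num_states)
step[0, 1] = step[1, 2] = step[2, 3] = 1.0   # Action(s, a, s'), collapsed over a

# Plan(s, g) ← Action(s, _, s'), Plan(s', g): iterate join + project + threshold
reach = step.clone()
for _ in range(num_states):
    reach = ((reach + reach @ step) > 0).float()

print(reach[0, 3])   # 1.0 → a plan exists from state 0 to goal state 3
# A low-level neural policy π(a | s, subgoal) would then realize each edge.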

💡 Your Challenge:

Think about your current RL project. What constraints would you want to inject? What prior knowledge could help?

🔨 Hands-On Exercises

Exercise 1: Einstein Notation (Beginner)

Implement these using torch.einsum():

import torch

A = torch.rand(3, 4)
B = torch.rand(4, 5)
X = torch.rand(2, 3, 4)

# 1. Matrix multiply C = A @ B
# Answer: torch.einsum('ij,jk->ik', A, B)

# 2. Batch matrix multiply
# Answer: torch.einsum('bij,jk->bik', X, B)

# 3. Dot product of each row of A with itself (squared row norms)
# Answer: torch.einsum('ij,ij->i', A, A)
  • Completed exercise 1.1
  • Completed exercise 1.2
  • Completed exercise 1.3

Exercise 2: Logic to Tensors (Intermediate)

Convert this Datalog program to tensor equations:

// Facts:
Parent(Alice, Bob)
Parent(Bob, Charlie)

// Rules:
Grandparent(x, z) ← Parent(x, y), Parent(y, z)

// Your task: Implement as Boolean tensors
// Hint: GP[i,k] = step(P[i,j] * P[j,k])
  • Created Parent tensor
  • Implemented Grandparent rule
  • Verified results

Exercise 3: Safe RL Policy (Advanced)

Implement a policy with safety constraints:

import torch.nn as nn
import torch.nn.functional as F

class SafePolicy(nn.Module):
    def __init__(self, state_dim, action_dim):
        super().__init__()
        # Simple MLP policy head (stand-in for the MLP helper)
        self.policy_net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, action_dim)
        )
        # TODO: Add constraint checker

    def forward(self, state, temperature=0.0):
        logits = self.policy_net(state)
        # TODO: Apply constraints with temperature
        return F.softmax(logits, dim=-1)
  • Implemented constraint checker
  • Added temperature control
  • Tested on CartPole

๐Ÿ“ Knowledge Check Quiz

1. What is the fundamental insight of Tensor Logic?
A) Tensors are faster than matrices
B) Logic rules ≈ Einstein summation over Boolean tensors
C) Neural networks can replace all symbolic AI
D) Python is the best AI language
2. What does temperature T=0 mean?
A) Freeze the model weights
B) Pure deductive logic (no errors, no generalization)
C) Use CPU instead of GPU
D) Turn off learning
3. In Y[i,k] = A[i,j] B[j,k], what happens to index j?
A) It's kept in the output
B) It's summed over (Einstein convention)
C) It's maximized
D) It's ignored
4. How can Tensor Logic help RL?
A) Makes training 1000x faster automatically
B) Injects prior knowledge and ensures safety constraints
C) Replaces neural networks entirely
D) Only works for discrete action spaces
5. What is the difference between T=0 and T>0?
A) Speed vs accuracy
B) Deductive (exact) vs analogical (similarity-based) reasoning
C) Training vs inference
D) CPU vs GPU execution

🗺️ Learning Roadmap

📅 12-Week Learning Plan

Follow this structured path to master Tensor Logic:

Phase 1: Foundations (Weeks 1-3)

Week 1: Tensor Operations
  • Learn Einstein summation notation
  • Practice with torch.einsum()
  • Implement matrix ops in einsum
Week 2: Logic Programming
  • Learn basic Datalog
  • Try online Datalog interpreter
  • Write recursive rules
Week 3: Embeddings
  • Understand embedding spaces
  • Train Word2Vec model
  • Visualize with t-SNE

Phase 2: Core Tensor Logic (Weeks 4-7)

  • Read paper sections 2-3 carefully
  • Implement basic tensor logic interpreter
  • Convert neural networks to tensor logic
  • Implement reasoning in embedding space

Phase 3: Applications (Weeks 8-12)

  • Implement GNN in tensor logic
  • Build safe RL system with constraints
  • Apply to your own RL project
  • Write blog post about experience

📚 Resources

  • Paper: "Tensor Logic" by Pedro Domingos (2025)
  • Blog: "Einsum is All You Need" by Rocktäschel
  • Course: Stanford CS224W (Graph Neural Networks)
  • Tool: PyTorch with einsum support
  • Community: r/MachineLearning, tensor-logic.org

🎯 Next Steps

Today:

  • ✓ Read this interactive guide
  • ✓ Try Exercise 1 (einsum practice)

This Week:

  • Complete Phase 1 Week 1
  • Run tensor_logic_starter.py

This Month:

  • Complete Phase 1 (foundations)
  • Start implementing your own examples