Emma Leonhart

Neurosymbolic AI, knowledge graphs, and making AI actually explainable.

About

My main interest is in neurosymbolic AI — combining neural networks with structured symbolic reasoning. I believe explainable AI starts with explaining how AI itself works, which is why I build interactive visualizers that let people explore the math behind the models hands-on.

I’m passionate about explainable AI because I think it’s critical for democracy. AI will bring unprecedented economic abundance, but that abundance needs equally unprecedented democratic social structures to go alongside it. AI is essentially the study of how to think — our insights into machine reasoning apply to human reasoning too. If we can explain how AI thinks, we can explain the world, and a society that understands the systems shaping it is one that can actually govern them.

I also build AI tooling and knowledge graph infrastructure, from context management (OpenClaw) to structured data pipelines (Wikidata, Pramana).

Explaining AI

Database Theory

sutraDB Theory →
Interactive visualizations of database theory — how graph and vector databases work, and the innovations behind sutraDB: HNSW in RDF, subgraph SIMD indexing, SPARQL exit conditions, and traversal optimization.
8 Interactive Visualizations

Projects

Vibecoding Tutorial
A beginner-friendly guide to vibecoding with AI. Covers good habits like unit testing, project structure, and working effectively with AI assistants.
Tutorial
cleanvibe
Python library that helps bootstrap well-documented vibecoding projects with clean structure from the start.
Python
claw.py
Library for creating exportable OpenClaw context — portable, structured context for AI agent sessions.
Python, AI Tooling

Links