Neurosymbolic AI, knowledge graphs, and making AI actually explainable.
My main interest is in neurosymbolic AI — combining neural networks with structured symbolic reasoning. I believe explainable AI starts with explaining how AI itself works, which is why I build interactive visualizers that let people explore the math behind the models hands-on.
I’m passionate about explainable AI because I think it’s critical for democracy. AI will bring unprecedented economic abundance, but that abundance needs equally unprecedented democratic social structures alongside it. AI is, at its core, the study of how to think — our insights into machine reasoning apply to human reasoning too. If we can explain how AI thinks, we can explain the world, and a society that understands the systems shaping it is one that can actually govern them.
I also build AI tooling and knowledge graph infrastructure, from context management (OpenClaw) to structured data pipelines (Wikidata, Pramana).