“AGI should be just like natural intelligence. Something which plans, something which is able to predict, produce new knowledge, be cheap and efficient, and be adaptive to the environment. It should reason, it should not mimic.” — Eve Bodnia, Founder & CEO, Logical Intelligence
Listen or watch now on
YouTube, Spotify, or Apple Podcasts
Eve Bodnia is the co-founder and CEO of Logical Intelligence, which is developing energy-based models (EBMs) for reasoning as an alternative to large language models. She argues that LLMs, which operate by recognizing and recombining patterns within language space, are structurally incapable of genuine reasoning. Eve’s alternative: Kona—an EBM that reasons in abstract latent space, learns rules about the world rather than surface patterns, and can interface with language models as one output channel among many. Eve traces the core ideas behind her architecture to decades of work in symmetry groups, condensed matter physics, and brain science—fields that share, as she explains, the same underlying mathematics. In a public demo, Kona solved a complex reasoning task for roughly $4 in compute, compared to an estimated $15,000 using frontier LLMs. With Yann LeCun serving as founding chair of its technical board, Logical Intelligence sits at the center of a small but growing effort to rethink AI beyond language-based models.
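To make the contrast concrete: an energy-based model assigns a scalar "energy" to each (problem, candidate answer) pair, and inference means searching for the answer that minimizes that energy, rather than sampling the next token as an autoregressive LLM does. The toy sketch below illustrates only this general idea; the energy function and candidate search here are invented for illustration and say nothing about Kona's actual (unpublished) architecture.

```python
# Toy illustration of energy-based inference (NOT Kona's architecture).
# An EBM scores (problem, candidate) pairs with a scalar energy;
# inference is optimization: pick the candidate with the lowest energy.

def energy(problem: int, candidate: int) -> float:
    """Hypothetical energy: zero exactly when the candidate satisfies
    the constraint 'candidate squared equals the problem'."""
    return abs(candidate * candidate - problem)

def infer(problem: int, candidates: range) -> int:
    # Argmin over candidates — "knowing" which answer fits the
    # constraints, rather than guessing a likely continuation.
    return min(candidates, key=lambda c: energy(problem, c))

answer = infer(49, range(0, 20))
print(answer)  # → 7, the candidate with energy 0
```

In a trained EBM the energy function is a learned neural network and the search runs in a continuous latent space, but the inference-as-minimization principle is the same.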
In our conversation, we explore:
Why Eve believes LLMs can’t truly extrapolate knowledge, even at larger scale
What energy-based reasoning models are—and where the “energy” concept comes from
The $4 vs. $15,000 benchmark, and what it tells us about the cost of guessing vs. knowing
How Logical Intelligence showed spontaneous knowledge transfer at just 16M parameters
Why systems like chip design, surgical robotics, and power grids need more than probabilistic AI
What formally verified code generation means for the future of programming
Why the math behind particle physics also explains how the brain filters signal from noise
How meeting Grigori Perelman as a teenager shaped Eve’s views on ego and ownership in science
Why Eve believes humans must remain the constraint-setters in advanced AI
How meditation, piano, and Eastern philosophy support her creative process
Thank you to the partners who make this possible
Granola: The app that might actually make you love meetings.
Persona: Trusted identity verification for any use case.
Explore the episode
Timestamps
(00:00) Introduction
(03:03) Eve’s encounter with Grigori Perelman
(05:38) Why bizarre people are Eve’s favorite people
(06:56) Her early obsession with math and physics
(09:02) The manifold hypothesis and language
(11:54) The Kekulé Problem
(14:05) Eve’s upbringing and her CERN research in high school
(17:40) Eve’s academic path
(20:36) Symmetry in nature
(22:58) Spirituality and creativity
(27:00) Theory vs. experiment
(29:03) Uncovering a critical gap in AI models
(33:45) What Logical Intelligence is building
(35:50) Logical Intelligence’s use cases
(42:08) Energy-based models explained
(45:06) LLMs vs. EBMs
(48:01) AGI defined
(51:22) Kona’s knowledge extrapolation
(53:20) The team behind Logical Intelligence
(58:09) Early investors in Logical Intelligence
(58:50) Feynman’s influence on Eve’s work
(1:01:15) How Eve sustains her creativity
(1:03:42) Final meditations
Follow Eve Bodnia
LinkedIn: https://www.linkedin.com/in/eve-bodnia-351b41355
X: https://x.com/evelovesolive
Website: https://logicalintelligence.com
Resources and episode mentions
Books
The Creative Act: A Way of Being: https://www.amazon.com/Creative-Act-Way-Being/dp/0593652886
Impro: Improvisation and the Theatre: https://www.amazon.com/Impro-Improvisation-Theatre-Keith-Johnstone/dp/0878301178
Perfectly Reasonable Deviations from the Beaten Track: https://www.amazon.com/Perfectly-Reasonable-Deviations-Beaten-Track/dp/0465023711
Letting Go: The Pathway of Surrender: https://www.amazon.com/Letting-David-Hawkins-M-D-Ph-D/dp/1401945015
People
Grigori Perelman: https://en.wikipedia.org/wiki/Grigori_Perelman
Cormac McCarthy: https://en.wikipedia.org/wiki/Cormac_McCarthy
Rick Rubin on X: https://x.com/RickRubin
David Krakauer’s website: https://davidckrakauer.com
Yann LeCun on LinkedIn: https://www.linkedin.com/in/yann-lecun
Michael Freedman: https://www.microsoft.com/en-us/research/people/michaelf
Vladislav Isenbaev on LinkedIn: https://www.linkedin.com/in/isenbaev
Other resources
Fields Medal: https://en.wikipedia.org/wiki/Fields_Medal
Manifold hypothesis: https://en.wikipedia.org/wiki/Manifold_hypothesis
The Kekulé Problem: https://nautil.us/the-kekul-problem-236574
CERN: https://www.home.cern
Can Humans Stay Smart in the Age of AI? (David Krakauer, President of the Santa Fe Institute): https://www.generalist.com/p/maintaining-human-intelligence-in-the-ai-era-david-krakauer
ICPC global: https://icpc.global
Eve’s post on X about Feynman’s writings: https://x.com/evelovesolive/status/2002470354485457115
Mahayana Buddhism: https://en.wikipedia.org/wiki/Mahayana
Subscribe to the show
I’d love it if you’d subscribe and share the show. Your support makes all the difference as we try to bring more curious minds into the conversation.
Production and marketing by penname.co. For inquiries about sponsoring the podcast, email [email protected].