Move over, deep learning: Symbolica’s structured approach could transform AI

Artificial intelligence startup Symbolica emerged from stealth today and unveiled a novel approach to constructing AI models, leveraging advanced mathematics to imbue systems with human-like reasoning capabilities and unprecedented transparency. The company’s goal is to move beyond the “alchemy” of contemporary AI to a more rigorous, scientific foundation.

In addition to its public launch, Symbolica revealed today that it has raised $33 million in combined seed and Series A funding, led by Khosla Ventures, with participation from Day One Ventures, General Catalyst, Abstract Ventures and Buckley Ventures.

“We’re not building a model. While we will productize a model, our company is fundamentally focused on building a description of how to generate architectures in a way that’s fundamentally more powerful than has been possible,” Symbolica’s founder and CEO George Morgan said in an interview with VentureBeat.

Morgan is a former senior autopilot engineer at Tesla who worked on its self-driving systems. He founded Symbolica with a team of Ph.D. mathematicians, machine learning (ML) researchers and engineers from Tesla, Neuralink and ClearML. The company also counts Stephen Wolfram, the creator of WolframAlpha and Mathematica and a Fellow of the American Mathematical Society, as an advisor.

The alchemy of deep learning

At the heart of Symbolica’s approach is “category theory,” a branch of mathematics that formalizes mathematical structures and their relationships. By grounding AI in this rigorous framework, the company believes it can create models that have reasoning as an inherent capability, rather than an emergent side effect of training on huge datasets.
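To make the idea concrete: category theory studies objects, the morphisms (structure-preserving maps) between them, and the laws that composition of those maps must obey. The sketch below is purely illustrative and is not Symbolica's actual framework; all function names are hypothetical, chosen only to show the composition and identity laws that any category must satisfy.

```python
# Illustrative only: the bare bones of category theory in Python.
# A category has objects, morphisms between objects, a composition
# operation, and identity morphisms obeying two laws.

def compose(g, f):
    """Morphism composition: (g . f)(x) = g(f(x))."""
    return lambda x: g(f(x))

def identity(x):
    """Identity morphism: leaves its input unchanged."""
    return x

# Two example morphisms between "objects" (here, plain Python types):
double = lambda n: n * 2            # int -> int
describe = lambda n: f"value={n}"   # int -> str

# Law 1 (associativity): h . (g . f) == (h . g) . f
# Law 2 (identity):      id . f == f == f . id
pipeline = compose(describe, double)   # int -> str
print(pipeline(21))                    # value=42
print(compose(identity, double)(5) == double(5))  # True
```

The appeal for AI, as the article describes it, is that these laws constrain how components may be wired together, so an architecture built from them carries an explicit, inspectable structure rather than one that must be inferred after training.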

This stands in contrast to the deep learning systems dominating AI today, which Morgan likened to the “alchemy” that preceded modern chemistry. “There is no rigor to how we build AI architectures, in industry or in academia today. What other discipline of engineering do you know of that doesn’t have some sort of scientific rigor backing it?” he said.

He drew an analogy to drug discovery. “Imagine you’re trying to invent Tylenol. You’re probably not going to just mix a bunch of random stuff together and hope that you get Tylenol. You’re going to think about the chemical receptors that exist, the molecular interactions with those receptors, and so on. There’s a rigorous scientific discipline associated with the solution to these problems. This currently doesn’t exist in any capacity for AI or machine learning.”

The lack of rigor, he argued, means current AI models are essentially black boxes. Once trained, “you have no idea what’s going on inside it, you have no clue what the structure it’s learned is or how it’s learned associations.”

Opening the black box

Symbolica’s approach aims to open up that black box, enabling interpretability. “If we can describe an architecture, we can also describe what that architecture is learning, what kinds of structures it’s learning, what kinds of structures we’re embedding inside of that architecture. And so this is exactly a one-way ticket to interpretability in AI models,” said Morgan.

Interpretability is crucial as AI is increasingly applied to high-stakes decisions in industries like healthcare and finance. It’s also a prerequisite for effective regulation of AI systems. “Symbolica gives you an extremely formal, precise definition of how to understand the models,” said Morgan. “And we can use that to regulate models.”

Symbolica’s approach also promises AI systems that can perform complex reasoning tasks with far less training data and computing power than today’s data-hungry models. “If we build an architecture that’s natively capable of reasoning, it takes much less data to get that model to perform as well as completely unstructured models that don’t have this notion of reasoning built in,” Morgan explained.

The road to reasoning machines

If successful, the implications could be immense. Symbolica envisions its AI models deployed across virtually every industry, taking on cognitive tasks that have thus far remained the sole province of human intelligence.

However, the road ahead is long and filled with obstacles. Creating an overarching mathematical framework for AI is vastly more complex than optimizing a particular model, as competitors such as OpenAI, Anthropic, Google and Meta do. Symbolica will need to prove its theories in practical applications and will face stiff competition from tech giants with billions to pour into AI R&D.

But Symbolica’s contrarian approach is gaining validation from the AI research community. The company recently co-authored a paper with Google DeepMind on “categorical deep learning,” which mathematically demonstrated how its approach could supersede previous work on geometric deep learning to build structurally aware models.

Symbolica’s focus on rigor and interpretability may also find receptive audiences among enterprise adopters of AI, particularly in heavily regulated industries, and in governmental agencies grappling with how to responsibly deploy and oversee increasingly powerful AI systems. If Symbolica can successfully navigate the chasm between theoretical breakthroughs and real-world applications, it could capture a significant slice of an enterprise AI market expected to top $270 billion by 2032.

At a philosophical level, Symbolica’s efforts to move beyond pattern-matching to genuine machine reasoning, if successful, would mark a major milestone on the road to artificial general intelligence—the still-speculative notion of AI systems that can match the fluid intelligence of the human mind.

The journey to reasoning machines will not be an easy one. But in eschewing the alchemy of much contemporary AI work for a more disciplined approach, Symbolica may be laying the groundwork for the next great leap forward. As Morgan put it, “That’s why we can build much smaller models—because we’ve focused very directly on embedding the structure into the models, rather than relying on large amounts of compute to learn the structure that we could have specified initially.”

In an AI landscape where size increasingly seems to matter above all, Symbolica is betting that a little structure will go a long way.