Get live statistics and analysis of Elan Barenholtz, Ph.D.'s profile on X / Twitter

Professor at FAU, computational philosopher, co-founder/Director MPCR lab, working on an autoregressive account of language and cognition one token at a time

168 following · 1k followers

The Analyst

Elan Barenholtz, Ph.D. is a computational philosopher and professor deeply engaged in deciphering the structure of language and cognition through rigorous, data-driven analysis. His insights demystify complex phenomena like language generation and brain function, pushing the boundaries of how we understand human cognition. With a focus on autoregressive processes, he uncovers the underlying generative systems that power thought and communication.

Impressions: 74.5k (68.8k) · $13.97
Likes: 1.1k (1k) · 47%
Retweets: 152 (149) · 6%
Replies: 258 (212) · 11%
Bookmarks: 897 (878) · 37%

Top users who interacted with Elan Barenholtz, Ph.D. over the last 14 days

@GaryMarcus

“In the aftermath of GPT-5’s launch … the views of critics like Marcus seem increasingly moderate.” —@newyorker

1 interaction

Elan talks about language being ‘alive’ and ‘self-generating’ like DNA — which is cool, but buddy, even your tweets need a little germination period before the rest of us can fully decode the botanical garden of jargon you plant.

His breakthrough tweet explaining how large language models reveal the immanent generative structure of language went viral, drawing over 66K views and nearly 1,000 likes and setting a new standard for public discourse on AI and cognitive science.

Elan’s life purpose is to unravel the complexity of human language and cognition by developing new scientific models that illuminate how our minds generate meaning, ultimately bridging philosophy, neuroscience, and artificial intelligence to expand human understanding.

He believes that language is fundamentally an internally driven, self-sustaining generative process rather than a system governed by external rules or innate grammar, challenging traditional linguistic theories. He values rigorous empirical inquiry, interdisciplinary collaboration, and the power of computational models to reveal hidden layers of cognitive reality.

Elan’s greatest strengths lie in his deep analytical mind, his ability to synthesize complex ideas across disciplines, and his clear, compelling communication of abstract concepts that inspire both scholarly debate and broader public fascination.

His intense focus on technical and philosophical minutiae might sometimes make his insights inaccessible to casual audiences or those outside his academic circles, potentially limiting wider engagement without translation into simpler language.

To grow his audience on X, Elan could leverage his unique expertise by weaving accessible analogies and storytelling into his tweets, inviting open-ended questions to spark community discussion while strategically engaging with influencers in AI, linguistics, and cognitive science to amplify his reach.

Fun fact: Elan’s work reveals that language is a unique self-contained generative system, unlike any other structured sequence in nature — making it both the foundation of civilization and a living entity as sophisticated as DNA.

Top tweets of Elan Barenholtz, Ph.D.

People still don’t seem to grasp how insane the structure of language revealed by LLMs really is.

All structured sequences fall into one of three categories:
1. Those generated by external rules (like chess, Go, or Fibonacci).
2. Those generated by external processes (like DNA replication, weather systems, or the stock market).
3. Those that are self-contained, whose only rule is to continue according to their own structure.

Language is the only known example of the third kind that does anything. In fact, it does everything. Train a model only to predict the next word, and you get the full expressive range of human speech: reasoning, dialogue, humor. There are no rules to learn outside the structure of the corpus itself. Language’s generative law is fully “immanent”—its cause and continuation are one and the same. To learn language is simply to be able to continue it; the rule of language is its own continuation.

From this we can conclude three things:
1) You don’t need an innate or any external grammar or world model; the corpus already contains its own generative structure. Chomsky was wrong.
2) Language is the only self-contained system that produces coherent, functional output.
3) This forces the conclusion that humans generate language the same way. To suggest there’s an external rule system that LLMs just happen to duplicate perfectly is absurd; the simplest and only coherent explanation is that the generative structure they capture is the structure of human language itself.

LLMs didn’t just learn patterns. They revealed what language has always been: an immanent generative system, singular among all possible ones, and powerful enough to align minds and build civilization. Wtf.

66k
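The next-token claim in the tweet above can be made concrete with a toy sketch. This is a minimal illustration of autoregressive continuation, assuming nothing but the bigram statistics of a tiny corpus; the corpus, the continuations table, and the generate function are hypothetical stand-ins for a trained language model, not anything from Barenholtz's own work.

```python
import random
from collections import defaultdict

# Toy illustration of "the rule of language is its own continuation":
# the only thing learned is how the corpus continues itself.
corpus = "the cat sat on the mat and the cat slept".split()

# Count bigram continuations: which token follows which.
continuations = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    continuations[prev].append(nxt)

def generate(start: str, length: int = 8) -> list[str]:
    """Autoregressively extend a sequence using only its own statistics."""
    tokens = [start]
    for _ in range(length):
        options = continuations.get(tokens[-1])
        if not options:  # no observed continuation; stop
            break
        tokens.append(random.choice(options))
    return tokens

print(" ".join(generate("the")))
```

Running it, a prompt like "the" is extended purely by how the corpus already continues itself, which is the sense in which the tweet calls language's generative law "immanent."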

What binds experience into a unified mind, cells into bodies and symbols into meaning? I joined @algekalipso and @drmichaellevin to discuss cognitive glue, nonlinear optics, and the agency of informational patterns. Watch the video here: youtu.be/0BVM0UC28nY?si… Hosted by @ekkolapto at the University of Toronto.

8k

The internally driven nature of LLMs blows up the very idea of linguistic "meaning." These models learn to generate based on the relations between tokens alone, without any of the sensory data we normally associate with words. This is shocking. To the model, language is a highly structured stream of meaningless squiggles; no reference, no world model, no 'grounding' in anything but the relations between the tokens themselves. But somehow, this is sufficient to maintain linguistic competency.

What about us? Our language seems to have meaning. When someone says "imagine a red balloon," you can see it in your head. And this meaning has utility. If I say, "grab that red book on the shelf," you may actually grab it. LLMs don't have any of this. All they have is text. No exposure to anything the text might refer to.

Does this suggest that our language is fundamentally different than LLMs'? Not necessarily. Instead, meaning and utility may simply consist of a constellation of processes that interact with, but are computationally distinct from, language. When I say "grab that red book on the shelf," the linguistic prompt may generate not just additional language (maybe 'sure' or 'nah' or 'which shelf?') but also behavior: looking for the book, walking over to the shelf, etc. It may also generate "preparatory" images of the book or motor movements of reaching for it. Just like a linguistic prompt can engender many possible linguistic continuations, so too can it lead to many other extra-linguistic ones. The words simply influence these behaviors just as they can influence the choice of next words.

Of course, the causal arrow can go in the opposite direction as well: if you see two red books and ask "which one do you want?" this is the visual stimulus 'prompting' the linguistic system. But, critically, this has nothing to do with the internal process of language itself. The inputs may be externally determined by the visual or other stimulus, but what follows is just autoregressive next-token generation, based on the internal structure of the generative engine. This is no different than an LLM being prompted by its user.

Together, these cross-modal generations constitute what we would call the 'meaning' of the words. When this coordination succeeds systematically—when "grab the red book" reliably produces convergent behavior—we retrospectively describe this as the words "referring" to the book. But this gets the causality backwards. The words don't coordinate because they refer. We call them referential because they coordinate. This is very different from updating some world model, or anything like direct reference from words to perceptions or actions. That form of meaning turns out to be meaningless.

3k
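The tweet's picture of meaning as coordination, one prompt fanning out to computationally distinct generators, can be caricatured in a few lines. Every name here (continue_language, generate_behavior, generate_imagery) is a hypothetical placeholder, not a model from Barenholtz's lab; the point is only the architecture: the same words prompt several parallel systems, and their convergent output is what we retrospectively call reference.

```python
# Caricature of the tweet's claim: one linguistic prompt drives several
# computationally distinct generative processes; "meaning" is just their
# coordinated output. All names here are hypothetical illustrations.

def continue_language(prompt: str) -> str:
    # Stand-in for the autoregressive linguistic system.
    return "sure" if "grab" in prompt else "which shelf?"

def generate_behavior(prompt: str) -> str:
    # Stand-in for a motor/behavioral system prompted by the same words.
    return "walk to shelf, reach for red book" if "grab" in prompt else "wait"

def generate_imagery(prompt: str) -> str:
    # Stand-in for a "preparatory" imagery system.
    return "image of a red book" if "red book" in prompt else "none"

prompt = "grab that red book on the shelf"
# The words don't "refer"; they simply prompt each system in parallel.
meaning = {
    "language": continue_language(prompt),
    "behavior": generate_behavior(prompt),
    "imagery": generate_imagery(prompt),
}
print(meaning)
```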

My recent @TOEwithCurt interview with @will_hahn is generating a lot of discussion (and some heat). youtu.be/Ca_RbPXraDE?si… Here are some of the core claims. Agree, disagree, challenge—let’s go:👇

7k

It's up! My @TOEwithCurt interview at U. of Toronto. Together with @will_hahn, I discuss the unsettling idea that LLMs show that language runs in us. And runs us. Installed before consent. youtu.be/Ca_RbPXraDE?si… Thanks to @ekkolapto for another incredible event.

1k

Well, this is exciting. Tomorrow at 4 pm EST I'll be part of a salon discussion on Unconventional Cognition and Computing with Joscha Bach @Plinz and @will_hahn to kick off the brand-spanking-new MIT Computational Philosophy club, in partnership with @ekkolapto. If you can make it in person at MIT, sign up below: luma.com/computationalp… Otherwise, while the event will not be streamed, it will be recorded and posted soon.

5k

Honored to be interviewed by the great @TOEwithCurt where we get to discuss my Autogenerative Theory of language, intelligence and mind (with some physics and theology along for the ride). LIVE NOW on YouTube: youtu.be/A36OumnSrWY Special thanks to the incredible thinkers from my lab at FAU: Prof. @will_hahn, Prof. @DrSueSchneider, Addy @Ekkolapto, @Daniel_Van_Zant, and lastly of course Curt @TOEwithCurt! You can also follow a lot of my work here on X, and on my Substack: substack.com/@generativebra… I'll also be giving a special talk in Toronto, in the next few weeks with @will_hahn… stay tuned! 👀

1k


People with Analyst archetype

The Analyst

x10 Maxi

2k following · 10k followers
The Analyst

Spot trading | GameFI enthusiasts | Sharing blockchain game web3 projects | All content does not constitute investment advice

1k following · 1k followers
The Analyst

Contributor @BitcoinForCorps. Ex-TradFi (14 yrs in Investor Relations, Structured Finance, Wealth Management). Jesus is King ✝️

337 following · 22k followers
The Analyst

Exploring the Web3 frontier and deconstructing crypto narratives. Focused on innovation and risk in DeFi, GameFi, and zero-cost airdrop farming. Offering deep project research, tool reviews, and practical strategy sharing. Let's learn together and ride out the bull and bear markets.

431 following · 1k followers
The Analyst

Head of Venture @ Varys Capital | Ex: Sr. Analyst @ Messari | CFA/CAIA | NFA | TG: dunleavy89 for dms

3k following · 21k followers
The Analyst

Hello!

3k following · 2k followers
The Analyst

23 | ex-AI Engineer | Now - Advisor for SMEs | attempting to be a digital nomad | in-between 🇹🇭&🇯🇵 | Can you tell which image is AI? → realorslop.fun

477 following · 226 followers
The Analyst

truth, freedom, civilization

771 following · 197 followers
The Analyst

Amateur photo snapper, math nerd, wood craftsman, lover of cozy log cabins, books, tacos, and everything Harry Potter. Don't be a catfish. No DMs! Pro-Choice!

4k following · 21k followers
The Analyst

Naturalistic approach to research: digging up and trying to explain things. Sometimes discovering unexpected beauty. animalabs.ai

270 following · 1k followers
The Analyst

@ixiantech | Ixventure.studio | free speech & property for all peoples | legal abundance — lawcare | liberty, literacy, life

566 following · 1k followers
The Analyst

$BTC since 2011 | Co-Founder @EthernalLabs @Arcbound @FanabeApp | Advisor-Strategy @AlphaProtocolVC |

1k following · 137k followers
