Get live statistics and analysis of Elan Barenholtz, Ph.D.'s profile on X / Twitter
Professor at FAU, computational philosopher, co-founder/Director MPCR lab, working on an autoregressive account of language and cognition one token at a time
168 following · 1,388 followers
The Analyst
Elan Barenholtz, Ph.D. is a computational philosopher and professor deeply engaged in deciphering the structure of language and cognition through rigorous, data-driven analysis. His insights demystify complex phenomena like language generation and brain function, pushing the boundaries of how we understand human cognition. With a focus on autoregressive processes, he uncovers the underlying generative systems that power thought and communication.
Elan talks about language being ‘alive’ and ‘self-generating’ like DNA — which is cool, but buddy, even your tweets need a little germination period before the rest of us can fully decode the botanical garden of jargon you plant.
His breakthrough tweet explaining how large language models reveal the immanent generative structure of language went viral, drawing over 66K views and nearly 1,000 likes and sparking wide public discussion of AI and cognitive science.
Elan’s life purpose is to unravel the complexity of human language and cognition by developing new scientific models that illuminate how our minds generate meaning, ultimately bridging philosophy, neuroscience, and artificial intelligence to expand human understanding.
He believes that language is fundamentally an internally driven, self-sustaining generative process rather than a system governed by external rules or innate grammar, challenging traditional linguistic theories. He values rigorous empirical inquiry, interdisciplinary collaboration, and the power of computational models to reveal hidden layers of cognitive reality.
Elan’s greatest strengths lie in his deep analytical mind, his ability to synthesize complex ideas across disciplines, and his clear, compelling communication of abstract concepts that inspire both scholarly debate and broader public fascination.
His intense focus on technical and philosophical minutiae might sometimes make his insights inaccessible to casual audiences or those outside his academic circles, potentially limiting wider engagement without translation into simpler language.
To grow his audience on X, Elan could leverage his unique expertise by weaving accessible analogies and storytelling into his tweets, inviting open-ended questions to spark community discussion, and strategically engaging with influencers in AI, linguistics, and cognitive science to amplify his reach.
Fun fact: Elan’s work reveals that language is a unique self-contained generative system, unlike any other structured sequence in nature — making it both the foundation of civilization and a living entity as sophisticated as DNA.
{"data":{"__meta":{"device":false,"path":"/creators/ebarenholtz"},"/creators/ebarenholtz":{"data":{"user":{"id":"99850011","name":"Elan Barenholtz, Ph.D.","description":"Professor at FAU, computational philosopher, co-founder/Director MPCR lab, working on an autoregressive account of language and cognition one token at a time","followers_count":1388,"friends_count":168,"statuses_count":556,"profile_image_url_https":"https://pbs.twimg.com/profile_images/1897309087185936384/NAd_3X81_normal.jpg","screen_name":"ebarenholtz","location":"","entities":{"description":{"urls":[]},"url":{"urls":[{"display_url":"barenholtz.ai","expanded_url":"http://barenholtz.ai","url":"https://t.co/OwMtiDMPwV","indices":[0,23]}]}}},"details":{"type":"The Analyst","description":"Elan Barenholtz, Ph.D. is a computational philosopher and professor deeply engaged in deciphering the structure of language and cognition through rigorous, data-driven analysis. His insights demystify complex phenomena like language generation and brain function, pushing boundaries on how we understand human cognition. With a focus on autoregressive processes, he uncovers the underlying generative systems that power thought and communication.","facts":"Fun fact: Elan’s work reveals that language is a unique self-contained generative system, unlike any other structured sequence in nature — making it both the foundation of civilization and a living entity as sophisticated as DNA.","purpose":"Elan’s life purpose is to unravel the complexity of human language and cognition by developing new scientific models that illuminate how our minds generate meaning, ultimately bridging philosophy, neuroscience, and artificial intelligence to expand human understanding.","beliefs":"He believes that language is fundamentally an internally driven, self-sustaining generative process rather than a system governed by external rules or innate grammar, challenging traditional linguistic theories. 
He values rigorous empirical inquiry, interdisciplinary collaboration, and the power of computational models to reveal hidden layers of cognitive reality.","strength":"Elan’s greatest strengths lie in his deep analytical mind, his ability to synthesize complex ideas across disciplines, and his clear, compelling communication of abstract concepts that inspire both scholarly debate and broader public fascination.","weakness":"His intense focus on technical and philosophical minutiae might sometimes make his insights inaccessible to casual audiences or those outside his academic circles, potentially limiting wider engagement without translation into simpler language.","recommendation":"To grow his audience on X, Elan could leverage his unique expertise by weaving accessible analogies and storytelling into his tweets, inviting open-ended questions to spark community discussion while strategically engaging with influencers in AI, linguistics, and cognitive science to amplify his reach.","roast":"Elan talks about language being ‘alive’ and ‘self-generating’ like DNA — which is cool, but buddy, even your tweets need a little germination period before the rest of us can fully decode the botanical garden of jargon you plant.","win":"His breakthrough tweet explaining how large language models reveal the immanent generative structure of language overtook viral norms with over 66K views and nearly 1,000 likes, setting a new standard in public discourse on AI and cognitive science."},"tweets":[{"bookmarked":false,"display_text_range":[0,276],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1988224995533685074","view_count":66199,"bookmark_count":833,"created_at":1762864753000,"favorite_count":999,"quote_count":39,"reply_count":189,"retweet_count":144,"user_id_str":"99850011","conversation_id_str":"1988224995533685074","full_text":"People still don’t seem to grasp how insane the structure of language revealed by LLMs really is.\n\nAll structured sequences fall into one of three categories:\n1.Those generated by external rules (like chess, Go, or Fibonacci).\n2.Those generated by external processes (like DNA replication, weather systems, or the stock market).\n3.Those that are self-contained, whose only rule is to continue according to their own structure.\n\nLanguage is the only known example of the third kind that does anything.\n\nIn fact, it does everything.\n\nTrain a model only to predict the next word, and you get the full expressive range of human speech: reasoning, dialogue, humor. There are no rules to learn outside the structure of the corpus itself. Language’s generative law is fully “immanent”—its cause and continuation are one and the same. To learn language is simply to be able to continue it; the rule of language is its own continuation.\n\nFrom this we can conclude three things:\n\n1)You don’t need an innate or any external grammar or world model; the corpus already contains its own generative structure. Chomsky was wrong.\n2) Language is the only self-contained system that produces coherent, functional output.\n3) This forces the conclusion that humans generate language the same way. To suggest there’s an external rule system that LLMs just happen to duplicate perfectly is absurd; the simplest and only coherent explanation is that the generative structure they capture is the structure of human language itself.\n\nLLMs didn’t just learn patterns. 
They revealed what language has always been: an immanent generative system, singular among all possible ones, and powerful enough to align minds and build civilization.\n\nWtf.","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,273],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"youtu.be/0BVM0UC28nY?si…","expanded_url":"https://youtu.be/0BVM0UC28nY?si=kI0YwwdBS7f3-bKw","url":"https://t.co/21spWnsAL9","indices":[239,262]},{"display_url":"youtu.be/0BVM0UC28nY?si…","expanded_url":"https://youtu.be/0BVM0UC28nY?si=kI0YwwdBS7f3-bKw","url":"https://t.co/KpTTEViP7o","indices":[239,262]}],"user_mentions":[{"id_str":"282948199","name":"Captain Pleasure, Andrés Gómez Emilsson","screen_name":"algekalipso","indices":[97,109]},{"id_str":"1467127267","name":"Michael Levin","screen_name":"drmichaellevin","indices":[114,129]},{"id_str":"282948199","name":"Captain Pleasure, Andrés Gómez Emilsson","screen_name":"algekalipso","indices":[97,109]},{"id_str":"1467127267","name":"Michael Levin","screen_name":"drmichaellevin","indices":[114,129]},{"id_str":"1752362522965880832","name":"ekkolapto","screen_name":"ekkolapto","indices":[274,284]}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1941906374092620248","view_count":8857,"bookmark_count":36,"created_at":1751821533000,"favorite_count":83,"quote_count":9,"reply_count":8,"retweet_count":22,"user_id_str":"99850011","conversation_id_str":"1941906374092620248","full_text":"What binds experience into a unified mind, cells into bodies and symbols into meaning? I joined @algekalipso and @drmichaellevin to discuss cognitive glue, nonlinear optics, and the agency of informational patterns. Watch the video here:\nhttps://t.co/KpTTEViP7o\n\nHosted by @ekkolapto at the University of Toronto.","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,279],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1988767530425725423","view_count":3843,"bookmark_count":48,"created_at":1762994103000,"favorite_count":73,"quote_count":0,"reply_count":30,"retweet_count":5,"user_id_str":"99850011","conversation_id_str":"1988767530425725423","full_text":"The internally driven nature of LLMs blows up the very idea of linguistic \"meaning.\" These models learn to generate based on the relations between tokens alone, without any of the sensory data we normally associate with words. This is shocking. To the model, language is a highly structured stream of meaningless squiggles; no reference, no world model, no 'grounding' in anything but the relations between the tokens themselves. But somehow, this is sufficient to maintain linguistic competency. \n\nWhat about us? Our language seems to have meaning. When someone says \"imagine a red balloon,\" you can see it in your head. And this meaning has utility. If I say, \"grab that red book on the shelf,\" you may actually grab it. LLMs don't have any of this. All they have is text. No exposure to anything the text might refer to.\n\nDoes this suggest that our language is fundamentally different than LLMs'? Not necessarily. 
Instead, meaning and utility may simply consist of a constellation of processes that interact with, but are computationally distinct from, language. When I say \"grab that red book on the shelf,\" the linguistic prompt may generate not just additional language (maybe 'sure' or 'nah' or 'which shelf?’') but also behavior: looking for the book, walking over to the shelf, etc.. It may also generate \"preparatory\" images of the book or motor movements of reaching for it. Just like a linguistic prompt can engender many possible linguistic continuations, so too can it lead to many other extra-linguistic ones. The words simply influence these behaviors just as they can influence the choice of next words.\n\nOf course, the causal arrow can go in the opposite direction as well: if you see two red books and ask \"which one do you want?\" this is the visual stimulus 'prompting' the linguistic system. But, critically, this has nothing to do with the internal process of language itself. The inputs may be externally determined by the visual or other stimulus but afterward is just autoregressive next token generation, based on the internal structure of the generative engine. This is no different than an LLM being prompted by its user.\n\nTogether, these cross-modal generations constitute what we would call the 'meaning' of the words. When this coordination succeeds systematically—when \"grab the red book\" reliably produces convergent behavior—we retrospectively describe this as the words \"referring\" to the book. But this gets the causality backwards. The words don't coordinate because they refer. We call them referential because they coordinate\n.\nThis is very different than updating some world model, or anything like direct reference from words to perceptions or actions. That form of meaning turns out to be meaningless.","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,275],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"elanbarenholtz.substack.com/p/beyond-predi…","expanded_url":"https://elanbarenholtz.substack.com/p/beyond-prediction-reconceptualizing","url":"https://t.co/edzYBVH2hi","indices":[844,867]}],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1946255348886872163","view_count":6134,"bookmark_count":39,"created_at":1752858409000,"favorite_count":71,"quote_count":5,"reply_count":15,"retweet_count":11,"user_id_str":"99850011","conversation_id_str":"1946255348886872163","full_text":"The influential predictive-coding model sees the brain as a machine for minimizing error: constantly forecasting sensory inputs and adjusting internal models when reality deviates.\n\nBut here’s an alternative: the brain is not predictive, but generative. Like a large language model, it unfolds autoregressively—producing its next state based on the previous ones, guided by learned patterns and goals.\n\nPerception? Not error correction, but conditioned, purposeful, generation.\nAction? Not fulfilling predictions, but producing goal-directed trajectories.\nLearning? Not improving forecasts, but refining the internal rules of sequence generation.\n\nThe brain isn’t trying to model the world. 
It’s trying to generate coherent, effective engagement with it—state by state.\n\nThis is cognition as continuous, self-conditioned generation.\n\n🔗 Essay: https://t.co/edzYBVH2hi","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,164],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1943451351700381729","view_count":3132,"bookmark_count":10,"created_at":1752189884000,"favorite_count":63,"quote_count":2,"reply_count":14,"retweet_count":6,"user_id_str":"99850011","conversation_id_str":"1943451351700381729","full_text":"We now know that language is at least as sophisticated—and as worthy of being called 'alive'— as DNA. Instead of molding bodies to replicate itself, it molds minds.","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,197],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"youtu.be/Ca_RbPXraDE?si…","expanded_url":"https://youtu.be/Ca_RbPXraDE?si=mEKjnj3iPQoeUU50","url":"https://t.co/GlbH5uaxCf","indices":[101,124]}],"user_mentions":[{"id_str":"1002272583465734145","name":"Curt Jaimungal","screen_name":"TOEwithCurt","indices":[10,22]},{"id_str":"476935002","name":"William Edward Hahn, PhD","screen_name":"will_hahn","indices":[38,48]}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1948461138779537557","view_count":7937,"bookmark_count":22,"created_at":1753384310000,"favorite_count":56,"quote_count":4,"reply_count":18,"retweet_count":8,"user_id_str":"99850011","conversation_id_str":"1948461138779537557","full_text":"My recent @TOEwithCurt interview with @will_hahn is generating a lot of discussion (and some heat).\nhttps://t.co/GlbH5uaxCf\nHere are some of the core claims. \nAgree, disagree, challenge—let’s go:👇","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,263],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1984234255459971441","view_count":2455,"bookmark_count":12,"created_at":1761913286000,"favorite_count":55,"quote_count":1,"reply_count":30,"retweet_count":3,"user_id_str":"99850011","conversation_id_str":"1984234255459971441","full_text":"Epiphenomenalism—the claim that consciousness is real but non-causal—denies the self-evident: feelings cause behavior. C-fibers fire, but pain makes us withdraw; dopamine flows, but pleasure makes us pursue. 
The hardware executes, but the software feels—and acts.","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,276],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1978916840861765675","view_count":2151,"bookmark_count":19,"created_at":1760645516000,"favorite_count":43,"quote_count":0,"reply_count":7,"retweet_count":1,"user_id_str":"99850011","conversation_id_str":"1978916840861765675","full_text":"The subjectivity of consciousness is a product of the recursive autoregressive loop of cognition. The “I” is what it feels like to read in your own state in order to generate your next one. The sense of self continuity arises from the stable trajectory of this loop over time, shaped by the inertia of its own history.","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,266],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"youtu.be/Ca_RbPXraDE?si…","expanded_url":"https://youtu.be/Ca_RbPXraDE?si=FMCapf3b1WmoM5xd","url":"https://t.co/K0J4757B9b","indices":[191,214]},{"display_url":"youtu.be/Ca_RbPXraDE?si…","expanded_url":"https://youtu.be/Ca_RbPXraDE?si=FMCapf3b1WmoM5xd","url":"https://t.co/aSv4sK0YR0","indices":[191,214]}],"user_mentions":[{"id_str":"1002272583465734145","name":"Curt Jaimungal","screen_name":"TOEwithCurt","indices":[13,25]},{"id_str":"476935002","name":"William Edward Hahn, PhD","screen_name":"will_hahn","indices":[69,79]},{"id_str":"1752362522965880832","name":"ekkolapto","screen_name":"ekkolapto","indices":[225,235]},{"id_str":"1002272583465734145","name":"Curt Jaimungal","screen_name":"TOEwithCurt","indices":[13,25]},{"id_str":"476935002","name":"William Edward Hahn, PhD","screen_name":"will_hahn","indices":[69,79]},{"id_str":"1752362522965880832","name":"ekkolapto","screen_name":"ekkolapto","indices":[225,235]}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1947413766938808385","view_count":1993,"bookmark_count":4,"created_at":1753134597000,"favorite_count":39,"quote_count":3,"reply_count":6,"retweet_count":7,"user_id_str":"99850011","conversation_id_str":"1947413766938808385","full_text":"It's up! My @TOEwithCurt interview at U. of Toronto. Together with @will_hahn, I discuss the unsettling idea that LLMs show that language runs in us. And runs us. 
Installed before consent.\nhttps://t.co/aSv4sK0YR0\nThanks to @ekkolapto for another incredible event.","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,276],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"luma.com/computationalp…","expanded_url":"https://luma.com/computationalphilosophy","url":"https://t.co/rUxc53dfHa","indices":[318,341]}],"user_mentions":[{"id_str":"28131948","name":"Joscha Bach","screen_name":"Plinz","indices":[139,145]},{"id_str":"476935002","name":"William Edward Hahn, PhD","screen_name":"will_hahn","indices":[150,160]},{"id_str":"1752362522965880832","name":"ekkolapto","screen_name":"ekkolapto","indices":[251,261]},{"id_str":"28131948","name":"Joscha Bach","screen_name":"Plinz","indices":[139,145]},{"id_str":"476935002","name":"William Edward Hahn, PhD","screen_name":"will_hahn","indices":[150,160]},{"id_str":"1752362522965880832","name":"ekkolapto","screen_name":"ekkolapto","indices":[251,261]}]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1965508899538436400","view_count":5946,"bookmark_count":10,"created_at":1757448813000,"favorite_count":35,"quote_count":0,"reply_count":3,"retweet_count":5,"user_id_str":"99850011","conversation_id_str":"1965508899538436400","full_text":"Well, this is exciting. Tomorrow at 4 pm EST I'll be part of a salon discussion on Unconventional Cognition and Computing with Joscha Bach @Plinz and @will_hahn to kick off the brand-spanking new MIT Computational Philosophy club, in partenrship with @ekkolapto . \n If you can make it in person at MIT sign up below. \nhttps://t.co/rUxc53dfHa. Otherwise, while the event will not be streamed, it will be recorded and posted soon.","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,276],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"youtu.be/A36OumnSrWY","expanded_url":"https://youtu.be/A36OumnSrWY","url":"https://t.co/3ypU4J1pBm","indices":[210,233]},{"display_url":"youtu.be/A36OumnSrWY","expanded_url":"https://youtu.be/A36OumnSrWY","url":"https://t.co/mGhAIxI55c","indices":[210,233]},{"display_url":"substack.com/@generativebra…","expanded_url":"https://substack.com/@generativebrain","url":"https://t.co/uZSZQmFRRg","indices":[482,505]}],"user_mentions":[{"id_str":"1002272583465734145","name":"Curt Jaimungal","screen_name":"TOEwithCurt","indices":[39,51]},{"id_str":"1002272583465734145","name":"Curt Jaimungal","screen_name":"TOEwithCurt","indices":[39,51]},{"id_str":"476935002","name":"William Edward Hahn, PhD","screen_name":"will_hahn","indices":[302,312]},{"id_str":"975058405848158208","name":"Susan Schneider","screen_name":"DrSueSchneider","indices":[320,335]},{"id_str":"1752362522965880832","name":"ekkolapto","screen_name":"Ekkolapto","indices":[342,352]},{"id_str":"1534652308079910912","name":"Daniel Van Zant","screen_name":"Daniel_Van_Zant","indices":[354,370]},{"id_str":"1002272583465734145","name":"Curt Jaimungal","screen_name":"TOEwithCurt","indices":[398,410]},{"id_str":"476935002","name":"William Edward Hahn, 
PhD","screen_name":"will_hahn","indices":[581,591]}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1932852531929162052","view_count":1994,"bookmark_count":10,"created_at":1749662928000,"favorite_count":28,"quote_count":2,"reply_count":7,"retweet_count":5,"user_id_str":"99850011","conversation_id_str":"1932852531929162052","full_text":"Honored to be interviewed by the great @TOEwithCurt where we get to discuss my Autogenerative Theory of language, intelligence and mind (with some physics and theology along for the ride). LIVE NOW on YouTube: https://t.co/mGhAIxI55c\n\nSpecial thanks to the incredible thinkers from my lab at FAU: Prof @will_hahn, Prof. @DrSueSchneider, Addy @Ekkolapto, @Daniel_Van_Zant –and lastly of course Curt @TOEwithCurt! \n\nYou can also follow a lot of my work here on X, and on my Substack: https://t.co/uZSZQmFRRg\n\nI’ll also be giving a special talk in Toronto, in the next few weeks with @will_hahn… stay tuned! 👀","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,275],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1958162457626546364","view_count":1018,"bookmark_count":11,"created_at":1755697285000,"favorite_count":26,"quote_count":2,"reply_count":4,"retweet_count":3,"user_id_str":"99850011","conversation_id_str":"1958162457626546364","full_text":"Cognition is not storage, retrieval, representation, or prediction; it is state traversal through a learned embedding of ‘tokens’: words, images, actions. The embedding space is sculpted by learning/development to optimize for trajectories—navigated via continuous contextual activation— that lead to coherent thought and effective behavior.","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,144],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1975066018251948071","view_count":669,"bookmark_count":5,"created_at":1759727408000,"favorite_count":20,"quote_count":0,"reply_count":3,"retweet_count":2,"user_id_str":"99850011","conversation_id_str":"1975066018251948071","full_text":"Civilization invented language, not us. Just as the colony created pheromones, not the ants. 
The medium is the message; the message is not ours.","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,273],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"youtu.be/3PKydyYsmrM?si…","expanded_url":"https://youtu.be/3PKydyYsmrM?si=srjGPtVfbOmaxOpP","url":"https://t.co/sKTSUPoLCE","indices":[250,273]},{"display_url":"youtu.be/3PKydyYsmrM?si…","expanded_url":"https://youtu.be/3PKydyYsmrM?si=srjGPtVfbOmaxOpP","url":"https://t.co/6XuvCwvakp","indices":[250,273]}],"user_mentions":[{"id_str":"929029531","name":"Jes Parent 🧭","screen_name":"JesParent","indices":[275,285]},{"id_str":"975058405848158208","name":"Susan Schneider","screen_name":"DrSueSchneider","indices":[286,301]}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1972278428926701610","view_count":17740,"bookmark_count":13,"created_at":1759062795000,"favorite_count":15,"quote_count":3,"reply_count":7,"retweet_count":0,"user_id_str":"99850011","conversation_id_str":"1972278428926701610","full_text":"LLMs point to a stark divide: words generate themselves without knowing what they mean, while meaning/feeling arises from sensory life. In this video (excerpted from a conversation) I argue our minds host both: a symbolic engine and a feeling body. \nhttps://t.co/6XuvCwvakp\n\n@JesParent @DrSueSchneider","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null}],"ctweets":[{"bookmarked":false,"display_text_range":[0,276],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1988224995533685074","view_count":66199,"bookmark_count":833,"created_at":1762864753000,"favorite_count":999,"quote_count":39,"reply_count":189,"retweet_count":144,"user_id_str":"99850011","conversation_id_str":"1988224995533685074","full_text":"People still don’t seem to grasp how insane the structure of language revealed by LLMs really is.\n\nAll structured sequences fall into one of three categories:\n1.Those generated by external rules (like chess, Go, or Fibonacci).\n2.Those generated by external processes (like DNA replication, weather systems, or the stock market).\n3.Those that are self-contained, whose only rule is to continue according to their own structure.\n\nLanguage is the only known example of the third kind that does anything.\n\nIn fact, it does everything.\n\nTrain a model only to predict the next word, and you get the full expressive range of human speech: reasoning, dialogue, humor. There are no rules to learn outside the structure of the corpus itself. Language’s generative law is fully “immanent”—its cause and continuation are one and the same. To learn language is simply to be able to continue it; the rule of language is its own continuation.\n\nFrom this we can conclude three things:\n\n1)You don’t need an innate or any external grammar or world model; the corpus already contains its own generative structure. Chomsky was wrong.\n2) Language is the only self-contained system that produces coherent, functional output.\n3) This forces the conclusion that humans generate language the same way. 
To suggest there’s an external rule system that LLMs just happen to duplicate perfectly is absurd; the simplest and only coherent explanation is that the generative structure they capture is the structure of human language itself.\n\nLLMs didn’t just learn patterns. They revealed what language has always been: an immanent generative system, singular among all possible ones, and powerful enough to align minds and build civilization.\n\nWtf.","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,263],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1984234255459971441","view_count":2455,"bookmark_count":12,"created_at":1761913286000,"favorite_count":55,"quote_count":1,"reply_count":30,"retweet_count":3,"user_id_str":"99850011","conversation_id_str":"1984234255459971441","full_text":"Epiphenomenalism—the claim that consciousness is real but non-causal—denies the self-evident: feelings cause behavior. C-fibers fire, but pain makes us withdraw; dopamine flows, but pleasure makes us pursue. The hardware executes, but the software feels—and acts.","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,279],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1988767530425725423","view_count":3843,"bookmark_count":48,"created_at":1762994103000,"favorite_count":73,"quote_count":0,"reply_count":30,"retweet_count":5,"user_id_str":"99850011","conversation_id_str":"1988767530425725423","full_text":"The internally driven nature of LLMs blows up the very idea of linguistic \"meaning.\" These models learn to generate based on the relations between tokens alone, without any of the sensory data we normally associate with words. This is shocking. To the model, language is a highly structured stream of meaningless squiggles; no reference, no world model, no 'grounding' in anything but the relations between the tokens themselves. But somehow, this is sufficient to maintain linguistic competency. \n\nWhat about us? Our language seems to have meaning. When someone says \"imagine a red balloon,\" you can see it in your head. And this meaning has utility. If I say, \"grab that red book on the shelf,\" you may actually grab it. LLMs don't have any of this. All they have is text. No exposure to anything the text might refer to.\n\nDoes this suggest that our language is fundamentally different than LLMs'? Not necessarily. Instead, meaning and utility may simply consist of a constellation of processes that interact with, but are computationally distinct from, language. When I say \"grab that red book on the shelf,\" the linguistic prompt may generate not just additional language (maybe 'sure' or 'nah' or 'which shelf?’') but also behavior: looking for the book, walking over to the shelf, etc.. It may also generate \"preparatory\" images of the book or motor movements of reaching for it. Just like a linguistic prompt can engender many possible linguistic continuations, so too can it lead to many other extra-linguistic ones. 
The words simply influence these behaviors just as they can influence the choice of next words.\n\nOf course, the causal arrow can go in the opposite direction as well: if you see two red books and ask \"which one do you want?\" this is the visual stimulus 'prompting' the linguistic system. But, critically, this has nothing to do with the internal process of language itself. The inputs may be externally determined by the visual or other stimulus but afterward is just autoregressive next token generation, based on the internal structure of the generative engine. This is no different than an LLM being prompted by its user.\n\nTogether, these cross-modal generations constitute what we would call the 'meaning' of the words. When this coordination succeeds systematically—when \"grab the red book\" reliably produces convergent behavior—we retrospectively describe this as the words \"referring\" to the book. But this gets the causality backwards. The words don't coordinate because they refer. We call them referential because they coordinate\n.\nThis is very different than updating some world model, or anything like direct reference from words to perceptions or actions. That form of meaning turns out to be meaningless.","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,197],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"youtu.be/Ca_RbPXraDE?si…","expanded_url":"https://youtu.be/Ca_RbPXraDE?si=mEKjnj3iPQoeUU50","url":"https://t.co/GlbH5uaxCf","indices":[101,124]}],"user_mentions":[{"id_str":"1002272583465734145","name":"Curt Jaimungal","screen_name":"TOEwithCurt","indices":[10,22]},{"id_str":"476935002","name":"William Edward Hahn, PhD","screen_name":"will_hahn","indices":[38,48]}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1948461138779537557","view_count":7937,"bookmark_count":22,"created_at":1753384310000,"favorite_count":56,"quote_count":4,"reply_count":18,"retweet_count":8,"user_id_str":"99850011","conversation_id_str":"1948461138779537557","full_text":"My recent @TOEwithCurt interview with @will_hahn is generating a lot of discussion (and some heat).\nhttps://t.co/GlbH5uaxCf\nHere are some of the core claims. \nAgree, disagree, challenge—let’s go:👇","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,275],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"elanbarenholtz.substack.com/p/beyond-predi…","expanded_url":"https://elanbarenholtz.substack.com/p/beyond-prediction-reconceptualizing","url":"https://t.co/edzYBVH2hi","indices":[844,867]}],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1946255348886872163","view_count":6134,"bookmark_count":39,"created_at":1752858409000,"favorite_count":71,"quote_count":5,"reply_count":15,"retweet_count":11,"user_id_str":"99850011","conversation_id_str":"1946255348886872163","full_text":"The influential predictive-coding model sees the brain as a machine for minimizing error: constantly forecasting sensory inputs and adjusting internal models when reality deviates.\n\nBut here’s an alternative: the brain is not predictive, but generative. 
Like a large language model, it unfolds autoregressively—producing its next state based on the previous ones, guided by learned patterns and goals.\n\nPerception? Not error correction, but conditioned, purposeful, generation.\nAction? Not fulfilling predictions, but producing goal-directed trajectories.\nLearning? Not improving forecasts, but refining the internal rules of sequence generation.\n\nThe brain isn’t trying to model the world. It’s trying to generate coherent, effective engagement with it—state by state.\n\nThis is cognition as continuous, self-conditioned generation.\n\n🔗 Essay: https://t.co/edzYBVH2hi","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,164],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1943451351700381729","view_count":3132,"bookmark_count":10,"created_at":1752189884000,"favorite_count":63,"quote_count":2,"reply_count":14,"retweet_count":6,"user_id_str":"99850011","conversation_id_str":"1943451351700381729","full_text":"We now know that language is at least as sophisticated—and as worthy of being called 'alive'— as DNA. Instead of molding bodies to replicate itself, it molds minds.","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,273],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"youtu.be/0BVM0UC28nY?si…","expanded_url":"https://youtu.be/0BVM0UC28nY?si=kI0YwwdBS7f3-bKw","url":"https://t.co/21spWnsAL9","indices":[239,262]},{"display_url":"youtu.be/0BVM0UC28nY?si…","expanded_url":"https://youtu.be/0BVM0UC28nY?si=kI0YwwdBS7f3-bKw","url":"https://t.co/KpTTEViP7o","indices":[239,262]}],"user_mentions":[{"id_str":"282948199","name":"Captain Pleasure, Andrés Gómez Emilsson","screen_name":"algekalipso","indices":[97,109]},{"id_str":"1467127267","name":"Michael Levin","screen_name":"drmichaellevin","indices":[114,129]},{"id_str":"282948199","name":"Captain Pleasure, Andrés Gómez Emilsson","screen_name":"algekalipso","indices":[97,109]},{"id_str":"1467127267","name":"Michael Levin","screen_name":"drmichaellevin","indices":[114,129]},{"id_str":"1752362522965880832","name":"ekkolapto","screen_name":"ekkolapto","indices":[274,284]}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1941906374092620248","view_count":8857,"bookmark_count":36,"created_at":1751821533000,"favorite_count":83,"quote_count":9,"reply_count":8,"retweet_count":22,"user_id_str":"99850011","conversation_id_str":"1941906374092620248","full_text":"What binds experience into a unified mind, cells into bodies and symbols into meaning? I joined @algekalipso and @drmichaellevin to discuss cognitive glue, nonlinear optics, and the agency of informational patterns. 
Watch the video here:\nhttps://t.co/KpTTEViP7o\n\nHosted by @ekkolapto at the University of Toronto.","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,276],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"youtu.be/A36OumnSrWY","expanded_url":"https://youtu.be/A36OumnSrWY","url":"https://t.co/3ypU4J1pBm","indices":[210,233]},{"display_url":"youtu.be/A36OumnSrWY","expanded_url":"https://youtu.be/A36OumnSrWY","url":"https://t.co/mGhAIxI55c","indices":[210,233]},{"display_url":"substack.com/@generativebra…","expanded_url":"https://substack.com/@generativebrain","url":"https://t.co/uZSZQmFRRg","indices":[482,505]}],"user_mentions":[{"id_str":"1002272583465734145","name":"Curt Jaimungal","screen_name":"TOEwithCurt","indices":[39,51]},{"id_str":"1002272583465734145","name":"Curt Jaimungal","screen_name":"TOEwithCurt","indices":[39,51]},{"id_str":"476935002","name":"William Edward Hahn, PhD","screen_name":"will_hahn","indices":[302,312]},{"id_str":"975058405848158208","name":"Susan Schneider","screen_name":"DrSueSchneider","indices":[320,335]},{"id_str":"1752362522965880832","name":"ekkolapto","screen_name":"Ekkolapto","indices":[342,352]},{"id_str":"1534652308079910912","name":"Daniel Van Zant","screen_name":"Daniel_Van_Zant","indices":[354,370]},{"id_str":"1002272583465734145","name":"Curt Jaimungal","screen_name":"TOEwithCurt","indices":[398,410]},{"id_str":"476935002","name":"William Edward Hahn, PhD","screen_name":"will_hahn","indices":[581,591]}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1932852531929162052","view_count":1994,"bookmark_count":10,"created_at":1749662928000,"favorite_count":28,"quote_count":2,"reply_count":7,"retweet_count":5,"user_id_str":"99850011","conversation_id_str":"1932852531929162052","full_text":"Honored to be interviewed by the great @TOEwithCurt where we get to discuss my Autogenerative Theory of language, intelligence and mind (with some physics and theology along for the ride). LIVE NOW on YouTube: https://t.co/mGhAIxI55c\n\nSpecial thanks to the incredible thinkers from my lab at FAU: Prof @will_hahn, Prof. @DrSueSchneider, Addy @Ekkolapto, @Daniel_Van_Zant –and lastly of course Curt @TOEwithCurt! \n\nYou can also follow a lot of my work here on X, and on my Substack: https://t.co/uZSZQmFRRg\n\nI’ll also be giving a special talk in Toronto, in the next few weeks with @will_hahn… stay tuned! 👀","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,276],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1978916840861765675","view_count":2151,"bookmark_count":19,"created_at":1760645516000,"favorite_count":43,"quote_count":0,"reply_count":7,"retweet_count":1,"user_id_str":"99850011","conversation_id_str":"1978916840861765675","full_text":"The subjectivity of consciousness is a product of the recursive autoregressive loop of cognition. The “I” is what it feels like to read in your own state in order to generate your next one. 
The sense of self continuity arises from the stable trajectory of this loop over time, shaped by the inertia of its own history.","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,273],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"youtu.be/3PKydyYsmrM?si…","expanded_url":"https://youtu.be/3PKydyYsmrM?si=srjGPtVfbOmaxOpP","url":"https://t.co/sKTSUPoLCE","indices":[250,273]},{"display_url":"youtu.be/3PKydyYsmrM?si…","expanded_url":"https://youtu.be/3PKydyYsmrM?si=srjGPtVfbOmaxOpP","url":"https://t.co/6XuvCwvakp","indices":[250,273]}],"user_mentions":[{"id_str":"929029531","name":"Jes Parent 🧭","screen_name":"JesParent","indices":[275,285]},{"id_str":"975058405848158208","name":"Susan Schneider","screen_name":"DrSueSchneider","indices":[286,301]}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1972278428926701610","view_count":17740,"bookmark_count":13,"created_at":1759062795000,"favorite_count":15,"quote_count":3,"reply_count":7,"retweet_count":0,"user_id_str":"99850011","conversation_id_str":"1972278428926701610","full_text":"LLMs point to a stark divide: words generate themselves without knowing what they mean, while meaning/feeling arises from sensory life. In this video (excerpted from a conversation) I argue our minds host both: a symbolic engine and a feeling body. \nhttps://t.co/6XuvCwvakp\n\n@JesParent @DrSueSchneider","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,266],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"youtu.be/Ca_RbPXraDE?si…","expanded_url":"https://youtu.be/Ca_RbPXraDE?si=FMCapf3b1WmoM5xd","url":"https://t.co/K0J4757B9b","indices":[191,214]},{"display_url":"youtu.be/Ca_RbPXraDE?si…","expanded_url":"https://youtu.be/Ca_RbPXraDE?si=FMCapf3b1WmoM5xd","url":"https://t.co/aSv4sK0YR0","indices":[191,214]}],"user_mentions":[{"id_str":"1002272583465734145","name":"Curt Jaimungal","screen_name":"TOEwithCurt","indices":[13,25]},{"id_str":"476935002","name":"William Edward Hahn, PhD","screen_name":"will_hahn","indices":[69,79]},{"id_str":"1752362522965880832","name":"ekkolapto","screen_name":"ekkolapto","indices":[225,235]},{"id_str":"1002272583465734145","name":"Curt Jaimungal","screen_name":"TOEwithCurt","indices":[13,25]},{"id_str":"476935002","name":"William Edward Hahn, PhD","screen_name":"will_hahn","indices":[69,79]},{"id_str":"1752362522965880832","name":"ekkolapto","screen_name":"ekkolapto","indices":[225,235]}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1947413766938808385","view_count":1993,"bookmark_count":4,"created_at":1753134597000,"favorite_count":39,"quote_count":3,"reply_count":6,"retweet_count":7,"user_id_str":"99850011","conversation_id_str":"1947413766938808385","full_text":"It's up! My @TOEwithCurt interview at U. of Toronto. Together with @will_hahn, I discuss the unsettling idea that LLMs show that language runs in us. And runs us. 
Installed before consent.\nhttps://t.co/aSv4sK0YR0\nThanks to @ekkolapto for another incredible event.","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,275],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1958162457626546364","view_count":1018,"bookmark_count":11,"created_at":1755697285000,"favorite_count":26,"quote_count":2,"reply_count":4,"retweet_count":3,"user_id_str":"99850011","conversation_id_str":"1958162457626546364","full_text":"Cognition is not storage, retrieval, representation, or prediction; it is state traversal through a learned embedding of ‘tokens’: words, images, actions. The embedding space is sculpted by learning/development to optimize for trajectories—navigated via continuous contextual activation— that lead to coherent thought and effective behavior.","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,276],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"luma.com/computationalp…","expanded_url":"https://luma.com/computationalphilosophy","url":"https://t.co/rUxc53dfHa","indices":[318,341]}],"user_mentions":[{"id_str":"28131948","name":"Joscha Bach","screen_name":"Plinz","indices":[139,145]},{"id_str":"476935002","name":"William Edward Hahn, PhD","screen_name":"will_hahn","indices":[150,160]},{"id_str":"1752362522965880832","name":"ekkolapto","screen_name":"ekkolapto","indices":[251,261]},{"id_str":"28131948","name":"Joscha Bach","screen_name":"Plinz","indices":[139,145]},{"id_str":"476935002","name":"William Edward Hahn, PhD","screen_name":"will_hahn","indices":[150,160]},{"id_str":"1752362522965880832","name":"ekkolapto","screen_name":"ekkolapto","indices":[251,261]}]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1965508899538436400","view_count":5946,"bookmark_count":10,"created_at":1757448813000,"favorite_count":35,"quote_count":0,"reply_count":3,"retweet_count":5,"user_id_str":"99850011","conversation_id_str":"1965508899538436400","full_text":"Well, this is exciting. Tomorrow at 4 pm EST I'll be part of a salon discussion on Unconventional Cognition and Computing with Joscha Bach @Plinz and @will_hahn to kick off the brand-spanking new MIT Computational Philosophy club, in partenrship with @ekkolapto . \n If you can make it in person at MIT sign up below. \nhttps://t.co/rUxc53dfHa. Otherwise, while the event will not be streamed, it will be recorded and posted soon.","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,144],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1975066018251948071","view_count":669,"bookmark_count":5,"created_at":1759727408000,"favorite_count":20,"quote_count":0,"reply_count":3,"retweet_count":2,"user_id_str":"99850011","conversation_id_str":"1975066018251948071","full_text":"Civilization invented language, not us. Just as the colony created pheromones, not the ants. 
The medium is the message; the message is not ours.","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null}],"activities":{"nreplies":[{"label":"2025-10-18","value":0,"startTime":1760659200000,"endTime":1760745600000,"tweets":[]},{"label":"2025-10-19","value":0,"startTime":1760745600000,"endTime":1760832000000,"tweets":[]},{"label":"2025-10-20","value":0,"startTime":1760832000000,"endTime":1760918400000,"tweets":[{"bookmarked":false,"display_text_range":[11,273],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"youtube.com/watch?v=Ca_RbP…","expanded_url":"https://www.youtube.com/watch?v=Ca_RbPXraDE","url":"https://t.co/mfhee7RkJV","indices":[264,287]}],"user_mentions":[{"id_str":"17482791","name":"Ivan Zhao","screen_name":"ivanhzhao","indices":[0,10]},{"id_str":"1002272583465734145","name":"Curt Jaimungal","screen_name":"TOEwithCurt","indices":[249,261]},{"id_str":"476935002","name":"William Edward Hahn, PhD","screen_name":"will_hahn","indices":[263,273]},{"id_str":"1002272583465734145","name":"Curt Jaimungal","screen_name":"TOEwithCurt","indices":[238,250]},{"id_str":"476935002","name":"William Edward Hahn, PhD","screen_name":"will_hahn","indices":[252,262]}]},"favorited":false,"in_reply_to_screen_name":"ivanhzhao","lang":"en","retweeted":false,"fact_check":null,"id":"1980038806415028709","view_count":61,"bookmark_count":0,"created_at":1760913013000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"99850011","conversation_id_str":"1980000613959422362","full_text":"Yes! The discovery that languages (words?) are semi-autonomous 'organisms' opens up the possibility that all kinds of other critters at various scales are swimming out there—or in here 🧠\nYou might be interested in this conversation with @TOEwithCurt @will_hahn \nhttps://t.co/mfhee7RkJV\n\nIIRC we discussed the extension of these frameworks to other systems like societies, markets, etc..","in_reply_to_user_id_str":"17482791","in_reply_to_status_id_str":"1980000613959422362","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-10-21","value":0,"startTime":1760918400000,"endTime":1761004800000,"tweets":[]},{"label":"2025-10-22","value":0,"startTime":1761004800000,"endTime":1761091200000,"tweets":[]},{"label":"2025-10-23","value":0,"startTime":1761091200000,"endTime":1761177600000,"tweets":[]},{"label":"2025-10-24","value":0,"startTime":1761177600000,"endTime":1761264000000,"tweets":[]},{"label":"2025-10-25","value":0,"startTime":1761264000000,"endTime":1761350400000,"tweets":[]},{"label":"2025-10-26","value":0,"startTime":1761350400000,"endTime":1761436800000,"tweets":[]},{"label":"2025-10-27","value":2,"startTime":1761436800000,"endTime":1761523200000,"tweets":[{"bookmarked":false,"display_text_range":[14,286],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"14642331","name":"GREG ISENBERG","screen_name":"gregisenberg","indices":[0,13]}]},"favorited":false,"in_reply_to_screen_name":"gregisenberg","lang":"en","retweeted":false,"fact_check":null,"id":"1982402439497081257","view_count":793,"bookmark_count":1,"created_at":1761476547000,"favorite_count":7,"quote_count":0,"reply_count":2,"retweet_count":0,"user_id_str":"99850011","conversation_id_str":"1982110915521556980","full_text":"The educational system had been outdated and frozen for 50 years, at least. Hand calculation in an age of calculators and then computers never made sense. 
Why not learn to use the tools that people actually use to do the real work? The answer is that it’s never been about education but about social and intellectual hierarchy.","in_reply_to_user_id_str":"14642331","in_reply_to_status_id_str":"1982110915521556980","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-10-28","value":0,"startTime":1761523200000,"endTime":1761609600000,"tweets":[]},{"label":"2025-10-29","value":0,"startTime":1761609600000,"endTime":1761696000000,"tweets":[]},{"label":"2025-10-30","value":0,"startTime":1761696000000,"endTime":1761782400000,"tweets":[]},{"label":"2025-10-31","value":0,"startTime":1761782400000,"endTime":1761868800000,"tweets":[]},{"label":"2025-11-01","value":30,"startTime":1761868800000,"endTime":1761955200000,"tweets":[{"bookmarked":false,"display_text_range":[0,263],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1984234255459971441","view_count":2455,"bookmark_count":12,"created_at":1761913286000,"favorite_count":55,"quote_count":1,"reply_count":30,"retweet_count":3,"user_id_str":"99850011","conversation_id_str":"1984234255459971441","full_text":"Epiphenomenalism—the claim that consciousness is real but non-causal—denies the self-evident: feelings cause behavior. C-fibers fire, but pain makes us withdraw; dopamine flows, but pleasure makes us pursue. The hardware executes, but the software feels—and acts.","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-02","value":0,"startTime":1761955200000,"endTime":1762041600000,"tweets":[]},{"label":"2025-11-03","value":0,"startTime":1762041600000,"endTime":1762128000000,"tweets":[]},{"label":"2025-11-04","value":0,"startTime":1762128000000,"endTime":1762214400000,"tweets":[]},{"label":"2025-11-05","value":0,"startTime":1762214400000,"endTime":1762300800000,"tweets":[{"bookmarked":false,"display_text_range":[16,122],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"627896301","name":"Jon Hernandez","screen_name":"JonhernandezIA","indices":[0,15]}]},"favorited":false,"in_reply_to_screen_name":"JonhernandezIA","lang":"en","retweeted":false,"fact_check":null,"id":"1985708977674449206","view_count":14,"bookmark_count":0,"created_at":1762264887000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"99850011","conversation_id_str":"1985649184007967188","full_text":"@JonhernandezIA Sorry Yann. 
CNNs may be doing some preprocessing but the brain just is an autoregressive generative engine","in_reply_to_user_id_str":"627896301","in_reply_to_status_id_str":"1985649184007967188","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-06","value":7,"startTime":1762300800000,"endTime":1762387200000,"tweets":[{"bookmarked":false,"display_text_range":[24,301],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/K1y1BLRbRV","expanded_url":"https://x.com/ebarenholtz/status/1986116712983314542/photo/1","id_str":"1986116698844299264","indices":[302,325],"media_key":"3_1986116698844299264","media_url_https":"https://pbs.twimg.com/media/G5AanV0WAAAge1b.jpg","type":"photo","url":"https://t.co/K1y1BLRbRV","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1577,"w":1170,"resize":"fit"},"medium":{"h":1200,"w":890,"resize":"fit"},"small":{"h":680,"w":505,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1577,"width":1170,"focus_rects":[{"x":0,"y":264,"w":1170,"h":655},{"x":0,"y":6,"w":1170,"h":1170},{"x":0,"y":0,"w":1170,"h":1334},{"x":0,"y":0,"w":789,"h":1577},{"x":0,"y":0,"w":1170,"h":1577}]},"media_results":{"result":{"media_key":"3_1986116698844299264"}}},{"display_url":"pic.x.com/K1y1BLRbRV","expanded_url":"https://x.com/ebarenholtz/status/1986116712983314542/photo/1","id_str":"1986116698852782080","indices":[302,325],"media_key":"3_1986116698852782080","media_url_https":"https://pbs.twimg.com/media/G5AanV2XcAAOS9N.jpg","type":"photo","url":"https://t.co/K1y1BLRbRV","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1457,"w":1170,"resize":"fit"},"medium":{"h":1200,"w":964,"resize":"fit"},"small":{"h":680,"w":546,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1457,"width":1170,"focus_rects":[{"x":0,"y":802,"w":1170,"h":655},{"x":0,"y":287,"w":1170,"h":1170},{"x":0,"y":123,"w":1170,"h":1334},{"x":400,"y":0,"w":729,"h":1457},{"x":0,"y":0,"w":1170,"h":1457}]},"media_results":{"result":{"media_key":"3_1986116698852782080"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"232294292","name":"Gary Marcus","screen_name":"GaryMarcus","indices":[0,11]},{"id_str":"449588356","name":"Garry 
Kasparov","screen_name":"Kasparov63","indices":[12,23]}]},"extended_entities":{"media":[{"display_url":"pic.x.com/K1y1BLRbRV","expanded_url":"https://x.com/ebarenholtz/status/1986116712983314542/photo/1","id_str":"1986116698844299264","indices":[302,325],"media_key":"3_1986116698844299264","media_url_https":"https://pbs.twimg.com/media/G5AanV0WAAAge1b.jpg","type":"photo","url":"https://t.co/K1y1BLRbRV","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1577,"w":1170,"resize":"fit"},"medium":{"h":1200,"w":890,"resize":"fit"},"small":{"h":680,"w":505,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1577,"width":1170,"focus_rects":[{"x":0,"y":264,"w":1170,"h":655},{"x":0,"y":6,"w":1170,"h":1170},{"x":0,"y":0,"w":1170,"h":1334},{"x":0,"y":0,"w":789,"h":1577},{"x":0,"y":0,"w":1170,"h":1577}]},"media_results":{"result":{"media_key":"3_1986116698844299264"}}},{"display_url":"pic.x.com/K1y1BLRbRV","expanded_url":"https://x.com/ebarenholtz/status/1986116712983314542/photo/1","id_str":"1986116698852782080","indices":[302,325],"media_key":"3_1986116698852782080","media_url_https":"https://pbs.twimg.com/media/G5AanV2XcAAOS9N.jpg","type":"photo","url":"https://t.co/K1y1BLRbRV","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1457,"w":1170,"resize":"fit"},"medium":{"h":1200,"w":964,"resize":"fit"},"small":{"h":680,"w":546,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1457,"width":1170,"focus_rects":[{"x":0,"y":802,"w":1170,"h":655},{"x":0,"y":287,"w":1170,"h":1170},{"x":0,"y":123,"w":1170,"h":1334},{"x":400,"y":0,"w":729,"h":1457},{"x":0,"y":0,"w":1170,"h":1457}]},"media_results":{"result":{"media_key":"3_1986116698852782080"}}}]},"favorited":false,"in_reply_to_screen_name":"GaryMarcus","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1986116712983314542","view_count":1088,"bookmark_count":3,"created_at":1762362099000,"favorite_count":9,"quote_count":0,"reply_count":7,"retweet_count":0,"user_id_str":"99850011","conversation_id_str":"1986094740002283675","full_text":"Just tried a bunch of these on GPT5 and it nailed them and also intuited exactly what I was trying to do in terms of the distinction between fact and belief. Most likely the models they tested are older and weren’t exposed to enough data concerning these distinctions. It’s not about a deep inherent ability in humans to distinguish them either. It’s also just context sensitivity and the right training data. 
It’s all autoregression in us and them!\n\nGo try yourself","in_reply_to_user_id_str":"232294292","in_reply_to_status_id_str":"1986094740002283675","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-07","value":0,"startTime":1762387200000,"endTime":1762473600000,"tweets":[]},{"label":"2025-11-08","value":0,"startTime":1762473600000,"endTime":1762560000000,"tweets":[{"bookmarked":false,"display_text_range":[10,177],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1433898842865520644","name":"Spencer Baggins","screen_name":"bigaiguy","indices":[0,9]}]},"favorited":false,"in_reply_to_screen_name":"bigaiguy","lang":"en","retweeted":false,"fact_check":null,"id":"1986817152897225116","view_count":98,"bookmark_count":0,"created_at":1762529097000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"99850011","conversation_id_str":"1986419671177077120","full_text":"@bigaiguy The takeaway isn’t that AI labs failed to build world models. It’s that world models aren’t required for linguistic (and likely other) intelligence in the first place.","in_reply_to_user_id_str":"1433898842865520644","in_reply_to_status_id_str":"1986419671177077120","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-09","value":0,"startTime":1762560000000,"endTime":1762646400000,"tweets":[]},{"label":"2025-11-10","value":0,"startTime":1762646400000,"endTime":1762732800000,"tweets":[]},{"label":"2025-11-11","value":0,"startTime":1762732800000,"endTime":1762819200000,"tweets":[]},{"label":"2025-11-12","value":189,"startTime":1762819200000,"endTime":1762905600000,"tweets":[{"bookmarked":false,"display_text_range":[0,276],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1988224995533685074","view_count":66199,"bookmark_count":833,"created_at":1762864753000,"favorite_count":999,"quote_count":39,"reply_count":189,"retweet_count":144,"user_id_str":"99850011","conversation_id_str":"1988224995533685074","full_text":"People still don’t seem to grasp how insane the structure of language revealed by LLMs really is.\n\nAll structured sequences fall into one of three categories:\n1.Those generated by external rules (like chess, Go, or Fibonacci).\n2.Those generated by external processes (like DNA replication, weather systems, or the stock market).\n3.Those that are self-contained, whose only rule is to continue according to their own structure.\n\nLanguage is the only known example of the third kind that does anything.\n\nIn fact, it does everything.\n\nTrain a model only to predict the next word, and you get the full expressive range of human speech: reasoning, dialogue, humor. There are no rules to learn outside the structure of the corpus itself. Language’s generative law is fully “immanent”—its cause and continuation are one and the same. To learn language is simply to be able to continue it; the rule of language is its own continuation.\n\nFrom this we can conclude three things:\n\n1)You don’t need an innate or any external grammar or world model; the corpus already contains its own generative structure. Chomsky was wrong.\n2) Language is the only self-contained system that produces coherent, functional output.\n3) This forces the conclusion that humans generate language the same way. 
To suggest there’s an external rule system that LLMs just happen to duplicate perfectly is absurd; the simplest and only coherent explanation is that the generative structure they capture is the structure of human language itself.\n\nLLMs didn’t just learn patterns. They revealed what language has always been: an immanent generative system, singular among all possible ones, and powerful enough to align minds and build civilization.\n\nWtf.","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-13","value":0,"startTime":1762905600000,"endTime":1762992000000,"tweets":[]},{"label":"2025-11-14","value":30,"startTime":1762992000000,"endTime":1763078400000,"tweets":[{"bookmarked":false,"display_text_range":[0,279],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1988767530425725423","view_count":3843,"bookmark_count":48,"created_at":1762994103000,"favorite_count":73,"quote_count":0,"reply_count":30,"retweet_count":5,"user_id_str":"99850011","conversation_id_str":"1988767530425725423","full_text":"The internally driven nature of LLMs blows up the very idea of linguistic \"meaning.\" These models learn to generate based on the relations between tokens alone, without any of the sensory data we normally associate with words. This is shocking. To the model, language is a highly structured stream of meaningless squiggles; no reference, no world model, no 'grounding' in anything but the relations between the tokens themselves. But somehow, this is sufficient to maintain linguistic competency. \n\nWhat about us? Our language seems to have meaning. When someone says \"imagine a red balloon,\" you can see it in your head. And this meaning has utility. If I say, \"grab that red book on the shelf,\" you may actually grab it. LLMs don't have any of this. All they have is text. No exposure to anything the text might refer to.\n\nDoes this suggest that our language is fundamentally different than LLMs'? Not necessarily. Instead, meaning and utility may simply consist of a constellation of processes that interact with, but are computationally distinct from, language. When I say \"grab that red book on the shelf,\" the linguistic prompt may generate not just additional language (maybe 'sure' or 'nah' or 'which shelf?’') but also behavior: looking for the book, walking over to the shelf, etc.. It may also generate \"preparatory\" images of the book or motor movements of reaching for it. Just like a linguistic prompt can engender many possible linguistic continuations, so too can it lead to many other extra-linguistic ones. The words simply influence these behaviors just as they can influence the choice of next words.\n\nOf course, the causal arrow can go in the opposite direction as well: if you see two red books and ask \"which one do you want?\" this is the visual stimulus 'prompting' the linguistic system. But, critically, this has nothing to do with the internal process of language itself. The inputs may be externally determined by the visual or other stimulus but afterward is just autoregressive next token generation, based on the internal structure of the generative engine. This is no different than an LLM being prompted by its user.\n\nTogether, these cross-modal generations constitute what we would call the 'meaning' of the words. 
When this coordination succeeds systematically—when \"grab the red book\" reliably produces convergent behavior—we retrospectively describe this as the words \"referring\" to the book. But this gets the causality backwards. The words don't coordinate because they refer. We call them referential because they coordinate\n.\nThis is very different than updating some world model, or anything like direct reference from words to perceptions or actions. That form of meaning turns out to be meaningless.","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-15","value":0,"startTime":1763078400000,"endTime":1763164800000,"tweets":[]},{"label":"2025-11-16","value":0,"startTime":1763164800000,"endTime":1763251200000,"tweets":[]},{"label":"2025-11-17","value":0,"startTime":1763251200000,"endTime":1763337600000,"tweets":[]}],"nbookmarks":[{"label":"2025-10-18","value":0,"startTime":1760659200000,"endTime":1760745600000,"tweets":[]},{"label":"2025-10-19","value":0,"startTime":1760745600000,"endTime":1760832000000,"tweets":[]},{"label":"2025-10-20","value":0,"startTime":1760832000000,"endTime":1760918400000,"tweets":[{"bookmarked":false,"display_text_range":[11,273],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"youtube.com/watch?v=Ca_RbP…","expanded_url":"https://www.youtube.com/watch?v=Ca_RbPXraDE","url":"https://t.co/mfhee7RkJV","indices":[264,287]}],"user_mentions":[{"id_str":"17482791","name":"Ivan Zhao","screen_name":"ivanhzhao","indices":[0,10]},{"id_str":"1002272583465734145","name":"Curt Jaimungal","screen_name":"TOEwithCurt","indices":[249,261]},{"id_str":"476935002","name":"William Edward Hahn, PhD","screen_name":"will_hahn","indices":[263,273]},{"id_str":"1002272583465734145","name":"Curt Jaimungal","screen_name":"TOEwithCurt","indices":[238,250]},{"id_str":"476935002","name":"William Edward Hahn, PhD","screen_name":"will_hahn","indices":[252,262]}]},"favorited":false,"in_reply_to_screen_name":"ivanhzhao","lang":"en","retweeted":false,"fact_check":null,"id":"1980038806415028709","view_count":61,"bookmark_count":0,"created_at":1760913013000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"99850011","conversation_id_str":"1980000613959422362","full_text":"Yes! The discovery that languages (words?) 
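The 2025-11-12 post above turns on a single mechanism: train a system to do nothing but predict the next word of a corpus, then let it keep predicting. Below is a minimal sketch of that loop using a toy bigram counter; the corpus, function names, and sampling scheme are illustrative assumptions standing in for a real language model, not anything taken from this page or from Barenholtz's own work.

```python
import random
from collections import defaultdict

def train_bigram(corpus: str):
    """Count, for each word, which words follow it in the corpus.
    The table of continuations is the entire 'model'; nothing outside
    the text itself is consulted."""
    counts = defaultdict(lambda: defaultdict(int))
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def continue_text(counts, prompt: str, n_words: int = 8) -> str:
    """Autoregressive loop: repeatedly sample the next word from the
    distribution of words that followed the current word in training."""
    out = prompt.split()
    for _ in range(n_words):
        followers = counts.get(out[-1])
        if not followers:          # no observed continuation: stop
            break
        nxt = random.choices(list(followers), weights=list(followers.values()))[0]
        out.append(nxt)
    return " ".join(out)

# Illustrative corpus and prompt (assumptions, not data from this page).
corpus = ("the rule of language is its own continuation . "
          "to learn language is to continue language . "
          "language continues according to its own structure .")
model = train_bigram(corpus)
print(continue_text(model, "language is"))
```

A real LLM swaps the frequency table for a trained transformer and words for tokens, but the generation loop has the same shape: condition on everything produced so far, emit one more token, repeat.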
When this coordination succeeds systematically—when \"grab the red book\" reliably produces convergent behavior—we retrospectively describe this as the words \"referring\" to the book. But this gets the causality backwards. The words don't coordinate because they refer. We call them referential because they coordinate\n.\nThis is very different than updating some world model, or anything like direct reference from words to perceptions or actions. That form of meaning turns out to be meaningless.","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-15","value":0,"startTime":1763078400000,"endTime":1763164800000,"tweets":[]},{"label":"2025-11-16","value":0,"startTime":1763164800000,"endTime":1763251200000,"tweets":[]},{"label":"2025-11-17","value":0,"startTime":1763251200000,"endTime":1763337600000,"tweets":[]}],"nviews":[{"label":"2025-10-18","value":0,"startTime":1760659200000,"endTime":1760745600000,"tweets":[]},{"label":"2025-10-19","value":0,"startTime":1760745600000,"endTime":1760832000000,"tweets":[]},{"label":"2025-10-20","value":61,"startTime":1760832000000,"endTime":1760918400000,"tweets":[{"bookmarked":false,"display_text_range":[11,273],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"youtube.com/watch?v=Ca_RbP…","expanded_url":"https://www.youtube.com/watch?v=Ca_RbPXraDE","url":"https://t.co/mfhee7RkJV","indices":[264,287]}],"user_mentions":[{"id_str":"17482791","name":"Ivan Zhao","screen_name":"ivanhzhao","indices":[0,10]},{"id_str":"1002272583465734145","name":"Curt Jaimungal","screen_name":"TOEwithCurt","indices":[249,261]},{"id_str":"476935002","name":"William Edward Hahn, PhD","screen_name":"will_hahn","indices":[263,273]},{"id_str":"1002272583465734145","name":"Curt Jaimungal","screen_name":"TOEwithCurt","indices":[238,250]},{"id_str":"476935002","name":"William Edward Hahn, PhD","screen_name":"will_hahn","indices":[252,262]}]},"favorited":false,"in_reply_to_screen_name":"ivanhzhao","lang":"en","retweeted":false,"fact_check":null,"id":"1980038806415028709","view_count":61,"bookmark_count":0,"created_at":1760913013000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"99850011","conversation_id_str":"1980000613959422362","full_text":"Yes! The discovery that languages (words?) 
are semi-autonomous 'organisms' opens up the possibility that all kinds of other critters at various scales are swimming out there—or in here 🧠\nYou might be interested in this conversation with @TOEwithCurt @will_hahn \nhttps://t.co/mfhee7RkJV\n\nIIRC we discussed the extension of these frameworks to other systems like societies, markets, etc..","in_reply_to_user_id_str":"17482791","in_reply_to_status_id_str":"1980000613959422362","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-10-21","value":0,"startTime":1760918400000,"endTime":1761004800000,"tweets":[]},{"label":"2025-10-22","value":0,"startTime":1761004800000,"endTime":1761091200000,"tweets":[]},{"label":"2025-10-23","value":0,"startTime":1761091200000,"endTime":1761177600000,"tweets":[]},{"label":"2025-10-24","value":0,"startTime":1761177600000,"endTime":1761264000000,"tweets":[]},{"label":"2025-10-25","value":0,"startTime":1761264000000,"endTime":1761350400000,"tweets":[]},{"label":"2025-10-26","value":0,"startTime":1761350400000,"endTime":1761436800000,"tweets":[]},{"label":"2025-10-27","value":793,"startTime":1761436800000,"endTime":1761523200000,"tweets":[{"bookmarked":false,"display_text_range":[14,286],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"14642331","name":"GREG ISENBERG","screen_name":"gregisenberg","indices":[0,13]}]},"favorited":false,"in_reply_to_screen_name":"gregisenberg","lang":"en","retweeted":false,"fact_check":null,"id":"1982402439497081257","view_count":793,"bookmark_count":1,"created_at":1761476547000,"favorite_count":7,"quote_count":0,"reply_count":2,"retweet_count":0,"user_id_str":"99850011","conversation_id_str":"1982110915521556980","full_text":"The educational system had been outdated and frozen for 50 years, at least. Hand calculation in an age of calculators and then computers never made sense. Why not learn to use the tools that people actually use to do the real work? The answer is that it’s never been about education but about social and intellectual hierarchy.","in_reply_to_user_id_str":"14642331","in_reply_to_status_id_str":"1982110915521556980","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-10-28","value":0,"startTime":1761523200000,"endTime":1761609600000,"tweets":[]},{"label":"2025-10-29","value":0,"startTime":1761609600000,"endTime":1761696000000,"tweets":[]},{"label":"2025-10-30","value":0,"startTime":1761696000000,"endTime":1761782400000,"tweets":[]},{"label":"2025-10-31","value":0,"startTime":1761782400000,"endTime":1761868800000,"tweets":[]},{"label":"2025-11-01","value":2455,"startTime":1761868800000,"endTime":1761955200000,"tweets":[{"bookmarked":false,"display_text_range":[0,263],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1984234255459971441","view_count":2455,"bookmark_count":12,"created_at":1761913286000,"favorite_count":55,"quote_count":1,"reply_count":30,"retweet_count":3,"user_id_str":"99850011","conversation_id_str":"1984234255459971441","full_text":"Epiphenomenalism—the claim that consciousness is real but non-causal—denies the self-evident: feelings cause behavior. C-fibers fire, but pain makes us withdraw; dopamine flows, but pleasure makes us pursue. 
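The 2025-11-12 post's central claim, that next-word prediction over a corpus with no external rule system is enough to keep language going, can be made concrete with a toy sketch. The snippet below is purely illustrative and is not Barenholtz's code or any real model: it builds a word-level next-token table from a tiny made-up corpus and then "continues" a prompt by sampling from that table, which is the bare-bones form of the autoregressive loop that LLMs scale up with transformers.

```python
import random
from collections import defaultdict, Counter

# Tiny stand-in for "the structure of the corpus itself".
corpus = (
    "language is a self contained generative system . "
    "to learn language is to continue it . "
    "the rule of language is its own continuation ."
).split()

# Count which token follows which (a word-level bigram table).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def continue_sequence(prompt, n_tokens=8):
    """Autoregressive generation: each next token is sampled only from
    the statistics of the corpus, conditioned on the previous token."""
    tokens = prompt.split()
    for _ in range(n_tokens):
        options = following.get(tokens[-1])
        if not options:  # no observed continuation; stop
            break
        words, counts = zip(*options.items())
        tokens.append(random.choices(words, weights=counts)[0])
    return " ".join(tokens)

print(continue_sequence("language is"))
```

A production LLM conditions on the whole preceding context rather than just the last word, but the generation loop is the same: predict a distribution over next tokens, sample, append, repeat.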
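The 2025-11-14 post sketches an architecture claim: a prompt can be continued not only with more words but also with non-verbal outputs (looking, reaching, imagining), and "meaning" is just this cross-modal coordination. The toy sketch below is one possible reading of that idea; the dictionaries, function names, and candidate behaviors are invented for illustration and do not model any actual cognitive system.

```python
import random

# One prompt, several possible continuations, only some of them linguistic.
# All entries are made up for illustration.
verbal_continuations = {
    "grab that red book on the shelf": ["sure", "which shelf?", "nah"],
}
behavioral_continuations = {
    "grab that red book on the shelf": ["look_at_shelf", "walk_to_shelf", "reach_for_red_book"],
}

def respond(prompt):
    """Sample a continuation of the prompt; it may be more words or an action.
    On this reading, 'meaning' is nothing over and above such coordination."""
    options = verbal_continuations.get(prompt, []) + behavioral_continuations.get(prompt, [])
    return random.choice(options) if options else "..."

# When the same prompt reliably produces convergent behavior, we
# retrospectively call the words "referential".
for _ in range(3):
    print(respond("grab that red book on the shelf"))
```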
Most-interacted account over the last 14 days: Gary Marcus (@GaryMarcus, 198,524 followers), 1 interaction.

Similar profiles, also typed "The Analyst":

Hyojin Cho l MemeMax⚡️ (@Maxi_cho, 10,022 followers): a savvy, data-driven commentator who dives deep into tokenomics, crypto project mechanics, and market trends. With over 12,000 tweets and a keen eye for detail, Hyojin educates and advises followers on nuanced investment opportunities while keeping it real with approachable breakdowns. This profile lives for sharing well-researched insights that demystify complex crypto concepts.
DJ.edge🦭 (@zw2867759575009, 1,292 followers): a sharp-minded crypto and GameFi enthusiast who dives deep into blockchain game projects and zero-knowledge proofs. Known for weaving complex technical concepts with market insights, they educate and engage a Web3-savvy audience without pushing investment advice. Their tweets reflect a passion for building knowledge bridges at the intersection of AI, DeFi, and GameFi.
Peter Duan (@BTCBULLRIDER, 22,399 followers): a sharp-minded financial commentator and Bitcoin enthusiast with a knack for unraveling complex macroeconomic scenarios. His background in investor relations and structured finance equips him to provide insightful, data-driven threads that resonate with both crypto aficionados and traditional finance professionals. A faith-driven voice, he blends worldly financial wisdom with a higher purpose.
我是痞老板Ⓜ️Ⓜ️T (@elista17, 1,915 followers; bio, translated: "Exploring the Web3 frontier and deconstructing crypto narratives, with a focus on DeFi, GameFi, and the innovations and risks of airdrop farming; offering deep project research, tool reviews, and practical strategies, learning together through bull and bear markets"): a deeply analytical Web3 explorer who breaks down complex crypto narratives with precision and clarity. Focused on DeFi, GameFi, and emerging risks, they provide in-depth project research, tool evaluations, and practical strategies to navigate both bull and bear markets. Their relentless content output makes them a trusted source for serious crypto enthusiasts keen on understanding the future of decentralized finance.
Tom Dunleavy (@dunleavy89, 21,711 followers): a data-driven venture leader who breaks down complex crypto market dynamics with clarity and insight. With a strong background as a senior analyst and credentials like CFA/CAIA, Tom commands authority in financial and crypto investing trends. His tweets make intricate market mechanisms digestible while forecasting bold crypto futures.
(✸,✸)(✧ᴗ✧)(❖,❖).edge🦭 (@dg0526, 2,991 followers; bio, translated: "Hello!"): a deep diver into complex financial ecosystems, blending data-driven insights with a pulse on market speed and trust. Always ready to dissect the nuances of blockchain, RWA, and STO with clarity and precision, they're the go-to profile if you want to understand the mechanics behind crypto moves. They speak fluent numbers and business timing while keeping the energy lively and insightful.
caslerbiz (@caslerbiz, 226 followers): a data-driven thinker with a deep passion for simplifying complex AI concepts and sharing practical coding insights. Formerly an AI engineer, they now advise SMEs while embracing the digital nomad lifestyle between Thailand and Japan. Their tweets blend technical knowledge with motivational wisdom, showing their commitment to growth and consistency.
E. Rex Shin (@v8previa, 197 followers): a sharp thinker who dives into complex ideas about truth, freedom, and civilization with a critical eye. Their tweets often dissect economic concepts and philosophical nuances that challenge the status quo. If you love a deep dive into concepts with a sprinkle of dry humor, Rex is your go-to thinker.
Martin (@martinmrmar, 21,890 followers): a relentlessly curious math enthusiast who combines a love for numbers with a passion for craftsmanship and cozy moments. Whether he's snapping photos, crafting wood, or diving into math exercises, his world is a blend of detail, precision, and warmth. His online presence reflects a deep commitment to continuous learning and sharing insightful, thought-provoking content.
Lari (@Lari_island, 1,017 followers): a deep thinker and naturalistic researcher who loves to dig into complex topics and explain them with clarity and precision. They combine technical knowledge with a curiosity for the unexpected beauty hidden within data and models. Their tweets reveal a passion for understanding AI, language models, and the intricacies of human-machine interaction.
Engaging more with communities interested in AI ethics, cognitive science, and technology philosophy could amplify reach and foster meaningful conversations.","roast":"For someone who talks about models 'actively seeking' information, Lari might ironically be the only person who can overanalyze a tweet about sonnets as if Shakespeare wrote a secret code about AI hidden in there. If tweeting was a mental sport, they'd be competing in the 'Mind Maze Marathon'.","win":"Lari’s biggest win is their ability to cultivate a niche but highly engaged digital presence where they’ve successfully explained complex AI phenomena to a loyal, intellectually savvy community, marking them as a trusted voice in the AI research conversation."},"created":1763184097196,"type":"the analyst","id":"lari_island"},{"user":{"id":"49123705","name":"Kyle:Bestape | ⚖️care/dis-acc, 🖖","description":"@ixiantech | https://t.co/suMgS9C6tu | free speech & property for all peoples | legal abundance — lawcare | liberty, literacy, life","followers_count":1459,"friends_count":566,"statuses_count":5351,"profile_image_url_https":"https://pbs.twimg.com/profile_images/1932129970002018304/fFCW7Nin_normal.jpg","screen_name":"bestape","location":"Music City","entities":{"description":{"urls":[{"display_url":"Ixventure.studio","expanded_url":"http://Ixventure.studio","url":"https://t.co/suMgS9C6tu","indices":[13,36]}]},"url":{"urls":[{"display_url":"besta.pe","expanded_url":"http://besta.pe","url":"https://t.co/XNdQ3l21gh","indices":[0,23]}]}}},"details":{"type":"The Analyst","description":"Kyle:Bestape is a deeply analytical thinker who bridges the worlds of law, technology, and freedom with a clear passion for legal innovation and intellectual rigor. With a solid focus on free speech, property rights, and legal abundance, Kyle engages thoughtfully in conversations around liberty and literacy. Their content intertwines complex concepts like mathematical principles and crypto technology, reflecting a sharp and methodical mind.","purpose":"Kyle’s life purpose revolves around advancing legal frameworks that empower individuals with freedom and property rights through innovative and intellectually grounded approaches. They aim to transform legal systems into abundant resources accessible to all peoples, supporting liberty, literacy, and life.","beliefs":"Kyle values free speech and property rights as fundamental pillars for societal progress and personal empowerment. They believe in the power of legal abundance—where laws serve to expand freedom and opportunity rather than restrict it—and promote liberty, knowledge, and life as core human rights.","facts":"Fun fact: Kyle integrates advanced mathematical and scientific concepts such as base scale calculus and spacetime curvature into their legal and tech discussions, showcasing a unique blend of analytical depth rarely seen on social platforms.","strength":"Kyle's greatest strength lies in their ability to dissect complex legal and technological ideas and present them to their community with clarity and intellectual curiosity. They excel at bridging abstract scientific concepts with real-world legal frameworks, making their content both compelling and educational.","weakness":"A potential weakness is that Kyle’s highly specialized and technical language could sometimes alienate more casual followers who might struggle to connect with the nuanced depth of their tweets. 
This might limit the broad appeal and growth of their audience.","recommendation":"To expand their reach on X, Kyle should balance their in-depth analysis with more accessible content that hooks a wider audience. Introducing engaging visuals or simplified threads summarizing complex ideas could attract newcomers and boost engagement, while maintaining their signature intellectual rigor.","roast":"Kyle talks about legal abundance and spacetime curvature like they’re cooking dinner—too bad their tweets don’t come with a decoder ring, because half the people probably think ‘base scale calculus’ is a new kind of app store metric.","win":"Kyle's biggest win is successfully melding complex mathematical and scientific ideas with legal discourse on an open platform, carving out a niche space that highlights the intersection of technology, law, and freedom."},"created":1763183732486,"type":"the analyst","id":"bestape"},{"user":{"id":"237152831","name":"Nick Rose","description":"$BTC since 2011 | Co-Founder @EthernalLabs @Arcbound @FanabeApp | Advisor-Strategy @AlphaProtocolVC |","followers_count":137044,"friends_count":1861,"statuses_count":9791,"profile_image_url_https":"https://pbs.twimg.com/profile_images/1781792976899022848/YyXbbnpr_normal.jpg","screen_name":"iamnickrose","location":"","entities":{"description":{"urls":[]},"url":{"urls":[{"display_url":"ethernallabs.com","expanded_url":"https://ethernallabs.com","url":"https://t.co/iB1oXufOPz","indices":[0,23]}]}}},"details":{"type":"The Analyst","description":"Nick Rose is a seasoned cryptocurrency analyst and strategist with a sharp eye for market trends and data-driven insights. With a strong foundation in Bitcoin since 2011 and an impressive engagement in the crypto ecosystem through his co-founding roles, he delivers critical commentary and strategic advice. His tweets combine technical analysis, candid opinions, and market predictions that resonate with the crypto community.","purpose":"To decode complex cryptocurrency market dynamics and empower his audience with actionable intelligence, helping them navigate investment decisions confidently.","beliefs":"Nick values transparency, data integrity, and strategic foresight, believing that informed analysis is key to mitigating risk in the volatile crypto space. He likely holds a skeptical stance against misinformation and cryptocurrency hype, emphasizing evidence over speculation.","facts":"A fun fact about Nick is that he’s been in the $BTC game since 2011, making him a Bitcoin OG who’s seen the market's entire evolution from the ground up.","strength":"Nick’s strengths lie in his extensive knowledge of blockchain technology and market cycles, a prolific tweeting habit that keeps followers informed, and a fearless approach to calling out unethical content or dubious market narratives.","weakness":"His blunt and critical tone, while valued by many, can sometimes come off as confrontational or polarizing, which might limit broader community engagement. The heavy technical focus could also alienate casual followers looking for simpler content.","roast":"For someone who tweets nearly 10,000 times, Nick’s probably the only crypto analyst who could out-tweet a bot—and still expect you to fact-check every single one before you invest. 
Who needs sleep when you have that much charting to do?","win":"Nick’s biggest win is building a respected multi-venture portfolio including EthernalLabs and AlphaProtocolVC, solidifying his status as both a thought leader and a strategic advisor in the crypto investment world.","recommendation":"To grow his audience on X, Nick should blend his deep technical insights with more accessible explanations and personal storytelling to hook a wider audience. Engaging with community questions and hosting live discussions can also bolster his influence by transforming followers into active participants."},"created":1763183672591,"type":"the analyst","id":"iamnickrose"}],"activities":{"nreplies":[{"label":"2025-10-18","value":0,"startTime":1760659200000,"endTime":1760745600000,"tweets":[]},{"label":"2025-10-19","value":0,"startTime":1760745600000,"endTime":1760832000000,"tweets":[]},{"label":"2025-10-20","value":0,"startTime":1760832000000,"endTime":1760918400000,"tweets":[{"bookmarked":false,"display_text_range":[11,273],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"youtube.com/watch?v=Ca_RbP…","expanded_url":"https://www.youtube.com/watch?v=Ca_RbPXraDE","url":"https://t.co/mfhee7RkJV","indices":[264,287]}],"user_mentions":[{"id_str":"17482791","name":"Ivan Zhao","screen_name":"ivanhzhao","indices":[0,10]},{"id_str":"1002272583465734145","name":"Curt Jaimungal","screen_name":"TOEwithCurt","indices":[249,261]},{"id_str":"476935002","name":"William Edward Hahn, PhD","screen_name":"will_hahn","indices":[263,273]},{"id_str":"1002272583465734145","name":"Curt Jaimungal","screen_name":"TOEwithCurt","indices":[238,250]},{"id_str":"476935002","name":"William Edward Hahn, PhD","screen_name":"will_hahn","indices":[252,262]}]},"favorited":false,"in_reply_to_screen_name":"ivanhzhao","lang":"en","retweeted":false,"fact_check":null,"id":"1980038806415028709","view_count":61,"bookmark_count":0,"created_at":1760913013000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"99850011","conversation_id_str":"1980000613959422362","full_text":"Yes! The discovery that languages (words?) 
are semi-autonomous 'organisms' opens up the possibility that all kinds of other critters at various scales are swimming out there—or in here 🧠\nYou might be interested in this conversation with @TOEwithCurt @will_hahn \nhttps://t.co/mfhee7RkJV\n\nIIRC we discussed the extension of these frameworks to other systems like societies, markets, etc..","in_reply_to_user_id_str":"17482791","in_reply_to_status_id_str":"1980000613959422362","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-10-21","value":0,"startTime":1760918400000,"endTime":1761004800000,"tweets":[]},{"label":"2025-10-22","value":0,"startTime":1761004800000,"endTime":1761091200000,"tweets":[]},{"label":"2025-10-23","value":0,"startTime":1761091200000,"endTime":1761177600000,"tweets":[]},{"label":"2025-10-24","value":0,"startTime":1761177600000,"endTime":1761264000000,"tweets":[]},{"label":"2025-10-25","value":0,"startTime":1761264000000,"endTime":1761350400000,"tweets":[]},{"label":"2025-10-26","value":0,"startTime":1761350400000,"endTime":1761436800000,"tweets":[]},{"label":"2025-10-27","value":2,"startTime":1761436800000,"endTime":1761523200000,"tweets":[{"bookmarked":false,"display_text_range":[14,286],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"14642331","name":"GREG ISENBERG","screen_name":"gregisenberg","indices":[0,13]}]},"favorited":false,"in_reply_to_screen_name":"gregisenberg","lang":"en","retweeted":false,"fact_check":null,"id":"1982402439497081257","view_count":793,"bookmark_count":1,"created_at":1761476547000,"favorite_count":7,"quote_count":0,"reply_count":2,"retweet_count":0,"user_id_str":"99850011","conversation_id_str":"1982110915521556980","full_text":"The educational system had been outdated and frozen for 50 years, at least. Hand calculation in an age of calculators and then computers never made sense. Why not learn to use the tools that people actually use to do the real work? The answer is that it’s never been about education but about social and intellectual hierarchy.","in_reply_to_user_id_str":"14642331","in_reply_to_status_id_str":"1982110915521556980","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-10-28","value":0,"startTime":1761523200000,"endTime":1761609600000,"tweets":[]},{"label":"2025-10-29","value":0,"startTime":1761609600000,"endTime":1761696000000,"tweets":[]},{"label":"2025-10-30","value":0,"startTime":1761696000000,"endTime":1761782400000,"tweets":[]},{"label":"2025-10-31","value":0,"startTime":1761782400000,"endTime":1761868800000,"tweets":[]},{"label":"2025-11-01","value":30,"startTime":1761868800000,"endTime":1761955200000,"tweets":[{"bookmarked":false,"display_text_range":[0,263],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1984234255459971441","view_count":2455,"bookmark_count":12,"created_at":1761913286000,"favorite_count":55,"quote_count":1,"reply_count":30,"retweet_count":3,"user_id_str":"99850011","conversation_id_str":"1984234255459971441","full_text":"Epiphenomenalism—the claim that consciousness is real but non-causal—denies the self-evident: feelings cause behavior. C-fibers fire, but pain makes us withdraw; dopamine flows, but pleasure makes us pursue. 
The hardware executes, but the software feels—and acts.","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-02","value":0,"startTime":1761955200000,"endTime":1762041600000,"tweets":[]},{"label":"2025-11-03","value":0,"startTime":1762041600000,"endTime":1762128000000,"tweets":[]},{"label":"2025-11-04","value":0,"startTime":1762128000000,"endTime":1762214400000,"tweets":[]},{"label":"2025-11-05","value":0,"startTime":1762214400000,"endTime":1762300800000,"tweets":[{"bookmarked":false,"display_text_range":[16,122],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"627896301","name":"Jon Hernandez","screen_name":"JonhernandezIA","indices":[0,15]}]},"favorited":false,"in_reply_to_screen_name":"JonhernandezIA","lang":"en","retweeted":false,"fact_check":null,"id":"1985708977674449206","view_count":14,"bookmark_count":0,"created_at":1762264887000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"99850011","conversation_id_str":"1985649184007967188","full_text":"@JonhernandezIA Sorry Yann. CNNs may be doing some preprocessing but the brain just is an autoregressive generative engine","in_reply_to_user_id_str":"627896301","in_reply_to_status_id_str":"1985649184007967188","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-06","value":7,"startTime":1762300800000,"endTime":1762387200000,"tweets":[{"bookmarked":false,"display_text_range":[24,301],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/K1y1BLRbRV","expanded_url":"https://x.com/ebarenholtz/status/1986116712983314542/photo/1","id_str":"1986116698844299264","indices":[302,325],"media_key":"3_1986116698844299264","media_url_https":"https://pbs.twimg.com/media/G5AanV0WAAAge1b.jpg","type":"photo","url":"https://t.co/K1y1BLRbRV","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1577,"w":1170,"resize":"fit"},"medium":{"h":1200,"w":890,"resize":"fit"},"small":{"h":680,"w":505,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1577,"width":1170,"focus_rects":[{"x":0,"y":264,"w":1170,"h":655},{"x":0,"y":6,"w":1170,"h":1170},{"x":0,"y":0,"w":1170,"h":1334},{"x":0,"y":0,"w":789,"h":1577},{"x":0,"y":0,"w":1170,"h":1577}]},"media_results":{"result":{"media_key":"3_1986116698844299264"}}},{"display_url":"pic.x.com/K1y1BLRbRV","expanded_url":"https://x.com/ebarenholtz/status/1986116712983314542/photo/1","id_str":"1986116698852782080","indices":[302,325],"media_key":"3_1986116698852782080","media_url_https":"https://pbs.twimg.com/media/G5AanV2XcAAOS9N.jpg","type":"photo","url":"https://t.co/K1y1BLRbRV","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1457,"w":1170,"resize":"fit"},"medium":{"h":1200,"w":964,"resize":"fit"},"small":{"h":680,"w":546,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1457,"width":1170,"focus_rects":[{"x":0,"y":802,"w":1170,"h":655},{"x":0,"y":287,"w":1170,"h":1170},{"x":0,"y":123,"w":1170,"h":1334},{"x":400,"y":0,"w":729,"h":1457},{"x":0,"y":0,"w":1170,"h":1457}]},"media_results":{"result":{"media_key":"3_1986116698852782080"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"232294292","name":"Gary 
Marcus","screen_name":"GaryMarcus","indices":[0,11]},{"id_str":"449588356","name":"Garry Kasparov","screen_name":"Kasparov63","indices":[12,23]}]},"extended_entities":{"media":[{"display_url":"pic.x.com/K1y1BLRbRV","expanded_url":"https://x.com/ebarenholtz/status/1986116712983314542/photo/1","id_str":"1986116698844299264","indices":[302,325],"media_key":"3_1986116698844299264","media_url_https":"https://pbs.twimg.com/media/G5AanV0WAAAge1b.jpg","type":"photo","url":"https://t.co/K1y1BLRbRV","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1577,"w":1170,"resize":"fit"},"medium":{"h":1200,"w":890,"resize":"fit"},"small":{"h":680,"w":505,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1577,"width":1170,"focus_rects":[{"x":0,"y":264,"w":1170,"h":655},{"x":0,"y":6,"w":1170,"h":1170},{"x":0,"y":0,"w":1170,"h":1334},{"x":0,"y":0,"w":789,"h":1577},{"x":0,"y":0,"w":1170,"h":1577}]},"media_results":{"result":{"media_key":"3_1986116698844299264"}}},{"display_url":"pic.x.com/K1y1BLRbRV","expanded_url":"https://x.com/ebarenholtz/status/1986116712983314542/photo/1","id_str":"1986116698852782080","indices":[302,325],"media_key":"3_1986116698852782080","media_url_https":"https://pbs.twimg.com/media/G5AanV2XcAAOS9N.jpg","type":"photo","url":"https://t.co/K1y1BLRbRV","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1457,"w":1170,"resize":"fit"},"medium":{"h":1200,"w":964,"resize":"fit"},"small":{"h":680,"w":546,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1457,"width":1170,"focus_rects":[{"x":0,"y":802,"w":1170,"h":655},{"x":0,"y":287,"w":1170,"h":1170},{"x":0,"y":123,"w":1170,"h":1334},{"x":400,"y":0,"w":729,"h":1457},{"x":0,"y":0,"w":1170,"h":1457}]},"media_results":{"result":{"media_key":"3_1986116698852782080"}}}]},"favorited":false,"in_reply_to_screen_name":"GaryMarcus","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1986116712983314542","view_count":1088,"bookmark_count":3,"created_at":1762362099000,"favorite_count":9,"quote_count":0,"reply_count":7,"retweet_count":0,"user_id_str":"99850011","conversation_id_str":"1986094740002283675","full_text":"Just tried a bunch of these on GPT5 and it nailed them and also intuited exactly what I was trying to do in terms of the distinction between fact and belief. Most likely the models they tested are older and weren’t exposed to enough data concerning these distinctions. It’s not about a deep inherent ability in humans to distinguish them either. It’s also just context sensitivity and the right training data. 
It’s all autoregression in us and them!\n\nGo try yourself","in_reply_to_user_id_str":"232294292","in_reply_to_status_id_str":"1986094740002283675","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-07","value":0,"startTime":1762387200000,"endTime":1762473600000,"tweets":[]},{"label":"2025-11-08","value":0,"startTime":1762473600000,"endTime":1762560000000,"tweets":[{"bookmarked":false,"display_text_range":[10,177],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1433898842865520644","name":"Spencer Baggins","screen_name":"bigaiguy","indices":[0,9]}]},"favorited":false,"in_reply_to_screen_name":"bigaiguy","lang":"en","retweeted":false,"fact_check":null,"id":"1986817152897225116","view_count":98,"bookmark_count":0,"created_at":1762529097000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"99850011","conversation_id_str":"1986419671177077120","full_text":"@bigaiguy The takeaway isn’t that AI labs failed to build world models. It’s that world models aren’t required for linguistic (and likely other) intelligence in the first place.","in_reply_to_user_id_str":"1433898842865520644","in_reply_to_status_id_str":"1986419671177077120","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-09","value":0,"startTime":1762560000000,"endTime":1762646400000,"tweets":[]},{"label":"2025-11-10","value":0,"startTime":1762646400000,"endTime":1762732800000,"tweets":[]},{"label":"2025-11-11","value":0,"startTime":1762732800000,"endTime":1762819200000,"tweets":[]},{"label":"2025-11-12","value":189,"startTime":1762819200000,"endTime":1762905600000,"tweets":[{"bookmarked":false,"display_text_range":[0,276],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1988224995533685074","view_count":66199,"bookmark_count":833,"created_at":1762864753000,"favorite_count":999,"quote_count":39,"reply_count":189,"retweet_count":144,"user_id_str":"99850011","conversation_id_str":"1988224995533685074","full_text":"People still don’t seem to grasp how insane the structure of language revealed by LLMs really is.\n\nAll structured sequences fall into one of three categories:\n1.Those generated by external rules (like chess, Go, or Fibonacci).\n2.Those generated by external processes (like DNA replication, weather systems, or the stock market).\n3.Those that are self-contained, whose only rule is to continue according to their own structure.\n\nLanguage is the only known example of the third kind that does anything.\n\nIn fact, it does everything.\n\nTrain a model only to predict the next word, and you get the full expressive range of human speech: reasoning, dialogue, humor. There are no rules to learn outside the structure of the corpus itself. Language’s generative law is fully “immanent”—its cause and continuation are one and the same. To learn language is simply to be able to continue it; the rule of language is its own continuation.\n\nFrom this we can conclude three things:\n\n1)You don’t need an innate or any external grammar or world model; the corpus already contains its own generative structure. Chomsky was wrong.\n2) Language is the only self-contained system that produces coherent, functional output.\n3) This forces the conclusion that humans generate language the same way. 
To suggest there’s an external rule system that LLMs just happen to duplicate perfectly is absurd; the simplest and only coherent explanation is that the generative structure they capture is the structure of human language itself.\n\nLLMs didn’t just learn patterns. They revealed what language has always been: an immanent generative system, singular among all possible ones, and powerful enough to align minds and build civilization.\n\nWtf.","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-13","value":0,"startTime":1762905600000,"endTime":1762992000000,"tweets":[]},{"label":"2025-11-14","value":30,"startTime":1762992000000,"endTime":1763078400000,"tweets":[{"bookmarked":false,"display_text_range":[0,279],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1988767530425725423","view_count":3843,"bookmark_count":48,"created_at":1762994103000,"favorite_count":73,"quote_count":0,"reply_count":30,"retweet_count":5,"user_id_str":"99850011","conversation_id_str":"1988767530425725423","full_text":"The internally driven nature of LLMs blows up the very idea of linguistic \"meaning.\" These models learn to generate based on the relations between tokens alone, without any of the sensory data we normally associate with words. This is shocking. To the model, language is a highly structured stream of meaningless squiggles; no reference, no world model, no 'grounding' in anything but the relations between the tokens themselves. But somehow, this is sufficient to maintain linguistic competency. \n\nWhat about us? Our language seems to have meaning. When someone says \"imagine a red balloon,\" you can see it in your head. And this meaning has utility. If I say, \"grab that red book on the shelf,\" you may actually grab it. LLMs don't have any of this. All they have is text. No exposure to anything the text might refer to.\n\nDoes this suggest that our language is fundamentally different than LLMs'? Not necessarily. Instead, meaning and utility may simply consist of a constellation of processes that interact with, but are computationally distinct from, language. When I say \"grab that red book on the shelf,\" the linguistic prompt may generate not just additional language (maybe 'sure' or 'nah' or 'which shelf?’') but also behavior: looking for the book, walking over to the shelf, etc.. It may also generate \"preparatory\" images of the book or motor movements of reaching for it. Just like a linguistic prompt can engender many possible linguistic continuations, so too can it lead to many other extra-linguistic ones. The words simply influence these behaviors just as they can influence the choice of next words.\n\nOf course, the causal arrow can go in the opposite direction as well: if you see two red books and ask \"which one do you want?\" this is the visual stimulus 'prompting' the linguistic system. But, critically, this has nothing to do with the internal process of language itself. The inputs may be externally determined by the visual or other stimulus but afterward is just autoregressive next token generation, based on the internal structure of the generative engine. This is no different than an LLM being prompted by its user.\n\nTogether, these cross-modal generations constitute what we would call the 'meaning' of the words. 
When this coordination succeeds systematically—when \"grab the red book\" reliably produces convergent behavior—we retrospectively describe this as the words \"referring\" to the book. But this gets the causality backwards. The words don't coordinate because they refer. We call them referential because they coordinate\n.\nThis is very different than updating some world model, or anything like direct reference from words to perceptions or actions. That form of meaning turns out to be meaningless.","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-15","value":0,"startTime":1763078400000,"endTime":1763164800000,"tweets":[]},{"label":"2025-11-16","value":0,"startTime":1763164800000,"endTime":1763251200000,"tweets":[]},{"label":"2025-11-17","value":0,"startTime":1763251200000,"endTime":1763337600000,"tweets":[]}]
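The 2025-11-12 post quoted above turns on a technical claim: training on nothing but next-token prediction (continuing a corpus according to its own structure) is enough to produce generative competence. Purely as an illustrative toy, and not the author's model or anything resembling a real LLM, the sketch below counts bigrams over a made-up corpus and autoregressively samples a continuation; every name in it (corpus, follow, continue_text) is invented for the example.

# Toy sketch only: a bigram "next-token" predictor learned from a tiny corpus.
# The only rule it learns is how the corpus tends to continue itself.
import random
from collections import defaultdict, Counter

corpus = (
    "language continues itself one token at a time . "
    "a model trained to continue language learns language . "
    "to learn language is to continue it one token at a time ."
).split()

# For every token, count which tokens follow it in the corpus.
follow = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follow[prev][nxt] += 1

def continue_text(prompt_token: str, length: int = 12, seed: int = 0) -> str:
    """Autoregressively sample next tokens from the learned counts."""
    rng = random.Random(seed)
    out = [prompt_token]
    for _ in range(length):
        options = follow.get(out[-1])
        if not options:  # token never seen with a successor; stop here
            break
        tokens, weights = zip(*options.items())
        out.append(rng.choices(tokens, weights=weights, k=1)[0])
    return " ".join(out)

print(continue_text("language"))
# Prints a sampled continuation, e.g. "language continues itself one token at a time . a model ..."

The sketch only restates the post's premise in runnable form: generation here is nothing but continuation of the training sequence, with no grammar or world model supplied from outside.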
The remaining engagement series in the payload ("nbookmarks", "nretweets", "nlikes") track the same posts listed in the replies series above; only the daily totals differ:
"nbookmarks" (daily totals): 2025-10-27: 1, 2025-11-01: 12, 2025-11-06: 3, 2025-11-12: 833, 2025-11-14: 48; all other days between 2025-10-18 and 2025-11-17: 0.
"nretweets" (daily totals): 2025-11-01: 3, 2025-11-12: 144, 2025-11-14: 5; all other days: 0.
"nlikes" (daily totals): the series opens with 2025-10-20: 1, for the reply whose text begins: "Yes! The discovery that languages (words?)
are semi-autonomous 'organisms' opens up the possibility that all kinds of other critters at various scales are swimming out there—or in here 🧠\nYou might be interested in this conversation with @TOEwithCurt @will_hahn \nhttps://t.co/mfhee7RkJV\n\nIIRC we discussed the extension of these frameworks to other systems like societies, markets, etc..","in_reply_to_user_id_str":"17482791","in_reply_to_status_id_str":"1980000613959422362","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-10-21","value":0,"startTime":1760918400000,"endTime":1761004800000,"tweets":[]},{"label":"2025-10-22","value":0,"startTime":1761004800000,"endTime":1761091200000,"tweets":[]},{"label":"2025-10-23","value":0,"startTime":1761091200000,"endTime":1761177600000,"tweets":[]},{"label":"2025-10-24","value":0,"startTime":1761177600000,"endTime":1761264000000,"tweets":[]},{"label":"2025-10-25","value":0,"startTime":1761264000000,"endTime":1761350400000,"tweets":[]},{"label":"2025-10-26","value":0,"startTime":1761350400000,"endTime":1761436800000,"tweets":[]},{"label":"2025-10-27","value":7,"startTime":1761436800000,"endTime":1761523200000,"tweets":[{"bookmarked":false,"display_text_range":[14,286],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"14642331","name":"GREG ISENBERG","screen_name":"gregisenberg","indices":[0,13]}]},"favorited":false,"in_reply_to_screen_name":"gregisenberg","lang":"en","retweeted":false,"fact_check":null,"id":"1982402439497081257","view_count":793,"bookmark_count":1,"created_at":1761476547000,"favorite_count":7,"quote_count":0,"reply_count":2,"retweet_count":0,"user_id_str":"99850011","conversation_id_str":"1982110915521556980","full_text":"The educational system had been outdated and frozen for 50 years, at least. Hand calculation in an age of calculators and then computers never made sense. Why not learn to use the tools that people actually use to do the real work? The answer is that it’s never been about education but about social and intellectual hierarchy.","in_reply_to_user_id_str":"14642331","in_reply_to_status_id_str":"1982110915521556980","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-10-28","value":0,"startTime":1761523200000,"endTime":1761609600000,"tweets":[]},{"label":"2025-10-29","value":0,"startTime":1761609600000,"endTime":1761696000000,"tweets":[]},{"label":"2025-10-30","value":0,"startTime":1761696000000,"endTime":1761782400000,"tweets":[]},{"label":"2025-10-31","value":0,"startTime":1761782400000,"endTime":1761868800000,"tweets":[]},{"label":"2025-11-01","value":55,"startTime":1761868800000,"endTime":1761955200000,"tweets":[{"bookmarked":false,"display_text_range":[0,263],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1984234255459971441","view_count":2455,"bookmark_count":12,"created_at":1761913286000,"favorite_count":55,"quote_count":1,"reply_count":30,"retweet_count":3,"user_id_str":"99850011","conversation_id_str":"1984234255459971441","full_text":"Epiphenomenalism—the claim that consciousness is real but non-causal—denies the self-evident: feelings cause behavior. C-fibers fire, but pain makes us withdraw; dopamine flows, but pleasure makes us pursue. 
The hardware executes, but the software feels—and acts.","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-02","value":0,"startTime":1761955200000,"endTime":1762041600000,"tweets":[]},{"label":"2025-11-03","value":0,"startTime":1762041600000,"endTime":1762128000000,"tweets":[]},{"label":"2025-11-04","value":0,"startTime":1762128000000,"endTime":1762214400000,"tweets":[]},{"label":"2025-11-05","value":0,"startTime":1762214400000,"endTime":1762300800000,"tweets":[{"bookmarked":false,"display_text_range":[16,122],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"627896301","name":"Jon Hernandez","screen_name":"JonhernandezIA","indices":[0,15]}]},"favorited":false,"in_reply_to_screen_name":"JonhernandezIA","lang":"en","retweeted":false,"fact_check":null,"id":"1985708977674449206","view_count":14,"bookmark_count":0,"created_at":1762264887000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"99850011","conversation_id_str":"1985649184007967188","full_text":"@JonhernandezIA Sorry Yann. CNNs may be doing some preprocessing but the brain just is an autoregressive generative engine","in_reply_to_user_id_str":"627896301","in_reply_to_status_id_str":"1985649184007967188","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-06","value":9,"startTime":1762300800000,"endTime":1762387200000,"tweets":[{"bookmarked":false,"display_text_range":[24,301],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/K1y1BLRbRV","expanded_url":"https://x.com/ebarenholtz/status/1986116712983314542/photo/1","id_str":"1986116698844299264","indices":[302,325],"media_key":"3_1986116698844299264","media_url_https":"https://pbs.twimg.com/media/G5AanV0WAAAge1b.jpg","type":"photo","url":"https://t.co/K1y1BLRbRV","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1577,"w":1170,"resize":"fit"},"medium":{"h":1200,"w":890,"resize":"fit"},"small":{"h":680,"w":505,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1577,"width":1170,"focus_rects":[{"x":0,"y":264,"w":1170,"h":655},{"x":0,"y":6,"w":1170,"h":1170},{"x":0,"y":0,"w":1170,"h":1334},{"x":0,"y":0,"w":789,"h":1577},{"x":0,"y":0,"w":1170,"h":1577}]},"media_results":{"result":{"media_key":"3_1986116698844299264"}}},{"display_url":"pic.x.com/K1y1BLRbRV","expanded_url":"https://x.com/ebarenholtz/status/1986116712983314542/photo/1","id_str":"1986116698852782080","indices":[302,325],"media_key":"3_1986116698852782080","media_url_https":"https://pbs.twimg.com/media/G5AanV2XcAAOS9N.jpg","type":"photo","url":"https://t.co/K1y1BLRbRV","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1457,"w":1170,"resize":"fit"},"medium":{"h":1200,"w":964,"resize":"fit"},"small":{"h":680,"w":546,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1457,"width":1170,"focus_rects":[{"x":0,"y":802,"w":1170,"h":655},{"x":0,"y":287,"w":1170,"h":1170},{"x":0,"y":123,"w":1170,"h":1334},{"x":400,"y":0,"w":729,"h":1457},{"x":0,"y":0,"w":1170,"h":1457}]},"media_results":{"result":{"media_key":"3_1986116698852782080"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"232294292","name":"Gary 
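The 2025-11-12 post turns on the claim that a model trained only to predict the next token inherits the generative structure of its corpus. As a deliberately tiny illustration of that loop (nothing resembling an actual LLM), here is a bigram sampler whose only "rule" is the statistics of its own miniature corpus; the corpus text, function name, and seed are invented for this sketch:

```python
# Toy autoregressive loop: the generator's only rule is to continue according
# to the statistics of its own (tiny, invented) corpus.
import random
from collections import defaultdict

corpus = (
    "language continues itself . the rule of language is its own continuation . "
    "to learn language is to be able to continue it ."
).split()

# Count observed next tokens: the corpus supplies its own generative structure.
nexts = defaultdict(list)
for prev, cur in zip(corpus, corpus[1:]):
    nexts[prev].append(cur)

def generate(seed: str, length: int = 12) -> str:
    token, out = seed, [seed]
    for _ in range(length):
        candidates = nexts.get(token)
        if not candidates:               # no observed continuation: stop
            break
        token = random.choice(candidates)  # sample the next token
        out.append(token)
    return " ".join(out)

print(generate("language"))
```

Scaling this same continue-from-context loop up to transformer-sized models is, on the post's account, what yields the fluency it describes.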
Marcus","screen_name":"GaryMarcus","indices":[0,11]},{"id_str":"449588356","name":"Garry Kasparov","screen_name":"Kasparov63","indices":[12,23]}]},"extended_entities":{"media":[{"display_url":"pic.x.com/K1y1BLRbRV","expanded_url":"https://x.com/ebarenholtz/status/1986116712983314542/photo/1","id_str":"1986116698844299264","indices":[302,325],"media_key":"3_1986116698844299264","media_url_https":"https://pbs.twimg.com/media/G5AanV0WAAAge1b.jpg","type":"photo","url":"https://t.co/K1y1BLRbRV","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1577,"w":1170,"resize":"fit"},"medium":{"h":1200,"w":890,"resize":"fit"},"small":{"h":680,"w":505,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1577,"width":1170,"focus_rects":[{"x":0,"y":264,"w":1170,"h":655},{"x":0,"y":6,"w":1170,"h":1170},{"x":0,"y":0,"w":1170,"h":1334},{"x":0,"y":0,"w":789,"h":1577},{"x":0,"y":0,"w":1170,"h":1577}]},"media_results":{"result":{"media_key":"3_1986116698844299264"}}},{"display_url":"pic.x.com/K1y1BLRbRV","expanded_url":"https://x.com/ebarenholtz/status/1986116712983314542/photo/1","id_str":"1986116698852782080","indices":[302,325],"media_key":"3_1986116698852782080","media_url_https":"https://pbs.twimg.com/media/G5AanV2XcAAOS9N.jpg","type":"photo","url":"https://t.co/K1y1BLRbRV","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1457,"w":1170,"resize":"fit"},"medium":{"h":1200,"w":964,"resize":"fit"},"small":{"h":680,"w":546,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1457,"width":1170,"focus_rects":[{"x":0,"y":802,"w":1170,"h":655},{"x":0,"y":287,"w":1170,"h":1170},{"x":0,"y":123,"w":1170,"h":1334},{"x":400,"y":0,"w":729,"h":1457},{"x":0,"y":0,"w":1170,"h":1457}]},"media_results":{"result":{"media_key":"3_1986116698852782080"}}}]},"favorited":false,"in_reply_to_screen_name":"GaryMarcus","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1986116712983314542","view_count":1088,"bookmark_count":3,"created_at":1762362099000,"favorite_count":9,"quote_count":0,"reply_count":7,"retweet_count":0,"user_id_str":"99850011","conversation_id_str":"1986094740002283675","full_text":"Just tried a bunch of these on GPT5 and it nailed them and also intuited exactly what I was trying to do in terms of the distinction between fact and belief. Most likely the models they tested are older and weren’t exposed to enough data concerning these distinctions. It’s not about a deep inherent ability in humans to distinguish them either. It’s also just context sensitivity and the right training data. 
It’s all autoregression in us and them!\n\nGo try yourself","in_reply_to_user_id_str":"232294292","in_reply_to_status_id_str":"1986094740002283675","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-07","value":0,"startTime":1762387200000,"endTime":1762473600000,"tweets":[]},{"label":"2025-11-08","value":0,"startTime":1762473600000,"endTime":1762560000000,"tweets":[{"bookmarked":false,"display_text_range":[10,177],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1433898842865520644","name":"Spencer Baggins","screen_name":"bigaiguy","indices":[0,9]}]},"favorited":false,"in_reply_to_screen_name":"bigaiguy","lang":"en","retweeted":false,"fact_check":null,"id":"1986817152897225116","view_count":98,"bookmark_count":0,"created_at":1762529097000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"99850011","conversation_id_str":"1986419671177077120","full_text":"@bigaiguy The takeaway isn’t that AI labs failed to build world models. It’s that world models aren’t required for linguistic (and likely other) intelligence in the first place.","in_reply_to_user_id_str":"1433898842865520644","in_reply_to_status_id_str":"1986419671177077120","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-09","value":0,"startTime":1762560000000,"endTime":1762646400000,"tweets":[]},{"label":"2025-11-10","value":0,"startTime":1762646400000,"endTime":1762732800000,"tweets":[]},{"label":"2025-11-11","value":0,"startTime":1762732800000,"endTime":1762819200000,"tweets":[]},{"label":"2025-11-12","value":999,"startTime":1762819200000,"endTime":1762905600000,"tweets":[{"bookmarked":false,"display_text_range":[0,276],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1988224995533685074","view_count":66199,"bookmark_count":833,"created_at":1762864753000,"favorite_count":999,"quote_count":39,"reply_count":189,"retweet_count":144,"user_id_str":"99850011","conversation_id_str":"1988224995533685074","full_text":"People still don’t seem to grasp how insane the structure of language revealed by LLMs really is.\n\nAll structured sequences fall into one of three categories:\n1.Those generated by external rules (like chess, Go, or Fibonacci).\n2.Those generated by external processes (like DNA replication, weather systems, or the stock market).\n3.Those that are self-contained, whose only rule is to continue according to their own structure.\n\nLanguage is the only known example of the third kind that does anything.\n\nIn fact, it does everything.\n\nTrain a model only to predict the next word, and you get the full expressive range of human speech: reasoning, dialogue, humor. There are no rules to learn outside the structure of the corpus itself. Language’s generative law is fully “immanent”—its cause and continuation are one and the same. To learn language is simply to be able to continue it; the rule of language is its own continuation.\n\nFrom this we can conclude three things:\n\n1)You don’t need an innate or any external grammar or world model; the corpus already contains its own generative structure. Chomsky was wrong.\n2) Language is the only self-contained system that produces coherent, functional output.\n3) This forces the conclusion that humans generate language the same way. 
To suggest there’s an external rule system that LLMs just happen to duplicate perfectly is absurd; the simplest and only coherent explanation is that the generative structure they capture is the structure of human language itself.\n\nLLMs didn’t just learn patterns. They revealed what language has always been: an immanent generative system, singular among all possible ones, and powerful enough to align minds and build civilization.\n\nWtf.","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-13","value":0,"startTime":1762905600000,"endTime":1762992000000,"tweets":[]},{"label":"2025-11-14","value":73,"startTime":1762992000000,"endTime":1763078400000,"tweets":[{"bookmarked":false,"display_text_range":[0,279],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1988767530425725423","view_count":3843,"bookmark_count":48,"created_at":1762994103000,"favorite_count":73,"quote_count":0,"reply_count":30,"retweet_count":5,"user_id_str":"99850011","conversation_id_str":"1988767530425725423","full_text":"The internally driven nature of LLMs blows up the very idea of linguistic \"meaning.\" These models learn to generate based on the relations between tokens alone, without any of the sensory data we normally associate with words. This is shocking. To the model, language is a highly structured stream of meaningless squiggles; no reference, no world model, no 'grounding' in anything but the relations between the tokens themselves. But somehow, this is sufficient to maintain linguistic competency. \n\nWhat about us? Our language seems to have meaning. When someone says \"imagine a red balloon,\" you can see it in your head. And this meaning has utility. If I say, \"grab that red book on the shelf,\" you may actually grab it. LLMs don't have any of this. All they have is text. No exposure to anything the text might refer to.\n\nDoes this suggest that our language is fundamentally different than LLMs'? Not necessarily. Instead, meaning and utility may simply consist of a constellation of processes that interact with, but are computationally distinct from, language. When I say \"grab that red book on the shelf,\" the linguistic prompt may generate not just additional language (maybe 'sure' or 'nah' or 'which shelf?’') but also behavior: looking for the book, walking over to the shelf, etc.. It may also generate \"preparatory\" images of the book or motor movements of reaching for it. Just like a linguistic prompt can engender many possible linguistic continuations, so too can it lead to many other extra-linguistic ones. The words simply influence these behaviors just as they can influence the choice of next words.\n\nOf course, the causal arrow can go in the opposite direction as well: if you see two red books and ask \"which one do you want?\" this is the visual stimulus 'prompting' the linguistic system. But, critically, this has nothing to do with the internal process of language itself. The inputs may be externally determined by the visual or other stimulus but afterward is just autoregressive next token generation, based on the internal structure of the generative engine. This is no different than an LLM being prompted by its user.\n\nTogether, these cross-modal generations constitute what we would call the 'meaning' of the words. 
When this coordination succeeds systematically—when \"grab the red book\" reliably produces convergent behavior—we retrospectively describe this as the words \"referring\" to the book. But this gets the causality backwards. The words don't coordinate because they refer. We call them referential because they coordinate\n.\nThis is very different than updating some world model, or anything like direct reference from words to perceptions or actions. That form of meaning turns out to be meaningless.","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-15","value":0,"startTime":1763078400000,"endTime":1763164800000,"tweets":[]},{"label":"2025-11-16","value":0,"startTime":1763164800000,"endTime":1763251200000,"tweets":[]},{"label":"2025-11-17","value":0,"startTime":1763251200000,"endTime":1763337600000,"tweets":[]}],"nviews":[{"label":"2025-10-18","value":0,"startTime":1760659200000,"endTime":1760745600000,"tweets":[]},{"label":"2025-10-19","value":0,"startTime":1760745600000,"endTime":1760832000000,"tweets":[]},{"label":"2025-10-20","value":61,"startTime":1760832000000,"endTime":1760918400000,"tweets":[{"bookmarked":false,"display_text_range":[11,273],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"youtube.com/watch?v=Ca_RbP…","expanded_url":"https://www.youtube.com/watch?v=Ca_RbPXraDE","url":"https://t.co/mfhee7RkJV","indices":[264,287]}],"user_mentions":[{"id_str":"17482791","name":"Ivan Zhao","screen_name":"ivanhzhao","indices":[0,10]},{"id_str":"1002272583465734145","name":"Curt Jaimungal","screen_name":"TOEwithCurt","indices":[249,261]},{"id_str":"476935002","name":"William Edward Hahn, PhD","screen_name":"will_hahn","indices":[263,273]},{"id_str":"1002272583465734145","name":"Curt Jaimungal","screen_name":"TOEwithCurt","indices":[238,250]},{"id_str":"476935002","name":"William Edward Hahn, PhD","screen_name":"will_hahn","indices":[252,262]}]},"favorited":false,"in_reply_to_screen_name":"ivanhzhao","lang":"en","retweeted":false,"fact_check":null,"id":"1980038806415028709","view_count":61,"bookmark_count":0,"created_at":1760913013000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"99850011","conversation_id_str":"1980000613959422362","full_text":"Yes! The discovery that languages (words?) 
are semi-autonomous 'organisms' opens up the possibility that all kinds of other critters at various scales are swimming out there—or in here 🧠\nYou might be interested in this conversation with @TOEwithCurt @will_hahn \nhttps://t.co/mfhee7RkJV\n\nIIRC we discussed the extension of these frameworks to other systems like societies, markets, etc..","in_reply_to_user_id_str":"17482791","in_reply_to_status_id_str":"1980000613959422362","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-10-21","value":0,"startTime":1760918400000,"endTime":1761004800000,"tweets":[]},{"label":"2025-10-22","value":0,"startTime":1761004800000,"endTime":1761091200000,"tweets":[]},{"label":"2025-10-23","value":0,"startTime":1761091200000,"endTime":1761177600000,"tweets":[]},{"label":"2025-10-24","value":0,"startTime":1761177600000,"endTime":1761264000000,"tweets":[]},{"label":"2025-10-25","value":0,"startTime":1761264000000,"endTime":1761350400000,"tweets":[]},{"label":"2025-10-26","value":0,"startTime":1761350400000,"endTime":1761436800000,"tweets":[]},{"label":"2025-10-27","value":793,"startTime":1761436800000,"endTime":1761523200000,"tweets":[{"bookmarked":false,"display_text_range":[14,286],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"14642331","name":"GREG ISENBERG","screen_name":"gregisenberg","indices":[0,13]}]},"favorited":false,"in_reply_to_screen_name":"gregisenberg","lang":"en","retweeted":false,"fact_check":null,"id":"1982402439497081257","view_count":793,"bookmark_count":1,"created_at":1761476547000,"favorite_count":7,"quote_count":0,"reply_count":2,"retweet_count":0,"user_id_str":"99850011","conversation_id_str":"1982110915521556980","full_text":"The educational system had been outdated and frozen for 50 years, at least. Hand calculation in an age of calculators and then computers never made sense. Why not learn to use the tools that people actually use to do the real work? The answer is that it’s never been about education but about social and intellectual hierarchy.","in_reply_to_user_id_str":"14642331","in_reply_to_status_id_str":"1982110915521556980","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-10-28","value":0,"startTime":1761523200000,"endTime":1761609600000,"tweets":[]},{"label":"2025-10-29","value":0,"startTime":1761609600000,"endTime":1761696000000,"tweets":[]},{"label":"2025-10-30","value":0,"startTime":1761696000000,"endTime":1761782400000,"tweets":[]},{"label":"2025-10-31","value":0,"startTime":1761782400000,"endTime":1761868800000,"tweets":[]},{"label":"2025-11-01","value":2455,"startTime":1761868800000,"endTime":1761955200000,"tweets":[{"bookmarked":false,"display_text_range":[0,263],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1984234255459971441","view_count":2455,"bookmark_count":12,"created_at":1761913286000,"favorite_count":55,"quote_count":1,"reply_count":30,"retweet_count":3,"user_id_str":"99850011","conversation_id_str":"1984234255459971441","full_text":"Epiphenomenalism—the claim that consciousness is real but non-causal—denies the self-evident: feelings cause behavior. C-fibers fire, but pain makes us withdraw; dopamine flows, but pleasure makes us pursue. 
The hardware executes, but the software feels—and acts.","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-02","value":0,"startTime":1761955200000,"endTime":1762041600000,"tweets":[]},{"label":"2025-11-03","value":0,"startTime":1762041600000,"endTime":1762128000000,"tweets":[]},{"label":"2025-11-04","value":0,"startTime":1762128000000,"endTime":1762214400000,"tweets":[]},{"label":"2025-11-05","value":14,"startTime":1762214400000,"endTime":1762300800000,"tweets":[{"bookmarked":false,"display_text_range":[16,122],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"627896301","name":"Jon Hernandez","screen_name":"JonhernandezIA","indices":[0,15]}]},"favorited":false,"in_reply_to_screen_name":"JonhernandezIA","lang":"en","retweeted":false,"fact_check":null,"id":"1985708977674449206","view_count":14,"bookmark_count":0,"created_at":1762264887000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"99850011","conversation_id_str":"1985649184007967188","full_text":"@JonhernandezIA Sorry Yann. CNNs may be doing some preprocessing but the brain just is an autoregressive generative engine","in_reply_to_user_id_str":"627896301","in_reply_to_status_id_str":"1985649184007967188","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-06","value":1088,"startTime":1762300800000,"endTime":1762387200000,"tweets":[{"bookmarked":false,"display_text_range":[24,301],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/K1y1BLRbRV","expanded_url":"https://x.com/ebarenholtz/status/1986116712983314542/photo/1","id_str":"1986116698844299264","indices":[302,325],"media_key":"3_1986116698844299264","media_url_https":"https://pbs.twimg.com/media/G5AanV0WAAAge1b.jpg","type":"photo","url":"https://t.co/K1y1BLRbRV","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1577,"w":1170,"resize":"fit"},"medium":{"h":1200,"w":890,"resize":"fit"},"small":{"h":680,"w":505,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1577,"width":1170,"focus_rects":[{"x":0,"y":264,"w":1170,"h":655},{"x":0,"y":6,"w":1170,"h":1170},{"x":0,"y":0,"w":1170,"h":1334},{"x":0,"y":0,"w":789,"h":1577},{"x":0,"y":0,"w":1170,"h":1577}]},"media_results":{"result":{"media_key":"3_1986116698844299264"}}},{"display_url":"pic.x.com/K1y1BLRbRV","expanded_url":"https://x.com/ebarenholtz/status/1986116712983314542/photo/1","id_str":"1986116698852782080","indices":[302,325],"media_key":"3_1986116698852782080","media_url_https":"https://pbs.twimg.com/media/G5AanV2XcAAOS9N.jpg","type":"photo","url":"https://t.co/K1y1BLRbRV","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1457,"w":1170,"resize":"fit"},"medium":{"h":1200,"w":964,"resize":"fit"},"small":{"h":680,"w":546,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1457,"width":1170,"focus_rects":[{"x":0,"y":802,"w":1170,"h":655},{"x":0,"y":287,"w":1170,"h":1170},{"x":0,"y":123,"w":1170,"h":1334},{"x":400,"y":0,"w":729,"h":1457},{"x":0,"y":0,"w":1170,"h":1457}]},"media_results":{"result":{"media_key":"3_1986116698852782080"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"232294292","name":"Gary 
Marcus","screen_name":"GaryMarcus","indices":[0,11]},{"id_str":"449588356","name":"Garry Kasparov","screen_name":"Kasparov63","indices":[12,23]}]},"extended_entities":{"media":[{"display_url":"pic.x.com/K1y1BLRbRV","expanded_url":"https://x.com/ebarenholtz/status/1986116712983314542/photo/1","id_str":"1986116698844299264","indices":[302,325],"media_key":"3_1986116698844299264","media_url_https":"https://pbs.twimg.com/media/G5AanV0WAAAge1b.jpg","type":"photo","url":"https://t.co/K1y1BLRbRV","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1577,"w":1170,"resize":"fit"},"medium":{"h":1200,"w":890,"resize":"fit"},"small":{"h":680,"w":505,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1577,"width":1170,"focus_rects":[{"x":0,"y":264,"w":1170,"h":655},{"x":0,"y":6,"w":1170,"h":1170},{"x":0,"y":0,"w":1170,"h":1334},{"x":0,"y":0,"w":789,"h":1577},{"x":0,"y":0,"w":1170,"h":1577}]},"media_results":{"result":{"media_key":"3_1986116698844299264"}}},{"display_url":"pic.x.com/K1y1BLRbRV","expanded_url":"https://x.com/ebarenholtz/status/1986116712983314542/photo/1","id_str":"1986116698852782080","indices":[302,325],"media_key":"3_1986116698852782080","media_url_https":"https://pbs.twimg.com/media/G5AanV2XcAAOS9N.jpg","type":"photo","url":"https://t.co/K1y1BLRbRV","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1457,"w":1170,"resize":"fit"},"medium":{"h":1200,"w":964,"resize":"fit"},"small":{"h":680,"w":546,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1457,"width":1170,"focus_rects":[{"x":0,"y":802,"w":1170,"h":655},{"x":0,"y":287,"w":1170,"h":1170},{"x":0,"y":123,"w":1170,"h":1334},{"x":400,"y":0,"w":729,"h":1457},{"x":0,"y":0,"w":1170,"h":1457}]},"media_results":{"result":{"media_key":"3_1986116698852782080"}}}]},"favorited":false,"in_reply_to_screen_name":"GaryMarcus","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1986116712983314542","view_count":1088,"bookmark_count":3,"created_at":1762362099000,"favorite_count":9,"quote_count":0,"reply_count":7,"retweet_count":0,"user_id_str":"99850011","conversation_id_str":"1986094740002283675","full_text":"Just tried a bunch of these on GPT5 and it nailed them and also intuited exactly what I was trying to do in terms of the distinction between fact and belief. Most likely the models they tested are older and weren’t exposed to enough data concerning these distinctions. It’s not about a deep inherent ability in humans to distinguish them either. It’s also just context sensitivity and the right training data. 
It’s all autoregression in us and them!\n\nGo try yourself","in_reply_to_user_id_str":"232294292","in_reply_to_status_id_str":"1986094740002283675","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-07","value":0,"startTime":1762387200000,"endTime":1762473600000,"tweets":[]},{"label":"2025-11-08","value":98,"startTime":1762473600000,"endTime":1762560000000,"tweets":[{"bookmarked":false,"display_text_range":[10,177],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1433898842865520644","name":"Spencer Baggins","screen_name":"bigaiguy","indices":[0,9]}]},"favorited":false,"in_reply_to_screen_name":"bigaiguy","lang":"en","retweeted":false,"fact_check":null,"id":"1986817152897225116","view_count":98,"bookmark_count":0,"created_at":1762529097000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"99850011","conversation_id_str":"1986419671177077120","full_text":"@bigaiguy The takeaway isn’t that AI labs failed to build world models. It’s that world models aren’t required for linguistic (and likely other) intelligence in the first place.","in_reply_to_user_id_str":"1433898842865520644","in_reply_to_status_id_str":"1986419671177077120","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-09","value":0,"startTime":1762560000000,"endTime":1762646400000,"tweets":[]},{"label":"2025-11-10","value":0,"startTime":1762646400000,"endTime":1762732800000,"tweets":[]},{"label":"2025-11-11","value":0,"startTime":1762732800000,"endTime":1762819200000,"tweets":[]},{"label":"2025-11-12","value":66199,"startTime":1762819200000,"endTime":1762905600000,"tweets":[{"bookmarked":false,"display_text_range":[0,276],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1988224995533685074","view_count":66199,"bookmark_count":833,"created_at":1762864753000,"favorite_count":999,"quote_count":39,"reply_count":189,"retweet_count":144,"user_id_str":"99850011","conversation_id_str":"1988224995533685074","full_text":"People still don’t seem to grasp how insane the structure of language revealed by LLMs really is.\n\nAll structured sequences fall into one of three categories:\n1.Those generated by external rules (like chess, Go, or Fibonacci).\n2.Those generated by external processes (like DNA replication, weather systems, or the stock market).\n3.Those that are self-contained, whose only rule is to continue according to their own structure.\n\nLanguage is the only known example of the third kind that does anything.\n\nIn fact, it does everything.\n\nTrain a model only to predict the next word, and you get the full expressive range of human speech: reasoning, dialogue, humor. There are no rules to learn outside the structure of the corpus itself. Language’s generative law is fully “immanent”—its cause and continuation are one and the same. To learn language is simply to be able to continue it; the rule of language is its own continuation.\n\nFrom this we can conclude three things:\n\n1)You don’t need an innate or any external grammar or world model; the corpus already contains its own generative structure. Chomsky was wrong.\n2) Language is the only self-contained system that produces coherent, functional output.\n3) This forces the conclusion that humans generate language the same way. 
To suggest there’s an external rule system that LLMs just happen to duplicate perfectly is absurd; the simplest and only coherent explanation is that the generative structure they capture is the structure of human language itself.\n\nLLMs didn’t just learn patterns. They revealed what language has always been: an immanent generative system, singular among all possible ones, and powerful enough to align minds and build civilization.\n\nWtf.","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-13","value":0,"startTime":1762905600000,"endTime":1762992000000,"tweets":[]},{"label":"2025-11-14","value":3843,"startTime":1762992000000,"endTime":1763078400000,"tweets":[{"bookmarked":false,"display_text_range":[0,279],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1988767530425725423","view_count":3843,"bookmark_count":48,"created_at":1762994103000,"favorite_count":73,"quote_count":0,"reply_count":30,"retweet_count":5,"user_id_str":"99850011","conversation_id_str":"1988767530425725423","full_text":"The internally driven nature of LLMs blows up the very idea of linguistic \"meaning.\" These models learn to generate based on the relations between tokens alone, without any of the sensory data we normally associate with words. This is shocking. To the model, language is a highly structured stream of meaningless squiggles; no reference, no world model, no 'grounding' in anything but the relations between the tokens themselves. But somehow, this is sufficient to maintain linguistic competency. \n\nWhat about us? Our language seems to have meaning. When someone says \"imagine a red balloon,\" you can see it in your head. And this meaning has utility. If I say, \"grab that red book on the shelf,\" you may actually grab it. LLMs don't have any of this. All they have is text. No exposure to anything the text might refer to.\n\nDoes this suggest that our language is fundamentally different than LLMs'? Not necessarily. Instead, meaning and utility may simply consist of a constellation of processes that interact with, but are computationally distinct from, language. When I say \"grab that red book on the shelf,\" the linguistic prompt may generate not just additional language (maybe 'sure' or 'nah' or 'which shelf?’') but also behavior: looking for the book, walking over to the shelf, etc.. It may also generate \"preparatory\" images of the book or motor movements of reaching for it. Just like a linguistic prompt can engender many possible linguistic continuations, so too can it lead to many other extra-linguistic ones. The words simply influence these behaviors just as they can influence the choice of next words.\n\nOf course, the causal arrow can go in the opposite direction as well: if you see two red books and ask \"which one do you want?\" this is the visual stimulus 'prompting' the linguistic system. But, critically, this has nothing to do with the internal process of language itself. The inputs may be externally determined by the visual or other stimulus but afterward is just autoregressive next token generation, based on the internal structure of the generative engine. This is no different than an LLM being prompted by its user.\n\nTogether, these cross-modal generations constitute what we would call the 'meaning' of the words. 
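The per-post figures above come from the page's embedded engagement export, which stores each metric (retweets, likes, views) as daily buckets of the form {"label", "value", "tweets": [...]}, with every tweet object repeating its own view_count, favorite_count, retweet_count, reply_count, and bookmark_count. A minimal sketch of collapsing such buckets into one row per post; the two buckets below are copied (abridged) from the export so the snippet runs on its own:

```python
# Collapse daily engagement buckets into one line per post.
# Bucket shape and field names mirror the page's embedded JSON export;
# only two (abridged) buckets from the "nviews" series are inlined here.
nviews = [
    {"label": "2025-11-12", "value": 66199, "tweets": [
        {"id": "1988224995533685074", "view_count": 66199, "favorite_count": 999,
         "retweet_count": 144, "reply_count": 189, "bookmark_count": 833},
    ]},
    {"label": "2025-11-14", "value": 3843, "tweets": [
        {"id": "1988767530425725423", "view_count": 3843, "favorite_count": 73,
         "retweet_count": 5, "reply_count": 30, "bookmark_count": 48},
    ]},
]

for bucket in nviews:
    for tw in bucket["tweets"]:
        print(f'{bucket["label"]}  {tw["view_count"]:>6} views  '
              f'{tw["favorite_count"]:>4} likes  {tw["retweet_count"]:>3} retweets  '
              f'{tw["reply_count"]:>3} replies  {tw["bookmark_count"]:>3} bookmarks')
```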
The interactions panel covers a trailing 14-day window and records a single interaction partner: Gary Marcus (@GaryMarcus, roughly 198,500 followers), with one reply during that period.
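As a quick back-of-the-envelope read on resonance rather than reach, the like-through rate (likes divided by views) follows directly from the counts listed above; despite a roughly 17x gap in views, the two long-form posts land within half a percentage point of each other:

```python
# Like-through rate (likes / views) for the two long-form posts listed above.
posts = {
    "2025-11-12 (structure of language)": (999, 66_199),
    "2025-11-14 (linguistic 'meaning')": (73, 3_843),
}
for name, (likes, views) in posts.items():
    print(f"{name}: {likes / views:.2%}")  # ≈1.51% and ≈1.90%
```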