Get live statistics and analysis of Arun Rao's profile on X / Twitter

Builder of large-scale ML systems; adjunct prof @ucla; ex quant derivatives trader & startup founder. Tweets on AI, tech, science, & econ.
The Analyst
Arun Rao is a sharp-minded builder and thinker, blending expertise in AI, tech, and economics to dissect complex systems and policies with precision. As an adjunct professor and former quant trader, he navigates both academia and the fast-paced startup world, sharing deep insights on AI's future impact. His tweets serve as a beacon for those wanting to understand the intricate dance between innovation, policy, and industry trends.
Top users who interacted with Arun Rao over the last 14 days
Ex - Product leader @ Meta, Yelp, Yahoo. Share playbooks to improve leadership skills and cross career chasms: Influence, Emotional Intelligence, …
grandmaster. senior anon LLM researcher. ex-nerd printing money. what the fuck.
You have been eaten by a grue
🤖 Staff Data Scientist | 💻 Graphs, RL, Gen AI, LLM | 🎓 UMass Amherst + BITSian | Just neural embeddings floating in latent space, actively learning
FR/US/GB AI/ML Person, Director of Research at @GoogleDeepMind, Honorary Professor at @UCL_DARK, @ELLISforEurope Fellow. All posts are personal.
Building @GenReasoning. Previously lots of other things like: reasoning lead Meta AI, Llama 3/2, Galactica, Papers with Code.
we rollin', we ratin', pa-trollin' and tryna catch you writin' dirty tryna catch you writin' dirty, tryna catch you writin' dirty, tryna catch you writin' dirty
built over 40 apps. gave over 30 talks. debated EU commission. organised communities. built & crashed a startup. fought in a cage. won medals. llm since ada 💥
I give you $1M and make you grind for 12 weeks straight @HF0
Canadian settled in the UK. AI Consultant, helping businesses get value from AI.
Writes about economics, posts about rabbits. For serious opinions/analysis, read my blog: noahpinion.blog
Views are my own Partnerships @mHUBchicago; Chicago @aicollective 35 time Groomsmen, IU, Prev: @gocharlieai @uberatg @softbank Chicago Propaganda Factory 🏭
Founder of 1st10 🌱 Building early engineering teams in SF/NY | xBinc Founder (built Stripe, Ripple, Airbnb, etc), acquired by Robinhood in ‘21. Dad.
harvard grad, founder @moccet
Applied ML — All opinions my own
co-founder anthromind.com | ex @google AI | building next-gen scalable oversight for AI systems | pratikkarki.substack.com
I tweet about distribution | building ligoai.com | 4x founder • Generalist
Chief scientist at Redwood Research (@redwood_ai), focused on technical AI safety research to reduce risks from rogue AIs
Arun tweets so much, he could single-handedly keep Twitter’s servers running—if only someone could retweet more than a fraction of his brain-dumps before they vanish into the void. It’s like he’s trying to build an AI model directly from his timeline, but forgot his followers might need a break to breathe.
Arun’s biggest win so far is becoming a trusted, go-to analyst whose real-time critiques of AI policy (like the SB 1047 bill) have sparked crucial conversations among academia, startups, and lawmakers, helping to safeguard the future of open source AI innovation in California.
Arun’s life purpose revolves around advancing knowledge in AI and technology while ensuring the ecosystem remains open, innovative, and resilient. He aims to influence policy and industry decisions to protect and foster open source AI development and technology growth. Ultimately, he strives to bridge academia, industry, and regulatory landscapes to build a smarter, more connected future.
He believes that innovation thrives at the intersection of open collaboration and rigorous analysis. Transparency, intellectual curiosity, and the free flow of ideas are fundamental; restrictive policies that stifle open source development threaten technological progress and societal benefit. Arun values experiential learning, close-knit communities, and geographic clusters as vital engines fueling rapid AI advancements.
Arun’s strengths include his ability to synthesize complex technical, economic, and policy information into sharply reasoned public commentary. His deep domain knowledge, combined with a prolific tweet output, positions him as a highly trusted and influential voice in the AI community. He excels at identifying how legislation impacts technology ecosystems in real time.
His prolific tweeting—clocking nearly 15,000 tweets—might dilute focus and overwhelm followers, potentially causing engagement fatigue. His strong, often critical stance on policies could alienate more moderate or opposing voices, limiting broader coalition-building. Additionally, excessive technical detail might not always resonate with audiences outside his core professional sphere.
To grow his audience on X, Arun should leverage Twitter Spaces or regular AMA sessions to create interactive discussions, bringing his critical insights to a wider, more engaged audience. Simplifying some technical jargon and offering concise, accessible threads can attract non-expert followers interested in AI and policy. Collaborations or shout-outs with other influencers in tech and academia could amplify his reach.
Fun fact: Arun’s highly detailed book recommendations reveal a deep intellectual curiosity that spans evolutionary biology, architectural design, and the history of scientific discovery — proving that cutting-edge AI expertise can be beautifully interdisciplinary.
Top tweets of Arun Rao
Having carefully read the draft of SB 1047, I expect this bill will end open source AI/LLM development in California and possibly the US (given how many AI startups exist in CA). It’s profoundly misguided and has onerous, ambiguous, and basically impossible-to-comply-with sections. At best, 2-3 large closed AI systems may be able to comply and survive, and the entire OSS AI industry would shut down. @garrytan & @martin_casado - are the YC and a16z policy folk pushing back on this?
The supermajority of people who actually build AI in academia and industry think @Scott_Wiener’s SB 1047 bill will destroy open source AI in California and cause most startups to flee the state. Yet he keeps claiming his bill is reasonable and light touch, and ignores all the arguments about how this bill makes compliance impossible and will drive OSS away from CA. I hope the CA Assembly and @GavinNewsom see this and decline to pass this bill.
Professors at Stanford and the UCs have come out against @Scott_Wiener’s SB 1047 - my sense is like 90% of AI academia is against it, which is an overwhelming majority. It’s because they have much to lose if OSS AI goes away. Academia, startups, large companies - they all view this law as profoundly misguided. Thanks Fei-Fei and I hope others speak out too. x.com/m2saxon/status…
Thoughts on US vs China regarding AI capex and models: These are from conversations with some very smart Chinese tech insiders, and I hope @ruima or @mattsheehan88 will add more color from their own separate conversations while in China. Context:
- US hyperscaler capex should hit $3trn by 2029, making it the largest public or private investment in US history (though the railroads may have been a larger slice of GDP 150 years ago). Larry Page, the co-controlling Google shareholder, has said “I am willing to go bankrupt rather than lose this race.” It seems like Zuck and Elon aren’t far behind, and certainly Sama is betting the farm of OpenAI on very aggressive capex deals of about $1 trillion - all to build “data centers full of geniuses” that US companies control.
- The main US revenue sources are: 1) ads ($600bn+), 2) enterprise API sales ($80bn+), 3) consumer subscriptions ($5-20bn). Ads grow at 15-25% a year, and the rest is growing at 100%+ rates - fantastic businesses. So why isn’t China joining this insane race? There are a few reasons, some that I found very surprising.
- China can’t get the best GPUs, and is stuck with H20s, H100s, or older Ascend chips. So building DCs and shells doesn’t make sense, even if China has the power, though they will push to get the Ascend chips to converge. Meanwhile, some companies have foreign entities and larger offshore GPU fleets (Bytedance), but this is less common. China is limited to ~1mm net new H100-equivalent GPU capacity each year (this is what X.AI / Grok added in a single DC).
- Lack of market scale. The Chinese companies with the most global scale, Alibaba and Bytedance, have the best models but can’t amortize this with a revenue or user base like Google and Meta (or increasingly OpenAI and Anthropic). The other large companies like Tencent, Baidu, and Meituan are even further behind, serving a smaller market. The China-only companies are further constrained in training because they have to use most of their GPU fleet for inference during the daytime, when their researchers want to train (vs American companies that have global fleets and can do load balancing and resource management more efficiently).
- Chinese tech executives are hesitant to spend on capex because, even within China, they see the market as much smaller. They can’t charge for consumer subscriptions - Chinese consumers won’t buy. They can’t sell via API and SaaS software, as enterprise customers only take it at negative margins and don’t buy SaaS. Their best way to monetize is via ads, which they can do with highly customized models for each platform’s apps, but any general public or open-weights models are loss leaders.
- The first two reasons (GPU constraints and lack of scale) mean the best Chinese researchers and engineers prefer to work at US companies on frontier systems - and higher pay is definitely an incentive too. China puts out the highest number of top AI scientists in the world today, due to sheer size and its superior high school to undergraduate programs, but the very top want to leave.
- So what’s the Chinese strategy? A cost-effective fast follow. Try to be only 3-5 months behind the frontier, and use distillation and other techniques from US frontier models to not fall behind, while optimizing training runs to be extremely GPU efficient. DeepSeek openly did this, and it’s likely all their labs do it (but US labs don’t!).
It’s a big problem for the Chinese labs that the top US models are never released and are only distilled into the consumer / API versions that most customers see (for many reasons I won’t go into, e.g. the GPT-5 checkpoint you use is inferior to other teacher model checkpoints OpenAI has). At some point, the Chinese companies (esp. Bytedance and Alibaba) would like to come up with algorithmic improvements to try and take the lead (e.g. RSI), but right now the goal is to not fall too far behind.
This is an open secret - Only Google and Meta can afford runs for the next 2 generations of models. OpenAI will have to sell to Microsoft, and Anthropic to Amazon or Apple (unclear if those 2 giants have the will to compete). X.AI can probably fund this with Elon or Tesla’s capital. The big caveat is if we find scaling doesn’t matter, and most of the improvements come from the algo side, which seems unlikely. I expect data and compute scaling and algos to each roughly contribute a third to the next gen model capabilities.
Great interview about @OpenAI Deep Research with @isafulf and @josh_tobin_ - this is probably the first great agentic LRM and is a must use tool for any knowledge worker (and over time, anyone doing searches). Glad to see Gemini and Perplexity have competing tools and I’ve been using all three. youtu.be/bNEvJYzoa8A?si…
Time to build a small ADU in Berkeley:
- Construction: 6.5 months
- Dealing with the city bureaucracy to get multiple permits, inspections, etc., many of which are not documented anywhere and are based on the whims of inspectors: 23-24 months
CA state law requires that all the city permitting work for an ADU take 3 months, but I’ve had numerous city workers tell me “it doesn’t apply to them” (I consulted two lawyers who said I had a good case, but it would take longer to sue than just do what they asked). Basically, when you get lawless city departments like this, the only way to deal with it is to pass state laws that not only take timelines away from cities, but also privatize inspections and put everything in code in a state database. @adenaishii - Berkeley is still quite broken and needs to simplify and standardize its planning and building departments. Housing is slow to build and expensive for a reason - the City Council has let too many requirements and checkpoints pile up over time, and you get a bureaucracy that makes building very hard and only affordable for a small minority.
People with Analyst archetype
Unity game developer Project research Mad Lads #162
NFTs collector!
onchain explorer | emotional trader
My name is Yuri Kovalenok. I take notes and experiments jurij0001.creator-spring.com tiktok.com/@jurij0001 youtube.com/@YuriKovalenok
Onchain data analyst and valuable contents creator | @campnetworkxyz Community Manager | AI & IP educator | @flyingtulip_ @flipster_io maxi 🇰🇷
Swing Setups 📊 | Low Cap Hunter 💎 | Daily TA 📈 | 5+ Yrs Market Exp 🚀 | Trade w/ me 👉 partner.blofin.com/d/ChartDeck
Box office Analyst💰 Film Critic🎥 DM/Email- panindiareview@gmail.com IG: instagram.com/panindiareview…
Class of '17. Sats stacker. Liquidity reader. Sentiment observer. Cycle whisperer. Will binge GTA6. Follow to make bank.
CEO/Founder @ITC_Crypto @ITC_Stocks @ITC_Macro PhD Engineering youtube.com/@intothecrypto… benjamincowen.com
I observe
Explore Related Archetypes
If you enjoy Analyst profiles, you might also like these personality types:
