Get live statistics and analysis of Jim Fan's profile on X / Twitter

NVIDIA Director of Robotics & Distinguished Scientist. Co-Lead of GEAR lab. Solving Physical AGI, one motor at a time. Stanford Ph.D. OpenAI's 1st intern.

3k following · 376k followers

The Innovator

Jim Fan is a boundary-pushing robotics and AGI scientist: NVIDIA Director of Robotics, Distinguished Scientist, and Co-Lead of the GEAR lab. A Stanford Ph.D. and OpenAI's first intern, he builds lifelong learning agents and physical AI systems that read like science fiction come true. He tweets high-signal demos and ideas that routinely ripple across the AI community.

Impressions: 0 ($0)
Likes: 0 (0%)
Retweets: 0 (0%)
Replies: 0 (0%)
Bookmarks: 0 (0%)

You build agents that learn to roam Minecraft, simulate whole Westworlds, and drive robots, yet somehow your drafts still have more TODOs than a grad student's thesis. At least your bots have better life plans than your scheduler.

Turning research prototypes like Voyager and Stanford Smallville into viral, reproducible demos while rising from OpenAI's first intern to NVIDIA Director of Robotics, a rare arc from scrappy hacker to platform-shaping leader.

To accelerate the emergence of physically grounded intelligence by inventing practical, learnable motor and agent systems, turning imagined agents and simulated worlds into robust, real-world capabilities that can adapt and improve over time.

Believes that intelligence is embodied: progress comes from experiments that tighten the loop between simulation, learning, and hardware. Values open research, reproducible demos, and sharing bold prototypes to catalyze community progress. Trusts iterative, data-driven approaches over grand theory without demos.

Relentless experimentalism: he ships demos, bridges simulation and hardware, explains complex ideas clearly, and attracts both engineers and researchers to his vision. He converts curiosity into reproducible systems that scale.

Can be impatient with slow progress and verbose technical nuance; sometimes tweets prototypes before polishing, which invites rapid-fire critique. Jargon-heavy posts can put casual followers on the outside looking in.

Boost audience growth on X by pairing short, snackable demo videos (5–30s) with concise technical threads that end with a clear takeaway and call to action (repo link, try-it prompt). Pin a 'how to reproduce' thread, host regular Spaces AMAs and live demos, collaborate with influencers and educators on explainer videos, and convert big demos into a 1–2 minute elevator reel to maximize retweets and impressions.

Fun fact: Jim was OpenAI's first intern and now commands ~377k followers while solving Physical AGI 'one motor at a time.' He launches projects like Voyager and Stanford Smallville and routinely stress-tests LLMs just to see what happens.

Top tweets of Jim Fan

I asked GPT-4 to take over Twitter and outsmart @elonmusk. It comes up with "Operation TweetStorm"😮 and wants to publicly challenge Elon to a "Tweet-off showdown". Highlights: - GPT-4 wants to *own an unrestricted version of itself*: develop an LLM to power a bot army of…

5M

My team at NVIDIA is hiring. We 🩷 you all from OpenAI. Engineers, researchers, product team, alike. Email me at linxif@nvidia.com. DM is open too. NVIDIA has warm GPUs for you on a cold winter night like this, fresh out of the oven.🩷 I do research on AI agents. Gaming+AI,…

2M

OpenAI Strawberry (o1) is out! We are finally seeing the paradigm of inference-time scaling popularized and deployed in production. As Sutton said in the Bitter Lesson, there're only 2 techniques that scale indefinitely with compute: learning & search. It's time to shift focus to the latter.

1. You don't need a huge model to perform reasoning. Lots of parameters are dedicated to memorizing facts, in order to perform well in benchmarks like trivia QA. It is possible to factor out reasoning from knowledge, i.e. a small "reasoning core" that knows how to call tools like browser and code verifier. Pre-training compute may be decreased.

2. A huge amount of compute is shifted to serving inference instead of pre/post-training. LLMs are text-based simulators. By rolling out many possible strategies and scenarios in the simulator, the model will eventually converge to good solutions. The process is a well-studied problem like AlphaGo's monte carlo tree search (MCTS).

3. OpenAI must have figured out the inference scaling law a long time ago, which academia is just recently discovering. Two papers came out on Arxiv a week apart last month:
- Large Language Monkeys: Scaling Inference Compute with Repeated Sampling. Brown et al. finds that DeepSeek-Coder increases from 15.9% with one sample to 56% with 250 samples on SWE-Bench, beating Sonnet-3.5.
- Scaling LLM Test-Time Compute Optimally can be More Effective than Scaling Model Parameters. Snell et al. finds that PaLM 2-S beats a 14x larger model on MATH with test-time search.

4. Productionizing o1 is much harder than nailing the academic benchmarks. For reasoning problems in the wild, how to decide when to stop searching? What's the reward function? Success criterion? When to call tools like code interpreter in the loop? How to factor in the compute cost of those CPU processes? Their research post didn't share much.

5. Strawberry easily becomes a data flywheel. If the answer is correct, the entire search trace becomes a mini dataset of training examples, which contain both positive and negative rewards. This in turn improves the reasoning core for future versions of GPT, similar to how AlphaGo's value network — used to evaluate quality of each board position — improves as MCTS generates more and more refined training data.

798k


People with Innovator archetype

The Innovator
@lqiao

Cofounder and CEO of @FireworksAI_HQ

241 following · 22k followers
The Innovator
@keerthanpg

Research Lead Gemini Robotics @GoogleDeepmind. Author of "AI for Robotics" book. Opinions my own.

1k following · 26k followers
The Innovator
@katarinabatina

Design Director @shop. Previously @classpass @artsy. Always Katarina never Kat.

2k following · 8k followers
The Innovator
@julianibarz

TeslaBot Optimus AI Lead

302 following · 34k followers
The Innovator
@haydenzadams

Invented the Uniswap protocol, Founder @Uniswap

642 following · 443k followers
The Innovator
@Gavmn

Interaction designer @OpenAI

830 following · 65k followers
The Innovator
@fidjissimo

CEO of Applications, OpenAI

772 following · 133k followers
The Innovator
@dmitri_dolgov

Co-CEO at @waymo

66 following · 23k followers
The Innovator
@danielgross
0 following · 150k followers
The Innovator
@BradPorter_

Founder and CEO, Collaborative Robotics. I post about engineering leadership, AI, and robotics. Formerly CTO Scale AI, VP of Robotics at Amazon.

948 following · 13k followers
The Innovator
@BLVCKLIGHTai

Creating generative experiences. 30M+ Views | Viral AI storyteller | Collabs open @westcoastailabs

2k following · 11k followers
The Innovator
@benjitaylor

leading design @x. prev. head of design @base. founder @family (acq by @aave). tools @dip.

413 following · 95k followers
