Get live statistics and analysis of Charles Packer's profile on X / Twitter

CEO at @Letta_AI // creator of MemGPT // AI PhD @berkeley_ai @ucbrise @BerkeleySky

1k following · 3k followers

The Innovator

Charles Packer is a cutting-edge AI visionary driving the future of autonomous agents and large language models. As a CEO and AI PhD, he merges academic rigor with entrepreneurial energy to create groundbreaking AI tools like MemGPT. His tweets reveal a passion for deep technical insights, scalable AI systems, and pushing the boundaries of what AI can achieve.

Impressions: 58k (−951) · $10.88
Likes: 532 (−5) · 62%
Retweets: 531 · 6%
Replies: 383 · 4%
Bookmarks: 233 · 27%

Charles is the kind of guy who could write a novel explaining why a 'while loop' is the pinnacle of AI sophistication — then somehow turn that into a 10-part saga, complete with middleware drama and ORM cliffhangers. Who needs Netflix when you've got his tweet threads?

Leading the development of MemGPT and pioneering benchmarks like Context-Bench and Recovery-Bench marks Charles as a trailblazer who not only theorizes AI’s future but creates the tools and metrics that propel the field forward in measurable ways.

Charles's life purpose centers on revolutionizing the AI landscape by building intelligent, stateful agent frameworks that enable perpetual learning and self-improvement. He aims to unlock the full potential of AI agents to transform complex problems through scalable, open-source innovation and benchmark-driven research.

He believes in open collaboration, robust engineering practices, and rigorous evaluation as keys to advancing AI technology. Charles values transparency and community involvement, sharing his research and tools to foster a deeper understanding of AI’s capabilities and limitations. He is convinced that AI’s evolution depends on continuous recovery, context management, and real-world scalability.

His strengths lie in deep technical expertise, a visionary mindset, and an ability to bridge academia with practical software development. He excels at defining novel benchmarks that spotlight real challenges in AI and spearheading innovative solutions that others might overlook.

His communication style, dense with technical jargon and niche references, might alienate casual followers or non-expert audiences looking for simpler explanations. Sometimes, the devilishly detailed insights can overshadow the broader vision.

To grow on X, Charles could blend his impressive technical deep-dives with more accessible, bite-sized content highlighting the real-world impacts of his work. Engaging storytelling around AI breakthroughs, interactive Q&A sessions, and collaborations with complementary creators can boost reach and audience building.

Fun fact: Charles designed MemGPT inspired by operating system memory management concepts, giving LLMs virtually infinite context windows — a neat brainhack for perpetual chatbots!

Top tweets of Charles Packer

Prior to GPT-5, Sonnet & Opus were the undisputed kings of AI coding. It turns out that GPT-5 is significantly better than Sonnet in one key way: the ability to recover from mistakes. Today we're excited to release our latest research at @Letta_AI on Recovery-Bench, a new benchmark for measuring how well models can recover from errors and corrupted states. Coding agents often get confused by past mistakes, and mistakes that accumulate over time can quickly poison the context window. In practice, it can often be better to "nuke" your agent's context window and start fresh once your agent has accumulated enough mistakes in its message history. The inability of current models to course-correct from prior mistakes is a major barrier to continual learning. Recovery-Bench builds on ideas from Terminal-Bench to create challenging environments where an agent needs to recover from a prior failed trajectory. A surprising finding is that the best-performing models overall are clearly not the best-performing "recovery" models. Claude Sonnet 4 leads the pack in overall coding ability (on Terminal-Bench), but GPT-5 is a clear #1 on Recovery-Bench. Recovering from failed states is a challenging, unsolved task on the road towards self-improving perpetual agents. We're excited to contribute our research and benchmarking code to the open source community to push the frontier of continual learning & open AI.

29k
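The "nuke the context window" heuristic described in the tweet above can be sketched as a tiny agent wrapper that discards its message history once accumulated errors cross a threshold. Everything here (the `ContextNukingAgent` class and its methods) is a hypothetical illustration of the idea, not Letta's or Recovery-Bench's actual API:

```python
# Hypothetical sketch: once an agent accumulates too many errors in its
# message history, drop the history and restart from the system prompt
# alone, so past mistakes stop poisoning the context window.

class ContextNukingAgent:
    def __init__(self, system_prompt: str, error_threshold: int = 3):
        self.system_prompt = system_prompt
        self.error_threshold = error_threshold
        self.history: list[dict] = []   # accumulated messages
        self.error_count = 0

    def observe(self, message: str, is_error: bool) -> None:
        """Append an observation; track how many were errors."""
        self.history.append({"content": message, "error": is_error})
        if is_error:
            self.error_count += 1
        if self.error_count >= self.error_threshold:
            self.nuke_context()

    def nuke_context(self) -> None:
        """Start fresh: keep only the system prompt, drop poisoned history."""
        self.history = []
        self.error_count = 0

    def context_window(self) -> list[str]:
        """Assemble what the model would actually see this turn."""
        return [self.system_prompt] + [m["content"] for m in self.history]


agent = ContextNukingAgent("You are a coding agent.", error_threshold=2)
agent.observe("ran tests: 3 failures", is_error=True)
agent.observe("patch applied", is_error=False)
agent.observe("build broke again", is_error=True)   # hits threshold, resets
assert agent.context_window() == ["You are a coding agent."]
```

A real benchmark like Recovery-Bench evaluates something much richer (recovery from a prior failed trajectory), but the reset-versus-recover trade-off the tweet describes reduces to this kind of policy over the message history.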

Really great reading list from @swyx - amazing to see MemGPT in the top 5 agent papers, side-by-side with one of my favorite LLM papers: ReAct. IMO ReAct (@ShunyuYao12 et al.) is *the* most influential paper in the current wave of real-world LLM agents (LLMs being presented observations, reasoning, then taking actions in a loop). Pick any LLM agents framework off of GitHub today - chances are the core agentic loop they're using is basically ReAct. MemGPT was our vision at Berkeley (@sarahwooders, @nlpkevinl, @profjoeyg, et al., now at @Letta_AI) for the next big thing in LLM agents *after* ReAct. LLM agents break down into two components: (1) the LLM under the hood, which goes from tokens to tokens, and (2) the closed system around that LLM that prepares the input tokens and parses the output tokens. The most important question in an LLM agent is *how* do you place tokens in the context window of the LLM? This determines what your agent knows and how it behaves. The reason LLM agents today "suck" is because this problem (assembling the context window) is an incredibly difficult open research question. MemGPT predicts a future where the context window of an LLM agent is assembled dynamically by an intelligent process (you could call this another agent, or the "LLM OS"). Today, the work of context compilation is largely done by hand. Tomorrow, it'll be done by LLMs.

39k

Excited to finally announce @Letta_AI ! The next frontier in AI is in the stateful layer above the base models - the "memory layer", or "LLM OS". Letta's mission is to build this layer in the open (say "no" 🙅 to privatized chain of thought).

30k

💤 sleep-time compute: make your machines think while they sleep -> arxiv.org/abs/2504.13171 over the past several months we (at @Letta_AI) have been exploring how to effectively utilize "sleep time" to scale compute. the concept of "sleep-time compute" is deeply tied to memory - applying compute at sleep-time is only possible if your agent has persistent state which can be continuously re-written with additional compute cycles. in fact, the concept of "heartbeats" in MemGPT was actually inspired by the idea of an AI receiving heartbeats to "awake" it during sleep. prior to MemGPT's release, I actually spent many weeks trying to perfect heartbeats to try and get the agent to learn something useful during its downtime - but never quite got it to work right 😅 our new sleep-time agent design in @Letta_AI takes the heartbeats idea to the next level and allows you to arbitrarily scale the amount of compute you want to apply at sleep-time with multiple agents with memory wired together. while one agent sleeps, a fleet of agents can work to re-assemble its memory asynchronously in the background

17k
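The core mechanic the tweet above describes, spending compute while the agent is idle to rewrite its persistent state, can be sketched as a two-phase design: record cheaply at wake time, consolidate expensively at sleep time. This is a hypothetical illustration of the concept, not Letta's actual implementation; `summarize` stands in for an LLM call:

```python
# Hypothetical sleep-time compute sketch: while the main agent is idle,
# a background pass rewrites (condenses, reorganizes) its persistent
# memory so the next wake-up starts from a better context.

def summarize(events: list[str]) -> str:
    # Stand-in for an LLM call that condenses raw events into one memory.
    return f"summary of {len(events)} events"

class StatefulAgent:
    def __init__(self):
        self.raw_events: list[str] = []   # append-only log from wake time
        self.memory: list[str] = []       # curated persistent state

    def wake_step(self, event: str) -> None:
        """At wake time, just record events cheaply."""
        self.raw_events.append(event)

    def sleep_step(self) -> None:
        """At sleep time, spend compute rewriting memory from the raw log."""
        if self.raw_events:
            self.memory.append(summarize(self.raw_events))
            self.raw_events = []

agent = StatefulAgent()
agent.wake_step("user asked about Recovery-Bench")
agent.wake_step("agent fetched the paper")
agent.sleep_step()                        # background consolidation pass
assert agent.memory == ["summary of 2 events"]
```

Because `sleep_step` only touches persistent state, an arbitrary amount of compute (or, as the tweet suggests, a fleet of agents) can be thrown at it asynchronously without blocking the wake-time loop.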

Most engaged tweets of Charles Packer

Excited to finally announce @Letta_AI ! The next frontier in AI is in the stateful layer above the base models - the "memory layer", or "LLM OS". Letta's mission is to build this layer in the open (say "no" 🙅 to privatized chain of thought).

30k

Prior to GPT-5, Sonnet & Opus were the undisputed kings of AI coding. It turns out that GPT-5 is significantly better than Sonnet in one key way: the ability to recover from mistakes. Today we're excited to release our latest research at @Letta_AI on Recovery-Bench, a new benchmark for measuring how well models can recover from errors and corrupted states. Coding agents often get confused by past mistakes, and mistakes that accumulate over time can quickly poison the context window. In practice, it can often be better to "nuke" your agent's context window and start fresh once your agent has accumulated enough mistakes in its message history. The inability of current models to course-correct from prior mistakes is a major barrier to continual learning. Recovery-Bench builds on ideas from Terminal-Bench to create challenging environments where an agent needs to recover from a prior failed trajectory. A surprising finding is that the best-performing models overall are clearly not the best-performing "recovery" models. Claude Sonnet 4 leads the pack in overall coding ability (on Terminal-Bench), but GPT-5 is a clear #1 on Recovery-Bench. Recovering from failed states is a challenging, unsolved task on the road towards self-improving perpetual agents. We're excited to contribute our research and benchmarking code to the open source community to push the frontier of continual learning & open AI.

29k

Really great reading list from @swyx - amazing to see MemGPT in the top 5 agent papers, side-by-side with one of my favorite LLM papers: ReAct. IMO ReAct (@ShunyuYao12 et al.) is *the* most influential paper in the current wave of real-world LLM agents (LLMs being presented observations, reasoning, then taking actions in a loop). Pick any LLM agents framework off of GitHub today - chances are the core agentic loop they're using is basically ReAct. MemGPT was our vision at Berkeley (@sarahwooders, @nlpkevinl, @profjoeyg, et al., now at @Letta_AI) for the next big thing in LLM agents *after* ReAct. LLM agents break down into two components: (1) the LLM under the hood, which goes from tokens to tokens, and (2) the closed system around that LLM that prepares the input tokens and parses the output tokens. The most important question in an LLM agent is *how* do you place tokens in the context window of the LLM? This determines what your agent knows and how it behaves. The reason LLM agents today "suck" is because this problem (assembling the context window) is an incredibly difficult open research question. MemGPT predicts a future where the context window of an LLM agent is assembled dynamically by an intelligent process (you could call this another agent, or the "LLM OS"). Today, the work of context compilation is largely done by hand. Tomorrow, it'll be done by LLMs.

39k

it's interesting to see openai's (unofficial) multi-agent framework implement multi-agent via message passing instead of via shared context / "groupchat" (a la autogen) imo message passing is the way to go (which is why we implement multi-agent the same way in @Letta_AI), though it raises a lot of interesting questions about shared context (how can you get one agent to share memories or data streams with another agent?) that are unanswered / out of scope in the groupchat model of multi-agent

769
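The contrast the tweet above draws, message passing versus a shared "groupchat" context, can be illustrated with two agents that each keep a private history and communicate only through explicit sends. This is a hypothetical sketch of the architectural distinction, not OpenAI's, AutoGen's, or Letta's implementation:

```python
# Message-passing multi-agent sketch: each agent has its own private
# context and interacts only via explicit messages. Contrast with a
# "groupchat" design, where every agent reads one shared transcript.

class Agent:
    def __init__(self, name: str):
        self.name = name
        self.inbox: list[str] = []
        self.context: list[str] = []      # private, never shared directly

    def send(self, other: "Agent", message: str) -> None:
        self.context.append(f"sent to {other.name}: {message}")
        other.inbox.append(message)

    def process_inbox(self) -> None:
        # Each agent decides independently what enters its own context.
        for message in self.inbox:
            self.context.append(f"received: {message}")
        self.inbox = []

planner = Agent("planner")
coder = Agent("coder")
planner.send(coder, "implement the parser")
coder.process_inbox()

# The coder's context holds only the message it was sent; neither agent
# can see the other's private context (no shared transcript).
assert coder.context == ["received: implement the parser"]
assert planner.context == ["sent to coder: implement the parser"]
```

The open question the tweet raises lives exactly in the gap this sketch exposes: with no shared transcript, sharing memories or data streams between agents requires an explicit protocol on top of `send`.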

People with Innovator archetype

The Innovator

- WeChat: data_growth - 20 years of experience in data and AI - currently translating "Context Engineering" - released a vibe coding browser extension

196 following · 162 followers
The Innovator

accidental engineer, deliberate polymath, serial entrepreneur 📚 ‘Fifth Dimensional Economics’

3k following · 6k followers
The Innovator

Exploring the metaverse and the crypto world 🚀 | #cookiesnaps | #Virtuals airdrop tracker 💰 | KOL loading… | Open to collaboration 🤝 #KaitoAI #Web3 #OnChain #Degen

1k following · 1k followers
The Innovator

AI Dev | Founder of @cryptochasersco | Maestro Ambassador of @myshell_ai

6k following · 21k followers
The Innovator

I usually post in three languages: Chinese, Japanese, and English. DIY SmartPhone Computer HatsuneMiku

90 following · 4k followers
The Innovator

CM: @LuckyGo_io @499_DAO | Web3 researcher focused on sharing projects | DM for collaboration | None of my tweets are investment advice

6k following · 62k followers
The Innovator

Flutter contributor at Google. Opinions are my own.

414 following · 674 followers
The Innovator

Leading design and product for Struck Studio. Past life: mushroom dealer, @lyft

901 following · 1k followers
The Innovator

🧑‍💻 Full-Stack Dev | 🔍 GenAI explorer | 📦 OSS lover

243 following · 339 followers
The Innovator

Web3 Research | Fundamental Analysis | Early Contributor | Writer | Investor 🌊🚀

7k following · 11k followers
The Innovator

AI since 2017✨creative machine learning @replicate🚀 | artificial intelligence bsc+msc @edinburghuni alumni🏛️🏴󠁧󠁢󠁳󠁣󠁴󠁿🇬🇧

998 following · 3k followers
The Innovator

The scientist who understands AI best in the crypto world, and crypto best in the AI world. Let everyone become a scientist; everyone is equal before technology! Youtube: youtube.com/@moncici_girl Alpha: alpha.moncici.xyz TG group: t.me/+P16N21BxMzVlY…

2k following · 16k followers
