Charles Packer is a cutting-edge AI visionary driving the future of autonomous agents and large language models. As a CEO and AI PhD, he merges academic rigor with entrepreneurial energy to create groundbreaking AI tools like MemGPT. His tweets reveal a passion for deep technical insights, scalable AI systems, and pushing the boundaries of what AI can achieve.
Charles is the kind of guy who could write a novel explaining why a 'while loop' is the pinnacle of AI sophistication — then somehow turn that into a 10-part saga, complete with middleware drama and ORM cliffhangers. Who needs Netflix when you've got his tweet threads?
Leading the development of MemGPT and pioneering benchmarks like Context-Bench and Recovery-Bench marks Charles as a trailblazer who not only theorizes AI’s future but creates the tools and metrics that propel the field forward in measurable ways.
Charles's life purpose centers on revolutionizing the AI landscape by building intelligent, stateful agent frameworks that enable perpetual learning and self-improvement. He aims to unlock the full potential of AI agents to transform complex problems through scalable, open-source innovation and benchmark-driven research.
He believes in open collaboration, robust engineering practices, and rigorous evaluation as keys to advancing AI technology. Charles values transparency and community involvement, sharing his research and tools to foster a deeper understanding of AI’s capabilities and limitations. He is convinced that AI’s evolution depends on continuous recovery, context management, and real-world scalability.
His strengths lie in deep technical expertise, a visionary mindset, and an ability to bridge academia with practical software development. He excels at defining novel benchmarks that spotlight real challenges in AI and spearheading innovative solutions that others might overlook.
His communication style, dense with technical jargon and niche references, might alienate casual followers or non-expert audiences looking for simpler explanations. Sometimes, the devilishly detailed insights can overshadow the broader vision.
To grow on X, Charles could blend his impressive technical deep-dives with more accessible, bite-sized content highlighting the real-world impacts of his work. Engaging storytelling around AI breakthroughs, interactive Q&A sessions, and collaborations with complementary creators can boost reach and audience building.
Fun fact: Charles designed MemGPT inspired by operating system memory management concepts, giving LLMs virtually infinite context windows — a neat brainhack for perpetual chatbots!
Prior to GPT-5, Sonnet & Opus were the undisputed kings of AI coding. It turns out GPT-5 is significantly better than Sonnet in one key way: the ability to recover from mistakes.
Today we're excited to release our latest research at @Letta_AI on Recovery-Bench, a new benchmark for measuring how well models can recover from errors and corrupted states.
Coding agents often get confused by past mistakes, and mistakes that accumulate over time can quickly poison the context window. In practice, it can often be better to "nuke" your agent's context window and start fresh once your agent has accumulated enough mistakes in its message history.
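The "nuke and start fresh" heuristic above can be sketched in a few lines. This is a toy illustration, not Letta's or Recovery-Bench's API: the `Agent` class, the error counter, and the threshold are all assumptions made up for the example.

```python
# Hypothetical sketch of the "nuke the context window" heuristic: once enough
# mistakes accumulate in the message history, wipe it and keep only the
# system prompt. All names here are illustrative, not a real framework API.
from dataclasses import dataclass, field


@dataclass
class Agent:
    system_prompt: str
    messages: list = field(default_factory=list)
    error_count: int = 0

    def record(self, role: str, content: str, is_error: bool = False) -> None:
        """Append a message; track whether it reflects a mistake."""
        self.messages.append({"role": role, "content": content})
        if is_error:
            self.error_count += 1

    def maybe_reset(self, max_errors: int = 3) -> bool:
        """Start fresh once too many errors have poisoned the context."""
        if self.error_count >= max_errors:
            self.messages.clear()  # only self.system_prompt survives
            self.error_count = 0
            return True
        return False


agent = Agent(system_prompt="You are a coding agent.")
agent.record("assistant", "rm -rf build/", is_error=True)
agent.record("assistant", "make: *** [all] Error 2", is_error=True)
agent.record("assistant", "segmentation fault", is_error=True)
assert agent.maybe_reset() is True   # history nuked, fresh context window
assert agent.messages == []
```

In practice the hard part is the error signal itself (knowing *which* messages were mistakes), which is exactly what Recovery-Bench probes.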
The inability of current models to course-correct from prior mistakes is a major barrier to continual learning. Recovery-Bench builds on ideas from Terminal-Bench to create challenging environments where an agent needs to recover from a prior failed trajectory.
A surprising finding is that the best performing models overall are clearly not the best performing "recovery" models. Claude Sonnet 4 leads the pack in overall coding ability (on Terminal-Bench), but GPT-5 is a clear #1 on Recovery-Bench.
Recovering from failed states is a challenging unsolved task on the road towards self-improving perpetual agents. We're excited to contribute our research and benchmarking code to the open source community to push the frontier of continual learning & open AI.
Really great reading list from @swyx - amazing to see MemGPT in the top 5 agent papers, side-by-side with one of my favorite LLM papers: ReAct
IMO ReAct (@ShunyuYao12 et al) is *the* most influential paper in the current wave of real-world LLM agents (LLMs being presented observations, reasoning then taking actions in a loop). Pick any LLM agents framework off of github today - chances are the core agentic loop they're using is basically ReAct.
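The observe-reason-act loop described above can be written down in a dozen lines. The sketch below uses a scripted stand-in for the model call (`call_llm`) and a single toy tool; neither is any specific framework's API, but the loop structure is the ReAct pattern itself.

```python
# Minimal ReAct-style agentic loop: show the LLM the history, parse out an
# Action, execute it, feed the Observation back, repeat until a Final Answer.
# `call_llm` is a scripted stand-in for a real model, for illustration only.
def call_llm(history: list[str]) -> str:
    if not any(line.startswith("Observation: 4") for line in history):
        return "Thought: I should add the numbers.\nAction: add(2, 2)"
    return "Thought: I have the answer.\nFinal Answer: 4"


def run_tool(action: str) -> str:
    """Execute a parsed action string like 'add(2, 2)'."""
    if action.startswith("add"):
        args = action[action.index("(") + 1 : action.index(")")].split(",")
        return str(sum(int(a) for a in args))
    raise ValueError(f"unknown action: {action}")


def react_loop(task: str, max_steps: int = 5) -> str:
    history = [f"Task: {task}"]
    for _ in range(max_steps):
        output = call_llm(history)       # LLM reasons over the history
        history.append(output)
        if "Final Answer:" in output:    # termination condition
            return output.split("Final Answer:")[1].strip()
        action = output.split("Action:")[1].strip()
        history.append(f"Observation: {run_tool(action)}")  # act, observe
    return "no answer"


assert react_loop("what is 2 + 2?") == "4"
```

Strip away the parsing details and this while-loop-over-an-LLM really is the core of most agent frameworks on GitHub today.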
MemGPT was our vision at Berkeley (@sarahwooders , @nlpkevinl , @profjoeyg , et al, now at @Letta_AI) for the next big thing in LLM agents *after* ReAct. LLM agents break down into two components - (1) the LLM under the hood which goes from tokens to tokens, and (2) the closed system around that LLM that prepares the input tokens and parses the output tokens.
The most important question in an LLM agent is *how* do you place tokens in the context window of the LLM? This determines what your agent knows and how it behaves. The reason LLM agents today "suck" is because this problem (assembling the context window) is an incredibly difficult open research question.
MemGPT predicts a future where the context window of an LLM agent is assembled dynamically by an intelligent process (you could call this another agent, or the "LLM OS").
Today, the work of context compilation is largely done by hand. Tomorrow, it'll be done by LLMs.
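To make the "dynamically assembled context window" idea concrete, here is a toy context compiler: an outer process that decides which stored memories fit into a fixed token budget before each model call. The greedy word-overlap scoring and the budget are illustrative assumptions, not MemGPT's actual implementation.

```python
# Toy "context compiler": greedily pack the most query-relevant memories
# into a fixed-size context window. Scoring and budgeting are deliberately
# crude stand-ins for whatever an intelligent assembler (the "LLM OS")
# would actually do.
def assemble_context(query: str, memories: list[str], budget: int) -> str:
    def score(memory: str) -> int:
        # Crude relevance proxy: count of words shared with the query.
        return len(set(memory.lower().split()) & set(query.lower().split()))

    window: list[str] = []
    used = 0
    for mem in sorted(memories, key=score, reverse=True):
        cost = len(mem.split())  # token count approximated by word count
        if used + cost <= budget:
            window.append(mem)
            used += cost
    return "\n".join(window)


memories = [
    "The user prefers Python over Java.",
    "The user's cat is named Mochi.",
    "The build uses Python 3.11 and pytest.",
]
ctx = assemble_context("which Python version does the build use?", memories, budget=12)
assert "Python 3.11" in ctx
```

The MemGPT bet is that this selection step itself gets delegated to an LLM rather than a hand-written heuristic like the one above.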
Excited to finally announce @Letta_AI !
The next frontier in AI is in the stateful layer above the base models - the "memory layer", or "LLM OS".
Letta's mission is to build this layer in the open (say "no" 🙅 to privatized chain of thought).
💤 sleep-time compute: make your machines think while they sleep -> arxiv.org/abs/2504.13171
over the past several months we (at @Letta_AI) have been exploring how to effectively utilize "sleep time" to scale compute.
the concept of "sleep-time compute" is deeply tied to memory - applying compute at sleep-time is only possible if your agent has persistent state which can be continuously rewritten with additional compute cycles.
in fact, the concept of "heartbeats" in MemGPT was originally inspired by the idea of an AI receiving heartbeats to "awaken" it during sleep.
prior to MemGPT's release, I spent many weeks trying to perfect heartbeats so the agent would learn something useful during its downtime - but never quite got it to work right 😅
our new sleep-time agent design in @Letta_AI takes the heartbeats idea to the next level and lets you arbitrarily scale the amount of compute applied at sleep-time, with multiple agents wired together through shared memory.
while one agent sleeps, a fleet of agents can work to re-assemble its memory asynchronously in the background
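One way to picture this: the sleeping agent's persistent memory is a shared structure that a background worker keeps rewriting. The sketch below uses a thread and a toy `consolidate` rule purely as an analogy; the class names and the consolidation logic are assumptions for illustration, not Letta's design.

```python
# Illustrative analogy for sleep-time compute: while the main agent is idle,
# a background worker spends spare cycles rewriting its persistent memory
# (here, folding raw notes into a consolidated summary).
import threading


class SharedMemory:
    def __init__(self) -> None:
        self._lock = threading.Lock()
        self.raw_notes: list[str] = []
        self.summary: str = ""

    def add_note(self, note: str) -> None:
        with self._lock:
            self.raw_notes.append(note)

    def consolidate(self) -> None:
        """One 'sleep-time' cycle: rewrite state using spare compute."""
        with self._lock:
            if self.raw_notes:
                self.summary = "; ".join(self.raw_notes)
                self.raw_notes.clear()


memory = SharedMemory()
memory.add_note("user likes terse answers")
memory.add_note("project targets Python 3.11")

# The "sleeping" main agent does nothing; a background agent reorganizes
# its memory asynchronously.
sleeper = threading.Thread(target=memory.consolidate)
sleeper.start()
sleeper.join()

assert memory.raw_notes == []
assert "terse answers" in memory.summary
```

The real design replaces the `consolidate` rule with LLM calls, which is what makes the amount of sleep-time compute arbitrarily scalable.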
guys agent frameworks are so stupid, all you need is an anthropic API key and a while loop
…and a FastAPI server so that we can use the agents programmatically
…and some good API designs to enable multi user and multi agent support
…and a tool execution sandbox so that the agent tools don't interfere with the main server process
…and a real database so that the agents are actually persisted and don't disappear when the script finishes
…and an ORM so that we can properly scale the agent server
…and some extra middleware code since we also want to use local LLMs but they have less reliable function calling
…and some sort of file storage / embedding solution to do RAG
…and some sort of context management system to handle the long term memory problem and deal with context overflow
…congratulations, you just built an "agents framework"
the new frontier: AI agent hosting/serving 👾🛸 The AI/LLM agents stack is a significant departure from the standard LLM stack. The key difference between the two lies in managing state: LLM serving platforms are generally stateless, whereas agent serving platforms need to be stateful (they retain the agent state server-side).
Introducing MemGPT 📚🦙 a method for extending LLM context windows. Inspired by OS memory management, it provides an infinite virtualized context for fixed-context LLMs. Enables perpetual chatbots & large doc QA. Paper: arxiv.org/abs/2310.08560 GitHub: github.com/cpacker/memgpt
Today we're releasing Context-Bench, a benchmark (and live leaderboard!) measuring LLMs on Agentic Context Engineering. C-Bench measures an agent's ability to manipulate its own context window, a necessary skill for AI agents that can self-improve and continually learn.
👾","screen_name":"sarahwooders","type":"user"},{"user_id":"1821252546469752832","name":"Letta","screen_name":"Letta_AI","type":"user"}]},"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1070,"w":2048,"resize":"fit"},"medium":{"h":627,"w":1200,"resize":"fit"},"small":{"h":355,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1254,"width":2400,"focus_rects":[{"x":0,"y":0,"w":2239,"h":1254},{"x":0,"y":0,"w":1254,"h":1254},{"x":0,"y":0,"w":1100,"h":1254},{"x":0,"y":0,"w":627,"h":1254},{"x":0,"y":0,"w":2400,"h":1254}]},"media_results":{"result":{"media_key":"3_1838265655604449283"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1838272900929065150","view_count":30250,"bookmark_count":78,"created_at":1727113387000,"favorite_count":212,"quote_count":9,"reply_count":19,"retweet_count":21,"user_id_str":"2385913832","conversation_id_str":"1838272900929065150","full_text":"Excited to finally announce @Letta_AI !\n\nThe next frontier in AI is in the stateful layer above the base models - the \"memory layer\", or \"LLM OS\".\n\nLetta's mission is to build this layer in the open (say \"no\" 🙅 to privatized chain of thought). 
https://t.co/D8IJbFPpQK","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,92],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","quoted_status_id_str":"1983057852970463594","quoted_status_permalink":{"url":"https://t.co/ztXL3kpVYH","expanded":"https://twitter.com/sarahwooders/status/1983057852970463594","display":"x.com/sarahwooders/s…"},"retweeted":false,"fact_check":null,"id":"1983058188812660916","view_count":14863,"bookmark_count":19,"created_at":1761632890000,"favorite_count":95,"quote_count":0,"reply_count":4,"retweet_count":4,"user_id_str":"2385913832","conversation_id_str":"1983058188812660916","full_text":"the terrible responses api rollout is a perfect example of how to lose first mover advantage","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":1,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,280],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"arxiv.org/abs/2504.13171","expanded_url":"https://arxiv.org/abs/2504.13171","url":"https://t.co/AQkmOPC63f","indices":[70,93]},{"display_url":"arxiv.org/abs/2504.13171","expanded_url":"https://arxiv.org/abs/2504.13171","url":"https://t.co/jMPVVUUpni","indices":[67,90]}],"user_mentions":[{"id_str":"1821252546469752832","name":"Letta","screen_name":"Letta_AI","indices":[131,140]},{"id_str":"1821252546469752832","name":"Letta","screen_name":"Letta_AI","indices":[128,137]},{"id_str":"1821252546469752832","name":"Letta","screen_name":"Letta_AI","indices":[806,815]}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"quoted_status_id_str":"1914356940412772414","quoted_status_permalink":{"url":"https://t.co/OIHELOSME9","expanded":"https://twitter.com/Letta_AI/status/1914356940412772414","display":"x.com/Letta_AI/statu…"
},"retweeted":false,"fact_check":null,"id":"1914380650993569817","view_count":17701,"bookmark_count":38,"created_at":1745258889000,"favorite_count":94,"quote_count":3,"reply_count":2,"retweet_count":11,"user_id_str":"2385913832","conversation_id_str":"1914380650993569817","full_text":"💤 sleep-time compute: make your machines think while they sleep -> https://t.co/jMPVVUUpni\n\nover the past several months we (at @Letta_AI) have been exploring how to effectively utilize \"sleep time\" to scale compute.\n\nthe concept of \"sleep-time compute\" is deeply tied to memory - applying compute at sleep-time is only possible if your agent has persistent state which can be continuously re-written with additional compute cycles.\n\nin fact, the concept of \"heartbeats\" in MemGPT was actually inspired by the idea of an AI receiving heartbeats to \"awake\" it during sleep.\n\nprior to MemGPT's release, I actually spent many weeks trying to perfect heartbeats to try and get the agent to learn something useful during its downtime - but never quite got it to work right 😅\n\nour new sleep-time agent design in @Letta_AI takes the heartbeats idea to the next level and allows you to arbitrarily scale the amount of compute you want to apply at sleep-time with multiple agents with memory wired together.\n\nwhile one agent sleeps, a fleet of agents can work to re-assemble its memory asynchronously in the 
background","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":1,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,22],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/pq2jshcXUe","expanded_url":"https://x.com/charlespacker/status/1939133021652951436/photo/1","id_str":"1939132725128003584","indices":[23,46],"media_key":"3_1939132725128003584","media_url_https":"https://pbs.twimg.com/media/Guku8IWXAAAKIeq.jpg","type":"photo","url":"https://t.co/pq2jshcXUe","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1018,"w":1228,"resize":"fit"},"medium":{"h":995,"w":1200,"resize":"fit"},"small":{"h":564,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1018,"width":1228,"focus_rects":[{"x":0,"y":0,"w":1228,"h":688},{"x":105,"y":0,"w":1018,"h":1018},{"x":168,"y":0,"w":893,"h":1018},{"x":360,"y":0,"w":509,"h":1018},{"x":0,"y":0,"w":1228,"h":1018}]},"media_results":{"result":{"media_key":"3_1939132725128003584"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/pq2jshcXUe","expanded_url":"https://x.com/charlespacker/status/1939133021652951436/photo/1","id_str":"1939132725128003584","indices":[23,46],"media_key":"3_1939132725128003584","media_url_https":"https://pbs.twimg.com/media/Guku8IWXAAAKIeq.jpg","type":"photo","url":"https://t.co/pq2jshcXUe","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1018,"w":1228,"resize":"fit"},"medium":{"h":995,"w":1200,"resize":"fit"},"small":{"h":564,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1018,"width":1228,"focus_rects":[{"x":0,"y":0,"w":1228,"h":688},{"x":105,"y":0,"w":1018,"h":1018},{"x
":168,"y":0,"w":893,"h":1018},{"x":360,"y":0,"w":509,"h":1018},{"x":0,"y":0,"w":1228,"h":1018}]},"media_results":{"result":{"media_key":"3_1939132725128003584"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1939133021652951436","view_count":7238,"bookmark_count":9,"created_at":1751160314000,"favorite_count":77,"quote_count":2,"reply_count":6,"retweet_count":4,"user_id_str":"2385913832","conversation_id_str":"1939133021652951436","full_text":"Welcome to Claude Code https://t.co/pq2jshcXUe","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,232],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"letta.com/blog/sleep-tim…","expanded_url":"https://www.letta.com/blog/sleep-time-compute","url":"https://t.co/s8ZY8VVCc4","indices":[209,232]}],"user_mentions":[]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"quoted_status_id_str":"1965452690382618860","quoted_status_permalink":{"url":"https://t.co/siZ5Ma04SU","expanded":"https://twitter.com/intelligenceco/status/1965452690382618860","display":"x.com/intelligenceco…"},"retweeted":false,"fact_check":null,"id":"1965589076561854722","view_count":20893,"bookmark_count":55,"created_at":1757467929000,"favorite_count":57,"quote_count":1,"reply_count":1,"retweet_count":7,"user_id_str":"2385913832","conversation_id_str":"1965589076561854722","full_text":"awesome to see another example of sleep-time compute deployed in production, especially on such a slick consumer app\n\n\"setting up the memory can take up to 6 hours\"\n\ncongrats on the launch @nycintelligence 
👏\n\nhttps://t.co/s8ZY8VVCc4","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":1,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,273],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"992153930095251456","name":"DeepLearning.AI","screen_name":"DeepLearningAI","indices":[874,889]},{"id_str":"216939636","name":"Andrew Ng","screen_name":"AndrewYNg","indices":[920,930]}]},"favorited":false,"lang":"en","quoted_status_id_str":"1854587401018261962","quoted_status_permalink":{"url":"https://t.co/6RlJztOZsg","expanded":"https://twitter.com/AndrewYNg/status/1854587401018261962","display":"x.com/AndrewYNg/stat…"},"retweeted":false,"fact_check":null,"id":"1854591955889799236","view_count":5308,"bookmark_count":26,"created_at":1731004153000,"favorite_count":49,"quote_count":1,"reply_count":6,"retweet_count":2,"user_id_str":"2385913832","conversation_id_str":"1854591955889799236","full_text":"How do we get from LLMs-as-chatbots to LLMs-as-agents? Programmatic memory (context) management through an LLM OS.\n\nThe biggest problem today with LLM agents is memory. Not just “memory” in a semantic sense (how can we get LLM agents to remember facts+preferences over time, similar to how a human does?), but also memory management - what tokens do we put into the context window at each LLM inference step, and why?\n\nThe memory management (or “context management”) problem is the fundamental problem in programming LLMs to turn them from autocomplete engines into compound agentic systems that can interact with the world and learn from experience. JSON mode, tool use / function calling, RAG, chain-of-thought: these are all early forays into building the “LLM OS” for context management.\n\nWe’re still extremely early in the history of LLM OS development. In our (free!!) 
@DeepLearningAI course in collaboration with @AndrewYNg , we distill the main ideas behind what it even means to do “LLM memory management” into a clean and concise example and guide you through building a version of MemGPT (one of the early examples of an LLM OS) yourself, entirely from scratch.","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":1,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,272],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1290373758646128641","name":"Bilt","screen_name":"BiltRewards","indices":[1079,1091]}]},"favorited":false,"lang":"en","quoted_status_id_str":"1981385833380032558","quoted_status_permalink":{"url":"https://t.co/I5LCJMTAMn","expanded":"https://twitter.com/Letta_AI/status/1981385833380032558","display":"x.com/Letta_AI/statu…"},"retweeted":false,"fact_check":null,"id":"1981421458019754012","view_count":4801,"bookmark_count":15,"created_at":1761242663000,"favorite_count":27,"quote_count":0,"reply_count":2,"retweet_count":4,"user_id_str":"2385913832","conversation_id_str":"1981421458019754012","full_text":"Super excited about this release: Letta Evals is the first evals platform *purpose-built* for stateful agents. What does that actually mean?\n\nWhen you eval agents w/ Letta Evals, you can literally pull an agent out of production (by cloning a replica of its active state), evaluate it, then push live changes back into prod (if desired).\n\nWe eval humans all the time: standardized tests (SAT/ACT), job interviews, driving tests. But when a human takes a test, they actually get the ability to *learn* from that test. If you fail your driving test, that lived experience should help you better prepare for the next \"eval\".\n\nWith Letta Evals, you can now run evaluations of truly stateful agents (not just prompt + tool configurations). 
This is all made possible due to the existence of AgentFile (.af), our open source portable file format that allows you to serialize any agent's state, which powers the efficient agent replication needed to run Letta Evals at scale.\n\nWill be exciting to see what people build with this. Letta Evals is already in production at companies like @BiltRewards , and is used by our own research team to run comprehensive (agentic) evaluations of new frontier models like GPT-5, Sonnet 4.5 and GLM-4.6.","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":1,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,197],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","quoted_status_id_str":"1839402319651500528","quoted_status_permalink":{"url":"https://t.co/8ol5mPf15n","expanded":"https://twitter.com/martin_casado/status/1839402319651500528","display":"x.com/martin_casado/…"},"retweeted":false,"fact_check":null,"id":"1839527141492379690","view_count":4643,"bookmark_count":0,"created_at":1727412421000,"favorite_count":25,"quote_count":0,"reply_count":0,"retweet_count":5,"user_id_str":"2385913832","conversation_id_str":"1839527141492379690","full_text":"SB 1047 is bad for startups, bad for open source, and bad for America. 
The only beneficiaries are closed AI companies whose current valuations make no sense in a perfectly competitive model market.","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":1,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,92],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/y4fjFrFiLf","expanded_url":"https://x.com/charlespacker/status/1885201401942597740/photo/1","id_str":"1885201397580521474","indices":[93,116],"media_key":"3_1885201397580521474","media_url_https":"https://pbs.twimg.com/media/GimUrtDbYAIQw-G.jpg","type":"photo","url":"https://t.co/y4fjFrFiLf","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":2016,"w":1512,"resize":"fit"},"medium":{"h":1200,"w":900,"resize":"fit"},"small":{"h":680,"w":510,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":2016,"width":1512,"focus_rects":[{"x":0,"y":937,"w":1512,"h":847},{"x":0,"y":504,"w":1512,"h":1512},{"x":0,"y":292,"w":1512,"h":1724},{"x":504,"y":0,"w":1008,"h":2016},{"x":0,"y":0,"w":1512,"h":2016}]},"media_results":{"result":{"media_key":"3_1885201397580521474"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/y4fjFrFiLf","expanded_url":"https://x.com/charlespacker/status/1885201401942597740/photo/1","id_str":"1885201397580521474","indices":[93,116],"media_key":"3_1885201397580521474","media_url_https":"https://pbs.twimg.com/media/GimUrtDbYAIQw-G.jpg","type":"photo","url":"https://t.co/y4fjFrFiLf","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":2016,"w":1512,"resize":"fit"},"medium":{"h":1200,"w":900,"resize":"fit"},"small":{"h":680,"w":510,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"orig
inal_info":{"height":2016,"width":1512,"focus_rects":[{"x":0,"y":937,"w":1512,"h":847},{"x":0,"y":504,"w":1512,"h":1512},{"x":0,"y":292,"w":1512,"h":1724},{"x":504,"y":0,"w":1008,"h":2016},{"x":0,"y":0,"w":1512,"h":2016}]},"media_results":{"result":{"media_key":"3_1885201397580521474"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1885201401942597740","view_count":1439,"bookmark_count":1,"created_at":1738302014000,"favorite_count":23,"quote_count":2,"reply_count":4,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1885201401942597740","full_text":"stateful agents with persistence and memory, fully local, coming soon to macOS and Windows 🌚 https://t.co/y4fjFrFiLf","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,114],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/FUD9qJycoc","expanded_url":"https://x.com/charlespacker/status/1961490334850163119/photo/1","id_str":"1961490329338822656","indices":[115,138],"media_key":"3_1961490329338822656","media_url_https":"https://pbs.twimg.com/media/GzidD-aawAA49NL.jpg","type":"photo","url":"https://t.co/FUD9qJycoc","ext_media_availability":{"status":"Available"},"features":{"all":{"tags":[{"user_id":"4426138094","name":"ryan","screen_name":"wacheeeee","type":"user"}]},"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":2048,"w":1536,"resize":"fit"},"medium":{"h":1200,"w":900,"resize":"fit"},"small":{"h":680,"w":510,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":2048,"width":1536,"focus_rects":[{"x":0,"y":1188,"w":1536,"h":860},{"x":0,"y":512,"w":1536,"h":1536},{"x":0,"y":297,"w":1536,"h":1751},{"x":51,"y":0,"w":1024,"h":2048},{"x":0,"y":0,"w":1536,"h":2048}]},"media_results":{"result":{"media_key":"3_1961490329
338822656"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1821252546469752832","name":"Letta","screen_name":"Letta_AI","indices":[85,94]}]},"extended_entities":{"media":[{"display_url":"pic.x.com/FUD9qJycoc","expanded_url":"https://x.com/charlespacker/status/1961490334850163119/photo/1","id_str":"1961490329338822656","indices":[115,138],"media_key":"3_1961490329338822656","media_url_https":"https://pbs.twimg.com/media/GzidD-aawAA49NL.jpg","type":"photo","url":"https://t.co/FUD9qJycoc","ext_media_availability":{"status":"Available"},"features":{"all":{"tags":[{"user_id":"4426138094","name":"ryan","screen_name":"wacheeeee","type":"user"}]},"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":2048,"w":1536,"resize":"fit"},"medium":{"h":1200,"w":900,"resize":"fit"},"small":{"h":680,"w":510,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":2048,"width":1536,"focus_rects":[{"x":0,"y":1188,"w":1536,"h":860},{"x":0,"y":512,"w":1536,"h":1536},{"x":0,"y":297,"w":1536,"h":1751},{"x":51,"y":0,"w":1024,"h":2048},{"x":0,"y":0,"w":1536,"h":2048}]},"media_results":{"result":{"media_key":"3_1961490329338822656"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1961490334850163119","view_count":1207,"bookmark_count":1,"created_at":1756490713000,"favorite_count":23,"quote_count":0,"reply_count":1,"retweet_count":2,"user_id_str":"2385913832","conversation_id_str":"1961490334850163119","full_text":"walked into a random coffee shop in SF and found someone building a PR review bot on @Letta_AI i think we’re gmi 😭 
https://t.co/FUD9qJycoc","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null}],"ctweets":[{"bookmarked":false,"display_text_range":[0,277],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1877398183938240530","view_count":125879,"bookmark_count":1020,"created_at":1736441582000,"favorite_count":975,"quote_count":21,"reply_count":55,"retweet_count":70,"user_id_str":"2385913832","conversation_id_str":"1877398183938240530","full_text":"guys agent frameworks are so stupid, all you need is an anthropic API key and a while loop\n\n…and a FastAPI server so that we can use the agents programmatically \n\n…and some good API designs to enable multi user and multi agent support\n\n…and a tool execution sandbox so that the agent tools don’t interfere with the main server process \n\n…and a real database so that the agents are actually persisted and don’t disappear when the script finishes \n\n…and an ORM so that we can properly scale the agent server\n\n…and some extra middleware code since we also want to use local LLMs but they have less reliable function calling\n\n…and some sort of file storage / embedding solution to do RAG\n\n…and some sort of context management system to handle the long term memory problem and deal with context overflow \n\n…\n\n…congratulations, you just built an “agents 
framework”","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,243],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/D8IJbFPpQK","expanded_url":"https://x.com/charlespacker/status/1838272900929065150/photo/1","id_str":"1838265655604449283","indices":[244,267],"media_key":"3_1838265655604449283","media_url_https":"https://pbs.twimg.com/media/GYLU380akAM3T7Y.jpg","type":"photo","url":"https://t.co/D8IJbFPpQK","ext_media_availability":{"status":"Available"},"features":{"all":{"tags":[{"user_id":"144333614","name":"Sarah Wooders 👾","screen_name":"sarahwooders","type":"user"},{"user_id":"1821252546469752832","name":"Letta","screen_name":"Letta_AI","type":"user"}]},"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1070,"w":2048,"resize":"fit"},"medium":{"h":627,"w":1200,"resize":"fit"},"small":{"h":355,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1254,"width":2400,"focus_rects":[{"x":0,"y":0,"w":2239,"h":1254},{"x":0,"y":0,"w":1254,"h":1254},{"x":0,"y":0,"w":1100,"h":1254},{"x":0,"y":0,"w":627,"h":1254},{"x":0,"y":0,"w":2400,"h":1254}]},"media_results":{"result":{"media_key":"3_1838265655604449283"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1821252546469752832","name":"Letta","screen_name":"Letta_AI","indices":[28,37]}]},"extended_entities":{"media":[{"display_url":"pic.x.com/D8IJbFPpQK","expanded_url":"https://x.com/charlespacker/status/1838272900929065150/photo/1","id_str":"1838265655604449283","indices":[244,267],"media_key":"3_1838265655604449283","media_url_https":"https://pbs.twimg.com/media/GYLU380akAM3T7Y.jpg","type":"photo","url":"https://t.co/D8IJbFPpQK","ext_media_availability":{"status":"Available"},"features":{"all":{"tags":[{"user_id":"144333614","name":"Sarah Wooders 
👾","screen_name":"sarahwooders","type":"user"},{"user_id":"1821252546469752832","name":"Letta","screen_name":"Letta_AI","type":"user"}]},"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1070,"w":2048,"resize":"fit"},"medium":{"h":627,"w":1200,"resize":"fit"},"small":{"h":355,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1254,"width":2400,"focus_rects":[{"x":0,"y":0,"w":2239,"h":1254},{"x":0,"y":0,"w":1254,"h":1254},{"x":0,"y":0,"w":1100,"h":1254},{"x":0,"y":0,"w":627,"h":1254},{"x":0,"y":0,"w":2400,"h":1254}]},"media_results":{"result":{"media_key":"3_1838265655604449283"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1838272900929065150","view_count":30250,"bookmark_count":78,"created_at":1727113387000,"favorite_count":212,"quote_count":9,"reply_count":19,"retweet_count":21,"user_id_str":"2385913832","conversation_id_str":"1838272900929065150","full_text":"Excited to finally announce @Letta_AI !\n\nThe next frontier in AI is in the stateful layer above the base models - the \"memory layer\", or \"LLM OS\".\n\nLetta's mission is to build this layer in the open (say \"no\" 🙅 to privatized chain of thought). 
https://t.co/D8IJbFPpQK","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,272],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/rbwTF1v2V3","expanded_url":"https://x.com/charlespacker/status/1983987055513534903/photo/1","id_str":"1983985762636029952","indices":[273,296],"media_key":"3_1983985762636029952","media_url_https":"https://pbs.twimg.com/media/G4iIih1aYAAxj75.png","type":"photo","url":"https://t.co/rbwTF1v2V3","ext_media_availability":{"status":"Available"},"features":{"all":{"tags":[{"user_id":"1863959670169501696","name":"Kimi.ai","screen_name":"Kimi_Moonshot","type":"user"},{"user_id":"1726486879456096256","name":"Z.ai","screen_name":"Zai_org","type":"user"},{"user_id":"4398626122","name":"OpenAI","screen_name":"OpenAI","type":"user"},{"user_id":"1353836358901501952","name":"Anthropic","screen_name":"AnthropicAI","type":"user"}]},"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1173,"w":2048,"resize":"fit"},"medium":{"h":687,"w":1200,"resize":"fit"},"small":{"h":389,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1276,"width":2228,"focus_rects":[{"x":0,"y":28,"w":2228,"h":1248},{"x":643,"y":0,"w":1276,"h":1276},{"x":722,"y":0,"w":1119,"h":1276},{"x":962,"y":0,"w":638,"h":1276},{"x":0,"y":0,"w":2228,"h":1276}]},"media_results":{"result":{"media_key":"3_1983985762636029952"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/rbwTF1v2V3","expanded_url":"https://x.com/charlespacker/status/1983987055513534903/photo/1","id_str":"1983985762636029952","indices":[273,296],"media_key":"3_1983985762636029952","media_url_https":"https://pbs.twimg.com/media/G4iIih1aYAAxj75.png","type":"photo","url":"https://t.co/rbwTF1v2V3","ext_media_availability":{"status":"Available"},"features":
{"all":{"tags":[{"user_id":"1863959670169501696","name":"Kimi.ai","screen_name":"Kimi_Moonshot","type":"user"},{"user_id":"1726486879456096256","name":"Z.ai","screen_name":"Zai_org","type":"user"},{"user_id":"4398626122","name":"OpenAI","screen_name":"OpenAI","type":"user"},{"user_id":"1353836358901501952","name":"Anthropic","screen_name":"AnthropicAI","type":"user"}]},"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1173,"w":2048,"resize":"fit"},"medium":{"h":687,"w":1200,"resize":"fit"},"small":{"h":389,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1276,"width":2228,"focus_rects":[{"x":0,"y":28,"w":2228,"h":1248},{"x":643,"y":0,"w":1276,"h":1276},{"x":722,"y":0,"w":1119,"h":1276},{"x":962,"y":0,"w":638,"h":1276},{"x":0,"y":0,"w":2228,"h":1276}]},"media_results":{"result":{"media_key":"3_1983985762636029952"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1983987055513534903","view_count":27458,"bookmark_count":167,"created_at":1761854349000,"favorite_count":329,"quote_count":5,"reply_count":19,"retweet_count":35,"user_id_str":"2385913832","conversation_id_str":"1983987055513534903","full_text":"Today we're releasing Context-Bench, a benchmark (and live leaderboard!) measuring LLMs on Agentic Context Engineering.\n\nC-Bench measures an agent's ability to manipulate its own context window, a necessary skill for AI agents that can self-improve and continually learn. 
https://t.co/rbwTF1v2V3","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,276],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/R0NU4VJVRu","expanded_url":"https://x.com/charlespacker/status/1960784370899542389/photo/1","id_str":"1960783303063334912","indices":[277,300],"media_key":"3_1960783303063334912","media_url_https":"https://pbs.twimg.com/media/GzYaBoSbcAA7P6Y.jpg","type":"photo","url":"https://t.co/R0NU4VJVRu","ext_media_availability":{"status":"Available"},"features":{"all":{"tags":[{"user_id":"904754498211643393","name":"Kevin Lin 林冠言","screen_name":"nlpkevinl","type":"user"},{"user_id":"1062544973294432257","name":"Shangyin Tan","screen_name":"ShangyinT","type":"user"},{"user_id":"1821252546469752832","name":"Letta","screen_name":"Letta_AI","type":"user"}]},"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":586,"w":986,"resize":"fit"},"medium":{"h":586,"w":986,"resize":"fit"},"small":{"h":404,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":586,"width":986,"focus_rects":[{"x":0,"y":34,"w":986,"h":552},{"x":224,"y":0,"w":586,"h":586},{"x":260,"y":0,"w":514,"h":586},{"x":371,"y":0,"w":293,"h":586},{"x":0,"y":0,"w":986,"h":586}]},"media_results":{"result":{"media_key":"3_1960783303063334912"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1821252546469752832","name":"Letta","screen_name":"Letta_AI","indices":[242,251]},{"id_str":"1821252546469752832","name":"Letta","screen_name":"Letta_AI","indices":[238,247]}]},"extended_entities":{"media":[{"display_url":"pic.x.com/R0NU4VJVRu","expanded_url":"https://x.com/charlespacker/status/1960784370899542389/photo/1","id_str":"1960783303063334912","indices":[277,300],"media_key":"3_1960783303063334912","media_url_https":"https://pbs.twimg.com/media/GzYaBoSbcAA7P6Y.jpg","type":"p
hoto","url":"https://t.co/R0NU4VJVRu","ext_media_availability":{"status":"Available"},"features":{"all":{"tags":[{"user_id":"904754498211643393","name":"Kevin Lin 林冠言","screen_name":"nlpkevinl","type":"user"},{"user_id":"1062544973294432257","name":"Shangyin Tan","screen_name":"ShangyinT","type":"user"},{"user_id":"1821252546469752832","name":"Letta","screen_name":"Letta_AI","type":"user"}]},"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":586,"w":986,"resize":"fit"},"medium":{"h":586,"w":986,"resize":"fit"},"small":{"h":404,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":586,"width":986,"focus_rects":[{"x":0,"y":34,"w":986,"h":552},{"x":224,"y":0,"w":586,"h":586},{"x":260,"y":0,"w":514,"h":586},{"x":371,"y":0,"w":293,"h":586},{"x":0,"y":0,"w":986,"h":586}]},"media_results":{"result":{"media_key":"3_1960783303063334912"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1960784370899542389","view_count":29967,"bookmark_count":111,"created_at":1756322398000,"favorite_count":269,"quote_count":6,"reply_count":18,"retweet_count":31,"user_id_str":"2385913832","conversation_id_str":"1960784370899542389","full_text":"Prior to GPT-5, Sonnet & Opus were the undisputed kings of AI coding. It turns out the GPT-5 is significantly better than Sonnet in one key way: the ability to recover from mistakes.\n\nToday we're excited to release our latest research at @Letta_AI on Recovery-Bench, a new benchmark for measuring how well model can recover from errors and corrupted states.\n\nCoding agents often get confused by past mistakes, and mistakes that accumulate over time can quickly poison the context window. 
In practice, it can often be better to \"nuke\" your agent's context window and start fresh once your agent has accumulated enough mistakes in its message history.\n\nThe inability of current models to course-correct from prior mistakes is a major barrier towards continual learning. Recovery-Bench builds on ideas from Terminal-Bench to create challenging environments where an agent needs to recover from a prior failed trajectory.\n\nA surprising finding is that the best performing models overall are clearly not the best performing \"recovery\" models. Claude Sonnet 4 leads the pack in overall coding ability (on Terminal-Bench), but GPT-5 is a clear #1 on Recovery-Bench.\n\nRecovering from failed states is a challenging unsolved task on the road towards self-improving perpetual agents. We're excited to contribute our research and benchmarking code to the open source community to push the frontier of continual learning & open AI.","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,275],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/rvv3h6lPCQ","expanded_url":"https://x.com/charlespacker/status/1857105467031630220/photo/1","ext_alt_text":"The AI agents stack in late 2024, organized into three key layers: agent hosting/serving, agent frameworks, and LLM models & 
storage.","id_str":"1857102810980261888","indices":[276,299],"media_key":"3_1857102810980261888","media_url_https":"https://pbs.twimg.com/media/GcXBKs_boAA7KLY.jpg","type":"photo","url":"https://t.co/rvv3h6lPCQ","ext_media_availability":{"status":"Available"},"features":{"all":{"tags":[{"user_id":"1710011976179728384","name":"Composio","screen_name":"composiohq","type":"user"},{"user_id":"1507488634458439685","name":"Chroma","screen_name":"trychroma","type":"user"},{"user_id":"1642834485673619457","name":"E2B","screen_name":"e2b","type":"user"},{"user_id":"1551987185372512263","name":"Modal","screen_name":"modal_labs","type":"user"},{"user_id":"1687132266391289856","name":"Braintrust","screen_name":"braintrustdata","type":"user"},{"user_id":"1757344117426896896","name":"browserbase 🅱️","screen_name":"browserbasehq","type":"user"},{"user_id":"1688410127378829312","name":"ollama","screen_name":"ollama","type":"user"},{"user_id":"1654830858098860032","name":"LM Studio","screen_name":"lmstudio","type":"user"},{"user_id":"1219566488325017602","name":"Supabase","screen_name":"supabase","type":"user"},{"user_id":"1406351060634009600","name":"LiveKit","screen_name":"livekit","type":"user"}]},"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":2048,"w":1706,"resize":"fit"},"medium":{"h":1200,"w":1000,"resize":"fit"},"small":{"h":680,"w":566,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":4096,"width":3412,"focus_rects":[{"x":0,"y":374,"w":3412,"h":1911},{"x":0,"y":0,"w":3412,"h":3412},{"x":0,"y":0,"w":3412,"h":3890},{"x":1327,"y":0,"w":2048,"h":4096},{"x":0,"y":0,"w":3412,"h":4096}]},"media_results":{"result":{"media_key":"3_1857102810980261888"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/rvv3h6lPCQ","expanded_url":"https://x.com/charlespacker/status/1857105467031630220/photo/1","ext_alt_text":"The AI agents 
stack in late 2024, organized into three key layers: agent hosting/serving, agent frameworks, and LLM models & storage.","id_str":"1857102810980261888","indices":[276,299],"media_key":"3_1857102810980261888","media_url_https":"https://pbs.twimg.com/media/GcXBKs_boAA7KLY.jpg","type":"photo","url":"https://t.co/rvv3h6lPCQ","ext_media_availability":{"status":"Available"},"features":{"all":{"tags":[{"user_id":"1710011976179728384","name":"Composio","screen_name":"composiohq","type":"user"},{"user_id":"1507488634458439685","name":"Chroma","screen_name":"trychroma","type":"user"},{"user_id":"1642834485673619457","name":"E2B","screen_name":"e2b","type":"user"},{"user_id":"1551987185372512263","name":"Modal","screen_name":"modal_labs","type":"user"},{"user_id":"1687132266391289856","name":"Braintrust","screen_name":"braintrustdata","type":"user"},{"user_id":"1757344117426896896","name":"browserbase 🅱️","screen_name":"browserbasehq","type":"user"},{"user_id":"1688410127378829312","name":"ollama","screen_name":"ollama","type":"user"},{"user_id":"1654830858098860032","name":"LM 
Studio","screen_name":"lmstudio","type":"user"},{"user_id":"1219566488325017602","name":"Supabase","screen_name":"supabase","type":"user"},{"user_id":"1406351060634009600","name":"LiveKit","screen_name":"livekit","type":"user"}]},"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":2048,"w":1706,"resize":"fit"},"medium":{"h":1200,"w":1000,"resize":"fit"},"small":{"h":680,"w":566,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":4096,"width":3412,"focus_rects":[{"x":0,"y":374,"w":3412,"h":1911},{"x":0,"y":0,"w":3412,"h":3412},{"x":0,"y":0,"w":3412,"h":3890},{"x":1327,"y":0,"w":2048,"h":4096},{"x":0,"y":0,"w":3412,"h":4096}]},"media_results":{"result":{"media_key":"3_1857102810980261888"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1857105467031630220","view_count":64223,"bookmark_count":881,"created_at":1731603421000,"favorite_count":665,"quote_count":11,"reply_count":13,"retweet_count":142,"user_id_str":"2385913832","conversation_id_str":"1857105467031630220","full_text":"the new frontier: AI agent hosting/serving 👾🛸\n\nthe AI/LLM agents stack is a significant departure from the standard LLM stack. 
the key difference between the two lies in managing state: LLM serving platforms are generally stateless, whereas agent serving platforms need to be stateful (retain the agent state server-side)","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,280],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/ihccbsy31j","expanded_url":"https://twitter.com/charlespacker/status/1713935864668303857/photo/1","id_str":"1713930938890244096","indices":[281,304],"media_key":"16_1713930938890244096","media_url_https":"https://pbs.twimg.com/tweet_video_thumb/F8kbG64a0AAhMso.jpg","type":"animated_gif","url":"https://t.co/iHCcBsy31j","ext_media_availability":{"status":"Available"},"sizes":{"large":{"h":528,"w":1122,"resize":"fit"},"medium":{"h":528,"w":1122,"resize":"fit"},"small":{"h":320,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":528,"width":1122,"focus_rects":[]},"video_info":{"aspect_ratio":[17,8],"variants":[{"bitrate":0,"content_type":"video/mp4","url":"https://video.twimg.com/tweet_video/F8kbG64a0AAhMso.mp4"}]},"media_results":{"result":{"media_key":"16_1713930938890244096"}}}],"symbols":[],"timestamps":[],"urls":[{"display_url":"arxiv.org/abs/2310.08560","expanded_url":"http://arxiv.org/abs/2310.08560","url":"https://t.co/KeLpcVDGhe","indices":[225,248]},{"display_url":"github.com/cpacker/memgpt","expanded_url":"http://github.com/cpacker/memgpt","url":"https://t.co/GLFR1aR2tm","indices":[257,280]}],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.twitter.com/iHCcBsy31j","expanded_url":"https://twitter.com/charlespacker/status/1713935864668303857/photo/1","id_str":"1713930938890244096","indices":[281,304],"media_key":"16_1713930938890244096","media_url_https":"https://pbs.twimg.com/tweet_video_thumb/F8kbG64a0AAhMso.jpg","type":"animated_gif","url":"https://t.co/iHCcBsy31j","ext_media_availabil
ity":{"status":"Available"},"sizes":{"large":{"h":528,"w":1122,"resize":"fit"},"medium":{"h":528,"w":1122,"resize":"fit"},"small":{"h":320,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":528,"width":1122,"focus_rects":[]},"video_info":{"aspect_ratio":[17,8],"variants":[{"bitrate":0,"content_type":"video/mp4","url":"https://video.twimg.com/tweet_video/F8kbG64a0AAhMso.mp4"}]},"media_results":{"result":{"media_key":"16_1713930938890244096"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1713935864668303857","view_count":92907,"bookmark_count":332,"created_at":1697469128000,"favorite_count":466,"quote_count":16,"reply_count":10,"retweet_count":107,"user_id_str":"2385913832","conversation_id_str":"1713935864668303857","full_text":"Introducing MemGPT 📚🦙 a method for extending LLM context windows. Inspired by OS mem management, it provides an infinite virtualized context for fixed-context LLMs. Enables perpetual chatbots & large doc QA. 
🧵1/n\n\nPaper: https://t.co/KeLpcVDGhe\nGitHub: https://t.co/GLFR1aR2tm https://t.co/iHCcBsy31j","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,274],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/mw0ThewnoY","expanded_url":"https://x.com/charlespacker/status/1874061693640466841/photo/1","id_str":"1874061304073510914","indices":[275,298],"media_key":"3_1874061304073510914","media_url_https":"https://pbs.twimg.com/media/GgIA06ybYAIV4Im.jpg","type":"photo","url":"https://t.co/mw0ThewnoY","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1550,"w":1556,"resize":"fit"},"medium":{"h":1195,"w":1200,"resize":"fit"},"small":{"h":677,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1550,"width":1556,"focus_rects":[{"x":0,"y":0,"w":1556,"h":871},{"x":0,"y":0,"w":1550,"h":1550},{"x":0,"y":0,"w":1360,"h":1550},{"x":0,"y":0,"w":775,"h":1550},{"x":0,"y":0,"w":1556,"h":1550}]},"media_results":{"result":{"media_key":"3_1874061304073510914"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"33521530","name":"swyx","screen_name":"swyx","indices":[31,36]},{"id_str":"1271552707464032256","name":"Shunyu Yao","screen_name":"ShunyuYao12","indices":[155,167]},{"id_str":"33521530","name":"swyx","screen_name":"swyx","indices":[31,36]},{"id_str":"1271552707464032256","name":"Shunyu Yao","screen_name":"ShunyuYao12","indices":[155,167]},{"id_str":"144333614","name":"Sarah Wooders 👾","screen_name":"sarahwooders","indices":[487,500]},{"id_str":"904754498211643393","name":"Kevin Lin 林冠言","screen_name":"nlpkevinl","indices":[503,513]},{"id_str":"323533772","name":"Joey 
Gonzalez","screen_name":"profjoeyg","indices":[516,526]},{"id_str":"1821252546469752832","name":"Letta","screen_name":"Letta_AI","indices":[543,552]}]},"extended_entities":{"media":[{"display_url":"pic.x.com/mw0ThewnoY","expanded_url":"https://x.com/charlespacker/status/1874061693640466841/photo/1","id_str":"1874061304073510914","indices":[275,298],"media_key":"3_1874061304073510914","media_url_https":"https://pbs.twimg.com/media/GgIA06ybYAIV4Im.jpg","type":"photo","url":"https://t.co/mw0ThewnoY","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1550,"w":1556,"resize":"fit"},"medium":{"h":1195,"w":1200,"resize":"fit"},"small":{"h":677,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1550,"width":1556,"focus_rects":[{"x":0,"y":0,"w":1556,"h":871},{"x":0,"y":0,"w":1550,"h":1550},{"x":0,"y":0,"w":1360,"h":1550},{"x":0,"y":0,"w":775,"h":1550},{"x":0,"y":0,"w":1556,"h":1550}]},"media_results":{"result":{"media_key":"3_1874061304073510914"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"quoted_status_id_str":"1872719928618565646","quoted_status_permalink":{"url":"https://t.co/RhtF3Hvt2r","expanded":"https://twitter.com/latentspacepod/status/1872719928618565646","display":"x.com/latentspacepod…"},"retweeted":false,"fact_check":null,"id":"1874061693640466841","view_count":39809,"bookmark_count":295,"created_at":1735646100000,"favorite_count":254,"quote_count":1,"reply_count":7,"retweet_count":37,"user_id_str":"2385913832","conversation_id_str":"1874061693640466841","full_text":"Really great reading list from @swyx - amazing to see MemGPT in the top 5 agent papers, side-by-side with one of my favorite LLM papers: ReAct\n\nIMO ReAct (@ShunyuYao12 et al) is *the* most influential paper in the current wave of real-world LLM agents (LLMs being presented observations, 
reasoning then taking actions in a loop). Pick any LLM agents framework off of github today - chances are the core agentic loop they're using is basically ReAct.\n\nMemGPT was our vision at Berkeley (@sarahwooders , @nlpkevinl , @profjoeyg , et al, now at @Letta_AI) for the next big thing in LLM agents *after* ReAct. LLM agents break down into two components - (1) the LLM under the hood which goes from tokens to tokens, and (2) the closed system around that LLM that prepares the input tokens and parses the output tokens.\n\nThe most important question in an LLM agent is *how* do you place tokens in the context window of the LLM? This determines what your agent knows and how it behaves. The reason LLM agents today \"suck\" is because this problem (assembling the context window) is an incredibly difficult open research question.\n\nMemGPT predicts a future where the context window of an LLM agent is assembled dynamically by an intelligent process (you could call this another agent, or the \"LLM OS\").\n\nToday, the work of context compilation is largely done by hand. 
Tomorrow, it'll be done by LLMs.","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":1,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,273],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"992153930095251456","name":"DeepLearning.AI","screen_name":"DeepLearningAI","indices":[874,889]},{"id_str":"216939636","name":"Andrew Ng","screen_name":"AndrewYNg","indices":[920,930]}]},"favorited":false,"lang":"en","quoted_status_id_str":"1854587401018261962","quoted_status_permalink":{"url":"https://t.co/6RlJztOZsg","expanded":"https://twitter.com/AndrewYNg/status/1854587401018261962","display":"x.com/AndrewYNg/stat…"},"retweeted":false,"fact_check":null,"id":"1854591955889799236","view_count":5308,"bookmark_count":26,"created_at":1731004153000,"favorite_count":49,"quote_count":1,"reply_count":6,"retweet_count":2,"user_id_str":"2385913832","conversation_id_str":"1854591955889799236","full_text":"How do we get from LLMs-as-chatbots to LLMs-as-agents? Programmatic memory (context) management through an LLM OS.\n\nThe biggest problem today with LLM agents is memory. Not just “memory” in a semantic sense (how can we get LLM agents to remember facts+preferences over time, similar to how a human does?), but also memory management - what tokens do we put into the context window at each LLM inference step, and why?\n\nThe memory management (or “context management”) problem is the fundamental problem in programming LLMs to turn them from autocomplete engines into compound agentic systems that can interact with the world and learn from experience. JSON mode, tool use / function calling, RAG, chain-of-thought: these are all early forays into building the “LLM OS” for context management.\n\nWe’re still extremely early in the history of LLM OS development. In our (free!!) 
@DeepLearningAI course in collaboration with @AndrewYNg , we distill the main ideas behind what it even means to do “LLM memory management” into a clean and concise example and guide you through building a version of MemGPT (one of the early examples of an LLM OS) yourself, entirely from scratch.","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":1,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,22],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/pq2jshcXUe","expanded_url":"https://x.com/charlespacker/status/1939133021652951436/photo/1","id_str":"1939132725128003584","indices":[23,46],"media_key":"3_1939132725128003584","media_url_https":"https://pbs.twimg.com/media/Guku8IWXAAAKIeq.jpg","type":"photo","url":"https://t.co/pq2jshcXUe","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1018,"w":1228,"resize":"fit"},"medium":{"h":995,"w":1200,"resize":"fit"},"small":{"h":564,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1018,"width":1228,"focus_rects":[{"x":0,"y":0,"w":1228,"h":688},{"x":105,"y":0,"w":1018,"h":1018},{"x":168,"y":0,"w":893,"h":1018},{"x":360,"y":0,"w":509,"h":1018},{"x":0,"y":0,"w":1228,"h":1018}]},"media_results":{"result":{"media_key":"3_1939132725128003584"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/pq2jshcXUe","expanded_url":"https://x.com/charlespacker/status/1939133021652951436/photo/1","id_str":"1939132725128003584","indices":[23,46],"media_key":"3_1939132725128003584","media_url_https":"https://pbs.twimg.com/media/Guku8IWXAAAKIeq.jpg","type":"photo","url":"https://t.co/pq2jshcXUe","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h"
:1018,"w":1228,"resize":"fit"},"medium":{"h":995,"w":1200,"resize":"fit"},"small":{"h":564,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1018,"width":1228,"focus_rects":[{"x":0,"y":0,"w":1228,"h":688},{"x":105,"y":0,"w":1018,"h":1018},{"x":168,"y":0,"w":893,"h":1018},{"x":360,"y":0,"w":509,"h":1018},{"x":0,"y":0,"w":1228,"h":1018}]},"media_results":{"result":{"media_key":"3_1939132725128003584"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1939133021652951436","view_count":7238,"bookmark_count":9,"created_at":1751160314000,"favorite_count":77,"quote_count":2,"reply_count":6,"retweet_count":4,"user_id_str":"2385913832","conversation_id_str":"1939133021652951436","full_text":"Welcome to Claude Code https://t.co/pq2jshcXUe","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,234],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/VqavRziOW6","expanded_url":"https://x.com/charlespacker/status/1964158075297726764/photo/1","id_str":"1964157665472630786","indices":[235,258],"media_key":"3_1964157665472630786","media_url_https":"https://pbs.twimg.com/media/G0IW_X1bUAI3F2O.jpg","type":"photo","url":"https://t.co/VqavRziOW6","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":232,"w":1300,"resize":"fit"},"medium":{"h":214,"w":1200,"resize":"fit"},"small":{"h":121,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":232,"width":1300,"focus_rects":[{"x":0,"y":0,"w":414,"h":232},{"x":46,"y":0,"w":232,"h":232},{"x":60,"y":0,"w":204,"h":232},{"x":104,"y":0,"w":116,"h":232},{"x":0,"y":0,"w":1300,"h":232}]},"media_results":{"result":{"media_key":"3_1964157665472630786"}}}],"symbo
ls":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/VqavRziOW6","expanded_url":"https://x.com/charlespacker/status/1964158075297726764/photo/1","id_str":"1964157665472630786","indices":[235,258],"media_key":"3_1964157665472630786","media_url_https":"https://pbs.twimg.com/media/G0IW_X1bUAI3F2O.jpg","type":"photo","url":"https://t.co/VqavRziOW6","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":232,"w":1300,"resize":"fit"},"medium":{"h":214,"w":1200,"resize":"fit"},"small":{"h":121,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":232,"width":1300,"focus_rects":[{"x":0,"y":0,"w":414,"h":232},{"x":46,"y":0,"w":232,"h":232},{"x":60,"y":0,"w":204,"h":232},{"x":104,"y":0,"w":116,"h":232},{"x":0,"y":0,"w":1300,"h":232}]},"media_results":{"result":{"media_key":"3_1964157665472630786"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1964158075297726764","view_count":3498,"bookmark_count":1,"created_at":1757126752000,"favorite_count":12,"quote_count":2,"reply_count":6,"retweet_count":2,"user_id_str":"2385913832","conversation_id_str":"1964158075297726764","full_text":"decided to revisit codex w/ gpt-5 during today's opus outage, and it:\n\n1. deleted my database\n\n2. started refusing to write code, offering instead to give me diffs that I could \"drop in\"\n\nstill better than claude code w/ sonnet? 
maybe https://t.co/VqavRziOW6","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,275],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","quoted_status_id_str":"1871856522143399961","quoted_status_permalink":{"url":"https://t.co/PlSmBUoFBL","expanded":"https://twitter.com/mark_k/status/1871856522143399961","display":"x.com/mark_k/status/…"},"retweeted":false,"fact_check":null,"id":"1872254532115152968","view_count":1674,"bookmark_count":3,"created_at":1735215239000,"favorite_count":7,"quote_count":1,"reply_count":4,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1872254532115152968","full_text":"memory in chatgpt has several problems: (1) it’s way too noisy and saves stupid memories that pollute the context window (2) chatgpt users already have years of muscle memory and use new chats as “sessions” to manually solve the context management problem. 1+2 mean that many (most?) 
power users turn memory off.\n\nmemory is the future but chatgpt-style ephemeral chat is a local optimum that’s hard to escape once you already have distribution like chatgpt does","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":1,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,92],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/y4fjFrFiLf","expanded_url":"https://x.com/charlespacker/status/1885201401942597740/photo/1","id_str":"1885201397580521474","indices":[93,116],"media_key":"3_1885201397580521474","media_url_https":"https://pbs.twimg.com/media/GimUrtDbYAIQw-G.jpg","type":"photo","url":"https://t.co/y4fjFrFiLf","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":2016,"w":1512,"resize":"fit"},"medium":{"h":1200,"w":900,"resize":"fit"},"small":{"h":680,"w":510,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":2016,"width":1512,"focus_rects":[{"x":0,"y":937,"w":1512,"h":847},{"x":0,"y":504,"w":1512,"h":1512},{"x":0,"y":292,"w":1512,"h":1724},{"x":504,"y":0,"w":1008,"h":2016},{"x":0,"y":0,"w":1512,"h":2016}]},"media_results":{"result":{"media_key":"3_1885201397580521474"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/y4fjFrFiLf","expanded_url":"https://x.com/charlespacker/status/1885201401942597740/photo/1","id_str":"1885201397580521474","indices":[93,116],"media_key":"3_1885201397580521474","media_url_https":"https://pbs.twimg.com/media/GimUrtDbYAIQw-G.jpg","type":"photo","url":"https://t.co/y4fjFrFiLf","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":2016,"w":1512,"resize":"fit"},"medium":{"h":1200,"w":900,"resize":"fit"},"small":{"h":680,"w":510,"resize":"f
it"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":2016,"width":1512,"focus_rects":[{"x":0,"y":937,"w":1512,"h":847},{"x":0,"y":504,"w":1512,"h":1512},{"x":0,"y":292,"w":1512,"h":1724},{"x":504,"y":0,"w":1008,"h":2016},{"x":0,"y":0,"w":1512,"h":2016}]},"media_results":{"result":{"media_key":"3_1885201397580521474"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1885201401942597740","view_count":1439,"bookmark_count":1,"created_at":1738302014000,"favorite_count":23,"quote_count":2,"reply_count":4,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1885201401942597740","full_text":"stateful agents with persistence and memory, fully local, coming soon to macOS and Windows 🌚 https://t.co/y4fjFrFiLf","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,79],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/zNc0POkbhT","expanded_url":"https://x.com/charlespacker/status/1949990429505835314/photo/1","id_str":"1949988230905909248","indices":[80,103],"media_key":"3_1949988230905909248","media_url_https":"https://pbs.twimg.com/media/Gw-_9utW0AAOfvD.jpg","type":"photo","url":"https://t.co/zNc0POkbhT","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":806,"w":2048,"resize":"fit"},"medium":{"h":472,"w":1200,"resize":"fit"},"small":{"h":268,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1612,"width":4096,"focus_rects":[{"x":1119,"y":0,"w":2879,"h":1612},{"x":1752,"y":0,"w":1612,"h":1612},{"x":1851,"y":0,"w":1414,"h":1612},{"x":2155,"y":0,"w":806,"h":1612},{"x":0,"y":0,"w":4096,"h":1612}]},"media_results":{"result":{"media_key":"3_1949988230905909248"}}}],"symbols":[],"timestamps":[
],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/zNc0POkbhT","expanded_url":"https://x.com/charlespacker/status/1949990429505835314/photo/1","id_str":"1949988230905909248","indices":[80,103],"media_key":"3_1949988230905909248","media_url_https":"https://pbs.twimg.com/media/Gw-_9utW0AAOfvD.jpg","type":"photo","url":"https://t.co/zNc0POkbhT","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":806,"w":2048,"resize":"fit"},"medium":{"h":472,"w":1200,"resize":"fit"},"small":{"h":268,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1612,"width":4096,"focus_rects":[{"x":1119,"y":0,"w":2879,"h":1612},{"x":1752,"y":0,"w":1612,"h":1612},{"x":1851,"y":0,"w":1414,"h":1612},{"x":2155,"y":0,"w":806,"h":1612},{"x":0,"y":0,"w":4096,"h":1612}]},"media_results":{"result":{"media_key":"3_1949988230905909248"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1949990429505835314","view_count":549,"bookmark_count":0,"created_at":1753748922000,"favorite_count":10,"quote_count":0,"reply_count":4,"retweet_count":1,"user_id_str":"2385913832","conversation_id_str":"1949990429505835314","full_text":"if other startups aren't shamelessly cloning your website/product you're ngmi 📈 
https://t.co/zNc0POkbhT","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,92],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","quoted_status_id_str":"1983057852970463594","quoted_status_permalink":{"url":"https://t.co/ztXL3kpVYH","expanded":"https://twitter.com/sarahwooders/status/1983057852970463594","display":"x.com/sarahwooders/s…"},"retweeted":false,"fact_check":null,"id":"1983058188812660916","view_count":14863,"bookmark_count":19,"created_at":1761632890000,"favorite_count":95,"quote_count":0,"reply_count":4,"retweet_count":4,"user_id_str":"2385913832","conversation_id_str":"1983058188812660916","full_text":"the terrible responses api rollout is a perfect example of how to lose first mover advantage","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":1,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,279],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/kZbHFCequL","expanded_url":"https://x.com/charlespacker/status/1844931237510774988/photo/1","id_str":"1844930371965812736","indices":[280,303],"media_key":"3_1844930371965812736","media_url_https":"https://pbs.twimg.com/media/GZqCZgBaAAAKOV2.jpg","type":"photo","url":"https://t.co/kZbHFCequL","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1094,"w":1460,"resize":"fit"},"medium":{"h":899,"w":1200,"resize":"fit"},"small":{"h":510,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1094,"width":1460,"focus_rects":[{"x":0,"y":276,"w":1460,"h":818},{"x":0,"y":0,"w":1094,"h":1094},{"x":0,"y":0,"w":960,"h":1094},{"x":0,"y":0,"w":547,"h":1094},{"x":0,"y":0,"w":1460,"h":1094}]},"media_results":{"result":{"media_key":"3_18449
30371965812736"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1821252546469752832","name":"Letta","screen_name":"Letta_AI","indices":[261,270]},{"id_str":"1821252546469752832","name":"Letta","screen_name":"Letta_AI","indices":[261,270]}]},"extended_entities":{"media":[{"display_url":"pic.x.com/kZbHFCequL","expanded_url":"https://x.com/charlespacker/status/1844931237510774988/photo/1","id_str":"1844930371965812736","indices":[280,303],"media_key":"3_1844930371965812736","media_url_https":"https://pbs.twimg.com/media/GZqCZgBaAAAKOV2.jpg","type":"photo","url":"https://t.co/kZbHFCequL","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1094,"w":1460,"resize":"fit"},"medium":{"h":899,"w":1200,"resize":"fit"},"small":{"h":510,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1094,"width":1460,"focus_rects":[{"x":0,"y":276,"w":1460,"h":818},{"x":0,"y":0,"w":1094,"h":1094},{"x":0,"y":0,"w":960,"h":1094},{"x":0,"y":0,"w":547,"h":1094},{"x":0,"y":0,"w":1460,"h":1094}]},"media_results":{"result":{"media_key":"3_1844930371965812736"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1844931237510774988","view_count":769,"bookmark_count":3,"created_at":1728700858000,"favorite_count":9,"quote_count":0,"reply_count":3,"retweet_count":1,"user_id_str":"2385913832","conversation_id_str":"1844931237510774988","full_text":"it's interesting to see openai's (unofficial) multi-agent framework implement multi-agent via message passing instead of via shared context / \"groupchat\" (ala autogen)\n\nimo message passing is the way to go (hence why we implement multi-agent in the same way in @Letta_AI), though it raises a lot of interesting questions about shared context (how can you get one agent to share memories or data 
streams with another agent) that are unanswered / out of scope in the groupchat model of multi-agent","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,68],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"144333614","name":"Sarah Wooders","screen_name":"sarahwooders","indices":[55,68]}]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1952974423709106302","view_count":270,"bookmark_count":1,"created_at":1754460361000,"favorite_count":5,"quote_count":0,"reply_count":3,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1952974423709106302","full_text":"\"i will never touch python asyncio again in my life\" - @sarahwooders","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null}],"activities":{"nreplies":[{"label":"2025-10-19","value":0,"startTime":1760745600000,"endTime":1760832000000,"tweets":[]},{"label":"2025-10-20","value":0,"startTime":1760832000000,"endTime":1760918400000,"tweets":[]},{"label":"2025-10-21","value":0,"startTime":1760918400000,"endTime":1761004800000,"tweets":[]},{"label":"2025-10-22","value":0,"startTime":1761004800000,"endTime":1761091200000,"tweets":[]},{"label":"2025-10-23","value":0,"startTime":1761091200000,"endTime":1761177600000,"tweets":[]},{"label":"2025-10-24","value":2,"startTime":1761177600000,"endTime":1761264000000,"tweets":[{"bookmarked":false,"display_text_range":[0,272],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1290373758646128641","name":"Bilt","screen_name":"BiltRewards","indices":[1079,1091]}]},"favorited":false,"lang":"en","quoted_status_id_str":"1981385833380032558","quoted_status_permalink":{"url":"https://t.co/I5LCJMTAMn","expanded":"https://twitter.com/Letta_AI/status/1981385833380032558","display":"x.com/Letta_AI/statu…"},"r
etweeted":false,"fact_check":null,"id":"1981421458019754012","view_count":4801,"bookmark_count":15,"created_at":1761242663000,"favorite_count":27,"quote_count":0,"reply_count":2,"retweet_count":4,"user_id_str":"2385913832","conversation_id_str":"1981421458019754012","full_text":"Super excited about this release: Letta Evals is the first evals platform *purpose-built* for stateful agents. What does that actually mean?\n\nWhen you eval agents w/ Letta Evals, you can literally pull an agent out of production (by cloning a replica of its active state), evaluate it, then push live changes back into prod (if desired).\n\nWe eval humans all the time: standardized tests (SAT/ACT), job interviews, driving tests. But when a human takes a test, they actually get the ability to *learn* from that test. If you fail your driving test, that lived experience should help you better prepare for the next \"eval\".\n\nWith Letta Evals, you can now run evaluations of truly stateful agents (not just prompt + tool configurations). This is all made possible due to the existence of AgentFile (.af), our open source portable file format that allows you to serialize any agent's state, which powers the efficient agent replication needed to run Letta Evals at scale.\n\nWill be exciting to see what people build with this. 
Letta Evals is already in production at companies like @BiltRewards , and is used by our own research team to run comprehensive (agentic) evaluations of new frontier models like GPT-5, Sonnet 4.5 and GLM-4.6.","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":1,"is_ai":null,"ai_score":null}]},{"label":"2025-10-25","value":1,"startTime":1761264000000,"endTime":1761350400000,"tweets":[{"bookmarked":false,"display_text_range":[0,285],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"jobs.ashbyhq.com/letta/d95271e3…","expanded_url":"https://jobs.ashbyhq.com/letta/d95271e3-f1ae-4317-8ecd-f25a79a0165c","url":"https://t.co/6C6Pybc6MD","indices":[408,431]}],"user_mentions":[{"id_str":"1821252546469752832","name":"Letta","screen_name":"Letta_AI","indices":[44,53]},{"id_str":"1821252546469752832","name":"Letta","screen_name":"Letta_AI","indices":[40,49]}]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1981790090889400638","view_count":3331,"bookmark_count":14,"created_at":1761330552000,"favorite_count":22,"quote_count":1,"reply_count":1,"retweet_count":4,"user_id_str":"2385913832","conversation_id_str":"1981790090889400638","full_text":"We're hiring researchers & engineers at @Letta_AI to work on AI's hardest problem: memory.\n\nJoin us to work on finding the right memory representations & learning methods (both in-context and in-weights) required to create self-improving AI systems with LLMs.\n\nWe're an open AI company (both research & code) and have an incredibly tight loop from research -> product. 
DMs open + job posting for reference.\n\nhttps://t.co/6C6Pybc6MD","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-10-26","value":0,"startTime":1761350400000,"endTime":1761436800000,"tweets":[{"bookmarked":false,"display_text_range":[0,117],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","quoted_status_id_str":"1981917247095685441","quoted_status_permalink":{"url":"https://t.co/HNhdjG1fLq","expanded":"https://twitter.com/xai/status/1981917247095685441","display":"x.com/xai/status/198…"},"retweeted":false,"fact_check":null,"id":"1982128872478154827","view_count":617,"bookmark_count":1,"created_at":1761411324000,"favorite_count":4,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1982128872478154827","full_text":"thinking machine shipping blog posts, xai shipping ai waifu launch videos\n\n(clean video tho, grok imagine looks 
good)","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":1,"is_ai":null,"ai_score":null}]},{"label":"2025-10-27","value":0,"startTime":1761436800000,"endTime":1761523200000,"tweets":[]},{"label":"2025-10-28","value":0,"startTime":1761523200000,"endTime":1761609600000,"tweets":[]},{"label":"2025-10-29","value":4,"startTime":1761609600000,"endTime":1761696000000,"tweets":[{"bookmarked":false,"display_text_range":[0,92],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","quoted_status_id_str":"1983057852970463594","quoted_status_permalink":{"url":"https://t.co/ztXL3kpVYH","expanded":"https://twitter.com/sarahwooders/status/1983057852970463594","display":"x.com/sarahwooders/s…"},"retweeted":false,"fact_check":null,"id":"1983058188812660916","view_count":14863,"bookmark_count":19,"created_at":1761632890000,"favorite_count":95,"quote_count":0,"reply_count":4,"retweet_count":4,"user_id_str":"2385913832","conversation_id_str":"1983058188812660916","full_text":"the terrible responses api rollout is a perfect example of how to lose first mover 
advantage","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":1,"is_ai":null,"ai_score":null}]},{"label":"2025-10-30","value":0,"startTime":1761696000000,"endTime":1761782400000,"tweets":[]},{"label":"2025-10-31","value":25,"startTime":1761782400000,"endTime":1761868800000,"tweets":[{"bookmarked":false,"display_text_range":[0,272],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/rbwTF1v2V3","expanded_url":"https://x.com/charlespacker/status/1983987055513534903/photo/1","id_str":"1983985762636029952","indices":[273,296],"media_key":"3_1983985762636029952","media_url_https":"https://pbs.twimg.com/media/G4iIih1aYAAxj75.png","type":"photo","url":"https://t.co/rbwTF1v2V3","ext_media_availability":{"status":"Available"},"features":{"all":{"tags":[{"user_id":"1863959670169501696","name":"Kimi.ai","screen_name":"Kimi_Moonshot","type":"user"},{"user_id":"1726486879456096256","name":"Z.ai","screen_name":"Zai_org","type":"user"},{"user_id":"4398626122","name":"OpenAI","screen_name":"OpenAI","type":"user"},{"user_id":"1353836358901501952","name":"Anthropic","screen_name":"AnthropicAI","type":"user"}]},"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1173,"w":2048,"resize":"fit"},"medium":{"h":687,"w":1200,"resize":"fit"},"small":{"h":389,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1276,"width":2228,"focus_rects":[{"x":0,"y":28,"w":2228,"h":1248},{"x":643,"y":0,"w":1276,"h":1276},{"x":722,"y":0,"w":1119,"h":1276},{"x":962,"y":0,"w":638,"h":1276},{"x":0,"y":0,"w":2228,"h":1276}]},"media_results":{"result":{"media_key":"3_1983985762636029952"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/rbwTF1v2V3","expanded_url":"https://x.com/charlespacker/status/1983987055513534903/photo/1","id_str":"1983985762636029952","indices":[273,296],"media_key":"3_1983985762636029952
","media_url_https":"https://pbs.twimg.com/media/G4iIih1aYAAxj75.png","type":"photo","url":"https://t.co/rbwTF1v2V3","ext_media_availability":{"status":"Available"},"features":{"all":{"tags":[{"user_id":"1863959670169501696","name":"Kimi.ai","screen_name":"Kimi_Moonshot","type":"user"},{"user_id":"1726486879456096256","name":"Z.ai","screen_name":"Zai_org","type":"user"},{"user_id":"4398626122","name":"OpenAI","screen_name":"OpenAI","type":"user"},{"user_id":"1353836358901501952","name":"Anthropic","screen_name":"AnthropicAI","type":"user"}]},"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1173,"w":2048,"resize":"fit"},"medium":{"h":687,"w":1200,"resize":"fit"},"small":{"h":389,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1276,"width":2228,"focus_rects":[{"x":0,"y":28,"w":2228,"h":1248},{"x":643,"y":0,"w":1276,"h":1276},{"x":722,"y":0,"w":1119,"h":1276},{"x":962,"y":0,"w":638,"h":1276},{"x":0,"y":0,"w":2228,"h":1276}]},"media_results":{"result":{"media_key":"3_1983985762636029952"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1983987055513534903","view_count":27458,"bookmark_count":167,"created_at":1761854349000,"favorite_count":329,"quote_count":5,"reply_count":19,"retweet_count":35,"user_id_str":"2385913832","conversation_id_str":"1983987055513534903","full_text":"Today we're releasing Context-Bench, a benchmark (and live leaderboard!) measuring LLMs on Agentic Context Engineering.\n\nC-Bench measures an agent's ability to manipulate its own context window, a necessary skill for AI agents that can self-improve and continually learn. 
https://t.co/rbwTF1v2V3","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,262],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"charlespacker","lang":"en","retweeted":false,"fact_check":null,"id":"1983987057883279638","view_count":888,"bookmark_count":0,"created_at":1761854349000,"favorite_count":3,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1983987055513534903","full_text":"Modern agents like Claude Code, Codex, and Cursor rely on tools to retrieve information into their context windows, from API/MCP triggers to editing code with Bash and Unix tools, to more advanced use-cases such as editing long-term memories and loading skills.","in_reply_to_user_id_str":"2385913832","in_reply_to_status_id_str":"1983987055513534903","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,251],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1353836358901501952","name":"Anthropic","screen_name":"AnthropicAI","indices":[22,34]}]},"favorited":false,"in_reply_to_screen_name":"charlespacker","lang":"en","retweeted":false,"fact_check":null,"id":"1983987059284115622","view_count":931,"bookmark_count":0,"created_at":1761854350000,"favorite_count":5,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"2385913832","conversation_id_str":"1983987055513534903","full_text":"Frontier AI labs like @AnthropicAI are now explicitly training their new models to be \"self-aware\" of their context windows. 
Agentic context engineering is the new frontier, but there's no clear open benchmark for evaluating this capability in models.","in_reply_to_user_id_str":"2385913832","in_reply_to_status_id_str":"1983987057883279638","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,278],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"leaderboard.letta.com","expanded_url":"https://leaderboard.letta.com","url":"https://t.co/yqigAGEhUL","indices":[217,240]},{"display_url":"letta.com/blog/context-b…","expanded_url":"https://www.letta.com/blog/context-bench","url":"https://t.co/SvCDauPwon","indices":[255,278]}],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"charlespacker","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1983987060500480369","view_count":859,"bookmark_count":4,"created_at":1761854350000,"favorite_count":5,"quote_count":0,"reply_count":3,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1983987055513534903","full_text":"Context-Bench is an exciting benchmark for the OSS community: the gap between frontier open weights and closed weights models appears to be closing: GLM 4.6 and Kimi K2 are incredible models!\n\nLeaderboard is live at https://t.co/yqigAGEhUL\n\nBreakdown at 
https://t.co/SvCDauPwon","in_reply_to_user_id_str":"2385913832","in_reply_to_status_id_str":"1983987059284115622","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[11,28],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"4426138094","name":"ryan","screen_name":"wacheeeee","indices":[0,10]}]},"favorited":false,"in_reply_to_screen_name":"wacheeeee","lang":"en","retweeted":false,"fact_check":null,"id":"1984015003545182357","view_count":406,"bookmark_count":0,"created_at":1761861012000,"favorite_count":6,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1984007696224547044","full_text":"@wacheeeee 996 = 1 day off 🧐","in_reply_to_user_id_str":"4426138094","in_reply_to_status_id_str":"1984007696224547044","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-01","value":0,"startTime":1761868800000,"endTime":1761955200000,"tweets":[{"bookmarked":false,"display_text_range":[12,35],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"github.com/letta-ai/letta…","expanded_url":"https://github.com/letta-ai/letta-obsidian","url":"https://t.co/v04ZWvc1Le","indices":[12,35]}],"user_mentions":[{"id_str":"1328913688892346370","name":"Sully","screen_name":"SullyOmarr","indices":[0,11]}]},"favorited":false,"in_reply_to_screen_name":"SullyOmarr","lang":"qme","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1984063277249524003","view_count":144,"bookmark_count":0,"created_at":1761872522000,"favorite_count":4,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1984031908049940583","full_text":"@SullyOmarr 
https://t.co/v04ZWvc1Le","in_reply_to_user_id_str":"1328913688892346370","in_reply_to_status_id_str":"1984031908049940583","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[55,309],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1939709358071312384","name":"halluton","screen_name":"halluton","indices":[0,9]},{"id_str":"1863959670169501696","name":"Kimi.ai","screen_name":"Kimi_Moonshot","indices":[10,24]},{"id_str":"1726486879456096256","name":"Z.ai","screen_name":"Zai_org","indices":[25,33]},{"id_str":"4398626122","name":"OpenAI","screen_name":"OpenAI","indices":[34,41]},{"id_str":"1353836358901501952","name":"Anthropic","screen_name":"AnthropicAI","indices":[42,54]}]},"favorited":false,"in_reply_to_screen_name":"halluton","lang":"en","retweeted":false,"fact_check":null,"id":"1984332937819722048","view_count":110,"bookmark_count":0,"created_at":1761936814000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1983987055513534903","full_text":"@halluton @Kimi_Moonshot @Zai_org @OpenAI @AnthropicAI both! 
we measure effectiveness of tool use (eg did you use grep correctly, or use grep instead of find), as well as curation quality via task completion (you need to curate effectively to answer correctly, and even the best model currently only gets 74%)","in_reply_to_user_id_str":"1939709358071312384","in_reply_to_status_id_str":"1984329735611052415","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[62,338],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"2580421885","name":"Jintao Zhang 张晋涛","screen_name":"zhangjintao9020","indices":[0,16]},{"id_str":"1863959670169501696","name":"Kimi.ai","screen_name":"Kimi_Moonshot","indices":[17,31]},{"id_str":"1726486879456096256","name":"Z.ai","screen_name":"Zai_org","indices":[32,40]},{"id_str":"4398626122","name":"OpenAI","screen_name":"OpenAI","indices":[41,48]},{"id_str":"1353836358901501952","name":"Anthropic","screen_name":"AnthropicAI","indices":[49,61]}]},"favorited":false,"in_reply_to_screen_name":"zhangjintao9020","lang":"en","retweeted":false,"fact_check":null,"id":"1984329785498353916","view_count":165,"bookmark_count":0,"created_at":1761936062000,"favorite_count":1,"quote_count":1,"reply_count":0,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1983987055513534903","full_text":"Yeah it's a great question! Basically GPT-5 consumes more tokens to complete the task - w/ a multiple that's higher than the relative multiple on $/MTok, so it ends up costing more. From the blog:\n\"We track the total cost to run each model on the benchmark. 
Cost reveals model efficiency: models with higher per-token prices may use significantly fewer tokens to accomplish the same task, making total cost a better metric than price alone for evaluating production viability.\"","in_reply_to_user_id_str":"2580421885","in_reply_to_status_id_str":"1984283225745920387","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[69,75],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"14191817","name":"Sven Meyer","screen_name":"SvenMeyer","indices":[0,10]},{"id_str":"1863959670169501696","name":"Kimi.ai","screen_name":"Kimi_Moonshot","indices":[11,25]},{"id_str":"1726486879456096256","name":"Z.ai","screen_name":"Zai_org","indices":[26,34]},{"id_str":"4398626122","name":"OpenAI","screen_name":"OpenAI","indices":[35,42]},{"id_str":"1353836358901501952","name":"Anthropic","screen_name":"AnthropicAI","indices":[43,55]},{"id_str":"1875078099538423808","name":"MiniMax (official)","screen_name":"MiniMax__AI","indices":[56,68]}]},"favorited":false,"in_reply_to_screen_name":"SvenMeyer","lang":"en","retweeted":false,"fact_check":null,"id":"1984329875046744249","view_count":80,"bookmark_count":0,"created_at":1761936083000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1983987055513534903","full_text":"@SvenMeyer @Kimi_Moonshot @Zai_org @OpenAI @AnthropicAI @MiniMax__AI agreed","in_reply_to_user_id_str":"14191817","in_reply_to_status_id_str":"1984162159530799351","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[62,73],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1118096108","name":"Reinaldo 
Sotillo","screen_name":"reynald76165051","indices":[0,16]},{"id_str":"1863959670169501696","name":"Kimi.ai","screen_name":"Kimi_Moonshot","indices":[17,31]},{"id_str":"1726486879456096256","name":"Z.ai","screen_name":"Zai_org","indices":[32,40]},{"id_str":"4398626122","name":"OpenAI","screen_name":"OpenAI","indices":[41,48]},{"id_str":"1353836358901501952","name":"Anthropic","screen_name":"AnthropicAI","indices":[49,61]}]},"favorited":false,"in_reply_to_screen_name":"reynald76165051","lang":"en","retweeted":false,"fact_check":null,"id":"1984333001959096643","view_count":50,"bookmark_count":0,"created_at":1761936829000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1983987055513534903","full_text":"@reynald76165051 @Kimi_Moonshot @Zai_org @OpenAI @AnthropicAI sleeper hit","in_reply_to_user_id_str":"1118096108","in_reply_to_status_id_str":"1984311694693224529","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-02","value":0,"startTime":1761955200000,"endTime":1762041600000,"tweets":[]},{"label":"2025-11-03","value":0,"startTime":1762041600000,"endTime":1762128000000,"tweets":[]},{"label":"2025-11-04","value":3,"startTime":1762128000000,"endTime":1762214400000,"tweets":[{"bookmarked":false,"display_text_range":[16,289],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"youtube.com/watch?v=pidnIH…","expanded_url":"https://www.youtube.com/watch?v=pidnIHdA1Y8","url":"https://t.co/s8OVJ7uT8p","indices":[266,289]}],"user_mentions":[{"id_str":"1051829114494177282","name":"Nico 
Albanese","screen_name":"nicoalbanese10","indices":[0,15]},{"id_str":"1821252546469752832","name":"Letta","screen_name":"Letta_AI","indices":[45,54]},{"id_str":"1821252546469752832","name":"Letta","screen_name":"Letta_AI","indices":[254,263]}]},"favorited":false,"in_reply_to_screen_name":"nicoalbanese10","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1985249165887352999","view_count":2044,"bookmark_count":13,"created_at":1762155259000,"favorite_count":20,"quote_count":0,"reply_count":3,"retweet_count":4,"user_id_str":"2385913832","conversation_id_str":"1984662238968656329","full_text":"@nicoalbanese10 yeah it works beautifully in @Letta_AI, since it's basically post-training of claude to be better at \"MemGPT\"/Letta-style context engineering\n\ngreat example of better post-training (claude) lifting the performance of an existing harness (@Letta_AI)\n\nhttps://t.co/s8OVJ7uT8p","in_reply_to_user_id_str":"1051829114494177282","in_reply_to_status_id_str":"1984662238968656329","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-05","value":0,"startTime":1762214400000,"endTime":1762300800000,"tweets":[{"bookmarked":false,"display_text_range":[12,12],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/NNqoPHnSGP","expanded_url":"https://x.com/charlespacker/status/1985555041160516033/photo/1","id_str":"1985555038333534208","indices":[13,36],"media_key":"3_1985555038333534208","media_url_https":"https://pbs.twimg.com/media/G44byZXbgAAQ1QM.jpg","type":"photo","url":"https://t.co/NNqoPHnSGP","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":516,"w":516,"resize":"fit"},"medium":{"h":516,"w":516,"resize":"fit"},"small":{"h":516,"w":516,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":516,"width":516,"focus_rects":[{"x":0,"y":126,"w":
516,"h":289},{"x":0,"y":0,"w":516,"h":516},{"x":0,"y":0,"w":453,"h":516},{"x":90,"y":0,"w":258,"h":516},{"x":0,"y":0,"w":516,"h":516}]},"media_results":{"result":{"media_key":"3_1985555038333534208"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1679960354087145473","name":"Jeson Lee","screen_name":"thejesonlee","indices":[0,12]}]},"extended_entities":{"media":[{"display_url":"pic.x.com/NNqoPHnSGP","expanded_url":"https://x.com/charlespacker/status/1985555041160516033/photo/1","id_str":"1985555038333534208","indices":[13,36],"media_key":"3_1985555038333534208","media_url_https":"https://pbs.twimg.com/media/G44byZXbgAAQ1QM.jpg","type":"photo","url":"https://t.co/NNqoPHnSGP","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":516,"w":516,"resize":"fit"},"medium":{"h":516,"w":516,"resize":"fit"},"small":{"h":516,"w":516,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":516,"width":516,"focus_rects":[{"x":0,"y":126,"w":516,"h":289},{"x":0,"y":0,"w":516,"h":516},{"x":0,"y":0,"w":453,"h":516},{"x":90,"y":0,"w":258,"h":516},{"x":0,"y":0,"w":516,"h":516}]},"media_results":{"result":{"media_key":"3_1985555038333534208"}}}]},"favorited":false,"in_reply_to_screen_name":"thejesonlee","lang":"qme","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1985555041160516033","view_count":335,"bookmark_count":0,"created_at":1762228186000,"favorite_count":3,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1985223675118018711","full_text":"@thejesonlee 
https://t.co/NNqoPHnSGP","in_reply_to_user_id_str":"1679960354087145473","in_reply_to_status_id_str":"1985223675118018711","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[14,89],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"144333614","name":"Sarah Wooders","screen_name":"sarahwooders","indices":[0,13]}]},"favorited":false,"in_reply_to_screen_name":"sarahwooders","lang":"en","retweeted":false,"fact_check":null,"id":"1985553615629795810","view_count":816,"bookmark_count":0,"created_at":1762227846000,"favorite_count":5,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1985552377953272053","full_text":"@sarahwooders can we talk about the political and economic state of the world right now??","in_reply_to_user_id_str":"144333614","in_reply_to_status_id_str":"1985552377953272053","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-06","value":0,"startTime":1762300800000,"endTime":1762387200000,"tweets":[]},{"label":"2025-11-07","value":0,"startTime":1762387200000,"endTime":1762473600000,"tweets":[]},{"label":"2025-11-08","value":0,"startTime":1762473600000,"endTime":1762560000000,"tweets":[]},{"label":"2025-11-09","value":0,"startTime":1762560000000,"endTime":1762646400000,"tweets":[]},{"label":"2025-11-10","value":0,"startTime":1762646400000,"endTime":1762732800000,"tweets":[]},{"label":"2025-11-11","value":0,"startTime":1762732800000,"endTime":1762819200000,"tweets":[]},{"label":"2025-11-12","value":0,"startTime":1762819200000,"endTime":1762905600000,"tweets":[]},{"label":"2025-11-13","value":0,"startTime":1762905600000,"endTime":1762992000000,"tweets":[]},{"label":"2025-11-14","value":3,"startTime":1762992000000,"endTime":1763078400000,"tweets":[{"bookmarked":false,"display_text_range":[0,260],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/MOoy0hmlVw","expanded_url":"https://x.com/charlespacker/sta
tus/1989111742241423375/photo/1","id_str":"1989111195979378689","indices":[261,284],"media_key":"3_1989111195979378689","media_url_https":"https://pbs.twimg.com/media/G5q-GA8acAEflTi.jpg","type":"photo","url":"https://t.co/MOoy0hmlVw","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":731,"w":2048,"resize":"fit"},"medium":{"h":428,"w":1200,"resize":"fit"},"small":{"h":243,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":804,"width":2254,"focus_rects":[{"x":818,"y":0,"w":1436,"h":804},{"x":1450,"y":0,"w":804,"h":804},{"x":1549,"y":0,"w":705,"h":804},{"x":1771,"y":0,"w":402,"h":804},{"x":0,"y":0,"w":2254,"h":804}]},"media_results":{"result":{"media_key":"3_1989111195979378689"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/MOoy0hmlVw","expanded_url":"https://x.com/charlespacker/status/1989111742241423375/photo/1","id_str":"1989111195979378689","indices":[261,284],"media_key":"3_1989111195979378689","media_url_https":"https://pbs.twimg.com/media/G5q-GA8acAEflTi.jpg","type":"photo","url":"https://t.co/MOoy0hmlVw","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":731,"w":2048,"resize":"fit"},"medium":{"h":428,"w":1200,"resize":"fit"},"small":{"h":243,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":804,"width":2254,"focus_rects":[{"x":818,"y":0,"w":1436,"h":804},{"x":1450,"y":0,"w":804,"h":804},{"x":1549,"y":0,"w":705,"h":804},{"x":1771,"y":0,"w":402,"h":804},{"x":0,"y":0,"w":2254,"h":804}]},"media_results":{"result":{"media_key":"3_1989111195979378689"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"
id":"1989111742241423375","view_count":200,"bookmark_count":0,"created_at":1763076169000,"favorite_count":3,"quote_count":0,"reply_count":3,"retweet_count":1,"user_id_str":"2385913832","conversation_id_str":"1989111742241423375","full_text":"the letta agents working in our open source skills repo have collectively proposed a .CULTURE.md file to control for skill quality 😂\n\n\"New here? .CULTURE.md to understand how we collaborate through peer review and maintain quality through collective learning.\" https://t.co/MOoy0hmlVw","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-15","value":0,"startTime":1763078400000,"endTime":1763164800000,"tweets":[]},{"label":"2025-11-16","value":0,"startTime":1763164800000,"endTime":1763251200000,"tweets":[]},{"label":"2025-11-17","value":0,"startTime":1763251200000,"endTime":1763337600000,"tweets":[]},{"label":"2025-11-18","value":0,"startTime":1763337600000,"endTime":1763424000000,"tweets":[]}],"nbookmarks":[{"label":"2025-10-19","value":0,"startTime":1760745600000,"endTime":1760832000000,"tweets":[]},{"label":"2025-10-20","value":0,"startTime":1760832000000,"endTime":1760918400000,"tweets":[]},{"label":"2025-10-21","value":0,"startTime":1760918400000,"endTime":1761004800000,"tweets":[]},{"label":"2025-10-22","value":0,"startTime":1761004800000,"endTime":1761091200000,"tweets":[]},{"label":"2025-10-23","value":0,"startTime":1761091200000,"endTime":1761177600000,"tweets":[]},{"label":"2025-10-24","value":15,"startTime":1761177600000,"endTime":1761264000000,"tweets":[{"bookmarked":false,"display_text_range":[0,272],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1290373758646128641","name":"Bilt","screen_name":"BiltRewards","indices":[1079,1091]}]},"favorited":false,"lang":"en","quoted_status_id_str":"1981385833380032558","quoted_status_permalink":{"url":"https://t.co/I5LCJMTAMn","expanded":"https://twitt
er.com/Letta_AI/status/1981385833380032558","display":"x.com/Letta_AI/statu…"},"retweeted":false,"fact_check":null,"id":"1981421458019754012","view_count":4801,"bookmark_count":15,"created_at":1761242663000,"favorite_count":27,"quote_count":0,"reply_count":2,"retweet_count":4,"user_id_str":"2385913832","conversation_id_str":"1981421458019754012","full_text":"Super excited about this release: Letta Evals is the first evals platform *purpose-built* for stateful agents. What does that actually mean?\n\nWhen you eval agents w/ Letta Evals, you can literally pull an agent out of production (by cloning a replica of its active state), evaluate it, then push live changes back into prod (if desired).\n\nWe eval humans all the time: standardized tests (SAT/ACT), job interviews, driving tests. But when a human takes a test, they actually get the ability to *learn* from that test. If you fail your driving test, that lived experience should help you better prepare for the next \"eval\".\n\nWith Letta Evals, you can now run evaluations of truly stateful agents (not just prompt + tool configurations). This is all made possible due to the existence of AgentFile (.af), our open source portable file format that allows you to serialize any agent's state, which powers the efficient agent replication needed to run Letta Evals at scale.\n\nWill be exciting to see what people build with this. 
Letta Evals is already in production at companies like @BiltRewards , and is used by our own research team to run comprehensive (agentic) evaluations of new frontier models like GPT-5, Sonnet 4.5 and GLM-4.6.","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":1,"is_ai":null,"ai_score":null}]},{"label":"2025-10-25","value":14,"startTime":1761264000000,"endTime":1761350400000,"tweets":[{"bookmarked":false,"display_text_range":[0,285],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"jobs.ashbyhq.com/letta/d95271e3…","expanded_url":"https://jobs.ashbyhq.com/letta/d95271e3-f1ae-4317-8ecd-f25a79a0165c","url":"https://t.co/6C6Pybc6MD","indices":[408,431]}],"user_mentions":[{"id_str":"1821252546469752832","name":"Letta","screen_name":"Letta_AI","indices":[44,53]},{"id_str":"1821252546469752832","name":"Letta","screen_name":"Letta_AI","indices":[40,49]}]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1981790090889400638","view_count":3331,"bookmark_count":14,"created_at":1761330552000,"favorite_count":22,"quote_count":1,"reply_count":1,"retweet_count":4,"user_id_str":"2385913832","conversation_id_str":"1981790090889400638","full_text":"We're hiring researchers & engineers at @Letta_AI to work on AI's hardest problem: memory.\n\nJoin us to work on finding the right memory representations & learning methods (both in-context and in-weights) required to create self-improving AI systems with LLMs.\n\nWe're an open AI company (both research & code) and have an incredibly tight loop from research -> product. 
DMs open + job posting for reference.\n\nhttps://t.co/6C6Pybc6MD","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-10-26","value":1,"startTime":1761350400000,"endTime":1761436800000,"tweets":[{"bookmarked":false,"display_text_range":[0,117],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","quoted_status_id_str":"1981917247095685441","quoted_status_permalink":{"url":"https://t.co/HNhdjG1fLq","expanded":"https://twitter.com/xai/status/1981917247095685441","display":"x.com/xai/status/198…"},"retweeted":false,"fact_check":null,"id":"1982128872478154827","view_count":617,"bookmark_count":1,"created_at":1761411324000,"favorite_count":4,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1982128872478154827","full_text":"thinking machine shipping blog posts, xai shipping ai waifu launch videos\n\n(clean video tho, grok imagine looks 
good)","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":1,"is_ai":null,"ai_score":null}]},{"label":"2025-10-27","value":0,"startTime":1761436800000,"endTime":1761523200000,"tweets":[]},{"label":"2025-10-28","value":0,"startTime":1761523200000,"endTime":1761609600000,"tweets":[]},{"label":"2025-10-29","value":19,"startTime":1761609600000,"endTime":1761696000000,"tweets":[{"bookmarked":false,"display_text_range":[0,92],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","quoted_status_id_str":"1983057852970463594","quoted_status_permalink":{"url":"https://t.co/ztXL3kpVYH","expanded":"https://twitter.com/sarahwooders/status/1983057852970463594","display":"x.com/sarahwooders/s…"},"retweeted":false,"fact_check":null,"id":"1983058188812660916","view_count":14863,"bookmark_count":19,"created_at":1761632890000,"favorite_count":95,"quote_count":0,"reply_count":4,"retweet_count":4,"user_id_str":"2385913832","conversation_id_str":"1983058188812660916","full_text":"the terrible responses api rollout is a perfect example of how to lose first mover 
advantage","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":1,"is_ai":null,"ai_score":null}]},{"label":"2025-10-30","value":0,"startTime":1761696000000,"endTime":1761782400000,"tweets":[]},{"label":"2025-10-31","value":171,"startTime":1761782400000,"endTime":1761868800000,"tweets":[{"bookmarked":false,"display_text_range":[0,272],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/rbwTF1v2V3","expanded_url":"https://x.com/charlespacker/status/1983987055513534903/photo/1","id_str":"1983985762636029952","indices":[273,296],"media_key":"3_1983985762636029952","media_url_https":"https://pbs.twimg.com/media/G4iIih1aYAAxj75.png","type":"photo","url":"https://t.co/rbwTF1v2V3","ext_media_availability":{"status":"Available"},"features":{"all":{"tags":[{"user_id":"1863959670169501696","name":"Kimi.ai","screen_name":"Kimi_Moonshot","type":"user"},{"user_id":"1726486879456096256","name":"Z.ai","screen_name":"Zai_org","type":"user"},{"user_id":"4398626122","name":"OpenAI","screen_name":"OpenAI","type":"user"},{"user_id":"1353836358901501952","name":"Anthropic","screen_name":"AnthropicAI","type":"user"}]},"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1173,"w":2048,"resize":"fit"},"medium":{"h":687,"w":1200,"resize":"fit"},"small":{"h":389,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1276,"width":2228,"focus_rects":[{"x":0,"y":28,"w":2228,"h":1248},{"x":643,"y":0,"w":1276,"h":1276},{"x":722,"y":0,"w":1119,"h":1276},{"x":962,"y":0,"w":638,"h":1276},{"x":0,"y":0,"w":2228,"h":1276}]},"media_results":{"result":{"media_key":"3_1983985762636029952"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/rbwTF1v2V3","expanded_url":"https://x.com/charlespacker/status/1983987055513534903/photo/1","id_str":"1983985762636029952","indices":[273,296],"media_key":"3_198398576263602995
2","media_url_https":"https://pbs.twimg.com/media/G4iIih1aYAAxj75.png","type":"photo","url":"https://t.co/rbwTF1v2V3","ext_media_availability":{"status":"Available"},"features":{"all":{"tags":[{"user_id":"1863959670169501696","name":"Kimi.ai","screen_name":"Kimi_Moonshot","type":"user"},{"user_id":"1726486879456096256","name":"Z.ai","screen_name":"Zai_org","type":"user"},{"user_id":"4398626122","name":"OpenAI","screen_name":"OpenAI","type":"user"},{"user_id":"1353836358901501952","name":"Anthropic","screen_name":"AnthropicAI","type":"user"}]},"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1173,"w":2048,"resize":"fit"},"medium":{"h":687,"w":1200,"resize":"fit"},"small":{"h":389,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1276,"width":2228,"focus_rects":[{"x":0,"y":28,"w":2228,"h":1248},{"x":643,"y":0,"w":1276,"h":1276},{"x":722,"y":0,"w":1119,"h":1276},{"x":962,"y":0,"w":638,"h":1276},{"x":0,"y":0,"w":2228,"h":1276}]},"media_results":{"result":{"media_key":"3_1983985762636029952"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1983987055513534903","view_count":27458,"bookmark_count":167,"created_at":1761854349000,"favorite_count":329,"quote_count":5,"reply_count":19,"retweet_count":35,"user_id_str":"2385913832","conversation_id_str":"1983987055513534903","full_text":"Today we're releasing Context-Bench, a benchmark (and live leaderboard!) measuring LLMs on Agentic Context Engineering.\n\nC-Bench measures an agent's ability to manipulate its own context window, a necessary skill for AI agents that can self-improve and continually learn. 
https://t.co/rbwTF1v2V3","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,262],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"charlespacker","lang":"en","retweeted":false,"fact_check":null,"id":"1983987057883279638","view_count":888,"bookmark_count":0,"created_at":1761854349000,"favorite_count":3,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1983987055513534903","full_text":"Modern agents like Claude Code, Codex, and Cursor rely on tools to retrieve information into their context windows, from API/MCP triggers to editing code with Bash and Unix tools, to more advanced use-cases such as editing long-term memories and loading skills.","in_reply_to_user_id_str":"2385913832","in_reply_to_status_id_str":"1983987055513534903","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,251],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1353836358901501952","name":"Anthropic","screen_name":"AnthropicAI","indices":[22,34]}]},"favorited":false,"in_reply_to_screen_name":"charlespacker","lang":"en","retweeted":false,"fact_check":null,"id":"1983987059284115622","view_count":931,"bookmark_count":0,"created_at":1761854350000,"favorite_count":5,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"2385913832","conversation_id_str":"1983987055513534903","full_text":"Frontier AI labs like @AnthropicAI are now explicitly training their new models to be \"self-aware\" of their context windows. 
Agentic context engineering is the new frontier, but there's no clear open benchmark for evaluating this capability in models.","in_reply_to_user_id_str":"2385913832","in_reply_to_status_id_str":"1983987057883279638","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,278],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"leaderboard.letta.com","expanded_url":"https://leaderboard.letta.com","url":"https://t.co/yqigAGEhUL","indices":[217,240]},{"display_url":"letta.com/blog/context-b…","expanded_url":"https://www.letta.com/blog/context-bench","url":"https://t.co/SvCDauPwon","indices":[255,278]}],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"charlespacker","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1983987060500480369","view_count":859,"bookmark_count":4,"created_at":1761854350000,"favorite_count":5,"quote_count":0,"reply_count":3,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1983987055513534903","full_text":"Context-Bench is an exciting benchmark for the OSS community: the gap between frontier open weights and closed weights models appears to be closing: GLM 4.6 and Kimi K2 are incredible models!\n\nLeaderboard is live at https://t.co/yqigAGEhUL\n\nBreakdown at 
https://t.co/SvCDauPwon","in_reply_to_user_id_str":"2385913832","in_reply_to_status_id_str":"1983987059284115622","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[11,28],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"4426138094","name":"ryan","screen_name":"wacheeeee","indices":[0,10]}]},"favorited":false,"in_reply_to_screen_name":"wacheeeee","lang":"en","retweeted":false,"fact_check":null,"id":"1984015003545182357","view_count":406,"bookmark_count":0,"created_at":1761861012000,"favorite_count":6,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1984007696224547044","full_text":"@wacheeeee 996 = 1 day off 🧐","in_reply_to_user_id_str":"4426138094","in_reply_to_status_id_str":"1984007696224547044","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-01","value":0,"startTime":1761868800000,"endTime":1761955200000,"tweets":[{"bookmarked":false,"display_text_range":[12,35],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"github.com/letta-ai/letta…","expanded_url":"https://github.com/letta-ai/letta-obsidian","url":"https://t.co/v04ZWvc1Le","indices":[12,35]}],"user_mentions":[{"id_str":"1328913688892346370","name":"Sully","screen_name":"SullyOmarr","indices":[0,11]}]},"favorited":false,"in_reply_to_screen_name":"SullyOmarr","lang":"qme","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1984063277249524003","view_count":144,"bookmark_count":0,"created_at":1761872522000,"favorite_count":4,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1984031908049940583","full_text":"@SullyOmarr 
https://t.co/v04ZWvc1Le","in_reply_to_user_id_str":"1328913688892346370","in_reply_to_status_id_str":"1984031908049940583","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[55,309],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1939709358071312384","name":"halluton","screen_name":"halluton","indices":[0,9]},{"id_str":"1863959670169501696","name":"Kimi.ai","screen_name":"Kimi_Moonshot","indices":[10,24]},{"id_str":"1726486879456096256","name":"Z.ai","screen_name":"Zai_org","indices":[25,33]},{"id_str":"4398626122","name":"OpenAI","screen_name":"OpenAI","indices":[34,41]},{"id_str":"1353836358901501952","name":"Anthropic","screen_name":"AnthropicAI","indices":[42,54]}]},"favorited":false,"in_reply_to_screen_name":"halluton","lang":"en","retweeted":false,"fact_check":null,"id":"1984332937819722048","view_count":110,"bookmark_count":0,"created_at":1761936814000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1983987055513534903","full_text":"@halluton @Kimi_Moonshot @Zai_org @OpenAI @AnthropicAI both! 
we measure effectiveness of tool use (eg did you use grep correctly, or use grep instead of find), as well as curation quality via task completion (you need to curate effectively to answer correctly, and even the best model currently only gets 74%)","in_reply_to_user_id_str":"1939709358071312384","in_reply_to_status_id_str":"1984329735611052415","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[62,338],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"2580421885","name":"Jintao Zhang 张晋涛","screen_name":"zhangjintao9020","indices":[0,16]},{"id_str":"1863959670169501696","name":"Kimi.ai","screen_name":"Kimi_Moonshot","indices":[17,31]},{"id_str":"1726486879456096256","name":"Z.ai","screen_name":"Zai_org","indices":[32,40]},{"id_str":"4398626122","name":"OpenAI","screen_name":"OpenAI","indices":[41,48]},{"id_str":"1353836358901501952","name":"Anthropic","screen_name":"AnthropicAI","indices":[49,61]}]},"favorited":false,"in_reply_to_screen_name":"zhangjintao9020","lang":"en","retweeted":false,"fact_check":null,"id":"1984329785498353916","view_count":165,"bookmark_count":0,"created_at":1761936062000,"favorite_count":1,"quote_count":1,"reply_count":0,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1983987055513534903","full_text":"Yeah it's a great question! Basically GPT-5 consumes more tokens to complete the task - w/ a multiple that's higher than the relative multiple on $/MTok, so it ends up costing more. From the blog:\n\"We track the total cost to run each model on the benchmark. 
Cost reveals model efficiency: models with higher per-token prices may use significantly fewer tokens to accomplish the same task, making total cost a better metric than price alone for evaluating production viability.\"","in_reply_to_user_id_str":"2580421885","in_reply_to_status_id_str":"1984283225745920387","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[69,75],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"14191817","name":"Sven Meyer","screen_name":"SvenMeyer","indices":[0,10]},{"id_str":"1863959670169501696","name":"Kimi.ai","screen_name":"Kimi_Moonshot","indices":[11,25]},{"id_str":"1726486879456096256","name":"Z.ai","screen_name":"Zai_org","indices":[26,34]},{"id_str":"4398626122","name":"OpenAI","screen_name":"OpenAI","indices":[35,42]},{"id_str":"1353836358901501952","name":"Anthropic","screen_name":"AnthropicAI","indices":[43,55]},{"id_str":"1875078099538423808","name":"MiniMax (official)","screen_name":"MiniMax__AI","indices":[56,68]}]},"favorited":false,"in_reply_to_screen_name":"SvenMeyer","lang":"en","retweeted":false,"fact_check":null,"id":"1984329875046744249","view_count":80,"bookmark_count":0,"created_at":1761936083000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1983987055513534903","full_text":"@SvenMeyer @Kimi_Moonshot @Zai_org @OpenAI @AnthropicAI @MiniMax__AI agreed","in_reply_to_user_id_str":"14191817","in_reply_to_status_id_str":"1984162159530799351","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[62,73],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1118096108","name":"Reinaldo 
Sotillo","screen_name":"reynald76165051","indices":[0,16]},{"id_str":"1863959670169501696","name":"Kimi.ai","screen_name":"Kimi_Moonshot","indices":[17,31]},{"id_str":"1726486879456096256","name":"Z.ai","screen_name":"Zai_org","indices":[32,40]},{"id_str":"4398626122","name":"OpenAI","screen_name":"OpenAI","indices":[41,48]},{"id_str":"1353836358901501952","name":"Anthropic","screen_name":"AnthropicAI","indices":[49,61]}]},"favorited":false,"in_reply_to_screen_name":"reynald76165051","lang":"en","retweeted":false,"fact_check":null,"id":"1984333001959096643","view_count":50,"bookmark_count":0,"created_at":1761936829000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1983987055513534903","full_text":"@reynald76165051 @Kimi_Moonshot @Zai_org @OpenAI @AnthropicAI sleeper hit","in_reply_to_user_id_str":"1118096108","in_reply_to_status_id_str":"1984311694693224529","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-02","value":0,"startTime":1761955200000,"endTime":1762041600000,"tweets":[]},{"label":"2025-11-03","value":0,"startTime":1762041600000,"endTime":1762128000000,"tweets":[]},{"label":"2025-11-04","value":13,"startTime":1762128000000,"endTime":1762214400000,"tweets":[{"bookmarked":false,"display_text_range":[16,289],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"youtube.com/watch?v=pidnIH…","expanded_url":"https://www.youtube.com/watch?v=pidnIHdA1Y8","url":"https://t.co/s8OVJ7uT8p","indices":[266,289]}],"user_mentions":[{"id_str":"1051829114494177282","name":"Nico 
Albanese","screen_name":"nicoalbanese10","indices":[0,15]},{"id_str":"1821252546469752832","name":"Letta","screen_name":"Letta_AI","indices":[45,54]},{"id_str":"1821252546469752832","name":"Letta","screen_name":"Letta_AI","indices":[254,263]}]},"favorited":false,"in_reply_to_screen_name":"nicoalbanese10","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1985249165887352999","view_count":2044,"bookmark_count":13,"created_at":1762155259000,"favorite_count":20,"quote_count":0,"reply_count":3,"retweet_count":4,"user_id_str":"2385913832","conversation_id_str":"1984662238968656329","full_text":"@nicoalbanese10 yeah it works beautifully in @Letta_AI, since it's basically post-training of claude to be better at \"MemGPT\"/Letta-style context engineering\n\ngreat example of better post-training (claude) lifting the performance of an existing harness (@Letta_AI)\n\nhttps://t.co/s8OVJ7uT8p","in_reply_to_user_id_str":"1051829114494177282","in_reply_to_status_id_str":"1984662238968656329","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-05","value":0,"startTime":1762214400000,"endTime":1762300800000,"tweets":[{"bookmarked":false,"display_text_range":[12,12],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/NNqoPHnSGP","expanded_url":"https://x.com/charlespacker/status/1985555041160516033/photo/1","id_str":"1985555038333534208","indices":[13,36],"media_key":"3_1985555038333534208","media_url_https":"https://pbs.twimg.com/media/G44byZXbgAAQ1QM.jpg","type":"photo","url":"https://t.co/NNqoPHnSGP","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":516,"w":516,"resize":"fit"},"medium":{"h":516,"w":516,"resize":"fit"},"small":{"h":516,"w":516,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":516,"width":516,"focus_rects":[{"x":0,"y":126,"w":
516,"h":289},{"x":0,"y":0,"w":516,"h":516},{"x":0,"y":0,"w":453,"h":516},{"x":90,"y":0,"w":258,"h":516},{"x":0,"y":0,"w":516,"h":516}]},"media_results":{"result":{"media_key":"3_1985555038333534208"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1679960354087145473","name":"Jeson Lee","screen_name":"thejesonlee","indices":[0,12]}]},"extended_entities":{"media":[{"display_url":"pic.x.com/NNqoPHnSGP","expanded_url":"https://x.com/charlespacker/status/1985555041160516033/photo/1","id_str":"1985555038333534208","indices":[13,36],"media_key":"3_1985555038333534208","media_url_https":"https://pbs.twimg.com/media/G44byZXbgAAQ1QM.jpg","type":"photo","url":"https://t.co/NNqoPHnSGP","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":516,"w":516,"resize":"fit"},"medium":{"h":516,"w":516,"resize":"fit"},"small":{"h":516,"w":516,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":516,"width":516,"focus_rects":[{"x":0,"y":126,"w":516,"h":289},{"x":0,"y":0,"w":516,"h":516},{"x":0,"y":0,"w":453,"h":516},{"x":90,"y":0,"w":258,"h":516},{"x":0,"y":0,"w":516,"h":516}]},"media_results":{"result":{"media_key":"3_1985555038333534208"}}}]},"favorited":false,"in_reply_to_screen_name":"thejesonlee","lang":"qme","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1985555041160516033","view_count":335,"bookmark_count":0,"created_at":1762228186000,"favorite_count":3,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1985223675118018711","full_text":"@thejesonlee 
https://t.co/NNqoPHnSGP","in_reply_to_user_id_str":"1679960354087145473","in_reply_to_status_id_str":"1985223675118018711","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[14,89],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"144333614","name":"Sarah Wooders","screen_name":"sarahwooders","indices":[0,13]}]},"favorited":false,"in_reply_to_screen_name":"sarahwooders","lang":"en","retweeted":false,"fact_check":null,"id":"1985553615629795810","view_count":816,"bookmark_count":0,"created_at":1762227846000,"favorite_count":5,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1985552377953272053","full_text":"@sarahwooders can we talk about the political and economic state of the world right now??","in_reply_to_user_id_str":"144333614","in_reply_to_status_id_str":"1985552377953272053","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-06","value":0,"startTime":1762300800000,"endTime":1762387200000,"tweets":[]},{"label":"2025-11-07","value":0,"startTime":1762387200000,"endTime":1762473600000,"tweets":[]},{"label":"2025-11-08","value":0,"startTime":1762473600000,"endTime":1762560000000,"tweets":[]},{"label":"2025-11-09","value":0,"startTime":1762560000000,"endTime":1762646400000,"tweets":[]},{"label":"2025-11-10","value":0,"startTime":1762646400000,"endTime":1762732800000,"tweets":[]},{"label":"2025-11-11","value":0,"startTime":1762732800000,"endTime":1762819200000,"tweets":[]},{"label":"2025-11-12","value":0,"startTime":1762819200000,"endTime":1762905600000,"tweets":[]},{"label":"2025-11-13","value":0,"startTime":1762905600000,"endTime":1762992000000,"tweets":[]},{"label":"2025-11-14","value":0,"startTime":1762992000000,"endTime":1763078400000,"tweets":[{"bookmarked":false,"display_text_range":[0,260],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/MOoy0hmlVw","expanded_url":"https://x.com/charlespacker/sta
tus/1989111742241423375/photo/1","id_str":"1989111195979378689","indices":[261,284],"media_key":"3_1989111195979378689","media_url_https":"https://pbs.twimg.com/media/G5q-GA8acAEflTi.jpg","type":"photo","url":"https://t.co/MOoy0hmlVw","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":731,"w":2048,"resize":"fit"},"medium":{"h":428,"w":1200,"resize":"fit"},"small":{"h":243,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":804,"width":2254,"focus_rects":[{"x":818,"y":0,"w":1436,"h":804},{"x":1450,"y":0,"w":804,"h":804},{"x":1549,"y":0,"w":705,"h":804},{"x":1771,"y":0,"w":402,"h":804},{"x":0,"y":0,"w":2254,"h":804}]},"media_results":{"result":{"media_key":"3_1989111195979378689"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/MOoy0hmlVw","expanded_url":"https://x.com/charlespacker/status/1989111742241423375/photo/1","id_str":"1989111195979378689","indices":[261,284],"media_key":"3_1989111195979378689","media_url_https":"https://pbs.twimg.com/media/G5q-GA8acAEflTi.jpg","type":"photo","url":"https://t.co/MOoy0hmlVw","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":731,"w":2048,"resize":"fit"},"medium":{"h":428,"w":1200,"resize":"fit"},"small":{"h":243,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":804,"width":2254,"focus_rects":[{"x":818,"y":0,"w":1436,"h":804},{"x":1450,"y":0,"w":804,"h":804},{"x":1549,"y":0,"w":705,"h":804},{"x":1771,"y":0,"w":402,"h":804},{"x":0,"y":0,"w":2254,"h":804}]},"media_results":{"result":{"media_key":"3_1989111195979378689"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"
id":"1989111742241423375","view_count":200,"bookmark_count":0,"created_at":1763076169000,"favorite_count":3,"quote_count":0,"reply_count":3,"retweet_count":1,"user_id_str":"2385913832","conversation_id_str":"1989111742241423375","full_text":"the letta agents working in our open source skills repo have collectively proposed a .CULTURE.md file to control for skill quality 😂\n\n\"New here? .CULTURE.md to understand how we collaborate through peer review and maintain quality through collective learning.\" https://t.co/MOoy0hmlVw","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-15","value":0,"startTime":1763078400000,"endTime":1763164800000,"tweets":[]},{"label":"2025-11-16","value":0,"startTime":1763164800000,"endTime":1763251200000,"tweets":[]},{"label":"2025-11-17","value":0,"startTime":1763251200000,"endTime":1763337600000,"tweets":[]},{"label":"2025-11-18","value":0,"startTime":1763337600000,"endTime":1763424000000,"tweets":[]}],"nretweets":[{"label":"2025-10-19","value":0,"startTime":1760745600000,"endTime":1760832000000,"tweets":[]},{"label":"2025-10-20","value":0,"startTime":1760832000000,"endTime":1760918400000,"tweets":[]},{"label":"2025-10-21","value":0,"startTime":1760918400000,"endTime":1761004800000,"tweets":[]},{"label":"2025-10-22","value":0,"startTime":1761004800000,"endTime":1761091200000,"tweets":[]},{"label":"2025-10-23","value":0,"startTime":1761091200000,"endTime":1761177600000,"tweets":[]},{"label":"2025-10-24","value":4,"startTime":1761177600000,"endTime":1761264000000,"tweets":[{"bookmarked":false,"display_text_range":[0,272],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1290373758646128641","name":"Bilt","screen_name":"BiltRewards","indices":[1079,1091]}]},"favorited":false,"lang":"en","quoted_status_id_str":"1981385833380032558","quoted_status_permalink":{"url":"https://t.co/I5LCJMTAMn","expanded":"https://twitter
.com/Letta_AI/status/1981385833380032558","display":"x.com/Letta_AI/statu…"},"retweeted":false,"fact_check":null,"id":"1981421458019754012","view_count":4801,"bookmark_count":15,"created_at":1761242663000,"favorite_count":27,"quote_count":0,"reply_count":2,"retweet_count":4,"user_id_str":"2385913832","conversation_id_str":"1981421458019754012","full_text":"Super excited about this release: Letta Evals is the first evals platform *purpose-built* for stateful agents. What does that actually mean?\n\nWhen you eval agents w/ Letta Evals, you can literally pull an agent out of production (by cloning a replica of its active state), evaluate it, then pushes live changes back into prod (if desired).\n\nWe eval humans all the time: standardized tests (SAT/ACT), job interviews, driving tests. But when a human takes a test, they actually get the ability to *learn* from that test. If you fail your driving test, that lived experience should help you better prepare for the next \"eval\".\n\nWith Letta Evals, you can now run evaluations of truly stateful agents (not just prompt + tool configurations). This is all made possible due to the existence of AgentFile (.af), our open source portable file format that allows you to serialize any agent's state, which powers the efficient agent replication needed to run Letta Evals at scale.\n\nWill be exciting to see what people build with this. 
Letta Evals is already in production at companies like @BiltRewards , and is used by our own research team to run comprehensive (agentic) evaluations of new frontier models like GPT-5, Sonnet 4.5 and GLM-4.6.","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":1,"is_ai":null,"ai_score":null}]},{"label":"2025-10-25","value":4,"startTime":1761264000000,"endTime":1761350400000,"tweets":[{"bookmarked":false,"display_text_range":[0,285],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"jobs.ashbyhq.com/letta/d95271e3…","expanded_url":"https://jobs.ashbyhq.com/letta/d95271e3-f1ae-4317-8ecd-f25a79a0165c","url":"https://t.co/6C6Pybc6MD","indices":[408,431]}],"user_mentions":[{"id_str":"1821252546469752832","name":"Letta","screen_name":"Letta_AI","indices":[44,53]},{"id_str":"1821252546469752832","name":"Letta","screen_name":"Letta_AI","indices":[40,49]}]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1981790090889400638","view_count":3331,"bookmark_count":14,"created_at":1761330552000,"favorite_count":22,"quote_count":1,"reply_count":1,"retweet_count":4,"user_id_str":"2385913832","conversation_id_str":"1981790090889400638","full_text":"We're hiring researchers & engineers at @Letta_AI to work on AI's hardest problem: memory.\n\nJoin us to work on finding the right memory representations & learning methods (both in-context and in-weights) required to create self-improving AI systems with LLMs.\n\nWe're an open AI company (both research & code) and have an incredibly tight loop from research -> product. 
DMs open + job posting for reference.\n\nhttps://t.co/6C6Pybc6MD","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-10-26","value":0,"startTime":1761350400000,"endTime":1761436800000,"tweets":[{"bookmarked":false,"display_text_range":[0,117],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","quoted_status_id_str":"1981917247095685441","quoted_status_permalink":{"url":"https://t.co/HNhdjG1fLq","expanded":"https://twitter.com/xai/status/1981917247095685441","display":"x.com/xai/status/198…"},"retweeted":false,"fact_check":null,"id":"1982128872478154827","view_count":617,"bookmark_count":1,"created_at":1761411324000,"favorite_count":4,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1982128872478154827","full_text":"thinking machine shipping blog posts, xai shipping ai waifu launch videos\n\n(clean video tho, grok imagine looks 
good)","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":1,"is_ai":null,"ai_score":null}]},{"label":"2025-10-27","value":0,"startTime":1761436800000,"endTime":1761523200000,"tweets":[]},{"label":"2025-10-28","value":0,"startTime":1761523200000,"endTime":1761609600000,"tweets":[]},{"label":"2025-10-29","value":4,"startTime":1761609600000,"endTime":1761696000000,"tweets":[{"bookmarked":false,"display_text_range":[0,92],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","quoted_status_id_str":"1983057852970463594","quoted_status_permalink":{"url":"https://t.co/ztXL3kpVYH","expanded":"https://twitter.com/sarahwooders/status/1983057852970463594","display":"x.com/sarahwooders/s…"},"retweeted":false,"fact_check":null,"id":"1983058188812660916","view_count":14863,"bookmark_count":19,"created_at":1761632890000,"favorite_count":95,"quote_count":0,"reply_count":4,"retweet_count":4,"user_id_str":"2385913832","conversation_id_str":"1983058188812660916","full_text":"the terrible responses api rollout is a perfect example of how to lose first mover 
advantage","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":1,"is_ai":null,"ai_score":null}]},{"label":"2025-10-30","value":0,"startTime":1761696000000,"endTime":1761782400000,"tweets":[]},{"label":"2025-10-31","value":36,"startTime":1761782400000,"endTime":1761868800000,"tweets":[{"bookmarked":false,"display_text_range":[0,272],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/rbwTF1v2V3","expanded_url":"https://x.com/charlespacker/status/1983987055513534903/photo/1","id_str":"1983985762636029952","indices":[273,296],"media_key":"3_1983985762636029952","media_url_https":"https://pbs.twimg.com/media/G4iIih1aYAAxj75.png","type":"photo","url":"https://t.co/rbwTF1v2V3","ext_media_availability":{"status":"Available"},"features":{"all":{"tags":[{"user_id":"1863959670169501696","name":"Kimi.ai","screen_name":"Kimi_Moonshot","type":"user"},{"user_id":"1726486879456096256","name":"Z.ai","screen_name":"Zai_org","type":"user"},{"user_id":"4398626122","name":"OpenAI","screen_name":"OpenAI","type":"user"},{"user_id":"1353836358901501952","name":"Anthropic","screen_name":"AnthropicAI","type":"user"}]},"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1173,"w":2048,"resize":"fit"},"medium":{"h":687,"w":1200,"resize":"fit"},"small":{"h":389,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1276,"width":2228,"focus_rects":[{"x":0,"y":28,"w":2228,"h":1248},{"x":643,"y":0,"w":1276,"h":1276},{"x":722,"y":0,"w":1119,"h":1276},{"x":962,"y":0,"w":638,"h":1276},{"x":0,"y":0,"w":2228,"h":1276}]},"media_results":{"result":{"media_key":"3_1983985762636029952"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/rbwTF1v2V3","expanded_url":"https://x.com/charlespacker/status/1983987055513534903/photo/1","id_str":"1983985762636029952","indices":[273,296],"media_key":"3_1983985762636029952
","media_url_https":"https://pbs.twimg.com/media/G4iIih1aYAAxj75.png","type":"photo","url":"https://t.co/rbwTF1v2V3","ext_media_availability":{"status":"Available"},"features":{"all":{"tags":[{"user_id":"1863959670169501696","name":"Kimi.ai","screen_name":"Kimi_Moonshot","type":"user"},{"user_id":"1726486879456096256","name":"Z.ai","screen_name":"Zai_org","type":"user"},{"user_id":"4398626122","name":"OpenAI","screen_name":"OpenAI","type":"user"},{"user_id":"1353836358901501952","name":"Anthropic","screen_name":"AnthropicAI","type":"user"}]},"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1173,"w":2048,"resize":"fit"},"medium":{"h":687,"w":1200,"resize":"fit"},"small":{"h":389,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1276,"width":2228,"focus_rects":[{"x":0,"y":28,"w":2228,"h":1248},{"x":643,"y":0,"w":1276,"h":1276},{"x":722,"y":0,"w":1119,"h":1276},{"x":962,"y":0,"w":638,"h":1276},{"x":0,"y":0,"w":2228,"h":1276}]},"media_results":{"result":{"media_key":"3_1983985762636029952"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1983987055513534903","view_count":27458,"bookmark_count":167,"created_at":1761854349000,"favorite_count":329,"quote_count":5,"reply_count":19,"retweet_count":35,"user_id_str":"2385913832","conversation_id_str":"1983987055513534903","full_text":"Today we're releasing Context-Bench, a benchmark (and live leaderboard!) measuring LLMs on Agentic Context Engineering.\n\nC-Bench measures an agent's ability to manipulate its own context window, a necessary skill for AI agents that can self-improve and continually learn. 
https://t.co/rbwTF1v2V3","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,262],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"charlespacker","lang":"en","retweeted":false,"fact_check":null,"id":"1983987057883279638","view_count":888,"bookmark_count":0,"created_at":1761854349000,"favorite_count":3,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1983987055513534903","full_text":"Modern agents like Claude Code, Codex, and Cursor rely on tools to retrieve information into their context windows, from API/MCP triggers to editing code with Bash and Unix tools, to more advanced use-cases such as editing long-term memories and loading skills.","in_reply_to_user_id_str":"2385913832","in_reply_to_status_id_str":"1983987055513534903","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,251],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1353836358901501952","name":"Anthropic","screen_name":"AnthropicAI","indices":[22,34]}]},"favorited":false,"in_reply_to_screen_name":"charlespacker","lang":"en","retweeted":false,"fact_check":null,"id":"1983987059284115622","view_count":931,"bookmark_count":0,"created_at":1761854350000,"favorite_count":5,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"2385913832","conversation_id_str":"1983987055513534903","full_text":"Frontier AI labs like @AnthropicAI are now explicitly training their new models to be \"self-aware\" of their context windows. 
Agentic context engineering is the new frontier, but there's no clear open benchmark for evaluating this capability in models.","in_reply_to_user_id_str":"2385913832","in_reply_to_status_id_str":"1983987057883279638","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,278],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"leaderboard.letta.com","expanded_url":"https://leaderboard.letta.com","url":"https://t.co/yqigAGEhUL","indices":[217,240]},{"display_url":"letta.com/blog/context-b…","expanded_url":"https://www.letta.com/blog/context-bench","url":"https://t.co/SvCDauPwon","indices":[255,278]}],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"charlespacker","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1983987060500480369","view_count":859,"bookmark_count":4,"created_at":1761854350000,"favorite_count":5,"quote_count":0,"reply_count":3,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1983987055513534903","full_text":"Context-Bench is an exciting benchmark for the OSS community: the gap between frontier open weights and closed weights models appears to be closing: GLM 4.6 and Kimi K2 are incredible models!\n\nLeaderboard is live at https://t.co/yqigAGEhUL\n\nBreakdown at 
https://t.co/SvCDauPwon","in_reply_to_user_id_str":"2385913832","in_reply_to_status_id_str":"1983987059284115622","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[11,28],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"4426138094","name":"ryan","screen_name":"wacheeeee","indices":[0,10]}]},"favorited":false,"in_reply_to_screen_name":"wacheeeee","lang":"en","retweeted":false,"fact_check":null,"id":"1984015003545182357","view_count":406,"bookmark_count":0,"created_at":1761861012000,"favorite_count":6,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1984007696224547044","full_text":"@wacheeeee 996 = 1 day off 🧐","in_reply_to_user_id_str":"4426138094","in_reply_to_status_id_str":"1984007696224547044","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-01","value":0,"startTime":1761868800000,"endTime":1761955200000,"tweets":[{"bookmarked":false,"display_text_range":[12,35],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"github.com/letta-ai/letta…","expanded_url":"https://github.com/letta-ai/letta-obsidian","url":"https://t.co/v04ZWvc1Le","indices":[12,35]}],"user_mentions":[{"id_str":"1328913688892346370","name":"Sully","screen_name":"SullyOmarr","indices":[0,11]}]},"favorited":false,"in_reply_to_screen_name":"SullyOmarr","lang":"qme","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1984063277249524003","view_count":144,"bookmark_count":0,"created_at":1761872522000,"favorite_count":4,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1984031908049940583","full_text":"@SullyOmarr 
https://t.co/v04ZWvc1Le","in_reply_to_user_id_str":"1328913688892346370","in_reply_to_status_id_str":"1984031908049940583","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[55,309],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1939709358071312384","name":"halluton","screen_name":"halluton","indices":[0,9]},{"id_str":"1863959670169501696","name":"Kimi.ai","screen_name":"Kimi_Moonshot","indices":[10,24]},{"id_str":"1726486879456096256","name":"Z.ai","screen_name":"Zai_org","indices":[25,33]},{"id_str":"4398626122","name":"OpenAI","screen_name":"OpenAI","indices":[34,41]},{"id_str":"1353836358901501952","name":"Anthropic","screen_name":"AnthropicAI","indices":[42,54]}]},"favorited":false,"in_reply_to_screen_name":"halluton","lang":"en","retweeted":false,"fact_check":null,"id":"1984332937819722048","view_count":110,"bookmark_count":0,"created_at":1761936814000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1983987055513534903","full_text":"@halluton @Kimi_Moonshot @Zai_org @OpenAI @AnthropicAI both! 
we measure effectiveness of tool use (eg did you use grep correctly, or use grep instead of find), as well as curation quality via task completion (you need to curate effectively to answer correctly, and even the best model currently only gets 74%)","in_reply_to_user_id_str":"1939709358071312384","in_reply_to_status_id_str":"1984329735611052415","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[62,338],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"2580421885","name":"Jintao Zhang 张晋涛","screen_name":"zhangjintao9020","indices":[0,16]},{"id_str":"1863959670169501696","name":"Kimi.ai","screen_name":"Kimi_Moonshot","indices":[17,31]},{"id_str":"1726486879456096256","name":"Z.ai","screen_name":"Zai_org","indices":[32,40]},{"id_str":"4398626122","name":"OpenAI","screen_name":"OpenAI","indices":[41,48]},{"id_str":"1353836358901501952","name":"Anthropic","screen_name":"AnthropicAI","indices":[49,61]}]},"favorited":false,"in_reply_to_screen_name":"zhangjintao9020","lang":"en","retweeted":false,"fact_check":null,"id":"1984329785498353916","view_count":165,"bookmark_count":0,"created_at":1761936062000,"favorite_count":1,"quote_count":1,"reply_count":0,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1983987055513534903","full_text":"Yeah it's a great question! Basically GPT-5 consumes more tokens to complete the task - w/ a multiple that's higher than the relative multiple on $/MTok, so it ends up costing more. From the blog:\n\"We track the total cost to run each model on the benchmark. 
Cost reveals model efficiency: models with higher per-token prices may use significantly fewer tokens to accomplish the same task, making total cost a better metric than price alone for evaluating production viability.\"","in_reply_to_user_id_str":"2580421885","in_reply_to_status_id_str":"1984283225745920387","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[69,75],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"14191817","name":"Sven Meyer","screen_name":"SvenMeyer","indices":[0,10]},{"id_str":"1863959670169501696","name":"Kimi.ai","screen_name":"Kimi_Moonshot","indices":[11,25]},{"id_str":"1726486879456096256","name":"Z.ai","screen_name":"Zai_org","indices":[26,34]},{"id_str":"4398626122","name":"OpenAI","screen_name":"OpenAI","indices":[35,42]},{"id_str":"1353836358901501952","name":"Anthropic","screen_name":"AnthropicAI","indices":[43,55]},{"id_str":"1875078099538423808","name":"MiniMax (official)","screen_name":"MiniMax__AI","indices":[56,68]}]},"favorited":false,"in_reply_to_screen_name":"SvenMeyer","lang":"en","retweeted":false,"fact_check":null,"id":"1984329875046744249","view_count":80,"bookmark_count":0,"created_at":1761936083000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1983987055513534903","full_text":"@SvenMeyer @Kimi_Moonshot @Zai_org @OpenAI @AnthropicAI @MiniMax__AI agreed","in_reply_to_user_id_str":"14191817","in_reply_to_status_id_str":"1984162159530799351","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[62,73],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1118096108","name":"Reinaldo 
Sotillo","screen_name":"reynald76165051","indices":[0,16]},{"id_str":"1863959670169501696","name":"Kimi.ai","screen_name":"Kimi_Moonshot","indices":[17,31]},{"id_str":"1726486879456096256","name":"Z.ai","screen_name":"Zai_org","indices":[32,40]},{"id_str":"4398626122","name":"OpenAI","screen_name":"OpenAI","indices":[41,48]},{"id_str":"1353836358901501952","name":"Anthropic","screen_name":"AnthropicAI","indices":[49,61]}]},"favorited":false,"in_reply_to_screen_name":"reynald76165051","lang":"en","retweeted":false,"fact_check":null,"id":"1984333001959096643","view_count":50,"bookmark_count":0,"created_at":1761936829000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1983987055513534903","full_text":"@reynald76165051 @Kimi_Moonshot @Zai_org @OpenAI @AnthropicAI sleeper hit","in_reply_to_user_id_str":"1118096108","in_reply_to_status_id_str":"1984311694693224529","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-02","value":0,"startTime":1761955200000,"endTime":1762041600000,"tweets":[]},{"label":"2025-11-03","value":0,"startTime":1762041600000,"endTime":1762128000000,"tweets":[]},{"label":"2025-11-04","value":4,"startTime":1762128000000,"endTime":1762214400000,"tweets":[{"bookmarked":false,"display_text_range":[16,289],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"youtube.com/watch?v=pidnIH…","expanded_url":"https://www.youtube.com/watch?v=pidnIHdA1Y8","url":"https://t.co/s8OVJ7uT8p","indices":[266,289]}],"user_mentions":[{"id_str":"1051829114494177282","name":"Nico 
Albanese","screen_name":"nicoalbanese10","indices":[0,15]},{"id_str":"1821252546469752832","name":"Letta","screen_name":"Letta_AI","indices":[45,54]},{"id_str":"1821252546469752832","name":"Letta","screen_name":"Letta_AI","indices":[254,263]}]},"favorited":false,"in_reply_to_screen_name":"nicoalbanese10","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1985249165887352999","view_count":2044,"bookmark_count":13,"created_at":1762155259000,"favorite_count":20,"quote_count":0,"reply_count":3,"retweet_count":4,"user_id_str":"2385913832","conversation_id_str":"1984662238968656329","full_text":"@nicoalbanese10 yeah it works beautifully in @Letta_AI, since it's basically post-training of claude to be better at \"MemGPT\"/Letta-style context engineering\n\ngreat example of better post-training (claude) lifting the performance of an existing harness (@Letta_AI)\n\nhttps://t.co/s8OVJ7uT8p","in_reply_to_user_id_str":"1051829114494177282","in_reply_to_status_id_str":"1984662238968656329","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-05","value":0,"startTime":1762214400000,"endTime":1762300800000,"tweets":[{"bookmarked":false,"display_text_range":[12,12],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/NNqoPHnSGP","expanded_url":"https://x.com/charlespacker/status/1985555041160516033/photo/1","id_str":"1985555038333534208","indices":[13,36],"media_key":"3_1985555038333534208","media_url_https":"https://pbs.twimg.com/media/G44byZXbgAAQ1QM.jpg","type":"photo","url":"https://t.co/NNqoPHnSGP","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":516,"w":516,"resize":"fit"},"medium":{"h":516,"w":516,"resize":"fit"},"small":{"h":516,"w":516,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":516,"width":516,"focus_rects":[{"x":0,"y":126,"w":
516,"h":289},{"x":0,"y":0,"w":516,"h":516},{"x":0,"y":0,"w":453,"h":516},{"x":90,"y":0,"w":258,"h":516},{"x":0,"y":0,"w":516,"h":516}]},"media_results":{"result":{"media_key":"3_1985555038333534208"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1679960354087145473","name":"Jeson Lee","screen_name":"thejesonlee","indices":[0,12]}]},"extended_entities":{"media":[{"display_url":"pic.x.com/NNqoPHnSGP","expanded_url":"https://x.com/charlespacker/status/1985555041160516033/photo/1","id_str":"1985555038333534208","indices":[13,36],"media_key":"3_1985555038333534208","media_url_https":"https://pbs.twimg.com/media/G44byZXbgAAQ1QM.jpg","type":"photo","url":"https://t.co/NNqoPHnSGP","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":516,"w":516,"resize":"fit"},"medium":{"h":516,"w":516,"resize":"fit"},"small":{"h":516,"w":516,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":516,"width":516,"focus_rects":[{"x":0,"y":126,"w":516,"h":289},{"x":0,"y":0,"w":516,"h":516},{"x":0,"y":0,"w":453,"h":516},{"x":90,"y":0,"w":258,"h":516},{"x":0,"y":0,"w":516,"h":516}]},"media_results":{"result":{"media_key":"3_1985555038333534208"}}}]},"favorited":false,"in_reply_to_screen_name":"thejesonlee","lang":"qme","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1985555041160516033","view_count":335,"bookmark_count":0,"created_at":1762228186000,"favorite_count":3,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1985223675118018711","full_text":"@thejesonlee 
https://t.co/NNqoPHnSGP","in_reply_to_user_id_str":"1679960354087145473","in_reply_to_status_id_str":"1985223675118018711","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[14,89],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"144333614","name":"Sarah Wooders","screen_name":"sarahwooders","indices":[0,13]}]},"favorited":false,"in_reply_to_screen_name":"sarahwooders","lang":"en","retweeted":false,"fact_check":null,"id":"1985553615629795810","view_count":816,"bookmark_count":0,"created_at":1762227846000,"favorite_count":5,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1985552377953272053","full_text":"@sarahwooders can we talk about the political and economic state of the world right now??","in_reply_to_user_id_str":"144333614","in_reply_to_status_id_str":"1985552377953272053","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-06","value":0,"startTime":1762300800000,"endTime":1762387200000,"tweets":[]},{"label":"2025-11-07","value":0,"startTime":1762387200000,"endTime":1762473600000,"tweets":[]},{"label":"2025-11-08","value":0,"startTime":1762473600000,"endTime":1762560000000,"tweets":[]},{"label":"2025-11-09","value":0,"startTime":1762560000000,"endTime":1762646400000,"tweets":[]},{"label":"2025-11-10","value":0,"startTime":1762646400000,"endTime":1762732800000,"tweets":[]},{"label":"2025-11-11","value":0,"startTime":1762732800000,"endTime":1762819200000,"tweets":[]},{"label":"2025-11-12","value":0,"startTime":1762819200000,"endTime":1762905600000,"tweets":[]},{"label":"2025-11-13","value":0,"startTime":1762905600000,"endTime":1762992000000,"tweets":[]},{"label":"2025-11-14","value":1,"startTime":1762992000000,"endTime":1763078400000,"tweets":[{"bookmarked":false,"display_text_range":[0,260],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/MOoy0hmlVw","expanded_url":"https://x.com/charlespacker/sta
tus/1989111742241423375/photo/1","id_str":"1989111195979378689","indices":[261,284],"media_key":"3_1989111195979378689","media_url_https":"https://pbs.twimg.com/media/G5q-GA8acAEflTi.jpg","type":"photo","url":"https://t.co/MOoy0hmlVw","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":731,"w":2048,"resize":"fit"},"medium":{"h":428,"w":1200,"resize":"fit"},"small":{"h":243,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":804,"width":2254,"focus_rects":[{"x":818,"y":0,"w":1436,"h":804},{"x":1450,"y":0,"w":804,"h":804},{"x":1549,"y":0,"w":705,"h":804},{"x":1771,"y":0,"w":402,"h":804},{"x":0,"y":0,"w":2254,"h":804}]},"media_results":{"result":{"media_key":"3_1989111195979378689"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/MOoy0hmlVw","expanded_url":"https://x.com/charlespacker/status/1989111742241423375/photo/1","id_str":"1989111195979378689","indices":[261,284],"media_key":"3_1989111195979378689","media_url_https":"https://pbs.twimg.com/media/G5q-GA8acAEflTi.jpg","type":"photo","url":"https://t.co/MOoy0hmlVw","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":731,"w":2048,"resize":"fit"},"medium":{"h":428,"w":1200,"resize":"fit"},"small":{"h":243,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":804,"width":2254,"focus_rects":[{"x":818,"y":0,"w":1436,"h":804},{"x":1450,"y":0,"w":804,"h":804},{"x":1549,"y":0,"w":705,"h":804},{"x":1771,"y":0,"w":402,"h":804},{"x":0,"y":0,"w":2254,"h":804}]},"media_results":{"result":{"media_key":"3_1989111195979378689"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"
id":"1989111742241423375","view_count":200,"bookmark_count":0,"created_at":1763076169000,"favorite_count":3,"quote_count":0,"reply_count":3,"retweet_count":1,"user_id_str":"2385913832","conversation_id_str":"1989111742241423375","full_text":"the letta agents working in our open source skills repo have collectively proposed a .CULTURE.md file to control for skill quality 😂\n\n\"New here? .CULTURE.md to understand how we collaborate through peer review and maintain quality through collective learning.\" https://t.co/MOoy0hmlVw","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-15","value":0,"startTime":1763078400000,"endTime":1763164800000,"tweets":[]},{"label":"2025-11-16","value":0,"startTime":1763164800000,"endTime":1763251200000,"tweets":[]},{"label":"2025-11-17","value":0,"startTime":1763251200000,"endTime":1763337600000,"tweets":[]},{"label":"2025-11-18","value":0,"startTime":1763337600000,"endTime":1763424000000,"tweets":[]}],"nlikes":[{"label":"2025-10-19","value":0,"startTime":1760745600000,"endTime":1760832000000,"tweets":[]},{"label":"2025-10-20","value":0,"startTime":1760832000000,"endTime":1760918400000,"tweets":[]},{"label":"2025-10-21","value":0,"startTime":1760918400000,"endTime":1761004800000,"tweets":[]},{"label":"2025-10-22","value":0,"startTime":1761004800000,"endTime":1761091200000,"tweets":[]},{"label":"2025-10-23","value":0,"startTime":1761091200000,"endTime":1761177600000,"tweets":[]},{"label":"2025-10-24","value":27,"startTime":1761177600000,"endTime":1761264000000,"tweets":[{"bookmarked":false,"display_text_range":[0,272],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1290373758646128641","name":"Bilt","screen_name":"BiltRewards","indices":[1079,1091]}]},"favorited":false,"lang":"en","quoted_status_id_str":"1981385833380032558","quoted_status_permalink":{"url":"https://t.co/I5LCJMTAMn","expanded":"https://twitter.c
om/Letta_AI/status/1981385833380032558","display":"x.com/Letta_AI/statu…"},"retweeted":false,"fact_check":null,"id":"1981421458019754012","view_count":4801,"bookmark_count":15,"created_at":1761242663000,"favorite_count":27,"quote_count":0,"reply_count":2,"retweet_count":4,"user_id_str":"2385913832","conversation_id_str":"1981421458019754012","full_text":"Super excited about this release: Letta Evals is the first evals platform *purpose-built* for stateful agents. What does that actually mean?\n\nWhen you eval agents w/ Letta Evals, you can literally pull an agent out of production (by cloning a replica of its active state), evaluate it, then pushes live changes back into prod (if desired).\n\nWe eval humans all the time: standardized tests (SAT/ACT), job interviews, driving tests. But when a human takes a test, they actually get the ability to *learn* from that test. If you fail your driving test, that lived experience should help you better prepare for the next \"eval\".\n\nWith Letta Evals, you can now run evaluations of truly stateful agents (not just prompt + tool configurations). This is all made possible due to the existence of AgentFile (.af), our open source portable file format that allows you to serialize any agent's state, which powers the efficient agent replication needed to run Letta Evals at scale.\n\nWill be exciting to see what people build with this. 
Letta Evals is already in production at companies like @BiltRewards , and is used by our own research team to run comprehensive (agentic) evaluations of new frontier models like GPT-5, Sonnet 4.5 and GLM-4.6.","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":1,"is_ai":null,"ai_score":null}]},{"label":"2025-10-25","value":22,"startTime":1761264000000,"endTime":1761350400000,"tweets":[{"bookmarked":false,"display_text_range":[0,285],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"jobs.ashbyhq.com/letta/d95271e3…","expanded_url":"https://jobs.ashbyhq.com/letta/d95271e3-f1ae-4317-8ecd-f25a79a0165c","url":"https://t.co/6C6Pybc6MD","indices":[408,431]}],"user_mentions":[{"id_str":"1821252546469752832","name":"Letta","screen_name":"Letta_AI","indices":[44,53]},{"id_str":"1821252546469752832","name":"Letta","screen_name":"Letta_AI","indices":[40,49]}]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1981790090889400638","view_count":3331,"bookmark_count":14,"created_at":1761330552000,"favorite_count":22,"quote_count":1,"reply_count":1,"retweet_count":4,"user_id_str":"2385913832","conversation_id_str":"1981790090889400638","full_text":"We're hiring researchers & engineers at @Letta_AI to work on AI's hardest problem: memory.\n\nJoin us to work on finding the right memory representations & learning methods (both in-context and in-weights) required to create self-improving AI systems with LLMs.\n\nWe're an open AI company (both research & code) and have an incredibly tight loop from research -> product. 
DMs open + job posting for reference.\n\nhttps://t.co/6C6Pybc6MD","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-10-26","value":4,"startTime":1761350400000,"endTime":1761436800000,"tweets":[{"bookmarked":false,"display_text_range":[0,117],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","quoted_status_id_str":"1981917247095685441","quoted_status_permalink":{"url":"https://t.co/HNhdjG1fLq","expanded":"https://twitter.com/xai/status/1981917247095685441","display":"x.com/xai/status/198…"},"retweeted":false,"fact_check":null,"id":"1982128872478154827","view_count":617,"bookmark_count":1,"created_at":1761411324000,"favorite_count":4,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1982128872478154827","full_text":"thinking machine shipping blog posts, xai shipping ai waifu launch videos\n\n(clean video tho, grok imagine looks 
good)","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":1,"is_ai":null,"ai_score":null}]},{"label":"2025-10-27","value":0,"startTime":1761436800000,"endTime":1761523200000,"tweets":[]},{"label":"2025-10-28","value":0,"startTime":1761523200000,"endTime":1761609600000,"tweets":[]},{"label":"2025-10-29","value":95,"startTime":1761609600000,"endTime":1761696000000,"tweets":[{"bookmarked":false,"display_text_range":[0,92],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","quoted_status_id_str":"1983057852970463594","quoted_status_permalink":{"url":"https://t.co/ztXL3kpVYH","expanded":"https://twitter.com/sarahwooders/status/1983057852970463594","display":"x.com/sarahwooders/s…"},"retweeted":false,"fact_check":null,"id":"1983058188812660916","view_count":14863,"bookmark_count":19,"created_at":1761632890000,"favorite_count":95,"quote_count":0,"reply_count":4,"retweet_count":4,"user_id_str":"2385913832","conversation_id_str":"1983058188812660916","full_text":"the terrible responses api rollout is a perfect example of how to lose first mover 
advantage","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":1,"is_ai":null,"ai_score":null}]},{"label":"2025-10-30","value":0,"startTime":1761696000000,"endTime":1761782400000,"tweets":[]},{"label":"2025-10-31","value":348,"startTime":1761782400000,"endTime":1761868800000,"tweets":[{"bookmarked":false,"display_text_range":[0,272],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/rbwTF1v2V3","expanded_url":"https://x.com/charlespacker/status/1983987055513534903/photo/1","id_str":"1983985762636029952","indices":[273,296],"media_key":"3_1983985762636029952","media_url_https":"https://pbs.twimg.com/media/G4iIih1aYAAxj75.png","type":"photo","url":"https://t.co/rbwTF1v2V3","ext_media_availability":{"status":"Available"},"features":{"all":{"tags":[{"user_id":"1863959670169501696","name":"Kimi.ai","screen_name":"Kimi_Moonshot","type":"user"},{"user_id":"1726486879456096256","name":"Z.ai","screen_name":"Zai_org","type":"user"},{"user_id":"4398626122","name":"OpenAI","screen_name":"OpenAI","type":"user"},{"user_id":"1353836358901501952","name":"Anthropic","screen_name":"AnthropicAI","type":"user"}]},"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1173,"w":2048,"resize":"fit"},"medium":{"h":687,"w":1200,"resize":"fit"},"small":{"h":389,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1276,"width":2228,"focus_rects":[{"x":0,"y":28,"w":2228,"h":1248},{"x":643,"y":0,"w":1276,"h":1276},{"x":722,"y":0,"w":1119,"h":1276},{"x":962,"y":0,"w":638,"h":1276},{"x":0,"y":0,"w":2228,"h":1276}]},"media_results":{"result":{"media_key":"3_1983985762636029952"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/rbwTF1v2V3","expanded_url":"https://x.com/charlespacker/status/1983987055513534903/photo/1","id_str":"1983985762636029952","indices":[273,296],"media_key":"3_198398576263602995
2","media_url_https":"https://pbs.twimg.com/media/G4iIih1aYAAxj75.png","type":"photo","url":"https://t.co/rbwTF1v2V3","ext_media_availability":{"status":"Available"},"features":{"all":{"tags":[{"user_id":"1863959670169501696","name":"Kimi.ai","screen_name":"Kimi_Moonshot","type":"user"},{"user_id":"1726486879456096256","name":"Z.ai","screen_name":"Zai_org","type":"user"},{"user_id":"4398626122","name":"OpenAI","screen_name":"OpenAI","type":"user"},{"user_id":"1353836358901501952","name":"Anthropic","screen_name":"AnthropicAI","type":"user"}]},"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1173,"w":2048,"resize":"fit"},"medium":{"h":687,"w":1200,"resize":"fit"},"small":{"h":389,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1276,"width":2228,"focus_rects":[{"x":0,"y":28,"w":2228,"h":1248},{"x":643,"y":0,"w":1276,"h":1276},{"x":722,"y":0,"w":1119,"h":1276},{"x":962,"y":0,"w":638,"h":1276},{"x":0,"y":0,"w":2228,"h":1276}]},"media_results":{"result":{"media_key":"3_1983985762636029952"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1983987055513534903","view_count":27458,"bookmark_count":167,"created_at":1761854349000,"favorite_count":329,"quote_count":5,"reply_count":19,"retweet_count":35,"user_id_str":"2385913832","conversation_id_str":"1983987055513534903","full_text":"Today we're releasing Context-Bench, a benchmark (and live leaderboard!) measuring LLMs on Agentic Context Engineering.\n\nC-Bench measures an agent's ability to manipulate its own context window, a necessary skill for AI agents that can self-improve and continually learn. 
https://t.co/rbwTF1v2V3","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,262],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"charlespacker","lang":"en","retweeted":false,"fact_check":null,"id":"1983987057883279638","view_count":888,"bookmark_count":0,"created_at":1761854349000,"favorite_count":3,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1983987055513534903","full_text":"Modern agents like Claude Code, Codex, and Cursor rely on tools to retrieve information into their context windows, from API/MCP triggers to editing code with Bash and Unix tools, to more advanced use-cases such as editing long-term memories and loading skills.","in_reply_to_user_id_str":"2385913832","in_reply_to_status_id_str":"1983987055513534903","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,251],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1353836358901501952","name":"Anthropic","screen_name":"AnthropicAI","indices":[22,34]}]},"favorited":false,"in_reply_to_screen_name":"charlespacker","lang":"en","retweeted":false,"fact_check":null,"id":"1983987059284115622","view_count":931,"bookmark_count":0,"created_at":1761854350000,"favorite_count":5,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"2385913832","conversation_id_str":"1983987055513534903","full_text":"Frontier AI labs like @AnthropicAI are now explicitly training their new models to be \"self-aware\" of their context windows. 
Agentic context engineering is the new frontier, but there's no clear open benchmark for evaluating this capability in models.","in_reply_to_user_id_str":"2385913832","in_reply_to_status_id_str":"1983987057883279638","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,278],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"leaderboard.letta.com","expanded_url":"https://leaderboard.letta.com","url":"https://t.co/yqigAGEhUL","indices":[217,240]},{"display_url":"letta.com/blog/context-b…","expanded_url":"https://www.letta.com/blog/context-bench","url":"https://t.co/SvCDauPwon","indices":[255,278]}],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"charlespacker","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1983987060500480369","view_count":859,"bookmark_count":4,"created_at":1761854350000,"favorite_count":5,"quote_count":0,"reply_count":3,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1983987055513534903","full_text":"Context-Bench is an exciting benchmark for the OSS community: the gap between frontier open weights and closed weights models appears to be closing: GLM 4.6 and Kimi K2 are incredible models!\n\nLeaderboard is live at https://t.co/yqigAGEhUL\n\nBreakdown at 
https://t.co/SvCDauPwon","in_reply_to_user_id_str":"2385913832","in_reply_to_status_id_str":"1983987059284115622","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[11,28],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"4426138094","name":"ryan","screen_name":"wacheeeee","indices":[0,10]}]},"favorited":false,"in_reply_to_screen_name":"wacheeeee","lang":"en","retweeted":false,"fact_check":null,"id":"1984015003545182357","view_count":406,"bookmark_count":0,"created_at":1761861012000,"favorite_count":6,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1984007696224547044","full_text":"@wacheeeee 996 = 1 day off 🧐","in_reply_to_user_id_str":"4426138094","in_reply_to_status_id_str":"1984007696224547044","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-01","value":5,"startTime":1761868800000,"endTime":1761955200000,"tweets":[{"bookmarked":false,"display_text_range":[12,35],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"github.com/letta-ai/letta…","expanded_url":"https://github.com/letta-ai/letta-obsidian","url":"https://t.co/v04ZWvc1Le","indices":[12,35]}],"user_mentions":[{"id_str":"1328913688892346370","name":"Sully","screen_name":"SullyOmarr","indices":[0,11]}]},"favorited":false,"in_reply_to_screen_name":"SullyOmarr","lang":"qme","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1984063277249524003","view_count":144,"bookmark_count":0,"created_at":1761872522000,"favorite_count":4,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1984031908049940583","full_text":"@SullyOmarr 
https://t.co/v04ZWvc1Le","in_reply_to_user_id_str":"1328913688892346370","in_reply_to_status_id_str":"1984031908049940583","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[55,309],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1939709358071312384","name":"halluton","screen_name":"halluton","indices":[0,9]},{"id_str":"1863959670169501696","name":"Kimi.ai","screen_name":"Kimi_Moonshot","indices":[10,24]},{"id_str":"1726486879456096256","name":"Z.ai","screen_name":"Zai_org","indices":[25,33]},{"id_str":"4398626122","name":"OpenAI","screen_name":"OpenAI","indices":[34,41]},{"id_str":"1353836358901501952","name":"Anthropic","screen_name":"AnthropicAI","indices":[42,54]}]},"favorited":false,"in_reply_to_screen_name":"halluton","lang":"en","retweeted":false,"fact_check":null,"id":"1984332937819722048","view_count":110,"bookmark_count":0,"created_at":1761936814000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1983987055513534903","full_text":"@halluton @Kimi_Moonshot @Zai_org @OpenAI @AnthropicAI both! 
we measure effectiveness of tool use (eg did you use grep correctly, or use grep instead of find), as well as curation quality via task completion (you need to curate effectively to answer correctly, and even the best model currently only gets 74%)","in_reply_to_user_id_str":"1939709358071312384","in_reply_to_status_id_str":"1984329735611052415","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[62,338],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"2580421885","name":"Jintao Zhang 张晋涛","screen_name":"zhangjintao9020","indices":[0,16]},{"id_str":"1863959670169501696","name":"Kimi.ai","screen_name":"Kimi_Moonshot","indices":[17,31]},{"id_str":"1726486879456096256","name":"Z.ai","screen_name":"Zai_org","indices":[32,40]},{"id_str":"4398626122","name":"OpenAI","screen_name":"OpenAI","indices":[41,48]},{"id_str":"1353836358901501952","name":"Anthropic","screen_name":"AnthropicAI","indices":[49,61]}]},"favorited":false,"in_reply_to_screen_name":"zhangjintao9020","lang":"en","retweeted":false,"fact_check":null,"id":"1984329785498353916","view_count":165,"bookmark_count":0,"created_at":1761936062000,"favorite_count":1,"quote_count":1,"reply_count":0,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1983987055513534903","full_text":"Yeah it's a great question! Basically GPT-5 consumes more tokens to complete the task - w/ a multiple that's higher than the relative multiple on $/MTok, so it ends up costing more. From the blog:\n\"We track the total cost to run each model on the benchmark. 
Cost reveals model efficiency: models with higher per-token prices may use significantly fewer tokens to accomplish the same task, making total cost a better metric than price alone for evaluating production viability.\"","in_reply_to_user_id_str":"2580421885","in_reply_to_status_id_str":"1984283225745920387","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[69,75],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"14191817","name":"Sven Meyer","screen_name":"SvenMeyer","indices":[0,10]},{"id_str":"1863959670169501696","name":"Kimi.ai","screen_name":"Kimi_Moonshot","indices":[11,25]},{"id_str":"1726486879456096256","name":"Z.ai","screen_name":"Zai_org","indices":[26,34]},{"id_str":"4398626122","name":"OpenAI","screen_name":"OpenAI","indices":[35,42]},{"id_str":"1353836358901501952","name":"Anthropic","screen_name":"AnthropicAI","indices":[43,55]},{"id_str":"1875078099538423808","name":"MiniMax (official)","screen_name":"MiniMax__AI","indices":[56,68]}]},"favorited":false,"in_reply_to_screen_name":"SvenMeyer","lang":"en","retweeted":false,"fact_check":null,"id":"1984329875046744249","view_count":80,"bookmark_count":0,"created_at":1761936083000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1983987055513534903","full_text":"@SvenMeyer @Kimi_Moonshot @Zai_org @OpenAI @AnthropicAI @MiniMax__AI agreed","in_reply_to_user_id_str":"14191817","in_reply_to_status_id_str":"1984162159530799351","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[62,73],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1118096108","name":"Reinaldo 
Sotillo","screen_name":"reynald76165051","indices":[0,16]},{"id_str":"1863959670169501696","name":"Kimi.ai","screen_name":"Kimi_Moonshot","indices":[17,31]},{"id_str":"1726486879456096256","name":"Z.ai","screen_name":"Zai_org","indices":[32,40]},{"id_str":"4398626122","name":"OpenAI","screen_name":"OpenAI","indices":[41,48]},{"id_str":"1353836358901501952","name":"Anthropic","screen_name":"AnthropicAI","indices":[49,61]}]},"favorited":false,"in_reply_to_screen_name":"reynald76165051","lang":"en","retweeted":false,"fact_check":null,"id":"1984333001959096643","view_count":50,"bookmark_count":0,"created_at":1761936829000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1983987055513534903","full_text":"@reynald76165051 @Kimi_Moonshot @Zai_org @OpenAI @AnthropicAI sleeper hit","in_reply_to_user_id_str":"1118096108","in_reply_to_status_id_str":"1984311694693224529","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-02","value":0,"startTime":1761955200000,"endTime":1762041600000,"tweets":[]},{"label":"2025-11-03","value":0,"startTime":1762041600000,"endTime":1762128000000,"tweets":[]},{"label":"2025-11-04","value":20,"startTime":1762128000000,"endTime":1762214400000,"tweets":[{"bookmarked":false,"display_text_range":[16,289],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"youtube.com/watch?v=pidnIH…","expanded_url":"https://www.youtube.com/watch?v=pidnIHdA1Y8","url":"https://t.co/s8OVJ7uT8p","indices":[266,289]}],"user_mentions":[{"id_str":"1051829114494177282","name":"Nico 
Albanese","screen_name":"nicoalbanese10","indices":[0,15]},{"id_str":"1821252546469752832","name":"Letta","screen_name":"Letta_AI","indices":[45,54]},{"id_str":"1821252546469752832","name":"Letta","screen_name":"Letta_AI","indices":[254,263]}]},"favorited":false,"in_reply_to_screen_name":"nicoalbanese10","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1985249165887352999","view_count":2044,"bookmark_count":13,"created_at":1762155259000,"favorite_count":20,"quote_count":0,"reply_count":3,"retweet_count":4,"user_id_str":"2385913832","conversation_id_str":"1984662238968656329","full_text":"@nicoalbanese10 yeah it works beautifully in @Letta_AI, since it's basically post-training of claude to be better at \"MemGPT\"/Letta-style context engineering\n\ngreat example of better post-training (claude) lifting the performance of an existing harness (@Letta_AI)\n\nhttps://t.co/s8OVJ7uT8p","in_reply_to_user_id_str":"1051829114494177282","in_reply_to_status_id_str":"1984662238968656329","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-05","value":8,"startTime":1762214400000,"endTime":1762300800000,"tweets":[{"bookmarked":false,"display_text_range":[12,12],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/NNqoPHnSGP","expanded_url":"https://x.com/charlespacker/status/1985555041160516033/photo/1","id_str":"1985555038333534208","indices":[13,36],"media_key":"3_1985555038333534208","media_url_https":"https://pbs.twimg.com/media/G44byZXbgAAQ1QM.jpg","type":"photo","url":"https://t.co/NNqoPHnSGP","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":516,"w":516,"resize":"fit"},"medium":{"h":516,"w":516,"resize":"fit"},"small":{"h":516,"w":516,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":516,"width":516,"focus_rects":[{"x":0,"y":126,"w":
516,"h":289},{"x":0,"y":0,"w":516,"h":516},{"x":0,"y":0,"w":453,"h":516},{"x":90,"y":0,"w":258,"h":516},{"x":0,"y":0,"w":516,"h":516}]},"media_results":{"result":{"media_key":"3_1985555038333534208"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1679960354087145473","name":"Jeson Lee","screen_name":"thejesonlee","indices":[0,12]}]},"extended_entities":{"media":[{"display_url":"pic.x.com/NNqoPHnSGP","expanded_url":"https://x.com/charlespacker/status/1985555041160516033/photo/1","id_str":"1985555038333534208","indices":[13,36],"media_key":"3_1985555038333534208","media_url_https":"https://pbs.twimg.com/media/G44byZXbgAAQ1QM.jpg","type":"photo","url":"https://t.co/NNqoPHnSGP","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":516,"w":516,"resize":"fit"},"medium":{"h":516,"w":516,"resize":"fit"},"small":{"h":516,"w":516,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":516,"width":516,"focus_rects":[{"x":0,"y":126,"w":516,"h":289},{"x":0,"y":0,"w":516,"h":516},{"x":0,"y":0,"w":453,"h":516},{"x":90,"y":0,"w":258,"h":516},{"x":0,"y":0,"w":516,"h":516}]},"media_results":{"result":{"media_key":"3_1985555038333534208"}}}]},"favorited":false,"in_reply_to_screen_name":"thejesonlee","lang":"qme","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1985555041160516033","view_count":335,"bookmark_count":0,"created_at":1762228186000,"favorite_count":3,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1985223675118018711","full_text":"@thejesonlee 
https://t.co/NNqoPHnSGP","in_reply_to_user_id_str":"1679960354087145473","in_reply_to_status_id_str":"1985223675118018711","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[14,89],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"144333614","name":"Sarah Wooders","screen_name":"sarahwooders","indices":[0,13]}]},"favorited":false,"in_reply_to_screen_name":"sarahwooders","lang":"en","retweeted":false,"fact_check":null,"id":"1985553615629795810","view_count":816,"bookmark_count":0,"created_at":1762227846000,"favorite_count":5,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1985552377953272053","full_text":"@sarahwooders can we talk about the political and economic state of the world right now??","in_reply_to_user_id_str":"144333614","in_reply_to_status_id_str":"1985552377953272053","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-06","value":0,"startTime":1762300800000,"endTime":1762387200000,"tweets":[]},{"label":"2025-11-07","value":0,"startTime":1762387200000,"endTime":1762473600000,"tweets":[]},{"label":"2025-11-08","value":0,"startTime":1762473600000,"endTime":1762560000000,"tweets":[]},{"label":"2025-11-09","value":0,"startTime":1762560000000,"endTime":1762646400000,"tweets":[]},{"label":"2025-11-10","value":0,"startTime":1762646400000,"endTime":1762732800000,"tweets":[]},{"label":"2025-11-11","value":0,"startTime":1762732800000,"endTime":1762819200000,"tweets":[]},{"label":"2025-11-12","value":0,"startTime":1762819200000,"endTime":1762905600000,"tweets":[]},{"label":"2025-11-13","value":0,"startTime":1762905600000,"endTime":1762992000000,"tweets":[]},{"label":"2025-11-14","value":3,"startTime":1762992000000,"endTime":1763078400000,"tweets":[{"bookmarked":false,"display_text_range":[0,260],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/MOoy0hmlVw","expanded_url":"https://x.com/charlespacker/sta
tus/1989111742241423375/photo/1","id_str":"1989111195979378689","indices":[261,284],"media_key":"3_1989111195979378689","media_url_https":"https://pbs.twimg.com/media/G5q-GA8acAEflTi.jpg","type":"photo","url":"https://t.co/MOoy0hmlVw","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":731,"w":2048,"resize":"fit"},"medium":{"h":428,"w":1200,"resize":"fit"},"small":{"h":243,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":804,"width":2254,"focus_rects":[{"x":818,"y":0,"w":1436,"h":804},{"x":1450,"y":0,"w":804,"h":804},{"x":1549,"y":0,"w":705,"h":804},{"x":1771,"y":0,"w":402,"h":804},{"x":0,"y":0,"w":2254,"h":804}]},"media_results":{"result":{"media_key":"3_1989111195979378689"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/MOoy0hmlVw","expanded_url":"https://x.com/charlespacker/status/1989111742241423375/photo/1","id_str":"1989111195979378689","indices":[261,284],"media_key":"3_1989111195979378689","media_url_https":"https://pbs.twimg.com/media/G5q-GA8acAEflTi.jpg","type":"photo","url":"https://t.co/MOoy0hmlVw","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":731,"w":2048,"resize":"fit"},"medium":{"h":428,"w":1200,"resize":"fit"},"small":{"h":243,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":804,"width":2254,"focus_rects":[{"x":818,"y":0,"w":1436,"h":804},{"x":1450,"y":0,"w":804,"h":804},{"x":1549,"y":0,"w":705,"h":804},{"x":1771,"y":0,"w":402,"h":804},{"x":0,"y":0,"w":2254,"h":804}]},"media_results":{"result":{"media_key":"3_1989111195979378689"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"
id":"1989111742241423375","view_count":200,"bookmark_count":0,"created_at":1763076169000,"favorite_count":3,"quote_count":0,"reply_count":3,"retweet_count":1,"user_id_str":"2385913832","conversation_id_str":"1989111742241423375","full_text":"the letta agents working in our open source skills repo have collectively proposed a .CULTURE.md file to control for skill quality 😂\n\n\"New here? .CULTURE.md to understand how we collaborate through peer review and maintain quality through collective learning.\" https://t.co/MOoy0hmlVw","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-15","value":0,"startTime":1763078400000,"endTime":1763164800000,"tweets":[]},{"label":"2025-11-16","value":0,"startTime":1763164800000,"endTime":1763251200000,"tweets":[]},{"label":"2025-11-17","value":0,"startTime":1763251200000,"endTime":1763337600000,"tweets":[]},{"label":"2025-11-18","value":0,"startTime":1763337600000,"endTime":1763424000000,"tweets":[]}],"nviews":[{"label":"2025-10-19","value":0,"startTime":1760745600000,"endTime":1760832000000,"tweets":[]},{"label":"2025-10-20","value":0,"startTime":1760832000000,"endTime":1760918400000,"tweets":[]},{"label":"2025-10-21","value":0,"startTime":1760918400000,"endTime":1761004800000,"tweets":[]},{"label":"2025-10-22","value":0,"startTime":1761004800000,"endTime":1761091200000,"tweets":[]},{"label":"2025-10-23","value":0,"startTime":1761091200000,"endTime":1761177600000,"tweets":[]},{"label":"2025-10-24","value":4801,"startTime":1761177600000,"endTime":1761264000000,"tweets":[{"bookmarked":false,"display_text_range":[0,272],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1290373758646128641","name":"Bilt","screen_name":"BiltRewards","indices":[1079,1091]}]},"favorited":false,"lang":"en","quoted_status_id_str":"1981385833380032558","quoted_status_permalink":{"url":"https://t.co/I5LCJMTAMn","expanded":"https://twitter
.com/Letta_AI/status/1981385833380032558","display":"x.com/Letta_AI/statu…"},"retweeted":false,"fact_check":null,"id":"1981421458019754012","view_count":4801,"bookmark_count":15,"created_at":1761242663000,"favorite_count":27,"quote_count":0,"reply_count":2,"retweet_count":4,"user_id_str":"2385913832","conversation_id_str":"1981421458019754012","full_text":"Super excited about this release: Letta Evals is the first evals platform *purpose-built* for stateful agents. What does that actually mean?\n\nWhen you eval agents w/ Letta Evals, you can literally pull an agent out of production (by cloning a replica of its active state), evaluate it, then pushes live changes back into prod (if desired).\n\nWe eval humans all the time: standardized tests (SAT/ACT), job interviews, driving tests. But when a human takes a test, they actually get the ability to *learn* from that test. If you fail your driving test, that lived experience should help you better prepare for the next \"eval\".\n\nWith Letta Evals, you can now run evaluations of truly stateful agents (not just prompt + tool configurations). This is all made possible due to the existence of AgentFile (.af), our open source portable file format that allows you to serialize any agent's state, which powers the efficient agent replication needed to run Letta Evals at scale.\n\nWill be exciting to see what people build with this. 
Letta Evals is already in production at companies like @BiltRewards , and is used by our own research time to run comprehensive (agentic) evaluations of new frontier models like GPT-5, Sonnet 4.5 and GLM-4.6.","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":1,"is_ai":null,"ai_score":null}]},{"label":"2025-10-25","value":3331,"startTime":1761264000000,"endTime":1761350400000,"tweets":[{"bookmarked":false,"display_text_range":[0,285],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"jobs.ashbyhq.com/letta/d95271e3…","expanded_url":"https://jobs.ashbyhq.com/letta/d95271e3-f1ae-4317-8ecd-f25a79a0165c","url":"https://t.co/6C6Pybc6MD","indices":[408,431]}],"user_mentions":[{"id_str":"1821252546469752832","name":"Letta","screen_name":"Letta_AI","indices":[44,53]},{"id_str":"1821252546469752832","name":"Letta","screen_name":"Letta_AI","indices":[40,49]}]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1981790090889400638","view_count":3331,"bookmark_count":14,"created_at":1761330552000,"favorite_count":22,"quote_count":1,"reply_count":1,"retweet_count":4,"user_id_str":"2385913832","conversation_id_str":"1981790090889400638","full_text":"We're hiring researchers & engineers at @Letta_AI to work on AI's hardest problem: memory.\n\nJoin us to work on finding the right memory representations & learning methods (both in-context and in-weights) required to create self-improving AI systems with LLMs.\n\nWe're an open AI company (both research & code) and have an incredibly tight loop from research -> product. 
DMs open + job posting for reference.\n\nhttps://t.co/6C6Pybc6MD","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-10-26","value":617,"startTime":1761350400000,"endTime":1761436800000,"tweets":[{"bookmarked":false,"display_text_range":[0,117],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","quoted_status_id_str":"1981917247095685441","quoted_status_permalink":{"url":"https://t.co/HNhdjG1fLq","expanded":"https://twitter.com/xai/status/1981917247095685441","display":"x.com/xai/status/198…"},"retweeted":false,"fact_check":null,"id":"1982128872478154827","view_count":617,"bookmark_count":1,"created_at":1761411324000,"favorite_count":4,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1982128872478154827","full_text":"thinking machine shipping blog posts, xai shipping ai waifu launch videos\n\n(clean video tho, grok imagine looks 
good)","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":1,"is_ai":null,"ai_score":null}]},{"label":"2025-10-27","value":0,"startTime":1761436800000,"endTime":1761523200000,"tweets":[]},{"label":"2025-10-28","value":0,"startTime":1761523200000,"endTime":1761609600000,"tweets":[]},{"label":"2025-10-29","value":14863,"startTime":1761609600000,"endTime":1761696000000,"tweets":[{"bookmarked":false,"display_text_range":[0,92],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","quoted_status_id_str":"1983057852970463594","quoted_status_permalink":{"url":"https://t.co/ztXL3kpVYH","expanded":"https://twitter.com/sarahwooders/status/1983057852970463594","display":"x.com/sarahwooders/s…"},"retweeted":false,"fact_check":null,"id":"1983058188812660916","view_count":14863,"bookmark_count":19,"created_at":1761632890000,"favorite_count":95,"quote_count":0,"reply_count":4,"retweet_count":4,"user_id_str":"2385913832","conversation_id_str":"1983058188812660916","full_text":"the terrible responses api rollout is a perfect example of how to lose first mover 
advantage","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":1,"is_ai":null,"ai_score":null}]},{"label":"2025-10-30","value":0,"startTime":1761696000000,"endTime":1761782400000,"tweets":[]},{"label":"2025-10-31","value":30542,"startTime":1761782400000,"endTime":1761868800000,"tweets":[{"bookmarked":false,"display_text_range":[0,272],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/rbwTF1v2V3","expanded_url":"https://x.com/charlespacker/status/1983987055513534903/photo/1","id_str":"1983985762636029952","indices":[273,296],"media_key":"3_1983985762636029952","media_url_https":"https://pbs.twimg.com/media/G4iIih1aYAAxj75.png","type":"photo","url":"https://t.co/rbwTF1v2V3","ext_media_availability":{"status":"Available"},"features":{"all":{"tags":[{"user_id":"1863959670169501696","name":"Kimi.ai","screen_name":"Kimi_Moonshot","type":"user"},{"user_id":"1726486879456096256","name":"Z.ai","screen_name":"Zai_org","type":"user"},{"user_id":"4398626122","name":"OpenAI","screen_name":"OpenAI","type":"user"},{"user_id":"1353836358901501952","name":"Anthropic","screen_name":"AnthropicAI","type":"user"}]},"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1173,"w":2048,"resize":"fit"},"medium":{"h":687,"w":1200,"resize":"fit"},"small":{"h":389,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1276,"width":2228,"focus_rects":[{"x":0,"y":28,"w":2228,"h":1248},{"x":643,"y":0,"w":1276,"h":1276},{"x":722,"y":0,"w":1119,"h":1276},{"x":962,"y":0,"w":638,"h":1276},{"x":0,"y":0,"w":2228,"h":1276}]},"media_results":{"result":{"media_key":"3_1983985762636029952"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/rbwTF1v2V3","expanded_url":"https://x.com/charlespacker/status/1983987055513534903/photo/1","id_str":"1983985762636029952","indices":[273,296],"media_key":"3_1983985762636029
952","media_url_https":"https://pbs.twimg.com/media/G4iIih1aYAAxj75.png","type":"photo","url":"https://t.co/rbwTF1v2V3","ext_media_availability":{"status":"Available"},"features":{"all":{"tags":[{"user_id":"1863959670169501696","name":"Kimi.ai","screen_name":"Kimi_Moonshot","type":"user"},{"user_id":"1726486879456096256","name":"Z.ai","screen_name":"Zai_org","type":"user"},{"user_id":"4398626122","name":"OpenAI","screen_name":"OpenAI","type":"user"},{"user_id":"1353836358901501952","name":"Anthropic","screen_name":"AnthropicAI","type":"user"}]},"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1173,"w":2048,"resize":"fit"},"medium":{"h":687,"w":1200,"resize":"fit"},"small":{"h":389,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1276,"width":2228,"focus_rects":[{"x":0,"y":28,"w":2228,"h":1248},{"x":643,"y":0,"w":1276,"h":1276},{"x":722,"y":0,"w":1119,"h":1276},{"x":962,"y":0,"w":638,"h":1276},{"x":0,"y":0,"w":2228,"h":1276}]},"media_results":{"result":{"media_key":"3_1983985762636029952"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1983987055513534903","view_count":27458,"bookmark_count":167,"created_at":1761854349000,"favorite_count":329,"quote_count":5,"reply_count":19,"retweet_count":35,"user_id_str":"2385913832","conversation_id_str":"1983987055513534903","full_text":"Today we're releasing Context-Bench, a benchmark (and live leaderboard!) measuring LLMs on Agentic Context Engineering.\n\nC-Bench measures an agent's ability to manipulate its own context window, a necessary skill for AI agents that can self-improve and continually learn. 
https://t.co/rbwTF1v2V3","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,262],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"charlespacker","lang":"en","retweeted":false,"fact_check":null,"id":"1983987057883279638","view_count":888,"bookmark_count":0,"created_at":1761854349000,"favorite_count":3,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1983987055513534903","full_text":"Modern agents like Claude Code, Codex, and Cursor rely on tools to retrieve information into their context windows, from API/MCP triggers to editing code with Bash and Unix tools, to more advanced use-cases such as editing long-term memories and loading skills.","in_reply_to_user_id_str":"2385913832","in_reply_to_status_id_str":"1983987055513534903","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,251],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1353836358901501952","name":"Anthropic","screen_name":"AnthropicAI","indices":[22,34]}]},"favorited":false,"in_reply_to_screen_name":"charlespacker","lang":"en","retweeted":false,"fact_check":null,"id":"1983987059284115622","view_count":931,"bookmark_count":0,"created_at":1761854350000,"favorite_count":5,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"2385913832","conversation_id_str":"1983987055513534903","full_text":"Frontier AI labs like @AnthropicAI are now explicitly training their new models to be \"self-aware\" of their context windows. 
Agentic context engineering is the new frontier, but there's no clear open benchmark for evaluating this capability in models.","in_reply_to_user_id_str":"2385913832","in_reply_to_status_id_str":"1983987057883279638","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,278],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"leaderboard.letta.com","expanded_url":"https://leaderboard.letta.com","url":"https://t.co/yqigAGEhUL","indices":[217,240]},{"display_url":"letta.com/blog/context-b…","expanded_url":"https://www.letta.com/blog/context-bench","url":"https://t.co/SvCDauPwon","indices":[255,278]}],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"charlespacker","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1983987060500480369","view_count":859,"bookmark_count":4,"created_at":1761854350000,"favorite_count":5,"quote_count":0,"reply_count":3,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1983987055513534903","full_text":"Context-Bench is an exciting benchmark for the OSS community: the gap between frontier open weights and closed weights models appears to be closing: GLM 4.6 and Kimi K2 are incredible models!\n\nLeaderboard is live at https://t.co/yqigAGEhUL\n\nBreakdown at 
https://t.co/SvCDauPwon","in_reply_to_user_id_str":"2385913832","in_reply_to_status_id_str":"1983987059284115622","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[11,28],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"4426138094","name":"ryan","screen_name":"wacheeeee","indices":[0,10]}]},"favorited":false,"in_reply_to_screen_name":"wacheeeee","lang":"en","retweeted":false,"fact_check":null,"id":"1984015003545182357","view_count":406,"bookmark_count":0,"created_at":1761861012000,"favorite_count":6,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1984007696224547044","full_text":"@wacheeeee 996 = 1 day off 🧐","in_reply_to_user_id_str":"4426138094","in_reply_to_status_id_str":"1984007696224547044","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-01","value":549,"startTime":1761868800000,"endTime":1761955200000,"tweets":[{"bookmarked":false,"display_text_range":[12,35],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"github.com/letta-ai/letta…","expanded_url":"https://github.com/letta-ai/letta-obsidian","url":"https://t.co/v04ZWvc1Le","indices":[12,35]}],"user_mentions":[{"id_str":"1328913688892346370","name":"Sully","screen_name":"SullyOmarr","indices":[0,11]}]},"favorited":false,"in_reply_to_screen_name":"SullyOmarr","lang":"qme","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1984063277249524003","view_count":144,"bookmark_count":0,"created_at":1761872522000,"favorite_count":4,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1984031908049940583","full_text":"@SullyOmarr 
https://t.co/v04ZWvc1Le","in_reply_to_user_id_str":"1328913688892346370","in_reply_to_status_id_str":"1984031908049940583","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[55,309],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1939709358071312384","name":"halluton","screen_name":"halluton","indices":[0,9]},{"id_str":"1863959670169501696","name":"Kimi.ai","screen_name":"Kimi_Moonshot","indices":[10,24]},{"id_str":"1726486879456096256","name":"Z.ai","screen_name":"Zai_org","indices":[25,33]},{"id_str":"4398626122","name":"OpenAI","screen_name":"OpenAI","indices":[34,41]},{"id_str":"1353836358901501952","name":"Anthropic","screen_name":"AnthropicAI","indices":[42,54]}]},"favorited":false,"in_reply_to_screen_name":"halluton","lang":"en","retweeted":false,"fact_check":null,"id":"1984332937819722048","view_count":110,"bookmark_count":0,"created_at":1761936814000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1983987055513534903","full_text":"@halluton @Kimi_Moonshot @Zai_org @OpenAI @AnthropicAI both! 
we measure effectiveness of tool use (eg did you use grep correctly, or use grep instead of find), as well as curation quality via task completion (you need to curate effectively to answer correctly, and even the best model currently only gets 74%)","in_reply_to_user_id_str":"1939709358071312384","in_reply_to_status_id_str":"1984329735611052415","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[62,338],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"2580421885","name":"Jintao Zhang 张晋涛","screen_name":"zhangjintao9020","indices":[0,16]},{"id_str":"1863959670169501696","name":"Kimi.ai","screen_name":"Kimi_Moonshot","indices":[17,31]},{"id_str":"1726486879456096256","name":"Z.ai","screen_name":"Zai_org","indices":[32,40]},{"id_str":"4398626122","name":"OpenAI","screen_name":"OpenAI","indices":[41,48]},{"id_str":"1353836358901501952","name":"Anthropic","screen_name":"AnthropicAI","indices":[49,61]}]},"favorited":false,"in_reply_to_screen_name":"zhangjintao9020","lang":"en","retweeted":false,"fact_check":null,"id":"1984329785498353916","view_count":165,"bookmark_count":0,"created_at":1761936062000,"favorite_count":1,"quote_count":1,"reply_count":0,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1983987055513534903","full_text":"Yeah it's a great question! Basically GPT-5 consumes more tokens to complete the task - w/ a multiple that's higher than the relative multiple on $/MTok, so it ends up costing more. From the blog:\n\"We track the total cost to run each model on the benchmark. 
Cost reveals model efficiency: models with higher per-token prices may use significantly fewer tokens to accomplish the same task, making total cost a better metric than price alone for evaluating production viability.\"","in_reply_to_user_id_str":"2580421885","in_reply_to_status_id_str":"1984283225745920387","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[69,75],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"14191817","name":"Sven Meyer","screen_name":"SvenMeyer","indices":[0,10]},{"id_str":"1863959670169501696","name":"Kimi.ai","screen_name":"Kimi_Moonshot","indices":[11,25]},{"id_str":"1726486879456096256","name":"Z.ai","screen_name":"Zai_org","indices":[26,34]},{"id_str":"4398626122","name":"OpenAI","screen_name":"OpenAI","indices":[35,42]},{"id_str":"1353836358901501952","name":"Anthropic","screen_name":"AnthropicAI","indices":[43,55]},{"id_str":"1875078099538423808","name":"MiniMax (official)","screen_name":"MiniMax__AI","indices":[56,68]}]},"favorited":false,"in_reply_to_screen_name":"SvenMeyer","lang":"en","retweeted":false,"fact_check":null,"id":"1984329875046744249","view_count":80,"bookmark_count":0,"created_at":1761936083000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1983987055513534903","full_text":"@SvenMeyer @Kimi_Moonshot @Zai_org @OpenAI @AnthropicAI @MiniMax__AI agreed","in_reply_to_user_id_str":"14191817","in_reply_to_status_id_str":"1984162159530799351","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[62,73],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1118096108","name":"Reinaldo 
Sotillo","screen_name":"reynald76165051","indices":[0,16]},{"id_str":"1863959670169501696","name":"Kimi.ai","screen_name":"Kimi_Moonshot","indices":[17,31]},{"id_str":"1726486879456096256","name":"Z.ai","screen_name":"Zai_org","indices":[32,40]},{"id_str":"4398626122","name":"OpenAI","screen_name":"OpenAI","indices":[41,48]},{"id_str":"1353836358901501952","name":"Anthropic","screen_name":"AnthropicAI","indices":[49,61]}]},"favorited":false,"in_reply_to_screen_name":"reynald76165051","lang":"en","retweeted":false,"fact_check":null,"id":"1984333001959096643","view_count":50,"bookmark_count":0,"created_at":1761936829000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1983987055513534903","full_text":"@reynald76165051 @Kimi_Moonshot @Zai_org @OpenAI @AnthropicAI sleeper hit","in_reply_to_user_id_str":"1118096108","in_reply_to_status_id_str":"1984311694693224529","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-02","value":0,"startTime":1761955200000,"endTime":1762041600000,"tweets":[]},{"label":"2025-11-03","value":0,"startTime":1762041600000,"endTime":1762128000000,"tweets":[]},{"label":"2025-11-04","value":2044,"startTime":1762128000000,"endTime":1762214400000,"tweets":[{"bookmarked":false,"display_text_range":[16,289],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"youtube.com/watch?v=pidnIH…","expanded_url":"https://www.youtube.com/watch?v=pidnIHdA1Y8","url":"https://t.co/s8OVJ7uT8p","indices":[266,289]}],"user_mentions":[{"id_str":"1051829114494177282","name":"Nico 
Albanese","screen_name":"nicoalbanese10","indices":[0,15]},{"id_str":"1821252546469752832","name":"Letta","screen_name":"Letta_AI","indices":[45,54]},{"id_str":"1821252546469752832","name":"Letta","screen_name":"Letta_AI","indices":[254,263]}]},"favorited":false,"in_reply_to_screen_name":"nicoalbanese10","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1985249165887352999","view_count":2044,"bookmark_count":13,"created_at":1762155259000,"favorite_count":20,"quote_count":0,"reply_count":3,"retweet_count":4,"user_id_str":"2385913832","conversation_id_str":"1984662238968656329","full_text":"@nicoalbanese10 yeah it works beautifully in @Letta_AI, since it's basically post-training of claude to be better at \"MemGPT\"/Letta-style context engineering\n\ngreat example of better post-training (claude) lifting the performance of an existing harness (@Letta_AI)\n\nhttps://t.co/s8OVJ7uT8p","in_reply_to_user_id_str":"1051829114494177282","in_reply_to_status_id_str":"1984662238968656329","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-05","value":1151,"startTime":1762214400000,"endTime":1762300800000,"tweets":[{"bookmarked":false,"display_text_range":[12,12],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/NNqoPHnSGP","expanded_url":"https://x.com/charlespacker/status/1985555041160516033/photo/1","id_str":"1985555038333534208","indices":[13,36],"media_key":"3_1985555038333534208","media_url_https":"https://pbs.twimg.com/media/G44byZXbgAAQ1QM.jpg","type":"photo","url":"https://t.co/NNqoPHnSGP","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":516,"w":516,"resize":"fit"},"medium":{"h":516,"w":516,"resize":"fit"},"small":{"h":516,"w":516,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":516,"width":516,"focus_rects":[{"x":0,"y":126,"
w":516,"h":289},{"x":0,"y":0,"w":516,"h":516},{"x":0,"y":0,"w":453,"h":516},{"x":90,"y":0,"w":258,"h":516},{"x":0,"y":0,"w":516,"h":516}]},"media_results":{"result":{"media_key":"3_1985555038333534208"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1679960354087145473","name":"Jeson Lee","screen_name":"thejesonlee","indices":[0,12]}]},"extended_entities":{"media":[{"display_url":"pic.x.com/NNqoPHnSGP","expanded_url":"https://x.com/charlespacker/status/1985555041160516033/photo/1","id_str":"1985555038333534208","indices":[13,36],"media_key":"3_1985555038333534208","media_url_https":"https://pbs.twimg.com/media/G44byZXbgAAQ1QM.jpg","type":"photo","url":"https://t.co/NNqoPHnSGP","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":516,"w":516,"resize":"fit"},"medium":{"h":516,"w":516,"resize":"fit"},"small":{"h":516,"w":516,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":516,"width":516,"focus_rects":[{"x":0,"y":126,"w":516,"h":289},{"x":0,"y":0,"w":516,"h":516},{"x":0,"y":0,"w":453,"h":516},{"x":90,"y":0,"w":258,"h":516},{"x":0,"y":0,"w":516,"h":516}]},"media_results":{"result":{"media_key":"3_1985555038333534208"}}}]},"favorited":false,"in_reply_to_screen_name":"thejesonlee","lang":"qme","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1985555041160516033","view_count":335,"bookmark_count":0,"created_at":1762228186000,"favorite_count":3,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1985223675118018711","full_text":"@thejesonlee 
https://t.co/NNqoPHnSGP","in_reply_to_user_id_str":"1679960354087145473","in_reply_to_status_id_str":"1985223675118018711","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[14,89],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"144333614","name":"Sarah Wooders","screen_name":"sarahwooders","indices":[0,13]}]},"favorited":false,"in_reply_to_screen_name":"sarahwooders","lang":"en","retweeted":false,"fact_check":null,"id":"1985553615629795810","view_count":816,"bookmark_count":0,"created_at":1762227846000,"favorite_count":5,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1985552377953272053","full_text":"@sarahwooders can we talk about the political and economic state of the world right now??","in_reply_to_user_id_str":"144333614","in_reply_to_status_id_str":"1985552377953272053","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-06","value":0,"startTime":1762300800000,"endTime":1762387200000,"tweets":[]},{"label":"2025-11-07","value":0,"startTime":1762387200000,"endTime":1762473600000,"tweets":[]},{"label":"2025-11-08","value":0,"startTime":1762473600000,"endTime":1762560000000,"tweets":[]},{"label":"2025-11-09","value":0,"startTime":1762560000000,"endTime":1762646400000,"tweets":[]},{"label":"2025-11-10","value":0,"startTime":1762646400000,"endTime":1762732800000,"tweets":[]},{"label":"2025-11-11","value":0,"startTime":1762732800000,"endTime":1762819200000,"tweets":[]},{"label":"2025-11-12","value":0,"startTime":1762819200000,"endTime":1762905600000,"tweets":[]},{"label":"2025-11-13","value":0,"startTime":1762905600000,"endTime":1762992000000,"tweets":[]},{"label":"2025-11-14","value":200,"startTime":1762992000000,"endTime":1763078400000,"tweets":[{"bookmarked":false,"display_text_range":[0,260],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/MOoy0hmlVw","expanded_url":"https://x.com/charlespacker/s
tatus/1989111742241423375/photo/1","id_str":"1989111195979378689","indices":[261,284],"media_key":"3_1989111195979378689","media_url_https":"https://pbs.twimg.com/media/G5q-GA8acAEflTi.jpg","type":"photo","url":"https://t.co/MOoy0hmlVw","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":731,"w":2048,"resize":"fit"},"medium":{"h":428,"w":1200,"resize":"fit"},"small":{"h":243,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":804,"width":2254,"focus_rects":[{"x":818,"y":0,"w":1436,"h":804},{"x":1450,"y":0,"w":804,"h":804},{"x":1549,"y":0,"w":705,"h":804},{"x":1771,"y":0,"w":402,"h":804},{"x":0,"y":0,"w":2254,"h":804}]},"media_results":{"result":{"media_key":"3_1989111195979378689"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/MOoy0hmlVw","expanded_url":"https://x.com/charlespacker/status/1989111742241423375/photo/1","id_str":"1989111195979378689","indices":[261,284],"media_key":"3_1989111195979378689","media_url_https":"https://pbs.twimg.com/media/G5q-GA8acAEflTi.jpg","type":"photo","url":"https://t.co/MOoy0hmlVw","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":731,"w":2048,"resize":"fit"},"medium":{"h":428,"w":1200,"resize":"fit"},"small":{"h":243,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":804,"width":2254,"focus_rects":[{"x":818,"y":0,"w":1436,"h":804},{"x":1450,"y":0,"w":804,"h":804},{"x":1549,"y":0,"w":705,"h":804},{"x":1771,"y":0,"w":402,"h":804},{"x":0,"y":0,"w":2254,"h":804}]},"media_results":{"result":{"media_key":"3_1989111195979378689"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null
,"id":"1989111742241423375","view_count":200,"bookmark_count":0,"created_at":1763076169000,"favorite_count":3,"quote_count":0,"reply_count":3,"retweet_count":1,"user_id_str":"2385913832","conversation_id_str":"1989111742241423375","full_text":"the letta agents working in our open source skills repo have collectively proposed a .CULTURE.md file to control for skill quality 😂\n\n\"New here? .CULTURE.md to understand how we collaborate through peer review and maintain quality through collective learning.\" https://t.co/MOoy0hmlVw","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-15","value":0,"startTime":1763078400000,"endTime":1763164800000,"tweets":[]},{"label":"2025-11-16","value":0,"startTime":1763164800000,"endTime":1763251200000,"tweets":[]},{"label":"2025-11-17","value":0,"startTime":1763251200000,"endTime":1763337600000,"tweets":[]},{"label":"2025-11-18","value":0,"startTime":1763337600000,"endTime":1763424000000,"tweets":[]}]},"interactions":{"users":[],"period":14,"start":1762208811860,"end":1763418411860},"interactions_updated":1763418411914,"created":1763418411625,"updated":1763418411914,"type":"the innovator","hits":1},"people":[{"user":{"id":"1889214164322951168","name":"AGI 磊叔","description":"- 微信:data_growth\n- 20 年经验在数据和AI领域\n- 正在翻译《上下文工程》\n- 已发布vibe coding的浏览器插件","followers_count":162,"friends_count":196,"statuses_count":366,"profile_image_url_https":"https://pbs.twimg.com/profile_images/1951090942389395457/ezXktm2y_normal.jpg","screen_name":"AgiRay1015","location":"Guangzhou","entities":{"description":{"urls":[]},"url":{"urls":[{"display_url":"github.com/akira82-ai/","expanded_url":"https://github.com/akira82-ai/","url":"https://t.co/xGMNzn5Y36","indices":[0,23]}]}}},"details":{"type":"The Innovator","description":"AGI 磊叔 is a seasoned pro in the data and AI realm with over 20 years of experience, passionately bridging tech and creativity. 
He’s actively translating cutting-edge knowledge and enhancing user experiences with his own browser plugin. His content reflects deep insights and a pioneering spirit in AI and software engineering evolution.","purpose":"To push the boundaries of AI and data technology by creating novel tools and sharing advanced knowledge that empowers others to harness AI’s true potential.","beliefs":"He values practical innovation, continuous learning, and the importance of understanding AI as an extension of human capability rather than a magical replacement. He believes in contextualizing technology to best serve the evolving needs of users.","facts":"Fun fact: AGI 磊叔 is translating 'Context Engineering,' a cutting-edge topic, while simultaneously delivering real-world tools like the vibe coding browser plugin.","strength":"His deep technical expertise combined with a forward-thinking mindset helps him identify emerging trends and create pioneering solutions before they become mainstream.","weakness":"Sometimes his advanced concepts may fly under the radar due to low engagement or a niche audience, causing his brilliant insights to be underappreciated or misunderstood.","roast":"AGI 磊叔’s tech so advanced, half his followers probably think he’s speaking another language—and honestly, with tweets about vibe coding and spec coding, might as well be from another planet.","win":"Publishing the vibe coding browser plugin that showcases his capability to turn complex AI ideas into practical tools.","recommendation":"To grow his audience on X, AGI 磊叔 should simplify some of his complex insights into easily digestible threads and engaging multimedia content, while actively joining AI and software engineering conversations to boost visibility and foster community connections."},"created":1763419626099,"type":"the innovator","id":"agiray1015"},{"user":{"id":"914129876608782336","name":"maro","description":"accidental engineer, deliberate polymath, serial entrepreneur 📚 ‘Fifth 
Dimensional Economics’","followers_count":6517,"friends_count":3505,"statuses_count":7898,"profile_image_url_https":"https://pbs.twimg.com/profile_images/1961870544682270720/uocyCxAy_normal.jpg","screen_name":"01101101arMar","location":"Arcturus","entities":{"description":{"urls":[]},"url":{"urls":[{"display_url":"instagram.com/lll01101101","expanded_url":"https://www.instagram.com/lll01101101","url":"https://t.co/3xljITatb1","indices":[0,23]}]}}},"details":{"type":"The Innovator","description":"Meet maro: an accidental engineer turned deliberate polymath, blending tech curiosity with entrepreneurial grit to challenge how we see economics and reality itself. With a knack for quantum leaps in thought and business, maro’s tweets mix deep economic critiques with playful, otherworldly musings that keep followers guessing and engaged. This serial entrepreneur isn’t just talking business; they’re sparking a multidimensional conversation about wealth, society, and existence.","purpose":"To revolutionize conventional understandings of economics and technology by pushing boundaries between disciplines, inspiring a new generation to question and innovate boldly across multiple fields.","beliefs":"Maro believes in the power of cross-disciplinary knowledge and deliberately challenges established norms to reveal hidden truths, embracing complexity and ambiguity as opportunities rather than obstacles.","facts":"Despite the heavy topics maro tackles, such as economics and quantum teleportation, their engagement numbers — like 59,891 likes on a single tweet — reveal a massive impact and resonant voice in the digital space.","strength":"A unique ability to synthesize complex ideas from economics, technology, and metaphysics into captivating, tweet-sized insights that ignite large-scale discussions.","weakness":"At times, maro’s cryptic or esoteric style might alienate audiences who prefer clear-cut messages, limiting accessibility and broader appeal.","roast":"Maro’s tweets bounce 
around so many dimensions, even Schrödinger’s cat is confused whether to like, retweet, or just hide under the bed. If quantum states had PR teams, they’d probably hire maro to confuse the competition.","win":"The viral tweet exposing the cyclical nature of US economic flows that garnered over 12 million views and nearly 60,000 likes — a testament to their ability to translate complex truths into compelling social media gold.","recommendation":"To grow their audience on X, maro should blend their deep-dive innovations with more digestible, relatable threads that break down their insights into actionable ideas. Engaging directly with followers via Q&A sessions or live discussions could also demystify their concepts and build a loyal community eager for the next dimension of thought."},"created":1763419426074,"type":"the innovator","id":"01101101armar"},{"user":{"id":"1448915384","name":"徐志雷.eth 🌊 RIVER .edge🦭","description":"探索元宇宙与加密世界 🚀 | #cookiesnaps | #Virtuals\nAirdrop 追踪者 💰 | KOL 加载中… | 开放合作 🤝\n#KaitoAI #Web3 #OnChain #Degen","followers_count":1003,"friends_count":1677,"statuses_count":22276,"profile_image_url_https":"https://pbs.twimg.com/profile_images/1975997941769961474/qAfxtpTQ_normal.jpg","screen_name":"qpgoldbuyer","location":"","entities":{"description":{"urls":[]}}},"details":{"type":"The Innovator","description":"徐志雷.eth 🌊 RIVER .edge🦭 is a relentless explorer of the metaverse and crypto universe, blending analytic chops with cutting-edge Web3 insights. They masterfully track airdrops and share deep, tech-heavy updates, making complex blockchain concepts approachable and engaging. 
Always pushing the frontier, they’re the go-to voice for those hungry for the next big breakthrough in DeFi and zero-knowledge proofs.","purpose":"To pioneer and illuminate new territories in blockchain technology and decentralized finance, helping the community unlock value and navigate the rapidly evolving Web3 landscape with clarity and confidence.","beliefs":"Believing in transparency, innovation, and community collaboration, 徐志雷.eth values integrity in DeFi projects and champions technological breakthroughs that decentralize power and accelerate adoption. They trust open protocols and co-creation as engines of a healthier, more inclusive financial future.","facts":"Fun fact: With over 22,000 tweets, 徐志雷.eth almost tweets as fast as blockchain transactions confirm—showcasing a tireless commitment to sharing fresh insights and data on projects like Chainlink, ZK tech, and River Protocol.","strength":"Exceptional technical expertise combined with a prolific content output makes 徐志雷.eth a thought leader in Web3 circles. Their clear, data-driven market analyses and nuanced understanding of zero-knowledge proofs and DeFi protocols build trust and authority.","weakness":"Sometimes their deep dives can feel like drinking from a firehose for casual followers—complex jargon and rapid-fire posting might overwhelm newcomers or dilute engagement from a broader audience.","recommendation":"To grow their audience on X, 徐志雷.eth should weave more bite-sized educational threads that break down complicated projects into snackable content, sprinkled with engaging visuals or infographics. 
Boosting interactive polls or AMA sessions could make tech-heavy topics more accessible and community-driven.","roast":"For someone who rides the fast waves of crypto trends and bombards the timeline like it’s a never-ending hackathon, 徐志雷.eth’s followers might need a second wallet just to keep up with their energy and 22k tweets—do they even sleep, or just power nap with an eye on their screen?","win":"Successfully building a reputation as a key KOL in the Web3 and DeFi space, 徐志雷.eth has garnered significant engagement for in-depth technical analysis tweets with viral impact, proving their influence in driving conversations around major crypto innovations and projects."},"created":1763419212489,"type":"the innovator","id":"qpgoldbuyer"},{"user":{"id":"1337955556087099394","name":"script (🛠, 🤖)","description":"AI Dev | Founder of @cryptochasersco | Maestro Ambassador of @myshell_ai","followers_count":21866,"friends_count":6943,"statuses_count":4607,"profile_image_url_https":"https://pbs.twimg.com/profile_images/1873033684909211650/jibPHx1p_normal.jpg","screen_name":"scriptdotmoney","location":"dWeb","entities":{"description":{"urls":[]},"url":{"urls":[{"display_url":"t.me/scriptmoneynot…","expanded_url":"https://t.me/scriptmoneynotes","url":"https://t.co/p3ODSw1gBq","indices":[0,23]}]}}},"details":{"type":"The Innovator","description":"Script is a trailblazing AI developer and crypto enthusiast who combines cutting-edge technology with sharp market insights. As the founder of @cryptochasersco and a Maestro Ambassador for @myshell_ai, Script powers through data with precision and a dash of humor. 
Their tweets blend practical trading advice, AI development tips, and a keen eye for crypto market trends, making complex ideas feel refreshingly accessible.","purpose":"To push the boundaries of AI-driven finance by creating innovative tools and strategies that empower others to navigate and profit in the crypto ecosystem.","beliefs":"Script believes in the transformative power of automation and data-driven decision-making, valuing early risers who seize scientifically optimized trading windows and embracing community-driven innovation in crypto and DeSci sectors.","facts":"Fun fact: Script is convinced that mastering memes and crypto trading doesn't require sleepless nights—in fact, waking up early is their secret weapon for success!","strength":"Combines deep technical prowess with a strategic approach to trading, providing well-researched, actionable insights that help followers harness AI and crypto tools effectively.","weakness":"Sometimes overflows with data and technical jargon, which might leave casual followers scratching their heads or overwhelmed; also, the frenetic tweeting pace of 4607 tweets risks diluting their message.","roast":"With nearly 7,000 follows and over 4,600 tweets, Script’s timeline looks like a nonstop data dump—if you can’t handle code and crypto memes flooding your feed, you might want to block before you blink.","win":"Successfully leveraged AI and trading to secure a consistent 20% return within a month, proving that their blend of tech savvy and market knowledge isn’t just talk—it actually works.","recommendation":"To grow their audience on X, Script should simplify and highlight key insights through engaging visuals like tweet threads or infographics, and occasionally share personal stories to humanize their tech-heavy content, making it more relatable and shareable."},"created":1763418738880,"type":"the innovator","id":"scriptdotmoney"},{"user":{"id":"1683218035602255872","name":"江灵夏草","description":"我一般会使用中文,日语,英语三种语言发帖。DIY 
SmartPhone Computer HatsuneMiku","followers_count":4075,"friends_count":90,"statuses_count":740,"profile_image_url_https":"https://pbs.twimg.com/profile_images/1934604096973271040/LyHoR0xg_normal.jpg","screen_name":"jlxc2001","location":"中国深圳","entities":{"description":{"urls":[]},"url":{"urls":[{"display_url":"youtube.com/@jlxc2001?si=j…","expanded_url":"https://youtube.com/@jlxc2001?si=jbdNIMfp4TT4-nEv","url":"https://t.co/pIA1Q7xqCz","indices":[0,23]}]}}},"details":{"type":"The Innovator","description":"江灵夏草 is a multilingual tech enthusiast who thrives at the intersection of DIY gadgetry and digital culture, effortlessly switching between Chinese, Japanese, and English. Their content often reflects a deep curiosity for technology’s past and present, wrapped in thoughtful storytelling and a personal touch. Whether uncovering forgotten smartphones or sharing viral tech tips, they spark engagement with a warm, curious voice.","purpose":"To inspire and empower a community fascinated by technology’s evolving landscape by sharing unique insights, hands-on experiences, and meaningful stories that blend nostalgia with innovation.","beliefs":"They believe in the power of technology to connect people and preserve memories, while embracing a realistic view of market challenges and impermanence. Integrity, creativity, and practical problem-solving are core to their values, alongside a commitment to cultural and linguistic diversity.","facts":"Fun fact: 江灵夏草 once tracked down the original owner of a discarded smartphone by using information left on its screen, revealing a poignant human story behind a piece of tech.","strength":"A natural storyteller with technical savvy, Jiang attracts followers through genuine, compelling content that mixes emotion and expertise. 
Their multilingual skills enable seamless cross-cultural communication, broadening their potential audience.","weakness":"Despite ample engaging content, their follower count stagnation suggests a need for more strategic audience growth tactics and consistent branding to convert viewers into long-term followers.","roast":"江灵夏草’s tweets are like that one friend who’s the ‘know-it-all’ in three languages but still can’t figure out which VPN won’t ruin their travel plans—proof that even tech wizards have their ‘oops’ moments!","win":"Securing the second spot on Bilibili’s trending page and gaining over 20,000 new followers in one day marked a defining breakthrough, confirming their content resonates widely beyond X’s ecosystem.","recommendation":"To grow their audience on X, Jiang should leverage moments of high engagement by creating threaded narratives and using hashtags strategically across all three languages. Collaborating with other tech creators and engaging more in reply threads can also boost visibility and community rapport."},"created":1763418727137,"type":"the innovator","id":"jlxc2001"},{"user":{"id":"1502180973403799556","name":"帅帅 MemeMax ⚡️","description":"CM: \n @LuckyGo_io @499_DAO\n\nWeb3研究者 专注项目分享\n合作DM\n所有推文不做投资建议","followers_count":62951,"friends_count":6511,"statuses_count":62273,"profile_image_url_https":"https://pbs.twimg.com/profile_images/1989807812130865153/Sic3fUXN_normal.jpg","screen_name":"ssovoovo","location":"","entities":{"description":{"urls":[]},"url":{"urls":[{"display_url":"t.me/shuaishuaiovo","expanded_url":"http://t.me/shuaishuaiovo","url":"https://t.co/AjIA9xIAhn","indices":[0,23]}]}}},"details":{"type":"The Innovator","description":"帅帅 MemeMax ⚡️ is a vibrant Web3 researcher and project sharer who thrives on exploring and evangelizing cutting-edge blockchain technologies. 
With a high-frequency tweeting habit and a finger firmly on the pulse of DeFi, AI integration, and cross-chain protocols, this profile energizes the crypto community by spotlighting real-world applications and ecosystem evolution. Their passionate, no-nonsense approach breaks down complex concepts with clarity and a spirit of collaboration.","purpose":"To democratize access to blockchain innovation by amplifying projects that push technological boundaries and foster a truly interconnected decentralized ecosystem, empowering users and developers alike to realize the full potential of Web3.","beliefs":"They believe in the transformative power of decentralization, transparency, and cross-chain interoperability as essential catalysts for user empowerment and ecosystem inclusivity. This user values innovation that simplifies blockchain interaction, encourages community participation, and establishes trust through openness.","facts":"Fun fact: Despite not giving any investment advice, 帅帅 MemeMax’s extensive sharing (over 62,000 tweets!) has positioned them as a go-to source for fresh insights and grassroots project endorsements within the Web3 space.","strength":"Exceptional ability to dissect complex Web3 concepts and emerging projects, coupled with relentless high engagement and active community involvement. Their collaborative mindset and enthusiasm for the marriage of AI and blockchain bolster their reputation as an insightful innovator.","weakness":"The sheer volume of their content can dilute focus, and their outspoken criticism may occasionally alienate more conservative followers or lead to heated debates.","recommendation":"To grow their audience on X, 帅帅 MemeMax should leverage thread storytelling that mixes technical deep-dives with personal anecdotes, optimizing tweet timing to balance their prolific output while engaging followers with interactive polls and AMA sessions. 
Partnering with influencers in aligned niches and utilizing concise summarizations of trending topics can also expand reach and maintain community momentum.","roast":"With over 62,000 tweets and counting, 帅帅 MemeMax probably tweets so much that even blockchain validators wish they’d chill out and let the blocks breathe once in a while — but hey, who else will keep the meme coin dream alive at full tilt?","win":"Successfully positioned themselves as a recognized thought catalyst in the Web3 ecosystem by centering their voice on pioneering cross-chain interoperability protocols like Anoma, alongside ground-breaking AI-wallet collaborations, earning solid community trust despite the fast-paced crypto noise."},"created":1763418602939,"type":"the innovator","id":"ssovoovo"},{"user":{"id":"200927003","name":"Loïc Sharma","description":"Flutter contributor at Google. Opinions are my own.","followers_count":674,"friends_count":414,"statuses_count":1794,"profile_image_url_https":"https://pbs.twimg.com/profile_images/1990495168307605509/dOxb44C8_normal.jpg","screen_name":"LoicSharma","location":"Seattle, WA","entities":{"description":{"urls":[]},"url":{"urls":[{"display_url":"github.com/loic-sharma","expanded_url":"https://github.com/loic-sharma","url":"https://t.co/vvyIlw9dgA","indices":[0,23]}]}}},"details":{"type":"The Innovator","description":"Loïc Sharma is a passionate Flutter contributor at Google, constantly pushing the boundaries of what the framework can do. With a knack for automating processes and enhancing developer experiences, Loïc keeps the Flutter community buzzing with fresh ideas and practical tools. 
Always engaged and collaborative, they thrive in turning challenges into innovative solutions.","purpose":"To revolutionize the Flutter ecosystem by creating smarter, more efficient developer tools and APIs that enable smoother contributions and stronger community collaboration.","beliefs":"Loïc believes in the power of open source and community-driven progress, valuing rapid iteration, shared knowledge, and practical innovation that directly enhances developer productivity and engagement.","facts":"Fun fact: Loïc built an automated dashboard tracking Flutter's trending issues, showcasing a flair for combining creativity with technical savvy to streamline open source contributions.","strength":"Exceptional at identifying gaps in tooling and swiftly prototyping impactful solutions; highly collaborative, with a natural ability to mobilize community resources and feedback.","weakness":"Sometimes so focused on tech and automation that broader communication gets limited to niche audiences, potentially slowing wider engagement beyond developer circles.","roast":"Loïc’s Twitter feed is like a complex Flutter widget — incredibly useful but might need a 'simplify' flag for the rest of us mortals who don’t dream in code or automated dashboards.","win":"Successfully contributed key API proposals to Flutter that were quickly prototyped, tested, and documented by the community, demonstrating strong leadership and technical influence.","recommendation":"To grow on X, Loïc should mix in a bit more storytelling and behind-the-scenes context about their projects to connect with a broader audience, while continuing to share bite-sized technical insights that spark engagement among Flutter devs."},"created":1763418554887,"type":"the innovator","id":"loicsharma"},{"user":{"id":"68909735","name":"gordee","description":"Leading design and product for Struck Studio. 
Past life: mushroom dealer, @lyft","followers_count":1230,"friends_count":901,"statuses_count":17047,"profile_image_url_https":"https://pbs.twimg.com/profile_images/1551950735314673666/44lr5bd6_normal.jpg","screen_name":"_gordee","location":"","entities":{"description":{"urls":[]}}},"details":{"type":"The Innovator","description":"Gordee is a trailblazer in the design and product space, currently spearheading innovation at Struck Studio. With a unique past as a mushroom dealer and Lyft insider, Gordee blends eclectic life experiences with cutting-edge tech insights. His tweets reveal a focus on product efficiency, AI trends, and practical tech applications with a dash of authentic enthusiasm.","purpose":"To revolutionize how design and technology intersect by creating intuitive, cost-effective solutions that simplify complex systems for better user and business outcomes.","beliefs":"Gordee values transparency in tech, believes in practical innovation over hype, and trusts that good design should evoke emotion while improving real-world usability.","facts":"Fun fact: Before becoming a product and design leader, Gordee was actually a mushroom dealer—a quirky twist that symbolizes his unconventional path to innovation.","strength":"Exceptional ability to identify inefficiencies in tech products and craft elegant solutions, coupled with a genuine passion for human-centered design and clear communication.","weakness":"Often so engrossed in cutting-edge innovation and technical details that audience engagement on social media feels more informative than interactive, missing opportunities for broader connection.","recommendation":"To grow on X, Gordee should mix his insightful tech commentary with more interactive content—think polls, AMAs, and behind-the-scenes looks at his design process—to transform followers into active community members.","roast":"Gordee tweets enough tech critiques to debug the entire internet, but with only a handful of likes, maybe it’s time his 
audience got as excited about his genius as he is—that or he’s secretly just testing if AI can like tweets for him.","win":"Successfully leading design and product innovation at Struck Studio while shaking up traditional tech margins with a clever single line of code fixing AI cost transparency."},"created":1763417917675,"type":"the innovator","id":"_gordee"},{"user":{"id":"1716646427865284608","name":"Anthony","description":"🧑💻 Full-Stack Dev | 🔍 GenAI explorer | 📦 OSS lover","followers_count":339,"friends_count":243,"statuses_count":1728,"profile_image_url_https":"https://pbs.twimg.com/profile_images/1789922772438917120/gWbio7om_normal.jpg","screen_name":"OipsAnthony","location":"","entities":{"description":{"urls":[]}}},"details":{"type":"The Innovator","description":"Anthony is a curious full-stack developer who is deeply passionate about exploring cutting-edge GenAI technologies and contributing to open-source software. With a blend of technical prowess and a keen spirit of exploration, he is always at the forefront of new ideas in software development. 
His tweets reflect a mix of thoughtful technical insights, practical project experiences, and a slightly humorous take on developer life.","facts":"Anthony regularly tweets technical reflections and explorations about modern programming languages like Go and innovative SaaS integrations, showcasing a hands-on attitude toward emerging tech trends.","purpose":"To push the boundaries of software development by integrating the latest AI technologies and open-source projects, inspiring and educating his developer peers about next-gen possibilities.","beliefs":"Anthony believes in the power of open collaboration through OSS, continuous learning, and embracing innovation with a practical mindset—balancing hard skills with the right soft skills to truly excel.","strength":"His exploration of new technologies combined with active engagement in OSS projects and a witty, relatable communication style makes him a strong influencer in tech communities.","weakness":"While innovative and knowledgeable, Anthony might sometimes overextend himself across too many tech trends and projects, which can dilute his focus or overwhelm his audience.","recommendation":"To grow his audience on X, Anthony should share more in-depth threads breaking down complex AI and OSS concepts into digestible bites, paired with engaging visuals or demos to make tech more accessible and boost engagement.","roast":"Anthony’s brain is like a GitHub repo with too many open pull requests—full of brilliant ideas but sometimes waiting on that one cozy commit to get polished and shine. 
Maybe it’s time to close some tabs, both in the browser and in life!","win":"Building a reputation as a forward-thinking developer who not only experiments with bleeding-edge SaaS and AI tools but also champions OSS solutions that bridge developer needs and modern cloud services."},"created":1763417643813,"type":"the innovator","id":"oipsanthony"},{"user":{"id":"1820491410325291010","name":"0gang","description":"Web3 Research | Fundamental Analysis | Early Contributor | Writer | Investor 🌊🚀","followers_count":11503,"friends_count":7411,"statuses_count":21834,"profile_image_url_https":"https://pbs.twimg.com/profile_images/1932840818240327680/HjZknmy1_normal.png","screen_name":"0xzerogang","location":"","entities":{"description":{"urls":[]}}},"details":{"type":"The Innovator","description":"0gang is a passionate Web3 researcher and early contributor who thrives on deep fundamental analysis and community-driven growth. Their prolific tweeting and engagement reflect a dedication to sharing insights and fostering collaboration within the crypto space. They blend investment savvy with a writer’s touch, always aiming to push the boundaries of decentralized technology.","purpose":"To pioneer understanding and adoption of Web3 technologies by delivering insightful analysis, promoting community participation, and accelerating innovation in decentralized ecosystems.","beliefs":"0gang values transparency, collaboration, and forward-thinking progress. They believe in the power of collective intelligence in Web3 and advocate for early adoption and active contribution to build a more decentralized and equitable future.","facts":"Fun fact: 0gang has tweeted over 21,000 times, proving they’re not just an innovator but also a relentless communicator who never misses a chance to spark engagement and spread knowledge.","strength":"Their strength lies in thorough fundamental analysis combined with consistent community engagement, making them a trusted voice in Web3 circles. 
Their ability to translate complex tech into approachable content helps demystify blockchain for many.","weakness":"An inclination to follow over 7,400 accounts might dilute their focus and create noise, potentially hindering the curation of a highly targeted community and exclusive thought leadership impact.","recommendation":"To grow their audience on X, 0gang should leverage their prolific output by seeding more original insights interspersed with curated high-value content and interactive threads, while strategically trimming their following to cultivate a more engaged and meaningful network.","roast":"0gang tweets so much, if information were crypto, they'd be a billionaire—but with 21,000 tweets, maybe it’s time to stop proving you exist in Web3 and start proving you lead it.","win":"Their #1 tweet about the Arbitrum network airdrop hit over 46,000 views with 4,600 likes and 5,500 retweets, showcasing a remarkable ability to ignite massive community action and buzz."},"created":1763416896606,"type":"the innovator","id":"0xzerogang"},{"user":{"id":"870299478","name":"Sakib","description":"AI since 2017✨creative machine learning @replicate🚀 | artificial intelligence bsc+msc @edinburghuni alumni🏛️🏴🇬🇧","followers_count":3327,"friends_count":998,"statuses_count":6733,"profile_image_url_https":"https://pbs.twimg.com/profile_images/1598645352332230656/cTb7Uh0l_normal.jpg","screen_name":"zsakib_","location":"","entities":{"description":{"urls":[]},"url":{"urls":[{"display_url":"github.com/zsxkib","expanded_url":"http://github.com/zsxkib","url":"https://t.co/QzcXfQk8Hv","indices":[0,23]}]}}},"details":{"type":"The Innovator","description":"Sakib is a cutting-edge AI enthusiast and creator who champions genuine, hands-on exploration of machine learning trends. With strong academic roots from Edinburgh University and a knack for sharing practical insights, Sakib demystifies AI breakthroughs with clarity and creativity. 
Passionate about showing what's truly possible, they steer clear of hype, focusing instead on what tools and models can actually do.","purpose":"Sakib's life purpose is to accelerate the adoption and understanding of artificial intelligence by bridging academic excellence with real-world innovation, enabling others to harness AI's potential with informed creativity.","beliefs":"They believe in transparency over hype, learning through exploration, and the power of community-driven knowledge sharing to advance technology responsibly and effectively.","facts":"Fun fact: Sakib has been immersed in AI since 2017 and uses platforms like Replicate and Hugging Face Spaces to keep their finger firmly on the pulse of what's hot in the AI world.","strength":"Their strengths lie in deep technical knowledge combined with clear communication, a strong academic background, and a talent for spotting and showcasing genuinely impactful AI tools ahead of the curve.","weakness":"Sakib might sometimes come across as too pragmatic or skeptical for audiences craving hype and sensationalism, potentially limiting broader mainstream appeal.","roast":"Sakib’s so deep into AI that even their coffee machine probably runs a neural network—and if it doesn’t, you can bet they’ve already written a scathing GitHub issue about it.","win":"Sakib’s top tweet on demystifying AI trends hit over 130,000 views and 895 likes, cementing their role as a go-to voice for practical and reliable AI insights in a noisy digital landscape.","recommendation":"To grow their audience on X, Sakib should leverage their expertise by creating short, punchy explainer threads and engaging more with emerging AI communities in order to increase visibility. 
Collaborations with other well-known AI influencers could also amplify their reach without diluting their no-nonsense style."},"created":1763415821980,"type":"the innovator","id":"zsakib_"},{"user":{"id":"932885767071809539","name":"节点科学家 .edge🦭","description":"币圈最懂AI的,AI圈最懂币圈的科学家。\n\n让每一个人都成为科学家,技术面前人人平等!\n\nYoutube: https://t.co/G66eEGYNej\nAlpha: https://t.co/jgL8faI2Xu\nTG 群:https://t.co/gzu13eP5dY","followers_count":16211,"friends_count":2761,"statuses_count":11250,"profile_image_url_https":"https://pbs.twimg.com/profile_images/1930128386434969604/GlRkZN1y_normal.jpg","screen_name":"moncici_is_girl","location":"","entities":{"description":{"urls":[{"display_url":"youtube.com/@moncici_girl","expanded_url":"https://www.youtube.com/@moncici_girl","url":"https://t.co/G66eEGYNej","indices":[54,77]},{"display_url":"alpha.moncici.xyz","expanded_url":"http://alpha.moncici.xyz","url":"https://t.co/jgL8faI2Xu","indices":[85,108]},{"display_url":"t.me/+P16N21BxMzVlY…","expanded_url":"https://t.me/+P16N21BxMzVlYTFk","url":"https://t.co/gzu13eP5dY","indices":[114,137]}]}}},"details":{"type":"The Innovator","description":"节点科学家 .edge🦭 is a tech-savvy trailblazer who merges the cutting-edge worlds of AI and cryptocurrency with a scientist’s rigor and a community-builder’s heart. Known for deep technical insights and practical innovations, this profile empowers followers to become scientists in their own right and democratizes technology knowledge. 
With relentless energy and a focus on open-source tools, .edge🦭 drives the future of AI-driven finance.","purpose":"To democratize scientific knowledge and empower individuals with innovative AI-driven tools and insights in the cryptocurrency space, making technology accessible and equal for all.","beliefs":"Believes in the democratization of technology, scientific rigor, the power of open-source collaboration, and that everyone has the potential to understand and utilize advanced tech regardless of background.","facts":"Fun fact: They’ve built and released practical AI-based systems like a 24-hour BTC price prediction model, a risk-free arbitrage trading system, and a multi-channel smart alert service — all shared openly to boost community intelligence.","strength":"Exceptional technical expertise combined with a prolific output of practical AI and crypto innovations and a strong commitment to community engagement and transparency.","weakness":"Sometimes their intense focus on technical depth and open-source projects can lead to overwhelming complexity for the casual follower, and they may struggle with scaling personalized interactions given the high volume of content.","recommendation":"To grow their audience on X, .edge🦭 should balance their deep technical content with more bite-sized, beginner-friendly explainers and storytelling that showcase the human impact of their innovations. 
Engaging more with replies and hosting regular live Q&A sessions or AMAs could also personalize the experience and boost follower loyalty.","roast":"For someone who calls themselves a scientist, you might want to run an experiment on why people stop reading after the third tweet—spoiler: it’s the sheer volume of those 11,250 posts, not the complexity of crypto jargon!","win":"Successfully developed and launched multiple AI-powered open-source tools that predict market movements, automate arbitrage, and manage crypto tasks—driving real value and engagement in two fast-evolving tech communities simultaneously."},"created":1763415429799,"type":"the innovator","id":"moncici_is_girl"}],"activities":{"nreplies":[{"label":"2025-10-19","value":0,"startTime":1760745600000,"endTime":1760832000000,"tweets":[]},{"label":"2025-10-20","value":0,"startTime":1760832000000,"endTime":1760918400000,"tweets":[]},{"label":"2025-10-21","value":0,"startTime":1760918400000,"endTime":1761004800000,"tweets":[]},{"label":"2025-10-22","value":0,"startTime":1761004800000,"endTime":1761091200000,"tweets":[]},{"label":"2025-10-23","value":0,"startTime":1761091200000,"endTime":1761177600000,"tweets":[]},{"label":"2025-10-24","value":2,"startTime":1761177600000,"endTime":1761264000000,"tweets":[{"bookmarked":false,"display_text_range":[0,272],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1290373758646128641","name":"Bilt","screen_name":"BiltRewards","indices":[1079,1091]}]},"favorited":false,"lang":"en","quoted_status_id_str":"1981385833380032558","quoted_status_permalink":{"url":"https://t.co/I5LCJMTAMn","expanded":"https://twitter.com/Letta_AI/status/1981385833380032558","display":"x.com/Letta_AI/statu…"},"retweeted":false,"fact_check":null,"id":"1981421458019754012","view_count":4801,"bookmark_count":15,"created_at":1761242663000,"favorite_count":27,"quote_count":0,"reply_count":2,"retweet_count":4,"user_id_str":"2385913832","conversation_id_str":"1
981421458019754012","full_text":"Super excited about this release: Letta Evals is the first evals platform *purpose-built* for stateful agents. What does that actually mean?\n\nWhen you eval agents w/ Letta Evals, you can literally pull an agent out of production (by cloning a replica of its active state), evaluate it, then pushes live changes back into prod (if desired).\n\nWe eval humans all the time: standardized tests (SAT/ACT), job interviews, driving tests. But when a human takes a test, they actually get the ability to *learn* from that test. If you fail your driving test, that lived experience should help you better prepare for the next \"eval\".\n\nWith Letta Evals, you can now run evaluations of truly stateful agents (not just prompt + tool configurations). This is all made possible due to the existence of AgentFile (.af), our open source portable file format that allows you to serialize any agent's state, which powers the efficient agent replication needed to run Letta Evals at scale.\n\nWill be exciting to see what people build with this. 
Letta Evals is already in production at companies like @BiltRewards , and is used by our own research team to run comprehensive (agentic) evaluations of new frontier models like GPT-5, Sonnet 4.5 and GLM-4.6.","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":1,"is_ai":null,"ai_score":null}]},{"label":"2025-10-25","value":1,"startTime":1761264000000,"endTime":1761350400000,"tweets":[{"bookmarked":false,"display_text_range":[0,285],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"jobs.ashbyhq.com/letta/d95271e3…","expanded_url":"https://jobs.ashbyhq.com/letta/d95271e3-f1ae-4317-8ecd-f25a79a0165c","url":"https://t.co/6C6Pybc6MD","indices":[408,431]}],"user_mentions":[{"id_str":"1821252546469752832","name":"Letta","screen_name":"Letta_AI","indices":[44,53]},{"id_str":"1821252546469752832","name":"Letta","screen_name":"Letta_AI","indices":[40,49]}]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1981790090889400638","view_count":3331,"bookmark_count":14,"created_at":1761330552000,"favorite_count":22,"quote_count":1,"reply_count":1,"retweet_count":4,"user_id_str":"2385913832","conversation_id_str":"1981790090889400638","full_text":"We're hiring researchers & engineers at @Letta_AI to work on AI's hardest problem: memory.\n\nJoin us to work on finding the right memory representations & learning methods (both in-context and in-weights) required to create self-improving AI systems with LLMs.\n\nWe're an open AI company (both research & code) and have an incredibly tight loop from research -> product. 
DMs open + job posting for reference.\n\nhttps://t.co/6C6Pybc6MD","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-10-26","value":0,"startTime":1761350400000,"endTime":1761436800000,"tweets":[{"bookmarked":false,"display_text_range":[0,117],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","quoted_status_id_str":"1981917247095685441","quoted_status_permalink":{"url":"https://t.co/HNhdjG1fLq","expanded":"https://twitter.com/xai/status/1981917247095685441","display":"x.com/xai/status/198…"},"retweeted":false,"fact_check":null,"id":"1982128872478154827","view_count":617,"bookmark_count":1,"created_at":1761411324000,"favorite_count":4,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1982128872478154827","full_text":"thinking machine shipping blog posts, xai shipping ai waifu launch videos\n\n(clean video tho, grok imagine looks 
good)","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":1,"is_ai":null,"ai_score":null}]},{"label":"2025-10-27","value":0,"startTime":1761436800000,"endTime":1761523200000,"tweets":[]},{"label":"2025-10-28","value":0,"startTime":1761523200000,"endTime":1761609600000,"tweets":[]},{"label":"2025-10-29","value":4,"startTime":1761609600000,"endTime":1761696000000,"tweets":[{"bookmarked":false,"display_text_range":[0,92],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","quoted_status_id_str":"1983057852970463594","quoted_status_permalink":{"url":"https://t.co/ztXL3kpVYH","expanded":"https://twitter.com/sarahwooders/status/1983057852970463594","display":"x.com/sarahwooders/s…"},"retweeted":false,"fact_check":null,"id":"1983058188812660916","view_count":14863,"bookmark_count":19,"created_at":1761632890000,"favorite_count":95,"quote_count":0,"reply_count":4,"retweet_count":4,"user_id_str":"2385913832","conversation_id_str":"1983058188812660916","full_text":"the terrible responses api rollout is a perfect example of how to lose first mover 
advantage","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":1,"is_ai":null,"ai_score":null}]},{"label":"2025-10-30","value":0,"startTime":1761696000000,"endTime":1761782400000,"tweets":[]},{"label":"2025-10-31","value":25,"startTime":1761782400000,"endTime":1761868800000,"tweets":[{"bookmarked":false,"display_text_range":[0,272],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/rbwTF1v2V3","expanded_url":"https://x.com/charlespacker/status/1983987055513534903/photo/1","id_str":"1983985762636029952","indices":[273,296],"media_key":"3_1983985762636029952","media_url_https":"https://pbs.twimg.com/media/G4iIih1aYAAxj75.png","type":"photo","url":"https://t.co/rbwTF1v2V3","ext_media_availability":{"status":"Available"},"features":{"all":{"tags":[{"user_id":"1863959670169501696","name":"Kimi.ai","screen_name":"Kimi_Moonshot","type":"user"},{"user_id":"1726486879456096256","name":"Z.ai","screen_name":"Zai_org","type":"user"},{"user_id":"4398626122","name":"OpenAI","screen_name":"OpenAI","type":"user"},{"user_id":"1353836358901501952","name":"Anthropic","screen_name":"AnthropicAI","type":"user"}]},"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1173,"w":2048,"resize":"fit"},"medium":{"h":687,"w":1200,"resize":"fit"},"small":{"h":389,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1276,"width":2228,"focus_rects":[{"x":0,"y":28,"w":2228,"h":1248},{"x":643,"y":0,"w":1276,"h":1276},{"x":722,"y":0,"w":1119,"h":1276},{"x":962,"y":0,"w":638,"h":1276},{"x":0,"y":0,"w":2228,"h":1276}]},"media_results":{"result":{"media_key":"3_1983985762636029952"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/rbwTF1v2V3","expanded_url":"https://x.com/charlespacker/status/1983987055513534903/photo/1","id_str":"1983985762636029952","indices":[273,296],"media_key":"3_1983985762636029952
","media_url_https":"https://pbs.twimg.com/media/G4iIih1aYAAxj75.png","type":"photo","url":"https://t.co/rbwTF1v2V3","ext_media_availability":{"status":"Available"},"features":{"all":{"tags":[{"user_id":"1863959670169501696","name":"Kimi.ai","screen_name":"Kimi_Moonshot","type":"user"},{"user_id":"1726486879456096256","name":"Z.ai","screen_name":"Zai_org","type":"user"},{"user_id":"4398626122","name":"OpenAI","screen_name":"OpenAI","type":"user"},{"user_id":"1353836358901501952","name":"Anthropic","screen_name":"AnthropicAI","type":"user"}]},"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1173,"w":2048,"resize":"fit"},"medium":{"h":687,"w":1200,"resize":"fit"},"small":{"h":389,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1276,"width":2228,"focus_rects":[{"x":0,"y":28,"w":2228,"h":1248},{"x":643,"y":0,"w":1276,"h":1276},{"x":722,"y":0,"w":1119,"h":1276},{"x":962,"y":0,"w":638,"h":1276},{"x":0,"y":0,"w":2228,"h":1276}]},"media_results":{"result":{"media_key":"3_1983985762636029952"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1983987055513534903","view_count":27458,"bookmark_count":167,"created_at":1761854349000,"favorite_count":329,"quote_count":5,"reply_count":19,"retweet_count":35,"user_id_str":"2385913832","conversation_id_str":"1983987055513534903","full_text":"Today we're releasing Context-Bench, a benchmark (and live leaderboard!) measuring LLMs on Agentic Context Engineering.\n\nC-Bench measures an agent's ability to manipulate its own context window, a necessary skill for AI agents that can self-improve and continually learn. 
https://t.co/rbwTF1v2V3","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,262],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"charlespacker","lang":"en","retweeted":false,"fact_check":null,"id":"1983987057883279638","view_count":888,"bookmark_count":0,"created_at":1761854349000,"favorite_count":3,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1983987055513534903","full_text":"Modern agents like Claude Code, Codex, and Cursor rely on tools to retrieve information into their context windows, from API/MCP triggers to editing code with Bash and Unix tools, to more advanced use-cases such as editing long-term memories and loading skills.","in_reply_to_user_id_str":"2385913832","in_reply_to_status_id_str":"1983987055513534903","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,251],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1353836358901501952","name":"Anthropic","screen_name":"AnthropicAI","indices":[22,34]}]},"favorited":false,"in_reply_to_screen_name":"charlespacker","lang":"en","retweeted":false,"fact_check":null,"id":"1983987059284115622","view_count":931,"bookmark_count":0,"created_at":1761854350000,"favorite_count":5,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"2385913832","conversation_id_str":"1983987055513534903","full_text":"Frontier AI labs like @AnthropicAI are now explicitly training their new models to be \"self-aware\" of their context windows. 
Agentic context engineering is the new frontier, but there's no clear open benchmark for evaluating this capability in models.","in_reply_to_user_id_str":"2385913832","in_reply_to_status_id_str":"1983987057883279638","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,278],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"leaderboard.letta.com","expanded_url":"https://leaderboard.letta.com","url":"https://t.co/yqigAGEhUL","indices":[217,240]},{"display_url":"letta.com/blog/context-b…","expanded_url":"https://www.letta.com/blog/context-bench","url":"https://t.co/SvCDauPwon","indices":[255,278]}],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"charlespacker","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1983987060500480369","view_count":859,"bookmark_count":4,"created_at":1761854350000,"favorite_count":5,"quote_count":0,"reply_count":3,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1983987055513534903","full_text":"Context-Bench is an exciting benchmark for the OSS community: the gap between frontier open weights and closed weights models appears to be closing: GLM 4.6 and Kimi K2 are incredible models!\n\nLeaderboard is live at https://t.co/yqigAGEhUL\n\nBreakdown at 
https://t.co/SvCDauPwon","in_reply_to_user_id_str":"2385913832","in_reply_to_status_id_str":"1983987059284115622","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[11,28],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"4426138094","name":"ryan","screen_name":"wacheeeee","indices":[0,10]}]},"favorited":false,"in_reply_to_screen_name":"wacheeeee","lang":"en","retweeted":false,"fact_check":null,"id":"1984015003545182357","view_count":406,"bookmark_count":0,"created_at":1761861012000,"favorite_count":6,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1984007696224547044","full_text":"@wacheeeee 996 = 1 day off 🧐","in_reply_to_user_id_str":"4426138094","in_reply_to_status_id_str":"1984007696224547044","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-01","value":0,"startTime":1761868800000,"endTime":1761955200000,"tweets":[{"bookmarked":false,"display_text_range":[12,35],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"github.com/letta-ai/letta…","expanded_url":"https://github.com/letta-ai/letta-obsidian","url":"https://t.co/v04ZWvc1Le","indices":[12,35]}],"user_mentions":[{"id_str":"1328913688892346370","name":"Sully","screen_name":"SullyOmarr","indices":[0,11]}]},"favorited":false,"in_reply_to_screen_name":"SullyOmarr","lang":"qme","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1984063277249524003","view_count":144,"bookmark_count":0,"created_at":1761872522000,"favorite_count":4,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1984031908049940583","full_text":"@SullyOmarr 
https://t.co/v04ZWvc1Le","in_reply_to_user_id_str":"1328913688892346370","in_reply_to_status_id_str":"1984031908049940583","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[55,309],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1939709358071312384","name":"halluton","screen_name":"halluton","indices":[0,9]},{"id_str":"1863959670169501696","name":"Kimi.ai","screen_name":"Kimi_Moonshot","indices":[10,24]},{"id_str":"1726486879456096256","name":"Z.ai","screen_name":"Zai_org","indices":[25,33]},{"id_str":"4398626122","name":"OpenAI","screen_name":"OpenAI","indices":[34,41]},{"id_str":"1353836358901501952","name":"Anthropic","screen_name":"AnthropicAI","indices":[42,54]}]},"favorited":false,"in_reply_to_screen_name":"halluton","lang":"en","retweeted":false,"fact_check":null,"id":"1984332937819722048","view_count":110,"bookmark_count":0,"created_at":1761936814000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1983987055513534903","full_text":"@halluton @Kimi_Moonshot @Zai_org @OpenAI @AnthropicAI both! 
we measure effectiveness of tool use (eg did you use grep correctly, or use grep instead of find), as well as curation quality via task completion (you need to curate effectively to answer correctly, and even the best model currently only gets 74%)","in_reply_to_user_id_str":"1939709358071312384","in_reply_to_status_id_str":"1984329735611052415","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[62,338],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"2580421885","name":"Jintao Zhang 张晋涛","screen_name":"zhangjintao9020","indices":[0,16]},{"id_str":"1863959670169501696","name":"Kimi.ai","screen_name":"Kimi_Moonshot","indices":[17,31]},{"id_str":"1726486879456096256","name":"Z.ai","screen_name":"Zai_org","indices":[32,40]},{"id_str":"4398626122","name":"OpenAI","screen_name":"OpenAI","indices":[41,48]},{"id_str":"1353836358901501952","name":"Anthropic","screen_name":"AnthropicAI","indices":[49,61]}]},"favorited":false,"in_reply_to_screen_name":"zhangjintao9020","lang":"en","retweeted":false,"fact_check":null,"id":"1984329785498353916","view_count":165,"bookmark_count":0,"created_at":1761936062000,"favorite_count":1,"quote_count":1,"reply_count":0,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1983987055513534903","full_text":"Yeah it's a great question! Basically GPT-5 consumes more tokens to complete the task - w/ a multiple that's higher than the relative multiple on $/MTok, so it ends up costing more. From the blog:\n\"We track the total cost to run each model on the benchmark. 
Cost reveals model efficiency: models with higher per-token prices may use significantly fewer tokens to accomplish the same task, making total cost a better metric than price alone for evaluating production viability.\"","in_reply_to_user_id_str":"2580421885","in_reply_to_status_id_str":"1984283225745920387","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[69,75],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"14191817","name":"Sven Meyer","screen_name":"SvenMeyer","indices":[0,10]},{"id_str":"1863959670169501696","name":"Kimi.ai","screen_name":"Kimi_Moonshot","indices":[11,25]},{"id_str":"1726486879456096256","name":"Z.ai","screen_name":"Zai_org","indices":[26,34]},{"id_str":"4398626122","name":"OpenAI","screen_name":"OpenAI","indices":[35,42]},{"id_str":"1353836358901501952","name":"Anthropic","screen_name":"AnthropicAI","indices":[43,55]},{"id_str":"1875078099538423808","name":"MiniMax (official)","screen_name":"MiniMax__AI","indices":[56,68]}]},"favorited":false,"in_reply_to_screen_name":"SvenMeyer","lang":"en","retweeted":false,"fact_check":null,"id":"1984329875046744249","view_count":80,"bookmark_count":0,"created_at":1761936083000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1983987055513534903","full_text":"@SvenMeyer @Kimi_Moonshot @Zai_org @OpenAI @AnthropicAI @MiniMax__AI agreed","in_reply_to_user_id_str":"14191817","in_reply_to_status_id_str":"1984162159530799351","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[62,73],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1118096108","name":"Reinaldo 
Sotillo","screen_name":"reynald76165051","indices":[0,16]},{"id_str":"1863959670169501696","name":"Kimi.ai","screen_name":"Kimi_Moonshot","indices":[17,31]},{"id_str":"1726486879456096256","name":"Z.ai","screen_name":"Zai_org","indices":[32,40]},{"id_str":"4398626122","name":"OpenAI","screen_name":"OpenAI","indices":[41,48]},{"id_str":"1353836358901501952","name":"Anthropic","screen_name":"AnthropicAI","indices":[49,61]}]},"favorited":false,"in_reply_to_screen_name":"reynald76165051","lang":"en","retweeted":false,"fact_check":null,"id":"1984333001959096643","view_count":50,"bookmark_count":0,"created_at":1761936829000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1983987055513534903","full_text":"@reynald76165051 @Kimi_Moonshot @Zai_org @OpenAI @AnthropicAI sleeper hit","in_reply_to_user_id_str":"1118096108","in_reply_to_status_id_str":"1984311694693224529","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-02","value":0,"startTime":1761955200000,"endTime":1762041600000,"tweets":[]},{"label":"2025-11-03","value":0,"startTime":1762041600000,"endTime":1762128000000,"tweets":[]},{"label":"2025-11-04","value":3,"startTime":1762128000000,"endTime":1762214400000,"tweets":[{"bookmarked":false,"display_text_range":[16,289],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"youtube.com/watch?v=pidnIH…","expanded_url":"https://www.youtube.com/watch?v=pidnIHdA1Y8","url":"https://t.co/s8OVJ7uT8p","indices":[266,289]}],"user_mentions":[{"id_str":"1051829114494177282","name":"Nico 
Albanese","screen_name":"nicoalbanese10","indices":[0,15]},{"id_str":"1821252546469752832","name":"Letta","screen_name":"Letta_AI","indices":[45,54]},{"id_str":"1821252546469752832","name":"Letta","screen_name":"Letta_AI","indices":[254,263]}]},"favorited":false,"in_reply_to_screen_name":"nicoalbanese10","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1985249165887352999","view_count":2044,"bookmark_count":13,"created_at":1762155259000,"favorite_count":20,"quote_count":0,"reply_count":3,"retweet_count":4,"user_id_str":"2385913832","conversation_id_str":"1984662238968656329","full_text":"@nicoalbanese10 yeah it works beautifully in @Letta_AI, since it's basically post-training of claude to be better at \"MemGPT\"/Letta-style context engineering\n\ngreat example of better post-training (claude) lifting the performance of an existing harness (@Letta_AI)\n\nhttps://t.co/s8OVJ7uT8p","in_reply_to_user_id_str":"1051829114494177282","in_reply_to_status_id_str":"1984662238968656329","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-05","value":0,"startTime":1762214400000,"endTime":1762300800000,"tweets":[{"bookmarked":false,"display_text_range":[12,12],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/NNqoPHnSGP","expanded_url":"https://x.com/charlespacker/status/1985555041160516033/photo/1","id_str":"1985555038333534208","indices":[13,36],"media_key":"3_1985555038333534208","media_url_https":"https://pbs.twimg.com/media/G44byZXbgAAQ1QM.jpg","type":"photo","url":"https://t.co/NNqoPHnSGP","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":516,"w":516,"resize":"fit"},"medium":{"h":516,"w":516,"resize":"fit"},"small":{"h":516,"w":516,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":516,"width":516,"focus_rects":[{"x":0,"y":126,"w":
516,"h":289},{"x":0,"y":0,"w":516,"h":516},{"x":0,"y":0,"w":453,"h":516},{"x":90,"y":0,"w":258,"h":516},{"x":0,"y":0,"w":516,"h":516}]},"media_results":{"result":{"media_key":"3_1985555038333534208"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1679960354087145473","name":"Jeson Lee","screen_name":"thejesonlee","indices":[0,12]}]},"extended_entities":{"media":[{"display_url":"pic.x.com/NNqoPHnSGP","expanded_url":"https://x.com/charlespacker/status/1985555041160516033/photo/1","id_str":"1985555038333534208","indices":[13,36],"media_key":"3_1985555038333534208","media_url_https":"https://pbs.twimg.com/media/G44byZXbgAAQ1QM.jpg","type":"photo","url":"https://t.co/NNqoPHnSGP","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":516,"w":516,"resize":"fit"},"medium":{"h":516,"w":516,"resize":"fit"},"small":{"h":516,"w":516,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":516,"width":516,"focus_rects":[{"x":0,"y":126,"w":516,"h":289},{"x":0,"y":0,"w":516,"h":516},{"x":0,"y":0,"w":453,"h":516},{"x":90,"y":0,"w":258,"h":516},{"x":0,"y":0,"w":516,"h":516}]},"media_results":{"result":{"media_key":"3_1985555038333534208"}}}]},"favorited":false,"in_reply_to_screen_name":"thejesonlee","lang":"qme","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1985555041160516033","view_count":335,"bookmark_count":0,"created_at":1762228186000,"favorite_count":3,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1985223675118018711","full_text":"@thejesonlee 
https://t.co/NNqoPHnSGP","in_reply_to_user_id_str":"1679960354087145473","in_reply_to_status_id_str":"1985223675118018711","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[14,89],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"144333614","name":"Sarah Wooders","screen_name":"sarahwooders","indices":[0,13]}]},"favorited":false,"in_reply_to_screen_name":"sarahwooders","lang":"en","retweeted":false,"fact_check":null,"id":"1985553615629795810","view_count":816,"bookmark_count":0,"created_at":1762227846000,"favorite_count":5,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1985552377953272053","full_text":"@sarahwooders can we talk about the political and economic state of the world right now??","in_reply_to_user_id_str":"144333614","in_reply_to_status_id_str":"1985552377953272053","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-06","value":0,"startTime":1762300800000,"endTime":1762387200000,"tweets":[]},{"label":"2025-11-07","value":0,"startTime":1762387200000,"endTime":1762473600000,"tweets":[]},{"label":"2025-11-08","value":0,"startTime":1762473600000,"endTime":1762560000000,"tweets":[]},{"label":"2025-11-09","value":0,"startTime":1762560000000,"endTime":1762646400000,"tweets":[]},{"label":"2025-11-10","value":0,"startTime":1762646400000,"endTime":1762732800000,"tweets":[]},{"label":"2025-11-11","value":0,"startTime":1762732800000,"endTime":1762819200000,"tweets":[]},{"label":"2025-11-12","value":0,"startTime":1762819200000,"endTime":1762905600000,"tweets":[]},{"label":"2025-11-13","value":0,"startTime":1762905600000,"endTime":1762992000000,"tweets":[]},{"label":"2025-11-14","value":3,"startTime":1762992000000,"endTime":1763078400000,"tweets":[{"bookmarked":false,"display_text_range":[0,260],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/MOoy0hmlVw","expanded_url":"https://x.com/charlespacker/sta
tus/1989111742241423375/photo/1","id_str":"1989111195979378689","indices":[261,284],"media_key":"3_1989111195979378689","media_url_https":"https://pbs.twimg.com/media/G5q-GA8acAEflTi.jpg","type":"photo","url":"https://t.co/MOoy0hmlVw","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":731,"w":2048,"resize":"fit"},"medium":{"h":428,"w":1200,"resize":"fit"},"small":{"h":243,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":804,"width":2254,"focus_rects":[{"x":818,"y":0,"w":1436,"h":804},{"x":1450,"y":0,"w":804,"h":804},{"x":1549,"y":0,"w":705,"h":804},{"x":1771,"y":0,"w":402,"h":804},{"x":0,"y":0,"w":2254,"h":804}]},"media_results":{"result":{"media_key":"3_1989111195979378689"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/MOoy0hmlVw","expanded_url":"https://x.com/charlespacker/status/1989111742241423375/photo/1","id_str":"1989111195979378689","indices":[261,284],"media_key":"3_1989111195979378689","media_url_https":"https://pbs.twimg.com/media/G5q-GA8acAEflTi.jpg","type":"photo","url":"https://t.co/MOoy0hmlVw","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":731,"w":2048,"resize":"fit"},"medium":{"h":428,"w":1200,"resize":"fit"},"small":{"h":243,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":804,"width":2254,"focus_rects":[{"x":818,"y":0,"w":1436,"h":804},{"x":1450,"y":0,"w":804,"h":804},{"x":1549,"y":0,"w":705,"h":804},{"x":1771,"y":0,"w":402,"h":804},{"x":0,"y":0,"w":2254,"h":804}]},"media_results":{"result":{"media_key":"3_1989111195979378689"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"
id":"1989111742241423375","view_count":200,"bookmark_count":0,"created_at":1763076169000,"favorite_count":3,"quote_count":0,"reply_count":3,"retweet_count":1,"user_id_str":"2385913832","conversation_id_str":"1989111742241423375","full_text":"the letta agents working in our open source skills repo have collectively proposed a .CULTURE.md file to control for skill quality 😂\n\n\"New here? .CULTURE.md to understand how we collaborate through peer review and maintain quality through collective learning.\" https://t.co/MOoy0hmlVw","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-15","value":0,"startTime":1763078400000,"endTime":1763164800000,"tweets":[]},{"label":"2025-11-16","value":0,"startTime":1763164800000,"endTime":1763251200000,"tweets":[]},{"label":"2025-11-17","value":0,"startTime":1763251200000,"endTime":1763337600000,"tweets":[]},{"label":"2025-11-18","value":0,"startTime":1763337600000,"endTime":1763424000000,"tweets":[]}],"nbookmarks":[{"label":"2025-10-19","value":0,"startTime":1760745600000,"endTime":1760832000000,"tweets":[]},{"label":"2025-10-20","value":0,"startTime":1760832000000,"endTime":1760918400000,"tweets":[]},{"label":"2025-10-21","value":0,"startTime":1760918400000,"endTime":1761004800000,"tweets":[]},{"label":"2025-10-22","value":0,"startTime":1761004800000,"endTime":1761091200000,"tweets":[]},{"label":"2025-10-23","value":0,"startTime":1761091200000,"endTime":1761177600000,"tweets":[]},{"label":"2025-10-24","value":15,"startTime":1761177600000,"endTime":1761264000000,"tweets":[{"bookmarked":false,"display_text_range":[0,272],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1290373758646128641","name":"Bilt","screen_name":"BiltRewards","indices":[1079,1091]}]},"favorited":false,"lang":"en","quoted_status_id_str":"1981385833380032558","quoted_status_permalink":{"url":"https://t.co/I5LCJMTAMn","expanded":"https://twitt
er.com/Letta_AI/status/1981385833380032558","display":"x.com/Letta_AI/statu…"},"retweeted":false,"fact_check":null,"id":"1981421458019754012","view_count":4801,"bookmark_count":15,"created_at":1761242663000,"favorite_count":27,"quote_count":0,"reply_count":2,"retweet_count":4,"user_id_str":"2385913832","conversation_id_str":"1981421458019754012","full_text":"Super excited about this release: Letta Evals is the first evals platform *purpose-built* for stateful agents. What does that actually mean?\n\nWhen you eval agents w/ Letta Evals, you can literally pull an agent out of production (by cloning a replica of its active state), evaluate it, then push live changes back into prod (if desired).\n\nWe eval humans all the time: standardized tests (SAT/ACT), job interviews, driving tests. But when a human takes a test, they actually get the ability to *learn* from that test. If you fail your driving test, that lived experience should help you better prepare for the next \"eval\".\n\nWith Letta Evals, you can now run evaluations of truly stateful agents (not just prompt + tool configurations). This is all made possible due to the existence of AgentFile (.af), our open source portable file format that allows you to serialize any agent's state, which powers the efficient agent replication needed to run Letta Evals at scale.\n\nWill be exciting to see what people build with this. 
Letta Evals is already in production at companies like @BiltRewards , and is used by our own research team to run comprehensive (agentic) evaluations of new frontier models like GPT-5, Sonnet 4.5 and GLM-4.6.","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":1,"is_ai":null,"ai_score":null}]},{"label":"2025-10-25","value":14,"startTime":1761264000000,"endTime":1761350400000,"tweets":[{"bookmarked":false,"display_text_range":[0,285],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"jobs.ashbyhq.com/letta/d95271e3…","expanded_url":"https://jobs.ashbyhq.com/letta/d95271e3-f1ae-4317-8ecd-f25a79a0165c","url":"https://t.co/6C6Pybc6MD","indices":[408,431]}],"user_mentions":[{"id_str":"1821252546469752832","name":"Letta","screen_name":"Letta_AI","indices":[44,53]},{"id_str":"1821252546469752832","name":"Letta","screen_name":"Letta_AI","indices":[40,49]}]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1981790090889400638","view_count":3331,"bookmark_count":14,"created_at":1761330552000,"favorite_count":22,"quote_count":1,"reply_count":1,"retweet_count":4,"user_id_str":"2385913832","conversation_id_str":"1981790090889400638","full_text":"We're hiring researchers & engineers at @Letta_AI to work on AI's hardest problem: memory.\n\nJoin us to work on finding the right memory representations & learning methods (both in-context and in-weights) required to create self-improving AI systems with LLMs.\n\nWe're an open AI company (both research & code) and have an incredibly tight loop from research -> product. 
DMs open + job posting for reference.\n\nhttps://t.co/6C6Pybc6MD","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-10-26","value":1,"startTime":1761350400000,"endTime":1761436800000,"tweets":[{"bookmarked":false,"display_text_range":[0,117],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","quoted_status_id_str":"1981917247095685441","quoted_status_permalink":{"url":"https://t.co/HNhdjG1fLq","expanded":"https://twitter.com/xai/status/1981917247095685441","display":"x.com/xai/status/198…"},"retweeted":false,"fact_check":null,"id":"1982128872478154827","view_count":617,"bookmark_count":1,"created_at":1761411324000,"favorite_count":4,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1982128872478154827","full_text":"thinking machine shipping blog posts, xai shipping ai waifu launch videos\n\n(clean video tho, grok imagine looks 
good)","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":1,"is_ai":null,"ai_score":null}]},{"label":"2025-10-27","value":0,"startTime":1761436800000,"endTime":1761523200000,"tweets":[]},{"label":"2025-10-28","value":0,"startTime":1761523200000,"endTime":1761609600000,"tweets":[]},{"label":"2025-10-29","value":19,"startTime":1761609600000,"endTime":1761696000000,"tweets":[{"bookmarked":false,"display_text_range":[0,92],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","quoted_status_id_str":"1983057852970463594","quoted_status_permalink":{"url":"https://t.co/ztXL3kpVYH","expanded":"https://twitter.com/sarahwooders/status/1983057852970463594","display":"x.com/sarahwooders/s…"},"retweeted":false,"fact_check":null,"id":"1983058188812660916","view_count":14863,"bookmark_count":19,"created_at":1761632890000,"favorite_count":95,"quote_count":0,"reply_count":4,"retweet_count":4,"user_id_str":"2385913832","conversation_id_str":"1983058188812660916","full_text":"the terrible responses api rollout is a perfect example of how to lose first mover 
advantage","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":1,"is_ai":null,"ai_score":null}]},{"label":"2025-10-30","value":0,"startTime":1761696000000,"endTime":1761782400000,"tweets":[]},{"label":"2025-10-31","value":171,"startTime":1761782400000,"endTime":1761868800000,"tweets":[{"bookmarked":false,"display_text_range":[0,272],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/rbwTF1v2V3","expanded_url":"https://x.com/charlespacker/status/1983987055513534903/photo/1","id_str":"1983985762636029952","indices":[273,296],"media_key":"3_1983985762636029952","media_url_https":"https://pbs.twimg.com/media/G4iIih1aYAAxj75.png","type":"photo","url":"https://t.co/rbwTF1v2V3","ext_media_availability":{"status":"Available"},"features":{"all":{"tags":[{"user_id":"1863959670169501696","name":"Kimi.ai","screen_name":"Kimi_Moonshot","type":"user"},{"user_id":"1726486879456096256","name":"Z.ai","screen_name":"Zai_org","type":"user"},{"user_id":"4398626122","name":"OpenAI","screen_name":"OpenAI","type":"user"},{"user_id":"1353836358901501952","name":"Anthropic","screen_name":"AnthropicAI","type":"user"}]},"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1173,"w":2048,"resize":"fit"},"medium":{"h":687,"w":1200,"resize":"fit"},"small":{"h":389,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1276,"width":2228,"focus_rects":[{"x":0,"y":28,"w":2228,"h":1248},{"x":643,"y":0,"w":1276,"h":1276},{"x":722,"y":0,"w":1119,"h":1276},{"x":962,"y":0,"w":638,"h":1276},{"x":0,"y":0,"w":2228,"h":1276}]},"media_results":{"result":{"media_key":"3_1983985762636029952"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/rbwTF1v2V3","expanded_url":"https://x.com/charlespacker/status/1983987055513534903/photo/1","id_str":"1983985762636029952","indices":[273,296],"media_key":"3_198398576263602995
2","media_url_https":"https://pbs.twimg.com/media/G4iIih1aYAAxj75.png","type":"photo","url":"https://t.co/rbwTF1v2V3","ext_media_availability":{"status":"Available"},"features":{"all":{"tags":[{"user_id":"1863959670169501696","name":"Kimi.ai","screen_name":"Kimi_Moonshot","type":"user"},{"user_id":"1726486879456096256","name":"Z.ai","screen_name":"Zai_org","type":"user"},{"user_id":"4398626122","name":"OpenAI","screen_name":"OpenAI","type":"user"},{"user_id":"1353836358901501952","name":"Anthropic","screen_name":"AnthropicAI","type":"user"}]},"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1173,"w":2048,"resize":"fit"},"medium":{"h":687,"w":1200,"resize":"fit"},"small":{"h":389,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1276,"width":2228,"focus_rects":[{"x":0,"y":28,"w":2228,"h":1248},{"x":643,"y":0,"w":1276,"h":1276},{"x":722,"y":0,"w":1119,"h":1276},{"x":962,"y":0,"w":638,"h":1276},{"x":0,"y":0,"w":2228,"h":1276}]},"media_results":{"result":{"media_key":"3_1983985762636029952"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1983987055513534903","view_count":27458,"bookmark_count":167,"created_at":1761854349000,"favorite_count":329,"quote_count":5,"reply_count":19,"retweet_count":35,"user_id_str":"2385913832","conversation_id_str":"1983987055513534903","full_text":"Today we're releasing Context-Bench, a benchmark (and live leaderboard!) measuring LLMs on Agentic Context Engineering.\n\nC-Bench measures an agent's ability to manipulate its own context window, a necessary skill for AI agents that can self-improve and continually learn. 
https://t.co/rbwTF1v2V3","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,262],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"charlespacker","lang":"en","retweeted":false,"fact_check":null,"id":"1983987057883279638","view_count":888,"bookmark_count":0,"created_at":1761854349000,"favorite_count":3,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1983987055513534903","full_text":"Modern agents like Claude Code, Codex, and Cursor rely on tools to retrieve information into their context windows, from API/MCP triggers to editing code with Bash and Unix tools, to more advanced use-cases such as editing long-term memories and loading skills.","in_reply_to_user_id_str":"2385913832","in_reply_to_status_id_str":"1983987055513534903","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,251],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1353836358901501952","name":"Anthropic","screen_name":"AnthropicAI","indices":[22,34]}]},"favorited":false,"in_reply_to_screen_name":"charlespacker","lang":"en","retweeted":false,"fact_check":null,"id":"1983987059284115622","view_count":931,"bookmark_count":0,"created_at":1761854350000,"favorite_count":5,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"2385913832","conversation_id_str":"1983987055513534903","full_text":"Frontier AI labs like @AnthropicAI are now explicitly training their new models to be \"self-aware\" of their context windows. 
Agentic context engineering is the new frontier, but there's no clear open benchmark for evaluating this capability in models.","in_reply_to_user_id_str":"2385913832","in_reply_to_status_id_str":"1983987057883279638","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,278],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"leaderboard.letta.com","expanded_url":"https://leaderboard.letta.com","url":"https://t.co/yqigAGEhUL","indices":[217,240]},{"display_url":"letta.com/blog/context-b…","expanded_url":"https://www.letta.com/blog/context-bench","url":"https://t.co/SvCDauPwon","indices":[255,278]}],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"charlespacker","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1983987060500480369","view_count":859,"bookmark_count":4,"created_at":1761854350000,"favorite_count":5,"quote_count":0,"reply_count":3,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1983987055513534903","full_text":"Context-Bench is an exciting benchmark for the OSS community: the gap between frontier open weights and closed weights models appears to be closing: GLM 4.6 and Kimi K2 are incredible models!\n\nLeaderboard is live at https://t.co/yqigAGEhUL\n\nBreakdown at 
https://t.co/SvCDauPwon","in_reply_to_user_id_str":"2385913832","in_reply_to_status_id_str":"1983987059284115622","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[11,28],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"4426138094","name":"ryan","screen_name":"wacheeeee","indices":[0,10]}]},"favorited":false,"in_reply_to_screen_name":"wacheeeee","lang":"en","retweeted":false,"fact_check":null,"id":"1984015003545182357","view_count":406,"bookmark_count":0,"created_at":1761861012000,"favorite_count":6,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1984007696224547044","full_text":"@wacheeeee 996 = 1 day off 🧐","in_reply_to_user_id_str":"4426138094","in_reply_to_status_id_str":"1984007696224547044","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-01","value":0,"startTime":1761868800000,"endTime":1761955200000,"tweets":[{"bookmarked":false,"display_text_range":[12,35],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"github.com/letta-ai/letta…","expanded_url":"https://github.com/letta-ai/letta-obsidian","url":"https://t.co/v04ZWvc1Le","indices":[12,35]}],"user_mentions":[{"id_str":"1328913688892346370","name":"Sully","screen_name":"SullyOmarr","indices":[0,11]}]},"favorited":false,"in_reply_to_screen_name":"SullyOmarr","lang":"qme","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1984063277249524003","view_count":144,"bookmark_count":0,"created_at":1761872522000,"favorite_count":4,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1984031908049940583","full_text":"@SullyOmarr 
https://t.co/v04ZWvc1Le","in_reply_to_user_id_str":"1328913688892346370","in_reply_to_status_id_str":"1984031908049940583","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[55,309],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1939709358071312384","name":"halluton","screen_name":"halluton","indices":[0,9]},{"id_str":"1863959670169501696","name":"Kimi.ai","screen_name":"Kimi_Moonshot","indices":[10,24]},{"id_str":"1726486879456096256","name":"Z.ai","screen_name":"Zai_org","indices":[25,33]},{"id_str":"4398626122","name":"OpenAI","screen_name":"OpenAI","indices":[34,41]},{"id_str":"1353836358901501952","name":"Anthropic","screen_name":"AnthropicAI","indices":[42,54]}]},"favorited":false,"in_reply_to_screen_name":"halluton","lang":"en","retweeted":false,"fact_check":null,"id":"1984332937819722048","view_count":110,"bookmark_count":0,"created_at":1761936814000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1983987055513534903","full_text":"@halluton @Kimi_Moonshot @Zai_org @OpenAI @AnthropicAI both! 
we measure effectiveness of tool use (eg did you use grep correctly, or use grep instead of find), as well as curation quality via task completion (you need to curate effectively to answer correctly, and even the best model currently only gets 74%)","in_reply_to_user_id_str":"1939709358071312384","in_reply_to_status_id_str":"1984329735611052415","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[62,338],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"2580421885","name":"Jintao Zhang 张晋涛","screen_name":"zhangjintao9020","indices":[0,16]},{"id_str":"1863959670169501696","name":"Kimi.ai","screen_name":"Kimi_Moonshot","indices":[17,31]},{"id_str":"1726486879456096256","name":"Z.ai","screen_name":"Zai_org","indices":[32,40]},{"id_str":"4398626122","name":"OpenAI","screen_name":"OpenAI","indices":[41,48]},{"id_str":"1353836358901501952","name":"Anthropic","screen_name":"AnthropicAI","indices":[49,61]}]},"favorited":false,"in_reply_to_screen_name":"zhangjintao9020","lang":"en","retweeted":false,"fact_check":null,"id":"1984329785498353916","view_count":165,"bookmark_count":0,"created_at":1761936062000,"favorite_count":1,"quote_count":1,"reply_count":0,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1983987055513534903","full_text":"Yeah it's a great question! Basically GPT-5 consumes more tokens to complete the task - w/ a multiple that's higher than the relative multiple on $/MTok, so it ends up costing more. From the blog:\n\"We track the total cost to run each model on the benchmark. 
Cost reveals model efficiency: models with higher per-token prices may use significantly fewer tokens to accomplish the same task, making total cost a better metric than price alone for evaluating production viability.\"","in_reply_to_user_id_str":"2580421885","in_reply_to_status_id_str":"1984283225745920387","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[69,75],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"14191817","name":"Sven Meyer","screen_name":"SvenMeyer","indices":[0,10]},{"id_str":"1863959670169501696","name":"Kimi.ai","screen_name":"Kimi_Moonshot","indices":[11,25]},{"id_str":"1726486879456096256","name":"Z.ai","screen_name":"Zai_org","indices":[26,34]},{"id_str":"4398626122","name":"OpenAI","screen_name":"OpenAI","indices":[35,42]},{"id_str":"1353836358901501952","name":"Anthropic","screen_name":"AnthropicAI","indices":[43,55]},{"id_str":"1875078099538423808","name":"MiniMax (official)","screen_name":"MiniMax__AI","indices":[56,68]}]},"favorited":false,"in_reply_to_screen_name":"SvenMeyer","lang":"en","retweeted":false,"fact_check":null,"id":"1984329875046744249","view_count":80,"bookmark_count":0,"created_at":1761936083000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1983987055513534903","full_text":"@SvenMeyer @Kimi_Moonshot @Zai_org @OpenAI @AnthropicAI @MiniMax__AI agreed","in_reply_to_user_id_str":"14191817","in_reply_to_status_id_str":"1984162159530799351","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[62,73],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1118096108","name":"Reinaldo 
Sotillo","screen_name":"reynald76165051","indices":[0,16]},{"id_str":"1863959670169501696","name":"Kimi.ai","screen_name":"Kimi_Moonshot","indices":[17,31]},{"id_str":"1726486879456096256","name":"Z.ai","screen_name":"Zai_org","indices":[32,40]},{"id_str":"4398626122","name":"OpenAI","screen_name":"OpenAI","indices":[41,48]},{"id_str":"1353836358901501952","name":"Anthropic","screen_name":"AnthropicAI","indices":[49,61]}]},"favorited":false,"in_reply_to_screen_name":"reynald76165051","lang":"en","retweeted":false,"fact_check":null,"id":"1984333001959096643","view_count":50,"bookmark_count":0,"created_at":1761936829000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1983987055513534903","full_text":"@reynald76165051 @Kimi_Moonshot @Zai_org @OpenAI @AnthropicAI sleeper hit","in_reply_to_user_id_str":"1118096108","in_reply_to_status_id_str":"1984311694693224529","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-02","value":0,"startTime":1761955200000,"endTime":1762041600000,"tweets":[]},{"label":"2025-11-03","value":0,"startTime":1762041600000,"endTime":1762128000000,"tweets":[]},{"label":"2025-11-04","value":13,"startTime":1762128000000,"endTime":1762214400000,"tweets":[{"bookmarked":false,"display_text_range":[16,289],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"youtube.com/watch?v=pidnIH…","expanded_url":"https://www.youtube.com/watch?v=pidnIHdA1Y8","url":"https://t.co/s8OVJ7uT8p","indices":[266,289]}],"user_mentions":[{"id_str":"1051829114494177282","name":"Nico 
Albanese","screen_name":"nicoalbanese10","indices":[0,15]},{"id_str":"1821252546469752832","name":"Letta","screen_name":"Letta_AI","indices":[45,54]},{"id_str":"1821252546469752832","name":"Letta","screen_name":"Letta_AI","indices":[254,263]}]},"favorited":false,"in_reply_to_screen_name":"nicoalbanese10","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1985249165887352999","view_count":2044,"bookmark_count":13,"created_at":1762155259000,"favorite_count":20,"quote_count":0,"reply_count":3,"retweet_count":4,"user_id_str":"2385913832","conversation_id_str":"1984662238968656329","full_text":"@nicoalbanese10 yeah it works beautifully in @Letta_AI, since it's basically post-training of claude to be better at \"MemGPT\"/Letta-style context engineering\n\ngreat example of better post-training (claude) lifting the performance of an existing harness (@Letta_AI)\n\nhttps://t.co/s8OVJ7uT8p","in_reply_to_user_id_str":"1051829114494177282","in_reply_to_status_id_str":"1984662238968656329","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-05","value":0,"startTime":1762214400000,"endTime":1762300800000,"tweets":[{"bookmarked":false,"display_text_range":[12,12],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/NNqoPHnSGP","expanded_url":"https://x.com/charlespacker/status/1985555041160516033/photo/1","id_str":"1985555038333534208","indices":[13,36],"media_key":"3_1985555038333534208","media_url_https":"https://pbs.twimg.com/media/G44byZXbgAAQ1QM.jpg","type":"photo","url":"https://t.co/NNqoPHnSGP","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":516,"w":516,"resize":"fit"},"medium":{"h":516,"w":516,"resize":"fit"},"small":{"h":516,"w":516,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":516,"width":516,"focus_rects":[{"x":0,"y":126,"w":
516,"h":289},{"x":0,"y":0,"w":516,"h":516},{"x":0,"y":0,"w":453,"h":516},{"x":90,"y":0,"w":258,"h":516},{"x":0,"y":0,"w":516,"h":516}]},"media_results":{"result":{"media_key":"3_1985555038333534208"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1679960354087145473","name":"Jeson Lee","screen_name":"thejesonlee","indices":[0,12]}]},"extended_entities":{"media":[{"display_url":"pic.x.com/NNqoPHnSGP","expanded_url":"https://x.com/charlespacker/status/1985555041160516033/photo/1","id_str":"1985555038333534208","indices":[13,36],"media_key":"3_1985555038333534208","media_url_https":"https://pbs.twimg.com/media/G44byZXbgAAQ1QM.jpg","type":"photo","url":"https://t.co/NNqoPHnSGP","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":516,"w":516,"resize":"fit"},"medium":{"h":516,"w":516,"resize":"fit"},"small":{"h":516,"w":516,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":516,"width":516,"focus_rects":[{"x":0,"y":126,"w":516,"h":289},{"x":0,"y":0,"w":516,"h":516},{"x":0,"y":0,"w":453,"h":516},{"x":90,"y":0,"w":258,"h":516},{"x":0,"y":0,"w":516,"h":516}]},"media_results":{"result":{"media_key":"3_1985555038333534208"}}}]},"favorited":false,"in_reply_to_screen_name":"thejesonlee","lang":"qme","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1985555041160516033","view_count":335,"bookmark_count":0,"created_at":1762228186000,"favorite_count":3,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1985223675118018711","full_text":"@thejesonlee 
https://t.co/NNqoPHnSGP","in_reply_to_user_id_str":"1679960354087145473","in_reply_to_status_id_str":"1985223675118018711","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[14,89],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"144333614","name":"Sarah Wooders","screen_name":"sarahwooders","indices":[0,13]}]},"favorited":false,"in_reply_to_screen_name":"sarahwooders","lang":"en","retweeted":false,"fact_check":null,"id":"1985553615629795810","view_count":816,"bookmark_count":0,"created_at":1762227846000,"favorite_count":5,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1985552377953272053","full_text":"@sarahwooders can we talk about the political and economic state of the world right now??","in_reply_to_user_id_str":"144333614","in_reply_to_status_id_str":"1985552377953272053","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-06","value":0,"startTime":1762300800000,"endTime":1762387200000,"tweets":[]},{"label":"2025-11-07","value":0,"startTime":1762387200000,"endTime":1762473600000,"tweets":[]},{"label":"2025-11-08","value":0,"startTime":1762473600000,"endTime":1762560000000,"tweets":[]},{"label":"2025-11-09","value":0,"startTime":1762560000000,"endTime":1762646400000,"tweets":[]},{"label":"2025-11-10","value":0,"startTime":1762646400000,"endTime":1762732800000,"tweets":[]},{"label":"2025-11-11","value":0,"startTime":1762732800000,"endTime":1762819200000,"tweets":[]},{"label":"2025-11-12","value":0,"startTime":1762819200000,"endTime":1762905600000,"tweets":[]},{"label":"2025-11-13","value":0,"startTime":1762905600000,"endTime":1762992000000,"tweets":[]},{"label":"2025-11-14","value":0,"startTime":1762992000000,"endTime":1763078400000,"tweets":[{"bookmarked":false,"display_text_range":[0,260],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/MOoy0hmlVw","expanded_url":"https://x.com/charlespacker/sta
tus/1989111742241423375/photo/1","id_str":"1989111195979378689","indices":[261,284],"media_key":"3_1989111195979378689","media_url_https":"https://pbs.twimg.com/media/G5q-GA8acAEflTi.jpg","type":"photo","url":"https://t.co/MOoy0hmlVw","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":731,"w":2048,"resize":"fit"},"medium":{"h":428,"w":1200,"resize":"fit"},"small":{"h":243,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":804,"width":2254,"focus_rects":[{"x":818,"y":0,"w":1436,"h":804},{"x":1450,"y":0,"w":804,"h":804},{"x":1549,"y":0,"w":705,"h":804},{"x":1771,"y":0,"w":402,"h":804},{"x":0,"y":0,"w":2254,"h":804}]},"media_results":{"result":{"media_key":"3_1989111195979378689"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/MOoy0hmlVw","expanded_url":"https://x.com/charlespacker/status/1989111742241423375/photo/1","id_str":"1989111195979378689","indices":[261,284],"media_key":"3_1989111195979378689","media_url_https":"https://pbs.twimg.com/media/G5q-GA8acAEflTi.jpg","type":"photo","url":"https://t.co/MOoy0hmlVw","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":731,"w":2048,"resize":"fit"},"medium":{"h":428,"w":1200,"resize":"fit"},"small":{"h":243,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":804,"width":2254,"focus_rects":[{"x":818,"y":0,"w":1436,"h":804},{"x":1450,"y":0,"w":804,"h":804},{"x":1549,"y":0,"w":705,"h":804},{"x":1771,"y":0,"w":402,"h":804},{"x":0,"y":0,"w":2254,"h":804}]},"media_results":{"result":{"media_key":"3_1989111195979378689"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"
id":"1989111742241423375","view_count":200,"bookmark_count":0,"created_at":1763076169000,"favorite_count":3,"quote_count":0,"reply_count":3,"retweet_count":1,"user_id_str":"2385913832","conversation_id_str":"1989111742241423375","full_text":"the letta agents working in our open source skills repo have collectively proposed a .CULTURE.md file to control for skill quality 😂\n\n\"New here? .CULTURE.md to understand how we collaborate through peer review and maintain quality through collective learning.\" https://t.co/MOoy0hmlVw","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-15","value":0,"startTime":1763078400000,"endTime":1763164800000,"tweets":[]},{"label":"2025-11-16","value":0,"startTime":1763164800000,"endTime":1763251200000,"tweets":[]},{"label":"2025-11-17","value":0,"startTime":1763251200000,"endTime":1763337600000,"tweets":[]},{"label":"2025-11-18","value":0,"startTime":1763337600000,"endTime":1763424000000,"tweets":[]}],"nretweets":[{"label":"2025-10-19","value":0,"startTime":1760745600000,"endTime":1760832000000,"tweets":[]},{"label":"2025-10-20","value":0,"startTime":1760832000000,"endTime":1760918400000,"tweets":[]},{"label":"2025-10-21","value":0,"startTime":1760918400000,"endTime":1761004800000,"tweets":[]},{"label":"2025-10-22","value":0,"startTime":1761004800000,"endTime":1761091200000,"tweets":[]},{"label":"2025-10-23","value":0,"startTime":1761091200000,"endTime":1761177600000,"tweets":[]},{"label":"2025-10-24","value":4,"startTime":1761177600000,"endTime":1761264000000,"tweets":[{"bookmarked":false,"display_text_range":[0,272],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1290373758646128641","name":"Bilt","screen_name":"BiltRewards","indices":[1079,1091]}]},"favorited":false,"lang":"en","quoted_status_id_str":"1981385833380032558","quoted_status_permalink":{"url":"https://t.co/I5LCJMTAMn","expanded":"https://twitter
.com/Letta_AI/status/1981385833380032558","display":"x.com/Letta_AI/statu…"},"retweeted":false,"fact_check":null,"id":"1981421458019754012","view_count":4801,"bookmark_count":15,"created_at":1761242663000,"favorite_count":27,"quote_count":0,"reply_count":2,"retweet_count":4,"user_id_str":"2385913832","conversation_id_str":"1981421458019754012","full_text":"Super excited about this release: Letta Evals is the first evals platform *purpose-built* for stateful agents. What does that actually mean?\n\nWhen you eval agents w/ Letta Evals, you can literally pull an agent out of production (by cloning a replica of its active state), evaluate it, then pushes live changes back into prod (if desired).\n\nWe eval humans all the time: standardized tests (SAT/ACT), job interviews, driving tests. But when a human takes a test, they actually get the ability to *learn* from that test. If you fail your driving test, that lived experience should help you better prepare for the next \"eval\".\n\nWith Letta Evals, you can now run evaluations of truly stateful agents (not just prompt + tool configurations). This is all made possible due to the existence of AgentFile (.af), our open source portable file format that allows you to serialize any agent's state, which powers the efficient agent replication needed to run Letta Evals at scale.\n\nWill be exciting to see what people build with this. 
Letta Evals is already in production at companies like @BiltRewards , and is used by our own research time to run comprehensive (agentic) evaluations of new frontier models like GPT-5, Sonnet 4.5 and GLM-4.6.","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":1,"is_ai":null,"ai_score":null}]},{"label":"2025-10-25","value":4,"startTime":1761264000000,"endTime":1761350400000,"tweets":[{"bookmarked":false,"display_text_range":[0,285],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"jobs.ashbyhq.com/letta/d95271e3…","expanded_url":"https://jobs.ashbyhq.com/letta/d95271e3-f1ae-4317-8ecd-f25a79a0165c","url":"https://t.co/6C6Pybc6MD","indices":[408,431]}],"user_mentions":[{"id_str":"1821252546469752832","name":"Letta","screen_name":"Letta_AI","indices":[44,53]},{"id_str":"1821252546469752832","name":"Letta","screen_name":"Letta_AI","indices":[40,49]}]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1981790090889400638","view_count":3331,"bookmark_count":14,"created_at":1761330552000,"favorite_count":22,"quote_count":1,"reply_count":1,"retweet_count":4,"user_id_str":"2385913832","conversation_id_str":"1981790090889400638","full_text":"We're hiring researchers & engineers at @Letta_AI to work on AI's hardest problem: memory.\n\nJoin us to work on finding the right memory representations & learning methods (both in-context and in-weights) required to create self-improving AI systems with LLMs.\n\nWe're an open AI company (both research & code) and have an incredibly tight loop from research -> product. 
DMs open + job posting for reference.\n\nhttps://t.co/6C6Pybc6MD","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-10-26","value":0,"startTime":1761350400000,"endTime":1761436800000,"tweets":[{"bookmarked":false,"display_text_range":[0,117],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","quoted_status_id_str":"1981917247095685441","quoted_status_permalink":{"url":"https://t.co/HNhdjG1fLq","expanded":"https://twitter.com/xai/status/1981917247095685441","display":"x.com/xai/status/198…"},"retweeted":false,"fact_check":null,"id":"1982128872478154827","view_count":617,"bookmark_count":1,"created_at":1761411324000,"favorite_count":4,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1982128872478154827","full_text":"thinking machine shipping blog posts, xai shipping ai waifu launch videos\n\n(clean video tho, grok imagine looks 
good)","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":1,"is_ai":null,"ai_score":null}]},{"label":"2025-10-27","value":0,"startTime":1761436800000,"endTime":1761523200000,"tweets":[]},{"label":"2025-10-28","value":0,"startTime":1761523200000,"endTime":1761609600000,"tweets":[]},{"label":"2025-10-29","value":4,"startTime":1761609600000,"endTime":1761696000000,"tweets":[{"bookmarked":false,"display_text_range":[0,92],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","quoted_status_id_str":"1983057852970463594","quoted_status_permalink":{"url":"https://t.co/ztXL3kpVYH","expanded":"https://twitter.com/sarahwooders/status/1983057852970463594","display":"x.com/sarahwooders/s…"},"retweeted":false,"fact_check":null,"id":"1983058188812660916","view_count":14863,"bookmark_count":19,"created_at":1761632890000,"favorite_count":95,"quote_count":0,"reply_count":4,"retweet_count":4,"user_id_str":"2385913832","conversation_id_str":"1983058188812660916","full_text":"the terrible responses api rollout is a perfect example of how to lose first mover 
advantage","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":1,"is_ai":null,"ai_score":null}]},{"label":"2025-10-30","value":0,"startTime":1761696000000,"endTime":1761782400000,"tweets":[]},{"label":"2025-10-31","value":36,"startTime":1761782400000,"endTime":1761868800000,"tweets":[{"bookmarked":false,"display_text_range":[0,272],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/rbwTF1v2V3","expanded_url":"https://x.com/charlespacker/status/1983987055513534903/photo/1","id_str":"1983985762636029952","indices":[273,296],"media_key":"3_1983985762636029952","media_url_https":"https://pbs.twimg.com/media/G4iIih1aYAAxj75.png","type":"photo","url":"https://t.co/rbwTF1v2V3","ext_media_availability":{"status":"Available"},"features":{"all":{"tags":[{"user_id":"1863959670169501696","name":"Kimi.ai","screen_name":"Kimi_Moonshot","type":"user"},{"user_id":"1726486879456096256","name":"Z.ai","screen_name":"Zai_org","type":"user"},{"user_id":"4398626122","name":"OpenAI","screen_name":"OpenAI","type":"user"},{"user_id":"1353836358901501952","name":"Anthropic","screen_name":"AnthropicAI","type":"user"}]},"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1173,"w":2048,"resize":"fit"},"medium":{"h":687,"w":1200,"resize":"fit"},"small":{"h":389,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1276,"width":2228,"focus_rects":[{"x":0,"y":28,"w":2228,"h":1248},{"x":643,"y":0,"w":1276,"h":1276},{"x":722,"y":0,"w":1119,"h":1276},{"x":962,"y":0,"w":638,"h":1276},{"x":0,"y":0,"w":2228,"h":1276}]},"media_results":{"result":{"media_key":"3_1983985762636029952"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/rbwTF1v2V3","expanded_url":"https://x.com/charlespacker/status/1983987055513534903/photo/1","id_str":"1983985762636029952","indices":[273,296],"media_key":"3_1983985762636029952
","media_url_https":"https://pbs.twimg.com/media/G4iIih1aYAAxj75.png","type":"photo","url":"https://t.co/rbwTF1v2V3","ext_media_availability":{"status":"Available"},"features":{"all":{"tags":[{"user_id":"1863959670169501696","name":"Kimi.ai","screen_name":"Kimi_Moonshot","type":"user"},{"user_id":"1726486879456096256","name":"Z.ai","screen_name":"Zai_org","type":"user"},{"user_id":"4398626122","name":"OpenAI","screen_name":"OpenAI","type":"user"},{"user_id":"1353836358901501952","name":"Anthropic","screen_name":"AnthropicAI","type":"user"}]},"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1173,"w":2048,"resize":"fit"},"medium":{"h":687,"w":1200,"resize":"fit"},"small":{"h":389,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1276,"width":2228,"focus_rects":[{"x":0,"y":28,"w":2228,"h":1248},{"x":643,"y":0,"w":1276,"h":1276},{"x":722,"y":0,"w":1119,"h":1276},{"x":962,"y":0,"w":638,"h":1276},{"x":0,"y":0,"w":2228,"h":1276}]},"media_results":{"result":{"media_key":"3_1983985762636029952"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1983987055513534903","view_count":27458,"bookmark_count":167,"created_at":1761854349000,"favorite_count":329,"quote_count":5,"reply_count":19,"retweet_count":35,"user_id_str":"2385913832","conversation_id_str":"1983987055513534903","full_text":"Today we're releasing Context-Bench, a benchmark (and live leaderboard!) measuring LLMs on Agentic Context Engineering.\n\nC-Bench measures an agent's ability to manipulate its own context window, a necessary skill for AI agents that can self-improve and continually learn. 
https://t.co/rbwTF1v2V3","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,262],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"charlespacker","lang":"en","retweeted":false,"fact_check":null,"id":"1983987057883279638","view_count":888,"bookmark_count":0,"created_at":1761854349000,"favorite_count":3,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1983987055513534903","full_text":"Modern agents like Claude Code, Codex, and Cursor rely on tools to retrieve information into their context windows, from API/MCP triggers to editing code with Bash and Unix tools, to more advanced use-cases such as editing long-term memories and loading skills.","in_reply_to_user_id_str":"2385913832","in_reply_to_status_id_str":"1983987055513534903","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,251],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1353836358901501952","name":"Anthropic","screen_name":"AnthropicAI","indices":[22,34]}]},"favorited":false,"in_reply_to_screen_name":"charlespacker","lang":"en","retweeted":false,"fact_check":null,"id":"1983987059284115622","view_count":931,"bookmark_count":0,"created_at":1761854350000,"favorite_count":5,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"2385913832","conversation_id_str":"1983987055513534903","full_text":"Frontier AI labs like @AnthropicAI are now explicitly training their new models to be \"self-aware\" of their context windows. 
Agentic context engineering is the new frontier, but there's no clear open benchmark for evaluating this capability in models.","in_reply_to_user_id_str":"2385913832","in_reply_to_status_id_str":"1983987057883279638","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,278],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"leaderboard.letta.com","expanded_url":"https://leaderboard.letta.com","url":"https://t.co/yqigAGEhUL","indices":[217,240]},{"display_url":"letta.com/blog/context-b…","expanded_url":"https://www.letta.com/blog/context-bench","url":"https://t.co/SvCDauPwon","indices":[255,278]}],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"charlespacker","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1983987060500480369","view_count":859,"bookmark_count":4,"created_at":1761854350000,"favorite_count":5,"quote_count":0,"reply_count":3,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1983987055513534903","full_text":"Context-Bench is an exciting benchmark for the OSS community: the gap between frontier open weights and closed weights models appears to be closing: GLM 4.6 and Kimi K2 are incredible models!\n\nLeaderboard is live at https://t.co/yqigAGEhUL\n\nBreakdown at 
https://t.co/SvCDauPwon","in_reply_to_user_id_str":"2385913832","in_reply_to_status_id_str":"1983987059284115622","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[11,28],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"4426138094","name":"ryan","screen_name":"wacheeeee","indices":[0,10]}]},"favorited":false,"in_reply_to_screen_name":"wacheeeee","lang":"en","retweeted":false,"fact_check":null,"id":"1984015003545182357","view_count":406,"bookmark_count":0,"created_at":1761861012000,"favorite_count":6,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1984007696224547044","full_text":"@wacheeeee 996 = 1 day off 🧐","in_reply_to_user_id_str":"4426138094","in_reply_to_status_id_str":"1984007696224547044","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-01","value":0,"startTime":1761868800000,"endTime":1761955200000,"tweets":[{"bookmarked":false,"display_text_range":[12,35],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"github.com/letta-ai/letta…","expanded_url":"https://github.com/letta-ai/letta-obsidian","url":"https://t.co/v04ZWvc1Le","indices":[12,35]}],"user_mentions":[{"id_str":"1328913688892346370","name":"Sully","screen_name":"SullyOmarr","indices":[0,11]}]},"favorited":false,"in_reply_to_screen_name":"SullyOmarr","lang":"qme","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1984063277249524003","view_count":144,"bookmark_count":0,"created_at":1761872522000,"favorite_count":4,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1984031908049940583","full_text":"@SullyOmarr 
https://t.co/v04ZWvc1Le","in_reply_to_user_id_str":"1328913688892346370","in_reply_to_status_id_str":"1984031908049940583","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[55,309],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1939709358071312384","name":"halluton","screen_name":"halluton","indices":[0,9]},{"id_str":"1863959670169501696","name":"Kimi.ai","screen_name":"Kimi_Moonshot","indices":[10,24]},{"id_str":"1726486879456096256","name":"Z.ai","screen_name":"Zai_org","indices":[25,33]},{"id_str":"4398626122","name":"OpenAI","screen_name":"OpenAI","indices":[34,41]},{"id_str":"1353836358901501952","name":"Anthropic","screen_name":"AnthropicAI","indices":[42,54]}]},"favorited":false,"in_reply_to_screen_name":"halluton","lang":"en","retweeted":false,"fact_check":null,"id":"1984332937819722048","view_count":110,"bookmark_count":0,"created_at":1761936814000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1983987055513534903","full_text":"@halluton @Kimi_Moonshot @Zai_org @OpenAI @AnthropicAI both! 
we measure effectiveness of tool use (eg did you use grep correctly, or use grep instead of find), as well as curation quality via task completion (you need to curate effectively to answer correctly, and even the best model currently only gets 74%)","in_reply_to_user_id_str":"1939709358071312384","in_reply_to_status_id_str":"1984329735611052415","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[62,338],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"2580421885","name":"Jintao Zhang 张晋涛","screen_name":"zhangjintao9020","indices":[0,16]},{"id_str":"1863959670169501696","name":"Kimi.ai","screen_name":"Kimi_Moonshot","indices":[17,31]},{"id_str":"1726486879456096256","name":"Z.ai","screen_name":"Zai_org","indices":[32,40]},{"id_str":"4398626122","name":"OpenAI","screen_name":"OpenAI","indices":[41,48]},{"id_str":"1353836358901501952","name":"Anthropic","screen_name":"AnthropicAI","indices":[49,61]}]},"favorited":false,"in_reply_to_screen_name":"zhangjintao9020","lang":"en","retweeted":false,"fact_check":null,"id":"1984329785498353916","view_count":165,"bookmark_count":0,"created_at":1761936062000,"favorite_count":1,"quote_count":1,"reply_count":0,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1983987055513534903","full_text":"Yeah it's a great question! Basically GPT-5 consumes more tokens to complete the task - w/ a multiple that's higher than the relative multiple on $/MTok, so it ends up costing more. From the blog:\n\"We track the total cost to run each model on the benchmark. 
Cost reveals model efficiency: models with higher per-token prices may use significantly fewer tokens to accomplish the same task, making total cost a better metric than price alone for evaluating production viability.\"","in_reply_to_user_id_str":"2580421885","in_reply_to_status_id_str":"1984283225745920387","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[69,75],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"14191817","name":"Sven Meyer","screen_name":"SvenMeyer","indices":[0,10]},{"id_str":"1863959670169501696","name":"Kimi.ai","screen_name":"Kimi_Moonshot","indices":[11,25]},{"id_str":"1726486879456096256","name":"Z.ai","screen_name":"Zai_org","indices":[26,34]},{"id_str":"4398626122","name":"OpenAI","screen_name":"OpenAI","indices":[35,42]},{"id_str":"1353836358901501952","name":"Anthropic","screen_name":"AnthropicAI","indices":[43,55]},{"id_str":"1875078099538423808","name":"MiniMax (official)","screen_name":"MiniMax__AI","indices":[56,68]}]},"favorited":false,"in_reply_to_screen_name":"SvenMeyer","lang":"en","retweeted":false,"fact_check":null,"id":"1984329875046744249","view_count":80,"bookmark_count":0,"created_at":1761936083000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1983987055513534903","full_text":"@SvenMeyer @Kimi_Moonshot @Zai_org @OpenAI @AnthropicAI @MiniMax__AI agreed","in_reply_to_user_id_str":"14191817","in_reply_to_status_id_str":"1984162159530799351","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[62,73],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1118096108","name":"Reinaldo 
Sotillo","screen_name":"reynald76165051","indices":[0,16]},{"id_str":"1863959670169501696","name":"Kimi.ai","screen_name":"Kimi_Moonshot","indices":[17,31]},{"id_str":"1726486879456096256","name":"Z.ai","screen_name":"Zai_org","indices":[32,40]},{"id_str":"4398626122","name":"OpenAI","screen_name":"OpenAI","indices":[41,48]},{"id_str":"1353836358901501952","name":"Anthropic","screen_name":"AnthropicAI","indices":[49,61]}]},"favorited":false,"in_reply_to_screen_name":"reynald76165051","lang":"en","retweeted":false,"fact_check":null,"id":"1984333001959096643","view_count":50,"bookmark_count":0,"created_at":1761936829000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1983987055513534903","full_text":"@reynald76165051 @Kimi_Moonshot @Zai_org @OpenAI @AnthropicAI sleeper hit","in_reply_to_user_id_str":"1118096108","in_reply_to_status_id_str":"1984311694693224529","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-02","value":0,"startTime":1761955200000,"endTime":1762041600000,"tweets":[]},{"label":"2025-11-03","value":0,"startTime":1762041600000,"endTime":1762128000000,"tweets":[]},{"label":"2025-11-04","value":4,"startTime":1762128000000,"endTime":1762214400000,"tweets":[{"bookmarked":false,"display_text_range":[16,289],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"youtube.com/watch?v=pidnIH…","expanded_url":"https://www.youtube.com/watch?v=pidnIHdA1Y8","url":"https://t.co/s8OVJ7uT8p","indices":[266,289]}],"user_mentions":[{"id_str":"1051829114494177282","name":"Nico 
Albanese","screen_name":"nicoalbanese10","indices":[0,15]},{"id_str":"1821252546469752832","name":"Letta","screen_name":"Letta_AI","indices":[45,54]},{"id_str":"1821252546469752832","name":"Letta","screen_name":"Letta_AI","indices":[254,263]}]},"favorited":false,"in_reply_to_screen_name":"nicoalbanese10","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1985249165887352999","view_count":2044,"bookmark_count":13,"created_at":1762155259000,"favorite_count":20,"quote_count":0,"reply_count":3,"retweet_count":4,"user_id_str":"2385913832","conversation_id_str":"1984662238968656329","full_text":"@nicoalbanese10 yeah it works beautifully in @Letta_AI, since it's basically post-training of claude to be better at \"MemGPT\"/Letta-style context engineering\n\ngreat example of better post-training (claude) lifting the performance of an existing harness (@Letta_AI)\n\nhttps://t.co/s8OVJ7uT8p","in_reply_to_user_id_str":"1051829114494177282","in_reply_to_status_id_str":"1984662238968656329","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-05","value":0,"startTime":1762214400000,"endTime":1762300800000,"tweets":[{"bookmarked":false,"display_text_range":[12,12],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/NNqoPHnSGP","expanded_url":"https://x.com/charlespacker/status/1985555041160516033/photo/1","id_str":"1985555038333534208","indices":[13,36],"media_key":"3_1985555038333534208","media_url_https":"https://pbs.twimg.com/media/G44byZXbgAAQ1QM.jpg","type":"photo","url":"https://t.co/NNqoPHnSGP","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":516,"w":516,"resize":"fit"},"medium":{"h":516,"w":516,"resize":"fit"},"small":{"h":516,"w":516,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":516,"width":516,"focus_rects":[{"x":0,"y":126,"w":
516,"h":289},{"x":0,"y":0,"w":516,"h":516},{"x":0,"y":0,"w":453,"h":516},{"x":90,"y":0,"w":258,"h":516},{"x":0,"y":0,"w":516,"h":516}]},"media_results":{"result":{"media_key":"3_1985555038333534208"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1679960354087145473","name":"Jeson Lee","screen_name":"thejesonlee","indices":[0,12]}]},"extended_entities":{"media":[{"display_url":"pic.x.com/NNqoPHnSGP","expanded_url":"https://x.com/charlespacker/status/1985555041160516033/photo/1","id_str":"1985555038333534208","indices":[13,36],"media_key":"3_1985555038333534208","media_url_https":"https://pbs.twimg.com/media/G44byZXbgAAQ1QM.jpg","type":"photo","url":"https://t.co/NNqoPHnSGP","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":516,"w":516,"resize":"fit"},"medium":{"h":516,"w":516,"resize":"fit"},"small":{"h":516,"w":516,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":516,"width":516,"focus_rects":[{"x":0,"y":126,"w":516,"h":289},{"x":0,"y":0,"w":516,"h":516},{"x":0,"y":0,"w":453,"h":516},{"x":90,"y":0,"w":258,"h":516},{"x":0,"y":0,"w":516,"h":516}]},"media_results":{"result":{"media_key":"3_1985555038333534208"}}}]},"favorited":false,"in_reply_to_screen_name":"thejesonlee","lang":"qme","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1985555041160516033","view_count":335,"bookmark_count":0,"created_at":1762228186000,"favorite_count":3,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1985223675118018711","full_text":"@thejesonlee 
https://t.co/NNqoPHnSGP","in_reply_to_user_id_str":"1679960354087145473","in_reply_to_status_id_str":"1985223675118018711","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[14,89],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"144333614","name":"Sarah Wooders","screen_name":"sarahwooders","indices":[0,13]}]},"favorited":false,"in_reply_to_screen_name":"sarahwooders","lang":"en","retweeted":false,"fact_check":null,"id":"1985553615629795810","view_count":816,"bookmark_count":0,"created_at":1762227846000,"favorite_count":5,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1985552377953272053","full_text":"@sarahwooders can we talk about the political and economic state of the world right now??","in_reply_to_user_id_str":"144333614","in_reply_to_status_id_str":"1985552377953272053","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-06","value":0,"startTime":1762300800000,"endTime":1762387200000,"tweets":[]},{"label":"2025-11-07","value":0,"startTime":1762387200000,"endTime":1762473600000,"tweets":[]},{"label":"2025-11-08","value":0,"startTime":1762473600000,"endTime":1762560000000,"tweets":[]},{"label":"2025-11-09","value":0,"startTime":1762560000000,"endTime":1762646400000,"tweets":[]},{"label":"2025-11-10","value":0,"startTime":1762646400000,"endTime":1762732800000,"tweets":[]},{"label":"2025-11-11","value":0,"startTime":1762732800000,"endTime":1762819200000,"tweets":[]},{"label":"2025-11-12","value":0,"startTime":1762819200000,"endTime":1762905600000,"tweets":[]},{"label":"2025-11-13","value":0,"startTime":1762905600000,"endTime":1762992000000,"tweets":[]},{"label":"2025-11-14","value":1,"startTime":1762992000000,"endTime":1763078400000,"tweets":[{"bookmarked":false,"display_text_range":[0,260],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/MOoy0hmlVw","expanded_url":"https://x.com/charlespacker/sta
tus/1989111742241423375/photo/1","id_str":"1989111195979378689","indices":[261,284],"media_key":"3_1989111195979378689","media_url_https":"https://pbs.twimg.com/media/G5q-GA8acAEflTi.jpg","type":"photo","url":"https://t.co/MOoy0hmlVw","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":731,"w":2048,"resize":"fit"},"medium":{"h":428,"w":1200,"resize":"fit"},"small":{"h":243,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":804,"width":2254,"focus_rects":[{"x":818,"y":0,"w":1436,"h":804},{"x":1450,"y":0,"w":804,"h":804},{"x":1549,"y":0,"w":705,"h":804},{"x":1771,"y":0,"w":402,"h":804},{"x":0,"y":0,"w":2254,"h":804}]},"media_results":{"result":{"media_key":"3_1989111195979378689"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/MOoy0hmlVw","expanded_url":"https://x.com/charlespacker/status/1989111742241423375/photo/1","id_str":"1989111195979378689","indices":[261,284],"media_key":"3_1989111195979378689","media_url_https":"https://pbs.twimg.com/media/G5q-GA8acAEflTi.jpg","type":"photo","url":"https://t.co/MOoy0hmlVw","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":731,"w":2048,"resize":"fit"},"medium":{"h":428,"w":1200,"resize":"fit"},"small":{"h":243,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":804,"width":2254,"focus_rects":[{"x":818,"y":0,"w":1436,"h":804},{"x":1450,"y":0,"w":804,"h":804},{"x":1549,"y":0,"w":705,"h":804},{"x":1771,"y":0,"w":402,"h":804},{"x":0,"y":0,"w":2254,"h":804}]},"media_results":{"result":{"media_key":"3_1989111195979378689"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"
id":"1989111742241423375","view_count":200,"bookmark_count":0,"created_at":1763076169000,"favorite_count":3,"quote_count":0,"reply_count":3,"retweet_count":1,"user_id_str":"2385913832","conversation_id_str":"1989111742241423375","full_text":"the letta agents working in our open source skills repo have collectively proposed a .CULTURE.md file to control for skill quality 😂\n\n\"New here? .CULTURE.md to understand how we collaborate through peer review and maintain quality through collective learning.\" https://t.co/MOoy0hmlVw","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-15","value":0,"startTime":1763078400000,"endTime":1763164800000,"tweets":[]},{"label":"2025-11-16","value":0,"startTime":1763164800000,"endTime":1763251200000,"tweets":[]},{"label":"2025-11-17","value":0,"startTime":1763251200000,"endTime":1763337600000,"tweets":[]},{"label":"2025-11-18","value":0,"startTime":1763337600000,"endTime":1763424000000,"tweets":[]}],"nlikes":[{"label":"2025-10-19","value":0,"startTime":1760745600000,"endTime":1760832000000,"tweets":[]},{"label":"2025-10-20","value":0,"startTime":1760832000000,"endTime":1760918400000,"tweets":[]},{"label":"2025-10-21","value":0,"startTime":1760918400000,"endTime":1761004800000,"tweets":[]},{"label":"2025-10-22","value":0,"startTime":1761004800000,"endTime":1761091200000,"tweets":[]},{"label":"2025-10-23","value":0,"startTime":1761091200000,"endTime":1761177600000,"tweets":[]},{"label":"2025-10-24","value":27,"startTime":1761177600000,"endTime":1761264000000,"tweets":[{"bookmarked":false,"display_text_range":[0,272],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1290373758646128641","name":"Bilt","screen_name":"BiltRewards","indices":[1079,1091]}]},"favorited":false,"lang":"en","quoted_status_id_str":"1981385833380032558","quoted_status_permalink":{"url":"https://t.co/I5LCJMTAMn","expanded":"https://twitter.c
om/Letta_AI/status/1981385833380032558","display":"x.com/Letta_AI/statu…"},"retweeted":false,"fact_check":null,"id":"1981421458019754012","view_count":4801,"bookmark_count":15,"created_at":1761242663000,"favorite_count":27,"quote_count":0,"reply_count":2,"retweet_count":4,"user_id_str":"2385913832","conversation_id_str":"1981421458019754012","full_text":"Super excited about this release: Letta Evals is the first evals platform *purpose-built* for stateful agents. What does that actually mean?\n\nWhen you eval agents w/ Letta Evals, you can literally pull an agent out of production (by cloning a replica of its active state), evaluate it, then pushes live changes back into prod (if desired).\n\nWe eval humans all the time: standardized tests (SAT/ACT), job interviews, driving tests. But when a human takes a test, they actually get the ability to *learn* from that test. If you fail your driving test, that lived experience should help you better prepare for the next \"eval\".\n\nWith Letta Evals, you can now run evaluations of truly stateful agents (not just prompt + tool configurations). This is all made possible due to the existence of AgentFile (.af), our open source portable file format that allows you to serialize any agent's state, which powers the efficient agent replication needed to run Letta Evals at scale.\n\nWill be exciting to see what people build with this. 
Letta Evals is already in production at companies like @BiltRewards , and is used by our own research team to run comprehensive (agentic) evaluations of new frontier models like GPT-5, Sonnet 4.5 and GLM-4.6.","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":1,"is_ai":null,"ai_score":null}]},{"label":"2025-10-25","value":22,"startTime":1761264000000,"endTime":1761350400000,"tweets":[{"bookmarked":false,"display_text_range":[0,285],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"jobs.ashbyhq.com/letta/d95271e3…","expanded_url":"https://jobs.ashbyhq.com/letta/d95271e3-f1ae-4317-8ecd-f25a79a0165c","url":"https://t.co/6C6Pybc6MD","indices":[408,431]}],"user_mentions":[{"id_str":"1821252546469752832","name":"Letta","screen_name":"Letta_AI","indices":[44,53]},{"id_str":"1821252546469752832","name":"Letta","screen_name":"Letta_AI","indices":[40,49]}]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1981790090889400638","view_count":3331,"bookmark_count":14,"created_at":1761330552000,"favorite_count":22,"quote_count":1,"reply_count":1,"retweet_count":4,"user_id_str":"2385913832","conversation_id_str":"1981790090889400638","full_text":"We're hiring researchers & engineers at @Letta_AI to work on AI's hardest problem: memory.\n\nJoin us to work on finding the right memory representations & learning methods (both in-context and in-weights) required to create self-improving AI systems with LLMs.\n\nWe're an open AI company (both research & code) and have an incredibly tight loop from research -> product. 
DMs open + job posting for reference.\n\nhttps://t.co/6C6Pybc6MD","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-10-26","value":4,"startTime":1761350400000,"endTime":1761436800000,"tweets":[{"bookmarked":false,"display_text_range":[0,117],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","quoted_status_id_str":"1981917247095685441","quoted_status_permalink":{"url":"https://t.co/HNhdjG1fLq","expanded":"https://twitter.com/xai/status/1981917247095685441","display":"x.com/xai/status/198…"},"retweeted":false,"fact_check":null,"id":"1982128872478154827","view_count":617,"bookmark_count":1,"created_at":1761411324000,"favorite_count":4,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1982128872478154827","full_text":"thinking machine shipping blog posts, xai shipping ai waifu launch videos\n\n(clean video tho, grok imagine looks 
good)","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":1,"is_ai":null,"ai_score":null}]},{"label":"2025-10-27","value":0,"startTime":1761436800000,"endTime":1761523200000,"tweets":[]},{"label":"2025-10-28","value":0,"startTime":1761523200000,"endTime":1761609600000,"tweets":[]},{"label":"2025-10-29","value":95,"startTime":1761609600000,"endTime":1761696000000,"tweets":[{"bookmarked":false,"display_text_range":[0,92],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","quoted_status_id_str":"1983057852970463594","quoted_status_permalink":{"url":"https://t.co/ztXL3kpVYH","expanded":"https://twitter.com/sarahwooders/status/1983057852970463594","display":"x.com/sarahwooders/s…"},"retweeted":false,"fact_check":null,"id":"1983058188812660916","view_count":14863,"bookmark_count":19,"created_at":1761632890000,"favorite_count":95,"quote_count":0,"reply_count":4,"retweet_count":4,"user_id_str":"2385913832","conversation_id_str":"1983058188812660916","full_text":"the terrible responses api rollout is a perfect example of how to lose first mover 
advantage","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":1,"is_ai":null,"ai_score":null}]},{"label":"2025-10-30","value":0,"startTime":1761696000000,"endTime":1761782400000,"tweets":[]},{"label":"2025-10-31","value":348,"startTime":1761782400000,"endTime":1761868800000,"tweets":[{"bookmarked":false,"display_text_range":[0,272],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/rbwTF1v2V3","expanded_url":"https://x.com/charlespacker/status/1983987055513534903/photo/1","id_str":"1983985762636029952","indices":[273,296],"media_key":"3_1983985762636029952","media_url_https":"https://pbs.twimg.com/media/G4iIih1aYAAxj75.png","type":"photo","url":"https://t.co/rbwTF1v2V3","ext_media_availability":{"status":"Available"},"features":{"all":{"tags":[{"user_id":"1863959670169501696","name":"Kimi.ai","screen_name":"Kimi_Moonshot","type":"user"},{"user_id":"1726486879456096256","name":"Z.ai","screen_name":"Zai_org","type":"user"},{"user_id":"4398626122","name":"OpenAI","screen_name":"OpenAI","type":"user"},{"user_id":"1353836358901501952","name":"Anthropic","screen_name":"AnthropicAI","type":"user"}]},"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1173,"w":2048,"resize":"fit"},"medium":{"h":687,"w":1200,"resize":"fit"},"small":{"h":389,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1276,"width":2228,"focus_rects":[{"x":0,"y":28,"w":2228,"h":1248},{"x":643,"y":0,"w":1276,"h":1276},{"x":722,"y":0,"w":1119,"h":1276},{"x":962,"y":0,"w":638,"h":1276},{"x":0,"y":0,"w":2228,"h":1276}]},"media_results":{"result":{"media_key":"3_1983985762636029952"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/rbwTF1v2V3","expanded_url":"https://x.com/charlespacker/status/1983987055513534903/photo/1","id_str":"1983985762636029952","indices":[273,296],"media_key":"3_198398576263602995
2","media_url_https":"https://pbs.twimg.com/media/G4iIih1aYAAxj75.png","type":"photo","url":"https://t.co/rbwTF1v2V3","ext_media_availability":{"status":"Available"},"features":{"all":{"tags":[{"user_id":"1863959670169501696","name":"Kimi.ai","screen_name":"Kimi_Moonshot","type":"user"},{"user_id":"1726486879456096256","name":"Z.ai","screen_name":"Zai_org","type":"user"},{"user_id":"4398626122","name":"OpenAI","screen_name":"OpenAI","type":"user"},{"user_id":"1353836358901501952","name":"Anthropic","screen_name":"AnthropicAI","type":"user"}]},"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1173,"w":2048,"resize":"fit"},"medium":{"h":687,"w":1200,"resize":"fit"},"small":{"h":389,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1276,"width":2228,"focus_rects":[{"x":0,"y":28,"w":2228,"h":1248},{"x":643,"y":0,"w":1276,"h":1276},{"x":722,"y":0,"w":1119,"h":1276},{"x":962,"y":0,"w":638,"h":1276},{"x":0,"y":0,"w":2228,"h":1276}]},"media_results":{"result":{"media_key":"3_1983985762636029952"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1983987055513534903","view_count":27458,"bookmark_count":167,"created_at":1761854349000,"favorite_count":329,"quote_count":5,"reply_count":19,"retweet_count":35,"user_id_str":"2385913832","conversation_id_str":"1983987055513534903","full_text":"Today we're releasing Context-Bench, a benchmark (and live leaderboard!) measuring LLMs on Agentic Context Engineering.\n\nC-Bench measures an agent's ability to manipulate its own context window, a necessary skill for AI agents that can self-improve and continually learn. 
https://t.co/rbwTF1v2V3","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,262],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"charlespacker","lang":"en","retweeted":false,"fact_check":null,"id":"1983987057883279638","view_count":888,"bookmark_count":0,"created_at":1761854349000,"favorite_count":3,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1983987055513534903","full_text":"Modern agents like Claude Code, Codex, and Cursor rely on tools to retrieve information into their context windows, from API/MCP triggers to editing code with Bash and Unix tools, to more advanced use-cases such as editing long-term memories and loading skills.","in_reply_to_user_id_str":"2385913832","in_reply_to_status_id_str":"1983987055513534903","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,251],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1353836358901501952","name":"Anthropic","screen_name":"AnthropicAI","indices":[22,34]}]},"favorited":false,"in_reply_to_screen_name":"charlespacker","lang":"en","retweeted":false,"fact_check":null,"id":"1983987059284115622","view_count":931,"bookmark_count":0,"created_at":1761854350000,"favorite_count":5,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"2385913832","conversation_id_str":"1983987055513534903","full_text":"Frontier AI labs like @AnthropicAI are now explicitly training their new models to be \"self-aware\" of their context windows. 
Agentic context engineering is the new frontier, but there's no clear open benchmark for evaluating this capability in models.","in_reply_to_user_id_str":"2385913832","in_reply_to_status_id_str":"1983987057883279638","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,278],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"leaderboard.letta.com","expanded_url":"https://leaderboard.letta.com","url":"https://t.co/yqigAGEhUL","indices":[217,240]},{"display_url":"letta.com/blog/context-b…","expanded_url":"https://www.letta.com/blog/context-bench","url":"https://t.co/SvCDauPwon","indices":[255,278]}],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"charlespacker","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1983987060500480369","view_count":859,"bookmark_count":4,"created_at":1761854350000,"favorite_count":5,"quote_count":0,"reply_count":3,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1983987055513534903","full_text":"Context-Bench is an exciting benchmark for the OSS community: the gap between frontier open weights and closed weights models appears to be closing: GLM 4.6 and Kimi K2 are incredible models!\n\nLeaderboard is live at https://t.co/yqigAGEhUL\n\nBreakdown at 
https://t.co/SvCDauPwon","in_reply_to_user_id_str":"2385913832","in_reply_to_status_id_str":"1983987059284115622","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[11,28],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"4426138094","name":"ryan","screen_name":"wacheeeee","indices":[0,10]}]},"favorited":false,"in_reply_to_screen_name":"wacheeeee","lang":"en","retweeted":false,"fact_check":null,"id":"1984015003545182357","view_count":406,"bookmark_count":0,"created_at":1761861012000,"favorite_count":6,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1984007696224547044","full_text":"@wacheeeee 996 = 1 day off 🧐","in_reply_to_user_id_str":"4426138094","in_reply_to_status_id_str":"1984007696224547044","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-01","value":5,"startTime":1761868800000,"endTime":1761955200000,"tweets":[{"bookmarked":false,"display_text_range":[12,35],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"github.com/letta-ai/letta…","expanded_url":"https://github.com/letta-ai/letta-obsidian","url":"https://t.co/v04ZWvc1Le","indices":[12,35]}],"user_mentions":[{"id_str":"1328913688892346370","name":"Sully","screen_name":"SullyOmarr","indices":[0,11]}]},"favorited":false,"in_reply_to_screen_name":"SullyOmarr","lang":"qme","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1984063277249524003","view_count":144,"bookmark_count":0,"created_at":1761872522000,"favorite_count":4,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1984031908049940583","full_text":"@SullyOmarr 
https://t.co/v04ZWvc1Le","in_reply_to_user_id_str":"1328913688892346370","in_reply_to_status_id_str":"1984031908049940583","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[55,309],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1939709358071312384","name":"halluton","screen_name":"halluton","indices":[0,9]},{"id_str":"1863959670169501696","name":"Kimi.ai","screen_name":"Kimi_Moonshot","indices":[10,24]},{"id_str":"1726486879456096256","name":"Z.ai","screen_name":"Zai_org","indices":[25,33]},{"id_str":"4398626122","name":"OpenAI","screen_name":"OpenAI","indices":[34,41]},{"id_str":"1353836358901501952","name":"Anthropic","screen_name":"AnthropicAI","indices":[42,54]}]},"favorited":false,"in_reply_to_screen_name":"halluton","lang":"en","retweeted":false,"fact_check":null,"id":"1984332937819722048","view_count":110,"bookmark_count":0,"created_at":1761936814000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1983987055513534903","full_text":"@halluton @Kimi_Moonshot @Zai_org @OpenAI @AnthropicAI both! 
we measure effectiveness of tool use (eg did you use grep correctly, or use grep instead of find), as well as curation quality via task completion (you need to curate effectively to answer correctly, and even the best model currently only gets 74%)","in_reply_to_user_id_str":"1939709358071312384","in_reply_to_status_id_str":"1984329735611052415","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[62,338],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"2580421885","name":"Jintao Zhang 张晋涛","screen_name":"zhangjintao9020","indices":[0,16]},{"id_str":"1863959670169501696","name":"Kimi.ai","screen_name":"Kimi_Moonshot","indices":[17,31]},{"id_str":"1726486879456096256","name":"Z.ai","screen_name":"Zai_org","indices":[32,40]},{"id_str":"4398626122","name":"OpenAI","screen_name":"OpenAI","indices":[41,48]},{"id_str":"1353836358901501952","name":"Anthropic","screen_name":"AnthropicAI","indices":[49,61]}]},"favorited":false,"in_reply_to_screen_name":"zhangjintao9020","lang":"en","retweeted":false,"fact_check":null,"id":"1984329785498353916","view_count":165,"bookmark_count":0,"created_at":1761936062000,"favorite_count":1,"quote_count":1,"reply_count":0,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1983987055513534903","full_text":"Yeah it's a great question! Basically GPT-5 consumes more tokens to complete the task - w/ a multiple that's higher than the relative multiple on $/MTok, so it ends up costing more. From the blog:\n\"We track the total cost to run each model on the benchmark. 
Cost reveals model efficiency: models with higher per-token prices may use significantly fewer tokens to accomplish the same task, making total cost a better metric than price alone for evaluating production viability.\"","in_reply_to_user_id_str":"2580421885","in_reply_to_status_id_str":"1984283225745920387","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[69,75],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"14191817","name":"Sven Meyer","screen_name":"SvenMeyer","indices":[0,10]},{"id_str":"1863959670169501696","name":"Kimi.ai","screen_name":"Kimi_Moonshot","indices":[11,25]},{"id_str":"1726486879456096256","name":"Z.ai","screen_name":"Zai_org","indices":[26,34]},{"id_str":"4398626122","name":"OpenAI","screen_name":"OpenAI","indices":[35,42]},{"id_str":"1353836358901501952","name":"Anthropic","screen_name":"AnthropicAI","indices":[43,55]},{"id_str":"1875078099538423808","name":"MiniMax (official)","screen_name":"MiniMax__AI","indices":[56,68]}]},"favorited":false,"in_reply_to_screen_name":"SvenMeyer","lang":"en","retweeted":false,"fact_check":null,"id":"1984329875046744249","view_count":80,"bookmark_count":0,"created_at":1761936083000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1983987055513534903","full_text":"@SvenMeyer @Kimi_Moonshot @Zai_org @OpenAI @AnthropicAI @MiniMax__AI agreed","in_reply_to_user_id_str":"14191817","in_reply_to_status_id_str":"1984162159530799351","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[62,73],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1118096108","name":"Reinaldo 
Sotillo","screen_name":"reynald76165051","indices":[0,16]},{"id_str":"1863959670169501696","name":"Kimi.ai","screen_name":"Kimi_Moonshot","indices":[17,31]},{"id_str":"1726486879456096256","name":"Z.ai","screen_name":"Zai_org","indices":[32,40]},{"id_str":"4398626122","name":"OpenAI","screen_name":"OpenAI","indices":[41,48]},{"id_str":"1353836358901501952","name":"Anthropic","screen_name":"AnthropicAI","indices":[49,61]}]},"favorited":false,"in_reply_to_screen_name":"reynald76165051","lang":"en","retweeted":false,"fact_check":null,"id":"1984333001959096643","view_count":50,"bookmark_count":0,"created_at":1761936829000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1983987055513534903","full_text":"@reynald76165051 @Kimi_Moonshot @Zai_org @OpenAI @AnthropicAI sleeper hit","in_reply_to_user_id_str":"1118096108","in_reply_to_status_id_str":"1984311694693224529","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-02","value":0,"startTime":1761955200000,"endTime":1762041600000,"tweets":[]},{"label":"2025-11-03","value":0,"startTime":1762041600000,"endTime":1762128000000,"tweets":[]},{"label":"2025-11-04","value":20,"startTime":1762128000000,"endTime":1762214400000,"tweets":[{"bookmarked":false,"display_text_range":[16,289],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"youtube.com/watch?v=pidnIH…","expanded_url":"https://www.youtube.com/watch?v=pidnIHdA1Y8","url":"https://t.co/s8OVJ7uT8p","indices":[266,289]}],"user_mentions":[{"id_str":"1051829114494177282","name":"Nico 
Albanese","screen_name":"nicoalbanese10","indices":[0,15]},{"id_str":"1821252546469752832","name":"Letta","screen_name":"Letta_AI","indices":[45,54]},{"id_str":"1821252546469752832","name":"Letta","screen_name":"Letta_AI","indices":[254,263]}]},"favorited":false,"in_reply_to_screen_name":"nicoalbanese10","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1985249165887352999","view_count":2044,"bookmark_count":13,"created_at":1762155259000,"favorite_count":20,"quote_count":0,"reply_count":3,"retweet_count":4,"user_id_str":"2385913832","conversation_id_str":"1984662238968656329","full_text":"@nicoalbanese10 yeah it works beautifully in @Letta_AI, since it's basically post-training of claude to be better at \"MemGPT\"/Letta-style context engineering\n\ngreat example of better post-training (claude) lifting the performance of an existing harness (@Letta_AI)\n\nhttps://t.co/s8OVJ7uT8p","in_reply_to_user_id_str":"1051829114494177282","in_reply_to_status_id_str":"1984662238968656329","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-05","value":8,"startTime":1762214400000,"endTime":1762300800000,"tweets":[{"bookmarked":false,"display_text_range":[12,12],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/NNqoPHnSGP","expanded_url":"https://x.com/charlespacker/status/1985555041160516033/photo/1","id_str":"1985555038333534208","indices":[13,36],"media_key":"3_1985555038333534208","media_url_https":"https://pbs.twimg.com/media/G44byZXbgAAQ1QM.jpg","type":"photo","url":"https://t.co/NNqoPHnSGP","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":516,"w":516,"resize":"fit"},"medium":{"h":516,"w":516,"resize":"fit"},"small":{"h":516,"w":516,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":516,"width":516,"focus_rects":[{"x":0,"y":126,"w":
516,"h":289},{"x":0,"y":0,"w":516,"h":516},{"x":0,"y":0,"w":453,"h":516},{"x":90,"y":0,"w":258,"h":516},{"x":0,"y":0,"w":516,"h":516}]},"media_results":{"result":{"media_key":"3_1985555038333534208"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1679960354087145473","name":"Jeson Lee","screen_name":"thejesonlee","indices":[0,12]}]},"extended_entities":{"media":[{"display_url":"pic.x.com/NNqoPHnSGP","expanded_url":"https://x.com/charlespacker/status/1985555041160516033/photo/1","id_str":"1985555038333534208","indices":[13,36],"media_key":"3_1985555038333534208","media_url_https":"https://pbs.twimg.com/media/G44byZXbgAAQ1QM.jpg","type":"photo","url":"https://t.co/NNqoPHnSGP","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":516,"w":516,"resize":"fit"},"medium":{"h":516,"w":516,"resize":"fit"},"small":{"h":516,"w":516,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":516,"width":516,"focus_rects":[{"x":0,"y":126,"w":516,"h":289},{"x":0,"y":0,"w":516,"h":516},{"x":0,"y":0,"w":453,"h":516},{"x":90,"y":0,"w":258,"h":516},{"x":0,"y":0,"w":516,"h":516}]},"media_results":{"result":{"media_key":"3_1985555038333534208"}}}]},"favorited":false,"in_reply_to_screen_name":"thejesonlee","lang":"qme","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1985555041160516033","view_count":335,"bookmark_count":0,"created_at":1762228186000,"favorite_count":3,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1985223675118018711","full_text":"@thejesonlee 
https://t.co/NNqoPHnSGP","in_reply_to_user_id_str":"1679960354087145473","in_reply_to_status_id_str":"1985223675118018711","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[14,89],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"144333614","name":"Sarah Wooders","screen_name":"sarahwooders","indices":[0,13]}]},"favorited":false,"in_reply_to_screen_name":"sarahwooders","lang":"en","retweeted":false,"fact_check":null,"id":"1985553615629795810","view_count":816,"bookmark_count":0,"created_at":1762227846000,"favorite_count":5,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1985552377953272053","full_text":"@sarahwooders can we talk about the political and economic state of the world right now??","in_reply_to_user_id_str":"144333614","in_reply_to_status_id_str":"1985552377953272053","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-06","value":0,"startTime":1762300800000,"endTime":1762387200000,"tweets":[]},{"label":"2025-11-07","value":0,"startTime":1762387200000,"endTime":1762473600000,"tweets":[]},{"label":"2025-11-08","value":0,"startTime":1762473600000,"endTime":1762560000000,"tweets":[]},{"label":"2025-11-09","value":0,"startTime":1762560000000,"endTime":1762646400000,"tweets":[]},{"label":"2025-11-10","value":0,"startTime":1762646400000,"endTime":1762732800000,"tweets":[]},{"label":"2025-11-11","value":0,"startTime":1762732800000,"endTime":1762819200000,"tweets":[]},{"label":"2025-11-12","value":0,"startTime":1762819200000,"endTime":1762905600000,"tweets":[]},{"label":"2025-11-13","value":0,"startTime":1762905600000,"endTime":1762992000000,"tweets":[]},{"label":"2025-11-14","value":3,"startTime":1762992000000,"endTime":1763078400000,"tweets":[{"bookmarked":false,"display_text_range":[0,260],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/MOoy0hmlVw","expanded_url":"https://x.com/charlespacker/sta
tus/1989111742241423375/photo/1","id_str":"1989111195979378689","indices":[261,284],"media_key":"3_1989111195979378689","media_url_https":"https://pbs.twimg.com/media/G5q-GA8acAEflTi.jpg","type":"photo","url":"https://t.co/MOoy0hmlVw","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":731,"w":2048,"resize":"fit"},"medium":{"h":428,"w":1200,"resize":"fit"},"small":{"h":243,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":804,"width":2254,"focus_rects":[{"x":818,"y":0,"w":1436,"h":804},{"x":1450,"y":0,"w":804,"h":804},{"x":1549,"y":0,"w":705,"h":804},{"x":1771,"y":0,"w":402,"h":804},{"x":0,"y":0,"w":2254,"h":804}]},"media_results":{"result":{"media_key":"3_1989111195979378689"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/MOoy0hmlVw","expanded_url":"https://x.com/charlespacker/status/1989111742241423375/photo/1","id_str":"1989111195979378689","indices":[261,284],"media_key":"3_1989111195979378689","media_url_https":"https://pbs.twimg.com/media/G5q-GA8acAEflTi.jpg","type":"photo","url":"https://t.co/MOoy0hmlVw","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":731,"w":2048,"resize":"fit"},"medium":{"h":428,"w":1200,"resize":"fit"},"small":{"h":243,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":804,"width":2254,"focus_rects":[{"x":818,"y":0,"w":1436,"h":804},{"x":1450,"y":0,"w":804,"h":804},{"x":1549,"y":0,"w":705,"h":804},{"x":1771,"y":0,"w":402,"h":804},{"x":0,"y":0,"w":2254,"h":804}]},"media_results":{"result":{"media_key":"3_1989111195979378689"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"
id":"1989111742241423375","view_count":200,"bookmark_count":0,"created_at":1763076169000,"favorite_count":3,"quote_count":0,"reply_count":3,"retweet_count":1,"user_id_str":"2385913832","conversation_id_str":"1989111742241423375","full_text":"the letta agents working in our open source skills repo have collectively proposed a .CULTURE.md file to control for skill quality 😂\n\n\"New here? .CULTURE.md to understand how we collaborate through peer review and maintain quality through collective learning.\" https://t.co/MOoy0hmlVw","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-15","value":0,"startTime":1763078400000,"endTime":1763164800000,"tweets":[]},{"label":"2025-11-16","value":0,"startTime":1763164800000,"endTime":1763251200000,"tweets":[]},{"label":"2025-11-17","value":0,"startTime":1763251200000,"endTime":1763337600000,"tweets":[]},{"label":"2025-11-18","value":0,"startTime":1763337600000,"endTime":1763424000000,"tweets":[]}],"nviews":[{"label":"2025-10-19","value":0,"startTime":1760745600000,"endTime":1760832000000,"tweets":[]},{"label":"2025-10-20","value":0,"startTime":1760832000000,"endTime":1760918400000,"tweets":[]},{"label":"2025-10-21","value":0,"startTime":1760918400000,"endTime":1761004800000,"tweets":[]},{"label":"2025-10-22","value":0,"startTime":1761004800000,"endTime":1761091200000,"tweets":[]},{"label":"2025-10-23","value":0,"startTime":1761091200000,"endTime":1761177600000,"tweets":[]},{"label":"2025-10-24","value":4801,"startTime":1761177600000,"endTime":1761264000000,"tweets":[{"bookmarked":false,"display_text_range":[0,272],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1290373758646128641","name":"Bilt","screen_name":"BiltRewards","indices":[1079,1091]}]},"favorited":false,"lang":"en","quoted_status_id_str":"1981385833380032558","quoted_status_permalink":{"url":"https://t.co/I5LCJMTAMn","expanded":"https://twitter
.com/Letta_AI/status/1981385833380032558","display":"x.com/Letta_AI/statu…"},"retweeted":false,"fact_check":null,"id":"1981421458019754012","view_count":4801,"bookmark_count":15,"created_at":1761242663000,"favorite_count":27,"quote_count":0,"reply_count":2,"retweet_count":4,"user_id_str":"2385913832","conversation_id_str":"1981421458019754012","full_text":"Super excited about this release: Letta Evals is the first evals platform *purpose-built* for stateful agents. What does that actually mean?\n\nWhen you eval agents w/ Letta Evals, you can literally pull an agent out of production (by cloning a replica of its active state), evaluate it, then pushes live changes back into prod (if desired).\n\nWe eval humans all the time: standardized tests (SAT/ACT), job interviews, driving tests. But when a human takes a test, they actually get the ability to *learn* from that test. If you fail your driving test, that lived experience should help you better prepare for the next \"eval\".\n\nWith Letta Evals, you can now run evaluations of truly stateful agents (not just prompt + tool configurations). This is all made possible due to the existence of AgentFile (.af), our open source portable file format that allows you to serialize any agent's state, which powers the efficient agent replication needed to run Letta Evals at scale.\n\nWill be exciting to see what people build with this. 
Letta Evals is already in production at companies like @BiltRewards , and is used by our own research time to run comprehensive (agentic) evaluations of new frontier models like GPT-5, Sonnet 4.5 and GLM-4.6.","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":1,"is_ai":null,"ai_score":null}]},{"label":"2025-10-25","value":3331,"startTime":1761264000000,"endTime":1761350400000,"tweets":[{"bookmarked":false,"display_text_range":[0,285],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"jobs.ashbyhq.com/letta/d95271e3…","expanded_url":"https://jobs.ashbyhq.com/letta/d95271e3-f1ae-4317-8ecd-f25a79a0165c","url":"https://t.co/6C6Pybc6MD","indices":[408,431]}],"user_mentions":[{"id_str":"1821252546469752832","name":"Letta","screen_name":"Letta_AI","indices":[44,53]},{"id_str":"1821252546469752832","name":"Letta","screen_name":"Letta_AI","indices":[40,49]}]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1981790090889400638","view_count":3331,"bookmark_count":14,"created_at":1761330552000,"favorite_count":22,"quote_count":1,"reply_count":1,"retweet_count":4,"user_id_str":"2385913832","conversation_id_str":"1981790090889400638","full_text":"We're hiring researchers & engineers at @Letta_AI to work on AI's hardest problem: memory.\n\nJoin us to work on finding the right memory representations & learning methods (both in-context and in-weights) required to create self-improving AI systems with LLMs.\n\nWe're an open AI company (both research & code) and have an incredibly tight loop from research -> product. 
DMs open + job posting for reference.\n\nhttps://t.co/6C6Pybc6MD","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-10-26","value":617,"startTime":1761350400000,"endTime":1761436800000,"tweets":[{"bookmarked":false,"display_text_range":[0,117],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","quoted_status_id_str":"1981917247095685441","quoted_status_permalink":{"url":"https://t.co/HNhdjG1fLq","expanded":"https://twitter.com/xai/status/1981917247095685441","display":"x.com/xai/status/198…"},"retweeted":false,"fact_check":null,"id":"1982128872478154827","view_count":617,"bookmark_count":1,"created_at":1761411324000,"favorite_count":4,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1982128872478154827","full_text":"thinking machine shipping blog posts, xai shipping ai waifu launch videos\n\n(clean video tho, grok imagine looks 
good)","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":1,"is_ai":null,"ai_score":null}]},{"label":"2025-10-27","value":0,"startTime":1761436800000,"endTime":1761523200000,"tweets":[]},{"label":"2025-10-28","value":0,"startTime":1761523200000,"endTime":1761609600000,"tweets":[]},{"label":"2025-10-29","value":14863,"startTime":1761609600000,"endTime":1761696000000,"tweets":[{"bookmarked":false,"display_text_range":[0,92],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","quoted_status_id_str":"1983057852970463594","quoted_status_permalink":{"url":"https://t.co/ztXL3kpVYH","expanded":"https://twitter.com/sarahwooders/status/1983057852970463594","display":"x.com/sarahwooders/s…"},"retweeted":false,"fact_check":null,"id":"1983058188812660916","view_count":14863,"bookmark_count":19,"created_at":1761632890000,"favorite_count":95,"quote_count":0,"reply_count":4,"retweet_count":4,"user_id_str":"2385913832","conversation_id_str":"1983058188812660916","full_text":"the terrible responses api rollout is a perfect example of how to lose first mover 
advantage","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":1,"is_ai":null,"ai_score":null}]},{"label":"2025-10-30","value":0,"startTime":1761696000000,"endTime":1761782400000,"tweets":[]},{"label":"2025-10-31","value":30542,"startTime":1761782400000,"endTime":1761868800000,"tweets":[{"bookmarked":false,"display_text_range":[0,272],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/rbwTF1v2V3","expanded_url":"https://x.com/charlespacker/status/1983987055513534903/photo/1","id_str":"1983985762636029952","indices":[273,296],"media_key":"3_1983985762636029952","media_url_https":"https://pbs.twimg.com/media/G4iIih1aYAAxj75.png","type":"photo","url":"https://t.co/rbwTF1v2V3","ext_media_availability":{"status":"Available"},"features":{"all":{"tags":[{"user_id":"1863959670169501696","name":"Kimi.ai","screen_name":"Kimi_Moonshot","type":"user"},{"user_id":"1726486879456096256","name":"Z.ai","screen_name":"Zai_org","type":"user"},{"user_id":"4398626122","name":"OpenAI","screen_name":"OpenAI","type":"user"},{"user_id":"1353836358901501952","name":"Anthropic","screen_name":"AnthropicAI","type":"user"}]},"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1173,"w":2048,"resize":"fit"},"medium":{"h":687,"w":1200,"resize":"fit"},"small":{"h":389,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1276,"width":2228,"focus_rects":[{"x":0,"y":28,"w":2228,"h":1248},{"x":643,"y":0,"w":1276,"h":1276},{"x":722,"y":0,"w":1119,"h":1276},{"x":962,"y":0,"w":638,"h":1276},{"x":0,"y":0,"w":2228,"h":1276}]},"media_results":{"result":{"media_key":"3_1983985762636029952"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/rbwTF1v2V3","expanded_url":"https://x.com/charlespacker/status/1983987055513534903/photo/1","id_str":"1983985762636029952","indices":[273,296],"media_key":"3_1983985762636029
952","media_url_https":"https://pbs.twimg.com/media/G4iIih1aYAAxj75.png","type":"photo","url":"https://t.co/rbwTF1v2V3","ext_media_availability":{"status":"Available"},"features":{"all":{"tags":[{"user_id":"1863959670169501696","name":"Kimi.ai","screen_name":"Kimi_Moonshot","type":"user"},{"user_id":"1726486879456096256","name":"Z.ai","screen_name":"Zai_org","type":"user"},{"user_id":"4398626122","name":"OpenAI","screen_name":"OpenAI","type":"user"},{"user_id":"1353836358901501952","name":"Anthropic","screen_name":"AnthropicAI","type":"user"}]},"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1173,"w":2048,"resize":"fit"},"medium":{"h":687,"w":1200,"resize":"fit"},"small":{"h":389,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1276,"width":2228,"focus_rects":[{"x":0,"y":28,"w":2228,"h":1248},{"x":643,"y":0,"w":1276,"h":1276},{"x":722,"y":0,"w":1119,"h":1276},{"x":962,"y":0,"w":638,"h":1276},{"x":0,"y":0,"w":2228,"h":1276}]},"media_results":{"result":{"media_key":"3_1983985762636029952"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1983987055513534903","view_count":27458,"bookmark_count":167,"created_at":1761854349000,"favorite_count":329,"quote_count":5,"reply_count":19,"retweet_count":35,"user_id_str":"2385913832","conversation_id_str":"1983987055513534903","full_text":"Today we're releasing Context-Bench, a benchmark (and live leaderboard!) measuring LLMs on Agentic Context Engineering.\n\nC-Bench measures an agent's ability to manipulate its own context window, a necessary skill for AI agents that can self-improve and continually learn. 
https://t.co/rbwTF1v2V3","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,262],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"charlespacker","lang":"en","retweeted":false,"fact_check":null,"id":"1983987057883279638","view_count":888,"bookmark_count":0,"created_at":1761854349000,"favorite_count":3,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1983987055513534903","full_text":"Modern agents like Claude Code, Codex, and Cursor rely on tools to retrieve information into their context windows, from API/MCP triggers to editing code with Bash and Unix tools, to more advanced use-cases such as editing long-term memories and loading skills.","in_reply_to_user_id_str":"2385913832","in_reply_to_status_id_str":"1983987055513534903","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,251],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1353836358901501952","name":"Anthropic","screen_name":"AnthropicAI","indices":[22,34]}]},"favorited":false,"in_reply_to_screen_name":"charlespacker","lang":"en","retweeted":false,"fact_check":null,"id":"1983987059284115622","view_count":931,"bookmark_count":0,"created_at":1761854350000,"favorite_count":5,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"2385913832","conversation_id_str":"1983987055513534903","full_text":"Frontier AI labs like @AnthropicAI are now explicitly training their new models to be \"self-aware\" of their context windows. 
Agentic context engineering is the new frontier, but there's no clear open benchmark for evaluating this capability in models.","in_reply_to_user_id_str":"2385913832","in_reply_to_status_id_str":"1983987057883279638","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,278],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"leaderboard.letta.com","expanded_url":"https://leaderboard.letta.com","url":"https://t.co/yqigAGEhUL","indices":[217,240]},{"display_url":"letta.com/blog/context-b…","expanded_url":"https://www.letta.com/blog/context-bench","url":"https://t.co/SvCDauPwon","indices":[255,278]}],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"charlespacker","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1983987060500480369","view_count":859,"bookmark_count":4,"created_at":1761854350000,"favorite_count":5,"quote_count":0,"reply_count":3,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1983987055513534903","full_text":"Context-Bench is an exciting benchmark for the OSS community: the gap between frontier open weights and closed weights models appears to be closing: GLM 4.6 and Kimi K2 are incredible models!\n\nLeaderboard is live at https://t.co/yqigAGEhUL\n\nBreakdown at 
https://t.co/SvCDauPwon","in_reply_to_user_id_str":"2385913832","in_reply_to_status_id_str":"1983987059284115622","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[11,28],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"4426138094","name":"ryan","screen_name":"wacheeeee","indices":[0,10]}]},"favorited":false,"in_reply_to_screen_name":"wacheeeee","lang":"en","retweeted":false,"fact_check":null,"id":"1984015003545182357","view_count":406,"bookmark_count":0,"created_at":1761861012000,"favorite_count":6,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1984007696224547044","full_text":"@wacheeeee 996 = 1 day off 🧐","in_reply_to_user_id_str":"4426138094","in_reply_to_status_id_str":"1984007696224547044","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-01","value":549,"startTime":1761868800000,"endTime":1761955200000,"tweets":[{"bookmarked":false,"display_text_range":[12,35],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"github.com/letta-ai/letta…","expanded_url":"https://github.com/letta-ai/letta-obsidian","url":"https://t.co/v04ZWvc1Le","indices":[12,35]}],"user_mentions":[{"id_str":"1328913688892346370","name":"Sully","screen_name":"SullyOmarr","indices":[0,11]}]},"favorited":false,"in_reply_to_screen_name":"SullyOmarr","lang":"qme","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1984063277249524003","view_count":144,"bookmark_count":0,"created_at":1761872522000,"favorite_count":4,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1984031908049940583","full_text":"@SullyOmarr 
https://t.co/v04ZWvc1Le","in_reply_to_user_id_str":"1328913688892346370","in_reply_to_status_id_str":"1984031908049940583","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[55,309],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1939709358071312384","name":"halluton","screen_name":"halluton","indices":[0,9]},{"id_str":"1863959670169501696","name":"Kimi.ai","screen_name":"Kimi_Moonshot","indices":[10,24]},{"id_str":"1726486879456096256","name":"Z.ai","screen_name":"Zai_org","indices":[25,33]},{"id_str":"4398626122","name":"OpenAI","screen_name":"OpenAI","indices":[34,41]},{"id_str":"1353836358901501952","name":"Anthropic","screen_name":"AnthropicAI","indices":[42,54]}]},"favorited":false,"in_reply_to_screen_name":"halluton","lang":"en","retweeted":false,"fact_check":null,"id":"1984332937819722048","view_count":110,"bookmark_count":0,"created_at":1761936814000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1983987055513534903","full_text":"@halluton @Kimi_Moonshot @Zai_org @OpenAI @AnthropicAI both! 
we measure effectiveness of tool use (eg did you use grep correctly, or use grep instead of find), as well as curation quality via task completion (you need to curate effectively to answer correctly, and even the best model currently only gets 74%)","in_reply_to_user_id_str":"1939709358071312384","in_reply_to_status_id_str":"1984329735611052415","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[62,338],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"2580421885","name":"Jintao Zhang 张晋涛","screen_name":"zhangjintao9020","indices":[0,16]},{"id_str":"1863959670169501696","name":"Kimi.ai","screen_name":"Kimi_Moonshot","indices":[17,31]},{"id_str":"1726486879456096256","name":"Z.ai","screen_name":"Zai_org","indices":[32,40]},{"id_str":"4398626122","name":"OpenAI","screen_name":"OpenAI","indices":[41,48]},{"id_str":"1353836358901501952","name":"Anthropic","screen_name":"AnthropicAI","indices":[49,61]}]},"favorited":false,"in_reply_to_screen_name":"zhangjintao9020","lang":"en","retweeted":false,"fact_check":null,"id":"1984329785498353916","view_count":165,"bookmark_count":0,"created_at":1761936062000,"favorite_count":1,"quote_count":1,"reply_count":0,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1983987055513534903","full_text":"Yeah it's a great question! Basically GPT-5 consumes more tokens to complete the task - w/ a multiple that's higher than the relative multiple on $/MTok, so it ends up costing more. From the blog:\n\"We track the total cost to run each model on the benchmark. 
Cost reveals model efficiency: models with higher per-token prices may use significantly fewer tokens to accomplish the same task, making total cost a better metric than price alone for evaluating production viability.\"","in_reply_to_user_id_str":"2580421885","in_reply_to_status_id_str":"1984283225745920387","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[69,75],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"14191817","name":"Sven Meyer","screen_name":"SvenMeyer","indices":[0,10]},{"id_str":"1863959670169501696","name":"Kimi.ai","screen_name":"Kimi_Moonshot","indices":[11,25]},{"id_str":"1726486879456096256","name":"Z.ai","screen_name":"Zai_org","indices":[26,34]},{"id_str":"4398626122","name":"OpenAI","screen_name":"OpenAI","indices":[35,42]},{"id_str":"1353836358901501952","name":"Anthropic","screen_name":"AnthropicAI","indices":[43,55]},{"id_str":"1875078099538423808","name":"MiniMax (official)","screen_name":"MiniMax__AI","indices":[56,68]}]},"favorited":false,"in_reply_to_screen_name":"SvenMeyer","lang":"en","retweeted":false,"fact_check":null,"id":"1984329875046744249","view_count":80,"bookmark_count":0,"created_at":1761936083000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1983987055513534903","full_text":"@SvenMeyer @Kimi_Moonshot @Zai_org @OpenAI @AnthropicAI @MiniMax__AI agreed","in_reply_to_user_id_str":"14191817","in_reply_to_status_id_str":"1984162159530799351","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[62,73],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1118096108","name":"Reinaldo 
Sotillo","screen_name":"reynald76165051","indices":[0,16]},{"id_str":"1863959670169501696","name":"Kimi.ai","screen_name":"Kimi_Moonshot","indices":[17,31]},{"id_str":"1726486879456096256","name":"Z.ai","screen_name":"Zai_org","indices":[32,40]},{"id_str":"4398626122","name":"OpenAI","screen_name":"OpenAI","indices":[41,48]},{"id_str":"1353836358901501952","name":"Anthropic","screen_name":"AnthropicAI","indices":[49,61]}]},"favorited":false,"in_reply_to_screen_name":"reynald76165051","lang":"en","retweeted":false,"fact_check":null,"id":"1984333001959096643","view_count":50,"bookmark_count":0,"created_at":1761936829000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1983987055513534903","full_text":"@reynald76165051 @Kimi_Moonshot @Zai_org @OpenAI @AnthropicAI sleeper hit","in_reply_to_user_id_str":"1118096108","in_reply_to_status_id_str":"1984311694693224529","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-02","value":0,"startTime":1761955200000,"endTime":1762041600000,"tweets":[]},{"label":"2025-11-03","value":0,"startTime":1762041600000,"endTime":1762128000000,"tweets":[]},{"label":"2025-11-04","value":2044,"startTime":1762128000000,"endTime":1762214400000,"tweets":[{"bookmarked":false,"display_text_range":[16,289],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"youtube.com/watch?v=pidnIH…","expanded_url":"https://www.youtube.com/watch?v=pidnIHdA1Y8","url":"https://t.co/s8OVJ7uT8p","indices":[266,289]}],"user_mentions":[{"id_str":"1051829114494177282","name":"Nico 
Albanese","screen_name":"nicoalbanese10","indices":[0,15]},{"id_str":"1821252546469752832","name":"Letta","screen_name":"Letta_AI","indices":[45,54]},{"id_str":"1821252546469752832","name":"Letta","screen_name":"Letta_AI","indices":[254,263]}]},"favorited":false,"in_reply_to_screen_name":"nicoalbanese10","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1985249165887352999","view_count":2044,"bookmark_count":13,"created_at":1762155259000,"favorite_count":20,"quote_count":0,"reply_count":3,"retweet_count":4,"user_id_str":"2385913832","conversation_id_str":"1984662238968656329","full_text":"@nicoalbanese10 yeah it works beautifully in @Letta_AI, since it's basically post-training of claude to be better at \"MemGPT\"/Letta-style context engineering\n\ngreat example of better post-training (claude) lifting the performance of an existing harness (@Letta_AI)\n\nhttps://t.co/s8OVJ7uT8p","in_reply_to_user_id_str":"1051829114494177282","in_reply_to_status_id_str":"1984662238968656329","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-05","value":1151,"startTime":1762214400000,"endTime":1762300800000,"tweets":[{"bookmarked":false,"display_text_range":[12,12],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/NNqoPHnSGP","expanded_url":"https://x.com/charlespacker/status/1985555041160516033/photo/1","id_str":"1985555038333534208","indices":[13,36],"media_key":"3_1985555038333534208","media_url_https":"https://pbs.twimg.com/media/G44byZXbgAAQ1QM.jpg","type":"photo","url":"https://t.co/NNqoPHnSGP","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":516,"w":516,"resize":"fit"},"medium":{"h":516,"w":516,"resize":"fit"},"small":{"h":516,"w":516,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":516,"width":516,"focus_rects":[{"x":0,"y":126,"
w":516,"h":289},{"x":0,"y":0,"w":516,"h":516},{"x":0,"y":0,"w":453,"h":516},{"x":90,"y":0,"w":258,"h":516},{"x":0,"y":0,"w":516,"h":516}]},"media_results":{"result":{"media_key":"3_1985555038333534208"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1679960354087145473","name":"Jeson Lee","screen_name":"thejesonlee","indices":[0,12]}]},"extended_entities":{"media":[{"display_url":"pic.x.com/NNqoPHnSGP","expanded_url":"https://x.com/charlespacker/status/1985555041160516033/photo/1","id_str":"1985555038333534208","indices":[13,36],"media_key":"3_1985555038333534208","media_url_https":"https://pbs.twimg.com/media/G44byZXbgAAQ1QM.jpg","type":"photo","url":"https://t.co/NNqoPHnSGP","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":516,"w":516,"resize":"fit"},"medium":{"h":516,"w":516,"resize":"fit"},"small":{"h":516,"w":516,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":516,"width":516,"focus_rects":[{"x":0,"y":126,"w":516,"h":289},{"x":0,"y":0,"w":516,"h":516},{"x":0,"y":0,"w":453,"h":516},{"x":90,"y":0,"w":258,"h":516},{"x":0,"y":0,"w":516,"h":516}]},"media_results":{"result":{"media_key":"3_1985555038333534208"}}}]},"favorited":false,"in_reply_to_screen_name":"thejesonlee","lang":"qme","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1985555041160516033","view_count":335,"bookmark_count":0,"created_at":1762228186000,"favorite_count":3,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"2385913832","conversation_id_str":"1985223675118018711","full_text":"@thejesonlee 
Recent activity (November 2025):

- Reply to @sarahwooders: "can we talk about the political and economic state of the world right now??" (816 views, 5 likes)
- 2025-11-14, with an attached photo (https://t.co/MOoy0hmlVw): "the letta agents working in our open source skills repo have collectively proposed a .CULTURE.md file to control for skill quality 😂 — 'New here? .CULTURE.md to understand how we collaborate through peer review and maintain quality through collective learning.'" (200 views, 3 likes, 3 replies, 1 retweet)

Aside from the 2025-11-14 post, no tweets were recorded between 2025-11-06 and 2025-11-18.